CN112417937B - Substation video target detection method based on time sequence - Google Patents

Substation video target detection method based on time sequence

Info

Publication number
CN112417937B
CN112417937B
Authority
CN
China
Prior art keywords
point
model
pixel
substation
gaussian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010666328.2A
Other languages
Chinese (zh)
Other versions
CN112417937A (en)
Inventor
张正文
王露
马涛
陈杰
马慧卓
张禹森
秦三营
田二胜
王韬尉
聂向欣
段维鹏
朱志勇
燕彩丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Xiong'an Xuji Electric Technology Co ltd
Xiongan New Area Power Supply Company State Grid Hebei Electric Power Co
State Grid Corp of China SGCC
Xuji Group Co Ltd
Original Assignee
Hebei Xiong'an Xuji Electric Technology Co ltd
Xiongan New Area Power Supply Company State Grid Hebei Electric Power Co
State Grid Corp of China SGCC
Xuji Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Xiong'an Xuji Electric Technology Co ltd, Xiongan New Area Power Supply Company State Grid Hebei Electric Power Co, State Grid Corp of China SGCC, Xuji Group Co Ltd filed Critical Hebei Xiong'an Xuji Electric Technology Co ltd
Priority to CN202010666328.2A priority Critical patent/CN112417937B/en
Publication of CN112417937A publication Critical patent/CN112417937A/en
Application granted granted Critical
Publication of CN112417937B publication Critical patent/CN112417937B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a substation video target detection method based on a time sequence, which comprises the following steps: constructing a substation moving target detection model, and calibrating the data content of a plurality of devices in a substation monitoring video; constructing a background updating model based on a Gaussian mixture model, and extracting periodically moving pixels in the substation monitoring video; and solving the background updating model by adopting a foreground object extraction method based on dynamic time warping. By constructing the substation moving target detection model and the Gaussian-mixture-based background updating model, calibrating the data content of the main equipment in the monitoring video and extracting the periodically moving pixels in the substation monitoring video, accurate identification of substation video targets is realized. This solves the problem that current substation video monitoring systems can hardly raise an alarm in time under abnormal operation conditions, enables the operating state of equipment in the substation to be monitored in real time, prevents substation faults, improves the operation safety of the system, and improves the power supply reliability of the power distribution network.

Description

Substation video target detection method based on time sequence
Technical Field
The invention relates to the technical field of power equipment detection, in particular to a substation video target detection method based on a time sequence.
Background
The transformer substation is an important node in the power grid, and the continuous development of intelligent substations provides a solid foundation for the safe operation of the smart grid. Video monitoring is an important component of the intelligent substation auxiliary system and can provide information on equipment operating state, operating environment, fire protection and theft prevention at any time. Unattended substations further require intelligent video monitoring methods that automatically acquire equipment-related data from the video stream and evaluate the operating condition of the equipment from the visual information. At the present stage, monitoring video serves as auxiliary production information of the substation, is mostly used as evidence data collected after equipment failures, and cannot raise an alarm in time under abnormal operation conditions. Therefore, the detection and tracking of target objects in video are the basis for intelligent monitoring of the operating condition of equipment in the station and have important research significance.
At present, research on substation video target detection methods in China and abroad already has a certain foundation. The main idea of region-based tracking is to establish a corresponding region template from the detected target, match the region of the object to be detected against the template, and judge the region with the highest matching degree to be the new target region. Prediction-based tracking needs to perform machine learning on the moving targets of multiple frames in the video, fit the motion trajectory of the targets, and predict the motion of the object to be detected from time and speed parameters; it works well for objects that move smoothly and slowly, but the effect is unsatisfactory when the target undergoes affine transformation or is occluded.
Disclosure of Invention
The embodiment of the invention aims to provide a substation video target detection method based on a time sequence. By constructing a substation moving target detection model and a background updating model based on a Gaussian mixture model, calibrating the data content of the main equipment in the substation monitoring video and extracting the periodically moving pixels in the substation monitoring video, accurate identification of substation video targets is realized. This solves the problem that current substation video monitoring systems can hardly raise an alarm in time under abnormal operation conditions, enables the operating state of equipment in the substation to be monitored in real time, prevents substation faults, facilitates the intelligent development of substations, improves substation operation and maintenance efficiency, improves the operation safety of the power system, and improves the power supply reliability of the power distribution network.
In order to solve the technical problems, the embodiment of the invention provides a substation video target detection method based on a time sequence, which comprises the following steps:
constructing a substation moving target detection model, and calibrating the data content of a plurality of devices in a substation monitoring video;
constructing a background updating model based on a Gaussian mixture model, and extracting periodically moving pixels in a transformer substation monitoring video;
and solving the background updating model by adopting a foreground object extraction method based on dynamic time warping.
Further, the construction of the substation moving object detection model specifically comprises the following steps:
acquiring local extreme point position data of the substation moving target in a scale space by adopting a characteristic point extraction method;
acquiring gradient distribution of neighborhood pixel points;
and judging whether the characteristic points between the two images of the transformer substation moving target are matched according to the local extreme point position data and the gradient distribution of the neighborhood pixel points.
Further, the step of obtaining the local extreme point position data of the substation moving target in the scale space by adopting a characteristic point extraction method comprises the following steps:
selecting one pixel point and eight adjacent pixel points thereof from a certain scale space in the Gaussian difference space as a comparison object of the two-dimensional neighborhood;
selecting a 3×3 point set of the same area in the upper scale space and the lower scale space adjacent to the scale space, and forming a three-dimensional neighborhood space with the adjacent pixel points;
judging whether the pixel point is a maximum value point or a minimum value point in the three-dimensional neighborhood space;
and if the pixel point is a maximum value point or a minimum value point in the three-dimensional neighborhood space, the pixel point is a characteristic point on the corresponding scale space.
Further, the background updating model based on the Gaussian mixture model is constructed specifically as follows:
establishing a plurality of Gaussian models for each pixel point of a frame in the transformer substation monitoring video so as to simulate the brightness density function of the pixel point;
recording the mean and variance of each Gaussian model;
judging whether the updated value of the pixel point belongs to any Gaussian model or not;
if the updated value of the pixel belongs to any Gaussian model, judging that the pixel belongs to a background point;
if the updated value of the pixel point does not belong to any Gaussian model, judging that the pixel point belongs to a foreground point, and creating a Gaussian model for the pixel point.
Further, the substation video target detection method based on the time sequence further comprises the following steps:
judging whether the generation probability of the Gaussian model in the background point is smaller than a first preset value or not;
and if the generation probability of the Gaussian model in the background point is smaller than the first preset value, removing the Gaussian model from the background updating model.
Further, the substation video target detection method based on the time sequence further comprises the following steps:
judging whether the probability of being hit by the Gaussian model of the foreground point is larger than a second preset value or not;
if the probability of being hit by the Gaussian model of the foreground point is larger than the second preset value, adding the foreground point as the background point, adding the corresponding Gaussian model into the background updating model, and removing one Gaussian model with the lowest hit probability in the background updating model.
Further, the foreground object extraction method based on dynamic time warping comprises the following steps:
extracting motion tracks of brightness values of M pixel points of the plurality of moving targets in the transformer substation monitoring video in the whole sequence, and intercepting an original sequence by taking L frames (L < N) as subsequence lengths to obtain M subsequence sets, wherein each subsequence set comprises N-L+1 subsequences with the length of L;
performing cluster analysis on the subsequence set by using a K-means algorithm, calculating the distance in the K-means algorithm according to an improved DTW method, clustering the motion track subsequence of each pixel point into K clusters, and recording the cluster center point as a reference object for subsequence matching;
modeling the background image by using a Gaussian mixture model, and extracting foreground pixel points;
and for the pixel point which the Gaussian mixture model judges to be a background point, establishing a time sequence with the length of L for the pixel point and matching it with the clustering model; if the time sequence is assigned to any one of the clusters and the similarity is greater than a confidence threshold ρ, the pixel point belongs to the background points; if the pixel point is an outlier, the pixel point is a foreground point, and the background updating model is updated.
The technical scheme provided by the embodiment of the invention has the following beneficial technical effects:
By constructing a substation moving target detection model and a background updating model based on a Gaussian mixture model, calibrating the data content of the main equipment in the substation monitoring video and extracting the periodically moving pixels in the substation monitoring video, accurate identification of substation video targets is realized. This solves the problem that current substation video monitoring systems can hardly raise an alarm in time under abnormal operation conditions, enables the operating state of equipment in the substation to be monitored in real time, prevents substation faults, facilitates the intelligent development of substations, improves substation operation and maintenance efficiency, improves the operation safety of the power system, and improves the power supply reliability of the power distribution network.
Drawings
Fig. 1 is a flowchart of a substation video target detection method based on a time sequence provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a multi-scale Gaussian difference space provided by an embodiment of the invention;
fig. 3 is a diagram for extracting a movement track of a reference point of a normal operation state of a transformer substation, which is provided by the embodiment of the invention;
fig. 4 is a graph of a characteristic point gray value change trace according to an embodiment of the present invention;
FIG. 5 is a diagram of a feature point cluster center sequence and a motion sequence during vibration provided by an embodiment of the invention;
FIG. 6 is a diagram of foreground extraction results during vibration of the device according to an embodiment of the present invention;
fig. 7 is a diagram of a complete foreground object extraction result during vibration according to an embodiment of the present invention.
Detailed Description
The objects, technical solutions and advantages of the present invention will become more apparent by the following detailed description of the present invention with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention.
Fig. 1 is a flowchart of a substation video target detection method based on a time sequence provided by an embodiment of the invention.
Referring to fig. 1, an embodiment of the present invention provides a method for detecting a video object of a substation based on a time sequence, including the following steps:
s100, a substation moving object detection model is built, and data content of a plurality of devices in a substation monitoring video is calibrated.
A feature point extraction method is adopted to detect the substation moving targets. Feature points locate the target object by describing key positions in the image; for example, the outline of the target is determined from the position information of the edges and corner points of the object. If a feature point differs obviously from the other pixels in the image frame, the equipment target represented by that feature point can be detected and tracked by searching for the same feature point across the video sequence.
The scale-invariant feature transform is a method for describing local features of images: extremum points are found in the scale space, and the degree of matching between the extremum points of two images is determined by comparing their magnitudes and directions. The two-dimensional image scale space L(x, y, σ) based on a Gaussian convolution kernel is expressed as:
L(x, y, σ) = G(x, y, σ) * f(x, y),
G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)),
where f(x, y) is the value of the image at point (x, y), G(x, y, σ) is the Gaussian kernel, and σ determines the degree of blurring of the image;
the principle of retina imaging is used for reference, and a multi-level model structure similar to a pyramid is constructed by using a scale invariant feature transformation method so as to simulate the process of image distance change during imaging. The resolution of the bottom layer of the image pyramid with the resolution of MxN is consistent with that of the original image, the resolution of the image of the ith layer is reduced from the bottom to the top, and the resolution of the image of the ith layer is reduced to 2 of the original image -i M×2 -i N; in addition, each layer of the pyramid divides the image into S scale levels, and the images between adjacent scales of the same level are differentiated to construct a Gaussian difference space.
Fig. 2 is a schematic diagram of a multi-scale gaussian differential space provided by an embodiment of the present invention.
Please refer to fig. 2, where
k = 2^(1/S),
and the Gaussian difference space D(x, y, σ) between two adjacent scale images in each level is defined as:
D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)
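To make the pyramid construction above concrete, here is a minimal Python sketch (using OpenCV and NumPy, which the patent does not prescribe) that builds, per octave, S+1 Gaussian-blurred images at scales σ₀·kˢ with k = 2^(1/S), differences adjacent scales to obtain the Gaussian difference space, and halves the resolution between octaves; all parameter values and the file name in the usage comment are illustrative assumptions.

```python
import cv2
import numpy as np

def build_dog_space(image, num_octaves=3, S=3, sigma0=1.6):
    """Build a Gaussian difference (DoG) space: per octave, S+1 Gaussian-blurred
    images at scales sigma0 * k**s with k = 2**(1/S), then difference adjacent scales."""
    k = 2.0 ** (1.0 / S)
    gray = image.astype(np.float32)
    dog_pyramid = []
    for octave in range(num_octaves):
        gaussians = [cv2.GaussianBlur(gray, (0, 0), sigma0 * k ** s) for s in range(S + 1)]
        # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
        dogs = [gaussians[s + 1] - gaussians[s] for s in range(S)]
        dog_pyramid.append(dogs)
        # next octave: halve the resolution (2^-i M x 2^-i N)
        gray = cv2.resize(gray, (gray.shape[1] // 2, gray.shape[0] // 2),
                          interpolation=cv2.INTER_NEAREST)
    return dog_pyramid

# usage sketch (illustrative file name):
# img = cv2.imread("substation_frame.png", cv2.IMREAD_GRAYSCALE)
# dog = build_dog_space(img)
```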
the local extremum point in the scale space is the characteristic point of the image under the scale, and the determination of the local extremum point comprises the following steps:
(1) And selecting a pixel point and 8 points around the pixel point on a certain scale in the Gaussian difference space as comparison objects of the two-dimensional neighborhood.
(2) Selecting a 3×3 point set of the same area in the upper scale and the lower scale adjacent to this scale, and forming a three-dimensional neighborhood space with the 8 points from step (1).
(3) If the pixel point is the maximum value or the minimum value point in the three-dimensional neighborhood space, the point is considered as a characteristic point on the corresponding scale.
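A minimal NumPy sketch of steps (1)–(3), assuming the Gaussian difference space of one octave is given as a list of 2-D arrays (as in the previous sketch) and that (y, x) is an interior pixel; it only checks whether the pixel is a maximum or minimum of its 3×3×3 neighborhood and omits the additional contrast and edge-response filtering of full SIFT.

```python
import numpy as np

def is_scale_space_extremum(dogs, s, y, x):
    """dogs: list of 2-D DoG images of one octave (adjacent scales).
    Returns True if pixel (y, x) at scale s is a max or min of its 3x3x3 neighborhood."""
    cube = np.stack([dogs[s - 1][y - 1:y + 2, x - 1:x + 2],
                     dogs[s][y - 1:y + 2, x - 1:x + 2],
                     dogs[s + 1][y - 1:y + 2, x - 1:x + 2]])   # 3 x 3 x 3 neighborhood
    center = dogs[s][y, x]
    return center == cube.max() or center == cube.min()

# usage sketch with toy data: three 5x5 random DoG layers
rng = np.random.default_rng(0)
dogs = [rng.standard_normal((5, 5)) for _ in range(3)]
print(is_scale_space_extremum(dogs, s=1, y=2, x=2))
```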
After the extremum point positions in the Gaussian difference space are determined, the features of each extremum point are described using the gradient distribution of its neighborhood pixels. The magnitude m(x, y, σ) and direction θ(x, y, σ) of the gradient of the extremum point's neighborhood pixels at scale σ are expressed as:
m(x, y, σ) = sqrt( (L(x+1, y, σ) − L(x−1, y, σ))² + (L(x, y+1, σ) − L(x, y−1, σ))² ),
θ(x, y, σ) = arctan( (L(x, y+1, σ) − L(x, y−1, σ)) / (L(x+1, y, σ) − L(x−1, y, σ)) ).
A statistical histogram is established from the neighborhood gradients; the maximum value is selected as the dominant gradient direction of the extremum point's neighborhood pixels, and the remaining values serve as auxiliary directions, forming the descriptive feature set of the extremum point. Feature points between two images are then matched using the magnitude and direction of the extremum points as criteria.
S200, constructing a background updating model based on the Gaussian mixture model, and extracting periodically moving pixels in the transformer substation monitoring video.
The background updating model based on the Gaussian mixture model is that a plurality of Gaussian distribution models are built for each pixel in a frame in a video sequence to simulate the brightness density function of the pixel point, and the average value and the variance in each Gaussian model are recorded. When updating the background, if the updated value of the pixel belongs to a certain gaussian model, the point still belongs to the background, and if it does not match all known gaussian models, the point is considered to have become foreground.
The brightness value of pixel (x, y) in the i-th frame of the video sequence is f_i(x, y). N Gaussian distribution models are established for this point, where the distribution function g_ij(x, y) of the j-th model is:
g_ij(x, y) = (1 / (√(2π) σ_ij)) exp(−(f_i(x, y) − μ_ij)² / (2σ_ij²)),
where μ_ij and σ_ij are respectively the mean and the variance of the j-th Gaussian distribution model of the pixel;
the Gaussian mixture distribution function m_i(x, y) of the pixel is:
m_i(x, y) = Σ_{j=1..N} p_ij(x, y) g_ij(x, y),
where p_ij(x, y) is the probability that the pixel belongs to the j-th Gaussian model distribution, and satisfies:
Σ_{j=1..N} p_ij(x, y) = 1.
At the k-th frame, the Gaussian distribution matched by the pixel is the one that maximizes p_ks(x, y) g_ks(x, y), s = 1, 2, …, N, where p_ks(x, y) is the probability that the pixel belongs to the s-th Gaussian model at the k-th frame and g_ks(x, y) is the distribution function of the s-th Gaussian model.
If all products p_ks(x, y) g_ks(x, y) are simultaneously less than the set threshold ζ, the point is considered to have become part of the foreground image, and a new Gaussian model is established for it. If the generation probability of a Gaussian model in the background becomes small, it is removed from the current mixture model; if the Gaussian model established for a foreground pixel is hit frequently, the pixel becomes background, the corresponding Gaussian model is added to the background model, and the model with the lowest hit frequency is removed to keep the number of models stable.
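The per-pixel mixture-of-Gaussians update described above is, in spirit, what OpenCV's MOG2 background subtractor implements, so it can serve as a stand-in for the patent's background updating model. In the sketch below, the video path, history length and variance threshold are illustrative assumptions rather than values given by the patent; varThreshold plays roughly the role of the threshold ζ above.

```python
import cv2

cap = cv2.VideoCapture("substation.mp4")          # hypothetical video path
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog.apply(frame)                    # 255 = foreground pixel, 0 = background
    fg_mask = cv2.medianBlur(fg_mask, 5)          # suppress isolated noise pixels
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```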
S300, solving the background updating model by adopting a foreground object extraction method based on dynamic time warping.
The Gaussian model is built on the brightness values of the pixels in the image. The improved foreground target extraction method based on the time sequence mainly checks whether the current brightness value sequence of a pixel matches the sequence observed during normal operation, i.e., the pattern in which the brightness values of the pixel are arranged according to a certain sequential relation. This patent proposes a method that improves the foreground object extraction mechanism of the Gaussian mixture model based on the dynamic time warping distance, so as to reduce the false negative rate of foreground target points in the original model. The dynamic time warping distance is based on time alignment and distance measurement and calculates the nonlinear similarity between two time sequences; it is commonly used in the field of pattern recognition. Let X = (x₁, x₂, …, x_n) and Y = (y₁, y₂, …, y_m) be two sequences, and construct the matrix A_{m×n} = [a_ij], where the element a_ij measures the distance between the corresponding elements of X and Y.
Starting from the first element a₁₁ of A_{m×n} and moving by the position operators δ₁ = (1, 0), δ₂ = (0, 1) and δ₃ = (1, 1) until the last element a_{mn} is reached, the path L traversed is recorded as the minimum warping distance of the sequences X and Y, expressed as:
min{a_{(i,j)+δ₁}, a_{(i,j)+δ₂}, a_{(i,j)+δ₃}} ∈ L, i = 1, 2, …, m; j = 1, 2, …, n
Using the elements l_n contained in L, the dynamic time warping distance DTW_XY between X and Y is solved:
DTW_XY = Σ_{n=1..k} l_n,
where k is the number of elements traversed by the path L;
The traditional dynamic time warping distance uses the Euclidean distance to calculate similarity. In the improved method, the element a_ij of the matrix is modified from a distance into a similarity measure between the corresponding elements of X and Y, so that the minimum warping path correspondingly becomes a maximum similarity path:
max{a_{(i,j)+δ₁}, a_{(i,j)+δ₂}, a_{(i,j)+δ₃}} ∈ L, i = 1, 2, …, m; j = 1, 2, …, n
if the full-sequence matching with the same length is performed, only a matrix A is calculated m×n The path formed by diagonal elements in (a) is only needed.
The foreground object extraction method based on dynamic time warping has the following calculation flow:
(1) Subsequence extraction
When the equipment operates normally, the motion tracks over the whole sequence (N frames) of the brightness values of M pixels in total on each target object in the video (including foreground objects and the background) are extracted; the original sequences are cut with L frames (L < N) as the subsequence length, yielding M sets, each containing N−L+1 subsequences of length L.
(2) Sub-sequence clustering
Cluster analysis is performed with the K-means algorithm on the subsequence sets obtained in step (1). The distance in K-means is calculated according to the improved DTW method; the motion track subsequences of each pixel are clustered into K clusters, and the center point of each cluster is recorded as the reference object for subsequence matching;
(3) Background model establishment
The background image is modeled with the Gaussian mixture model, and foreground pixels are preliminarily extracted.
(4) Enhanced foreground extraction method
For a pixel judged as background by the Gaussian mixture model, a time sequence of length L is established and matched against the clustering model obtained in step (2). If the sequence is assigned to a cluster and the similarity is greater than the confidence threshold ρ, the pixel belongs to the background; if the pixel is an outlier, it is judged to be a foreground pixel. The background model is then updated according to the result of this step.
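Putting steps (1)–(4) together, the following NumPy-only sketch illustrates the time-sequence part of the flow: subsequences of length L are cut from a pixel's brightness track, clustered with a simple K-means-style loop whose assignment uses the DTW distance (element-wise mean centroids over equal-length subsequences are a simplification), and a new length-L sequence is then matched against the cluster centers with a confidence threshold ρ. The similarity definition 1/(1 + DTW) and all parameter values are illustrative assumptions, not the patent's exact formulas.

```python
import numpy as np

def dtw(x, y):
    n, m = len(x), len(y)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

def subsequences(track, L):
    """Step (1): all N-L+1 subsequences of length L of one pixel's brightness track."""
    return np.array([track[i:i + L] for i in range(len(track) - L + 1)])

def kmeans_dtw(subs, K, iters=10, seed=0):
    """Step (2): K-means-style clustering with DTW assignment and mean centroids."""
    rng = np.random.default_rng(seed)
    centers = subs[rng.choice(len(subs), K, replace=False)].copy()
    for _ in range(iters):
        labels = np.array([np.argmin([dtw(s, c) for c in centers]) for s in subs])
        for k in range(K):
            if np.any(labels == k):
                centers[k] = subs[labels == k].mean(axis=0)
    return centers

def is_background(seq, centers, rho):
    """Step (4): match a new length-L sequence against the cluster centers;
    similarity is taken as 1/(1 + DTW), an illustrative choice."""
    sims = [1.0 / (1.0 + dtw(seq, c)) for c in centers]
    return max(sims) > rho

# toy usage: a periodic brightness track of N = 40 frames, subsequences of L = 8
N, L, K = 40, 8, 4
track = 128 + 20 * np.sin(np.arange(N) / 3.0)
centers = kmeans_dtw(subsequences(track, L), K)
print(is_background(track[:L], centers, rho=0.05))
```

In a full implementation the cluster centers would be computed during normal operation for the pixels of interest, and this matching step would be applied only to pixels that the Gaussian mixture model has already labelled as background, as described in step (4).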
Fig. 3 is a diagram for extracting a movement trace of a reference point of a normal operation state of a transformer substation according to an embodiment of the present invention.
Fig. 4 is a graph of a characteristic point gray value change trace according to an embodiment of the present invention.
Fig. 5 is a diagram of a feature point clustering center sequence and a motion sequence during vibration provided by an embodiment of the present invention.
Fig. 6 is a diagram of foreground extraction results when equipment provided by an embodiment of the invention vibrates.
Fig. 7 is a diagram of a complete foreground object extraction result during vibration according to an embodiment of the present invention.
Referring to fig. 3 to fig. 7, a video (40 frames) of the substation in normal operation is taken as the full sequence of the motion tracks. Fig. 3 is an image of the substation in normal operation; three points A, B and C are selected on the insulating sleeve, the background and the circuit breaker respectively, their brightness change sequences are extracted, and the change tracks are shown in fig. 4.
Taking 8 frames as the length of one subsequence (40 − 8 + 1 = 33 subsequences per point, 3×33 in total), clustering yields K = 4 cluster centers per point (3×4 in total). Fig. 5 shows the brightness value changes of the cluster center sequences of points A, B and C in normal operation and the brightness value changes of the three points when vibration occurs in the video. As shown in part (d) of fig. 5, when vibration occurs the brightness values of points A and B change obviously and periodically, and the video shows that these two points belong to the area of the insulating sleeve affected by the vibration, while point C is not affected by the sleeve vibration and its brightness change is basically consistent with normal operation. The Gaussian mixture model of points A, B and C calculated by the method above gives the result that all three reference points are background pixels, which does not accord with the actual situation. Comparing the brightness change sequences of the reference points with the cluster center samples from normal operation, points A and B do not belong to any cluster, while the track of point C during the sleeve vibration still falls into the cluster with center c₄ to which it belongs; therefore, points A and B should be part of the image foreground object when the sleeve vibrates.
Fig. 6 shows the result of extracting the foreground object from the image when the sleeve vibrates: only the edge part of the sleeve, which changes obviously during vibration, is marked, while the pixel brightness of the sleeve body changes only slightly or not at all during the motion. To describe the foreground object during vibration more intuitively, the foreground pixels are clustered according to their brightness values, which yields a more complete foreground object image, as shown in fig. 7. The video target detection results verify the practicability and effectiveness of the method.
The embodiment of the invention aims to protect a substation video target detection method based on a time sequence, which comprises the following steps: constructing a substation moving target detection model, and calibrating the data content of a plurality of devices in a substation monitoring video; constructing a background updating model based on a Gaussian mixture model, and extracting periodically moving pixels in a transformer substation monitoring video; and solving the background updating model by adopting a foreground object extraction method based on dynamic time warping. The technical scheme has the following effects:
by constructing a transformer substation moving target detection model and a background updating model based on a Gaussian mixture model, calibrating the data content of main equipment in a transformer substation monitoring video and extracting periodically moving pixels in the transformer substation monitoring video, accurate identification of a transformer substation video target is realized, the problem that a current transformer substation video monitoring system is difficult to alarm in time under abnormal operation conditions is solved, the operation state of equipment in a transformer substation can be monitored in real time, transformer substation faults are prevented, intelligent development of the transformer substation is facilitated, the operation and maintenance efficiency of the transformer substation is improved, the operation safety of a power system is improved, and the power supply reliability of a power distribution network is improved.
It is to be understood that the above-described embodiments of the present invention merely illustrate or explain the principles of the present invention and are in no way limiting. Accordingly, any modification, equivalent replacement, improvement, etc. made without departing from the spirit and scope of the present invention shall be included in the scope of the present invention. Furthermore, the appended claims are intended to cover all such changes and modifications that fall within the scope and boundary of the appended claims, or equivalents of such scope and boundary.

Claims (6)

1. The substation video target detection method based on the time sequence is characterized by comprising the following steps of:
constructing a substation moving target detection model, and calibrating the data content of a plurality of devices in a substation monitoring video;
constructing a background updating model based on a Gaussian mixture model, and extracting periodically moving pixels in a transformer substation monitoring video;
solving the background updating model by adopting a foreground object extraction method based on dynamic time warping;
the foreground object extraction method based on dynamic time warping comprises the following steps:
extracting motion tracks of brightness values of M pixel points of a plurality of moving targets in the transformer substation monitoring video in the whole sequence, and intercepting an original sequence by taking L frames as subsequence lengths, wherein L is less than N and N is the number of frames of the whole sequence, so as to obtain M subsequence sets, wherein each subsequence set comprises N-L+1 subsequences with the length of L;
performing cluster analysis on the subsequence set by using a K-means algorithm, calculating the distance in the K-means algorithm according to an improved DTW method, clustering the motion track subsequence of each pixel point into K clusters, recording the cluster center point as a reference object for subsequence matching, wherein the improved DTW method is used for constructing a matrix and calculating a maximum similarity path,
the matrix A_{m×n} is composed of elements a_ij, where a_ij is the similarity, calculated by the improved method, between the corresponding elements of the two time sequences, x_i and y_j are respectively elements of the time sequences X = (x₁, x₂, …, x_n) and Y = (y₁, y₂, …, y_m), i is the sequence number in time sequence X, and j is the sequence number in time sequence Y;
the maximum similarity path is:
max{a_{(i,j)+δ₁}, a_{(i,j)+δ₂}, a_{(i,j)+δ₃}} ∈ L, i = 1, 2, …, m; j = 1, 2, …, n,
where δ₁, δ₂ and δ₃ are respectively the position operators δ₁ = (1, 0), δ₂ = (0, 1) and δ₃ = (1, 1);
Modeling a background image by using a Gaussian mixture model, and extracting foreground pixel points;
and for the pixel point which the Gaussian mixture model judges to be a background point, establishing a time sequence with the length of L for the pixel point and matching it with the clustering model; if the time sequence is assigned to any one of the clusters and the similarity is greater than a confidence threshold ρ, the pixel point belongs to the background points; if the pixel point is an outlier, the pixel point is a foreground point, and the background updating model is updated.
2. The method for detecting the video target of the transformer substation based on the time sequence according to claim 1, wherein the construction of the moving target detection model of the transformer substation is specifically as follows:
acquiring local extreme point position data of the substation moving target in a scale space by adopting a characteristic point extraction method;
acquiring gradient distribution of neighborhood pixel points;
and judging whether the characteristic points between the two images of the transformer substation moving target are matched according to the local extreme point position data and the gradient distribution of the neighborhood pixel points.
3. The method for detecting a video object of a transformer substation based on time series according to claim 2, wherein the step of obtaining local extremum point position data of the moving object of the transformer substation in a scale space by using a feature point extraction method comprises the following steps:
selecting one pixel point and eight adjacent pixel points thereof from a certain scale space in the Gaussian difference space as a comparison object of the two-dimensional neighborhood;
selecting a 3×3 point set of the same area in the upper scale space and the lower scale space adjacent to the scale space, and forming a three-dimensional neighborhood space with the adjacent pixel points;
judging whether the pixel point is a maximum value point or a minimum value point in the three-dimensional neighborhood space;
and if the pixel points are maximum value points or minimum value points in the three-dimensional neighborhood space, the pixel points are characteristic points on the corresponding scale space.
4. The substation video object detection method based on the time sequence according to claim 1, wherein the building of the background update model based on the mixed gaussian model is specifically:
establishing a plurality of Gaussian models for each pixel point of a frame in the transformer substation monitoring video so as to simulate the brightness density function of the pixel point;
recording the mean and variance of each Gaussian model;
judging whether the updated value of the pixel point belongs to any Gaussian model or not;
if the updated value of the pixel belongs to any Gaussian model, judging that the pixel belongs to a background point;
if the updated value of the pixel point does not belong to any Gaussian model, judging that the pixel point belongs to a foreground point, and creating a Gaussian model for the pixel point.
5. The time series-based substation video object detection method according to claim 4, further comprising:
judging whether the generation probability of the Gaussian model in the background point is smaller than a first preset value or not;
and if the generation probability of the Gaussian model in the background point is smaller than the first preset value, removing the Gaussian model from the background updating model.
6. The time series-based substation video object detection method according to claim 4, further comprising:
judging whether the probability of being hit by the Gaussian model of the foreground point is larger than a second preset value or not;
if the probability of being hit by the Gaussian model of the foreground point is larger than the second preset value, adding the foreground point as the background point, adding the corresponding Gaussian model into the background updating model, and removing one Gaussian model with the lowest hit probability in the background updating model.
CN202010666328.2A 2020-07-10 2020-07-10 Substation video target detection method based on time sequence Active CN112417937B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010666328.2A CN112417937B (en) 2020-07-10 2020-07-10 Substation video target detection method based on time sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010666328.2A CN112417937B (en) 2020-07-10 2020-07-10 Substation video target detection method based on time sequence

Publications (2)

Publication Number Publication Date
CN112417937A CN112417937A (en) 2021-02-26
CN112417937B true CN112417937B (en) 2023-05-16

Family

ID=74844155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010666328.2A Active CN112417937B (en) 2020-07-10 2020-07-10 Substation video target detection method based on time sequence

Country Status (1)

Country Link
CN (1) CN112417937B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148072B (en) * 2018-02-12 2023-05-02 庄龙飞 Sport course scoring method and system
CN117676136A (en) * 2023-11-16 2024-03-08 广州群接龙网络科技有限公司 Method and system for processing group-connected data
CN117474983B (en) * 2023-12-27 2024-03-12 广东力创信息技术有限公司 Early warning method based on light-vision linkage and related device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942850A (en) * 2014-04-24 2014-07-23 中国人民武装警察部队浙江省总队医院 Medical staff on-duty monitoring method based on video analysis and RFID (radio frequency identification) technology
CN108647644A (en) * 2018-05-11 2018-10-12 山东科技大学 Coal mine based on GMM characterizations blows out unsafe act identification and determination method
CN109145820A (en) * 2018-08-22 2019-01-04 四创科技有限公司 A kind of river location mask method based on video dynamic image
CN111158491A (en) * 2019-12-31 2020-05-15 苏州莱孚斯特电子科技有限公司 Gesture recognition man-machine interaction method applied to vehicle-mounted HUD

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8675917B2 (en) * 2011-10-31 2014-03-18 International Business Machines Corporation Abandoned object recognition using pedestrian detection

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942850A (en) * 2014-04-24 2014-07-23 中国人民武装警察部队浙江省总队医院 Medical staff on-duty monitoring method based on video analysis and RFID (radio frequency identification) technology
CN108647644A (en) * 2018-05-11 2018-10-12 山东科技大学 Coal mine based on GMM characterizations blows out unsafe act identification and determination method
CN109145820A (en) * 2018-08-22 2019-01-04 四创科技有限公司 A kind of river location mask method based on video dynamic image
CN111158491A (en) * 2019-12-31 2020-05-15 苏州莱孚斯特电子科技有限公司 Gesture recognition man-machine interaction method applied to vehicle-mounted HUD

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Antonio Hernández-Vela, et al. Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D. Pattern Recognition Letters. 2013. *
Target detection and trajectory analysis in the case of a moving camera; Wang Liangfen; China Master's Theses Full-text Database, Information Science and Technology; 2010-05-15; sections 2.2 and 3.2 *
Moving object detection algorithm based on background reconstruction; Dong Wenming et al.; Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition); 2008-12-31; pages 754-757 *

Also Published As

Publication number Publication date
CN112417937A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN112417937B (en) Substation video target detection method based on time sequence
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN112199993B (en) Method for identifying transformer substation insulator infrared image detection model in any direction based on artificial intelligence
US20210042556A1 (en) Pixel-level based micro-feature extraction
Dong et al. Deep metric learning-based for multi-target few-shot pavement distress classification
CN106845364B (en) Rapid automatic target detection method
CN111626128A (en) Improved YOLOv 3-based pedestrian detection method in orchard environment
CN106127812B (en) A kind of passenger flow statistical method of the non-gate area in passenger station based on video monitoring
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN108734109B (en) Visual target tracking method and system for image sequence
CN106815578A (en) A kind of gesture identification method based on Depth Motion figure Scale invariant features transform
CN113052876A (en) Video relay tracking method and system based on deep learning
CN106952293A (en) A kind of method for tracking target based on nonparametric on-line talking
CN113362374A (en) High-altitude parabolic detection method and system based on target tracking network
CN111723773A (en) Remnant detection method, device, electronic equipment and readable storage medium
CN112100435A (en) Automatic labeling method based on edge end traffic audio and video synchronization sample
Charouh et al. Improved background subtraction-based moving vehicle detection by optimizing morphological operations using machine learning
CN110969645A (en) Unsupervised abnormal track detection method and unsupervised abnormal track detection device for crowded scenes
CN109901553B (en) Heterogeneous industrial big data collaborative modeling process fault monitoring method based on multiple visual angles
CN111860097B (en) Abnormal behavior detection method based on fuzzy theory
CN116647644B (en) Campus interactive monitoring method and system based on digital twin technology
Leyva et al. Video anomaly detection based on wake motion descriptors and perspective grids
CN113177439A (en) Method for detecting pedestrian crossing road guardrail
CN112733770A (en) Regional intrusion monitoring method and device
CN113076963B (en) Image recognition method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant