CN112417937A - Transformer substation video target detection method based on time sequence - Google Patents

Transformer substation video target detection method based on time sequence

Info

Publication number
CN112417937A
CN112417937A (application number CN202010666328.2A)
Authority
CN
China
Prior art keywords
model
point
transformer substation
gaussian
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010666328.2A
Other languages
Chinese (zh)
Other versions
CN112417937B (en)
Inventor
张正文
王露
马涛
陈杰
马慧卓
张禹森
秦三营
田二胜
王韬尉
聂向欣
段维鹏
朱志勇
燕彩丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Xiong'an Xuji Electric Technology Co ltd
Xiongan New Area Power Supply Company State Grid Hebei Electric Power Co
State Grid Corp of China SGCC
Xuji Group Co Ltd
Original Assignee
Hebei Xiong'an Xuji Electric Technology Co ltd
Xiongan New Area Power Supply Company State Grid Hebei Electric Power Co
State Grid Corp of China SGCC
Xuji Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Xiong'an Xuji Electric Technology Co ltd, Xiongan New Area Power Supply Company State Grid Hebei Electric Power Co, State Grid Corp of China SGCC, Xuji Group Co Ltd filed Critical Hebei Xiong'an Xuji Electric Technology Co ltd
Priority to CN202010666328.2A
Publication of CN112417937A
Application granted
Publication of CN112417937B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a transformer substation video target detection method based on a time sequence, which comprises the following steps: constructing a transformer substation moving target detection model, and calibrating the data content of a plurality of devices in the transformer substation monitoring video; constructing a background updating model based on a Gaussian mixture model, and extracting periodically moving pixels in the transformer substation monitoring video; and solving the background updating model by adopting a foreground object extraction method based on dynamic time warping. By constructing the moving target detection model and the Gaussian-mixture background updating model, the data content of the main equipment in the monitoring video is calibrated and the periodically moving pixels are extracted, so that accurate identification of the substation video target is realized. This solves the problem that a conventional substation video monitoring system can hardly give a timely alarm under abnormal operation conditions, allows the operation state of equipment in the substation to be monitored in real time, prevents substation faults, improves the operation safety of the system, and improves the power supply reliability of the power distribution network.

Description

Transformer substation video target detection method based on time sequence
Technical Field
The invention relates to the technical field of power equipment detection, in particular to a transformer substation video target detection method based on time series.
Background
The transformer substation is an important node in the power grid, and the continuous development of substation intelligence provides a solid foundation for the safe operation of the smart grid. Video monitoring is an important component of the intelligent substation auxiliary system, and can constantly provide information on the running state and running environment of equipment as well as fire and theft prevention. An unattended substation needs to automatically acquire relevant equipment data from video streams by means of intelligent video monitoring, and to evaluate the running condition of the equipment from the visual information. At present, the monitoring video serves only as auxiliary production information of the substation and is mostly used as evidence collected after equipment fails; it cannot give a timely alarm under abnormal operation conditions. Therefore, the detection and tracking of target objects in video is the basis for intelligent monitoring of the operation condition of equipment in the station, and has important research significance.
At present, research on substation video target detection methods at home and abroad has a certain foundation. The main idea of the region-based tracking method is to establish a corresponding region template from a detected target, match the region of the object to be detected against the template, and judge the region with the highest matching degree to be the new target region; however, this method yields large errors when the target undergoes affine transformation or is occluded, and the choice of tracking region directly influences the final tracking effect. The prediction-based tracking method requires machine learning on the moving targets of several frames of the video, fits the motion trajectory of the targets, and predicts the motion of the object to be detected from time and speed parameters.
Disclosure of Invention
The invention aims to provide a substation video target detection method based on a time sequence. By constructing a substation moving target detection model and a background updating model based on a Gaussian mixture model, the data content of the main equipment in the substation monitoring video is calibrated and the periodically moving pixels in the video are extracted, so that accurate identification of the substation video target is realized. This solves the problem that a conventional substation video monitoring system can hardly give a timely alarm under abnormal operation conditions; the operation state of equipment in the substation can be monitored in real time, substation faults are prevented, the intelligent development of the substation is facilitated, the operation and maintenance efficiency of the substation is improved, the operation safety of the power system is improved, and the power supply reliability of the power distribution network is improved.
In order to solve the technical problem, an embodiment of the present invention provides a transformer substation video target detection method based on a time sequence, including the following steps:
constructing a transformer substation moving target detection model, and calibrating data contents of a plurality of devices in a transformer substation monitoring video;
constructing a background updating model based on a Gaussian mixture model, and extracting pixels of periodic motion in a monitoring video of the transformer substation;
and solving the background updating model by adopting a foreground object extraction method based on dynamic time warping.
Further, the building of the substation moving object detection model specifically includes:
acquiring local extreme point position data of the moving target of the transformer substation in a scale space by adopting a characteristic point extraction method;
acquiring gradient distribution of neighborhood pixels;
and judging whether the feature points between the two images of the moving target of the transformer substation are matched or not according to the local extreme point position data and the gradient distribution of the neighborhood pixels.
Further, the method for acquiring the position data of the local extreme point of the moving target of the transformer substation in the scale space by adopting the feature point extraction method comprises the following steps:
selecting a pixel point and eight adjacent pixel points thereof on a certain scale space in a Gaussian difference space as a comparison object of a two-dimensional neighborhood;
selecting a 3 x 3 point set of the same region in the upper and lower two scale spaces adjacent to the scale space, and forming a three-dimensional neighborhood space with the adjacent pixel points;
judging whether the pixel points are maximum value points or minimum value points in the three-dimensional adjacent space;
and if the pixel point is the maximum value point or the minimum value point in the three-dimensional adjacent space, the pixel point is the feature point on the corresponding scale space.
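The four claim steps above amount to a strict 3 × 3 × 3 extremum test in the Gaussian difference stack. The following minimal Python sketch is illustrative only, not from the patent; the function name and the list-of-lists representation of the scale stack are assumptions:

```python
# Hypothetical sketch of the claimed extremum test: a pixel in a
# difference-of-Gaussian stack is a feature point iff it is the strict
# maximum or minimum of its 3 x 3 x 3 neighbourhood (its 8 neighbours in
# the same scale plus the 3 x 3 patches in the adjacent scales).
def is_scale_space_extremum(dog, s, y, x):
    """dog: list of 2-D lists (one per scale); (s, y, x) must be interior."""
    centre = dog[s][y][x]
    neighbours = [
        dog[ss][yy][xx]
        for ss in (s - 1, s, s + 1)
        for yy in (y - 1, y, y + 1)
        for xx in (x - 1, x, x + 1)
        if (ss, yy, xx) != (s, y, x)
    ]
    return centre > max(neighbours) or centre < min(neighbours)
```

A border-handling policy (the claims compare only interior points) is left to the caller.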
Further, the building of the background update model based on the gaussian mixture model specifically includes:
establishing a plurality of Gaussian models for each pixel point of a frame in the transformer substation monitoring video to simulate a brightness density function of the pixel points;
recording the mean and variance of each Gaussian model;
judging whether the updated value of the pixel belongs to any one of the Gaussian models;
if the updated value of the pixel point belongs to any one of the Gaussian models, judging that the pixel point belongs to a background point;
and if the updated value of the pixel does not belong to any Gaussian model, judging that the pixel belongs to a foreground point, and establishing a Gaussian model for the pixel.
Further, the transformer substation video target detection method based on the time sequence further comprises the following steps:
judging whether the Gaussian model generation probability in the background points is smaller than a first preset value or not;
and if the Gaussian model generation probability in the background points is smaller than the first preset value, removing the Gaussian model from the background updating model.
Further, the transformer substation video target detection method based on the time sequence further comprises the following steps:
judging whether the hit probability of the Gaussian model of the foreground point is greater than a second preset value or not;
if the hit probability of the Gaussian models of the foreground points is greater than the second preset value, the foreground points are added as the background points, the corresponding Gaussian models are added into the background updating model, and the Gaussian model with the lowest hit probability in the background updating model is removed.
Further, the method for extracting the foreground object based on the dynamic time warping comprises the following steps:
extracting the motion trail of the brightness values of M pixel points of the plurality of moving targets over the whole sequence (N frames) in the transformer substation monitoring video, and intercepting the original sequence with L frames (L < N) as the subsequence length to obtain M subsequence sets, wherein each subsequence set comprises N − L + 1 subsequences of length L;
clustering analysis is carried out on the subsequence set by utilizing a K-means algorithm, the distance in the K-means algorithm is calculated according to an improved DTW method, the motion trail subsequence of each pixel point is clustered into K clusters, and the cluster center point is recorded as a reference object matched with the subsequences;
modeling the background image by applying a Gaussian mixture model, and extracting foreground pixel points;
and for the pixel point judged as the background point by the Gaussian mixture model, establishing a time sequence with the length of L for the pixel point, matching the time sequence with the clustering model, if the time sequence is divided into any one of the clusters and the similarity is greater than a confidence threshold rho, determining that the pixel point belongs to the background point, and if the pixel point is an outlier, determining that the pixel point is the foreground point and updating the background updating model.
The technical scheme of the embodiment of the invention has the following beneficial technical effects:
the method has the advantages that the data content of main equipment in the substation monitoring video is calibrated and the periodically moving pixels in the substation monitoring video are extracted by constructing the substation moving target detection model and the background updating model based on the Gaussian mixture model, so that the accurate identification of the substation video target is realized, the problem that the conventional substation video monitoring system is difficult to alarm in time under the abnormal operation condition is solved, the operation state of equipment in the substation can be monitored in real time, the fault of the substation is prevented, the intelligent development of the substation is favorably realized, the operation and maintenance efficiency of the substation is improved, the operation safety of an electric power system is improved, and the power supply reliability of a power distribution network is improved.
Drawings
Fig. 1 is a flowchart of a method for detecting a transformer substation video target based on a time sequence according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a multi-scale Gaussian difference space provided by an embodiment of the present invention;
fig. 3 is a reference point motion trajectory extraction diagram of a normal operation state of a transformer substation according to an embodiment of the present invention;
FIG. 4 is a diagram of a variation trajectory of gray-level values of feature points according to an embodiment of the present invention;
FIG. 5 is a diagram of a feature point clustering center sequence and a vibration motion sequence provided by an embodiment of the present invention;
FIG. 6 is a diagram of a foreground extraction result when the apparatus provided by the embodiment of the present invention vibrates;
fig. 7 is a diagram of a complete foreground object extraction result during vibration according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Fig. 1 is a flowchart of a substation video target detection method based on a time sequence according to an embodiment of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a method for detecting a transformer substation video target based on a time sequence, including the following steps:
s100, constructing a transformer substation moving target detection model, and calibrating data contents of a plurality of devices in a transformer substation monitoring video.
This patent adopts a feature point extraction method to detect the substation moving target. Feature points locate the target object through the description of key position features in the image; for example, the contour of the target is determined from the position information of the edges and corners of the object. If a feature point differs obviously from the other pixels in the image frame, the device target represented by that point can be detected and tracked by searching for the same feature point across the video sequence.
The scale invariant feature transformation is a method for describing local features of images, and is characterized by searching for extreme points on a spatial scale and determining the matching degree between the extreme points in two images by comparing the amplitude and the direction of the extreme points. The two-dimensional image scale space L (x, y, σ) based on the gaussian convolution kernel is expressed as:
L(x,y,σ)=G(x,y,σ)*f(x,y),
G(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))
wherein f(x, y) is the value of the point (x, y) in the image, σ is the degree of blurring of the image, and * denotes convolution;
By using the principle of retinal imaging, a multilevel model structure similar to a pyramid is constructed by the scale invariant feature transformation method so as to simulate the change of image distance during imaging. For an image with resolution M × N, the resolution of the pyramid bottom layer is consistent with that of the original image, the image resolution from bottom to top is reduced step by step, and the image resolution of the i-th layer is reduced to 2⁻ⁱM × 2⁻ⁱN. In addition, the image is divided into S scale levels in each layer of the pyramid, and differences are taken between the images of adjacent scales in the same layer to construct a Gaussian difference space.
Fig. 2 is a schematic diagram of a multi-scale gaussian difference space according to an embodiment of the present invention.
Please refer to fig. 2, wherein the scale factor between adjacent scale levels is k = 2^(1/S), S being the number of scale levels per layer.
The gaussian difference space D (x, y, σ) between the first image and the second image in each level is defined as:
D(x,y,σ)=L(x,y,kσ)-L(x,y,σ)
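The scale space and Gaussian difference space above can be sketched in plain Python. This is an illustrative toy implementation, not the patent's code: the kernel is truncated at a small radius, `blur` clamps at the image border, and the default k = √2 is an assumption:

```python
import math

def gaussian_kernel(sigma, radius):
    # G(x, y, sigma) = 1/(2*pi*sigma^2) * exp(-(x^2 + y^2)/(2*sigma^2)),
    # truncated at `radius` and renormalised so the weights sum to 1.
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def blur(img, sigma, radius=2):
    # L(x, y, sigma) = G(x, y, sigma) * f(x, y), with clamped borders.
    h, w = len(img), len(img[0])
    ker = gaussian_kernel(sigma, radius)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += ker[dy + radius][dx + radius] * img[yy][xx]
            out[y][x] = acc
    return out

def difference_of_gaussian(img, sigma, k=2 ** 0.5):
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
    a, b = blur(img, k * sigma), blur(img, sigma)
    return [[av - bv for av, bv in zip(ra, rb)] for ra, rb in zip(a, b)]
```

On a constant image the two blurred layers coincide, so the difference space is zero everywhere, as expected.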
the local extreme point in the scale space is a feature point of the image under the scale, and the determination of the local extreme point comprises the following steps:
(1) and selecting a pixel point and 8 points around the pixel point on a certain scale in the Gaussian difference space as a comparison object of the two-dimensional neighborhood.
(2) And (3) selecting a 3 x 3 point set of the same region in the upper and lower scales adjacent to the scale, and forming a three-dimensional neighborhood space with the 8 points in the step (1).
(3) And if the pixel point is the maximum value or the minimum value point in the three-dimensional neighborhood space, the pixel point is considered as a feature point on the corresponding scale.
After the position of an extreme point in a Gaussian difference space is determined, the gradient distribution of neighborhood pixel points is used for describing the characteristics of the extreme point, and the amplitude m (x, y, sigma) and the direction theta (x, y, sigma) of the gradient of the neighborhood pixel points of the extreme point on the scale sigma are expressed as follows:
m(x, y, σ) = √[(L(x+1, y, σ) − L(x−1, y, σ))² + (L(x, y+1, σ) − L(x, y−1, σ))²]
θ(x, y, σ) = arctan[(L(x, y+1, σ) − L(x, y−1, σ)) / (L(x+1, y, σ) − L(x−1, y, σ))]
and establishing a statistical histogram according to the characteristics of the neighborhood gradient, selecting the maximum value as the gradient main direction of the neighborhood pixels of the extreme point, and establishing a description characteristic set of the extreme point by taking the rest values as the auxiliary directions. And matching the feature points between the two images by taking the position of the extreme point and the amplitude and the direction of the neighborhood pixel point as criteria.
S200, constructing a background updating model based on the Gaussian mixture model, and extracting pixels which periodically move in the transformer substation monitoring video.
The background updating model based on the Gaussian mixture model is that in a video sequence, a plurality of Gaussian distribution models are built for each pixel in a frame to simulate the brightness density function of the pixel, and the average value and the variance in each Gaussian model are recorded. When updating the background, if the updated value of the pixel belongs to a certain gaussian model, the point still belongs to the background, and if the updated value does not match with all known gaussian models, the point is considered to have become the foreground.
The brightness value of a pixel point (x, y) in the i-th frame image of the video sequence is f_i(x, y). N Gaussian distribution models are established for the point, and the distribution function g_ij(x, y) of the j-th model is:
g_ij(x, y) = (1/(√(2π)·σ_ij)) · exp(−(f_i(x, y) − μ_ij)²/(2σ_ij²))
in the formula, μ_ij and σ_ij² are respectively the mean value and the variance of the j-th Gaussian distribution model of the pixel point;
Then the Gaussian mixture model distribution function m_i(x, y) of the pixel point is:
m_i(x, y) = Σ_{j=1..N} p_ij(x, y) · g_ij(x, y)
in the formula, p_ij(x, y) is the probability that the pixel belongs to the j-th Gaussian model distribution, and satisfies:
Σ_{j=1..N} p_ij(x, y) = 1
When the pixel point reaches the k-th frame, its match against the s-th Gaussian distribution model is evaluated as:
p_ks(x, y) · g_ks(x, y)
in the formula, p_ks(x, y) is the probability that the pixel point belongs to the s-th Gaussian model distribution at the k-th frame, and g_ks(x, y) is the distribution function of the s-th Gaussian model at the k-th frame.
If all p_ks(x, y)·g_ks(x, y) at that moment are smaller than the set threshold ξ, the point is considered to have become part of the foreground image, and a new Gaussian model is established for it. If the generation probability of a Gaussian model in the background is small, it is removed from the current mixture model; if the Gaussian model of a foreground pixel point is frequently hit after being established, the pixel point becomes background, the corresponding Gaussian model is added to the background model, and the model with the lowest hit frequency is removed to keep the number of models stable.
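The update rule described above can be sketched per pixel as follows. This is an illustrative simplification, not the patent's implementation: the match test against each model, the learning rate α, and the 2.5σ acceptance band are conventional choices for Gaussian-mixture background models, not values stated in the patent:

```python
# Each pixel keeps several Gaussian models: weight p, mean mu, variance var.
# A new brightness value within n_sigma standard deviations of some model is
# background and updates that model; otherwise the pixel is foreground and a
# new model replaces the weakest one when the model count is at its limit.
def update_pixel(models, value, max_models=3, alpha=0.05, n_sigma=2.5):
    for m in models:
        if abs(value - m["mu"]) <= n_sigma * m["var"] ** 0.5:
            m["p"] = (1 - alpha) * m["p"] + alpha            # model hit
            m["mu"] += alpha * (value - m["mu"])
            m["var"] += alpha * ((value - m["mu"]) ** 2 - m["var"])
            for other in models:
                if other is not m:
                    other["p"] *= 1 - alpha                  # decay the rest
            return "background"
    new = {"p": alpha, "mu": float(value), "var": 100.0}
    if len(models) >= max_models:
        models.remove(min(models, key=lambda m: m["p"]))     # drop weakest
    models.append(new)
    return "foreground"
```

A value close to an existing mean is absorbed as background; an outlying value spawns a new foreground model, mirroring the removal/addition rule in the text.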
And S300, solving the background updating model by adopting a foreground object extraction method based on dynamic time warping.
The Gaussian model is established on the basis of the brightness values of pixel points in an image. The improved method of extracting the model's foreground target by utilizing a time sequence is mainly based on whether the current brightness-value sequence of a pixel point matches a sequence from normal operation, namely a pattern in which the brightness values of the pixel point are arranged in a certain sequential relation. This patent provides a method of improving the foreground object extraction mechanism of the Gaussian mixture model based on the dynamic time warping distance, so as to reduce the rate of missed foreground target points in the original model. The dynamic time warping distance combines time planning with distance measurement to calculate the nonlinear similarity between two time sequences, and is commonly used in the field of pattern recognition. Let X = (x1, x2, …, xn) and Y = (y1, y2, …, ym) be two sequences, and construct a matrix A_{m×n} whose element a_ij is the distance between x_j and y_i:
a_ij = (x_j − y_i)²
Starting from the first element a_11 of A_{m×n}, displacement is made with the position operators δ1 = (1, 0), δ2 = (0, 1) and δ3 = (1, 1) until the last element a_mn is reached, and the traversed path L is recorded. The minimum bending path of the sequences X and Y is obtained by choosing, at each step, the smallest of the three reachable elements:
min{a_{i+1,j}, a_{i,j+1}, a_{i+1,j+1}} ∈ L, i = 1, 2, …, m; j = 1, 2, …, n
Using the elements l_n contained in L, the dynamic time warping distance DTW_XY between X and Y is determined as:
DTW_XY = (1/k) · Σ_{n=1..k} l_n
In the formula, k is the number of elements traversed by the path L;
the similarity calculation is carried out by adopting a Euclidean distance method in the traditional dynamic time bending distance, and a in a construction matrixijChanging to the following steps:
Figure RE-GDA0002802507460000091
finding the minimum curved path also translates to finding the maximum similarity path:
max{a_{i+1,j}, a_{i,j+1}, a_{i+1,j+1}} ∈ L, i = 1, 2, …, m; j = 1, 2, …, n
if the full sequence matching with the same length is carried out, only the matrix A is calculatedm×nA path composed of diagonal elements in (1) is sufficient.
The foreground object extraction method based on dynamic time warping comprises the following calculation processes:
(1) subsequence extraction
When the equipment operates normally, the motion trail of the brightness values of M pixel points on each target object (including foreground objects and the background) in the video over the whole sequence (N frames) is extracted, and the original sequence is intercepted with L frames (L < N) as the subsequence length, yielding M sets, each containing N − L + 1 subsequences of length L.
(2) Subsequence clustering
Clustering analysis is carried out on the subsequence sets obtained in step (1) by using the K-means algorithm. The distance in K-means is calculated according to the improved DTW method, the motion-trail subsequences of each pixel point are clustered into k clusters, and the center point of each cluster is recorded as a reference object for subsequence matching;
(3) background model building
And modeling the background image by applying a Gaussian mixture model, and preliminarily extracting foreground pixel points.
(4) Enhanced foreground extraction method
For a pixel point that the Gaussian mixture model judges to be background, a time sequence of length L is established and matched against the clustering model obtained in step (2). If the sequence is assigned to some cluster and the similarity is greater than the confidence threshold ρ, the pixel point is determined to belong to the background; if the pixel point is an outlier, it is determined to be a foreground pixel. The background model is then updated according to the result of this step.
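Steps (1)-(4) can be sketched end to end as follows. This is illustrative only: plain Euclidean distance between subsequences stands in for the patent's improved DTW similarity, and the threshold semantics (distance below ρ meaning similarity above the confidence threshold) are an assumption:

```python
# Slice a pixel's brightness trace into length-L subsequences, compare a new
# trace against the cluster centres from normal operation, and re-classify a
# "background" pixel as foreground when its trace matches no centre closely.
def subsequences(trace, L):
    return [trace[i:i + L] for i in range(len(trace) - L + 1)]

def seq_dist(a, b):
    # Euclidean distance, standing in for the improved DTW similarity
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(trace, centres, rho):
    """'background' if the trace lies within distance rho of some cluster
    centre; otherwise the pixel is an outlier and counts as foreground."""
    best = min(seq_dist(trace, c) for c in centres)
    return "background" if best <= rho else "foreground"
```

A trace that repeats a normal-operation pattern (such as periodic bushing sway) is absorbed as background, while an unseen pattern is flagged as foreground, which is the mechanism used to lower the missed-detection rate of the plain mixture model.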
Fig. 3 is a reference point motion trajectory extraction diagram of a normal operation state of a transformer substation according to an embodiment of the present invention.
Fig. 4 is a diagram of a variation locus of gray-scale values of feature points according to an embodiment of the present invention.
Fig. 5 is a diagram of a feature point clustering center sequence and a vibration motion sequence provided in the embodiment of the present invention.
Fig. 6 is a diagram of a foreground extraction result when the device provided by the embodiment of the present invention vibrates.
Fig. 7 is a diagram of a complete foreground object extraction result during vibration according to the embodiment of the present invention.
Referring to fig. 3, fig. 4, fig. 5, fig. 6 and fig. 7, a video (40 frames) of a normal operation of the substation is taken as a full sequence of a motion trajectory, fig. 3 is an image of a normal operation state of the device, and A, B, C points are respectively selected from an insulating sleeve, a background and a breaker in the sequence to extract a brightness change sequence, and the change trajectory is as shown in fig. 4.
A subsequence length of 8 frames is taken (N − L + 1 = 33 subsequences per point, 3 × 33 in total), and clustering yields k = 4 cluster centers for each point, 3 × 4 in total. Fig. 5 shows the brightness-value variation of the cluster center sequences when points A, B and C operate normally, and the brightness-value variation of points A, B and C when vibration is present in the video. As shown in part (d) of fig. 5, when vibration occurs, the change in the luminance values at points A and B is significant and periodic; the video shows that these two points belong to the region affected by the vibration of the bushing, while point C is not affected by the vibration and its luminance variation is substantially the same as in normal operation. Using the Gaussian mixture model alone, points A, B and C are all classified as background pixels, which does not accord with the actual situation. Comparing the reference-point brightness variation sequences with the cluster center samples of normal operation, points A and B do not belong to any cluster, while the trajectory of point C during the bushing vibration is clustered into the cluster with center C4; therefore points A and B should be part of the image foreground object when the bushing vibrates.
Fig. 6 shows the foreground object extracted from the image while the bushing vibrates. Only the bushing edge, where the change during vibration is obvious, is marked, because the pixel brightness of the bushing body changes little or not at all during the motion. To depict the foreground object during vibration more intuitively, the foreground pixels are clustered by brightness value, giving the more complete foreground object image shown in fig. 7. The video target detection results verify the practicability and effectiveness of the method.
The embodiment of the invention is intended to protect a time-series-based substation video target detection method, which comprises: constructing a substation moving target detection model, and calibrating the data content of a plurality of devices in the substation monitoring video; constructing a background update model based on a Gaussian mixture model, and extracting the periodically moving pixels in the substation monitoring video; and solving the background update model with a foreground object extraction method based on dynamic time warping. The technical scheme has the following effects:
by constructing the substation moving target detection model and the Gaussian-mixture-based background update model, the data content of the main equipment in the substation monitoring video is calibrated and the periodically moving pixels are extracted, so that substation video targets are identified accurately. This solves the problem that a conventional substation video monitoring system struggles to raise a timely alarm under abnormal operating conditions, enables the operating state of substation equipment to be monitored in real time, helps prevent substation faults, supports the intelligent development of the substation, improves substation operation and maintenance efficiency, and improves both the operational safety of the power system and the power supply reliability of the distribution network.
It is to be understood that the above-described embodiments merely illustrate or explain the principles of the invention and are not to be construed as limiting it. Any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention shall fall within its protection scope. Further, the appended claims are intended to cover all such variations and modifications as fall within their scope and boundaries, or the equivalents of such scope and boundaries.

Claims (7)

1. A transformer substation video target detection method based on time series is characterized by comprising the following steps:
constructing a transformer substation moving target detection model, and calibrating data contents of a plurality of devices in a transformer substation monitoring video;
constructing a background updating model based on a Gaussian mixture model, and extracting pixels of periodic motion in a monitoring video of the transformer substation;
and solving the background updating model by adopting a foreground object extraction method based on dynamic time warping.
2. The time-sequence-based substation video target detection method according to claim 1, wherein the building of the substation moving target detection model specifically comprises:
acquiring local extreme point position data of the moving target of the transformer substation in a scale space by adopting a characteristic point extraction method;
acquiring gradient distribution of neighborhood pixels;
and judging whether the feature points between the two images of the moving target of the transformer substation are matched or not according to the local extreme point position data and the gradient distribution of the neighborhood pixels.
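The gradient distribution of neighborhood pixels referred to in claim 2 can be illustrated with a small sketch (the bin count, patch size and function names are assumptions of this sketch, not part of the patent): a magnitude-weighted orientation histogram is the kind of statistic that can be compared between two images to decide whether feature points match.

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Gradient-orientation histogram of a neighbourhood patch, weighted by
    gradient magnitude and normalised to unit length."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

# Two patches with the same edge structure give the same normalised histogram.
patch = np.tile(np.arange(5.0), (5, 1))   # horizontal brightness ramp
h1 = orientation_histogram(patch)
h2 = orientation_histogram(patch * 2.0)   # same structure, doubled contrast
print(np.allclose(h1, h2))                # True: normalisation removes contrast
```

Because the histogram is normalised, matching it between two images is robust to overall brightness and contrast changes.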
3. The substation video target detection method based on the time sequence according to claim 2, wherein the step of obtaining the position data of the local extreme point of the substation moving target in the scale space by using a feature point extraction method comprises the following steps:
selecting a pixel point and its eight adjacent pixel points in a certain scale space of the Gaussian difference space as the comparison objects of a two-dimensional neighborhood;
selecting a 3 × 3 point set of the same region in each of the two scale spaces adjacent above and below, and forming a three-dimensional neighborhood space with the adjacent pixel points;
judging whether the pixel point is a maximum value point or a minimum value point in the three-dimensional neighborhood space;
and if the pixel point is the maximum value point or the minimum value point in the three-dimensional neighborhood space, taking the pixel point as a feature point in the corresponding scale space.
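The three-dimensional extremum test of claim 3 can be sketched as follows (a minimal illustration assuming a NumPy difference-of-Gaussians stack; the array layout and names are assumptions of this sketch, not the patent's):

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """Return True when pixel (y, x) at scale s of a difference-of-Gaussians
    stack is strictly greater or strictly smaller than all 26 neighbours in
    its 3x3x3 scale-space neighbourhood (8 neighbours in its own scale plus
    the 3x3 patches in the scales directly above and below)."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2].flatten()
    v = dog[s, y, x]
    others = np.delete(cube, 13)  # index 13 is the centre of the 27-cube
    return bool(v > others.max() or v < others.min())

# Toy stack: 3 scales of 5x5 pixels with one bright spot in the middle scale.
dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 1.0
print(is_scale_space_extremum(dog, 1, 2, 2))  # True: a maximum over the cube
print(is_scale_space_extremum(dog, 1, 2, 3))  # False: a flat neighbour
```

Requiring a strict inequality avoids flagging flat regions, where the centre value equals its neighbours, as extrema.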
4. The time-series-based substation video target detection method according to claim 1, wherein the building of the background update model based on the Gaussian mixture model specifically comprises:
establishing a plurality of Gaussian models for each pixel point of a frame in the transformer substation monitoring video to simulate a brightness density function of the pixel points;
recording the mean and variance of each Gaussian model;
judging whether the updated value of the pixel point belongs to any one of the Gaussian models;
if the updated value of the pixel point belongs to any one of the Gaussian models, judging that the pixel point belongs to a background point;
and if the updated value of the pixel point does not belong to any Gaussian model, judging that the pixel point belongs to a foreground point, and establishing a Gaussian model for the pixel point.
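The per-pixel mixture-of-Gaussians update of claim 4 can be sketched as follows (a minimal sketch; k_sigma, the learning rate alpha and the initial variance are illustrative choices, not values taken from the patent):

```python
import numpy as np

class PixelGMM:
    """Minimal per-pixel mixture of Gaussians: a new brightness value within
    k_sigma standard deviations of any component is a background point;
    otherwise it is a foreground point and a new component is created."""

    def __init__(self, first_value, k_sigma=2.5, alpha=0.05, init_var=15.0):
        self.means = [float(first_value)]
        self.vars = [init_var]
        self.k_sigma = k_sigma
        self.alpha = alpha
        self.init_var = init_var

    def update(self, value):
        """Return True if `value` is background, False if foreground."""
        for i, (mu, var) in enumerate(zip(self.means, self.vars)):
            if abs(value - mu) < self.k_sigma * np.sqrt(var):
                # matched: refresh this component's mean and variance
                self.means[i] = (1 - self.alpha) * mu + self.alpha * value
                self.vars[i] = (1 - self.alpha) * var + self.alpha * (value - mu) ** 2
                return True
        # no component matched: foreground, start a new Gaussian for it
        self.means.append(float(value))
        self.vars.append(self.init_var)
        return False

gmm = PixelGMM(100.0)
print(gmm.update(101.0))  # True  -- close to the existing model: background
print(gmm.update(200.0))  # False -- far from every model: foreground
```

One such model is maintained for every pixel of the frame, so a sudden brightness jump at a pixel is reported as foreground while slow drift is absorbed into the matched component.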
5. The substation video target detection method based on the time series according to claim 4, further comprising:
judging whether the Gaussian model generation probability in the background points is smaller than a first preset value or not;
and if the Gaussian model generation probability in the background points is smaller than the first preset value, removing the Gaussian model from the background updating model.
6. The substation video target detection method based on the time series according to claim 4, further comprising:
judging whether the hit probability of the Gaussian model of the foreground point is greater than a second preset value or not;
if the hit probability of the Gaussian models of the foreground points is greater than the second preset value, the foreground points are added as the background points, the corresponding Gaussian models are added into the background updating model, and the Gaussian model with the lowest hit probability in the background updating model is removed.
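The model maintenance of claims 5 and 6 can be sketched together as follows (the threshold values and weight representation are illustrative assumptions, not the patent's preset values):

```python
def refresh_models(bg_weights, fg_weight, low_thresh=0.05, high_thresh=0.6):
    """Drop background Gaussians whose generation probability fell below
    low_thresh (claim 5); if a foreground model's hit probability exceeds
    high_thresh, promote it into the background set and evict the weakest
    remaining background model (claim 6)."""
    bg = [w for w in bg_weights if w >= low_thresh]
    if fg_weight > high_thresh and bg:
        bg.remove(min(bg))       # evict the least probable background model
        bg.append(fg_weight)     # promote the foreground model
    return sorted(bg, reverse=True)

print(refresh_models([0.5, 0.3, 0.02], fg_weight=0.7))  # [0.7, 0.5]
print(refresh_models([0.5, 0.3, 0.02], fg_weight=0.1))  # [0.5, 0.3]
```

Pruning rarely-generating components and promoting frequently-hit foreground components lets the background model track scene changes such as parked objects becoming background.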
7. The substation video target detection method based on the time series according to claim 1, wherein the foreground object extraction method based on the dynamic time warping comprises the following steps:
extracting the motion trajectories of the brightness values of M pixel points of the plurality of moving targets over the whole sequence of the substation monitoring video, and intercepting the original sequences with L frames (L < N) as the subsequence length to obtain M subsequence sets, wherein each subsequence set comprises N − L + 1 subsequences of length L;
performing cluster analysis on the subsequence sets with the K-means algorithm, wherein the distance in the K-means algorithm is calculated by an improved DTW method; clustering the motion trajectory subsequences of each pixel point into K clusters, and recording the cluster center points as reference objects for subsequence matching;
modeling the background image by applying a Gaussian mixture model, and extracting foreground pixel points;
and for a pixel point judged as a background point by the Gaussian mixture model, establishing a time sequence of length L for the pixel point and matching it against the clustering model; if the sequence is assigned to any one of the clusters and the similarity is greater than a confidence threshold ρ, the pixel point is determined to belong to the background; if it is an outlier, the pixel point is determined to be a foreground point and the background update model is updated.
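The matching step of claim 7 can be sketched with a classic dynamic-time-warping distance (an illustrative sketch, not the patent's improved DTW; the claim's similarity threshold ρ is recast here as a distance threshold, and the toy cluster centers are invented for the example):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_track(track, centers, rho):
    """Match a length-L brightness track against the cluster centres:
    background if some centre lies within distance rho, otherwise an
    outlier, i.e. a foreground point."""
    best = min(dtw_distance(track, c) for c in centers)
    return "background" if best <= rho else "foreground"

centers = [[0, 1, 2, 3], [10, 10, 10, 10]]
print(classify_track([0, 1, 2, 4], centers, rho=2.0))    # background
print(classify_track([5, 6, 20, 30], centers, rho=2.0))  # foreground
```

Because DTW aligns sequences elastically in time, a periodic vibration track is still matched to its cluster center even when its phase is shifted by a few frames.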
CN202010666328.2A 2020-07-10 2020-07-10 Substation video target detection method based on time sequence Active CN112417937B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010666328.2A CN112417937B (en) 2020-07-10 2020-07-10 Substation video target detection method based on time sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010666328.2A CN112417937B (en) 2020-07-10 2020-07-10 Substation video target detection method based on time sequence

Publications (2)

Publication Number Publication Date
CN112417937A true CN112417937A (en) 2021-02-26
CN112417937B CN112417937B (en) 2023-05-16

Family

ID=74844155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010666328.2A Active CN112417937B (en) 2020-07-10 2020-07-10 Substation video target detection method based on time sequence

Country Status (1)

Country Link
CN (1) CN112417937B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130108102A1 (en) * 2011-10-31 2013-05-02 International Business Machines Corporation Abandoned Object Recognition Using Pedestrian Detection
CN103942850A (en) * 2014-04-24 2014-07-23 中国人民武装警察部队浙江省总队医院 Medical staff on-duty monitoring method based on video analysis and RFID (radio frequency identification) technology
CN108647644A (en) * 2018-05-11 2018-10-12 山东科技大学 Coal mine based on GMM characterizations blows out unsafe act identification and determination method
CN109145820A (en) * 2018-08-22 2019-01-04 四创科技有限公司 A kind of river location mask method based on video dynamic image
CN111158491A (en) * 2019-12-31 2020-05-15 苏州莱孚斯特电子科技有限公司 Gesture recognition man-machine interaction method applied to vehicle-mounted HUD

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANTONIO HERNÁNDEZ-VELA, ET AL: "Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D", PATTERN RECOGNITION LETTERS *
WANG LIANGFEN: "Target Detection and Trajectory Analysis under a Moving Camera", China Master's Theses Full-text Database, Information Science and Technology *
DONG WENMING ET AL: "Moving Object Detection Algorithm Based on Background Reconstruction", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148072A (en) * 2018-02-12 2019-08-20 庄龙飞 Sport course methods of marking and system
CN110148072B (en) * 2018-02-12 2023-05-02 庄龙飞 Sport course scoring method and system
CN117676136A (en) * 2023-11-16 2024-03-08 广州群接龙网络科技有限公司 Method and system for processing group-connected data
CN117474983A (en) * 2023-12-27 2024-01-30 广东力创信息技术有限公司 Early warning method based on light-vision linkage and related device
CN117474983B (en) * 2023-12-27 2024-03-12 广东力创信息技术有限公司 Early warning method based on light-vision linkage and related device

Also Published As

Publication number Publication date
CN112417937B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN112199993B (en) Method for identifying transformer substation insulator infrared image detection model in any direction based on artificial intelligence
CN112417937A (en) Transformer substation video target detection method based on time sequence
Dong et al. Deep metric learning-based for multi-target few-shot pavement distress classification
CN110765964B (en) Method for detecting abnormal behaviors in elevator car based on computer vision
CN108549846B (en) Pedestrian detection and statistics method combining motion characteristics and head-shoulder structure
CN108509859A (en) A kind of non-overlapping region pedestrian tracting method based on deep neural network
CN110929679B (en) GAN-based unsupervised self-adaptive pedestrian re-identification method
CN105976368A (en) Insulator positioning method
CN109409418B (en) Loop detection method based on bag-of-words model
CN113052876A (en) Video relay tracking method and system based on deep learning
CN106952293A (en) A kind of method for tracking target based on nonparametric on-line talking
CN113362374A (en) High-altitude parabolic detection method and system based on target tracking network
CN104809742A (en) Article safety detection method in complex scene
CN111723773A (en) Remnant detection method, device, electronic equipment and readable storage medium
CN109901553B (en) Heterogeneous industrial big data collaborative modeling process fault monitoring method based on multiple visual angles
CN105913011B (en) Human body anomaly detection method based on parameter self-regulation neural network
CN112733770A (en) Regional intrusion monitoring method and device
CN112183164A (en) Pedestrian attribute identification method under surveillance video
CN111860097B (en) Abnormal behavior detection method based on fuzzy theory
ELBAŞI et al. Control charts approach for scenario recognition in video sequences
CN111160101B (en) Video personnel tracking and counting method based on artificial intelligence
CN113076963A (en) Image recognition method and device and computer readable storage medium
CN113505812A (en) High-voltage circuit breaker track action identification method based on double-current convolutional network
CN113658223A (en) Multi-pedestrian detection and tracking method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant