CN113822240B - Method and device for extracting abnormal behaviors from power field operation video data - Google Patents


Publication number
CN113822240B
CN113822240B (application CN202111382210.8A)
Authority
CN
China
Prior art date
Legal status: Active
Application number
CN202111382210.8A
Other languages
Chinese (zh)
Other versions
CN113822240A
Inventor
樊志伟
王天师
利雅琳
谭伟
张春梅
高杨
李明
刘惠华
吴金珠
熊伟
Current Assignee
Zhongshan Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Zhongshan Power Supply Bureau of Guangdong Power Grid Co Ltd
Application filed by Zhongshan Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202111382210.8A
Publication of CN113822240A
Application granted
Publication of CN113822240B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning

Abstract

The invention provides a method and a device for extracting abnormal behaviors from power field operation video data. Video image data of the operation field are acquired and preprocessed, a moving target area is extracted from the preprocessed operation video image, and abnormal behavior recognition is then performed on the moving target area with a pre-trained fusion model based on Catboost and Lasso. The invention analyzes monitoring video data in real time, monitors the people in the working area, and tracks the motion state of workers inspecting and maintaining the electrified area; if an abnormal condition such as an electric shock occurs, the abnormal behavior is identified in real time and an alarm is raised promptly, thereby protecting the personal safety of the workers.

Description

Method and device for extracting abnormal behaviors from power field operation video data
Technical Field
The invention belongs to the technical field of electric power system safety protection, and particularly relates to a method and a device for extracting abnormal behaviors from electric power field operation video data.
Background
With the continuing expansion of the power grid, power operation activities have become more frequent, and the traditional mode of manual on-site supervision and after-the-fact review can hardly meet the lean, modern management requirements of power supply enterprises under the new situation. Electric power enterprises therefore urgently need a visualization and intelligent management-and-control platform for the power operation field, enabling more efficient and intelligent cooperative supervision and management of the operation site.
Video monitoring technology is widely used in the power industry, but it still remains at the traditional stage of watching videos manually. Manual video monitoring has several disadvantages: there are many monitoring feeds carrying a large amount of information, most of the video data is normal, and monitoring personnel identifying events by eye are not efficient. Because the feeds are numerous and shifts are long, monitoring personnel are prone to visual fatigue, key information is often missed, and emergencies cannot be reported and handled in time, so the best moment for crisis handling is lost.
Disclosure of Invention
In view of this, the present invention aims to solve the problem that manual video monitoring may miss key information, so that emergencies cannot be reported and handled in time.
In order to solve the technical problems, the invention provides the following technical scheme:
in a first aspect, the present invention provides a method for extracting abnormal behavior from power field operation video data, including:
acquiring a job video image and preprocessing the job video image;
extracting a moving target area from the preprocessed operation video image;
and carrying out abnormal behavior recognition on the moving target region by using a pre-trained fusion model based on Catboost and Lasso.
Further, the preprocessing the job video image specifically includes:
and performing noise reduction on the job video image by using a median filtering method.
Further, the extracting of the moving target region from the preprocessed job video image specifically includes:
respectively establishing a background model and an optical flow field for pixel points in the preprocessed operation video image, and respectively extracting a foreground part in the operation video image by using the background model and the optical flow field;
and fusing the foreground part obtained by the background model and the foreground part obtained by the optical flow field to obtain a moving target area in the operation video image.
Further, establishing a background model for the pixel points in the preprocessed job video image, and extracting the foreground part in the job video image by using the background model specifically includes:
establishing a background model of the pixel points, wherein the expression of the background model is as follows:

M_t(x, y) = f_t(x_i, y_i),  (x_i, y_i) ∈ N_G(x, y)

where t denotes the frame index of the job video image sequence, N_G(x, y) denotes the neighborhood of pixel point (x, y), (x_i, y_i) is a randomly selected pixel point in N_G(x, y), f_t(x_i, y_i) is the pixel value of (x_i, y_i), and M_t(x, y) denotes the background model of pixel point (x, y);
circularly utilizing the background model, and generating a background model sample set with the same sample number for each pixel point in the preprocessed operation video image;
respectively judging the background model sample set of each pixel point, and if the number of samples with pixel values larger than a first foreground threshold value in the background model sample set is smaller than a preset threshold value, marking the current pixel point as a foreground pixel point;
and obtaining a foreground part in the operation video image according to the marked foreground pixel points.
Further, establishing an optical flow field for the pixel points in the preprocessed operation video image, and extracting the foreground part in the operation video image by using the optical flow field specifically comprises:
calculating the optical flow values of pixel points in the preprocessed operation video image by using an optical flow method, and obtaining an optical flow field of the operation video image according to the optical flow values;
and performing threshold segmentation on the optical flow field of the operation video image through a second foreground threshold to obtain a foreground part in the operation video image.
Further, the pre-training process of the fusion model based on the Catboost and the Lasso specifically includes:
collecting operator behavior picture training samples in the electric power operation site, and marking the training samples;
performing feature extraction on the pictures in the marked training samples by using a PCA-SURF feature extraction method, and compressing the extracted features into 25 dimensions;
taking 25-dimensional features as input of a Catboost model, setting the iteration times in the operation parameters to be 30 times, and setting the maximum depth to be 3;
performing iterative operation according to the operation parameters to obtain 480-dimensional characteristics;
and taking the 480-dimensional characteristics as input of the Lasso model, and performing model hyper-parameter tuning on the Lasso model through cross validation to obtain a fusion model based on the Catboost and the Lasso.
Further, the feature extraction of the image in the marked training sample by using the PCA-SURF feature extraction method specifically includes:
distinguishing and solving image pixel points by using a Hessian matrix, and classifying the pixel points according to a judgment result so as to determine the positions of N characteristic points in the image;
counting Haar wavelet characteristics in the characteristic point region range, and calculating the sum of horizontal and vertical Haar wavelet characteristics of all characteristic points in a sector with a preset radius so as to determine the main direction of the characteristic points;
taking a 16 x 16 window by taking the characteristic point as a center, and calculating the gradient amplitude and the gradient direction of each pixel in the window;
performing weighting operation on the gradient amplitude and the gradient direction of each pixel by using a Gaussian window, calculating gradient direction histograms in 8 directions, and drawing an accumulated value of each gradient direction according to the gradient direction histograms so as to obtain a 512-dimensional descriptor vector;
constructing an N×512 descriptor matrix from the descriptor vectors of the N feature points, and calculating the 512×512 covariance matrix of the descriptor dimensions;
calculating the eigenvectors corresponding to the first k largest eigenvalues of the 512×512 covariance matrix, so as to obtain a 512×k projection matrix;
and multiplying the N×512 descriptor matrix by the 512×k projection matrix to obtain an N×k reduced-dimension descriptor matrix.
In a second aspect, the present invention provides an apparatus for extracting abnormal behavior from video data of power field operation, including:
the preprocessing module is used for acquiring a job video image and preprocessing the job video image;
the motion region detection module is used for extracting a motion target region from the preprocessed operation video image;
and the abnormal behavior identification module is used for identifying abnormal behaviors of the moving target area by utilizing a pre-trained fusion model based on Catboost and Lasso.
In a third aspect, the present invention provides an apparatus for extracting abnormal behavior from power field operation video data, the apparatus comprising a processor and a memory:
the memory is used for storing the computer program and sending the instructions of the computer program to the processor;
the processor executes a method of extracting abnormal behavior from the electric power field operation video data according to the first aspect.
In a fourth aspect, the present invention provides a computer storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing a method for extracting abnormal behavior from power field operation video data according to the first aspect.
In summary, the invention provides a method and a device for extracting abnormal behaviors from power field operation video data. Video image data of the operation field are acquired and preprocessed, a moving target area is extracted from the preprocessed operation video image, and abnormal behavior recognition is then performed on the moving target area with a pre-trained fusion model based on Catboost and Lasso. The invention analyzes monitoring video data in real time, monitors the people in the working area, and tracks the motion state of workers inspecting and maintaining the electrified area; if an abnormal condition such as an electric shock occurs, the abnormal behavior is identified in real time and an alarm is raised promptly, thereby protecting the personal safety of the workers.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a method for extracting abnormal behavior from power field operation video data according to an embodiment of the present invention;
fig. 2 is a diagram of a fusion model based on Catboost and Lasso according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With the continuing expansion of the power grid, power operation activities have become more frequent, and the traditional mode of manual on-site supervision and after-the-fact review can hardly meet the lean, modern management requirements of power supply enterprises under the new situation. Electric power enterprises therefore urgently need a visualization and intelligent management-and-control platform for the power operation field, enabling more efficient and intelligent cooperative supervision and management of the operation site.
Video monitoring technology is widely used in the power industry, but it still remains at the traditional stage of watching videos manually. Manual video monitoring has several disadvantages: there are many monitoring feeds carrying a large amount of information, most of the video data is normal, and monitoring personnel identifying events by eye are not efficient. Because the feeds are numerous and shifts are long, monitoring personnel are prone to visual fatigue, key information is often missed, and emergencies cannot be reported and handled in time, so the best moment for crisis handling is lost.
Based on the above, the invention provides a method and a device for extracting abnormal behaviors from power field operation video data.
The following is a detailed description of an embodiment of a method for extracting abnormal behavior from power field operation video data according to the present invention.
Referring to fig. 1, the present embodiment provides a method for extracting abnormal behavior from video data of power field operation, including:
s100: acquiring a job video image and preprocessing the job video image;
in addition, a video image captured by a monitoring camera is often accompanied by noise when it is captured and transmitted, and finally displayed. In the embodiment, a median filtering method is adopted to preprocess the video image, so that the noise reduction effect is achieved. As an example, a 3 × 3 median filter is adopted, and for a pixel I at a certain point in the original image, the median filter takes a median obtained by statistically sorting eight pixels in the central region of the point as a response of the filtered I point.
S200: extracting a moving target area from the preprocessed operation video image;
it should be noted that, in this embodiment, extracting a moving target region from a preprocessed job video image specifically includes:
s201: respectively establishing a background model and an optical flow field for pixel points in the preprocessed operation video image, and respectively extracting a foreground part in the operation video image by using the background model and the optical flow field;
it should be noted that in this embodiment, two methods are adopted for extracting the foreground portion from the job video image, including establishing a background model for the pixel points in the preprocessed job video image, and extracting the foreground portion from the job video image by using the background model. The method mainly comprises the steps of establishing a background model for each pixel point in an image, wherein the expression of the background model is as follows:
Figure 353752DEST_PATH_IMAGE001
where t represents a sequence of job video image frames,
Figure 298574DEST_PATH_IMAGE002
the neighborhood of the pixel point (x, y) is represented, and (xi, yi) is represented
Figure 962642DEST_PATH_IMAGE002
One of the random pixel points of (1),
Figure 905191DEST_PATH_IMAGE003
is the pixel value of the pixel point (xi, yi),
Figure 405442DEST_PATH_IMAGE004
representing a pixelBackground model of point (x, y).
Then, the background model is applied cyclically to generate a background model sample set with the same number of samples for each pixel point in the preprocessed job video image. For example, if the sample number is 10, the background model is run 10 times for each pixel point and the resulting 10 pixel values are stored in that pixel's sample set.
The background model sample set of each pixel point is then judged: if the number of samples whose pixel values are larger than a first foreground threshold is smaller than a preset threshold, the current pixel point is marked as a foreground pixel point. That is, with 10 pixel values stored in the sample set, if at least 7 of them reach the preset pixel threshold, the corresponding pixel point is considered a background pixel point; otherwise it is considered a foreground pixel point. Finally, the foreground portion of the job video image is obtained from the marked foreground pixel points.
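The sample-set decision rule just described can be sketched as follows; the threshold values (7 of 10 samples, first foreground threshold) mirror the example above, while the synthetic sample data is purely illustrative:

```python
import numpy as np

def mark_foreground(sample_sets: np.ndarray, fg_threshold: float,
                    min_bg_samples: int) -> np.ndarray:
    """sample_sets: (n_samples, H, W) pixel values drawn from each pixel's
    background model.  A pixel counts as background when at least
    `min_bg_samples` of its samples exceed `fg_threshold`; otherwise
    it is marked as foreground."""
    counts = (sample_sets > fg_threshold).sum(axis=0)
    return counts < min_bg_samples  # True = foreground pixel

rng = np.random.default_rng(0)
samples = rng.integers(0, 256, size=(10, 4, 4))   # 10 samples per pixel
samples[:, 0, 0] = 0      # every sample dark  -> marked as foreground
samples[:, 1, 1] = 200    # every sample bright -> marked as background
mask = mark_foreground(samples, fg_threshold=100, min_bg_samples=7)
print(mask[0, 0], mask[1, 1])  # True False
```

The vectorized count over the sample axis makes the per-pixel judgment a single NumPy reduction rather than a nested loop over the frame.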
The second method establishes an optical flow field for the pixel points in the preprocessed job video image and extracts the foreground portion using the optical flow field. Considering that the illumination conditions of an operation-site monitoring scene are relatively stable, this embodiment selects an optical flow method with high detection precision. The optical flow value of each point is calculated by the optical flow method to obtain the optical flow field, and a threshold is then applied to segment the optical flow field, separating foreground from background to obtain the moving target region.
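A minimal sketch of the threshold segmentation step, assuming the optical flow field (u, v) has already been computed by some optical flow method; the synthetic flow field and the value of the second foreground threshold here are illustrative:

```python
import numpy as np

def flow_foreground(u: np.ndarray, v: np.ndarray,
                    flow_threshold: float) -> np.ndarray:
    """Segment moving pixels by thresholding the optical-flow magnitude."""
    magnitude = np.hypot(u, v)          # per-pixel flow speed
    return magnitude > flow_threshold   # True = moving (foreground) pixel

# Synthetic flow field: a 2x2 patch moves ~3 px/frame, the rest is static.
u = np.zeros((6, 6))
v = np.zeros((6, 6))
u[2:4, 2:4] = 3.0
mask = flow_foreground(u, v, flow_threshold=1.0)
print(mask.sum())  # 4 moving pixels
```

In practice the (u, v) field would come from a dense optical flow routine applied to consecutive frames; only the thresholding step from the text is shown here.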
S202: fusing the foreground part obtained by the background model and the foreground part obtained by the optical flow field to obtain a moving target area in the operation video image;
It should be noted that the foreground portions obtained by the above two methods are fused using an existing image fusion technique to obtain a more accurate foreground portion, which is the moving target region in this embodiment.
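The text leaves the fusion step to "existing image fusion technology" without naming one; a minimal assumed realization is a logical union or intersection of the two binary masks:

```python
import numpy as np

def fuse_foregrounds(bg_mask: np.ndarray, flow_mask: np.ndarray,
                     mode: str = "union") -> np.ndarray:
    """Fuse the background-model mask and the optical-flow mask.
    'union' keeps pixels flagged by either detector (higher recall);
    'intersection' keeps only pixels both agree on (fewer false alarms)."""
    if mode == "union":
        return bg_mask | flow_mask
    return bg_mask & flow_mask

a = np.array([[True, False], [True, False]])   # background-model mask
b = np.array([[True, True], [False, False]])   # optical-flow mask
print(fuse_foregrounds(a, b).sum())                       # 3
print(fuse_foregrounds(a, b, mode="intersection").sum())  # 1
```

Which mode is preferable depends on whether missed detections or false alarms are more costly in the monitoring scenario; the patent does not specify.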
By extracting the foreground portion of the job video image and identifying the moving target region, this embodiment in effect monitors the personnel on the operation site in real time and tracks their movement.
S300: and carrying out abnormal behavior recognition on the moving target region by using a pre-trained fusion model based on Catboost and Lasso.
It should be noted that the working states of staff on an electric power operation site are mainly walking inspection and squatting maintenance, so walking and squatting are regarded as normal behaviors. When an electric shock occurs, the person may suddenly fall, lose consciousness, or drop from a support, so falling is regarded as abnormal behavior. Most mainstream image recognition algorithms currently use complex network structures such as neural networks or deep learning, but such complex networks require a sufficiently large number of samples to avoid overfitting during training. In the present power operation image recognition scenario it is difficult to collect enough samples of the abnormal classes, which results in a relatively small sample set. In consideration of these circumstances, this embodiment builds the model with traditional machine learning algorithms; however, these algorithms cannot process picture files directly, so a feature extraction technique is needed to convert the pictures into numerical features. Therefore, this embodiment proposes a video abnormal behavior extraction method based on a PCA-SURF feature extraction method and a Catboost + Lasso model fusion algorithm, comprising two parts: feature extraction and model construction.
Feature extraction mainly adopts a PCA-SURF-based method. After the moving target region is obtained in step S202, features are extracted from it, laying the foundation for subsequent modeling. Common image feature extraction methods include HOG, SIFT and SURF. Since the usage scenario of this method is real-time monitoring with high real-time requirements, the SURF method with short running time is adopted; SURF improves on the running performance of the SIFT extraction method and runs 3-4 times faster than SIFT. To further reduce the feature dimension and improve the runtime performance of the final model, this embodiment improves the existing SURF by adding a PCA step that reduces the dimension of the descriptor vectors in the SURF algorithm, thereby improving matching efficiency. The proposed PCA-SURF feature extraction method is implemented as follows:
1) Feature detection: the SURF algorithm is used for feature detection, i.e., image pixel points are evaluated with the Hessian matrix and classified according to the judgment result to determine the positions of the feature points of the image.
2) Principal direction determination: to ensure rotation invariance, the Haar wavelet features within the feature point's region are counted, and the sums of the horizontal and vertical Haar wavelet responses of all points within a sector of radius 6s (s is the scale factor) are calculated to determine the principal direction of the feature point.
3) Feature point descriptor generation: after the principal direction of the feature point is determined, a 16×16 window is taken centered on the feature point. The gradient magnitude and gradient direction of each pixel in the window are calculated and then weighted with a Gaussian window. A gradient direction histogram over 8 directions is then computed on the 16×16 block, and the accumulated value of each gradient direction is recorded, yielding a descriptor vector of 16 × 16 × 2 = 512 dimensions;
4) Assuming there are N feature points, all feature point descriptor vectors form an N×512 descriptor matrix. The 512×512 covariance matrix of these descriptor dimensions is calculated;
5) The eigenvectors corresponding to the first k largest eigenvalues of the covariance matrix are calculated; these k eigenvectors form a 512×k projection matrix;
6) The N×512 descriptor matrix is multiplied by the 512×k projection matrix to obtain the N×k reduced-dimension descriptor matrix, i.e., the matrix formed by the reduced descriptor vectors. The descriptor vectors of the N feature points are then all k-dimensional. In this embodiment, k = 25.
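Steps 4) to 6) can be sketched in NumPy as follows. Note that standard PCA takes the covariance over the 512 descriptor dimensions, so the covariance matrix is 512×512 and its top-k eigenvectors form the 512×k projection matrix; the random descriptors below are purely illustrative stand-ins for real SURF output:

```python
import numpy as np

def pca_reduce_descriptors(descriptors: np.ndarray, k: int) -> np.ndarray:
    """Project an (N, 512) descriptor matrix down to (N, k) with PCA."""
    centered = descriptors - descriptors.mean(axis=0)
    cov = np.cov(centered, rowvar=False)        # (512, 512) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh: ascending eigenvalues
    top_k = np.argsort(eigvals)[::-1][:k]       # indices of k largest
    projection = eigvecs[:, top_k]              # (512, k) projection matrix
    return centered @ projection                # (N, k) reduced descriptors

rng = np.random.default_rng(1)
desc = rng.normal(size=(40, 512))   # 40 feature points, 512-dim descriptors
reduced = pca_reduce_descriptors(desc, k=25)
print(reduced.shape)  # (40, 25)
```

`eigh` is used because the covariance matrix is symmetric; in a larger system one would typically fit the projection on a training corpus of descriptors and reuse it at match time.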
The running time of the PCA-SURF feature matching algorithm provided by this embodiment was compared with those of the SIFT and SURF algorithms: 6 groups of images were captured from the monitoring video for testing, several experiments were performed on each group, and the average running times are shown in the following table:
[Table: average running times of the SIFT, SURF and PCA-SURF algorithms; table image not reproduced in this text.]
therefore, the processing speed of the image can be effectively improved by adopting the PCA-SURF feature extraction method.
The second part is model construction: an abnormal behavior recognition model is built with a fusion model based on Catboost and Lasso. Because the number of samples is limited, the method uses traditional machine learning to construct the model. Traditional machine learning models mainly include logistic regression, decision trees, support vector machines (SVM) and ensemble algorithms. After comparing the performance of these models, this embodiment adopts an algorithm fusing the Catboost and Lasso models to construct the abnormal behavior recognition model. The basic principle is to construct combined features through Catboost and then train on the combined features through Lasso, so as to classify the prediction samples.
Referring to FIG. 2, FIG. 2 is a diagram of the fusion model of Catboost and Lasso. In the figure, x is the input feature; training on x, Catboost performs node splitting and produces two trees with 5 leaf nodes in total: the left tree has 3 leaf nodes and the right tree has 2. For each tree, Catboost maps the input x to one leaf node. Suppose x falls on the first leaf of the left tree and the second leaf of the right tree; the one-hot code for the left tree is then [1,0,0] and that for the right tree is [0,1], and the final feature is the concatenation of the two one-hot codes, [1,0,0,0,1]. This one-hot code is input to the linear classifier as the transformed feature and finally fused through the Lasso model to obtain the classification model.
When feature transformation is performed, the number of trees in the Catboost model equals the number of combined features. The vector lengths of the combined features are unequal: each length depends on the number of leaf nodes of its tree. For example, if 50 trees are obtained after training, 50 combined features are obtained.
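The leaf-to-feature transformation of Fig. 2 can be reproduced directly; this sketch concatenates the per-tree one-hot codes for the example where the sample lands on the first leaf of a 3-leaf tree and the second leaf of a 2-leaf tree:

```python
import numpy as np

def combine_leaf_onehots(leaf_indices, leaves_per_tree):
    """Concatenate per-tree one-hot encodings of the leaf each sample
    lands on, producing the combined feature vector fed to the
    linear (Lasso) stage."""
    parts = []
    for idx, n_leaves in zip(leaf_indices, leaves_per_tree):
        onehot = np.zeros(n_leaves, dtype=int)
        onehot[idx] = 1
        parts.append(onehot)
    return np.concatenate(parts)

# Fig. 2 example: left tree has 3 leaves, right tree has 2; the sample
# falls on the 1st leaf of the left tree and the 2nd leaf of the right.
feature = combine_leaf_onehots(leaf_indices=[0, 1], leaves_per_tree=[3, 2])
print(feature.tolist())  # [1, 0, 0, 0, 1]
```

Each tree thus contributes one sparse block whose width is its leaf count, which is why the combined feature vectors have unequal per-tree lengths.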
Based on the above, the pre-training of the fusion model based on Catboost and Lasso used for abnormal behavior recognition of the moving target region specifically comprises the following steps:
1) Collect training samples of operator behavior pictures on the electric power operation site and label the training samples. Each label is either normal or abnormal, and the images are scaled to a uniform size; this embodiment selects a picture size of 128 × 128.
2) Perform feature extraction on the pictures in the marked training samples using the PCA-SURF feature extraction method, and compress the extracted features into 25 dimensions.
3) Take the 25-dimensional features as the input of the Catboost model, setting the number of iterations n_estimators = 30 and the maximum depth max_depth = 3 in the operation parameters. These values are the optimal model hyper-parameters obtained through parameter tuning. Other optimal operation parameters include loss_function set to Logloss and learning rate learning_rate = 0.15.
4) Perform iterative operation according to the operation parameters to obtain 480-dimensional features. Since the number of iterations n_estimators was set to 30 in the previous step, the feature combination consists of 30 features. One-hot encoding is carried out on each of the 30 features, which requires knowing the maximum possible number of leaves of each tree; this depends on the maximum tree depth of Catboost, namely 2^(max_depth+1) possible values. With max_depth = 3 from the previous step, the feature dimension after one-hot encoding is 30 × 2^(3+1) = 480.
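The dimension arithmetic above, using the text's own formula for the number of possible leaf values per tree, checks out as:

```python
# Dimension check for the one-hot feature expansion described in step 4).
n_estimators = 30                          # number of boosting iterations
max_depth = 3                              # maximum tree depth
leaves_per_tree = 2 ** (max_depth + 1)     # per the text's formula: 16
total_dim = n_estimators * leaves_per_tree
print(total_dim)  # 480
```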
5) Take the 480-dimensional features as the input of the Lasso model, and perform model hyper-parameter tuning on the Lasso model through cross validation to obtain the fusion model based on Catboost and Lasso.
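The training pipeline of steps 3) to 5) can be sketched with scikit-learn. Since Catboost is not assumed available here, `GradientBoostingClassifier` stands in for the boosted-tree stage (its `apply()` method returns the leaf index of each sample in every tree), and an L1-regularized logistic regression plays the role of the Lasso stage for classification; the synthetic 25-dimensional features replace the PCA-SURF features. All of these substitutions are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 25))            # stand-in for 25-dim PCA-SURF features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 0 = normal, 1 = abnormal (synthetic)

# Boosted-tree stage (stand-in for Catboost): 30 iterations, max depth 3.
gbm = GradientBoostingClassifier(n_estimators=30, max_depth=3,
                                 learning_rate=0.15, random_state=0)
gbm.fit(X, y)

# Each sample's leaf index in every tree, one-hot encoded into sparse features.
leaves = gbm.apply(X).reshape(X.shape[0], -1)   # (200, 30) leaf indices
encoder = OneHotEncoder(handle_unknown="ignore")
leaf_onehot = encoder.fit_transform(leaves)

# L1-regularised linear classifier plays the role of the Lasso stage.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(leaf_onehot, y)
print(clf.score(leaf_onehot, y))
```

With real data, the regularization strength of the linear stage would be chosen by cross validation, mirroring the hyper-parameter tuning described in step 5).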
After training on actual data, the performance test comparison for each model is shown in the following table. It can be seen that the Catboost + Lasso fusion model performs best on the accuracy-class indicators and also performs well on training time and recognition time; all considered, it is the optimal model algorithm for the current application scenario.
[Table: performance comparison of the candidate models; table image not reproduced in this text.]
This embodiment provides a method for extracting abnormal behaviors from power field operation video data, constructing an intelligent video monitoring system for the power operation site through the PCA-SURF feature extraction method and the Catboost + Lasso model fusion algorithm. The system can detect workers, grasp the motion state of every moving worker in the operation area in real time, and judge their behavior with high reliability. It can replace the traditional mode in which video is merely recorded and stored and must be watched manually day and night, realizing automated, unmanned and intelligent system monitoring and behavior recognition.
The above is a detailed description of an embodiment of a method for extracting abnormal behavior from power field operation video data according to the present invention, and the following is a detailed description of an embodiment of an apparatus for extracting abnormal behavior from power field operation video data according to the present invention.
The embodiment provides a device for extracting abnormal behaviors from power field operation video data, which comprises:
the preprocessing module is used for acquiring a job video image and preprocessing the job video image;
the motion region detection module is used for extracting a motion target region from the preprocessed operation video image;
and the abnormal behavior identification module is used for identifying abnormal behaviors of the moving target area by utilizing a pre-trained fusion model based on Catboost and Lasso.
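The three modules above can be sketched structurally as follows. This is a hypothetical illustration only: the patent does not prescribe an API, the class and method names here are invented, and each module body is a stub standing in for the processing the text describes (median filtering, background-model/optical-flow fusion, and Catboost + Lasso recognition respectively).

```python
# Hypothetical structural sketch of the apparatus; all names are invented
# and each module is a stub for the processing described in the text.

class Preprocessor:
    def run(self, frame):
        return ("denoised", frame)   # stands in for median-filter denoising

class MotionRegionDetector:
    def run(self, frame):
        return ("region", frame)     # stands in for background model + optical flow fusion

class AbnormalBehaviorRecognizer:
    def run(self, region):
        return "normal"              # stands in for the Catboost + Lasso fusion model

class ExtractionDevice:
    """Chains the three modules in the order the embodiment describes."""
    def __init__(self):
        self.pre = Preprocessor()
        self.detect = MotionRegionDetector()
        self.recognize = AbnormalBehaviorRecognizer()

    def process(self, frame):
        frame = self.pre.run(frame)
        region = self.detect.run(frame)
        return self.recognize.run(region)

print(ExtractionDevice().process("frame0"))  # prints "normal"
```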
It should be noted that the apparatus of this embodiment is used to implement the method for extracting abnormal behaviors from power field operation video data in the foregoing embodiment; the specific configuration of each module is based on implementing that method and is not described again here.
This embodiment provides an apparatus for extracting abnormal behaviors from power field operation video data, which is based on the PCA-SURF feature extraction method and the Catboost + Lasso model fusion algorithm. It detects workers as targets, grasps the motion state of each moving worker in the working place in real time, and judges their behavior; it has high reliability, can replace the traditional mode in which video is merely recorded and stored and must be monitored manually day and night, and realizes automated, unmanned and intelligent system monitoring and behavior recognition.
The above is a detailed description of an embodiment of an apparatus for extracting abnormal behavior from power field operation video data according to the present invention, and the following is a detailed description of an embodiment of an apparatus for extracting abnormal behavior from power field operation video data according to the present invention.
This embodiment provides an apparatus for extracting abnormal behaviors from power field operation video data, the apparatus comprising a processor and a memory, wherein:
the memory is used for storing the computer program and sending the instructions of the computer program to the processor;
the processor executes a method for extracting abnormal behaviors from the video data of the power field operation according to the instructions of the computer program.
The above is a detailed description of an embodiment of an apparatus for extracting abnormal behavior from power field operation video data according to the present invention, and the following is a detailed description of an embodiment of a computer storage medium according to the present invention.
The present embodiment provides a computer storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for extracting abnormal behavior from electric power field operation video data according to the foregoing embodiments is implemented.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A method for extracting abnormal behaviors from power field operation video data is characterized by comprising the following steps:
acquiring a job video image and preprocessing the job video image;
extracting a moving target area from the preprocessed job video image, which specifically comprises the following steps:
respectively establishing a background model and an optical flow field for pixel points in the preprocessed operation video image, and respectively extracting a foreground part in the operation video image by using the background model and the optical flow field;
fusing the foreground part obtained by the background model and the foreground part obtained by the optical flow field to obtain a moving target area in the operation video image;
and carrying out abnormal behavior identification on the moving target region by utilizing a pre-trained fusion model based on Catboost and Lasso.
2. The method for extracting abnormal behavior from video data of power field operation according to claim 1, wherein the preprocessing the operation video image is specifically:
and carrying out noise reduction processing on the operation video image by using a median filtering method.
3. The method according to claim 1, wherein the establishing a background model for the pixel points in the pre-processed work video image and extracting the foreground portion from the work video image by using the background model specifically comprises:
establishing a background model of a pixel point, wherein the expression of the background model is as follows:

M_t(x, y) = { p_t(x_i, y_i) | (x_i, y_i) ∈ N_G(x, y) }

where t represents the frame index in the job video image sequence, N_G(x, y) represents the neighborhood of the pixel (x, y), (x_i, y_i) represents one random pixel point in N_G(x, y), p_t(x_i, y_i) is the pixel value of the pixel point (x_i, y_i), and M_t(x, y) represents the background model of the pixel point (x, y);
circularly utilizing the background model, and generating a background model sample set with the same sample number for each pixel point in the preprocessed operation video image;
respectively judging a background model sample set of each pixel point, and if the number of samples with pixel values larger than a first foreground threshold value in the background model sample set is smaller than a preset threshold value, marking the current pixel point as a foreground pixel point;
and obtaining a foreground part in the operation video image according to the marked foreground pixel points.
4. The method for extracting abnormal behaviors from power field operation video data according to claim 1, wherein the step of establishing an optical flow field for pixel points in the preprocessed operation video image and extracting a foreground part in the operation video image by using the optical flow field specifically comprises the steps of:
calculating the optical flow values of pixel points in the preprocessed operation video image by using an optical flow method, and obtaining the optical flow field of the operation video image according to the optical flow values;
and performing threshold segmentation on the optical flow field of the operation video image through a second foreground threshold to obtain a foreground part in the operation video image.
5. The method for extracting abnormal behavior from video data of power field operation according to claim 1, wherein the pre-training process based on the fusion model of Catboost and Lasso specifically comprises:
collecting training samples of behavior pictures of operators in an electric power operation field, and marking the training samples;
performing feature extraction on the marked pictures in the training sample by using a PCA-SURF feature extraction method, and compressing the extracted features into 25 dimensions;
taking 25-dimensional features as input of a Catboost model, setting the iteration times in the operation parameters to be 30 times, and setting the maximum depth to be 3;
performing iterative operation according to the operation parameters to obtain 480-dimensional characteristics;
and taking the 480-dimensional features as input of a Lasso model, and performing model hyper-parameter tuning on the Lasso model through cross validation to obtain a fusion model based on the Catboost and the Lasso.
6. The method according to claim 5, wherein the feature extraction of the marked pictures in the training samples by using the PCA-SURF feature extraction method specifically comprises:
performing discriminant computation on image pixel points by using a Hessian matrix, and classifying the pixel points according to the discriminant result, so as to determine the positions of N feature points in the image, where N is the number of feature points;
counting Haar wavelet characteristics in the characteristic point region range, and calculating the sum of horizontal and vertical Haar wavelet characteristics of all characteristic points in a sector with a preset radius so as to determine the main direction of the characteristic points;
taking a 16 x 16 window by taking the characteristic point as a center, and calculating the gradient amplitude and the gradient direction of each pixel in the window;
performing weighting operation on the gradient amplitude and the gradient direction of each pixel by using a Gaussian window, calculating gradient direction histograms in 8 directions, and drawing an accumulated value of each gradient direction according to the gradient direction histograms so as to obtain a 512-dimensional descriptor vector;
constructing an N × 512 descriptor matrix of the N feature points from the descriptor vector corresponding to each feature point, and calculating the 512 × 512 covariance matrix of the N × 512 descriptor matrix;
calculating the eigenvectors corresponding to the first k largest eigenvalues of the 512 × 512 covariance matrix, so as to obtain a 512 × k projection matrix;
and multiplying the N × 512 descriptor matrix by the 512 × k projection matrix to obtain the N × k dimension-reduced descriptor matrix.
7. An apparatus for extracting abnormal behavior from power field operation video data, comprising:
the system comprises a preprocessing module, a video processing module and a video processing module, wherein the preprocessing module is used for acquiring a job video image and preprocessing the job video image;
the moving region detection module is configured to extract a moving target region from the preprocessed job video image, and specifically includes:
respectively establishing a background model and an optical flow field for pixel points in the preprocessed operation video image, and respectively extracting a foreground part in the operation video image by using the background model and the optical flow field;
fusing the foreground part obtained by the background model and the foreground part obtained by the optical flow field to obtain a moving target area in the operation video image;
and the abnormal behavior identification module is used for identifying the abnormal behavior of the moving target area by utilizing a pre-trained fusion model based on the Catboost and the Lasso.
8. An apparatus for extracting abnormal behavior from power field operation video data, the apparatus comprising a processor and a memory:
the memory is used for storing a computer program and sending instructions of the computer program to the processor;
the processor executes the method for extracting the abnormal behavior of the electric power field operation video data according to any one of claims 1-6 according to the instructions of the computer program.
9. A computer storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when being executed by a processor, implements a method for extracting abnormal behavior from electrical field work video data according to any one of claims 1 to 6.
CN202111382210.8A 2021-11-22 2021-11-22 Method and device for extracting abnormal behaviors from power field operation video data Active CN113822240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111382210.8A CN113822240B (en) 2021-11-22 2021-11-22 Method and device for extracting abnormal behaviors from power field operation video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111382210.8A CN113822240B (en) 2021-11-22 2021-11-22 Method and device for extracting abnormal behaviors from power field operation video data

Publications (2)

Publication Number Publication Date
CN113822240A CN113822240A (en) 2021-12-21
CN113822240B true CN113822240B (en) 2022-03-25

Family

ID=78917918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111382210.8A Active CN113822240B (en) 2021-11-22 2021-11-22 Method and device for extracting abnormal behaviors from power field operation video data

Country Status (1)

Country Link
CN (1) CN113822240B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758302A (en) * 2022-05-07 2022-07-15 广东电网有限责任公司广州供电局 Electric power scene abnormal behavior detection method based on decentralized attention mechanism

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460320A (en) * 2017-12-19 2018-08-28 杭州海康威视数字技术股份有限公司 Based on the monitor video accident detection method for improving unit analysis

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9020190B2 (en) * 2013-01-31 2015-04-28 International Business Machines Corporation Attribute-based alert ranking for alert adjudication
US10628703B2 (en) * 2017-12-19 2020-04-21 International Business Machines Corporation Identifying temporal changes of industrial objects by matching images
CN108596169B (en) * 2018-03-12 2021-05-14 北京建筑大学 Block signal conversion and target detection method and device based on video stream image
CN109785361A (en) * 2018-12-22 2019-05-21 国网内蒙古东部电力有限公司 Substation's foreign body intrusion detection system based on CNN and MOG
US11886961B2 (en) * 2019-09-25 2024-01-30 Sap Se Preparing data for machine learning processing
CN111598179B (en) * 2020-05-21 2022-10-04 国网电力科学研究院有限公司 Power monitoring system user abnormal behavior analysis method, storage medium and equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460320A (en) * 2017-12-19 2018-08-28 杭州海康威视数字技术股份有限公司 Based on the monitor video accident detection method for improving unit analysis

Also Published As

Publication number Publication date
CN113822240A (en) 2021-12-21

Similar Documents

Publication Publication Date Title
KR101410489B1 (en) Face detection and method and apparatus
CN103246896B (en) A kind of real-time detection and tracking method of robustness vehicle
CN106778609A (en) A kind of electric power construction field personnel uniform wears recognition methods
Huang et al. Detection of human faces using decision trees
Hu Design and implementation of abnormal behavior detection based on deep intelligent analysis algorithms in massive video surveillance
CN111178208A (en) Pedestrian detection method, device and medium based on deep learning
CN113553979B (en) Safety clothing detection method and system based on improved YOLO V5
CN111091057A (en) Information processing method and device and computer readable storage medium
CN113822240B (en) Method and device for extracting abnormal behaviors from power field operation video data
CN114049581A (en) Weak supervision behavior positioning method and device based on action fragment sequencing
Inthajak et al. Medical image blob detection with feature stability and KNN classification
CN113515977A (en) Face recognition method and system
KR101675692B1 (en) Method and apparatus for crowd behavior recognition based on structure learning
CN114241522A (en) Method, system, equipment and storage medium for field operation safety wearing identification
CN113205060A (en) Human body action detection method adopting circulatory neural network to judge according to bone morphology
CN111832475B (en) Face false detection screening method based on semantic features
CN114067360A (en) Pedestrian attribute detection method and device
Lulio et al. Cognitive-merged statistical pattern recognition method for image processing in mobile robot navigation
Alani et al. Convolutional neural network-based Face Mask Detection
Zeno et al. Face validation based anomaly detection using variational autoencoder
Hunter et al. Exploiting sparse representations in very high-dimensional feature spaces obtained from patch-based processing
Kumar et al. Skin based occlusion detection and face recognition using machine learning techniques
Subramanian et al. An optical flow feature and McFIS based approach for 3-dimensional human action recognition
Kumar A comparative study on machine learning algorithms using HOG features for vehicle tracking and detection
Ahuja et al. Object Detection and classification for Autonomous Drones

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant