CN113139521B - Pedestrian boundary crossing monitoring method for electric power monitoring


Info

Publication number
CN113139521B
CN113139521B
Authority
CN
China
Prior art keywords
pedestrian
value
shadow
video
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110534198.1A
Other languages
Chinese (zh)
Other versions
CN113139521A (en)
Inventor
江鹏宇
杨亚飞
周旭战
袁世通
韩威
范晓鹏
刘云飞
马仁婷
秦铭阳
张璜
杨宏佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Datang Sanmenxia Electric Power Co ltd
Zhongnan Electric Power Test and Research Institute of China Datang Group Science and Technology Research Institute Co Ltd
Original Assignee
Datang Sanmenxia Electric Power Co ltd
Zhongnan Electric Power Test and Research Institute of China Datang Group Science and Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datang Sanmenxia Electric Power Co ltd, Zhongnan Electric Power Test and Research Institute of China Datang Group Science and Technology Research Institute Co Ltd filed Critical Datang Sanmenxia Electric Power Co ltd
Priority to CN202110534198.1A priority Critical patent/CN113139521B/en
Publication of CN113139521A publication Critical patent/CN113139521A/en
Application granted granted Critical
Publication of CN113139521B publication Critical patent/CN113139521B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06 - Electricity, gas or water supply
    • G06T5/94
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Abstract

The invention relates to a pedestrian boundary crossing monitoring method for power monitoring. In the technical scheme, motion information is integrated into pedestrian detection to complete the identification of moving pedestrians, and boundary crossing detection is completed by tracking the target pedestrian and extracting the warning line from the video sequence. The method is suited to actual power-industry field environments: it can identify and track pedestrians in a power video monitoring scene while detecting and alarming on boundary crossing behavior, with an accuracy above 92% and real-time performance that meets field requirements. It solves a problem that technicians in the field have long sought to solve but found difficult, and provides a strong guarantee for the safe and stable operation of a thermal power plant.

Description

Pedestrian boundary crossing monitoring method for electric power monitoring
Technical Field
The invention relates to the technical field of video monitoring, in particular to a pedestrian boundary crossing monitoring method for electric power monitoring.
Background
In the thermal power industry, certain special areas must be monitored to prevent the serious consequences that follow when a pedestrian mistakenly enters a dangerous area, so an efficient and accurate method of warning against such intrusions is needed. Existing moving-target detection methods usually either detect a moving target without identifying whether it is a pedestrian, or run pedestrian detection directly over the full video frame, so their real-time detection performance is low. The main motion detection methods are the inter-frame difference method, Gaussian mixture background modeling, and mean background modeling. The inter-frame difference method differences adjacent or nearby frames of a video sequence to obtain the moving objects in it. Gaussian mixture background modeling, a widely applied moving-object detection algorithm, distinguishes the moving objects from the background of the video sequence by their statistical differences. Mean background modeling takes a weighted average over a segment of the video sequence that contains no moving object to obtain the background of the scene, against which the moving objects of subsequent frames are detected. At present, most pedestrian recognition methods run recognition over the whole video, yet most of the background in the video does not need to be recognized, so they struggle to meet the requirements of power monitoring; improvement and innovation of these methods are therefore imperative.
Disclosure of Invention
In view of the above situation, and to overcome the defects of the prior art, the present invention provides a pedestrian boundary crossing monitoring method for electric power monitoring that integrates motion information into pedestrian identification, improving the speed and accuracy of moving pedestrian recognition and meeting the boundary crossing monitoring requirements of electric power systems.
The technical scheme of the invention is as follows:
a pedestrian boundary crossing monitoring method for power monitoring comprises the following steps:
step 1: video capture
Reading video data shot by a monitoring camera, converting the video data into image data, and converting a color image into a gray image to finish video acquisition;
step 2: moving object extraction
(1) Read the first 60 frames of the video sequence and compute a weighted average of their gray images to obtain an averaged gray image; using the ViBe motion background modeling method, establish a sample set for each pixel of the gray image, with the pixel values of the points surrounding the pixel as its sampling values:

B_n(x, y) = (1/N) Σ_{i=1}^{N} f_i(x, y)

where B_n is the average background image established from the collected N frames, f_i(x, y) is the i-th gray frame, and N is the number of frames averaged, taken as N consecutive frames stored in the video set including the current frame; clearly, the larger N is, the closer B_n is to the background of the actual scene, but the background modeling time grows as N increases, so, weighing modeling accuracy against efficiency, the method selects N = 60 to complete background modeling of the video scene;
(2) Read the subsequent video frames of the sequence and compare the value of each pixel with the sampling values in its sample set; if the distance between two pixel values is less than the set distance R, the two values are considered close. Computing the distance between the pixel value and every sample value in the set gives the number of close samples; if that number exceeds a set threshold, the pixel is judged to be background, which completes the extraction of the moving target;
The original distance calculation model in the ViBe algorithm is replaced with a cone model, and an adaptive threshold method is introduced so that the ViBe algorithm adapts to complex background changes. Specifically, first average the gray levels of the current video frame's sample set:

V = (1/n) Σ_{i=1}^{n} f(v_i)

where f(v_i) is the gray value of sample point v_i, n is the number of sample points in the sample set, and V is the mean of the sample set;
calculating the standard deviation of the sample set:
σ = sqrt( (1/n) Σ_{i=1}^{n} (f(v_i) − V)² )
the size of the threshold R is adjusted according to the size of the calculated standard deviation, which is expressed as follows:
R=σ×γ
in the formula: r is an adaptive threshold value of a Vibe algorithm to be adjusted, sigma is a standard deviation of a current video frame, and gamma is an amplitude multiplier factor;
(3) Shadow elimination is carried out on the extracted moving object;
step 3: carrying out pedestrian identification in the motion area;
step 4: pedestrian tracking
Track the pedestrians detected in step 3 by combining Kalman filtering with the Hungarian algorithm: the Kalman filter estimates the pedestrian's position at the next moment from the pedestrian's current position, and the Hungarian algorithm optimally matches the estimated positions, completing the tracking of the pedestrians;
and 5: pedestrian cross border detection
First extract the straight lines in the video sequence: extract the yellow warning line through the color space, extract the warning line region, complete the straight-line extraction with Hough transformation, and draw the line into the subsequent video sequence as the base line for pedestrian boundary crossing detection. Boundary crossing detection is realized in one-way detection mode: the pedestrian's motion information is integrated into the boundary crossing detection algorithm to judge the direction of movement, and when a pedestrian breaks into the dangerous area the system judges that a boundary crossing has occurred and raises the alarm.
Preferably, the specific method for identifying pedestrians in the motion area in step 3 is as follows:
the method comprises the steps that a YOLO neural network algorithm with high recognition speed and high recognition rate is adopted to recognize pedestrians in a moving area, so that moving information is integrated into pedestrian recognition;
the YOLO network structure is composed of convolution layers and full connection layers, the convolution layers extract semantic information of images, the full connection layers output coordinates of center points of predicted range frames, the YOLO network structure achieves cross-channel integration information by superposing 1 × 1 convolution layers, dimensionality is reduced, network generalization capability is improved, and the YOLO algorithm flow is as follows:
(1) Reading images of a moving area in a video, and normalizing the images into a fixed size;
(2) The YOLO model divides the input image into an S × S grid; the grid cell into which the center coordinates of the target to be detected fall is responsible for detecting that target;
(3) Each grid cell predicts B bounding boxes, together with the probability of the class each box belongs to and the confidence that the box contains a target;
(4) Score the class confidences, set an optimal threshold parameter, and eliminate bounding windows whose probability of containing a target is too low;
(5) Eliminate redundant candidate windows with non-maximum suppression;
(6) Output the detected target boxes and their classes.
The method completes the pedestrian identification of the moving target by integrating the motion information into the pedestrian detection, completes the detection of the pedestrian crossing by tracking the target pedestrian and extracting the warning line in the video sequence, and has the following advantages compared with the prior art:
(1) Through the improvement of the ViBe algorithm and the adaptive threshold setting, the threshold R adjusts itself, within limits, to environmental changes in the video sequence, so moving targets are detected more accurately as the scene changes and the extracted targets are more complete, which eases the subsequent detection and tracking of pedestrians in the video sequence;
(2) The motion information extracted by ViBe is fused with the YOLO neural network detector. The YOLO model treats detection as a regression problem and realizes end-to-end training and detection; restricting detection to motion regions removes the interference of irrelevant background, speeds up pedestrian detection, and eliminates false alarms on non-pedestrian motion in subsequent behavior detection;
(3) The pedestrian's direction of motion is integrated into the boundary crossing detection, which eliminates repeated alarms;
(4) In the thermal power industry, special areas must be monitored to prevent the serious consequences of a pedestrian mistakenly entering a dangerous area. The application provides a boundary crossing detection method suited to the power industry and fit for actual power-industry field environments: it can identify and track pedestrians in power video monitoring scenes and can detect and alarm on boundary crossing behavior, with an accuracy above 92% and real-time performance that meets field requirements. It solves a problem that technicians in the field have long sought to solve but found difficult, and provides a strong guarantee for the safe and stable operation of a thermal power plant.
Drawings
FIG. 1 is a schematic diagram of the background model of the ViBe motion background modeling method.
Fig. 2 is a block diagram of the pedestrian tracking flow of the method.
FIG. 3 is a block diagram of a process of the present invention.
Detailed Description
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
As shown in fig. 3, the pedestrian boundary crossing monitoring method for power monitoring of the present invention includes the following steps:
step 1: video capture
Reading video data shot by a monitoring camera, converting the video data into image data, and converting a color image into a gray image to finish video acquisition;
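As a concrete illustration of this acquisition step, a minimal sketch follows; it assumes an OpenCV-accessible camera or stream, and the helper name and frame-count parameter are illustrative, not part of the patented method:

```python
import cv2

def capture_gray_frames(source, max_frames=None):
    """Read frames from a camera or stream and yield grayscale images."""
    cap = cv2.VideoCapture(source)  # e.g. 0 for a local camera, or an RTSP URL
    count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # color image -> gray image
        count += 1
        if max_frames is not None and count >= max_frames:
            break
    cap.release()
```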
step 2: moving object extraction
(1) Read the first 60 frames of the video sequence and compute a weighted average of their gray images to obtain an averaged gray image; using the ViBe motion background modeling method, establish a sample set for each pixel of the gray image, with the pixel values of the points surrounding the pixel as its sampling values:

B_n(x, y) = (1/N) Σ_{i=1}^{N} f_i(x, y)

where B_n is the average background image established from the collected N frames, f_i(x, y) is the i-th gray frame, and N is the number of frames averaged, taken as N consecutive frames stored in the video set including the current frame; clearly, the larger N is, the closer B_n is to the background of the actual scene, but the background modeling time grows as N increases, so, weighing modeling accuracy against efficiency, the method selects N = 60 to complete background modeling of the video scene;
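A minimal sketch of this N = 60 frame averaging follows; the text speaks of a weighted average without giving the weights, so a uniform mean is assumed here:

```python
import numpy as np

def build_mean_background(gray_frames, n=60):
    """Average the first n gray frames to approximate the background B_n."""
    acc, count = None, 0
    for g in gray_frames:
        if count == n:
            break
        g = g.astype(np.float64)
        acc = g if acc is None else acc + g  # running sum of the frames
        count += 1
    if count == 0:
        raise ValueError("no frames available for background modeling")
    return (acc / count).astype(np.uint8)   # B_n = (1/N) * sum of the N frames
```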
the value of the amplitude multiplier factor gamma is 0.5, and the set value of the adaptive threshold value R is 20-50; the reason for this is that in order to prevent the threshold value from being adjusted too sharply in actual production, the present invention sets the adaptive threshold value to upper and lower limits of 20 to 50;
(2) Read the subsequent video frames of the sequence and compare the value of each pixel with the sampling values in its sample set; if the distance between two pixel values is less than the set distance R, the two values are considered close. Computing the distance between the pixel value and every sample value in the set gives the number of close samples; if that number exceeds a set threshold, the pixel is judged to be background, which completes the extraction of the moving target;
In the original ViBe algorithm, as shown in fig. 1, the criterion for deciding whether a pixel of the video sequence is foreground or background is a sphere S_R(v(x)) centered on the pixel value v(x) with radius R: the pixel is classified by counting how many values of the background sample set fall inside S_R(v(x)). However, the R value in the ViBe algorithm is a fixed constant that cannot adjust itself to the complexity of the environment, so in particular scenes the detection of the moving target is not accurate enough; that is, the conventional ViBe algorithm computes the distance between two pixel values as a Euclidean distance under a fixed R, which adapts poorly to the environment and cannot meet the actual production requirements of a power plant, so the ViBe algorithm needs a degree of improvement. The invention therefore sets an adaptive algorithm that lets R adjust itself, within limits, to environmental changes in the video sequence, so that the moving target is detected more accurately as the scene changes. The original distance calculation model in the ViBe algorithm is replaced with a cone model, and an adaptive threshold method is introduced so that the ViBe algorithm adapts to complex background changes. Specifically, first average the gray levels of the current video frame's sample set:

V = (1/n) Σ_{i=1}^{n} f(v_i)

where f(v_i) is the gray value of sample point v_i, n is the number of sample points in the sample set, and V is the mean of the sample set;
calculating the standard deviation of the sample set:
σ = sqrt( (1/n) Σ_{i=1}^{n} (f(v_i) − V)² )
the size of the threshold R is adjusted according to the size of the calculated standard deviation, which is expressed as follows:
R=σ×γ
in the formula: r is an adaptive threshold value of a Vibe algorithm needing to be adjusted, sigma is a standard deviation of a current video frame, and gamma is an amplitude multiplier factor;
(3) Shadow elimination is carried out on the extracted moving target; the specific method is as follows:
During motion, an object lit by a light source casts a shadow that moves with it; the shadow lowers the accuracy of the detected moving target and interferes with its subsequent processing, so a sound method is needed to suppress the shadows produced by illumination. Shadows are eliminated by fusing the HSV color space with the LBP texture feature method: first, exploiting the hue difference between a moving object and its shadow, a candidate shadow region is preliminarily identified in HSV space; since the texture features of a region are essentially unchanged before and after a shadow falls on it, texture features are then extracted from the candidate region to confirm whether it really is shadow, and the shadow is eliminated;
the HSV color space may accomplish the detection of shadows according to the following equation:
S_k(x, y) = 1 if α ≤ I_V^F(x, y) / I_V^B(x, y) ≤ β and |I_S^F(x, y) − I_S^B(x, y)| ≤ T_S and |I_H^F(x, y) − I_H^B(x, y)| ≤ T_H, and S_k(x, y) = 0 otherwise

where S_k(x, y) marks the shadow region; I_V^F and I_V^B are the brightness values of the foreground and background pixels, and α and β are the shadow brightness thresholds; I_S^F and I_S^B are the saturation values of the foreground and background pixels, and T_S is the shadow saturation threshold; I_H^F and I_H^B are the hues of the foreground and background pixels, and T_H is the shadow hue threshold;
meanwhile, the CLBP algorithm and the HSV color space are fused to eliminate the shadow, the CLBP algorithm is an operator for describing the local texture characteristics of the image and has illumination invariance, so that the CLBP algorithm is an improved LBP algorithm, and the formula is expressed as follows:
CLBP_S_{P,R} = Σ_{p=0}^{P−1} S(g_p − g_c) · 2^p

where P is the number of adjacent pixels and R is the neighborhood radius; g_p is the gray value of a pixel adjacent to the current pixel (x_c, y_c), g_c is the gray value of the current pixel (x_c, y_c), and N′ is the number of windows;

g_{N′} = (1/N′) Σ_a g_a

where g_{N′} is the average gray value of the whole image and g_a is the gray value of an individual pixel;

D_p = g_p − g_c, where D_p is the gray difference between an adjacent pixel and the current pixel;

D_c = (1/P) Σ_{p=0}^{P−1} |D_p|, where D_c is the average of the gray differences between the current pixel and all of its adjacent pixels;

CLBP_S_{P,R} is the traditional LBP operator describing the gray differences of the local window;

CLBP_M_{P,R} = Σ_{p=0}^{P−1} S(|D_p| − D_c) · 2^p characterizes the magnitudes of the gray differences within the local window;

CLBP_C = S(g_c − g_{N′}) encodes the gray difference of the center pixel; S(x′) is the discriminant function, taking the value 1 when the argument x′ > 0 and 0 otherwise;

the histograms of the three operators CLBP_S, CLBP_M and CLBP_C are concatenated and fused, the texture correlation feature between the candidate shadow region and the background image is computed, and the dissimilarity D(T−L) between the two histograms is measured by the chi-square distance, calculated as follows:
D(T−L) = Σ_{x=1}^{X} (T_x − L_x)² / (T_x + L_x)

where X is the total number of histogram bins, and T_x and L_x are the bin values of the sample and the template in the x-th bin; the correlation of the two regions is then obtained from the dissimilarity, calculated as follows:

c = (1/N′) Σ_{x=1}^{N′} H(x)

where N′ is the number of pixels in the candidate shadow region, H(x) is a binary function, D(T−L) is the dissimilarity, and T_a is a threshold; according to the formula, H(x) is 1 when the dissimilarity D(T−L) is less than the threshold T_a, and 0 otherwise;
c is the texture similarity score between the current frame and the background frame: if c exceeds its threshold, the candidate region is a shadow region; the detected shadow pixels are set to 0, the region is assigned to the background, and the shadow is eliminated;
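As an illustration of the color-space stage of this shadow test, a minimal sketch follows; the threshold values α, β, T_S and T_H are illustrative (the text does not fix them), hue wraparound is ignored, and the CLBP texture confirmation is omitted:

```python
import cv2
import numpy as np

def hsv_shadow_mask(frame_bgr, background_bgr, fg_mask,
                    alpha=0.4, beta=0.9, t_s=60, t_h=50):
    """Mark foreground pixels whose V ratio and S/H differences indicate shadow."""
    f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h_f, s_f, v_f = cv2.split(f)
    h_b, s_b, v_b = cv2.split(b)
    ratio = v_f / (v_b + 1e-6)                    # brightness ratio I_V^F / I_V^B
    shadow = ((ratio >= alpha) & (ratio <= beta)  # alpha <= ratio <= beta
              & (np.abs(s_f - s_b) <= t_s)        # saturation difference <= T_S
              & (np.abs(h_f - h_b) <= t_h)        # hue difference <= T_H
              & (fg_mask > 0))                    # only test foreground pixels
    return shadow.astype(np.uint8) * 255

# After removing shadow pixels, a morphological closing can fill remaining holes:
# fg = cv2.bitwise_and(fg_mask, cv2.bitwise_not(shadow))
# fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```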
after the shadow is eliminated, performing morphological processing on the extracted moving object, recording a moving object area in the video, and waiting for the next processing; a small amount of holes may still exist in the moving target after the self-adaptive threshold and the shadow are eliminated, so that the extracted moving target can be more complete in shape through morphological processing, and the morphological processing is the prior art;
and step 3: performing pedestrian identification in a moving area
The specific method for identifying the pedestrian in the motion area comprises the following steps:
the traditional image feature extraction method comprises the steps of HOG feature and Haar feature. With the emergence of neural networks, pedestrian recognition is completed by adopting a deep learning method more and more at present, and the method adopts a YOLO neural network algorithm with high recognition speed and high recognition rate to realize the recognition of pedestrians in a moving area, so that the movement information is integrated into the pedestrian recognition;
the YOLO model takes the detection result as a regression problem, realizes the end-to-end training and learning target detection process, and has very high application type. The YOLO network structure is composed of convolution layers and full connection layers, the convolution layers extract semantic information of images, the full connection layers output coordinates of center points of predicted range frames, the YOLO network structure achieves cross-channel integration information by superposing 1 × 1 convolution layers, dimensionality is reduced, network generalization capability is improved, and a YOLO algorithm flow is as follows:
(1) Reading images of a moving area in a video, and normalizing the images into a fixed size;
(2) The YOLO model divides the input image into an S × S grid; the grid cell into which the center coordinates of the target to be detected fall is responsible for detecting that target;
(3) Each grid cell predicts B bounding boxes, together with the probability of the class each box belongs to and the confidence that the box contains a target;
(4) Score the class confidences, set an optimal threshold parameter, and eliminate bounding windows whose probability of containing a target is too low;
(5) Eliminate redundant candidate windows with non-maximum suppression;
(6) Output the detected target boxes and their classes.
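A sketch of how detection can be restricted to the extracted motion regions is given below; the yolo_detect callable stands in for whichever YOLO implementation is deployed, and its name, signature, confidence cutoff and blob-area filter are all assumptions of this sketch:

```python
import cv2

def detect_pedestrians_in_motion(frame, fg_mask, yolo_detect, conf_thresh=0.5):
    """Run the YOLO detector only inside bounding rectangles of the motion mask."""
    people = []
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:  # skip tiny motion blobs (illustrative size filter)
            continue
        roi = frame[y:y + h, x:x + w]
        for cls, conf, (bx, by, bw, bh) in yolo_detect(roi):
            if cls == "person" and conf >= conf_thresh:
                people.append((x + bx, y + by, bw, bh))  # back to frame coords
    return people
```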
step 4: Pedestrian tracking
Tracking the pedestrians detected in step 3 is an important precursor to the subsequent recognition of pedestrian behavior in the video;
Pedestrian tracking is completed by combining Kalman filtering, which is efficient and easy to implement, with the Hungarian algorithm: the Kalman filter estimates the pedestrian's position at the next moment from the current position, and the Hungarian algorithm optimally matches the estimated positions to the detections, completing the tracking;
the method for completing the tracking of the pedestrians by combining Kalman filtering and Hungarian algorithm can be divided into a prediction stage and a matching stage:
(1) Prediction phase
A Kalman filter is adopted to predict the position of the pedestrian in the next frame; as a linear estimation method, the Kalman filter establishes the relationship between frames. Assume the set of pedestrian positions at time k−1 is S_{k−1} = {s′_1, s′_2, s′_3, …, s′_b}; the position prediction for time k is then completed by the following formula:

x̂_k^− = A·x̂_{k−1} + B·u_{k−1}

where x̂_{k−1} is the prediction result of the previous state, x̂_k^− is the a priori estimate at time k, u_{k−1} is the control input, and A and B are the system matrices.

The error of this prediction can be found from the following equation:

P_k^− = A·P_{k−1}·Aᵀ + Q

where P_k^− is the covariance matrix of the current a priori estimate, P_{k−1} is the covariance matrix of the current state, and Q is the covariance matrix of the system process; in the actual environment, Q changes with the environment. Correcting the prediction by this error yields the prediction result of the current frame, P_k = {p_1, p_2, p_3, …, p_b};
(2) Matching phase
Let the detection result at time k be S_k = {s_1, s_2, s_3, …, s_b}; the Euclidean distances between each element of the detection result S_k and each element of the prediction result P_k = {p_1, p_2, p_3, …, p_b} form a cost matrix C:

c_{z,j} = d_e(p_z, s_j)

where d_e is the Euclidean distance between the predicted pedestrian position centroid and the detected pedestrian centroid, and each element c_{z,j} of the cost matrix, obtained from this distance, expresses the degree of match between a predicted and a detected pedestrian position. With N detection boxes in the current frame and M tracked trajectories, an M × N cost matrix C_{m,n} is obtained. The Hungarian maximum matching algorithm associates the detected pedestrians to the different trajectories: the sum of the cost values of all associations made in a frame is that frame's total association cost, and among all assignment schemes the one minimizing the total cost between detected pedestrians and trajectories is chosen, completing the tracking of the pedestrians.
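A minimal sketch of this predict-and-match loop follows; the constant-velocity state [x, y, vx, vy] and the use of scipy's Hungarian solver are assumptions here, since the text fixes neither the state model nor the solver implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Constant-velocity state s = [x, y, vx, vy]; A advances it by one frame
A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def predict(states):
    """A-priori estimates x_k^- = A @ x_{k-1} for every tracked pedestrian."""
    return [A @ s for s in states]

def match(predicted, detections):
    """Hungarian assignment over the Euclidean-distance cost matrix C."""
    cost = np.array([[np.linalg.norm(p[:2] - d) for d in detections]
                     for p in predicted])
    rows, cols = linear_sum_assignment(cost)  # minimizes the total cost
    return list(zip(rows, cols))              # (track index, detection index)

# Usage per frame: preds = predict(track_states)
#                  pairs = match(preds, [np.array(c) for c in centroids])
```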
And 5: pedestrian boundary crossing detection
In the power industry, dangerous areas carry warning lines, and these lines are invariably straight, so pedestrian boundary crossing detection begins by extracting the straight lines in the video sequence. Warning lines in the power industry are invariably yellow, so the yellow warning line can be extracted through the color space; once the warning line region is extracted, Hough transformation completes the extraction of the straight line, which is drawn into the subsequent frames of the video sequence as the base line for pedestrian boundary crossing detection. By the direction of the crossing, boundary crossing detection divides into one-way and two-way detection; in the power industry, what must be detected is breaking into a dangerous area, while leaving it need not be detected, so the method realizes pedestrian boundary crossing detection in one-way mode: the pedestrian's motion information is integrated into the boundary crossing detection algorithm to judge the direction of movement, and when a pedestrian breaks into the dangerous area the system judges that a boundary crossing has occurred and raises the alarm;
The specific method for pedestrian boundary crossing detection is as follows:
(1) Extract the warning line color through the HSV color space; the warning line is yellow, so reading the first frame of the video completes the extraction of the yellow straight line in the video;
(2) Extract the edges of the yellow region with Canny edge detection;
(3) Apply the statistical Hough transform to the extracted edges and extract the warning line;
(4) Record the center position of each moving pedestrian's rectangle from pedestrian detection, and judge the pedestrian's direction of movement from the change of this center position between adjacent frames;
(5) Judge whether the pedestrian has crossed the boundary by checking whether the lower edge of the pedestrian's rectangle intersects the straight line, combined with the pedestrian's direction of movement.
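A sketch of steps (1)-(3) together with the one-way crossing test of steps (4)-(5) follows; the HSV yellow band, the Hough parameters and the danger_sign convention are illustrative values, not ones fixed by the patent:

```python
import cv2
import numpy as np

def extract_warning_line(first_frame_bgr):
    """Isolate yellow pixels, run Canny, then a probabilistic Hough transform."""
    hsv = cv2.cvtColor(first_frame_bgr, cv2.COLOR_BGR2HSV)
    yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))  # assumed yellow band
    edges = cv2.Canny(yellow, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None
    # keep the longest segment as the warning base line
    return max(lines[:, 0], key=lambda l: (l[2]-l[0])**2 + (l[3]-l[1])**2)

def crossed(prev_bottom, cur_bottom, line, danger_sign=1):
    """One-way test: the bottom-center of the pedestrian's box moved from the
    safe side of the warning line to the danger side between adjacent frames."""
    x1, y1, x2, y2 = line
    def side(p):  # sign of point p relative to the line
        return np.sign((x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1))
    return side(prev_bottom) != danger_sign and side(cur_bottom) == danger_sign
```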
The method has achieved good results in practical application. Comparing it with Method 1 yields the conclusions below; Method 1 completes background modeling with Gaussian mixture background modeling and tracks the moving target with the Camshift algorithm to realize pedestrian boundary crossing detection;
TABLE 1 Model method comparison
(table reproduced as an image in the original publication; values not recoverable from the text)
The effectiveness of the pedestrian boundary crossing detection designed by the method was tested on an electronics-room scene from the power industry, with the following detection results:
TABLE 2 Comparison of model performance
(table reproduced as an image in the original publication; values not recoverable from the text)
Comparison between this method and Method 1 shows that the detection rate of this method is far higher; the processing frame rate after adding pedestrian identification is slightly lower than that of Method 1, but it still meets the actual field requirements, so the method has high practicality in power scenes. Existing work mainly targets general scenes, dedicated algorithms for pedestrian and boundary crossing detection in power-industry video are rare, and existing general boundary crossing detection algorithms struggle to reach a recognition rate above 85% against power-industry backgrounds. This method can identify and track pedestrians in power video monitoring scenes and can detect and alarm on boundary crossing behavior, with an accuracy above 92% and real-time performance that meets field requirements; it solves a problem that technicians in the field have long sought to solve but found difficult, and provides a strong guarantee for the safe and stable operation of a thermal power plant.

Claims (3)

1. A pedestrian boundary crossing monitoring method for power monitoring is characterized by comprising the following steps:
step 1: video capture
Reading video data shot by a monitoring camera, converting the video data into image data, and converting a color image into a gray image to finish video acquisition;
step 2: moving object extraction
(1) Read the first 60 frames of the video sequence and compute a weighted average of their gray images to obtain an averaged gray image; using the ViBe motion background modeling method, establish a sample set for each pixel of the gray image, with the pixel values of the points surrounding the pixel as its sampling values:

B_n(x, y) = (1/N) Σ_{i=1}^{N} f_i(x, y)

where B_n is the average background image established from the collected N frames, f_i(x, y) is the i-th gray frame, and N is the number of frames averaged, taken as N consecutive frames stored in the video set including the current frame; clearly, the larger N is, the closer B_n is to the background of the actual scene, but the background modeling time grows as N increases, so, considering modeling accuracy and efficiency, N = 60 is selected to complete background modeling of the video scene;
(2) Read the subsequent video frames of the sequence and compare the value of each pixel with the sampling values in its sample set; if the distance between two pixel values is less than the set distance R, the two values are considered close. Computing the distance between the pixel value and every sample value in the set gives the number of close samples; if that number exceeds a set threshold, the pixel is judged to be background, which completes the extraction of the moving target;
The original distance calculation model in the ViBe algorithm is replaced with a cone model, and an adaptive threshold method is introduced so that the ViBe algorithm adapts to complex background changes. Specifically, first average the gray levels of the current video frame's sample set:

V = (1/n) Σ_{i=1}^{n} f(v_i)

where f(v_i) is the gray value of sample point v_i, n is the number of sample points in the sample set, and V is the mean of the sample set;
calculating the standard deviation of the sample set:
σ = sqrt( (1/n) Σ_{i=1}^{n} (f(v_i) − V)² )
the size of the threshold R is adjusted according to the size of the calculated standard deviation, which is expressed as follows:
R=σ×γ
in the formula: r is an adaptive threshold value of a Vibe algorithm to be adjusted, sigma is a standard deviation of a current video frame, and gamma is an amplitude multiplier factor;
(3) Shadow elimination is carried out on the extracted moving object;
step 3: carrying out pedestrian identification in the motion area;
step 4: pedestrian tracking
Track the pedestrians detected in step 3 by combining Kalman filtering with the Hungarian algorithm: the Kalman filter estimates the pedestrian's position at the next moment from the pedestrian's current position, and the Hungarian algorithm optimally matches the estimated positions, completing the tracking of the pedestrians;
and 5: pedestrian cross border detection
First extract the straight lines in the video sequence: extract the yellow warning line through the color space, extract the warning line region, complete the straight-line extraction with Hough transformation, and draw the line into the subsequent video sequence as the base line for pedestrian boundary crossing detection. Boundary crossing detection is realized in one-way detection mode: the pedestrian's motion information is integrated into the boundary crossing detection algorithm to judge the direction of movement, and when a pedestrian breaks into the dangerous area the system judges that a boundary crossing has occurred and raises the alarm;
The specific method for eliminating the shadow of the extracted moving target in step 2 is as follows:
Shadows are eliminated by fusing the HSV color space with the LBP texture feature method: first, exploiting the hue difference between a moving object and its shadow, a candidate shadow region is preliminarily identified in HSV space; since the texture features of a region are essentially unchanged before and after a shadow falls on it, texture features are then extracted from the candidate region to confirm whether it really is shadow, and the shadow is eliminated;
the HSV color space may accomplish the detection of shadows according to the following equation:
S_k(x, y) = 1 if α ≤ I_V^F(x, y) / I_V^B(x, y) ≤ β and |I_S^F(x, y) − I_S^B(x, y)| ≤ T_S and |I_H^F(x, y) − I_H^B(x, y)| ≤ T_H, and S_k(x, y) = 0 otherwise

where S_k(x, y) marks the shadow region; I_V^F and I_V^B are the brightness values of the foreground and background pixels, and α and β are the shadow brightness thresholds; I_S^F and I_S^B are the saturation values of the foreground and background pixels, and T_S is the shadow saturation threshold; I_H^F and I_H^B are the hues of the foreground and background pixels, and T_H is the shadow hue threshold;
meanwhile, the CLBP algorithm and the HSV color space are fused to eliminate the shadow, the CLBP algorithm is an improved version of the LBP algorithm, and the formula is expressed as follows:
CLBP_S_{P,R} = Σ_{p=0}^{P−1} S(g_p − g_c) · 2^p

where P is the number of adjacent pixels and R is the neighborhood radius; g_p is the gray value of a pixel adjacent to the current pixel (x_c, y_c), g_c is the gray value of the current pixel (x_c, y_c), and N′ is the number of windows;

g_{N′} = (1/N′) Σ_a g_a

where g_{N′} is the average gray value of the whole image and g_a is the gray value of an individual pixel;

D_p = g_p − g_c, where D_p is the gray difference between an adjacent pixel and the current pixel;

D_c = (1/P) Σ_{p=0}^{P−1} |D_p|, where D_c is the average of the gray differences between the current pixel and all of its adjacent pixels;

CLBP_S_{P,R} is the traditional LBP operator describing the gray differences of the local window;

CLBP_M_{P,R} = Σ_{p=0}^{P−1} S(|D_p| − D_c) · 2^p characterizes the magnitudes of the gray differences within the local window;

CLBP_C = S(g_c − g_{N′}) encodes the gray difference of the center pixel; S(x′) is the discriminant function, taking the value 1 when the argument x′ > 0 and 0 otherwise;

the histograms of the three operators CLBP_S, CLBP_M and CLBP_C are concatenated and fused, the texture correlation between the candidate shadow region and the background image is computed, and the dissimilarity D(T−L) between the two histograms is measured by the chi-square distance, calculated as follows:
D(T−L) = Σ_{x=1}^{X} (T_x − L_x)² / (T_x + L_x)

where X is the total number of histogram bins, and T_x and L_x are the bin values of the sample and the template in the x-th bin; the correlation of the two regions is then obtained from the dissimilarity, calculated as follows:

c = (1/N′) Σ_{x=1}^{N′} H(x)

where N′ is the number of pixels in the candidate shadow region, H(x) is a binary function, D(T−L) is the dissimilarity, and T_a is a threshold; according to the formula, H(x) is 1 when the dissimilarity D(T−L) is less than the threshold T_a, and 0 otherwise;
c is the texture similarity score between the current frame and the background frame: if c exceeds its threshold, the candidate region is a shadow region; the detected shadow pixels are set to 0, the region is assigned to the background, and the shadow is eliminated;
after the shadow is eliminated, performing morphological processing on the extracted moving object, recording a moving object area in the video, and waiting for the next processing;
the specific method for identifying the pedestrians in the moving area in the step 3 comprises the following steps:
the method comprises the steps that a YOLO neural network algorithm with high recognition speed and high recognition rate is adopted to recognize pedestrians in a moving area, so that moving information is integrated into pedestrian recognition;
the YOLO network structure is composed of convolution layers and full connection layers, the convolution layers extract semantic information of images, the full connection layers output coordinates of center points of predicted range frames, the YOLO network structure achieves cross-channel integration information by superposing 1 × 1 convolution layers, dimensionality is reduced, network generalization capability is improved, and the YOLO algorithm flow is as follows:
(1) Reading images of a moving area in a video, and normalizing the images into a fixed size;
(2) The YOLO model divides the input image into an S × S grid; the grid cell into which the center coordinates of the target to be detected fall is responsible for detecting that target;
(3) Each grid cell predicts B bounding boxes, together with the probability of the class each box belongs to and the confidence that the box contains a target;
(4) Score the class confidences, set an optimal threshold parameter, and eliminate bounding windows whose probability of containing a target is too low;
(5) Eliminate redundant candidate windows with non-maximum suppression;
(6) Output the detected target boxes and their classes;
the specific method for detecting the pedestrian boundary crossing in the step 5 comprises the following steps:
(1) Extract the warning line color through the HSV color space; the warning line is yellow, so reading the first frame of the video completes the extraction of the yellow straight line in the video;
(2) Extract the edges of the yellow region with Canny edge detection;
(3) Apply the statistical Hough transform to the extracted edges and extract the warning line;
(4) Record the center position of each moving pedestrian's rectangle from pedestrian detection, and judge the pedestrian's direction of movement from the change of this center position between adjacent frames;
(5) Judge whether the pedestrian has crossed the boundary by checking whether the lower edge of the pedestrian's rectangle intersects the straight line, combined with the pedestrian's direction of movement.
2. The pedestrian boundary crossing monitoring method for power monitoring according to claim 1, wherein the amplitude multiplier factor γ in step 2 is 0.5, and the setting value of the adaptive threshold R is 20-50.
3. The pedestrian boundary crossing monitoring method for power monitoring as claimed in claim 1, wherein the method of tracking pedestrians by combining Kalman filtering and the Hungarian algorithm in step 4 is divided into a prediction stage and a matching stage:
(1) Prediction stage
A Kalman filter is adopted to predict the position at which the pedestrian appears in the next frame; as a linear estimation method, the Kalman filter establishes the relationship between frames. Assume the set of pedestrian positions at time k−1 is S_{k−1} = {s′_1, s′_2, s′_3, …, s′_b}; the position prediction at time k is done by:

x̂_k^− = A·x̂_{k−1} + B·u_{k−1}

where x̂_{k−1} is the prediction result of the previous state, x̂_k^− is the a priori estimate at time k, u_{k−1} is the control input, and A and B are the system matrices;
The error of this prediction can be found from the following equation:

P_k^− = A·P_{k−1}·Aᵀ + Q

where P_k^− is the covariance matrix of the current a priori estimate, P_{k−1} is the covariance matrix of the current state, and Q is the covariance matrix of the system process; in the actual environment, Q changes with the environment; correcting the prediction by this error yields the prediction result of the current frame, P_k = {p_1, p_2, p_3, …, p_b};
(2) Matching phase
Let the detection result at time k be S_k = {s_1, s_2, s_3, …, s_b}; the Euclidean distances between each element of the detection result S_k and each element of the prediction result P_k = {p_1, p_2, p_3, …, p_b} form a cost matrix C:

c_{z,j} = d_e(p_z, s_j)

where d_e is the Euclidean distance between the predicted pedestrian position centroid and the detected pedestrian centroid, and each element c_{z,j} of the cost matrix, obtained from this distance, expresses the degree of match between a predicted and a detected pedestrian position. With N detection boxes in the current frame and M tracked trajectories, an M × N cost matrix C_{m,n} is obtained. The Hungarian maximum matching algorithm associates the detected pedestrians to the different trajectories: the sum of the cost values of all associations made in a frame is that frame's total association cost, and among all assignment schemes the one minimizing the total cost between detected pedestrians and trajectories is chosen, completing the tracking of the pedestrians.
CN202110534198.1A 2021-05-17 2021-05-17 Pedestrian boundary crossing monitoring method for electric power monitoring Active CN113139521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110534198.1A CN113139521B (en) 2021-05-17 2021-05-17 Pedestrian boundary crossing monitoring method for electric power monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110534198.1A CN113139521B (en) 2021-05-17 2021-05-17 Pedestrian boundary crossing monitoring method for electric power monitoring

Publications (2)

Publication Number Publication Date
CN113139521A CN113139521A (en) 2021-07-20
CN113139521B true CN113139521B (en) 2022-10-11

Family

ID=76817195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110534198.1A Active CN113139521B (en) 2021-05-17 2021-05-17 Pedestrian boundary crossing monitoring method for electric power monitoring

Country Status (1)

Country Link
CN (1) CN113139521B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657298A (en) * 2021-08-20 2021-11-16 软通动力信息技术(集团)股份有限公司 Pedestrian intrusion identification method, device, equipment and medium based on large displacement tracking
CN113673459A (en) * 2021-08-26 2021-11-19 中国科学院自动化研究所 Video-based production construction site safety inspection method, system and equipment
CN113890972A (en) * 2021-09-22 2022-01-04 温州大学大数据与信息技术研究院 Monitoring area target tracking system
CN113920535B (en) * 2021-10-12 2023-11-17 广东电网有限责任公司广州供电局 Electronic region detection method based on YOLOv5
CN116206250A (en) * 2021-11-30 2023-06-02 中兴通讯股份有限公司 Method and device for detecting human body boundary crossing and computer readable storage medium
CN114613098B (en) * 2021-12-20 2023-11-03 中国铁路上海局集团有限公司科学技术研究所 Tray stacking out-of-range detection method
CN114463687B (en) * 2022-04-12 2022-07-08 北京云恒科技研究院有限公司 Movement track prediction method based on big data
CN114792319B (en) * 2022-06-23 2022-09-20 国网浙江省电力有限公司电力科学研究院 Transformer substation inspection method and system based on transformer substation image
CN117629514A (en) * 2024-01-26 2024-03-01 吉林大学 SF6 leakage amount detection system and method based on mid-infrared thermal imaging

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065409A (en) * 2012-12-14 2013-04-24 广州供电局有限公司 Power transmission line monitoring and early warning system
CN106327525A (en) * 2016-09-12 2017-01-11 安徽工业大学 Machine room important place border-crossing behavior real-time monitoring method
CN107230188A (en) * 2017-04-19 2017-10-03 湖北工业大学 A kind of method of video motion shadow removing
CN108665487A (en) * 2017-10-17 2018-10-16 国网河南省电力公司郑州供电公司 Substation's manipulating object and object localization method based on the fusion of infrared and visible light
CN108038866A (en) * 2017-12-22 2018-05-15 湖南源信光电科技股份有限公司 A kind of moving target detecting method based on Vibe and disparity map Background difference
WO2020042419A1 (en) * 2018-08-29 2020-03-05 汉王科技股份有限公司 Gait-based identity recognition method and apparatus, and electronic device
CN111460949A (en) * 2020-03-25 2020-07-28 上海电机学院 Real-time monitoring method and system for preventing external damage of power transmission line
CN112541397A (en) * 2020-11-17 2021-03-23 南京林业大学 Flame detection method based on improved ViBe algorithm and lightweight convolutional network
CN112733770A (en) * 2021-01-18 2021-04-30 全程(上海)智能科技有限公司 Regional intrusion monitoring method and device

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
FAST PEDESTRIAN DETECTION AND TRACKING BASED ON VIBE COMBINED HOG-SVM SCHEME; Lang Wang et al.; International Journal of Innovative Computing, Information and Control; 2019-12-31; Vol. 15, No. 6; pp. 2305-2320 *
Research on ghost suppression for the ViBe algorithm; Ma Yongjie et al.; Laser & Optoelectronics Progress; 2020-01-31; Vol. 57, No. 2; pp. 021007-1 to 021007-8 *
A pedestrian motion detection and tracking algorithm for electric power monitoring; Jiang Pengyu et al.; Electric Power Science and Engineering; 2019-06-30; Vol. 35, No. 6; pp. 31-36 *
A visual background extraction moving-target detection algorithm that removes ghosts and shadows; Fang Lan, Yu Fengqin; Laser & Optoelectronics Progress; 2019-07-31; Vol. 56, No. 13; pp. 131002-1 to 131002-8 *
Research on a shadow elimination algorithm based on HSV color space and local texture; Long Hao et al.; Electronic Measurement Technology; 2020-09-30; Vol. 43, No. 18; pp. 81-87 *
Research on moving-target detection and tracking algorithms in video image sequences; Yang Bo; China Masters' Theses Full-text Database, Information Science and Technology; 2020-12-15; I138-126 *
Research on the analysis of abnormal pedestrian behavior in video surveillance; Gao Xiang; China Masters' Theses Full-text Database, Information Science and Technology; 2018-08-15; I136-343 *

Also Published As

Publication number Publication date
CN113139521A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
CN113139521B (en) Pedestrian boundary crossing monitoring method for electric power monitoring
CN109740413B (en) Pedestrian re-identification method, device, computer equipment and computer storage medium
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
CN101389004B (en) Moving target classification method based on on-line study
CN104598883B (en) Target knows method for distinguishing again in a kind of multiple-camera monitoring network
CN110298297B (en) Flame identification method and device
CN109145708B (en) Pedestrian flow statistical method based on RGB and D information fusion
CN107610114A (en) Optical satellite remote sensing image cloud snow mist detection method based on SVMs
CN111339883A (en) Method for identifying and detecting abnormal behaviors in transformer substation based on artificial intelligence in complex scene
CN111369596B (en) Escalator passenger flow volume statistical method based on video monitoring
CN106682665B (en) Seven-segment type digital display instrument number identification method based on computer vision
US9418426B1 (en) Model-less background estimation for foreground detection in video sequences
CN109255326B (en) Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
CN112906550B (en) Static gesture recognition method based on watershed transformation
CN111241987B (en) Multi-target model visual tracking method based on cost-sensitive three-branch decision
CN113657250A (en) Flame detection method and system based on monitoring video
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN113192038A (en) Method for identifying and monitoring abnormal smoke and fire in existing flame environment based on deep learning
Rastegar et al. An intelligent control system using an efficient License Plate Location and Recognition Approach
CN111402185B (en) Image detection method and device
CN113221603A (en) Method and device for detecting shielding of monitoring equipment by foreign matters
Wang et al. An efficient method of shadow elimination based on image region information in HSV color space
CN113095332B (en) Saliency region detection method based on feature learning
CN110334703B (en) Ship detection and identification method in day and night image
Gong et al. Pedestrian detection algorithm based on integral channel features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant