CN113989732A - Real-time monitoring method, system, equipment and readable medium based on deep learning - Google Patents


Info

Publication number
CN113989732A
CN113989732A (application CN202111092134.7A)
Authority
CN
China
Prior art keywords: image; deep learning; real-time monitoring; target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111092134.7A
Other languages
Chinese (zh)
Inventor
蔡剑
Current Assignee
China Eracom Contracting And Engineering Co ltd
Original Assignee
China Eracom Contracting And Engineering Co ltd
Priority date
Filing date
Publication date
Application filed by China Eracom Contracting And Engineering Co ltd filed Critical China Eracom Contracting And Engineering Co ltd
Priority to CN202111092134.7A priority Critical patent/CN113989732A/en
Publication of CN113989732A publication Critical patent/CN113989732A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a real-time monitoring method, system, equipment and readable medium based on deep learning. The real-time monitoring method based on deep learning comprises the following steps: acquiring video data of a monitored area and preprocessing the video data to obtain a plurality of target images; performing target detection and tracking on the plurality of target images by using a first recognition model based on deep learning to obtain a region of interest of each target image; and identifying the video segments corresponding to the plurality of regions of interest by using a behavior analysis method, and judging whether an illegal intrusion exists. Target images are first obtained by preprocessing; a region of interest that may carry an intrusion risk is identified in each target image; the corresponding video segments are then extracted from the video data according to the identified regions of interest; and a behavior analysis method based on similarity measurement finally confirms whether an illegal intrusion has occurred. Because the judgment passes through multiple rounds of image and video recognition, misjudgment is avoided.

Description

Real-time monitoring method, system, equipment and readable medium based on deep learning
Technical Field
The present invention relates to electronic devices, and more particularly, to a real-time monitoring method, system, device and readable medium based on deep learning.
Background
There are many existing methods for target detection and tracking, and although some difficulties in video monitoring systems have been overcome, many deeper problems remain to be explored and solved, for example the applicability of such systems under special weather conditions (such as rain or haze) and their adaptation to varying illumination intensity. If these problems cannot be solved effectively, further improvement and wider application of intelligent video monitoring systems will be severely hindered.
Disclosure of Invention
In view of the shortcomings of the prior art, one objective of the present invention is to provide a real-time monitoring method based on deep learning, which uses multiple rounds of recognition to ensure that real-time monitoring of illegal intrusion can be achieved even in special environments.
The second purpose of the present invention is to provide a real-time monitoring system.
It is a third object of the present invention to provide an electronic apparatus.
It is a further object of the present invention to provide a computer readable medium.
In order to achieve the purpose, the invention adopts the following technical scheme:
in one aspect, the invention provides a real-time monitoring method based on deep learning, which comprises the following steps:
acquiring video data of a monitored area, and preprocessing the video data to obtain a plurality of target images; the target image is an image with abnormal data;
performing target detection and tracking on the plurality of target images by using a first recognition model based on deep learning to obtain an interested region of each target image; the region of interest is a region with intrusion risk in the monitoring region;
and identifying the video segments corresponding to the plurality of regions of interest by using a behavior analysis method, and judging whether an illegal intrusion exists.
Further, in the real-time monitoring method based on deep learning, the preprocessing includes:
decomposing the video data to obtain a plurality of first images;
screening the plurality of first images by using a background difference method to obtain a plurality of screened images;
and carrying out image enhancement on the plurality of screened images to obtain the plurality of target images.
Further, in the real-time monitoring method based on deep learning, the video data includes day video data and night video data; the first image comprises a first day image and a first night image;
the background subtraction method comprises the steps of:
acquiring a standard background image of a monitored area; the standard background image comprises a day background image and a night background image;
and comparing and analyzing the first image and the standard background image, and judging that the image with the moving target is a target image.
Further, in the real-time monitoring method based on deep learning, the first recognition model is obtained through the following steps:
acquiring a first training set; the first training set comprises a plurality of first training images with intrusion risks;
and training the initialized image recognition model based on deep learning by using the first training set to obtain the first recognition model.
Further, in the real-time monitoring method based on deep learning, the behavior analysis method includes the following steps:
putting the video clips into an intrusion identification model, and judging whether an illegal intrusion condition exists or not; the intrusion recognition model is generated based on a hidden Markov model.
Further, the real-time monitoring method based on deep learning further includes:
and when the illegal intrusion is judged, sending alarm information.
Further, according to the real-time monitoring method based on deep learning, the illegal intrusion includes an intrusion behavior and a window climbing behavior.
In another aspect, the present invention further provides a real-time monitoring system using any one of the above real-time monitoring methods based on deep learning, including:
the system comprises an acquisition module, a storage module and a display module, wherein the acquisition module is used for acquiring video data of a monitored area and preprocessing the video data to obtain a plurality of target images;
the processing module is used for performing target detection and tracking on the plurality of target images by using a first recognition model based on deep learning to obtain a region of interest of each target image, the region of interest being a region with intrusion risk in the monitored area; and for identifying the video segments corresponding to the plurality of regions of interest by using a behavior analysis method and judging whether an illegal intrusion exists.
In another aspect, the present invention further provides an electronic device, including:
a processor;
a memory storing a computer program; the computer program, when executed by the processor, implements a deep learning based real-time monitoring method as described in any of the preceding.
In another aspect, the present invention further provides a computer readable medium storing a computer program, which when executed by a processor implements the real-time monitoring method based on deep learning according to any one of the foregoing.
Compared with the prior art, the real-time monitoring method, the system, the equipment and the readable medium based on deep learning have the following beneficial effects:
the real-time monitoring method provided by the invention is used for firstly obtaining target images based on preprocessing, identifying and obtaining an interested area which is possibly stored with the intrusion risk in each target image from a plurality of target images, then extracting corresponding video segments from the video data according to the identified interested areas, finally determining whether the target images are illegal intrusion by using a similar measurement behavior analysis method, finally determining whether the target images are illegal intrusion risk or not through image or video identification for many times, greatly improving the safety coefficient, avoiding false alarm or no alarm, being suitable for different environmental conditions and greatly improving the identification capability.
Drawings
FIG. 1 is a flow chart of a real-time monitoring method provided by the present invention;
FIG. 2 is a flow chart of the image enhancement steps provided by the present invention;
FIG. 3 is a flow chart of one embodiment of the image enhancement step provided by the present invention;
fig. 4 is a block diagram of a real-time monitoring system provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It is to be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of specific embodiments of the invention, and are not intended to limit the invention.
The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps, but may include other steps not expressly listed or inherent to such process or method. Similarly, without further limitation, one or more devices, subsystems, elements, structures or components preceded by "comprises a" do not preclude the existence of other such devices, subsystems, elements, structures or components. The appearances of the phrases "in one embodiment," "in another embodiment," and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Referring to fig. 1, the present invention provides a real-time monitoring method based on deep learning, applied to a central control terminal. The central control terminal may be a cloud, a server or a mobile terminal, and the first recognition model based on deep learning is loaded inside the central control terminal. The method is specifically applied to community security monitoring: the real-time monitoring system comprises a camera and the central control terminal, the camera is installed at a position to be monitored, for example one side of a multi-storey building, and the central control terminal receives the surveillance video data from the camera in real time, identifies the video data, and judges whether an illegal intrusion has occurred.
In particular, the camera is preferably one whose operation can be adjusted according to the light intensity, for example a camera with a night vision function that is activated after sunset or when the light intensity is low. Alternatively, the same monitoring area may be covered by two cameras: an ordinary camera used when the light intensity is high, and a night vision camera used when the light intensity is low.
The real-time monitoring method based on deep learning comprises the following steps:
s1, acquiring video data of a monitoring area, and preprocessing the video data to obtain a plurality of target images; the target image is an image with abnormal data; the monitoring area includes, but is not limited to, a door corridor, a side of a building, and may also be a certain area of an accessible position of other buildings, which is determined according to actual requirements.
The sources of the video data include real-time detection data from the camera and stored data previously captured by the camera; that is, real-time monitoring covers both live monitoring over a given time and space and automatic identification within a given section of recorded video. For example, if a resident of a building loses something, reviewing a period of the surveillance video would normally require playing the whole recording for that period, but with the real-time monitoring method provided by the present invention, the segments of the surveillance video containing illegal intrusion can be identified quickly for reference.
Further, the preprocessing is used to extract target images: the video data may be decomposed to obtain every frame it contains, and the desired frames are then selected as target images according to a chosen recognition algorithm. Those skilled in the art can select an appropriate recognition algorithm for acquiring the target images according to actual requirements.
Generally, the image content within the camera's monitoring area changes little, and abnormal data is whatever differs from the current monitoring area. For example, if one side of a building is being monitored and a bird flies into view, then for a certain segment of the monitoring video every frame contains the bird; all the frame images in that segment are images with abnormal data, i.e., target images. Preferably, if the main monitoring concern is a rain condition, the frames of the monitoring video related to rain can likewise be set as the target images.
S2, performing target detection and tracking on the plurality of target images by using a first recognition model based on deep learning to obtain a region of interest of each target image; the region of interest is a region with intrusion risk in the monitored area. The first recognition model is obtained by training a deep neural network model; a person skilled in the art can follow a conventional deep neural network training procedure, which the present invention does not limit. The intrusion risk includes door opening, door prying, window climbing and the like, and can be set according to actual requirements.
The identification of the region of interest serves as a second judgment. Its purpose is to screen for intrusion risks that may occur in the monitored area, preparing for step S3, improving recognition precision and preventing false alarms. For example, a target image triggered by a bird entering the view will certainly not yield a region of interest, whereas a person climbing on a window is likely to be recognized as one.
And S3, identifying the video segments corresponding to the regions of interest by using a behavior analysis method, and judging whether an illegal intrusion exists. Those skilled in the art can select an appropriate similarity-measurement behavior analysis method according to the actual situation to identify illegal intrusion; the invention is not limited in this respect.
In some embodiments, the intrusion includes intrusion behavior and window-climbing behavior.
Further, based on the region of interest, the specific position of the corresponding target image in the video data is determined (for example, the 100th frame), and then a video clip consisting of a first predetermined number of consecutive frames preceding the target image and a second predetermined number of consecutive frames following it is extracted and identified using the behavior analysis method. That is, the video clip in this embodiment includes a segment of normal video data and a segment of video data with intrusion risk.
Preferably, the video segment formed by the first predetermined number of consecutive frames has a duration of 5-1440 min (the first predetermined number is the number of frames contained in that duration; for example, for video recorded at 60 frames per minute, a 5 min duration contains 300 frames, so the first predetermined number is 300). The video segment formed by the second predetermined number of consecutive frames preferably has a duration of 1-5 min.
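As an illustrative sketch (the helper names and the window-clamping behaviour are assumptions, not part of the patent), the clip-extraction arithmetic described above can be written as:

```python
# Hypothetical helpers illustrating the clip-extraction arithmetic:
# convert the chosen durations into frame counts at a given frame rate,
# then clamp the extracted window to the bounds of the video.

def clip_frame_counts(fps, pre_minutes, post_minutes):
    """Return (first_predetermined_number, second_predetermined_number)."""
    pre_frames = int(fps * 60 * pre_minutes)
    post_frames = int(fps * 60 * post_minutes)
    return pre_frames, post_frames

def clip_bounds(target_frame, pre_frames, post_frames, total_frames):
    """Clamp the clip window around the flagged target frame."""
    start = max(0, target_frame - pre_frames)
    end = min(total_frames - 1, target_frame + post_frames)
    return start, end
```

For example, at 1 frame per second a 5 min leading window contains 300 frames, and a window that would extend before frame 0 is clamped to the start of the video.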
The real-time monitoring method provided by the invention first obtains target images by preprocessing, identifies in each target image a region of interest that may carry an intrusion risk, then extracts the corresponding video segments from the video data according to the identified regions of interest, and finally confirms whether an illegal intrusion has occurred by using a behavior analysis method based on similarity measurement. Because the final judgment passes through multiple rounds of image and video recognition, the safety factor is greatly improved, false alarms and missed alarms are avoided, the method is suitable for different environmental conditions, and the recognition capability is greatly improved.
Further, as a preferable scheme, in this embodiment, the preprocessing includes:
decomposing the video data to obtain a plurality of first images; specifically, the first images are preferably extracted frame by frame.
Screening the plurality of first images by using a background difference method to obtain a plurality of screened images. The background difference method compares the current frame of the image sequence with a pre-constructed background model and detects moving targets from the comparison result. The specific operation is as follows: first, an image is selected as the initial background model; then each image in the video is subtracted from the background image one by one; the subtraction result is compared with a preselected threshold; if the result is greater than the threshold, the pixel is treated as part of a moving target, and if it is less than the threshold, it is treated as part of the background area.
And carrying out image enhancement on the plurality of screened images to obtain the plurality of target images.
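A minimal pure-Python sketch of the background-difference screening (the function name, list-of-lists grayscale representation and threshold value are illustrative assumptions):

```python
# Background difference on grayscale frames held as 2-D lists (0-255):
# subtract the background pixel-wise, threshold the absolute difference,
# and report whether any pixel moved. The threshold of 30 is illustrative.

def detect_motion(frame, background, threshold=30):
    mask = [[abs(p - b) > threshold for p, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]
    moving = any(any(row) for row in mask)
    return mask, moving
```

A frame identical to the background yields no motion, while a single changed pixel is flagged as foreground.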
Further, in this embodiment, referring to fig. 2 and fig. 3 together, the image enhancement uses a multi-scale Retinex (MSR) algorithm in the HSI (hue-saturation-intensity) color space. The scale of the filter function is selected according to the characteristics of the image itself, the enhancement effects at the various scales are fused automatically, and the detail features of the target are highlighted while the background is kept smooth. Scale parameters are then selected according to the Weber law and the characteristics of human vision, so that the brightness and contrast of the processed image are markedly improved.
Specifically, the image enhancement algorithm comprises the following steps:
s11, converting the original RGB (red green blue color system) image into HSI space.
The three components in HSI can be derived from the following calculation method for R, G, B:
θ = arccos( ((R − G) + (R − B)) / (2·sqrt((R − G)² + (R − B)·(G − B))) )
H = θ, if B ≤ G;  H = 360° − θ, if B > G
S = 1 − 3·min(R, G, B)/(R + G + B)
I = (R + G + B)/3
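The standard per-pixel RGB-to-HSI conversion can be sketched as follows (a pure-Python sketch; treating achromatic pixels as hue 0 is an assumption, since the hue of a gray pixel is undefined):

```python
import math

# Single-pixel RGB -> HSI conversion following the standard formulas.
# r, g, b are expected in [0, 1]; hue is returned in degrees.

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    s = 0.0 if (r + g + b) == 0 else 1.0 - 3.0 * min(r, g, b) / (r + g + b)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # achromatic pixel: hue undefined, use 0 by convention
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i
```

For example, pure red maps to hue 0° with full saturation, and a gray pixel maps to zero saturation with intensity equal to its gray level.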
and S12, selecting the adaptive scale parameters.
In the HSI space, the image is first divided into regions, and adaptive scale parameters are then selected for each region.
In determining the two-dimensional function, both the image gradient and the background intensity are taken into account, and pixels of different brightness are divided into different regions according to their rates of change. The background intensity of the image is denoted I(x, y) and is obtained as a weighted average of the neighborhood pixels in the image:
[equation image: I(x, y) as a weighted average, with weights m, n and r, over the neighborhood pixels in L]
where m, n and r represent weights, and L is the set formed by the four diagonal neighbors of the pixel.
The maximum pixel difference Id of the image is calculated with the following formula (the aim being to divide the image into luminance regions):
Id = max(I(x, y)) − min(I(x, y))
I1 = a·Id,  I2 = b·Id,  I3 = c·Id
where a = 0.01, b = 0.5 and c = 0.7. I1, I2 and I3 are the low-, medium- and high-luminance thresholds, respectively.
The gradient threshold Gi (i = 1, 2, 3) represents the rate of gradual change of the image and is calculated by the following formula:
[equation image: formula for the gradient thresholds Gi]
Once the brightness region to which the image belongs is determined, the parameters for images in the different regions are calculated within that region; the specific formulas are as follows:
if the image belongs to a low-brightness area, the formula for calculating the parameters is as follows:
σ1 = D1 + WI1·abs(I(x,y) − I1) + WG1·abs(G(x,y) − G1·I(x,y))
WI1 = α·log(I(x,y)/I1),  WG1 = β·log(G(x,y)/(G1·I(x,y)))
if the image belongs to a middle brightness area, the formula for calculating the parameters is as follows:
σ2=D2+WI2·abs(I(x,y)-I2)+WG2·abs(G(x,y)-G2·I(x,y))
WI2=α·log(I(x,y)/I2),WG2=β·log(G(x,y)/(G2·I(x,y)))
if the image belongs to a high-brightness area, the formula for calculating the parameters is as follows:
σ3=D3+WI3·abs(I(x,y)-I3)+WG3·abs(G(x,y)-G3·I(x,y)2)
WI3=α·log(I(x,y)/I3),WG3=β·log(G(x,y)/(G3·I(x,y)2))
D1, D2 and D3 in the above formulas are reference parameters, and the finally obtained parameter values all fluctuate around these three values; here D1 = 30, D2 = 80 and D3 = 120. I(x, y) represents the background intensity of the image and G(x, y) the gradient of the image; WIi and WGi (i = 1, 2, 3) represent the background-intensity and gradient weights of the image, respectively. To adjust the influence of the background intensity and gradient, α and β can be tuned directly: the larger their values, the more obvious the effect on brightness. In practice, the values of α and β are chosen according to the overall brightness of the image.
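The middle-brightness parameter formula for σ2 can be transcribed directly (D2 = 80 follows the text; the function name, α, β and the test values are illustrative assumptions):

```python
import math

# sigma_2 = D2 + WI2*|I - I2| + WG2*|G - G2*I|, with
# WI2 = alpha*log(I/I2) and WG2 = beta*log(G/(G2*I)).

def sigma_mid(I_xy, G_xy, I2, G2, D2=80.0, alpha=1.0, beta=1.0):
    WI2 = alpha * math.log(I_xy / I2)
    WG2 = beta * math.log(G_xy / (G2 * I_xy))
    return D2 + WI2 * abs(I_xy - I2) + WG2 * abs(G_xy - G2 * I_xy)
```

When a pixel sits exactly at the region's reference values, both weights are log(1) = 0, so both correction terms vanish and σ2 reduces to D2.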
And S13, fusing different scale information.
Replacing the fixed parameters in the MSR (multi-scale Retinex) method with the scale-parameter selection described in the steps above and applying the enhancement operation to the image, the enhanced image is given by:
R(x, y) = Σk wk·[log I(x, y) − log(Fk(x, y) * I(x, y))]
where wk is the weight of the k-th scale, Fk is the surround function at the k-th scale, and * denotes convolution.
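As a toy illustration of the multi-scale fusion step (a 1-D signal, equal scale weights, and a moving average standing in for the Gaussian surround are all simplifying assumptions):

```python
import math

# Toy 1-D multi-scale retinex: for each scale, subtract the log of a
# smoothed signal from the log of the signal, then average over scales
# with equal weights. A box average stands in for the Gaussian surround.

def box_smooth(signal, radius):
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def msr_1d(signal, scales=(1, 2, 4)):
    w = 1.0 / len(scales)
    result = [0.0] * len(signal)
    for r in scales:
        smooth = box_smooth(signal, r)
        for i, (s, m) in enumerate(zip(signal, smooth)):
            result[i] += w * (math.log(s + 1e-6) - math.log(m + 1e-6))
    return result
```

A flat signal is left unchanged (all zeros), while a bright spike produces a positive response at its position — the behaviour that highlights detail against a smooth background.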
S14, converting the image from the HSI space back to the RGB space.
Further, as a preferable scheme, in this embodiment, the video data includes day video data and night video data; the first image comprises a first day image and a first night image; further, the night video data is obtained by shooting through a camera with a night vision function.
The background subtraction method comprises the steps of:
acquiring a standard background image of a monitored area; the standard background image comprises a day background image and a night background image;
and comparing and analyzing the first image and the standard background image, and judging that the image with the moving target is a target image.
Specifically, this embodiment ensures that illegal intrusion in the video data can be identified efficiently and accurately, whether at night or in the daytime.
Further, as a preferable solution, in this embodiment, the first recognition model is obtained through the following steps:
acquiring a first training set; the first training set comprises a plurality of first training images with intrusion risks;
and training the initialized image recognition model based on deep learning by using the first training set to obtain the first recognition model. Specifically, the deep-learning-based image recognition model may be a network trained with the back-propagation (BP) algorithm, or a clustering neural network.
Further, as a preferred scheme, in this embodiment, the behavior analysis method includes the steps of:
putting the video clips into an intrusion identification model, and judging whether an illegal intrusion condition exists or not; the intrusion recognition model is generated based on a hidden Markov model.
In this embodiment, it is only necessary to know in advance the judgment criteria of the specific abnormal behaviors (for example, break-in behavior and window-climbing behavior), then extract motion characteristic information such as the shape of the target from the video sequence according to these criteria, and model the behaviors manually from these features. In this embodiment, the intrusion recognition model is generated from a hidden Markov model.
First, features of the human body edges are extracted, and behaviors are divided into normal and abnormal using a support vector machine (SVM); in behavior modeling, a hybrid hidden Markov model is applied, and a temporal reliability measure is then used to identify abnormal behavior; finally, localization detection is used to track the motion trajectory of the moving target, and the trajectory is modeled to recognize abnormal behavior.
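The likelihood scoring that a hidden-Markov intrusion model relies on can be sketched with the standard forward algorithm (all probabilities below are illustrative; in this setting, a sequence scoring poorly under the normal-behavior model would be flagged as a possible intrusion):

```python
import math

# Standard HMM forward algorithm: returns the log-likelihood of an
# observation sequence under a model given by start, transition and
# emission probabilities. obs is a list of symbol indices.

def forward_log_likelihood(obs, start, trans, emit):
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return math.log(sum(alpha))
```

With a single observation the result reduces to log of the start-weighted emission probability, and longer sequences accumulate strictly lower log-likelihoods.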
Of course, in a specific implementation, a behavior analysis method based on similarity measurement may also be used. It recognizes abnormal behavior by automatically learning the normal behavior in the video sequence, without requiring a human behavior model to be defined in advance.
The specific detection process is as follows: the video segments are divided into a plurality of subsegments, features are extracted from the subsegments to form feature vectors, behavior analysis is performed using clustering and similarity measurement, and the video segments falling into rare categories are treated as anomalies.
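A minimal sketch of this clustering-and-rarity step (greedy leader clustering with an illustrative distance threshold and minimum cluster size; the patent does not specify the clustering method to this level of detail):

```python
# Cluster per-subsegment feature vectors by nearest existing centroid;
# members of rare (small) clusters are flagged as anomalous subsegments.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_and_flag(features, dist_threshold=1.0, min_cluster_size=2):
    centroids, members = [], []
    for idx, f in enumerate(features):
        for c, m in zip(centroids, members):
            if euclidean(f, c) <= dist_threshold:
                m.append(idx)
                break
        else:
            centroids.append(f)   # no close cluster: start a new one
            members.append([idx])
    anomalies = []
    for m in members:
        if len(m) < min_cluster_size:
            anomalies.extend(m)
    return sorted(anomalies)
```

With three near-identical "normal" vectors and one outlier, only the outlier's index is returned.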
Further, as a preferable scheme, in this embodiment, the method further includes:
and when the illegal intrusion is judged, sending alarm information. Further, the alarm information includes alarm information sent to managers, security personnel and owners, and also includes driving-away alarm information for intruders (such as thieves). Specifically, those skilled in the art may use different reminding manners according to actual needs and objects to which the report message is sent, for example, a manner of sending a warning short message to the mobile terminal is used for managers, security personnel and owners, and a loud sound driving warning sound is sent by using a speaker closest to the intruder.
Referring to fig. 4, the present invention further provides a real-time monitoring system using the deep learning based real-time monitoring method according to any of the foregoing embodiments, including:
the system comprises an acquisition module, a storage module and a display module, wherein the acquisition module is used for acquiring video data of a monitored area and preprocessing the video data to obtain a plurality of target images;
the processing module is used for performing target detection and tracking on the plurality of target images by using a first recognition model based on deep learning to obtain a region of interest of each target image, the region of interest being a region with intrusion risk in the monitored area; and for identifying the video segments corresponding to the plurality of regions of interest by using a behavior analysis method and determining whether an illegal intrusion has occurred.
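The two-module split can be sketched as follows. All class names and callbacks here are hypothetical; a real acquisition module would wrap camera capture plus the background-difference screening, and a real processing module would wrap the trained recognition model and behavior analyzer:

```python
class AcquisitionModule:
    """Collects video frames and preprocesses them into candidate target
    frames. detect_motion stands in for the background-difference screening."""
    def __init__(self, detect_motion):
        self.detect_motion = detect_motion

    def run(self, frames):
        return [f for f in frames if self.detect_motion(f)]

class ProcessingModule:
    """Applies the recognition model to each target frame, then runs
    behavior analysis over the resulting regions of interest."""
    def __init__(self, model, analyze):
        self.model, self.analyze = model, analyze

    def run(self, target_frames):
        rois = [self.model(f) for f in target_frames]
        return self.analyze(rois)

# Toy wiring: frames are ints; "motion" when nonzero; intrusion if any ROI > 5.
acq = AcquisitionModule(lambda f: f != 0)
proc = ProcessingModule(lambda f: f, lambda rois: any(r > 5 for r in rois))
print(proc.run(acq.run([0, 2, 9, 0])))  # → True
```

Keeping acquisition and processing behind separate interfaces lets the preprocessing and the deep-learning model be swapped independently.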
The present invention also provides an electronic device comprising:
a processor;
a memory storing a computer program; the computer program, when executed by the processor, implements a method for real-time monitoring based on deep learning according to any of the preceding embodiments.
The electronic device may be any intelligent device with real-time capability, such as a cloud platform, a server, a computer, or a mobile terminal.
The present invention also provides a computer-readable medium storing a computer program, which when executed by a processor implements the real-time monitoring method based on deep learning according to any of the foregoing embodiments.
More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
It should be understood that equivalents and modifications of the technical solution and inventive concept thereof may occur to those skilled in the art, and all such modifications and alterations should fall within the scope of the appended claims.

Claims (10)

1. A real-time monitoring method based on deep learning is characterized by comprising the following steps:
acquiring video data of a monitored area, and preprocessing the video data to obtain a plurality of target images; the target image is an image with abnormal data;
performing target detection and tracking on the plurality of target images by using a first recognition model based on deep learning to obtain an interested region of each target image; the region of interest is a region with intrusion risk in the monitoring region;
and identifying the video segments corresponding to the plurality of regions of interest by using a behavior analysis method, and determining whether an illegal intrusion has occurred.
2. The deep learning based real-time monitoring method according to claim 1, wherein the preprocessing comprises:
decomposing the video data to obtain a plurality of first images;
screening the plurality of first images by using a background difference method to obtain a plurality of interference-processed images;
and performing image enhancement on the plurality of interference-processed images to obtain the plurality of target images.
3. The deep learning based real-time monitoring method according to claim 2, wherein the video data comprises day video data and night video data; the first image comprises a first day image and a first night image;
the background subtraction method comprises the steps of:
acquiring a standard background image of a monitored area; the standard background image comprises a day background image and a night background image;
and comparing and analyzing the first image against the standard background image, wherein an image in which a moving target appears is determined to be a target image.
4. The deep learning based real-time monitoring method according to claim 1, wherein the first recognition model is obtained by the following steps:
acquiring a first training set; the first training set comprises a plurality of first training images with intrusion risks;
and training the initialized image recognition model based on deep learning by using the first training set to obtain the first recognition model.
5. The deep learning based real-time monitoring method according to claim 1, wherein the behavior analysis method comprises the steps of:
inputting the video segments into an intrusion recognition model, and determining whether an illegal intrusion condition exists; the intrusion recognition model is generated based on a hidden Markov model.
6. The deep learning based real-time monitoring method according to claim 1, further comprising:
and when the illegal intrusion is judged, sending alarm information.
7. The deep learning based real-time monitoring method according to claim 1, wherein the illegal intrusion includes break-in behavior and window-climbing behavior.
8. A real-time monitoring system using the deep learning based real-time monitoring method according to any one of claims 1 to 7, comprising:
the system comprises an acquisition module, a storage module and a display module, wherein the acquisition module is used for acquiring video data of a monitored area and preprocessing the video data to obtain a plurality of target images;
the processing module is used for performing target detection and tracking on the plurality of target images by using a first recognition model based on deep learning to obtain a region of interest of each target image, the region of interest being a region with intrusion risk in the monitored area; and for identifying the video segments corresponding to the plurality of regions of interest by using a behavior analysis method and determining whether an illegal intrusion has occurred.
9. An electronic device, comprising:
a processor;
a memory storing a computer program; the computer program, when executed by the processor, implements a deep learning based real-time monitoring method as claimed in any one of claims 1-7.
10. A computer-readable medium, in which a computer program is stored which, when being executed by a processor, carries out a method for real-time monitoring based on deep learning according to any one of claims 1 to 7.
CN202111092134.7A 2021-09-17 2021-09-17 Real-time monitoring method, system, equipment and readable medium based on deep learning Pending CN113989732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111092134.7A CN113989732A (en) 2021-09-17 2021-09-17 Real-time monitoring method, system, equipment and readable medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111092134.7A CN113989732A (en) 2021-09-17 2021-09-17 Real-time monitoring method, system, equipment and readable medium based on deep learning

Publications (1)

Publication Number Publication Date
CN113989732A true CN113989732A (en) 2022-01-28

Family

ID=79736016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111092134.7A Pending CN113989732A (en) 2021-09-17 2021-09-17 Real-time monitoring method, system, equipment and readable medium based on deep learning

Country Status (1)

Country Link
CN (1) CN113989732A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743163A (en) * 2022-04-29 2022-07-12 北京容联易通信息技术有限公司 Video intelligent monitoring algorithm architecture method and system based on deep learning
CN114913447A (en) * 2022-02-17 2022-08-16 国政通科技有限公司 Police intelligent command room system and method based on scene recognition


Similar Documents

Publication Publication Date Title
US11532156B2 (en) Methods and systems for fire detection
CN109166261B (en) Image processing method, device and equipment based on image recognition and storage medium
WO2018130016A1 (en) Parking detection method and device based on monitoring video
CN111178183B (en) Face detection method and related device
US7391907B1 (en) Spurious object detection in a video surveillance system
Dedeoglu et al. Real-time fire and flame detection in video
US7859564B2 (en) Video surveillance system
US8553086B2 (en) Spatio-activity based mode matching
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN111523397B (en) Intelligent lamp post visual identification device, method and system and electronic equipment thereof
KR20080054368A (en) Flame detecting method and device
Calderara et al. Smoke detection in video surveillance: a MoG model in the wavelet domain
CN113989732A (en) Real-time monitoring method, system, equipment and readable medium based on deep learning
CN111353338B (en) Energy efficiency improvement method based on business hall video monitoring
CN108230607B (en) Image fire detection method based on regional characteristic analysis
CN114708555A (en) Forest fire prevention monitoring method based on data processing and electronic equipment
Yoon et al. An intelligent automatic early detection system of forest fire smoke signatures using Gaussian mixture model
Huang et al. Rapid detection of camera tampering and abnormal disturbance for video surveillance system
EP2447912B1 (en) Method and device for the detection of change in illumination for vision systems
Lin et al. Real-time active tampering detection of surveillance camera and implementation on digital signal processor
KR20200060868A (en) multi-view monitoring system using object-oriented auto-tracking function
Frejlichowski et al. SmartMonitor: An approach to simple, intelligent and affordable visual surveillance system
CN107346421B (en) Video smoke detection method based on color invariance
CN113553992A (en) Escalator-oriented complex scene target tracking method and system
CN109118546A (en) A kind of depth of field hierarchical estimation method based on single-frame images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination