CN116193075A - Intelligent monitoring method and system based on control of Internet of things - Google Patents


Info

Publication number
CN116193075A
CN116193075A (application number CN202310060355.9A)
Authority
CN
China
Prior art keywords
monitoring
monitoring video
module
intelligent
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310060355.9A
Other languages
Chinese (zh)
Inventor
陈刚
汪泽州
鲍建飞
陈伦
钱锋强
储建新
邓亮
干玉成
姚佳
姚征
舒能文
潘克勤
王晨波
周弘毅
李想
李豹
孙豪豪
孙帅
胡燕伟
Current Assignee
Haiyan Nanyuan Electric Power Engineering Co ltd
State Grid Zhejiang Electric Power Co Ltd Haiyan County Power Supply Co
Original Assignee
Haiyan Nanyuan Electric Power Engineering Co ltd
State Grid Zhejiang Electric Power Co Ltd Haiyan County Power Supply Co
Priority date
Filing date
Publication date
Application filed by Haiyan Nanyuan Electric Power Engineering Co ltd, State Grid Zhejiang Electric Power Co Ltd Haiyan County Power Supply Co filed Critical Haiyan Nanyuan Electric Power Engineering Co ltd
Priority to CN202310060355.9A
Publication of CN116193075A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/10Detection; Monitoring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses an intelligent monitoring method and system based on Internet of things control, comprising the following steps: S1: a monitoring module acquires a monitoring video; S2: an intelligent module gathers, cleans and distributes the monitoring video; S3: an analysis module trains a neural network model with a data set and, taking image frames of the monitoring video as input, obtains a classification result for the monitoring video; S4: an early warning module issues an early warning based on the classification result. The beneficial effects of the invention are as follows: the method can clean and distribute monitoring video and classify it for early warning.

Description

Intelligent monitoring method and system based on control of Internet of things
Technical Field
The invention relates to the technical field of intelligent monitoring, in particular to an intelligent monitoring method and system based on control of the Internet of things.
Background
At present, monitoring systems can operate around the clock, which greatly improves the efficiency of inspection and security work; the benefit is especially pronounced when monitoring and managing a large area.
In the prior art, a monitoring system is generally connected to a display system on which the monitoring picture is shown. Such systems cannot clean and distribute monitoring video, nor classify it for early warning.
For example, the "monitoring system" disclosed in Chinese patent publication CN101986704A, filed on November 30, 2010, transmits real-time images of a monitored site over the Internet, so that the current situation of the site can be viewed at any time and from any place, whether from an office building, a monitoring center, or anywhere else in the world. However, it likewise cannot clean and distribute monitoring video or classify it for early warning.
Disclosure of Invention
To remedy the inability of the prior art to clean and distribute monitoring video and to classify it for early warning, the invention provides an intelligent monitoring method and system based on Internet of things control that can do both.
The technical scheme of the invention is an intelligent monitoring method based on Internet of things control, comprising the following steps:
s1: the monitoring module acquires a monitoring video;
s2: the intelligent module gathers, cleans and distributes the monitoring video;
s3: the analysis module trains a neural network model by using the data set, and obtains a classification result of the monitoring video by taking an image frame of the monitoring video as input;
s4: the early warning module carries out early warning based on the classification result.
In this scheme, the monitoring module acquires the monitoring video, and the intelligent module removes repetitive footage to obtain the cleaned monitoring video. The raw monitoring video is transmitted to the database through the first communication module, and the cleaned monitoring video is transmitted to the server through the second communication module. Segments in which the picture does not change for several consecutive seconds are deleted, which reduces the storage and concurrent-processing load on the server and improves data-processing and communication efficiency. The analysis module trains a neural network model with the data set and, taking image frames of the monitoring video as input, obtains a classification result; the early warning module issues an early warning based on that result. The method can thus clean and distribute monitoring video and classify it for early warning.
Preferably, in step S2, the monitoring video is cleaned, including the steps of:
s21: defining a repetitive monitoring video;
s22: the intelligent module collects monitoring videos;
s23: judging whether the monitoring video contains repetitive monitoring video; if so, deleting the repetitive segments and splicing the remaining video together as the cleaned monitoring video; if not, taking the monitoring video itself as the cleaned monitoring video.
In this scheme, any segment of the monitoring video in which the picture does not change for several consecutive seconds is defined as repetitive monitoring video and is deleted, reducing the storage and concurrent-processing load on the server and improving data-processing and communication efficiency.
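The cleaning step above can be sketched as follows. The patent does not specify how "the picture does not change" is detected, so the mean-absolute-pixel-difference test, the threshold and the frame rate below are illustrative assumptions:

```python
import numpy as np

def clean_video(frames, fps=25, still_seconds=3, diff_threshold=1.0):
    """Drop runs of frames whose picture does not change for `still_seconds`
    or longer, and splice the remaining frames together.

    `frames` is a list of equally sized numpy arrays (H x W x C).
    The change test (mean absolute pixel difference below `diff_threshold`)
    is an illustrative choice, not taken from the patent.
    """
    if not frames:
        return []
    min_run = fps * still_seconds  # frames counting as "several seconds"
    kept = [frames[0]]
    run = []  # current run of unchanged frames (candidates for deletion)
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16)).mean()
        if diff < diff_threshold:
            run.append(cur)
        else:
            if len(run) < min_run:   # short pause: keep it
                kept.extend(run)
            run = []                 # long still run: already dropped
            kept.append(cur)
    if len(run) < min_run:
        kept.extend(run)
    return kept
```

A still run shorter than `min_run` frames is kept, so brief pauses in the scene are not lost; only long unchanged stretches are removed before transmission to the server.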
Preferably, in step S3, the images of the data set are all 3-channel color images 32 pixels high and 32 pixels wide, the classifications of the data set include normal personnel, normal articles, suspicious personnel and suspicious articles, and the number of images in each classification is not less than 500.
In this scheme, unifying the image specification of the data set reduces sample error, and labelling the data set by the required classifications makes it easier for the neural network model to achieve the classification goal. Requiring no fewer than 500 images per classification keeps the database small while still ensuring effective training of the neural network model.
Preferably, in step S3, the analysis module trains the neural network model with the data set, including the steps of:
s31: loading and normalizing the data set;
s32: defining a convolutional neural network model;
s33: defining a loss function and an optimizer;
s34: training a neural network model;
s35: and testing the neural network model.
In this scheme, the data set is normalized using a transform function that maps the data range to [-1, 1], and is divided into a training set, a validation set and a test set in the ratio 4:1:1. The neural network is initialized with convolution layers, pooling layers and fully connected layers. Optimization uses a multi-class cross-entropy loss function and stochastic gradient descent. When training on the training set, the loss value is re-initialized and the gradients are zeroed before each pass, so that one iteration does not contaminate the next. The network is then tested on the test set; if the classification accuracy on the test set reaches the target value, the model meets expectations.
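The normalization and 4:1:1 split described above can be sketched without any deep-learning framework. The `/127.5 - 1` mapping and the fixed shuffle seed are assumptions; the patent only states that the transform maps data into [-1, 1] and that the ratio is 4:1:1:

```python
import numpy as np

def normalize(images):
    """Map uint8 pixel values in [0, 255] to the range [-1, 1]."""
    return images.astype(np.float32) / 127.5 - 1.0

def split_dataset(samples, ratio=(4, 1, 1), seed=0):
    """Shuffle and split into training/validation/test sets, ratio 4:1:1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    total = sum(ratio)
    n_train = len(samples) * ratio[0] // total
    n_val = len(samples) * ratio[1] // total
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```

With 600 images per run (for example), the split yields 400 training, 100 validation and 100 test samples.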
Preferably, in step S32, the first convolution layer has 3 input channels, 6 output channels, and a convolution kernel size of 5;
the pooling layer halves the height and width of its input;
the second convolution layer has 6 input channels, 16 output channels, and a convolution kernel size of 5;
the first fully connected layer has 400 inputs and 120 outputs;
the second fully connected layer has 120 inputs and 84 outputs;
the third fully connected layer has 84 inputs and 10 outputs.
In this scheme, the first convolution layer has 3 input channels, 6 output channels, and a kernel size of 5. Each pooling layer halves the height and width of its input. The second convolution layer has 6 input channels, 16 output channels, and a kernel size of 5. The first fully connected layer flattens the data into one dimension, with 400 inputs and 120 outputs; the second has 120 inputs and 84 outputs; the third has 84 inputs and 10 outputs. The forward pass is: the input image passes through the first convolution layer, an activation and pooling; the pooled result passes through the second convolution layer, an activation and pooling; the result is flattened and fed through the first, second and third fully connected layers in turn, and the output of the third fully connected layer is used for classification. This architecture ensures the effectiveness of neural network model training.
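The figure of 400 inputs for the first fully connected layer follows from the stated layer sizes (a LeNet-style stack on a 32x32 input, with unpadded stride-1 convolutions assumed, since the patent does not state padding or stride). A short arithmetic check:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution (unpadded, stride 1 by default)."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size):
    """The pooling layer halves the height and width."""
    return size // 2

# 32 -> conv(k=5) -> 28 -> pool -> 14 -> conv(k=5) -> 10 -> pool -> 5
s = 32
s = pool_out(conv_out(s, 5))   # after first conv + pool
s = pool_out(conv_out(s, 5))   # after second conv + pool
flat = 16 * s * s              # 16 output channels, 5x5 spatial = 400
```

So the flattened tensor entering the first fully connected layer is 16 x 5 x 5 = 400 values, matching step S32.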
Preferably, the monitoring video is read from the path of the cleaned monitoring video through OpenCV, converted into consecutive image frames, and the image frames are stored as monitoring images in jpg format.
In this scheme, the cleaned monitoring video is converted into image frames through OpenCV and stored, so that the neural network model can take the frames as input and obtain their classification, which in turn yields the classification of the monitoring video.
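A minimal sketch of the OpenCV extraction, assuming the `opencv-python` package. The filename convention (`<video stem>_<frame index>.jpg`) and the `step` parameter are illustrative; the patent only says frames are stored in jpg format and associated with the video path:

```python
import os

def frame_name(video_path, index):
    """Illustrative naming: <video stem>_<frame index>.jpg, which keeps each
    saved frame associated with its source monitoring video."""
    stem = os.path.splitext(os.path.basename(video_path))[0]
    return f"{stem}_{index:06d}.jpg"

def extract_frames(video_path, out_dir, step=1):
    """Read the cleaned video with OpenCV and save every `step`-th frame as a
    jpg monitoring image under `out_dir`. Requires opencv-python."""
    import cv2  # imported here so the naming helper has no cv2 dependency
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream or unreadable file
            break
        if index % step == 0:
            path = os.path.join(out_dir, frame_name(video_path, index))
            cv2.imwrite(path, frame)
            saved.append(path)
        index += 1
    cap.release()
    return saved
```

The returned list of saved jpg paths can then be stored as the to-be-analysed image path and associated with the monitoring video path, as the description requires.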
Preferably, the display module pages through the real-time monitoring videos in a round-robin manner, with a page-switching period of 10 to 30 seconds.
In this scheme, the display module shows the real-time monitoring videos in a four-grid or nine-grid paged rotation. The display area is divided equally into several monitoring regions, each showing the real-time video of one monitoring module. When there are more monitoring modules than regions, additional pages of regions are generated for the remaining modules, and a timer switches between pages every 10 to 30 seconds, so that all monitoring videos are displayed in turn.
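The paging arithmetic behind the rotation is simple; the helper names below are illustrative, with the nine-grid layout and a 15-second period as example parameters within the 10-30 s range:

```python
import math

def pages_needed(n_cameras, grid=9):
    """Number of pages when each page holds `grid` monitoring regions."""
    return math.ceil(n_cameras / grid)

def page_at(t_seconds, n_pages, period=15):
    """Index of the page shown at time t, switching every `period` seconds."""
    return (t_seconds // period) % n_pages
```

For example, 20 cameras on a nine-grid need 3 pages, and with a 15-second period the display cycles page 0, 1, 2, 0, ... every 45 seconds.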
An intelligent monitoring system based on internet of things control, comprising:
the monitoring module is used for acquiring a monitoring video;
the intelligent module is used for summarizing, cleaning and distributing the monitoring video and is connected with the monitoring module;
the server is used for collecting and processing the monitoring video and is connected with the intelligent module;
the database is used for storing the monitoring video, the monitoring image information and the data set and is connected with the server;
the setting module is used for setting a data set and connecting the data set with the database;
the analysis module is used for training the neural network model and classifying the monitoring video to obtain a classification result, and is connected with the server and the database;
the display module is used for displaying the monitoring video and the monitoring image information and is connected with the server;
and the early warning module is used for early warning based on the classification result and is connected with the server.
In this scheme, the monitoring module acquires the monitoring video, and the intelligent module removes repetitive footage to obtain the cleaned monitoring video. The raw monitoring video is transmitted to the database through the first communication module, and the cleaned monitoring video is transmitted to the server through the second communication module. Segments in which the picture does not change for several consecutive seconds are deleted, which reduces the storage and concurrent-processing load on the server and improves data-processing and communication efficiency. The analysis module trains a neural network model with the data set and, taking image frames of the monitoring video as input, obtains a classification result; the early warning module issues an early warning based on that result. The system can thus clean and distribute monitoring video and classify it for early warning.
Preferably, a power assembly of the monitoring module is connected with the camera inside the housing; a transmission chain is fitted between two transmission wheels rotatably mounted in the housing, and a motor mounted on the housing is connected with one of the transmission wheels.
In this scheme, the monitoring module is provided with a housing so that the camera does not produce blurred footage in rainy weather. The motor drives the transmission wheel so that the camera can retract into the housing or protrude from it, which both changes the shooting angle and protects the camera, improving the quality of the monitoring video.
Preferably, the intelligent module transmits the monitoring video to the database through the first communication module, and transmits the cleaned monitoring video to the server through the second communication module.
The beneficial effects of the invention are as follows: the method can clean and distribute the monitoring video and classify the monitoring video for early warning.
Drawings
Fig. 1 is a schematic diagram of an intelligent monitoring system based on control of the internet of things.
Fig. 2 is a flow chart of an intelligent monitoring method based on control of the internet of things.
In the figures: 1. monitoring module; 2. intelligent module; 3. server; 4. database; 5. setting module; 6. analysis module; 7. display module; 8. early warning module.
Detailed Description
The technical scheme of the invention is further described below through an embodiment with reference to the accompanying drawings.
Examples: as shown in fig. 1, an intelligent monitoring system based on control of the internet of things includes:
the monitoring module 1 is used for acquiring a monitoring video;
the intelligent module 2 is used for summarizing, cleaning and distributing the monitoring video and is connected with the monitoring module 1;
the server 3 is used for collecting and processing the monitoring video and is connected with the intelligent module 2;
a database 4 for storing the monitoring video, the monitoring image information and the data set, and connected with the server 3;
a setting module 5, configured to set a data set, and connect to the database 4;
the analysis module 6 is used for training a neural network model, classifying the monitoring video to obtain a classification result, and connecting the server 3 and the database 4;
the display module 7 is used for displaying the monitoring video and the monitoring image information and is connected with the server 3;
and the early warning module 8 is used for early warning based on the classification result and is connected with the server 3.
The monitoring module 1 is used for acquiring monitoring video and may be a camera. It is provided with a housing; a power assembly inside the housing is connected with the camera and can drive the camera to retract into the housing or protrude from it. The power assembly comprises two transmission wheels rotatably mounted in the housing, a transmission chain fitted between them, and a motor mounted on the housing connected with one of the transmission wheels.
The intelligent module 2 gathers and cleans the monitoring video from the monitoring module 1, transmits the gathered video to the database 4 through the first communication module, and transmits the cleaned video to the server 3 through the second communication module. The gathered video sent to the database 4 serves as a backup of the source monitoring video. Before the cleaned video is sent to the server 3, segments in which the picture does not change for several consecutive seconds are deleted, reducing the storage and concurrent-processing load on the server 3 and improving data-processing and communication efficiency. The cleaning proceeds as follows: repetitive monitoring video is defined as any segment in which the picture does not change for several consecutive seconds; the monitoring video is acquired and checked for repetitive segments; if any are found, they are deleted and the remaining video is spliced together as the cleaned monitoring video; otherwise, the monitoring video itself is taken as the cleaned monitoring video.
The server 3 is used for collecting and processing the monitoring video. The path of the cleaned monitoring video is obtained from the server 3; the video is read from that path through OpenCV and converted into consecutive image frames, which are stored as monitoring images in jpg format under the to-be-analysed image path. The to-be-analysed images are associated with the path of the monitoring video.
The database 4 is used for storing the monitoring video of the monitoring module 1, the data set of the setting module 5 and the monitoring image information of the analyzing module 6.
The setting module 5 is used for setting up the data set, which is stored in the database 4. The images in the data set are all 3-channel color images 32 pixels high and 32 pixels wide. The classifications of the data set include normal personnel, normal articles, suspicious personnel and suspicious articles, with no fewer than 500 images per classification.
The analysis module 6 acquires the data set from the database 4 and trains the neural network; it reads monitoring images from the to-be-analysed image path, obtains their classification through the neural network, and stores the classified monitoring image information in the database 4. The monitoring image information includes the monitoring image, the classification result, the monitoring image path and the monitoring video path.
The analysis module 6 trains the neural network as follows: loading and normalizing the data set and dividing it into training, validation and test data sets; defining a convolutional neural network; defining a loss function and an optimizer; training the neural network; and testing the neural network.
The data set is loaded, normalized using a transform function that maps the data range to [-1, 1], and divided into a training set, a validation set and a test set in the ratio 4:1:1.
A convolutional neural network is defined. The network is initialized with convolution layers, pooling layers and fully connected layers. The first convolution layer has 3 input channels, 6 output channels, and a kernel size of 5. Each pooling layer halves the height and width of its input. The second convolution layer has 6 input channels, 16 output channels, and a kernel size of 5. The first fully connected layer flattens the data into one dimension, with 400 inputs and 120 outputs; the second has 120 inputs and 84 outputs; the third has 84 inputs and 10 outputs. In the forward pass, the input image goes through the first convolution layer, an activation and pooling; the result goes through the second convolution layer, an activation and pooling; it is then flattened and passed through the first, second and third fully connected layers in turn, and the output of the third fully connected layer is used for classification.
A loss function and an optimizer are defined: a multi-class cross-entropy loss function and stochastic gradient descent.
Training the neural network. When training on the training set, the loss value is re-initialized and the gradients are zeroed before each pass, so that one iteration does not affect the next.
The neural network is tested on the test set; if the classification accuracy on the test set reaches the target value, the model meets expectations.
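The optimization loop described above (multi-class cross-entropy, gradient descent, gradients recomputed from zero on every pass) can be illustrated without a deep-learning framework. This framework-free sketch trains a softmax classifier on synthetic data; it is not the patent's CNN, only a demonstration of the loss/optimizer step, with all data and hyperparameters invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the image features: 3 classes, 20-dim inputs.
X = rng.normal(size=(90, 20))
y = np.repeat(np.arange(3), 30)
X += np.eye(3)[y] @ rng.normal(size=(3, 20)) * 2  # separate the classes

W = np.zeros((20, 3))
b = np.zeros(3)
lr = 0.1

def cross_entropy(X, y, W, b):
    """Multi-class cross-entropy loss and softmax probabilities."""
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    return loss, p

losses = []
for epoch in range(50):
    # The gradient is recomputed from zero on every pass, mirroring the
    # zero-gradient step the text requires before each traversal.
    loss, p = cross_entropy(X, y, W, b)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0
    grad /= len(y)
    W -= lr * (X.T @ grad)        # gradient-descent update
    b -= lr * grad.sum(axis=0)
    losses.append(loss)
```

After 50 passes the loss has fallen well below its initial value of ln 3, and the classifier separates the three synthetic classes; in the patent's setting the same loop structure would run over mini-batches of the 32x32 training images through the CNN.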
The display module 7 displays real-time monitoring video and monitoring image information. Because it must show the real-time video of every monitoring module 1 simultaneously, it pages through them in a four-grid or nine-grid rotation: the display area is divided equally into monitoring regions, each showing the real-time video of one monitoring module 1; when there are more modules than regions, additional pages of regions are generated for the remaining modules, and a timer switches pages every 10 to 30 seconds. The monitoring image information is displayed in a table of classification result, monitoring image path and monitoring video path. The classification-result text is styled by class: if the result is suspicious personnel or suspicious articles, its font color is set to red. The monitoring image path is linked to the monitoring image in the database 4, so clicking it retrieves the corresponding image from the database 4 and opens it in a browser; likewise, the monitoring video path is linked to the monitoring video on the server 3, so clicking it retrieves the corresponding video from the server 3, which the display module 7 opens in a browser.
The early warning module 8 issues an early warning according to the classification result in the monitoring image information: if the result is suspicious personnel or suspicious articles, the module plays a preset warning voice to prompt staff to confirm the monitoring image.
As shown in fig. 2, an intelligent monitoring method based on control of the internet of things comprises the following steps:
s1: the monitoring module 1 acquires a monitoring video;
s2: the intelligent module 2 gathers, cleans and distributes the monitoring video;
s3: the analysis module 6 trains the neural network model with the data set and, taking image frames of the monitoring video as input, obtains the classification result of the monitoring video;
s4: the early warning module 8 performs early warning based on the classification result.
The monitoring module 1 acquires the monitoring video, and the intelligent module 2 removes repetitive footage to obtain the cleaned monitoring video. The raw monitoring video is transmitted to the database 4 through the first communication module, and the cleaned monitoring video is transmitted to the server 3 through the second communication module. Segments in which the picture does not change for several consecutive seconds are deleted, reducing the storage and concurrent-processing load on the server 3 and improving data-processing and communication efficiency. The analysis module 6 trains the neural network model with the data set and, taking image frames of the monitoring video as input, obtains the classification result; the early warning module 8 issues an early warning based on that result. The method can thus clean and distribute monitoring video and classify it for early warning.

Claims (10)

1. The intelligent monitoring method based on the control of the Internet of things is characterized by comprising the following steps of:
S1: the monitoring module acquires a monitoring video;
S2: the intelligent module gathers, cleans and distributes the monitoring video;
S3: the analysis module trains a neural network model with the data set, and obtains a classification result of the monitoring video by taking image frames of the monitoring video as input;
S4: the early warning module carries out early warning based on the classification result.
2. The intelligent monitoring method based on control of the Internet of things according to claim 1, wherein in step S2, cleaning the monitoring video comprises the following steps:
S21: defining repetitive monitoring video;
S22: the intelligent module gathers the monitoring video;
S23: judging whether the monitoring video contains repetitive monitoring video; if so, deleting the repetitive monitoring video and splicing the remaining monitoring video to obtain the cleaned monitoring video; if not, taking the monitoring video as the cleaned monitoring video.
3. The intelligent monitoring method based on control of the Internet of things according to claim 1, wherein in step S3, the images of the data set are all 3-channel color images 32 pixels high and 32 pixels wide, the classes of the data set comprise normal personnel, normal articles, suspicious personnel and suspicious articles, and each class contains no fewer than 500 images.
4. The intelligent monitoring method based on the control of the internet of things according to claim 1, wherein in step S3, the analysis module trains the neural network model with the data set, comprising the steps of:
S31: loading and normalizing the data set;
S32: defining a convolutional neural network model;
S33: defining a loss function and an optimizer;
S34: training the neural network model;
S35: testing the neural network model.
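Steps S31 to S35 above follow the standard supervised-training loop. A minimal PyTorch sketch is given below; the patent does not name a framework, loss function, optimizer or hyperparameters, so those choices (PyTorch, cross-entropy, SGD, learning rate, epoch count) and the toy stand-in data and model are assumptions for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# S31: a toy stand-in for the loaded, normalized data set (the patent's
# data set uses 32x32 3-channel images in 4 classes; sizes here are
# illustrative assumptions).
inputs = torch.randn(64, 8)
labels = torch.randint(0, 4, (64,))

# S32: a minimal linear model stands in for the convolutional network.
model = nn.Linear(8, 4)

# S33: define the loss function and the optimizer.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# S34: training loop (full-batch gradient descent).
losses = []
for epoch in range(50):
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

# S35: test the model -- here, accuracy on the (toy) training data.
with torch.no_grad():
    accuracy = (model(inputs).argmax(dim=1) == labels).float().mean().item()
```

In practice S35 would use a held-out test split rather than the training data, and S32 would define the convolutional network recited in claim 5.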
5. The intelligent monitoring method based on control of the Internet of things according to claim 1 or 4, wherein in step S32, the first convolution layer has 3 input channels, 6 output channels and a convolution kernel size of 5;
the pooling layer halves the height and width of its output;
the second convolution layer has 6 input channels, 16 output channels and a convolution kernel size of 5;
the first fully connected layer has 400 inputs and 120 outputs;
the second fully connected layer has 120 inputs and 84 outputs;
and the third fully connected layer has 84 inputs and 10 outputs.
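The layer sizes recited in claim 5 describe a LeNet-style network, and they are mutually consistent for the 32×32 inputs of claim 3: conv(5) takes 32→28, pooling halves to 14, conv(5) takes 14→10, pooling halves to 5, and 16·5·5 = 400 matches the first fully connected layer. A sketch in PyTorch (the framework is not specified by the patent; the class and variable names are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonitorNet(nn.Module):
    """Convolutional network with the layer sizes recited in claim 5."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)    # 3 in, 6 out, kernel size 5
        self.pool = nn.MaxPool2d(2, 2)     # halves height and width
        self.conv2 = nn.Conv2d(6, 16, 5)   # 6 in, 16 out, kernel size 5
        self.fc1 = nn.Linear(400, 120)     # 16 * 5 * 5 = 400 inputs
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):                     # x: (N, 3, 32, 32)
        x = self.pool(F.relu(self.conv1(x)))  # -> (N, 6, 14, 14)
        x = self.pool(F.relu(self.conv2(x)))  # -> (N, 16, 5, 5)
        x = torch.flatten(x, 1)               # -> (N, 400)
        x = F.relu(self.fc1(x))               # -> (N, 120)
        x = F.relu(self.fc2(x))               # -> (N, 84)
        return self.fc3(x)                    # -> (N, 10)

net = MonitorNet()
out = net(torch.zeros(1, 3, 32, 32))
```

Note that the final layer has 10 outputs although claim 3 defines only 4 classes; the sketch keeps the 10 outputs exactly as claimed.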
6. The intelligent monitoring method based on control of the Internet of things according to claim 1 or 4, wherein the cleaned monitoring video is read from its storage path through OpenCV, converted into consecutive image frames, and the image frames are stored as monitoring images in jpg format.
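The frame extraction of claim 6 maps directly onto OpenCV's `VideoCapture`/`imwrite` API. A sketch follows; the file-naming scheme and directory layout are assumptions, not taken from the patent, and `cv2` is imported inside the function so the pure naming helper works without OpenCV installed:

```python
import os

def frame_name(index):
    """jpg file name for a monitoring image (naming scheme is an assumption)."""
    return "frame_%06d.jpg" % index

def extract_frames(video_path, out_dir):
    """Read the cleaned monitoring video from its path with OpenCV and
    save every frame as a jpg monitoring image, as described in claim 6.
    Returns the number of frames written."""
    import cv2  # local import: the helper above stays usable without OpenCV
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()   # ok is False once the video is exhausted
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, frame_name(index)), frame)
        index += 1
    cap.release()
    return index
```

The resulting jpg files are the monitoring images that the analysis module feeds to the neural network as input.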
7. The intelligent monitoring method based on control of the Internet of things according to claim 1, wherein the display module displays the live monitoring video in round-robin pages, with a page-switching period of 10 to 30 seconds.
8. An intelligent monitoring system based on control of the Internet of things, adapted to the intelligent monitoring method based on control of the Internet of things according to any one of claims 1 to 7, characterized by comprising:
the monitoring module is used for acquiring a monitoring video;
the intelligent module is used for gathering, cleaning and distributing the monitoring video, and is connected with the monitoring module;
the server is used for collecting and processing the monitoring video and is connected with the intelligent module;
the database is used for storing the monitoring video, the monitoring image information and the data set and is connected with the server;
the setting module is used for setting a data set and connecting the data set with the database;
the analysis module is used for training the neural network model and classifying the monitoring video to obtain a classification result, and is connected with the server and the database;
the display module is used for displaying the monitoring video and the monitoring image information and is connected with the server;
and the early warning module is used for early warning based on the classification result and is connected with the server.
9. The intelligent monitoring system based on control of the Internet of things according to claim 8, wherein a power assembly of the monitoring module is connected with a camera inside a housing, a transmission chain is fitted around two transmission wheels rotatably mounted in the housing, and a motor mounted on the housing is connected with one of the transmission wheels.
10. The intelligent monitoring system based on internet of things control according to claim 8 or 9, wherein the intelligent module transmits the monitoring video to the database through the first communication module, and transmits the cleaned monitoring video to the server through the second communication module.
CN202310060355.9A 2023-01-18 2023-01-18 Intelligent monitoring method and system based on control of Internet of things Pending CN116193075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310060355.9A CN116193075A (en) 2023-01-18 2023-01-18 Intelligent monitoring method and system based on control of Internet of things

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310060355.9A CN116193075A (en) 2023-01-18 2023-01-18 Intelligent monitoring method and system based on control of Internet of things

Publications (1)

Publication Number Publication Date
CN116193075A 2023-05-30

Family

ID=86447175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310060355.9A Pending CN116193075A (en) 2023-01-18 2023-01-18 Intelligent monitoring method and system based on control of Internet of things

Country Status (1)

Country Link
CN (1) CN116193075A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117061711A (en) * 2023-10-11 2023-11-14 深圳市爱为物联科技有限公司 Video monitoring safety management method and system based on Internet of things
CN117221494A (en) * 2023-10-07 2023-12-12 杭州讯意迪科技有限公司 Audio and video comprehensive management and control platform based on Internet of things and big data
CN117061711B (en) * 2023-10-11 2024-07-09 深圳市爱为物联科技有限公司 Video monitoring safety management method and system based on Internet of things


Similar Documents

Publication Publication Date Title
CN110084165B (en) Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation
CN101846576B (en) Video-based liquid leakage analyzing and alarming system
CN116193075A (en) Intelligent monitoring method and system based on control of Internet of things
CN109377713B (en) Fire early warning method and system
CN111757096A (en) Video operation and maintenance management system and method
CN112770088A (en) AI video linkage perception monitoring system
CN110211052A (en) A kind of single image to the fog method based on feature learning
CN101605273B (en) Method and subsystem for evaluating colour saturation quality
CN110619460A (en) Classroom quality assessment system and method based on deep learning target detection
CN114819627A (en) High-definition electronic screen production quality intelligent monitoring analysis system based on machine vision
CN211184122U (en) Intelligent video analysis system for linkage of railway operation safety prevention and control and large passenger flow early warning
CN116456075A (en) Automatic inspection system for monitoring video quality
CN115116004A (en) Office area abnormal behavior detection system and method based on deep learning
CN112906488A (en) Security protection video quality evaluation system based on artificial intelligence
CN102413355A (en) Detecting method for video signal deletion in video quality diagnostic system
CN103530864A (en) Environment-friendly video monitoring and blackness analyzing system for inorganized discharge of smoke
CN112148555A (en) Intelligent reading and identifying method and system for fault warning information
CN115309871B (en) Industrial big data processing method and system based on artificial intelligence algorithm
CN111541877A (en) Automatic monitoring system for substation equipment
CN110991243A (en) Straw combustion identification method based on combination of color channel HSV and convolutional neural network
CN116704440A (en) Intelligent comprehensive acquisition and analysis system based on big data
CN114841932A (en) Foreign matter detection method, system, equipment and medium for photovoltaic panel of photovoltaic power station
CN116310928A (en) Cloud edge joint calculation intelligent video identification method and application
CN115002448A (en) Video image quality diagnosis method and system applied to security monitoring
CN115361531A (en) Intelligent ring main unit monitoring system based on remote video monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination