CN114429617A - Abnormal recognition result processing method applied to smart city box body detection - Google Patents


Info

Publication number
CN114429617A
CN114429617A (application CN202210095823.1A)
Authority
CN
China
Prior art keywords: door, picture, steps, following, marked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210095823.1A
Other languages
Chinese (zh)
Inventor
梁帅
王璇
孙中光
孙维
张建鑫
张宇
彭迈
王超
郜栋
蒋波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Smart City Science And Technology Research Institute Co ltd
CCTEG Chongqing Research Institute Co Ltd
Original Assignee
Chongqing Smart City Science And Technology Research Institute Co ltd
CCTEG Chongqing Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Smart City Science And Technology Research Institute Co ltd, CCTEG Chongqing Research Institute Co Ltd filed Critical Chongqing Smart City Science And Technology Research Institute Co ltd
Priority to CN202210095823.1A priority Critical patent/CN114429617A/en
Publication of CN114429617A publication Critical patent/CN114429617A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/243 - Classification techniques relating to the number of classes
    • G06F18/2433 - Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to the technical field of video monitoring, and in particular discloses an abnormal recognition result processing method applied to smart city box body detection, comprising the following steps: S1, acquiring video data in real time and framing the video data to generate a plurality of pictures; S2, inputting the pictures into a neural network model for judgment to obtain a judgment result; S3, when the judgment result is that the door is opened, judging whether the door is opened abnormally based on an operation log; and S4, if the door is opened abnormally, marking a preset identifier on the picture. With the technical scheme of the invention, abnormal conditions that have occurred can be searched quickly.

Description

Abnormal recognition result processing method applied to smart city box body detection
Technical Field
The invention relates to the technical field of video monitoring, and in particular to an abnormal recognition result processing method applied to smart city box body detection.
Background
In the construction of smart cities, some dangerous areas and important places must not be entered by ordinary people, such as box-type transformer stations carrying high-voltage electricity and microcomputer rooms housing important equipment. To avoid personal safety accidents or loss of public equipment caused by abnormal opening of a box body, the abnormal opening state of the box body needs to be monitored and alarmed in real time.
The conventional scheme, at home and abroad, for monitoring abnormal door opening of a box body is to use a door magnetic sensor. This scheme can only monitor the open or closed state of the box door and cannot analyze a door opening scene in real time, which greatly limits it. For example, the box body may lack the conditions for installing a door magnet, or the door magnet data may not be uploadable to a remote control center; the scene cannot be intelligently identified, so normal opening and abnormal opening cannot be distinguished; and the door magnetic element is easily damaged, in which case door opening of the box body can no longer be monitored normally.
In other schemes, video equipment is linked: video monitoring equipment is installed to monitor the box body continuously, 24 hours a day. When the door magnetic sensor detects that the box body is opened abnormally, an alarm message is pushed, and a manager then opens the corresponding video monitoring equipment to browse in real time or play back historical video data. However, this monitoring method requires manual processing and has low working efficiency.
Therefore, the applicant has filed, on the same day, an application for a computer-vision-based box body abnormal state identification method, in which the door state is automatically identified by a trained neural network model. However, when a user needs to review the results, the picture or video corresponding to an abnormal door opening cannot be found intuitively and quickly among all the pictures or videos.
Therefore, an abnormal recognition result processing method applied to smart city box body detection that supports fast retrieval is needed.
Disclosure of Invention
The invention provides an abnormal recognition result processing method applied to smart city box body detection, which enables abnormal conditions that have occurred to be searched quickly.
In order to solve the technical problem, the present application provides the following technical solutions:
The abnormal recognition result processing method applied to smart city box body detection comprises the following steps:
S1, acquiring video data in real time, and performing framing processing on the video data to generate a plurality of pictures;
s2, inputting the picture into the neural network model for judgment to obtain a judgment result;
S3, when the judgment result is that the door is opened, judging whether the door is opened abnormally based on an operation log;
and S4, if the door is opened abnormally, marking a preset mark on the picture.
The basic scheme principle and the beneficial effects are as follows:
according to the scheme, after the video data are acquired in real time, the frame division processing is carried out, the video data are converted into the pictures of one frame and one frame, and then the pictures are input into the neural network model to be judged, so that the judgment result of whether the box body is opened or not can be obtained. When the judgment result is that the door is opened, whether the door is opened abnormally can be accurately judged through the operation log. And finally, marking a preset identifier on the picture with the abnormal door opening. When the user rechecks, the user can conveniently and quickly search and determine the corresponding picture when the door opening abnormity occurs.
Further, in S2, position information of the door is also obtained from the neural network model;
in S4, if the door is opened abnormally, the door is marked on the picture based on the position information of the door.
Marking the box door allows a user to quickly locate it, and also to recheck whether the recognition of the neural network model is accurate.
Further, the method includes step S5, re-integrating the marked pictures and the unmarked pictures into the video stream.
When a user plays back the video, abnormal door openings are hard to miss even if the user drags through the video.
Further, in S2, the picture is also preprocessed, and the preprocessed picture is input into the neural network model.
Preprocessing the picture makes it easier for the neural network model to perform recognition.
Further, in S3, the time of the door opening is also obtained, and it is determined whether a door opening plan corresponding to that time is recorded in the operation log; if not, the door opening is judged abnormal.
When staff perform normal inspection or maintenance, a door opening plan is recorded in the operation log. When no door opening plan corresponding to the time exists in the operation log, the operation was not performed by staff, so the door opening can be judged abnormal.
Further, in S4, a labeling parameter is further obtained, and a preset identifier is labeled on the picture based on the labeling parameter, where the labeling parameter includes an identifier size, an identifier position, and an identifier transparency.
This makes the identifier easy to adjust.
Further, in S4, the box is marked on the picture based on the position information of the box.
Further, in S4, the box door and the box body are marked as being selected by a rectangular frame.
Further, in S4, a preset identifier is marked on the picture through FFmpeg.
Drawings
Fig. 1 is a flowchart illustrating an anomaly identification result processing method applied to smart city box detection according to an embodiment.
Detailed Description
The following is further detailed by way of specific embodiments:
example one
As shown in fig. 1, the method for processing an abnormal recognition result applied to box detection in a smart city of the present embodiment includes the following steps:
and S1, acquiring video data in real time, and performing framing processing on the video data to generate a plurality of pictures.
In this embodiment, video data in H.264 format is acquired and decoded by FFmpeg, and the decoded video data is framed using a segmentation component in OpenCV.
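As a hedged illustration of this framing step, FFmpeg itself can extract one picture per second via its fps filter; the helper name, file paths, and output pattern below are hypothetical, and the embodiment itself frames with an OpenCV component after decoding.

```python
def ffmpeg_framing_cmd(src: str, out_pattern: str, fps: float = 1.0) -> list[str]:
    # Hypothetical helper: build an FFmpeg command that decodes the H.264
    # input and writes one picture every 1/fps seconds using the fps filter.
    return ["ffmpeg", "-i", src, "-vf", f"fps={fps}", out_pattern]

# One picture per second of video from a (hypothetical) camera recording:
cmd = ffmpeg_framing_cmd("camera01.h264", "frames/%06d.jpg", fps=1.0)
# subprocess.run(cmd, check=True)  # would invoke FFmpeg if it is installed
```

The command string is only assembled here, not executed, so the sketch stays self-contained.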
And S2, preprocessing the picture, and inputting the preprocessed picture into the neural network model for judgment to obtain the judgment result, the position information of the box door, and the position information of the box body. In this embodiment, during preprocessing, the picture is converted into a grayscale image, the grayscale image is decomposed into eight bit planes, and one of the bit-plane images is extracted and input into the neural network model for judgment.
And S3, when the judgment result is that the door is opened, acquiring the time of the door opening, judging whether a door opening plan corresponding to that time is recorded in the operation log, and if not, judging that the door is opened abnormally.
In this embodiment, before staff carry out inspection, maintenance, and similar operations, there is a corresponding inspection schedule or maintenance schedule; the entries that involve entering the box body can be extracted from it and added to the operation log as door opening plans, which serve as the basis for the judgment.
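One way to read this judgment: an opening is normal only if a planned opening recorded in the operation log covers the observed time. The sketch below assumes a hypothetical log format of (start, end) datetime pairs and a small tolerance margin; neither detail is specified in the text.

```python
from datetime import datetime, timedelta

def is_abnormal_opening(open_time, door_open_plans, tolerance_minutes=5):
    """Return True when no recorded door opening plan covers `open_time`.

    `door_open_plans` is a hypothetical list of (start, end) datetimes
    extracted from inspection/maintenance schedules into the operation log.
    """
    margin = timedelta(minutes=tolerance_minutes)
    for start, end in door_open_plans:
        if start - margin <= open_time <= end + margin:
            return False  # covered by a recorded plan: normal opening
    return True  # no matching plan: judged abnormal

# A single planned maintenance window on the filing-era date, for illustration:
plans = [(datetime(2022, 1, 26, 9, 0), datetime(2022, 1, 26, 10, 0))]
```

An opening at 09:30 falls inside the plan and is normal; one at 14:00 has no matching plan and is judged abnormal.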
S4, if the door is opened abnormally, marking the door on the picture based on the position information of the door; and marking the box body on the picture based on the position information of the box body. In this embodiment, the box door and the box body are marked as being selected through the rectangular frame. And obtaining marking parameters, and marking preset marks on the pictures based on the marking parameters, wherein the marking parameters comprise the size of the marks, the positions of the marks and the transparency of the marks.
In this embodiment, the size of the identifier means its length and width; the position of the identifier means its distance from the left edge and from the upper edge of the picture; and the transparency of the identifier is expressed as a percentage, e.g., 100% transparency means the identifier is completely invisible. The identifier is a picture formed by rasterizing the words "abnormal door opening".
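The rectangle marking and the sized, positioned, semi-transparent identifier can be sketched with NumPy as below. The embodiment performs the actual drawing through FFmpeg; this function is only an illustrative stand-in, with transparency 1.0 (100%) making the identifier invisible as the text describes.

```python
import numpy as np

def mark_picture(img, box, mark, mark_pos, transparency):
    """Frame the detected region with a red rectangle and alpha-blend a
    rasterized identifier (e.g. the words "abnormal door opening") onto it.

    img: H x W x 3 uint8 picture (BGR); box: (x, y, w, h) from the model;
    mark: h x w x 3 uint8 identifier image; mark_pos: (x, y) offsets from
    the left and upper edges; transparency: 0.0 (opaque) to 1.0 (invisible).
    """
    out = img.copy()
    x, y, w, h = box
    red = (0, 0, 255)                 # red in BGR channel order
    out[y, x:x + w] = red             # top edge of the rectangular frame
    out[y + h - 1, x:x + w] = red     # bottom edge
    out[y:y + h, x] = red             # left edge
    out[y:y + h, x + w - 1] = red     # right edge
    mx, my = mark_pos
    mh, mw = mark.shape[:2]
    region = out[my:my + mh, mx:mx + mw].astype(np.float32)
    blended = transparency * region + (1.0 - transparency) * mark
    out[my:my + mh, mx:mx + mw] = blended.astype(np.uint8)
    return out
```

A real pipeline would rasterize the label text once and reuse it across frames.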
And S5, reintegrating the marked pictures and the unmarked pictures into a video stream. In this embodiment, the box door, the box body and the preset identifier are marked on the picture through the FFmpeg. The marked and unmarked pictures are also reintegrated into a video stream by FFmpeg.
In order to make a judgment by using the neural network model, the embodiment further provides a training method of the neural network model, which includes the following steps:
and A1, acquiring the video data in the preset format and decoding. In this embodiment, the preset format is an h.264 format, and decoding is performed by FFMpeg. In order to reduce the volume of the video and reduce the occupied bandwidth during transmission, the video is subjected to coding compression so as to reduce the volume. In the h.264 format, the encoding algorithm is intra-frame compression and inter-frame compression, the intra-frame compression is an algorithm for generating an I frame, the inter-frame compression is an algorithm for generating a B frame and a P frame, wherein the I frame is a key frame, the recorded information of the P frame is the difference between the frame and a previous key frame (or P frame), the B frame is a bidirectional difference frame, and the recorded information is the difference between the frame and previous and next frames. Decoding is required to recover each frame of video data.
And A2, framing the decoded video data according to a first preset rule to generate a plurality of pictures. In this embodiment, a segmentation component in OpenCV is used for framing. The first preset rule includes the processing-frame interval, the size of the pictures, and the storage path of the pictures. The processing-frame interval specifies after how many frames one picture is extracted. For example, at a frame rate of 25 fps and an interval of 25 frames, one picture is generated per second.
And A3, preprocessing the pictures, classifying and labeling them, and constructing a training picture set. The labels include door opened and door not opened, as well as the box door and the box body.
During preprocessing, the picture is converted into a grayscale image, the grayscale image is decomposed into eight bit planes, and one bit-plane image is extracted for classification and labeling. A grayscale image is an 8-bit, 256-level image; by extracting each bit of the image separately, one image can be decomposed into 8 bit-plane images.
In this embodiment, during preprocessing, after the grayscale image is decomposed into eight bit planes, several bit-plane images are discarded according to a preset standard, and one of the remaining bit-plane images is randomly extracted for classification and labeling. The preset standard is to discard the bit-plane images that do not contain valid information. For example, the 8 bit-plane images formed by the decomposition are denoted bit plane 1 through bit plane 8. According to experience, bit plane 1 and bit plane 2 usually contain enough valid information for training, so the preset standard is to discard bit planes 3 to 8 and randomly extract bit plane 1 or bit plane 2 for classification and labeling.
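Bit-plane decomposition is a one-liner per plane. The NumPy sketch below indexes planes 0 to 7 rather than the text's 1 to 8, and which planes carry enough valid information remains the empirical choice the embodiment describes.

```python
import numpy as np

def bit_planes(gray: np.ndarray) -> list[np.ndarray]:
    """Decompose an 8-bit grayscale image into its 8 bit-plane images.

    Plane k holds bit k of every pixel (0 or 1). In the embodiment, the
    planes judged to carry enough valid information are kept for training
    and the rest are discarded according to the preset standard.
    """
    return [(gray >> k) & 1 for k in range(8)]

# Two pixels: 0b10110010 (178) and 0b00000001 (1).
gray = np.array([[0b10110010, 0b00000001]], dtype=np.uint8)
planes = bit_planes(gray)
```

Each plane can be rescaled (multiplied by 255) if it is to be viewed or fed to a model expecting full-range inputs.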
A4, inputting the training picture set into a neural network model for training; the neural network model is a convolutional neural network model. In this embodiment, the convolutional neural network model is a convolutional neural network model component in OpenCV.
Example two
The difference between this embodiment and the first is that, in S1 of this embodiment, a face recognition module also determines whether there is a person in the monitored area. When a person is present, face recognition is performed to judge whether the person is on record; if not, the corresponding video data is marked as high-priority video data, and if so, it is marked as low-priority video data. During framing, high-priority video data from cameras in the preset area is framed preferentially.
When no person is in the monitored area, the box door is unlikely to be opened abnormally, so pictures are extracted at preset time intervals and input into the neural network model rather than judged in real time, which reduces the processing load on the neural network model. The face data of staff are collected in advance, so staff can be accurately identified through face recognition. When a staff member appears in the monitored area, i.e., near the box body, maintenance or inspection is likely; the corresponding video data is marked as low-priority and need not be processed immediately when the framing module is busy.
EXAMPLE III
The difference between this embodiment and the first embodiment is that, in this embodiment, S1 further includes:
s101, slicing: slicing video data to generate a description file and a plurality of media segments; the description file is used to record the shooting date, the total time length of the video data, and the number and time length of each media clip. In this embodiment, video data is sliced in milliseconds. In this embodiment, the description file is an m3u8 file, and the media segment is a ts file. For example, the total time length of video data is 10 seconds, the slice is 10 ts files, the time length of a single ts file is 1 second, and the ts files are numbered from 001 to 010. The shooting date is, for example, 2021-7-15-12:01:00: 001.
S102, identification step: performing binarization processing on the media segment, and judging whether the binarized media segment contains a preset identification object; if so, jumping to the correction step, and if not, jumping to the storage step. In this embodiment, binarization is performed on each frame of the media segment. The preset identification objects include persons, animals, and the like.
S103, correction step: performing noise reduction processing on the media segment. In this embodiment, the media segment is first converted to grayscale, and Gaussian filtering noise reduction is then performed.
S104, a storage step: storing the description file and the media segment after the noise reduction processing.
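A minimal sketch of the identification step's binarization and the correction step's Gaussian noise reduction, written in NumPy. A real implementation would likely use library routines (for instance OpenCV's threshold and Gaussian blur functions); the fixed threshold and the 3x3 kernel here are assumptions.

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Binarize an 8-bit grayscale frame: 255 at or above threshold, else 0."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

def gaussian3x3(gray: np.ndarray) -> np.ndarray:
    """Smooth a grayscale frame with a 3x3 Gaussian kernel (1-2-1 outer
    product, normalized by 16), using edge padding at the borders."""
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float32) / 16.0
    p = np.pad(gray.astype(np.float32), 1, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(3):          # accumulate the 9 shifted, weighted copies
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out.astype(np.uint8)
```

In the embodiment, binarization only gates the decision of whether a segment contains a preset identification object; the actual object detection is a separate step.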
This embodiment stores the processed video data for convenient later retrieval. Because video data consists of consecutive frames, slicing cannot separate them perfectly: several frames may be lost between two consecutive media segments, degrading picture quality. Denoising the sliced media segments can effectively improve picture quality, but denoising all media segments would consume substantial computing resources. Moreover, when no person or animal passes through the camera's monitoring range, the picture is static and the differences between frames are small, so even losing several frames has little effect on picture quality. In this embodiment, the preset identification object may be a person or an animal, depending on the monitored object. For example, if the preset identification object is an animal, a media segment containing an animal is denoised: the animal's appearance increases the differences between frames, so lost frames would degrade picture quality more, and denoising improves the clarity of the picture to offset the possible frame loss. Furthermore, media segments in which animals appear are more likely to be retrieved and viewed by staff later, and denoising improves the viewing experience.
In the identification step of S102 in this embodiment, if the preset identification object is included, the type of the preset identification object is also determined; and determining a first preset quantity value based on the type of the preset identification object, omitting the identification step for the media segments with the first preset quantity behind the media segments, and directly performing the correction step.
Because the monitoring range of a camera is fixed, under normal conditions the time a person or animal takes to pass beneath the camera also falls within a certain range (animals usually move faster, taking less time than people). When an animal or person appears in the monitoring range, this is reflected in the media segment, i.e., the media segment contains the preset identification object. The animal or person needs a certain time to leave the monitoring range, and during that time the following media segments are very likely to contain it as well; omitting the identification step for them and proceeding directly to the correction step therefore simplifies identification and saves computing resources. In this embodiment, the first preset number is determined comprehensively from the estimated average speed of the preset identification object, the duration of a media segment, and the monitoring range of the camera. The estimated average speed can be determined based on the installation position of the camera.
Further, in this embodiment, the actual average speed of the preset identification object is calculated from its moving distance across the media segments, and it is judged whether the ratio of the absolute difference between the actual and estimated average speeds to the estimated average speed exceeds a threshold value, i.e., whether |V1 - V2| / V2 is greater than the threshold, where V1 is the actual average speed and V2 is the estimated average speed.
If the ratio is greater than the threshold, the first preset number is determined comprehensively from the estimated average speed of the preset identification object, the duration of a media segment, and the monitoring range of the camera. If the ratio is less than or equal to the threshold, the first preset number is determined from the actual average speed of the preset identification object, the duration of a media segment, and the monitoring range of the camera. In this embodiment, the threshold is 20%.
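The fallback rule can be expressed directly. The helper below is an illustrative sketch: the 20% threshold default follows the text, while the function name and signature are assumptions.

```python
def choose_average_speed(actual: float, estimated: float, threshold: float = 0.20) -> float:
    """Pick the speed that feeds the first-preset-number calculation.

    If |V1 - V2| / V2 exceeds the threshold, the measured speed is treated
    as unreliable and the estimated speed is used; otherwise the actual
    measured speed is used.
    """
    ratio = abs(actual - estimated) / estimated
    return estimated if ratio > threshold else actual
```

With a 20% threshold, a measurement deviating 50% from the estimate falls back to the estimate, while a 10% deviation is trusted.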
The above are merely examples of the present invention. The common general knowledge of known specific structures and characteristics in the schemes is not described here in detail; those skilled in the art know the common technical knowledge in the field before the application date or priority date, have access to the prior art in the field, and have the ability to apply conventional experimental means, and can, in light of the teaching provided in this application, perfect and implement the scheme by combining their own abilities. Some typical known structures or known methods should not become barriers to implementation of the present invention by those skilled in the art. It should be noted that those skilled in the art can make several changes and modifications without departing from the structure of the present invention; these should also be regarded as within the protection scope of the present invention and will not affect the effect of the implementation of the invention or the practicability of the patent. The scope of protection of this application is determined by the content of the claims, and the description of the embodiments in the specification may be used to interpret the content of the claims.

Claims (9)

1. The abnormal recognition result processing method applied to the smart city box body detection is characterized by comprising the following steps of:
s1, acquiring video data in real time, and performing framing processing on the video data to generate a plurality of pictures;
s2, inputting the picture into a neural network model for judgment to obtain a judgment result;
s3, when the judgment result is that the door is opened, judging whether the door is opened abnormally or not based on the operation log;
and S4, if the door is opened abnormally, marking a preset mark on the picture.
2. The method of claim 1, wherein the method comprises the following steps: in S2, the position information of the door is also obtained from the neural network model;
in S4, if the door is opened abnormally, the door is marked on the picture based on the position information of the door.
3. The method of claim 2, wherein the method comprises the following steps: and S5, reintegrating the marked pictures and the unmarked pictures into a video stream.
4. The method of claim 3, wherein the method comprises the following steps: in S2, the picture is further preprocessed, and the preprocessed picture is input into the neural network model.
5. The abnormal recognition result processing method applied to smart city box detection as recited in claim 4, wherein: in S3, the time of opening the door is also obtained, and it is determined whether or not a door opening plan corresponding to the time is recorded in the operation log, and if not, it is determined that the door is opened abnormally.
6. The method of claim 5, wherein the method comprises the following steps: in S4, a labeling parameter is further obtained, and a preset identifier is labeled on the picture based on the labeling parameter, where the labeling parameter includes an identifier size, an identifier position, and an identifier transparency.
7. The method of claim 6, wherein the method comprises the following steps: in S4, the box is marked on the picture based on the position information of the box.
8. The method of claim 7, wherein the method comprises the following steps: in S4, the box door and the box body are marked as being selected through the rectangular frame.
9. The method of claim 1, wherein the method comprises the following steps: in S4, a preset identifier is marked on the picture through FFmpeg.
CN202210095823.1A 2022-01-26 2022-01-26 Abnormal recognition result processing method applied to smart city box body detection Pending CN114429617A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210095823.1A CN114429617A (en) 2022-01-26 2022-01-26 Abnormal recognition result processing method applied to smart city box body detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210095823.1A CN114429617A (en) 2022-01-26 2022-01-26 Abnormal recognition result processing method applied to smart city box body detection

Publications (1)

Publication Number Publication Date
CN114429617A (en) 2022-05-03

Family

ID=81313636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210095823.1A Pending CN114429617A (en) 2022-01-26 2022-01-26 Abnormal recognition result processing method applied to smart city box body detection

Country Status (1)

Country Link
CN (1) CN114429617A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115352A (en) * 2022-08-29 2022-09-27 山东中鲁实业有限公司 Public equipment operation control system based on digital city operation management service

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160247025A1 (en) * 2013-10-30 2016-08-25 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Terminal and Method for Managing Video File
US20190087646A1 (en) * 2017-09-20 2019-03-21 Google Llc Systems and Methods of Detecting and Responding to a Visitor to a Smart Home Environment
CN111401284A (en) * 2020-03-23 2020-07-10 武汉大学 Door opening and closing state identification method based on image processing
CN111428083A (en) * 2020-03-19 2020-07-17 平安国际智慧城市科技股份有限公司 Video monitoring warning method, device, equipment and storage medium
EP3754538A1 (en) * 2019-06-17 2020-12-23 Interel Pte. Ltd. Detection of door opening and closing events
WO2021042547A1 (en) * 2019-09-04 2021-03-11 平安科技(深圳)有限公司 Behavior identification method, device and computer-readable storage medium
CN113177475A (en) * 2021-04-29 2021-07-27 上海大学 Building site distribution box on-off state identification method based on deep learning target detection
CN113393625A (en) * 2021-05-08 2021-09-14 中电海康集团有限公司 Anti-theft alarm evidence obtaining method and system for intelligent lamp pole



Similar Documents

Publication Publication Date Title
CN101860731B (en) Video information processing method, system and server
US7609941B2 (en) Multimedia data recording apparatus, monitor system, and multimedia data recording method
CN1254108C (en) Method and apparatus to select the best video frame to transmit to a remote station for closed circuit television (CCTV)based residential area security monitoring
KR20200058260A (en) Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN and Driving Method Thereof
CA2716637A1 (en) Video analytics with pre-processing at the source end
CN101764998A (en) Alarm interlocking method in network video monitoring system and network video monitoring system
US10657783B2 (en) Video surveillance method based on object detection and system thereof
CN114500950A (en) Box abnormal state detection system and method based on smart city
CN109488383B (en) Intelligent video analysis system and application method thereof
CN109389794A (en) A kind of Intellectualized Video Monitoring method and system
CN116055690B (en) Method and equipment for processing machine room monitoring video
CN114429617A (en) Abnormal recognition result processing method applied to smart city box body detection
CN117319609A (en) Internet of things big data intelligent video monitoring system and method
CN111178241A (en) Intelligent monitoring system and method based on video analysis
CN211184122U (en) Intelligent video analysis system for linkage of railway operation safety prevention and control and large passenger flow early warning
CN117612060A (en) Video early warning system, method, equipment and medium based on artificial intelligent detection
CN114429616A (en) Box abnormal state identification method based on computer vision
CN114390260A (en) Hazardous area and important place monitoring platform applied to smart city
CN106803795B (en) Video monitoring system fault identification, positioning and warning system and method based on detection frame
CN112804492B (en) Communication prompting method and device for electronic peepholes
CN113378633A (en) Method and system for detecting quality of streaming media signal
CN116962650B (en) Video playback method and system for monitoring equipment
CN113536847A (en) Industrial scene video analysis system and method based on deep learning
CN114205667B (en) Broadcast television broadcast abnormal picture identification and fault analysis system
CN113763659A (en) Mine area personnel invasion monitoring system based on visual identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination