CN114390260A - Hazardous area and important place monitoring platform applied to smart city

Info

Publication number
CN114390260A
CN114390260A (application CN202210095825.0A)
Authority
CN
China
Prior art keywords
preset
user terminal
module
door
pushing
Prior art date
Legal status
Pending
Application number
CN202210095825.0A
Other languages
Chinese (zh)
Inventor
梁帅
孙维
钟宇
文学峰
戴书球
谭一川
王璇
张建鑫
冉昆鹏
赵庆川
Current Assignee
Chongqing Smart City Science And Technology Research Institute Co ltd
CCTEG Chongqing Research Institute Co Ltd
Original Assignee
Chongqing Smart City Science And Technology Research Institute Co ltd
CCTEG Chongqing Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Smart City Science And Technology Research Institute Co ltd and CCTEG Chongqing Research Institute Co Ltd
Priority to CN202210095825.0A
Publication of CN114390260A
Legal status: Pending (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 Status alarms
    • G08B 21/24 Reminder alarms, e.g. anti-loss alarms

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Human Computer Interaction (AREA)
  • Alarm Systems (AREA)

Abstract

The invention relates to the technical field of video monitoring and in particular discloses a monitoring platform for hazardous areas and important places in smart cities. The platform comprises a monitoring terminal, a server and a user terminal. The monitoring terminal collects video data of a preset area. The server acquires the video data from the monitoring terminal in real time, performs framing to generate a plurality of pictures, and inputs the pictures into a neural network model to obtain a judgment result. When the judgment result is that the door is opened, the server judges, based on the operation log, whether the opening is abnormal; if so, it marks a preset identifier on the affected pictures and re-integrates the pictures into a video stream. The server also pushes the video stream to a preset user terminal. With this technical scheme, managers can conveniently learn of the on-site situation in time.

Description

Hazardous area and important place monitoring platform applied to smart city
Technical Field
The invention relates to the technical field of video monitoring, and in particular to a monitoring platform for hazardous areas and important places in smart cities.
Background
In smart-city construction, some hazardous areas and important places, such as box-type transformer substations carrying high-voltage electricity and microcomputer rooms housing important equipment, are off-limits to the general public. To avoid personnel safety accidents or loss of public equipment caused by abnormal opening of such boxes, the abnormal opening state of the box must be monitored and alarmed in real time.
The conventional scheme at home and abroad for monitoring abnormal door opening of such boxes uses a door magnetic sensor. This scheme can only monitor the open/closed state of the box door and cannot analyze the door-opening scene in real time, so it has great limitations. For example, the box may lack the conditions for installing a door sensor, or the sensor data may not be uploadable to a remote control center; the scene cannot be intelligently identified, so normal opening cannot be distinguished from abnormal opening; and the door magnetic element is easily damaged, after which door opening can no longer be monitored.
Other schemes link video equipment: video monitoring equipment is installed to monitor the box continuously around the clock; when the door magnetic sensor detects abnormal opening, an alarm message is pushed, and a manager then opens the corresponding video equipment to browse in real time or play back historical video. However, this method requires manual handling and its working efficiency is low.
Therefore, a monitoring platform for hazardous areas and important places in smart cities that allows managers to learn of on-site conditions in time is needed.
Disclosure of Invention
The invention provides a monitoring platform for hazardous areas and important places in smart cities, which allows managers to learn of on-site conditions in time.
In order to solve the above technical problem, the present application provides the following technical solution:
A monitoring platform for hazardous areas and important places in smart cities comprises a monitoring terminal for acquiring video data of a preset area, a server and a user terminal;
the server is used for acquiring the video data collected by the monitoring terminal in real time, performing framing processing to generate a plurality of pictures, and inputting the pictures into a neural network model for judgment to obtain a judgment result; when the judgment result is that the door is opened, judging whether the door is opened abnormally based on the operation log, and if the door is opened abnormally, marking a preset identifier on the pictures and re-integrating the pictures into a video stream;
the server is also used for pushing the video stream to a preset user terminal.
The principle and beneficial effects of this basic scheme are as follows:
In this scheme, video data of preset areas, such as a box-type transformer substation carrying high-voltage electricity or a microcomputer room housing important equipment, can be collected in real time by the monitoring terminal. The server frames the video data, converting it into individual pictures, and inputs the pictures into the neural network model to judge whether the box has been opened. When the judgment result is that the door is opened, the operation log allows an accurate judgment of whether the opening is abnormal. Finally, pictures showing an abnormal opening are marked with a preset identifier, and the re-integrated video stream is sent to a preset user terminal, which may be the terminal used by a manager. This makes it convenient for managers to learn of the on-site situation in time, and the preset identifier lets a manager quickly locate the moment of the abnormal door opening when reviewing the video.
Further, the server comprises an identification module and a pushing module. The identification module judges, from the video data, whether a person is present in the preset area; when a person is present, it performs face recognition to judge whether the person belongs to the staff. If so, the pushing module obtains the judgment result of whether the door is opened abnormally and, if the opening is abnormal, pushes the re-integrated video stream to the user terminal corresponding to that worker.
In this way, the worker is reminded that his or her current door-opening behavior is not recorded in the operation log.
Further, the user terminal also receives a verification request, performs identity verification, and generates an identification pattern after the verification succeeds;
the server further comprises a verification module, which extracts the identification pattern from the video data; if the extraction succeeds, the picture containing the identification pattern is stored, and the pushing module also pushes that picture, together with the video stream containing the corresponding worker, to a preset user terminal.
After seeing the video stream, the worker can enter a verification request on the user terminal and then perform identity verification. Once verification succeeds, the user terminal confirms that the operation was carried out by the worker personally and generates an identification pattern. The worker can then show the identification pattern to the monitoring terminal, which captures the scene of the worker presenting it. Normal inspection or maintenance of the box should be registered in the operation log, but in practice there are many emergencies, and an operation may not have been registered in advance. In that case, if the worker is indeed performing normal inspection or maintenance, identity verification plus presentation of the identification pattern provides double verification, keeps a record that the worker was present, and files it for future reference. Compared with identity verification alone, this deepens the worker's impression and makes the worker aware that the current behavior is being monitored.
Further, the user terminal also sends pattern generation information to the server after generating the identification pattern;
the verification module extracts the identification pattern from the video data only after receiving the pattern generation information.
Extracting the pattern only after the pattern generation information arrives, rather than continuously, reduces the amount of data processed and the energy consumption of the system.
Further, if the identification module judges that the person does not belong to the staff, the pushing module obtains the judgment result of whether the door is opened abnormally and, if the opening is abnormal, pushes the video stream containing the non-staff person to a preset user terminal; the pushing module also sends timing viewing information to the preset user terminal;
after receiving the timing viewing information, the preset user terminal judges whether the video stream containing the non-staff person has been viewed within a first preset time, and if not, generates reminder information.
When a non-staff person appears near the box, the risk of abnormal door opening is high and a manager needs to handle it promptly. Sending the timing viewing information to the preset user terminal urges the manager to view the video in time.
Further, if the verification module does not receive the pattern generation information within a second preset time, the pushing module also pushes the video stream containing the corresponding worker to a preset user terminal and sends the timing viewing information to that terminal.
If the worker neither registered in the operation log in advance nor performed verification, the possibility of risk is high; applying the same handling measures as for non-staff persons reminds the manager to deal with the situation in time.
Further, the identification pattern includes a number of digits or letters.
Digits or letters have clear features and are easy to recognize, and existing schemes such as license-plate recognition can be reused to ensure a high recognition success rate.
Further, the first preset time is 30-120 seconds.
Drawings
FIG. 1 is a logic diagram of a monitoring platform for hazardous areas and important places in a smart city according to an embodiment.
Detailed Description
The following is further detailed by way of specific embodiments:
example one
As shown in fig. 1, the monitoring platform applied to the hazardous area and the important place of the smart city in the embodiment includes a server, a plurality of monitoring terminals and a plurality of user terminals.
The monitoring terminal is installed in a preset area and collects video data of that area. The preset area is, for example, a hazardous area such as a box-type transformer substation carrying high-voltage electricity or a microcomputer room housing important equipment, or another important place. In this embodiment, the monitoring terminal is a camera.
The server comprises a docking module, a framing module, an analysis module, an identification module, an integration module, a marking module, a pushing module and a verification module.
The docking module acquires, in real time, the video data collected by the camera;
the identification module is used for judging whether people exist in the preset area or not according to the video data collected by the camera in the preset area;
the identification module is also used for carrying out face identification when personnel exist in the preset area, judging whether the personnel belong to the staff, marking the corresponding video data as high-priority video data if the personnel do not belong to the staff, and marking the corresponding video data as low-priority video data if the personnel belong to the staff.
The framing module frames the video data to generate a plurality of pictures and numbers them sequentially; in this embodiment, Arabic numerals are used. When framing, high-priority video data is processed first.
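A minimal sketch of this framing step is shown below. The JPEG output, the file-naming scheme and the simple priority queue are assumptions made for illustration.

```python
# Sketch of the framing module: split video data into sequentially numbered pictures,
# framing high-priority video data before low-priority data.
from collections import deque
from pathlib import Path

import cv2


def frame_video(path, out_dir):
    """Split one video file into sequentially numbered JPEG pictures."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(path))
    index = 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Arabic-numeral numbering, as in this embodiment.
        cv2.imwrite(f"{out_dir}/{Path(path).stem}_{index:06d}.jpg", frame)
        index += 1
    cap.release()


def frame_all(high_priority, low_priority, out_dir):
    queue = deque(high_priority)      # high-priority video data is framed first
    queue.extend(low_priority)
    for path in queue:
        frame_video(path, out_dir)
```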
When no person is present in the monitored area, the analysis module extracts a picture at preset time intervals and inputs it into the neural network model for judgment; in this embodiment, 1 picture is extracted per interval.
When a person is present in the monitored area, the analysis module determines the preset number of pictures to input into the neural network model according to whether the video data is high-priority or low-priority, and pictures from high-priority video data are input first. The preset picture count for high-priority video data is larger than that for low-priority video data. In this embodiment, when feeding pictures to the neural network model, the analysis module extracts one picture at every preset interval and inputs it, until either the number of input pictures equals the preset picture count or the neural network model judges that the door is opened.
For example, the preset picture count for high-priority video data is 20, and one picture is extracted from every 5 pictures and input into the neural network model.
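A sketch of this sampling strategy follows; the stride of 5 and the budget of 20 mirror the example above, and the `model` interface is an assumption.

```python
# Sampling sketch: feed every 5th picture to the model, up to the preset picture
# count for the priority level, stopping early once the door is judged open.
def judge_pictures(pictures, model, preset_count, stride=5):
    """Return True as soon as a sampled picture is judged 'door opened'."""
    fed = 0
    for i in range(0, len(pictures), stride):
        if fed >= preset_count:
            break
        if model(pictures[i]):        # judgment result: door opened
            return True
        fed += 1
    return False


# Example: high-priority video data allows 20 pictures; low-priority data would use a smaller count.
# door_open = judge_pictures(pictures, model, preset_count=20)
```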
The analysis module also obtains the judgment result from the neural network model. When the result is that the door is opened, it judges whether the opening is abnormal based on the operation log. In this embodiment, before workers perform inspection, maintenance or similar operations, there is a corresponding inspection or maintenance schedule; the entries that involve entering the box are extracted and added to the operation log as door-opening plans, which serve as the basis for judgment. The door-opening time can be obtained from the video data; if no door-opening plan covering that time is recorded in the operation log, the opening is judged to be abnormal.
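A sketch of this log check is given below. Representing door-opening plans as (start, end) time windows is an assumption; the patent only requires that planned entries be recorded in the operation log.

```python
# Sketch of the operation-log check: a door opening is abnormal if its time
# is not covered by any door-opening plan in the log.
from datetime import datetime


def is_abnormal_opening(open_time: datetime, door_plans) -> bool:
    """door_plans: iterable of (start, end) windows extracted from inspection/maintenance schedules."""
    return not any(start <= open_time <= end for start, end in door_plans)


# Example usage:
# plans = [(datetime(2022, 1, 26, 9, 0), datetime(2022, 1, 26, 10, 0))]
# is_abnormal_opening(datetime(2022, 1, 26, 12, 30), plans)  # -> True (no plan covers this time)
```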
The marking module marks a preset identifier on the picture when the judgment result is that the door is opened abnormally. In this embodiment, labeling parameters can also be set, and the preset identifier is marked on the picture based on these parameters; the labeling parameters include the identifier size, the identifier position and the identifier transparency. The identifier size is the length and width of the identifier; the identifier position is its distance from the left edge and from the top edge of the picture; the transparency is expressed as a percentage (100% transparency means the identifier is completely invisible). The identifier is a picture formed by rasterizing the words "abnormal door opening". In this embodiment, the preset identifier is marked on the picture through FFmpeg.
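The embodiment performs the overlay with FFmpeg; the sketch below uses OpenCV blending purely to illustrate how the labeling parameters (size, position, transparency) act on a picture. The mark file name and the default parameter values are assumptions.

```python
# Illustrative overlay of the rasterized "abnormal door opening" mark onto a picture.
import cv2


def mark_picture(picture, mark_path="abnormal_door_opening.png",
                 size=(200, 60), position=(20, 20), transparency=0.3):
    mark = cv2.imread(mark_path)                       # rasterized text mark (assumed file)
    mark = cv2.resize(mark, size)                      # identifier size: length x width
    x, y = position                                    # distance from left edge and top edge
    h, w = mark.shape[:2]
    roi = picture[y:y + h, x:x + w]
    # 100% transparency would make the mark completely invisible; blend accordingly.
    blended = cv2.addWeighted(roi, transparency, mark, 1.0 - transparency, 0)
    picture[y:y + h, x:x + w] = blended
    return picture
```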
The integration module re-integrates the marked pictures and the unmarked pictures into a video stream.
The pushing module pushes the video stream to a preset user terminal. In this embodiment, the preset user terminal is the user terminal used by a manager; it may be a mobile phone or a tablet computer, and a mobile phone is used here.
Specifically, if the face recognition result is that the person belongs to the staff and the judgment result of the neural network model is that the door is opened abnormally, the pushing module pushes the re-integrated video stream to the user terminal of the corresponding worker. In this embodiment, that user terminal is the one on which the worker is currently logged in.
The user terminal corresponding to the worker also receives a verification request, performs identity verification, and generates the identification pattern after verification succeeds. The identity verification may be face recognition, fingerprint recognition or password entry. The identification pattern comprises a number of digits or letters; in this embodiment it comprises two digits, which fill the entire screen.
The user terminal also sends pattern generation information to the server after generating the identification pattern;
after receiving the pattern generation information, the verification module extracts the identification pattern from the video data; if the extraction succeeds, the picture containing the identification pattern is stored, and the pushing module pushes that picture, together with the video stream containing the corresponding worker, to a preset user terminal.
If the verification module does not receive the pattern generation information within the second preset time, the pushing module pushes the video stream containing the corresponding worker to a preset user terminal and sends the timing viewing information to that terminal.
If the face recognition result is that the person does not belong to the staff and the judgment result of the neural network model is that the door is opened abnormally, the pushing module pushes the video stream containing the non-staff person to a preset user terminal and sends timing viewing information to that terminal.
After receiving the timing viewing information, the preset user terminal judges whether the video stream containing the non-staff person has been viewed within a first preset time, and if not, generates reminder information. The first preset time is 30-120 seconds; 30 seconds in this example.
To enable the neural network model to make this judgment, this embodiment also provides a training method for the neural network model, comprising the following steps:
S1, acquire video data and perform framing to generate a plurality of pictures. In this embodiment, a segmentation component in OpenCV is used for framing.
S2, classify and label the pictures to construct a training picture set. The labels include door opened and door not opened.
S3, input the training picture set into the neural network model for training. The neural network model is a convolutional neural network; in this embodiment, a convolutional neural network component in OpenCV is used.
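As a framework-neutral illustration of steps S1-S3, the sketch below trains a small binary classifier ("door opened" vs. "door not opened") in PyTorch. The architecture, input size and hyperparameters are assumptions for the sketch, not taken from the patent (which attributes the component to OpenCV).

```python
# Minimal CNN training sketch for the door-open / door-not-open classifier.
import torch
import torch.nn as nn


class DoorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, 2)   # assumes 224x224 input pictures

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))


def train(model, loader, epochs=10):
    """loader yields (picture_batch, label_batch); labels: 0 = not opened, 1 = opened."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for pictures, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(pictures), labels)
            loss.backward()
            optimizer.step()
```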
Example two
This embodiment differs from the first in that its server further includes a slicing module, a correction module and a storage module.
The slicing module slices the video data to generate a description file and a plurality of media segments. The description file records the shooting date, the total duration of the video data, and the number and duration of each media segment. In this embodiment, the video data is sliced with millisecond precision.
In this embodiment, the description file is an m3u8 file and the media segments are ts files. For example, video data with a total duration of 10 seconds is sliced into 10 ts files of 1 second each, numbered from 001 to 010. The shooting date is, for example, 2021-7-15 12:01:00:001.
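A sketch of this slicing step using FFmpeg's HLS muxer is shown below; it produces one m3u8 description file plus numbered ts media segments of roughly 1 second each. The exact invocation and file naming are assumptions, since the patent only specifies the m3u8/ts output and the recorded metadata.

```python
# Sketch of the slicing module: segment a video into an m3u8 playlist and ts files.
import subprocess


def slice_video(src, out_prefix, segment_seconds=1):
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c", "copy",                          # no re-encoding
        "-f", "hls",
        "-hls_time", str(segment_seconds),     # target duration of each ts segment
        "-hls_list_size", "0",                 # keep every segment in the playlist
        "-hls_segment_filename", f"{out_prefix}_%03d.ts",
        f"{out_prefix}.m3u8",
    ], check=True)
```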
The identification module also binarizes the media segments and judges whether a binarized media segment contains a preset identification object; if so, the segment is processed by the correction module, and if not, by the storage module. In this embodiment, binarization is performed on every frame of a media segment. Preset identification objects include persons, animals and the like.
The correction module performs noise reduction on the media segments; in this embodiment, the segments are first converted to grayscale and then denoised with Gaussian filtering.
The storage module stores the description file and the denoised media segments.
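The per-frame logic of this identification/correction split might look like the sketch below. The object detector is left as a pluggable callable, since the patent does not name a specific detection method.

```python
# Sketch: binarize a media-segment frame, decide whether a preset identification
# object (person, animal, ...) is present, and only then grayscale + Gaussian-denoise.
import cv2


def process_frame(frame, contains_object):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    if contains_object(binary):                       # identification object found in binarized frame
        denoised = cv2.GaussianBlur(gray, (5, 5), 0)  # correction module: grayscale + Gaussian noise reduction
        return denoised, True                         # passed on to the storage module after correction
    return frame, False                               # stored without correction
```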
This embodiment stores the processed video data for convenient later retrieval. Because video data consists of individual frames, slicing cannot separate consecutive frames perfectly; between two consecutive media segments several frames may be lost, degrading picture quality. Denoising the sliced media segments effectively improves picture quality, but denoising every segment would consume a large amount of computing resources. Moreover, when no person or animal passes through the camera's monitoring range the picture is static, the differences between frames are small, and even losing several frames has little effect on quality. In this embodiment the preset identification object may be a person or an animal, depending on what is monitored; for example, if it is an animal, media segments containing an animal are denoised. Once an animal appears in the picture, the differences between frames grow, so losing frames affects quality more; denoising at that point improves clarity, offsets the possible frame loss, and reduces the loss of picture quality. Furthermore, segments in which an animal appears are more likely to be retrieved and watched later by staff, and denoising improves the viewing experience.
When the identification module finds that a media segment contains a preset identification object, it also judges the type of that object, determines a first preset number based on the type, skips identification for the following first-preset-number of media segments, and passes them directly to the correction module.
Because the monitoring range of the camera is fixed, under normal circumstances the time a person or animal takes to pass under the camera also falls within a certain range (an animal, being faster, takes less time than a person). When an animal or person appears in the monitoring range, this is reflected in the media segments, i.e. the segments contain the preset identification object. The animal or person needs a certain time to leave the monitoring range, and during that time subsequent media segments are very likely to also contain the object; skipping identification for them and going directly to correction therefore further simplifies the identification process and saves computing resources. In this embodiment, the first preset number is determined from the estimated average speed of the preset identification object, the duration of a media segment and the monitoring range of the camera. The estimated average speed can be determined based on the installation position of the camera.
Further, in this embodiment, the actual average speed of the preset identification object is calculated from its moving distance across the media segments, and it is judged whether the ratio of the absolute difference between the actual and estimated average speeds to the estimated average speed exceeds a threshold, namely |V1 - V2| / V2, where V1 is the actual average speed and V2 is the estimated average speed.
If the ratio is greater than the threshold, the first preset number is determined from the estimated average speed of the preset identification object, the duration of a media segment and the monitoring range of the camera; if it is less than or equal to the threshold, the first preset number is determined from the actual average speed, the duration of a media segment and the monitoring range of the camera. In this embodiment, the threshold is 20%.
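A sketch of this choice is given below. The final formula for the count itself (monitoring range divided by speed, expressed in segment durations) is an assumption, since the patent only says the number is determined "comprehensively" from these quantities.

```python
# Sketch: pick the speed used for the first preset number based on the relative
# deviation |V1 - V2| / V2, then convert the transit time into a segment count.
import math


def first_preset_number(actual_speed, estimated_speed, camera_range_m,
                        segment_seconds, threshold=0.20):
    deviation = abs(actual_speed - estimated_speed) / estimated_speed   # |V1 - V2| / V2
    speed = estimated_speed if deviation > threshold else actual_speed  # fall back to the estimate when deviation is large
    transit_time = camera_range_m / speed             # time to leave the monitoring range (speeds in m/s)
    return math.ceil(transit_time / segment_seconds)  # number of segments to skip identification for
```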
The above are merely embodiments of the present invention, and the invention is not limited to the field of these embodiments; common general knowledge about the specific structures and characteristics involved is not described here at length. A person skilled in the art knows all of the common technical knowledge and prior art in this field as of the application date or priority date, is able to apply the conventional experimental means of that time, and can, in light of the teaching provided in this application, perfect and implement the scheme with his or her own abilities; some typical known structures or methods should not become obstacles to implementing the invention. It should be noted that, without departing from the structure of the invention, a person skilled in the art may make several changes and modifications, which should also be regarded as falling within the protection scope of the invention and which do not affect the effect of its implementation or the practicability of the patent. The scope of protection of this application is determined by the content of the claims; the description of the embodiments in the specification may be used to interpret the content of the claims.

Claims (8)

1. A monitoring platform for hazardous areas and important places in smart cities, comprising a monitoring terminal for acquiring video data of a preset area, characterized by further comprising a server and a user terminal;
the server is used for acquiring the video data collected by the monitoring terminal in real time, performing framing processing to generate a plurality of pictures, and inputting the pictures into a neural network model for judgment to obtain a judgment result; when the judgment result is that the door is opened, judging whether the door is opened abnormally based on the operation log, and if the door is opened abnormally, marking a preset identifier on the pictures and re-integrating the pictures into a video stream;
the server is also used for pushing the video stream to a preset user terminal.
2. The monitoring platform for hazardous areas and important places in smart cities according to claim 1, wherein: the server comprises an identification module and a pushing module; the identification module is used for judging, according to the video data, whether a person is present in the preset area, performing face recognition when a person is present, and judging whether the person belongs to the staff; if the person belongs to the staff, the pushing module is used for obtaining the judgment result of whether the door is opened abnormally and, if the door is opened abnormally, pushing the re-integrated video stream to the user terminal corresponding to the worker.
3. The monitoring platform for hazardous areas and important places in smart cities according to claim 2, wherein: the user terminal is also used for receiving a verification request, performing identity verification and generating an identification pattern after the identity verification succeeds;
the server further comprises a verification module used for extracting the identification pattern from the video data; if the identification pattern is successfully extracted, the picture containing the identification pattern is stored, and the pushing module is also used for pushing the picture containing the identification pattern, together with the video stream containing the corresponding worker, to a preset user terminal.
4. The monitoring platform for hazardous areas and important places in smart cities according to claim 3, wherein: the user terminal is also used for sending pattern generation information to the server after generating the identification pattern;
the verification module is further used for extracting the identification pattern from the video data after receiving the pattern generation information.
5. The monitoring platform for hazardous areas and important places in smart cities according to claim 4, wherein: if the identification module judges that the person does not belong to the staff, the pushing module is used for obtaining the judgment result of whether the door is opened abnormally and, if the door is opened abnormally, pushing the video stream containing the non-staff person to a preset user terminal; the pushing module is also used for sending timing viewing information to the preset user terminal;
the preset user terminal is used for judging, after receiving the timing viewing information, whether the video stream containing the non-staff person has been viewed within a first preset time, and if not, generating reminder information.
6. The monitoring platform for hazardous areas and important places in smart cities according to claim 5, wherein: if the verification module does not receive the pattern generation information within a second preset time, the pushing module is further used for pushing the video stream containing the corresponding worker to a preset user terminal and sending the timing viewing information to the preset user terminal.
7. The monitoring platform for hazardous areas and important places in smart cities according to claim 6, wherein: the identification pattern includes a number of digits or letters.
8. The monitoring platform for hazardous areas and important places in smart cities according to claim 7, wherein: the first preset time is 30-120 seconds.
CN202210095825.0A 2022-01-26 2022-01-26 Hazardous area and important place monitoring platform applied to smart city Pending CN114390260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210095825.0A CN114390260A (en) 2022-01-26 2022-01-26 Hazardous area and important place monitoring platform applied to smart city

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210095825.0A CN114390260A (en) 2022-01-26 2022-01-26 Hazardous area and important place monitoring platform applied to smart city

Publications (1)

Publication Number Publication Date
CN114390260A true CN114390260A (en) 2022-04-22

Family

ID=81203304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210095825.0A Pending CN114390260A (en) 2022-01-26 2022-01-26 Hazardous area and important place monitoring platform applied to smart city

Country Status (1)

Country Link
CN (1) CN114390260A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115190277A (en) * 2022-09-08 2022-10-14 中达安股份有限公司 Safety monitoring method, device and equipment for construction area and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination