CN111126153B - Safety monitoring method, system, server and storage medium based on deep learning - Google Patents


Info

Publication number
CN111126153B
CN111126153B (application CN201911165549.5A)
Authority
CN
China
Prior art keywords
monitoring
target
monitored
information
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911165549.5A
Other languages
Chinese (zh)
Other versions
CN111126153A (en)
Inventor
马延旭 (Ma Yanxu)
火一莽 (Huo Yimang)
万月亮 (Wan Yueliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ruian Technology Co Ltd
Original Assignee
Beijing Ruian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruian Technology Co Ltd filed Critical Beijing Ruian Technology Co Ltd
Priority to CN201911165549.5A
Publication of CN111126153A
Application granted
Publication of CN111126153B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Alarm Systems (AREA)

Abstract

Embodiments of the invention disclose a deep-learning-based safety monitoring method, system, server, and storage medium. The method comprises the following steps: acquiring video data of a target to be monitored in a monitoring area; analyzing the video data with a pre-trained first model to obtain monitoring characteristic information of the target to be monitored in the monitoring area; judging from the monitoring characteristic information whether the target to be monitored is abnormal; and performing an alarm operation when the target to be monitored is abnormal. By applying deep-learning training to the video data, the invention realizes semantic structured extraction of the video data and effective management of the video data.

Description

Safety monitoring method, system, server and storage medium based on deep learning
Technical Field
Embodiments of the invention relate to video image analysis technology, and in particular to a deep-learning-based safety monitoring method, system, server, and storage medium.
Background
Currently, video surveillance systems have become an important tool in many application areas. Through video monitoring, monitoring or security personnel can protect and inspect a monitored area more directly and effectively. However, video data is voluminous and comes in complex formats, so storage is costly and management is difficult. Faced with massive, ambiguous, unstructured video information and content, manual retrieval is time-consuming and labor-intensive, large amounts of video are discarded without ever being reviewed, and the effectiveness of the monitoring system is severely reduced.
Disclosure of Invention
The invention provides a deep-learning-based safety monitoring method, system, server, and storage medium that realize semantic structured extraction of video data and effective management of the video data.
In a first aspect, an embodiment of the present invention provides a security monitoring method based on deep learning, including:
acquiring video data of a target to be monitored in a monitoring area;
analyzing the video data through a first model trained in advance to acquire monitoring characteristic information of a target to be monitored in a monitoring area;
judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
and when the target to be monitored is abnormal, alarm operation is carried out.
Further, the monitoring characteristic information includes one or more of monitoring target action information, monitoring target stay time information, smoke detection information, and monitoring target clothing information.
Further, the analyzing the video data through the pre-trained first model to obtain the monitoring feature information of the object to be monitored in the monitoring area includes:
training a convolutional neural network on the video data to obtain a first model for analyzing the monitoring target action information, the monitoring target stay time information, the smoke detection information, and the monitoring target clothing information.
Further, analyzing the video data through a first model trained in advance to obtain monitoring feature information of the object to be monitored in the monitoring area includes:
identifying the video data based on the monitoring target action information by using the first model so as to acquire the monitoring target action information of the target to be monitored;
calculating a first confidence coefficient of the monitoring target action information according to a first preset weight value;
and judging the action state of the target to be monitored according to the preset parameter threshold value and the first confidence coefficient of the action information of the target to be monitored.
Further, analyzing the video data through the first model trained in advance to obtain monitoring feature information of the object to be monitored in the monitoring area further includes:
identifying the video data based on the monitoring target stay time information by using the first model so as to acquire the monitoring target stay time information of the target to be monitored;
calculating a second confidence coefficient of the monitoring target stay time information according to a second preset weight value and the monitoring target stay time information;
and judging the stay state of the target to be monitored according to the preset parameter threshold value and the second confidence coefficient of the stay time information of the monitored target.
Further, analyzing the video data through the first model trained in advance to obtain monitoring feature information of the object to be monitored in the monitoring area further includes:
identifying the video data based on the smoke detection information by using the first model so as to acquire the smoke detection information of the object to be monitored;
calculating the smoke concentration according to the smoke detection information, and determining the region where the smoke is located and the smoking action of the object to be monitored;
calculating a third confidence coefficient of smoke detection information according to the smoke concentration, the region where the smoke is located and the smoking action of the object to be monitored;
and judging the smoking state of the target to be monitored according to the preset parameter threshold value and the third confidence coefficient of the smoke detection information.
Further, analyzing the video data through the first model trained in advance to obtain monitoring feature information of the object to be monitored in the monitoring area further includes:
identifying the video data based on the monitoring target clothing information by using the first model so as to acquire the monitoring target clothing information of the target to be monitored;
calculating a fourth confidence coefficient of the monitoring target clothing information according to the monitoring target clothing information and a preset weight value of the monitoring target clothing information;
and judging the dressing state of the target to be monitored according to the preset parameter threshold value and the fourth confidence coefficient of the clothing information of the target to be monitored.
Further, judging whether the target to be monitored is abnormal according to the state of the target to be monitored includes:
if the action state of the target to be monitored is fighting, the target to be monitored is abnormal;
if the stay state of the target to be monitored is an overlong stay, the target to be monitored is abnormal;
if the smoking state of the target to be monitored is smoking, the target to be monitored is abnormal;
if the dressing state of the target to be monitored is abnormal dressing, the target to be monitored is abnormal.
In a second aspect, an embodiment of the present invention further provides a safety monitoring system based on deep learning, including:
the first acquisition module is used for acquiring video data of a target to be monitored in the monitoring area;
the second acquisition module is used for analyzing the video data through the first model trained in advance so as to acquire monitoring characteristic information of a target to be monitored in the monitoring area;
the judging module is used for judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
and the alarm module is used for carrying out alarm operation when the object to be monitored is abnormal.
In a third aspect, an embodiment of the present invention further provides a server, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor executes the computer program to implement the steps of the deep learning-based security monitoring method in any one of the foregoing embodiments.
In a fourth aspect, embodiments of the present invention further provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the deep learning based security monitoring method of any of the above embodiments.
By applying deep-learning training to the video data, the invention solves the prior-art problem that unstructured data forms and content in large amounts of video information cannot be analyzed and organized in time, causing data loss, and achieves the technical effects of semantic structured extraction of video data and effective management of the video data.
Drawings
Fig. 1 is a flowchart of a security monitoring method based on deep learning according to a first embodiment of the present invention;
fig. 2 is a flowchart of a safety monitoring method based on deep learning according to a second embodiment of the present invention;
FIG. 3 is a flow chart of a safety monitoring method based on deep learning according to an alternative embodiment of the present invention;
FIG. 4 is a flow chart of a safety monitoring method based on deep learning according to an alternative embodiment of the present invention;
FIG. 5 is a flow chart of a safety monitoring method based on deep learning according to an alternative embodiment of the present invention;
FIG. 6 is a flowchart of another safety monitoring method based on deep learning according to the second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a safety monitoring system based on deep learning according to a third embodiment of the present invention;
fig. 8 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts steps as a sequential process, many of the steps may be implemented in parallel, concurrently, or with other steps. Furthermore, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Furthermore, the terms "first," "second," and the like, may be used herein to describe various directions, acts, steps, or elements, etc., but these directions, acts, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, a first acquisition module may be referred to as a second acquisition module, and similarly, a second acquisition module may be referred to as a first acquisition module, without departing from the scope of the present application. The first acquisition module and the second acquisition module are both acquisition modules, but they are not the same acquisition module. The terms "first," "second," and the like, are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Example 1
Fig. 1 is a flowchart of a deep-learning-based safety monitoring method according to the first embodiment of the present invention. The method may be executed by a processor and is applicable to video analysis of a target to be monitored in a monitored area. As shown in Fig. 1, the method specifically includes the following steps:
step S110, obtaining video data of a target to be monitored in a monitoring area;
specifically, monitoring personnel can install monitoring equipment such as a monitoring camera in the area needing to be monitored, monitor the monitored area in real time and obtain corresponding monitoring videos.
Step S120, analyzing the video data through a pre-trained first model to obtain monitoring characteristic information of a target to be monitored in a monitoring area;
specifically, the monitoring personnel can pre-establish a training model for data analysis of the video data, and after the video data is obtained, the monitoring personnel can analyze the video data through the pre-established training model, so that the data (such as behavior data of the target to be monitored) contained in the video content, namely, the monitoring characteristic information, is identified or extracted.
Step S130, judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
specifically, after the detection feature information obtained by training the model in step S120, whether the target to be monitored is abnormal is determined according to the detection feature information. In this embodiment, the monitoring characteristic information may include one or more of monitoring target action information, monitoring target stay time information, smoke detection information, and monitoring target clothing information.
And step 140, performing alarm operation when the object to be monitored is abnormal.
Specifically, whether the target to be monitored is abnormal can be judged from a threshold value or the confidence of the monitoring characteristic information. For example, when the monitoring characteristic information is monitoring target action information: if the confidence of the action information is greater than or equal to a preset confidence threshold, the action information is not abnormal; if the confidence is smaller than the preset confidence threshold, the action information is abnormal. In the latter case, the safety monitoring system performs an alarm operation to remind monitoring personnel that the monitored area may contain an abnormality, such as a suspicious person or a smoker.
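The decision logic of steps S130 and S140 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function names, feature keys, and numeric values are assumptions chosen for demonstration.

```python
# Sketch of steps S130-S140: a monitoring-feature confidence below its preset
# threshold marks the target as abnormal and triggers an alarm operation.

def is_abnormal(confidence: float, threshold: float) -> bool:
    """The patent treats a confidence below the preset threshold as abnormal."""
    return confidence < threshold

def check_and_alarm(feature_confidences: dict, thresholds: dict) -> list:
    """Return the features whose confidence falls below the corresponding
    preset threshold; a non-empty list would trigger the alarm operation."""
    alarms = []
    for feature, conf in feature_confidences.items():
        if is_abnormal(conf, thresholds[feature]):
            alarms.append(feature)
    return alarms

# Example: the action confidence is below its threshold, so it raises an alarm.
alarms = check_and_alarm(
    {"action": 0.42, "stay_time": 0.91},
    {"action": 0.60, "stay_time": 0.60},
)
```

A real system would attach the target identity and timestamp to each alarm so monitoring personnel can locate the abnormality.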
A beneficial effect of the first embodiment is that, by applying deep-learning training to the video data, it solves the prior-art problem that unstructured data forms and content in large amounts of video information cannot be analyzed and organized in time, causing data loss, and it achieves the technical effects of semantic structured extraction of video data and effective management of the video data.
Example two
The second embodiment of the invention is further optimized based on the first embodiment. Fig. 2 is a flowchart of a safety monitoring method based on deep learning according to a second embodiment of the present invention. As shown in fig. 2, the safety monitoring method based on deep learning of the present embodiment includes:
step S210, obtaining video data of a target to be monitored in a monitoring area;
specifically, monitoring personnel can install monitoring equipment such as a monitoring camera in the area needing to be monitored, monitor the monitored area in real time and obtain corresponding monitoring videos.
In this embodiment, the monitoring characteristic information may include one or more of monitoring target action information, monitoring target stay time information, smoke detection information, and monitoring target clothing information.
Step S220, training a convolutional neural network on the video data to obtain a first model corresponding to analysis monitoring target action information, monitoring target stay time information, smoke detection information and monitoring target clothing information;
specifically, a monitoring person or a staff member may have preset a training model for analyzing video data. For example, targets of interest such as personnel, vehicles, non-motor vehicles and the like in the monitoring video image are continuously monitored and tracked, and key frame images are preferentially selected to identify attributes of the vehicles, so that monitoring characteristic information such as license plates, vehicle types, brands, personnel gender, age clothes and the like is obtained. In this embodiment, after obtaining structured information such as monitoring target action information, monitoring target stay time information, smoke detection information and monitoring characteristic information of monitoring target clothing information, a monitoring person or a staff may implement storage, calculation and application of structured video data through a large-scale parallel processing database, a data mining, a distributed file system, a pencil test database and a cloud computing platform expandable storage system, that is, after training for deep learning is performed on video data, unstructured data forms (such as human bodies and vehicles) and video semantic contents in the video may be obtained, and then the obtained data information may be stored in a corresponding storage system according to the structured forms, so as to implement effective analysis, organization and management of the video data.
Step S231, the first model is used for identifying the video data based on the monitoring target action information so as to acquire the monitoring target action information of the target to be monitored;
specifically, in this embodiment, the human behavior recognition may use skeleton behavior detection, that is, joint point Estimation (phase Estimation) through Red Green Blue (RGB) images. Each moment (frame) skeleton corresponds to the coordinate position information of 18 joint points of the human body, a time sequence is composed of a plurality of frames, and behavior recognition is to judge the type of behavior action of the time-domain pre-segmented sequence, namely 'reading and understanding behavior'. The method comprises the steps of identifying the skeletons of personnel in a specific area, particularly identifying whether personnel in a monitored area fight, wherein the whole process goes through a starting approaching stage, a climax stage of boxing and an ending stage. In contrast, the climax stage of the boxing by waving contains more information, which is most helpful for distinguishing the action. According to the time domain attention model, the importance of different frames in the sequence is automatically learned and known through a sub-network, so that the important frames play a greater role in classification so as to optimize the recognition accuracy.
Step S232, calculating a first confidence coefficient of the monitoring target action information according to a first preset weight value;
specifically, in this embodiment, monitoring the motion of the object to be monitored mainly depends on the body's limb motion and the relative position and movement speed to determine. In calculating the first confidence level, an optimized optical flow method (optical flow method refers to a simple and practical expression mode of image motion, and is generally defined as apparent motion of an image brightness mode in an image sequence, that is, expression of motion speed of a point on the surface of a space object on an imaging plane of a vision sensor), is used to calculate relative position, motion vector, relative motion speed, speed of limb contact and the like of a human body. Confidence, also known as reliability, or confidence level, confidence coefficient, i.e. when a sample makes an estimate of the overall parameter, its conclusion is always uncertain due to the randomness of the sample. Therefore, a method of stating probability, that is, interval estimation in mathematical statistics, is adopted, that is, how large the estimated value and the overall parameter are within a certain allowable error range, and the corresponding probability is called confidence or confidence.
Step S233, judging the action state of the target to be monitored according to the preset parameter threshold value and the first confidence coefficient of the action information of the target to be monitored.
Specifically, algorithm parameter thresholds can be preset and corresponding weight values set for the different thresholds. After the first confidence of the monitoring target action information is calculated from the weight values, parameter thresholds, and action information, the action state of the target to be monitored is judged from the first confidence. For example, when fighting behavior must be recognized: if the first confidence is greater than or equal to a confidence threshold (which may be the preset parameter threshold of the action information), no abnormality has occurred in the monitored area, i.e. the action state of the target is normal and no fighting has occurred; if the first confidence is smaller than the threshold, an abnormality exists, i.e. the action state of the target is fighting, and the safety monitoring system issues an alarm signal so that monitoring personnel or staff perform a safety check of the area, ensuring safety in the monitored area. In this embodiment, facial expression recognition can additionally assist in judging the action state of the target accurately.
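A minimal illustration of the weighted first confidence and the threshold judgment described above. The feature list, weight values, and threshold are assumptions for demonstration; the patent does not specify the exact formula.

```python
# Sketch: first confidence as a preset-weighted combination of motion
# features, judged against the preset parameter threshold.

def weighted_confidence(features, weights):
    """Combine normalized motion features with their preset weight values."""
    assert len(features) == len(weights)
    return sum(f * w for f, w in zip(features, weights))

def action_state(confidence, threshold):
    """A below-threshold confidence means the action state is fighting."""
    return "normal" if confidence >= threshold else "fighting"

# Assumed features (all in [0, 1]): relative position, motion-vector
# magnitude, relative movement speed, limb-contact speed.
conf = weighted_confidence([0.9, 0.8, 0.85, 0.95], [0.25, 0.25, 0.25, 0.25])
state = action_state(conf, threshold=0.6)
```

The same weighted-combination-plus-threshold pattern recurs for the second, third, and fourth confidences in the later steps.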
Fig. 3 is a flowchart of a safety monitoring method based on deep learning according to an alternative embodiment of the present invention. Fig. 4 is a flowchart of a safety monitoring method based on deep learning according to an alternative embodiment of the present invention. Fig. 5 is a flowchart of a safety monitoring method based on deep learning according to an alternative embodiment of the present invention.
As shown in fig. 3, an alternative embodiment of steps S231 to S233 may be:
step S241, recognizing the video data based on the stay time information of the monitoring target by using the first model to acquire the stay time information of the monitoring target of the target to be monitored;
step S242, calculating a second confidence coefficient of the monitoring target stay time information according to a second preset weight value and the monitoring target stay time information;
and step S243, judging the stay state of the target to be monitored according to the preset parameter threshold value and the second confidence coefficient of the stay time information of the target to be monitored.
Specifically, in this embodiment, the residence time of the target to be monitored in the monitoring area may be monitored. In the first model for monitoring target stay time, a second confidence is calculated from a second preset weight value and the monitoring target stay time information, and the stay state of the target is judged from the second confidence and a preset parameter threshold. When the second confidence is greater than or equal to the preset confidence threshold (which may be the preset parameter threshold of the stay time information), the stay state of the target is a normal stay; when it is smaller, the stay state is an overlong stay, and the safety monitoring system issues an alarm signal notifying monitoring personnel or staff to perform a safety check on the staying person or vehicle, ensuring safety in the monitored area.
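The dwell-time bookkeeping behind this step can be sketched as follows. The class and method names, time units, and limit are illustrative assumptions, not the patent's design.

```python
# Sketch: track when each target first appears and flag overlong stays.

class StayMonitor:
    def __init__(self, max_stay_seconds):
        self.max_stay = max_stay_seconds
        self.first_seen = {}  # target id -> timestamp of first sighting

    def observe(self, target_id, timestamp):
        """Record a sighting; return the target's stay duration so far."""
        self.first_seen.setdefault(target_id, timestamp)
        return timestamp - self.first_seen[target_id]

    def stay_state(self, target_id, timestamp):
        """Judge the stay state against the preset dwell-time limit."""
        dwell = self.observe(target_id, timestamp)
        return "too_long" if dwell > self.max_stay else "normal"

mon = StayMonitor(max_stay_seconds=300)
mon.observe("person_1", 0)               # first seen at t = 0 s
state = mon.stay_state("person_1", 400)  # 400 s exceeds the 300 s limit
```

In the patent's formulation, the dwell duration would feed the second-confidence calculation rather than being compared to the limit directly.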
As shown in fig. 4, alternative embodiments of steps S231 to S233 may also be:
step S251, the first model is used for identifying the video data based on the smoke detection information so as to acquire the smoke detection information of the object to be monitored;
step S252, calculating the smoke concentration according to the smoke detection information and determining the smoke smoking action of the area where the smoke is located and the object to be monitored;
step S253, calculating a third confidence coefficient of smoke detection information according to the smoke concentration, the region where the smoke is located and the smoking action of the object to be monitored;
and step S254, judging the smoking state of the object to be monitored according to the preset parameter threshold value and the third confidence coefficient of the smoke detection information.
Specifically, in this embodiment, the smoking state of the object to be monitored may be detected. Smoking detection relies mainly on detecting smoke concentration, smoke position, and smoking action, so three neural networks are used to detect, respectively, the smoke concentration, the region where the smoke is located, and the smoking action of the target. Confidence values are obtained under the corresponding networks and combined with different weights to give the third confidence of the smoke detection information. When the third confidence is greater than or equal to a confidence threshold (which may be the preset parameter threshold of the smoke detection information), the smoking state of the target is non-smoking; when it is smaller, the smoking state is smoking, and the safety monitoring system issues an alarm signal notifying monitoring personnel or staff to inspect the people in the monitored area, protecting air quality and the safety of facilities.
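The fusion of the three detector outputs can be sketched as a weighted combination. The weight values and threshold are assumptions; the patent only states that the three confidences are combined "according to different weights".

```python
# Sketch: fuse the three smoke-detector scores into the third confidence.

def fuse_smoke_confidence(concentration_score, region_score, action_score,
                          weights=(0.4, 0.3, 0.3)):
    """Third confidence = weighted combination of the three network outputs
    (smoke concentration, smoke region, smoking action)."""
    scores = (concentration_score, region_score, action_score)
    return sum(w * s for w, s in zip(weights, scores))

def smoking_state(confidence, threshold=0.6):
    """A below-threshold third confidence indicates the target is smoking."""
    return "not_smoking" if confidence >= threshold else "smoking"

conf = fuse_smoke_confidence(0.9, 0.8, 0.7)
```

Keeping the three detectors separate lets each be retrained independently, e.g. when the camera placement changes the apparent smoke region.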
As shown in Fig. 5, an alternative embodiment of steps S231 to S233 may also be:
step S261, recognizing the video data with respect to the monitoring target clothing information by using the first model, so as to acquire the monitoring target clothing information of the target to be monitored;
step S262, calculating a fourth confidence coefficient of the monitoring target clothing information according to the monitoring target clothing information and a preset weight value of the monitoring target clothing information;
step S263, judging the dressing state of the target to be monitored according to the preset parameter threshold of the monitoring target clothing information and the fourth confidence coefficient.
Specifically, in this embodiment, detection may be performed with respect to the dressing state of the target to be monitored. When the dressing state is detected, clothing is identified mainly by its color and by special marks on the clothing, and the detection of clothing color must account for errors introduced by the environment and by chromatic aberration. Therefore, before the dressing state of the target to be monitored is detected, a neural network can be trained to recognize data sets of different clothing colors and data sets of different special clothing marks; once the error of the trained model for the dressing state of the target to be monitored falls within the allowable error range, the model is applied to the detection of the dressing state. After the monitoring target clothing information of the target to be monitored is obtained, the fourth confidence coefficient can be calculated according to the different weight values. When the fourth confidence coefficient is greater than or equal to the confidence threshold (which may be the preset parameter threshold of the monitoring target clothing information), the dressing state of the target to be monitored is normal dressing; when the fourth confidence coefficient is smaller than the confidence threshold, the dressing state is abnormal dressing. In the latter case, the safety monitoring system can send out an alarm signal to notify monitoring staff or security personnel to check the personnel in the monitoring area, thereby preventing suspicious persons from endangering the safety of other people or public facilities.
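The fourth-confidence calculation above can be sketched as a weighted combination of a clothing-color score and a special-mark score. The weights, threshold, and names here are assumptions for illustration only:

```python
# Hedged sketch of the dressing-state check: the fourth confidence is a
# weighted sum of the color-recognition score and the special-mark score.

def fourth_confidence(color_conf, mark_conf, weights=(0.6, 0.4)):
    """Combine the clothing-color and special-mark scores with preset weights."""
    return weights[0] * color_conf + weights[1] * mark_conf

def dressing_state(conf, threshold=0.5):
    # >= threshold -> normal dressing; below -> abnormal dressing (alarm).
    return "normal dressing" if conf >= threshold else "abnormal dressing"

print(dressing_state(fourth_confidence(0.9, 0.2)))  # 0.62 -> normal dressing
```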
Fig. 7 is a schematic structural diagram of a deep learning-based safety monitoring system according to the third embodiment of the present invention. In this embodiment, the deep learning-based safety monitoring method further includes:
step S271, if the action state of the target to be monitored is fighting, the target to be monitored is abnormal;
step S272, if the stay state of the target to be monitored is overlong stay, the target to be monitored is abnormal;
step S273, if the smoking state of the target to be monitored is smoking, the target to be monitored is abnormal;
step S274, if the dressing state of the target to be monitored is abnormal dressing, the target to be monitored is abnormal;
and step S280, performing an alarm operation when the target to be monitored is abnormal.
Specifically, the action state, stay state, smoking state and dressing state of the target to be monitored can be determined according to the first, second, third and fourth confidence coefficients, respectively. When any state of the target to be monitored is abnormal, the safety monitoring system can perform an alarm operation, thereby notifying monitoring or security personnel of potential safety hazards in the monitoring area so that targeted countermeasures can be taken.
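The decision logic of steps S271 to S280 reduces to checking whether any of the four states takes its abnormal value. A minimal sketch, with state labels chosen as assumptions:

```python
# Any single abnormal state (fighting, overlong stay, smoking, abnormal
# dressing) marks the target as abnormal and triggers the alarm operation.

ABNORMAL_STATES = {
    "action": "fighting",
    "stay": "overlong stay",
    "smoking": "smoking",
    "dressing": "abnormal dressing",
}

def judge(states):
    """states maps each monitored aspect to its detected state string."""
    abnormal = [k for k, bad in ABNORMAL_STATES.items() if states.get(k) == bad]
    return ("ALARM", abnormal) if abnormal else ("OK", [])

print(judge({"action": "normal", "smoking": "smoking"}))
```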
The second embodiment of the invention has the beneficial effects that deep learning-based training is performed on the video data, and the information of the target to be monitored in the video data is analyzed in the corresponding neural network. This solves the technical problem in the prior art that, for large volumes of video information, the unstructured data forms and content in the video cannot be analyzed and organized in time, causing data loss, and achieves the technical effects of semantic structured extraction of video data and effective management of video data.
Example III
Fig. 3 is a schematic structural diagram of a safety monitoring system based on deep learning according to a third embodiment of the present invention. As shown in fig. 3, the deep learning-based security monitoring system 300 of the present embodiment includes:
a first obtaining module 310, configured to obtain video data of an object to be monitored in the monitoring area;
the second obtaining module 320 is configured to analyze the video data through a first model trained in advance, so as to obtain monitoring feature information of a target to be monitored in the monitoring area;
a judging module 330, configured to judge whether the target to be monitored is abnormal according to the monitoring feature information;
and the alarm module 340 is used for performing alarm operation when the object to be monitored is abnormal.
In this embodiment, the monitoring characteristic information includes one or more of monitoring target action information, monitoring target stay time information, smoke detection information, and monitoring target clothing information.
In this embodiment, the deep learning based safety monitoring system 300 further includes:
the training module 350 is configured to perform training on the video data by using a convolutional neural network, so as to obtain the first model corresponding to the analysis of the monitoring target action information, the monitoring target stay time information, the smoke detection information and the monitoring target clothing information.
In this embodiment, the second obtaining module 320 includes:
the first training unit is used for identifying the video data based on the monitoring target action information by using the first model so as to acquire the monitoring target action information of the target to be monitored;
the first calculating unit is used for calculating a first confidence coefficient of the monitoring target action information according to the first preset weight value;
the first judging unit is used for judging the action state of the target to be monitored according to the preset parameter threshold value and the first confidence coefficient of the action information of the target to be monitored.
In this embodiment, the second obtaining module 320 further includes:
the second training unit is used for identifying the video data based on the monitoring target stay time length information by using the first model so as to acquire the monitoring target stay time length information of the target to be monitored;
the second calculating unit is used for calculating a second confidence coefficient of the monitoring target stay time length information according to a second preset weight value and the monitoring target stay time length information;
and the second judging unit is used for judging the stay state of the object to be monitored according to the preset parameter threshold value and the second confidence coefficient of the stay time information of the monitoring object.
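The stay-time computation performed by these units can be sketched as follows: the duration is derived from the span of frames in which the tracked target appears, and compared with the preset threshold. The track format, frame rate, and threshold are illustrative assumptions:

```python
# Hedged sketch of dwell-time measurement from per-frame tracking results.

def stay_duration(frames_present, fps=25.0):
    """frames_present: sorted frame indices in which the target was detected."""
    if not frames_present:
        return 0.0
    return (frames_present[-1] - frames_present[0] + 1) / fps

def stay_state(duration_s, max_stay_s=300.0):
    # Exceeding the preset threshold -> overlong stay -> abnormal.
    return "overlong stay" if duration_s > max_stay_s else "normal"

print(stay_state(stay_duration(list(range(250)))))  # 10 s -> normal
```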
In this embodiment, the second obtaining module 320 further includes:
the third training unit is used for identifying the video data based on the smoke detection information by using the first model so as to acquire the smoke detection information of the object to be monitored;
the third calculation unit is used for calculating the smoke concentration according to the smoke detection information and determining the smoke smoking action of the area where the smoke is located and the target to be monitored;
a fourth calculation unit for calculating a third confidence coefficient of smoke detection information according to the smoke concentration, the region where the smoke is located and the smoking action of the object to be monitored;
and the third judging unit is used for judging the smoking state of the target to be monitored according to the preset parameter threshold value and the third confidence coefficient of the smoke detection information.
In this embodiment, the second obtaining module 320 further includes:
the fourth training unit is used for identifying the video data based on the monitoring target clothing information by using the first model so as to acquire the monitoring target clothing information of the target to be monitored;
a fifth calculating unit, configured to calculate a fourth confidence coefficient of the monitoring target clothing information according to the monitoring target clothing information and a preset weight value of the monitoring target clothing information;
and the fourth judging unit is used for judging the dressing state of the target to be monitored according to the preset parameter threshold value and the fourth confidence coefficient of the clothing information of the target to be monitored.
In this embodiment, the judging module 330 includes:
a fifth judging unit, configured to determine that the target to be monitored is abnormal if the action state of the target to be monitored is fighting;
a sixth judging unit, configured to determine that the target to be monitored is abnormal if the stay state of the target to be monitored is overlong stay;
a seventh judging unit, configured to determine that the target to be monitored is abnormal if the smoking state of the target to be monitored is smoking;
and an eighth judging unit, configured to determine that the target to be monitored is abnormal if the dressing state of the target to be monitored is abnormal dressing.
The safety monitoring system based on the deep learning provided by the embodiment of the invention can execute the safety monitoring method based on the deep learning provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention. As shown in Fig. 4, the server includes a processor 410, a memory 420, an input device 430 and an output device 440; the number of processors 410 in the server may be one or more, with one processor 410 taken as an example in Fig. 4; the processor 410, memory 420, input device 430 and output device 440 in the server may be connected by a bus or in other ways, with connection by a bus taken as the example in Fig. 4.
The memory 420, as a computer-readable storage medium, is used for storing software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the deep learning-based safety monitoring system in the embodiments of the present invention (e.g., the first acquisition module, the second acquisition module, the judging module, the alarm module and the training module of the deep learning-based safety monitoring system). The processor 410 executes the various functional applications and data processing of the server by running the software programs, instructions and modules stored in the memory 420, i.e., implements the deep learning-based safety monitoring method described above.
Namely:
acquiring video data of a target to be monitored in a monitoring area;
analyzing the video data through a first model trained in advance to acquire monitoring characteristic information of a target to be monitored in a monitoring area;
judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
and when the target to be monitored is abnormal, alarm operation is carried out.
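The four method steps above can be sketched end to end. The `first_model` callable, feature keys, and thresholds are hypothetical stand-ins for the patent's pre-trained first model and its preset parameter thresholds:

```python
# Minimal sketch of the acquire -> analyze -> judge -> alarm pipeline.

def security_monitor(video_frames, first_model, thresholds):
    # Step 2: analyze the acquired video data with the pre-trained first
    # model, obtaining a confidence per monitoring feature.
    features = first_model(video_frames)  # e.g. {"action": 0.9, "smoke": 0.3}
    # Step 3: judge abnormality by comparing each confidence to its threshold.
    abnormal = [k for k, c in features.items() if c < thresholds[k]]
    # Step 4: perform the alarm operation when any feature is abnormal.
    return ("ALARM", abnormal) if abnormal else ("OK", [])

dummy_model = lambda frames: {"action": 0.9, "smoke": 0.3}
print(security_monitor([], dummy_model, {"action": 0.5, "smoke": 0.5}))
```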
Memory 420 may include primarily a program storage area and a data storage area, wherein the program storage area may store an operating system, at least one application program required for functionality; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 420 may further include memory remotely located with respect to processor 410, which may be connected to a server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the server. The output device 440 may include a display device such as a display screen.
Example five
A fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are for performing a deep learning-based security monitoring method, the method comprising:
acquiring video data of a target to be monitored in a monitoring area;
analyzing the video data through a first model trained in advance to acquire monitoring characteristic information of a target to be monitored in a monitoring area;
judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
and when the target to be monitored is abnormal, alarm operation is carried out.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the above method operations, and may also perform the related operations in the deep learning based security monitoring method provided in any embodiment of the present invention.
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by means of software together with the necessary general-purpose hardware, and of course also by hardware alone, although in many cases the former is the preferred embodiment. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods of the embodiments of the present invention.
It should be noted that, in the embodiment of the deep learning based safety monitoring system, each unit and module included are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. A safety monitoring method based on deep learning, comprising:
acquiring video data of a target to be monitored in a monitoring area;
analyzing the video data through a first model trained in advance to acquire monitoring characteristic information of the target to be monitored in a monitoring area;
judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
when the target to be monitored is abnormal, alarming operation is carried out;
the monitoring characteristic information comprises one or more of monitoring target action information, monitoring target stay time information, smoke detection information and monitoring target clothing information;
the analyzing the video data through the first model trained in advance to obtain the monitoring feature information of the target to be monitored in the monitoring area includes:
training the convolutional neural network on the video data to obtain a first model corresponding to the analysis of the monitoring target action information, the monitoring target stay time information, the smoke detection information and the monitoring target clothing information;
the analyzing the video data through the first model trained in advance to obtain the monitoring feature information of the target to be monitored in the monitoring area includes:
identifying the video data based on the monitoring target stay time information by using the first model so as to acquire the monitoring target stay time information of the target to be monitored;
calculating a second confidence coefficient of the monitoring target stay time information according to a second preset weight value and the monitoring target stay time information;
judging the stay state of the target to be monitored according to the preset parameter threshold value of the stay time information of the target to be monitored and the second confidence coefficient;
the monitoring target stay time information is identified and determined by monitoring and tracking the monitoring video image and preferentially selecting the key frame image.
2. The method according to claim 1, wherein the analyzing the video data by the first model trained in advance to obtain the monitoring feature information of the object to be monitored in the monitoring area comprises:
identifying the video data based on the monitoring target action information by using the first model so as to acquire the monitoring target action information of the target to be monitored;
calculating a first confidence coefficient of the monitoring target action information according to a first preset weight value;
and judging the action state of the target to be monitored according to the preset parameter threshold value of the action information of the target to be monitored and the first confidence coefficient.
3. The deep learning based security monitoring method according to claim 1, wherein the analyzing the video data by the first model trained in advance to obtain the monitoring feature information of the object to be monitored in the monitoring area further comprises:
using the first model to identify the video data based on the smoke detection information so as to acquire the smoke detection information of the object to be monitored;
calculating the smoke concentration according to the smoke detection information and determining the smoke smoking action of the area where the smoke is and the target to be monitored;
calculating a third confidence coefficient of the smoke detection information according to the smoke concentration, the region where the smoke is located and the smoking action of the object to be monitored;
judging the smoking state of the target to be monitored according to the preset parameter threshold value of the smoke detection information and the third confidence coefficient;
the smoke concentration, the region where the smoke is located and the confidence of the smoking action of the object to be monitored are determined by detecting the smoke concentration, the region where the smoke is located and the smoking action of the object to be monitored through different neural networks.
4. The deep learning based security monitoring method according to claim 1, wherein the analyzing the video data by the first model trained in advance to obtain the monitoring feature information of the object to be monitored in the monitoring area further comprises:
using the first model to identify the video data based on the monitoring target clothing information so as to acquire the monitoring target clothing information of the target to be monitored;
calculating a fourth confidence coefficient of the monitoring target clothing information according to the monitoring target clothing information and a preset weight value of the monitoring target clothing information;
and judging the dressing state of the target to be monitored according to the preset parameter threshold value of the clothing information of the target to be monitored and the fourth confidence coefficient.
5. The deep learning-based security monitoring method according to any one of claims 1 to 4, wherein the determining whether the object to be monitored has an abnormality according to the state of the object to be monitored includes:
if the action state of the target to be monitored is fighting, the target to be monitored is abnormal;
if the stay state of the target to be monitored is too long, the target to be monitored is abnormal;
if the smoking state of the object to be monitored is smoking, the object to be monitored is abnormal;
if the dressing state of the target to be monitored is abnormal dressing, the target to be monitored is abnormal.
6. A deep learning based security monitoring system, comprising:
the first acquisition module is used for acquiring video data of a target to be monitored in the monitoring area;
the second acquisition module is used for analyzing the video data through a first model trained in advance so as to acquire monitoring characteristic information of the target to be monitored in a monitoring area;
the judging module is used for judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
the alarm module is used for carrying out alarm operation when the object to be monitored is abnormal;
the monitoring characteristic information comprises one or more of monitoring target action information, monitoring target stay time information, smoke detection information and monitoring target clothing information;
the training module is used for training the convolutional neural network on the video data to obtain a first model corresponding to the analysis of the monitoring target action information, the monitoring target stay time information, the smoke detection information and the monitoring target clothing information;
the first training unit is used for identifying the video data based on the monitoring target stay time length information by using the first model so as to acquire monitoring target stay time length information of the target to be monitored;
the first calculating unit is used for calculating a second confidence coefficient of the monitoring target stay time length information according to a second preset weight value and the monitoring target stay time length information;
the first judging unit is used for judging the stay state of the object to be monitored according to the preset parameter threshold value of the stay time information of the monitoring object and the second confidence coefficient;
the monitoring target stay time information is identified and determined by monitoring and tracking the monitoring video image and preferentially selecting the key frame image.
7. A server comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the deep learning based security monitoring method of any one of claims 1-5 when the computer program is executed.
8. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the deep learning based security monitoring method of any of claims 1-5.
CN201911165549.5A 2019-11-25 2019-11-25 Safety monitoring method, system, server and storage medium based on deep learning Active CN111126153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911165549.5A CN111126153B (en) 2019-11-25 2019-11-25 Safety monitoring method, system, server and storage medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911165549.5A CN111126153B (en) 2019-11-25 2019-11-25 Safety monitoring method, system, server and storage medium based on deep learning

Publications (2)

Publication Number Publication Date
CN111126153A CN111126153A (en) 2020-05-08
CN111126153B true CN111126153B (en) 2023-07-21

Family

ID=70496626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911165549.5A Active CN111126153B (en) 2019-11-25 2019-11-25 Safety monitoring method, system, server and storage medium based on deep learning

Country Status (1)

Country Link
CN (1) CN111126153B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113079194B (en) * 2020-05-22 2022-08-05 江苏濠汉信息技术有限公司 Seal appearance monitored control system based on vehicle state analysis
CN115460433B (en) * 2021-06-08 2024-05-28 京东方科技集团股份有限公司 Video processing method and device, electronic equipment and storage medium
CN113609937B (en) * 2021-07-24 2023-12-22 全图通位置网络有限公司 Emergency processing method, system and storage medium for urban rail transit
CN113889287A (en) * 2021-10-19 2022-01-04 成都万维科技有限责任公司 Data processing method, device, system and storage medium
CN114885119A (en) * 2022-03-29 2022-08-09 西北大学 Intelligent monitoring alarm system and method based on computer vision
CN117253333A (en) * 2023-11-20 2023-12-19 深圳市美安科技有限公司 Fire camera shooting detection device, fire detection alarm method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013125469A (en) * 2011-12-15 2013-06-24 Sogo Keibi Hosho Co Ltd Security device and security action switching method
CN105788364A (en) * 2014-12-25 2016-07-20 中国移动通信集团公司 Early warning information publishing method and early warning information publishing device
CN106530331A (en) * 2016-11-23 2017-03-22 北京锐安科技有限公司 Video monitoring system and method
JP2019149039A (en) * 2018-02-27 2019-09-05 パナソニックIpマネジメント株式会社 Monitoring system and monitoring method

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8237771B2 (en) * 2009-03-26 2012-08-07 Eastman Kodak Company Automated videography based communications
JP2010246000A (en) * 2009-04-09 2010-10-28 Panasonic Corp Video search reproduction device
CN101778260B (en) * 2009-12-29 2012-01-04 公安部第三研究所 Method and system for monitoring and managing videos on basis of structured description
US9292533B2 (en) * 2010-06-08 2016-03-22 Dell Products L.P. Systems and methods for improving storage efficiency in an information handling system
US8616673B2 (en) * 2010-10-29 2013-12-31 Eastman Kodak Company Method of controlling print density
CN102572215A (en) * 2011-12-14 2012-07-11 深圳市贝尔信智能系统有限公司 City-class visual video analysis method and server
CN102665071B (en) * 2012-05-14 2014-04-09 安徽三联交通应用技术股份有限公司 Intelligent processing and search method for social security video monitoring images
JP6398979B2 (en) * 2013-08-23 2018-10-03 日本電気株式会社 Video processing apparatus, video processing method, and video processing program
US9501693B2 (en) * 2013-10-09 2016-11-22 Honda Motor Co., Ltd. Real-time multiclass driver action recognition using random forests
CN105578126A (en) * 2014-11-11 2016-05-11 杜向阳 Monitoring camera automatic alarm system
US10878342B2 (en) * 2017-03-30 2020-12-29 Intel Corporation Cloud assisted machine learning
CN108734055B (en) * 2017-04-17 2021-03-26 杭州海康威视数字技术股份有限公司 Abnormal person detection method, device and system
CN108021891B (en) * 2017-12-05 2020-04-14 广州大学 Vehicle environment identification method and system based on combination of deep learning and traditional algorithm
CN108288032B (en) * 2018-01-08 2020-11-10 深圳市腾讯计算机系统有限公司 Action characteristic acquisition method, device and storage medium
CN108764148B (en) * 2018-05-30 2020-03-10 东北大学 Multi-region real-time action detection method based on monitoring video
CN108960065B (en) * 2018-06-01 2020-11-17 浙江零跑科技有限公司 Driving behavior detection method based on vision
CN109241946A (en) * 2018-10-11 2019-01-18 平安科技(深圳)有限公司 Abnormal behaviour monitoring method, device, computer equipment and storage medium
CN109614948B (en) * 2018-12-19 2020-11-03 北京锐安科技有限公司 Abnormal behavior detection method, device, equipment and storage medium
CN110033007B (en) * 2019-04-19 2022-08-09 福州大学 Pedestrian clothing attribute identification method based on depth attitude estimation and multi-feature fusion
CN110163171B (en) * 2019-05-27 2020-07-31 北京字节跳动网络技术有限公司 Method and device for recognizing human face attributes
CN110414313B (en) * 2019-06-06 2024-02-13 平安科技(深圳)有限公司 Abnormal behavior alarming method, device, server and storage medium
CN110472492A (en) * 2019-07-05 2019-11-19 平安国际智慧城市科技股份有限公司 Target organism detection method, device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013125469A (en) * 2011-12-15 2013-06-24 Sogo Keibi Hosho Co Ltd Security device and security action switching method
CN105788364A (en) * 2014-12-25 2016-07-20 中国移动通信集团公司 Early warning information publishing method and early warning information publishing device
CN106530331A (en) * 2016-11-23 2017-03-22 北京锐安科技有限公司 Video monitoring system and method
JP2019149039A (en) * 2018-02-27 2019-09-05 パナソニックIpマネジメント株式会社 Monitoring system and monitoring method

Also Published As

Publication number Publication date
CN111126153A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111126153B (en) Safety monitoring method, system, server and storage medium based on deep learning
KR102153591B1 (en) Method and apparatus for detecting garbage dumping action in real time on video surveillance system
Sigari et al. A driver face monitoring system for fatigue and distraction detection
CN111325089A (en) Method and apparatus for tracking object
US20210124914A1 (en) Training method of network, monitoring method, system, storage medium and computer device
CN112163469B (en) Smoking behavior recognition method, system, equipment and readable storage medium
CN109887234B (en) Method and device for preventing children from getting lost, electronic equipment and storage medium
US20150071492A1 (en) Abnormal behaviour detection
CN109766755A (en) Face identification method and Related product
US20130335571A1 (en) Vision based target tracking for constrained environments
CN111860430A (en) Identification method and device of fighting behavior, storage medium and electronic device
CN112071084A (en) Method and system for judging illegal parking by utilizing deep learning
WO2018171875A1 (en) Control device, system and method for determining the perceptual load of a visual and dynamic driving scene
KR20160118830A (en) Method for recognizing object using pressure sensor
CN114218992A (en) Abnormal object detection method and related device
CN115861915A (en) Fire fighting access monitoring method, fire fighting access monitoring device and storage medium
CN117593792A (en) Abnormal gesture detection method and device based on video frame
JP7263094B2 (en) Information processing device, information processing method and program
CN114463779A (en) Smoking identification method, device, equipment and storage medium
CN113792700B (en) Storage battery car in-box detection method and device, computer equipment and storage medium
CN109815921A (en) The prediction technique and device of the class of activity in hydrogenation stations
CN113723355A (en) Target monitoring method and device, storage medium and electronic device
CN114663972A (en) Target marking method and device based on motion difference
JP2018142137A (en) Information processing device, information processing method and program
KR20140037354A (en) System for abandoned and stolen object detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant