CN110300257B - Face tracking security monitoring system based on deep learning and use method thereof

Face tracking security monitoring system based on deep learning and use method thereof

Info

Publication number
CN110300257B
CN110300257B (application CN201910511984.2A)
Authority
CN
China
Prior art keywords
module
sliding ring
gas sensor
deep learning
monitoring system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910511984.2A
Other languages
Chinese (zh)
Other versions
CN110300257A (en)
Inventor
陈公兴
赖保均
李升凯
邵经纬
叶青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianhe College of Guangdong Polytechnic Normal University
Original Assignee
Tianhe College of Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianhe College of Guangdong Polytechnic Normal University filed Critical Tianhe College of Guangdong Polytechnic Normal University
Priority to CN201910511984.2A
Publication of CN110300257A
Application granted
Publication of CN110300257B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B19/00 - Alarms responsive to two or more different undesired or abnormal conditions, e.g. burglary and fire, abnormal temperature and abnormal rate of flow
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B7/00 - Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
    • G08B7/06 - Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/61 - Control of cameras or camera modules based on recognised objects
    • H04N23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/66 - Remote control of cameras or camera parts, e.g. by remote control devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188 - Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Abstract

The invention provides a face tracking security monitoring system based on deep learning and a use method thereof. The system comprises a monitor body, a control module, a camera module, a gas sensor module, a power supply circuit module, a networking module and a steering engine module. The monitor body comprises a base, a sliding ring and a protective cover; a plurality of through holes are arranged on the circumferential surface of the base; the upper top surface of the sliding ring is flush with the upper top surface of the base; the protective cover is arranged in the sliding ring, is slidably connected with it, and protrudes on the side facing the sliding ring; the protective cover is provided with a through groove perpendicular to the upper top surface of the sliding ring; and the inner wall of the protective cover is provided with a fixing frame that is vertically and fixedly connected with the inner wall of the upper top surface of the protective cover. By detecting with the gas sensor module, the invention can give an audible and visual alarm and effectively protect people's lives; deep learning is used, so the recognition algorithm is more accurate; and a power-off protection function is added, so that monitoring can continue even if the power supply is maliciously cut off.

Description

Face tracking security monitoring system based on deep learning and use method thereof
Technical Field
The invention relates to the technical field of monitoring, and in particular to a face tracking security monitoring system based on deep learning and a use method thereof.
Background
A camera, also known as a computer camera, a computer eye or an electronic eye, is a video input device widely used in video conferencing, telemedicine and real-time monitoring; the monitoring camera is its most widespread application.
As disclosed in CN206097257U, a single monitoring camera in the current prior art suffers from blind zones: the camera's monitoring range is limited and it cannot provide all-round monitoring.
Another typical prior-art security system, such as US201816030559, generally achieves omnidirectional monitoring by adding deployment points, which often requires more equipment, complex wiring and higher cost.
Furthermore, closed-circuit television monitoring systems disclosed in the prior art, such as KR20180022318, are overly simple: they perform only a single monitoring task, cannot carry out complex image processing on the device, and rely on dedicated software to analyze the footage afterwards, which consumes a great deal of time. These existing community monitoring systems are usually passive: countermeasures are taken only after a security incident such as a burglary has occurred, by which time it is likely too late, and the victim's person, property and state of mind have already been harmed. Meanwhile, the monitored equipment cannot perform complex control operations, complex image processing or accurate face recognition.
Based on extensive retrieval and analysis by the applicant, the present security monitoring system improves on or modifies the prior art.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a face tracking security monitoring system based on deep learning and a use method thereof, which enlarge the monitoring range, identify images more accurately, and allow the rotation of the equipment to be controlled through a mobile terminal.
The technical solution adopted by the invention to solve the above technical problem is as follows:
A face tracking security monitoring system based on deep learning comprises a monitor body, a control module, a camera module, a gas sensor module, a power circuit module, a networking module and a steering engine module. The monitor body comprises a base, a sliding ring and a protective cover. A plurality of through holes are formed in the circumferential surface of the base, and the upper top surface of the sliding ring is flush with the upper top surface of the base. The protective cover is arranged in the sliding ring, is slidably connected with it, and protrudes on the side facing the sliding ring. A through groove is formed in the protective cover, perpendicular to the upper top surface of the sliding ring. A fixing frame is arranged on the inner wall of the protective cover and is vertically and fixedly connected with the inner wall of the upper top surface of the protective cover. A driving mechanism is arranged on the circumference of the protective cover close to the sliding ring; teeth are provided on the inner wall of the sliding ring, and the driving mechanism meshes with the teeth. The control module, the camera module, the gas sensor module, the power circuit module and the steering engine module are each fixedly mounted on a movable plate, and the movable plate is fixedly connected with the fixing frame.
Optionally, the camera module comprises a tracking device and a camera group. The tracking device comprises an infrared sensor, a rotating seat and a second driving mechanism; the camera group is arranged above the tracking device and extends out towards the through groove of the protective cover; the infrared sensor is fixedly arranged on the rotating seat; and the second driving mechanism is in driving connection with the rotating seat.
Optionally, the gas sensor module comprises a base and a gas sensor group. The base is fixedly connected with the movable plate, and the gas sensor group is arranged directly above the base. The base is provided with a third driving mechanism, which is movably connected with the gas sensor group.
Optionally, the control module is connected to the camera module, the gas sensor module, the power circuit module, the networking module and the steering engine module respectively.
In addition, the invention also provides a use method of the face tracking security monitoring system based on deep learning. The use method comprises a training method and a control method, and the training method comprises the following steps:
Step 1: initialize all filters and parameters/weights with random values.
Step 2: the neural network takes a training image as input and obtains the output probability of each class through a forward propagation step.
Step 3: calculate the total error of the output layer:
Total error = Σ ½(target probability − output probability)²
Step 4: use back propagation to calculate the error gradients of all weights in the network, use gradient descent to update all filter values/weights and parameter values so as to minimize the output error, adjust the total error according to each weight's contribution, classify specific images correctly by reducing the output error, and update the filter matrices and connection weights.
Step 5: repeat steps 2 to 4 for all images in the training set; the convolutional neural network is thereby trained, all its weights and parameters are optimized, and the images in the training set are classified correctly. A minimal training-loop sketch follows.
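For illustration only, the following Python/PyTorch sketch maps the five steps above onto a concrete training loop. The network architecture, the 64×64 input size, the four output classes, the SGD learning rate and the number of epochs are assumptions chosen for the example; the patent does not specify them.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Step 1: filters and weights are initialized with random values when the
# layers are constructed (PyTorch's default initialization).
class SmallCNN(nn.Module):
    def __init__(self, num_classes=4):                       # 4 classes, as in the error sum above
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 RGB input

    def forward(self, x):
        x = self.features(x)                                  # convolution, ReLU, pooling
        return self.classifier(x.flatten(1))                  # fully connected layer

def train(model, loader, epochs=10, lr=1e-3):
    opt = optim.SGD(model.parameters(), lr=lr)                # gradient descent (Step 4)
    for _ in range(epochs):                                   # Step 5: repeat over the training set
        for images, targets in loader:
            probs = torch.softmax(model(images), dim=1)       # Step 2: forward propagation
            one_hot = nn.functional.one_hot(targets, 4).float()
            loss = 0.5 * ((one_hot - probs) ** 2).sum()       # Step 3: sum of 1/2*(target - output)^2
            opt.zero_grad()
            loss.backward()                                   # Step 4: back-propagate error gradients
            opt.step()                                        # update filter values and connection weights
```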
Optionally, the control method includes: when the camera group does not detect any person movement, it keeps its current position, detects the image information at its location in real time, and transmits the detection data to the control module.
Optionally, the control method includes: when detection data from the camera module is present, the camera module is started and the control module controls the tracking device to monitor the person's movements in real time.
Optionally, the control method includes: during real-time monitoring by the control module, the camera module takes snapshots in real time and sets the snapshot location as the monitoring position.
Optionally, the control method includes: the gas sensor module detects the concentration of harmful gas at its location in real time and transmits the data to the controller; when the detected concentration is smaller than a first set value, the controller remains in the real-time detection state.
Optionally, the control method includes: when the harmful-gas concentration in the detection data is larger than the first set value and smaller than a second set value, an early-warning state is started;
and when the temperature in the detection data is higher than the second set value, the two consecutive gas concentration values corresponding to each position in the position information are calculated, those positions are marked as emergency positions, and an alarm and reminder are issued.
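As a purely illustrative sketch of the threshold logic described above, the following Python function maps a harmful-gas concentration onto the three states (real-time detection, early warning, emergency). The numeric threshold values and the alarm callback are hypothetical placeholders, not values given in the patent; in a real deployment the two set values would be calibrated to the specific gas sensor group.

```python
# Hypothetical thresholds; the patent only calls them the "first set value"
# and the "second set value" without giving numbers.
FIRST_SET_VALUE = 50.0    # ppm, assumed
SECOND_SET_VALUE = 200.0  # ppm, assumed

def classify_gas_reading(concentration_ppm):
    """Map a harmful-gas concentration onto the three states of the control method."""
    if concentration_ppm < FIRST_SET_VALUE:
        return "real_time_detection"   # below the first set value: stay in real-time detection
    if concentration_ppm < SECOND_SET_VALUE:
        return "early_warning"         # between the two set values: start the early-warning state
    return "emergency"                 # above the second set value: emergency position, alarm

def handle_reading(position_id, concentration_ppm, alarm):
    """alarm is a caller-supplied callback, e.g. an audible/visual alarm plus a phone push."""
    state = classify_gas_reading(concentration_ppm)
    if state == "emergency":
        alarm(position_id, concentration_ppm)
    return state
```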
The beneficial effects obtained by the invention are as follows: the steering engine angle is monitored and adjusted through a mobile phone, and the camera is controlled to follow a person and to acquire images or video streams containing a human face, thereby achieving face tracking and recognition; the gas sensor module detects its surroundings in real time and gives an audible and visual alarm when harmful gas is encountered, reminding anyone entering the hazardous area to evacuate as soon as possible and effectively protecting their lives; face recognition is performed through a neural network framework with deep learning, which makes the recognition algorithm more accurate, gives better robustness when the acquisition error is large, and avoids flooding the user with false alarms; the tracking device cooperates with the camera module and the controller so that the device recognizes faces and automatically follows the human body, leaving no blind spot in the monitoring range; a power-off protection function is added, so that monitoring continues even if the power supply is maliciously cut; people flow can be counted, providing data for commercial analysis; and when a stranger intrudes or gas leaks, the monitoring system immediately gives an audible and visual alarm and pushes it to the user's mobile phone, so that the user can view the picture on the phone and respond in time.
Drawings
The invention will be further understood from the following description in conjunction with the accompanying drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments. Like reference numerals designate corresponding parts throughout the different views.
Fig. 1 is a control flow of a face tracking security monitoring system based on deep learning according to the present invention.
Fig. 2 is a schematic structural diagram of a face tracking security monitoring system based on deep learning according to the present invention.
Fig. 3 is a front view of a face tracking security monitoring system based on deep learning according to the present invention.
Fig. 4 is a rear view of the face tracking security monitoring system based on deep learning according to the present invention.
Fig. 5 is a top view of a face tracking security monitoring system based on deep learning according to the present invention.
Fig. 6 is a bottom view of the face tracking security monitoring system based on deep learning of the present invention.
Fig. 7 is a schematic structural diagram of the protective cover of the face tracking security monitoring system based on deep learning according to the present invention.
Fig. 8 is a schematic structural diagram of the sliding ring of the face tracking security monitoring system based on deep learning of the present invention.
Fig. 9 is a block diagram of a face tracking security monitoring system based on deep learning according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to embodiments thereof; it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. Other systems, methods, and/or features of the present embodiments will become apparent to those skilled in the art upon review of the following detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims. Additional features of the disclosed embodiments are described in, and will be apparent from, the detailed description that follows.
The same or similar reference numerals in the drawings of the embodiments of the present invention denote the same or similar components. In the description of the present invention, it should be understood that orientation or positional terms such as "upper", "lower", "left" and "right" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or component referred to must have a specific orientation or be constructed and operated in a specific orientation; therefore, the terms describing positional relationships in the drawings are illustrative only and are not to be construed as limiting this patent, and their specific meaning will be understood by those of ordinary skill in the art according to the specific circumstances.
Embodiment 1: a face tracking security monitoring system based on deep learning comprises a monitor body, a control module, a camera module, a gas sensor module, a power circuit module, a networking module and a steering engine module. The monitor body comprises a base 2, a sliding ring 4 and a protective cover 1. A plurality of through holes 6 are arranged on the circumferential surface of the base 2, and the upper top surface of the sliding ring 4 is flush with the upper top surface of the base 2. The protective cover 1 is arranged in the sliding ring 4, is slidably connected with it, and protrudes on the side facing the sliding ring 4. A through groove 3 is arranged on the protective cover 1, perpendicular to the upper top surface of the sliding ring 4. A fixing frame 7 is arranged on the inner wall of the protective cover 1 and is vertically and fixedly connected with the inner wall of the upper top surface of the protective cover 1. A driving mechanism is arranged on the circumference of the protective cover 1 close to the sliding ring 4; the inner wall of the sliding ring 4 is provided with teeth, and the driving mechanism meshes with the teeth. The control module, the camera module, the gas sensor module, the power circuit module and the steering engine module are each fixedly mounted on a movable plate, and the movable plate is fixedly connected with the fixing frame 7. The camera module comprises a tracking device and a camera group; the tracking device comprises an infrared sensor, a rotating seat 8 and a second driving mechanism 10; the camera group is arranged above the tracking device and extends out towards the through groove 3 of the protective cover 1; the infrared sensor is fixedly arranged on the rotating seat 8; and the second driving mechanism 10 is in driving connection with the rotating seat 8. The gas sensor module comprises a base and a gas sensor group; the base is fixedly connected with the movable plate; the gas sensor group is arranged directly above the base; the base is provided with a third driving mechanism, which is movably connected with the gas sensor group. The control module is connected with the camera module, the gas sensor module, the power circuit module, the networking module and the steering engine module respectively.
In addition, the invention also provides a use method of the face tracking security monitoring system based on deep learning. The use method comprises a training method and a control method. The training method comprises the following steps. Step 1: initialize all filters and parameters/weights with random values. Step 2: the neural network takes a training image as input and obtains the output probability of each class through a forward propagation step. Step 3: calculate the total error of the output layer by summing over all 4 classes: total error = Σ ½(target probability − output probability)². Step 4: use back propagation to calculate the error gradients of all weights in the network, use gradient descent to update all filter values/weights and parameter values so as to minimize the output error, adjust the total error according to each weight's contribution, classify specific images correctly by reducing the output error, and update the filter matrices and connection weights. Step 5: repeat steps 2 to 4 for all images in the training set; the convolutional neural network is thereby trained, all its weights and parameters are optimized, and the images in the training set are classified correctly. In addition, the control method includes: when the camera group does not detect any person movement, it keeps its current position, detects the image information at its location in real time, and transmits the detection data to the control module. When detection data from the camera module is present, the camera module is started and the control module controls the tracking device to monitor the person's movements in real time. During real-time monitoring by the control module, the camera module takes snapshots in real time and sets the snapshot location as the monitoring position. The gas sensor module detects the harmful-gas concentration at its location in real time and transmits the data to the controller; when the detected concentration is smaller than a first set value, the controller remains in the real-time detection state. When the harmful-gas concentration in the detection data is larger than the first set value and smaller than a second set value, an early-warning state is started; and when the temperature in the detection data is higher than the second set value, the two consecutive gas concentration values corresponding to each position in the position information are calculated, those positions are marked as emergency positions, and an alarm and reminder are issued.
Embodiment 2: a face tracking security monitoring system based on deep learning comprises a monitor body, a control module, a camera module, a gas sensor module, a power circuit module, a networking module and a steering engine module. The monitor body comprises a base 2, a sliding ring 4 and a protective cover 1; a plurality of through holes 6 are arranged on the circumferential surface of the base 2; the upper top surface of the sliding ring 4 is flush with the upper top surface of the base 2; the protective cover 1 is arranged in the sliding ring 4, is slidably connected with it, and protrudes on the side facing the sliding ring 4; a through groove 3 is arranged on the protective cover 1, perpendicular to the upper top surface of the sliding ring 4; a fixing frame 7 is arranged on the inner wall of the protective cover 1 and is vertically and fixedly connected with the inner wall of the upper top surface of the protective cover 1; a driving mechanism is arranged on the circumference of the protective cover 1 close to the sliding ring 4; the inner wall of the sliding ring 4 is provided with teeth, and the driving mechanism meshes with the teeth; the control module, the camera module, the gas sensor module, the power circuit module and the steering engine module are each fixedly mounted on the movable plate, and the movable plate is fixedly connected with the fixing frame 7. Specifically, the steering engine module controls the angle through which the sliding ring 4 and the protective cover rotate. After the networking module has been paired with a mobile terminal such as a mobile phone, the device can be rotated under the control of the mobile terminal; in this process, the mobile terminal controls the orientation of the sliding ring 4 and the protective cover 1, and specifically the orientation of the through groove 3. The camera module and the gas sensor module are arranged in the through groove 3 and extend out in the direction away from the protective cover 1, which makes real-time detection and monitoring more convenient. In addition, the teeth on the inner side of the sliding ring 4, the driving mechanism and the steering engine module cooperate so that the protective cover 1 can rotate about its own axis, achieving real-time monitoring. While the protective cover 1 rotates, the fixing frame rotates with it and drives the gas sensor module, the camera module and the power module on the fixing frame to rotate synchronously.
The camera module comprises a tracking device and a camera group. The tracking device comprises an infrared sensor, a rotating seat 8 and a second driving mechanism 10; the camera group is arranged above the tracking device and extends out towards the through groove 3 of the protective cover 1; the infrared sensor is fixedly arranged on the rotating seat 8; and the second driving mechanism 10 is in driving connection with the rotating seat 8. Specifically, semicircular teeth are arranged under the rotating seat and extend out in the direction away from the infrared sensor; the second driving mechanism 10 is arranged under the teeth and is connected with a gear, and the gear meshes with the teeth. In addition, the rotating seat 8 is provided with a protrusion 11, the bottom plate 9 is symmetrically provided with two supporting plates, the two supporting plates face each other to form a groove, and the protrusion 11 is hinged in the groove. When the infrared sensor of the tracking device detects a person walking, it transmits the signal to the control module. The control module then drives the driving mechanism to rotate the teeth, and the infrared sensor rotates about the axis, thereby following the person's movement.
The gas sensor module comprises a base and a gas sensor group. The base is fixedly connected with the movable plate, the gas sensor group is arranged directly above the base, and the base is provided with a third driving mechanism that is movably connected with the gas sensor group. Specifically, each gas sensor module is assigned corresponding position information; when the gas sensor device detects that the harmful-gas concentration reaches a certain threshold, it transmits the concentration value together with the position information to the control module. A base is arranged between the movable plate and the fixing frame, which makes it convenient to fixedly connect the gas sensor group to the base. In this embodiment the base is further provided with a third driving mechanism, so that the base can rotate relative to the movable plate; during rotation, the gas sensor group can detect gas concentrations at different positions, improving working efficiency. The gas sensor group can detect a variety of gases, such as liquefied gas, benzene, alkanes, alcohol and hydrogen, and is suitable for air-quality monitoring in a home environment; it is particularly sensitive to alkane-based fumes such as natural gas and liquefied petroleum gas. Working with the rest of the monitoring system, it can sense potential threats that the camera cannot observe, making up for the monitoring system's shortcomings in gas sensing.
The control module is connected with the camera module, the gas sensor module, the power circuit module, the networking module and the steering engine module respectively. These connections make the whole device operate as an integrated system with high efficiency. The power module provides an external power interface and a 5 V lithium-polymer battery as a standby supply; while external power is available, a switching transistor in the power module disconnects the standby supply, and if the external supply is cut unexpectedly, the monitoring system switches to the standby supply, improving endurance and anti-interference capability. The steering engine module drives the rotation of the whole device. Specifically, the steering engine module comprises the first driving mechanism, the second driving mechanism and the third driving mechanism, and it works in the same way as those mechanisms, so the description is not repeated here. The steering engine (servo) of the steering engine module has a small size, fast response and accurate rotation angle, making it suitable for this monitoring system. The networking module connects the mobile terminal to the monitoring system, so the mobile terminal can control the rotation of the monitoring system, facilitating tracking and monitoring of a specific target. In addition, the networking module allows all devices to be networked with one another, forming a large monitoring network that shares information and achieves efficient security monitoring.
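To make the interaction between the networking module and the steering engine module concrete, here is a minimal sketch that accepts a rotation command from a paired mobile terminal and drives a hobby servo. It assumes a Raspberry-Pi-class controller with the RPi.GPIO library, a plain TCP/JSON command format, GPIO pin 18 and the usual 50 Hz servo signal; none of these details are specified in the patent.

```python
import json
import socket

import RPi.GPIO as GPIO    # assumes a Raspberry-Pi-class controller; the patent names no specific MCU

SERVO_PIN = 18              # hypothetical GPIO pin carrying the steering-engine (servo) signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)   # standard 50 Hz hobby-servo signal
pwm.start(7.5)                  # roughly the center position

def set_angle(angle_deg):
    """Rotate the protective cover / through groove to the requested heading (0-180 degrees)."""
    duty = 2.5 + (angle_deg / 180.0) * 10.0   # common 0.5-2.5 ms pulse-width mapping, assumed
    pwm.ChangeDutyCycle(duty)

def serve_mobile_commands(port=9000):
    """Accept JSON commands such as {"cmd": "rotate", "angle": 90} from the paired phone."""
    with socket.socket() as srv:
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                msg = json.loads(conn.recv(1024).decode())
                if msg.get("cmd") == "rotate":
                    set_angle(float(msg["angle"]))
```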
In addition, the invention provides a use method of the face tracking security monitoring system based on deep learning. The use method comprises a training method and a control method; specifically, the training method comprises the following steps:
Step 1: initialize all filters and parameters/weights with random values.
Step 2: the neural network takes a training image as input and obtains the output probability of each class through the forward propagation step, i.e. convolution, ReLU and pooling operations followed by forward propagation through the fully connected layer.
Step 3: calculate the total error of the output layer by summing over all 4 classes:
Total error = Σ ½(target probability − output probability)²
Step 4: use back propagation to calculate the error gradients of all weights in the network, use gradient descent to update all filter values/weights and parameter values so as to minimize the output error, adjust the total error according to each weight's contribution, classify specific images correctly by reducing the output error, and update the filter matrices and connection weights.
Step 5: repeat steps 2 to 4 for all images in the training set; the convolutional neural network is thereby trained, all its weights and parameters are optimized, and the images in the training set are classified correctly. Specifically, suppose the output probability for one ship image is [0.2, 0.4, 0.1, 0.3]; because the weights are assigned randomly for the first training sample, this output is also effectively random. The total error is adjusted according to each weight's contribution to it. When the same image is input again, the output probability may become [0.1, 0.1, 0.7, 0.1], which is closer to the target vector [0, 0, 1, 0]. This means the network has learned to classify this particular image correctly by adjusting its weights/filters so as to reduce the output error. Parameters such as the number of filters, the filter sizes and the network structure are fixed before Step 1 and do not change during training; only the filter matrices and connection weights are updated. Through the above steps the convolutional neural network is trained, which means that all weights and parameters in it are optimized and the images in the training set can be classified correctly. On this basis, an image-based face recognition technique is also used: a camera acquires an image or video stream containing a face, the face is automatically detected and tracked in the image, and a series of face-related operations, commonly referred to as portrait recognition or facial recognition, is then performed on the detected face. The recognition steps are: acquire and detect the face image; preprocess the face image; extract, match and identify the facial image features. In OpenCV, two kinds of features (that is, two methods) are mainly used for face detection: Haar features and LBP features. Face detection is performed with a trained classifier in XML format, as in the sketch below.
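A minimal OpenCV sketch of the face detection step just described, using a pre-trained Haar cascade in XML format; the camera index 0 and the stock classifier file are the OpenCV defaults, not values from the patent (an LBP cascade such as lbpcascade_frontalface.xml can be loaded the same way).

```python
import cv2

# Load a pre-trained classifier in XML format shipped with OpenCV (a Haar cascade here;
# an LBP cascade can be substituted by loading its XML file instead).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)                            # camera group / video stream containing faces
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # preprocessing of the face image
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                       # detected face regions
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```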
In addition, the control method includes: when the camera group does not detect any person movement, it keeps its current position, detects the image information at its location in real time, and transmits the detection data to the control module. In particular, the control method includes the following: when detection data from the camera module is present, the camera module is started and the control module controls the tracking device to monitor the person's movements in real time; during real-time monitoring by the control module, the camera module takes snapshots in real time and sets the snapshot location as the monitoring position; the gas sensor module detects the harmful-gas concentration at its location in real time and transmits the data to the controller, and when the detected concentration is smaller than a first set value, the controller remains in the real-time detection state; when the harmful-gas concentration in the detection data is larger than the first set value and smaller than a second set value, an early-warning state is started; and when the temperature in the detection data is higher than the second set value, the two consecutive gas concentration values corresponding to each position in the position information are calculated, those positions are marked as emergency positions, and an alarm and reminder are issued. Specifically, the camera module is initially stationary; when a person moves, it rotates quickly under the control of the control module and follows the person's movement trajectory. During the following process it detects and collects various data in real time, such as the current temperature and the amount of smoke. All the data are transmitted to the control module, which compares each group of data in real time; if an abnormal condition is found after comparison, the control module raises an alarm through the networking module, as sketched below.
In summary, in the face tracking security monitoring system based on deep learning and its control method, the steering engine angle is monitored and adjusted through a mobile phone, and the camera is controlled to follow a person and to acquire images or video streams containing a human face, thereby achieving face tracking and recognition; the gas sensor module detects its surroundings in real time and gives an audible and visual alarm when harmful gas is encountered, reminding anyone entering the hazardous area to evacuate as soon as possible and effectively protecting their lives; face recognition is performed through a neural network framework with deep learning, which makes the recognition algorithm more accurate, gives better robustness when the acquisition error is large, and avoids flooding the user with false alarms; the tracking device cooperates with the camera module and the controller so that the device recognizes faces and automatically follows the human body, leaving no blind spot in the monitoring range; a power-off protection function is added, so that monitoring continues even if the power supply is maliciously cut; people flow can be counted, providing data for commercial analysis; and when a stranger intrudes or gas leaks, the monitoring system immediately gives an audible and visual alarm and pushes it to the user's mobile phone, so that the user can view the picture on the phone and respond in time.
Although the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications may be made without departing from the scope of the invention. That is, the methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For example, in alternative configurations, the methods may be performed in an order different than that described, and/or various components may be added, omitted, and/or combined. Moreover, features described with respect to certain configurations may be combined in various other configurations, as different aspects and elements of the configurations may be combined in a similar manner. Further, elements therein may be updated as technology evolves, i.e., many elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of the exemplary configurations including implementations. However, configurations may be practiced without these specific details, for example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configuration of the claims. Rather, the foregoing description of the configurations will provide those skilled in the art with an enabling description for implementing the described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
In conclusion, it is intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that these examples are illustrative only and are not intended to limit the scope of the invention. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (10)

1. A face tracking security monitoring system based on deep learning, comprising a monitor body, a control module, a camera module, a gas sensor module, a movable plate, a power circuit module, a networking module and a steering engine module, characterized in that the monitor body comprises a base (2), a sliding ring (4) and a protective cover (1); a plurality of through holes (6) are formed in the circumferential surface of the base (2); the upper top surface of the sliding ring (4) is flush with the upper top surface of the base (2); the protective cover (1) is arranged in the sliding ring (4), is slidably connected with the sliding ring (4), and protrudes on the side facing the sliding ring (4); a through groove (3) is formed in the protective cover (1) and is perpendicular to the upper top surface of the sliding ring (4); a fixing frame (7) is arranged on the inner wall of the protective cover (1) and is vertically and fixedly connected with the inner wall of the upper top surface of the protective cover (1); a driving mechanism is arranged on the circumference of the protective cover (1) close to the sliding ring (4); teeth are arranged on the inner wall of the sliding ring (4), and the driving mechanism meshes with the teeth; the control module, the camera module, the gas sensor module, the power circuit module and the steering engine module are respectively fixedly arranged on the movable plate, and the movable plate is fixedly connected with the fixing frame (7); the camera module is arranged in the through groove (3) and extends out in the direction away from the protective cover (1).
2. The face tracking security monitoring system based on deep learning according to claim 1, wherein the camera module comprises a tracking device and a camera group, the tracking device comprises an infrared sensor, a rotating seat (8) and a second driving mechanism (10), the camera group is arranged above the tracking device and extends out towards the through groove (3) of the protective cover (1), the infrared sensor is fixedly arranged on the rotating seat (8), and the second driving mechanism (10) is in driving connection with the rotating seat (8).
3. The face tracking security monitoring system based on deep learning according to claim 1, wherein the gas sensor module comprises a base and a gas sensor group, the base is fixedly connected with the movable plate, the gas sensor group is arranged directly above the base, the base is provided with a third driving mechanism, and the third driving mechanism is movably connected with the gas sensor group.
4. The face tracking security monitoring system based on deep learning according to claim 1, wherein the control module is connected with the camera module, the gas sensor module, the power circuit module, the networking module and the steering engine module respectively.
5. A use method of a face tracking security monitoring system based on deep learning, applied to the monitoring system according to any one of claims 1 to 4, characterized in that the use method comprises a training method and a control method, the training method comprising the following steps:
Step 1: initialize all filters and parameters/weights with random values;
Step 2: the neural network takes a training image as input and obtains the output probability of each class through a forward propagation step;
Step 3: calculate the total error of the output layer:
Total error = Σ ½(target probability − output probability)²;
Step 4: use back propagation to calculate the error gradients of all weights in the network, use gradient descent to update all filter values/weights and parameter values so as to minimize the output error, adjust the total error according to each weight's contribution, classify specific images correctly by reducing the output error, and update the filter matrices and connection weights;
Step 5: repeat steps 2 to 4 for all images in the training set, so that the convolutional neural network is trained, all weights and parameters in the convolutional neural network are optimized, and the images in the training set are correctly classified.
6. The use method of the face tracking security monitoring system based on deep learning according to claim 5, wherein the control method comprises: when the camera group does not detect any person movement, keeping the current position, detecting the image information at the camera group's location in real time, and transmitting the detection data to the control module.
7. The use method of the face tracking security monitoring system based on deep learning according to claim 6, wherein the control method comprises: when detection data from the camera module is present, starting the camera module, and controlling the tracking device through the control module to monitor the person's movements in real time.
8. The use method of the face tracking security monitoring system based on deep learning according to claim 7, wherein the control method comprises: during real-time monitoring by the control module, the camera module taking snapshots in real time and setting the snapshot location as the monitoring position.
9. The use method of the face tracking security monitoring system based on deep learning according to claim 8, wherein the control method comprises: the gas sensor module detecting the concentration of harmful gas at its location in real time and transmitting the data to the controller, and when the detected concentration is smaller than a first set value, the controller staying in the real-time detection state.
10. The use method of the face tracking security monitoring system based on deep learning according to claim 9, wherein the control method comprises: when the harmful-gas concentration in the detection data is larger than the first set value and smaller than a second set value, starting an early-warning state;
and when the temperature in the detection data is higher than the second set value, calculating the two consecutive gas concentration values corresponding to each position in the position information, marking those positions as emergency positions, and giving an alarm and reminder.
CN201910511984.2A 2019-06-13 2019-06-13 Face tracking security monitoring system based on deep learning and use method thereof Active CN110300257B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910511984.2A CN110300257B (en) 2019-06-13 2019-06-13 Face tracking security monitoring system based on deep learning and use method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910511984.2A CN110300257B (en) 2019-06-13 2019-06-13 Face tracking security monitoring system based on deep learning and use method thereof

Publications (2)

Publication Number Publication Date
CN110300257A CN110300257A (en) 2019-10-01
CN110300257B (en) 2020-12-08

Family

ID=68027928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910511984.2A Active CN110300257B (en) 2019-06-13 2019-06-13 Face tracking security monitoring system based on deep learning and use method thereof

Country Status (1)

Country Link
CN (1) CN110300257B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111963838B (en) * 2020-08-14 2022-04-01 罗均海 Automatic tracking turntable base based on human body recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202309958U (en) * 2011-07-08 2012-07-04 杭州开锐电子电气有限公司 Intelligent spherical teaching camera with trace facility
CN207399399U (en) * 2017-10-16 2018-05-22 南京永辉信息科技有限公司 A kind of video monitoring equipment being easily installed
WO2018096953A1 (en) * 2016-11-22 2018-05-31 ソニー株式会社 Image pickup device, display system, and display method
CN109146849A (en) * 2018-07-26 2019-01-04 昆明理工大学 A kind of road surface crack detection method based on convolutional neural networks and image recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203840462U (en) * 2014-05-13 2014-09-17 亚忆电子(深圳)有限公司 Automatic rotary network high-definition monitoring image pick-up device
CN109873975A (en) * 2017-12-03 2019-06-11 天津捷赢科技有限公司 A kind of intelligent remote monitoring and control system


Also Published As

Publication number Publication date
CN110300257A (en) 2019-10-01


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant