CN110197158B - Security cloud system and application thereof - Google Patents

Security cloud system and application thereof

Info

Publication number
CN110197158B
CN110197158B (application CN201910469910.7A)
Authority
CN
China
Prior art keywords
positioning
face
layer
subnet
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910469910.7A
Other languages
Chinese (zh)
Other versions
CN110197158A (en)
Inventor
文武
文勇
胡振兴
李昌席
陈科鹏
陈巧丽
韦梦丽
梁夏菲
何宁英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Aige Software Technology Co.,Ltd.
Original Assignee
Guangxi Nanning Boruitong Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Nanning Boruitong Software Technology Co ltd
Priority to CN201910469910.7A
Publication of CN110197158A
Application granted
Publication of CN110197158B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639Details of the system layout
    • G08B13/19645Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over

Abstract

The invention discloses a security cloud system and an application thereof, belonging to the field of intelligent security. The system comprises a public security alarm unit, a security early-warning unit, a face recognition and positioning training unit, and a face snapshot positioning and tracking unit; the public security alarm unit and the security early-warning unit are both connected with the face recognition and positioning training unit, which in turn is connected with the face snapshot positioning and tracking unit. The face snapshot positioning and tracking unit captures faces in the scene in real time and compares them with the flagged face data of the public security system. When a person who has been flagged in the public security system is detected, early-warning or alarm information is issued according to the person's flagged grade, so that security personnel can be notified in advance. This better prevents disturbances of social order and guards against incidents such as violent medical disputes in hospitals, pyramid selling in communities, and child abduction at schools.

Description

Security cloud system and application thereof
Technical Field
The invention relates to the field of intelligent security, in particular to a security cloud system and application thereof.
Background
Many existing security systems supervise an environment by monitoring its current state. However, such supervision can only raise an alarm and notify the corresponding personnel after a violent or criminal event has occurred; it cannot identify and analyze the people present in an environment in advance, so no early warning is possible and violent or criminal events cannot be prevented effectively.
For example, in many hospitals today, normal medical order has been disrupted because professional troublemakers staged disturbances, and the corresponding security personnel could only intervene after the incident broke out, making management difficult. In many scenarios, therefore, the relevant persons cannot be supervised in advance; at the same time, these systems are poorly connected with the public security departments, data is not interoperable, and signs of social instability cannot be reported early enough to supervise the relevant persons in advance.
Disclosure of Invention
The invention aims to provide a security cloud system and an application thereof, to solve the technical problem that existing security systems cannot perform face recognition monitoring of people in public and cannot provide early warning, notification, and supervision in advance.
A security cloud system comprises a public security alarm unit, a security early-warning unit, a face recognition and positioning training unit, and a face snapshot positioning and tracking unit. The public security alarm unit and the security early-warning unit are connected with the face recognition and positioning training unit, and the face recognition and positioning training unit is connected with the face snapshot positioning and tracking unit. The public security alarm unit provides the flagged face data from each scene to the face recognition and positioning training unit, receives alarm signals from that unit, raises an automatic alarm, and displays the alarm information. The face snapshot positioning and tracking unit collects face data, transmits it to the face recognition and positioning training unit, and tracks the corresponding face according to the tracking data returned by that unit. The face recognition and positioning training unit receives the face data collected by the face snapshot positioning and tracking unit and compares it with the flagged face data provided by the public security alarm unit; when matching face data is found, it sends an early-warning signal to the security early-warning unit and/or the public security alarm unit, and security personnel are notified of the warning.
Furthermore, the face snapshot positioning and tracking unit comprises a face acquisition module and a face tracking and positioning module, the face acquisition module is connected with the face tracking and positioning module, and the face tracking and positioning module controls the face acquisition module to rotate to track the face.
Furthermore, the face recognition and positioning training unit comprises a data storage module, a face recognition module, and a face positioning training module. The data storage module is connected with the face recognition module, and the face recognition module is connected with the face positioning training module. The data storage module stores the flagged face data provided by the public security alarm unit; the face recognition module compares the face data acquired by the face acquisition module with the flagged face data and, when identical or similar data is found, sends an early-warning or alarm signal to the public security alarm unit and/or the security early-warning unit; the face positioning training module trains the positioning model and provides it to the face tracking and positioning module for tracking and positioning.
Further, the specific process by which the face positioning training module trains the positioning model is as follows:
the method comprises the steps of constructing a pyramid twin network model, training the pyramid twin network model, testing the pyramid twin network model, and transmitting the trained and tested pyramid twin network model to a face tracking and positioning module to realize face tracking.
Furthermore, the pyramid twin network model consists of a twin (Siamese) network, a feature pyramid network, and a parallel classification-and-localization network. The twin network consists of two VGG-based subnets that share the same parameters and extract features from the target image and the search image, respectively. After the twin network completes feature extraction on the target image and the search image, target feature layers and search feature layers of different scales are obtained, and 6 feature layers of different levels and scales are selected to construct the pyramid network;
after the feature pyramid network is constructed, it is combined with the parallel classification-and-localization network to locate and track the target in real time. The parallel network consists of a candidate-box subnet, a classifier subnet, and a localization regression subnet, which generate candidate boxes, confidence scores, and coordinate offsets, respectively; the classifier subnet and the localization regression subnet are executed in parallel.
Further, the two VGG-based subnets are a target subnet and a search subnet, which perform feature extraction on the target image and the search image, respectively, and share the same weights and biases. Both subnets are composed of eleven convolutional layers: the first layer consists of 2 convolution units, the second of 2, the third of 3, the fourth of 3, the fifth of 3, the sixth of 1, the seventh of 1, the eighth of 2, the ninth of 2, the tenth of 2, and the eleventh of 2 convolution units;
the feature pyramid network is composed of 6 feature layers obtained from the target subnet and the search subnet: the first layer is the tenth feature layer of the target subnet; the second is the tenth feature layer of the search subnet; the third is the seventh feature layer of the target subnet; the fourth is the sixth feature layer of the search subnet; the fifth is the fourth feature layer of the target subnet; and the sixth is the third feature layer of the search subnet;
the parallel classification-and-localization network comprises a candidate-box subnet, a classifier subnet, and a localization regression subnet. The candidate-box subnet comprises candidate boxes and serves to propose possible target regions; the classifier subnet comprises a normalized exponential function (softmax) classifier and serves to distinguish target from non-target; the localization regression subnet comprises a 3x3 convolution kernel and serves to locate the target. The candidate boxes divide each layer of image features into n x n grids, where n is a positive integer; each grid cell generates 6 candidate boxes of fixed size, and each candidate box then yields a confidence score via the classifier subnet and a coordinate offset via the localization regression subnet.
Further, the concrete process of training the pyramid twin network model comprises the following steps:
acquiring an original video sequence from a video database and performing image preprocessing on it to obtain a target training set and a search training set, wherein the targets of both training sets are centered in the images;
after the training set images are processed, inputting paired target and search training set pictures into the corresponding subnets of the twin network to obtain a target feature layer and a search feature layer, extracting feature layers of different levels and scales to construct a pyramid network, and constructing candidate boxes of different positions and sizes in each feature layer of the pyramid network based on the candidate-box size formula and position formula;
inputting each feature layer of the pyramid network into the parallel classification-and-localization network to obtain the network's output, and performing similarity matching between the output and the ground-truth labels to obtain positive and negative samples;
calculating the error between the matching result and the ground-truth labels with the target loss function, backpropagating the error layer by layer to the input layer, and adjusting the weights and biases in the network with a mini-batch stochastic gradient descent optimization algorithm to obtain an optimal error value, completing one round of network model training;
and repeating the above steps until the error value of the target loss function converges to its minimum.
Further, the public security alarm unit comprises a public security database module and a public security automatic alarm module, connected to each other. The database module stores the face data of flagged persons and the related case information recorded by the public security authority; the automatic alarm module raises an automatic alarm and notifies the police to dispatch officers.
The system is applied to a school security system, a hospital security system, a community security system or an industrial park security system.
By adopting the technical scheme, the invention has the following technical effects:
the invention takes a snapshot of the face in the scene by arranging the real-time face snapshot positioning and tracking unit, compares the face with the public face data of the public security system, sends out early warning information or warning information according to different grades of the personnel who have been put on record in the public security system when finding out the personnel who have been put on record in the public security system, can inform the personnel of the security system in advance, better prevents the occurrence of the things disturbing the social order, can prevent the accidents, and better prevents the occurrence of the situations of medical alarm in hospitals, community distribution, school going to sell children and the like.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
FIG. 2 is a block diagram of the face recognition positioning training unit and the face snapshot positioning tracking unit of the present invention.
FIG. 3 is a timing diagram illustrating the operation of the system of the present invention.
FIG. 4 is a block diagram of a police alerting unit module of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings by way of examples of preferred embodiments. It should be noted, however, that the numerous details set forth in the description are merely for the purpose of providing the reader with a thorough understanding of one or more aspects of the present invention, which may be practiced without these specific details.
Referring to fig. 1, the present invention provides a security cloud system, which includes a public security alarm unit, a security early-warning unit, a face recognition and positioning training unit, and a face snapshot positioning and tracking unit. The public security alarm unit and the security early-warning unit are connected with the face recognition and positioning training unit, which is connected with the face snapshot positioning and tracking unit. The public security alarm unit provides the flagged face data from each scene to the face recognition and positioning training unit, receives its alarm signals, raises automatic alarms, and displays the alarm information. The face snapshot positioning and tracking unit collects face data, transmits it to the face recognition and positioning training unit, and tracks the corresponding face according to the tracking data returned by that unit. The face recognition and positioning training unit compares the collected face data with the flagged face data provided by the public security alarm unit; when matching face data is found, it sends an early-warning signal to the security early-warning unit and/or the public security alarm unit. The security early-warning unit receives the early-warning or alarm information and notifies security personnel. The security early-warning unit is deployed in various scenes, such as a hospital security room or a community access-control security room; the public security alarm unit is the same across all scenarios. The face recognition and positioning training unit mainly serves as the face recognition processing center or data processing center for the cameras.
The face snapshot positioning tracking unit is mainly installed on the cameras in all scenes.
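The compare-and-alert flow described above can be sketched as follows. This is an illustrative assumption: the patent does not specify the matching metric or module interfaces, and all names here (`WatchlistEntry`, `match_face`, the cosine-similarity threshold) are hypothetical.

```python
# Hypothetical sketch of the compare-and-alert flow: captured face embeddings
# are matched against flagged-person records supplied by the public security
# alarm unit, and the alert action is chosen by the record's grade.
from dataclasses import dataclass

@dataclass
class WatchlistEntry:
    person_id: str
    embedding: list      # face feature vector from the public security database
    grade: str           # e.g. "warning" or "alarm" (illustrative grades)

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def match_face(captured, watchlist, threshold=0.8):
    """Return (entry, action) for the best match above threshold, else None."""
    best = max(watchlist, key=lambda e: cosine_similarity(captured, e.embedding),
               default=None)
    if best and cosine_similarity(captured, best.embedding) >= threshold:
        action = "notify_security" if best.grade == "warning" else "auto_alarm"
        return best, action
    return None

watchlist = [WatchlistEntry("p1", [0.9, 0.1, 0.2], "warning"),
             WatchlistEntry("p2", [0.1, 0.9, 0.3], "alarm")]
print(match_face([0.88, 0.12, 0.21], watchlist))  # matches p1 -> notify_security
```

In a deployment, the embedding would come from the face acquisition module and the watchlist from the public security database module; the threshold would be tuned against the "identical or similar" criterion the patent describes.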
In the embodiment of the invention, the face snapshot positioning and tracking unit comprises a face acquisition module and a face tracking positioning module, wherein the face acquisition module is connected with the face tracking positioning module, and the face tracking positioning module controls the face acquisition module to rotate to track the face.
In the embodiment of the invention, the face recognition and positioning training unit comprises a data storage module, a face recognition module, and a face positioning training module. The data storage module is connected with the face recognition module, which is connected with the face positioning training module. The data storage module stores the flagged face data provided by the public security alarm unit; the face recognition module compares the face data acquired by the face acquisition module with the flagged face data and, when identical or similar data is found, sends an early-warning or alarm signal to the public security alarm unit and/or the security early-warning unit; the face positioning training module trains the positioning model and provides it to the face tracking and positioning module for tracking and positioning.
In the embodiment of the invention, the specific process of training the positioning model by the face positioning training module is as follows:
the method comprises the steps of constructing a pyramid twin network model, training the pyramid twin network model, testing the pyramid twin network model, and transmitting the trained and tested pyramid twin network model to a face tracking and positioning module to realize face tracking.
Training a pyramid twin network model:
(1) Acquire an original video sequence from the ILSVRC video database and perform image processing on it to obtain a target training set and a search training set. The target training set images are of size 127x127x3, the search training set images are of size 255x255x3, and the targets of both training sets are centered in the images;
(2) After the training set images are processed, input paired training set pictures into the corresponding subnets of the twin network to obtain a target feature layer and a search feature layer, extract feature layers of different levels and scales to construct the pyramid network of the invention, and construct candidate boxes of different positions and sizes in each feature layer of the pyramid network based on the candidate-box size formula and position formula;
(3) Input each feature layer of the pyramid network into the parallel classification-and-localization network to obtain its output, and perform similarity matching between the output and the ground-truth labels to obtain positive and negative samples;
(4) Calculate the error between the matching result and the ground-truth labels with the target loss function, backpropagate the error layer by layer to the input layer, and adjust the weights and biases in the network with a mini-batch stochastic gradient descent optimization algorithm to obtain an optimal error value, completing one round of network model training;
(5) Repeat the above steps until the error value of the target loss function converges to its minimum.
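Steps (4) and (5) describe a standard mini-batch stochastic gradient descent loop: compute the loss, backpropagate the error, update weights and biases, and repeat until convergence. The numpy sketch below illustrates only that optimization pattern on a stand-in linear model; the pyramid twin network itself is far too large to reproduce here.

```python
# Minimal illustration of the mini-batch SGD pattern in steps (4)-(5),
# using a linear model as a stand-in for the network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
true_w, true_b = np.array([1.0, -2.0, 0.5, 3.0]), 0.7
y = X @ true_w + true_b                      # noiseless synthetic targets

w, b = np.zeros(4), 0.0
lr, batch = 0.1, 32
for epoch in range(200):
    idx = rng.permutation(len(X))            # reshuffle each epoch
    for s in range(0, len(X), batch):        # one mini-batch per update
        i = idx[s:s + batch]
        err = X[i] @ w + b - y[i]            # forward pass + error
        w -= lr * X[i].T @ err / len(i)      # gradient step on weights
        b -= lr * err.mean()                 # gradient step on bias
loss = float(((X @ w + b - y) ** 2).mean())
print(round(loss, 6))
```

The loop recovers the true weights and bias; in the patent's setting the same update rule is applied to the pyramid twin network's parameters, with the error coming from the target loss function over positive and negative samples.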
In the embodiment of the invention, the pyramid twin network model consists of a twin (Siamese) network, a feature pyramid network, and a parallel classification-and-localization network. The twin network consists of two VGG-based subnets that share the same parameters and extract features from the target image and the search image, respectively. After the twin network completes feature extraction on the two images, target feature layers and search feature layers of different scales are obtained, and 6 feature layers of different levels and scales are selected to construct the pyramid network.
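The patent does not spell out how the target-branch and search-branch features are compared, but twin-network trackers of this kind typically localize the target by cross-correlating the target features over the search features; the sketch below illustrates that assumed operation on toy feature maps.

```python
# Assumed comparison step for a twin (Siamese) tracker: slide the target
# feature template over the search feature map and score each position.
import numpy as np

def cross_correlate(target, search):
    """Dense cross-correlation: score map of a kxk template over an sxs map."""
    k, s = target.shape[0], search.shape[0]
    out = s - k + 1
    scores = np.empty((out, out))
    for i in range(out):
        for j in range(out):
            scores[i, j] = np.sum(target * search[i:i + k, j:j + k])
    return scores

search = np.zeros((8, 8))
search[3:5, 4:6] = 1.0            # bright 2x2 patch = the target's location
target = np.ones((2, 2))
scores = cross_correlate(target, search)
peak = tuple(int(i) for i in np.unravel_index(scores.argmax(), scores.shape))
print(peak)  # (3, 4)
```

The peak of the score map marks where the target template best matches the search region, which is the quantity the classifier and regression subnets refine.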
The twin network is composed of two VGG-based subnets, called the target subnet and the search subnet, which perform feature extraction on the target image and the search image, respectively, and share the same weights and biases. VGG serves as the base network of the twin network and is composed of eleven convolutional layers: the first layer conv1 consists of 2 convolutions conv1_1 and conv1_2 of size 224x224x64; the second layer conv2 consists of 2 convolutions conv2_1 and conv2_2 of size 112x112x128; the third layer conv3 consists of 3 convolutions conv3_1, conv3_2 and conv3_3 of size 56x56x256; the fourth layer conv4 consists of 3 convolutions conv4_1, conv4_2 and conv4_3 of size 28x28x512; the fifth layer conv5 consists of 3 convolutions conv5_1, conv5_2 and conv5_3 of size 14x14x512; the sixth layer consists of convolution conv6 of size 3x3x1024; the seventh layer consists of convolution conv7 of size 1x1x1024; the eighth layer conv8 consists of convolution conv8_1 of size 1x1x256 and convolution conv8_2 of size 3x3x512; the ninth layer conv9 consists of convolution conv9_1 of size 1x1x128 and convolution conv9_2 of size 3x3x256; the tenth layer conv10 consists of convolution conv10_1 of size 1x1x128 and convolution conv10_2 of size 3x3x256; the eleventh layer conv11 consists of convolution conv11_1 of size 1x1x128 and convolution conv11_2 of size 3x3x256.
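The spatial sizes listed above (224 -> 112 -> 56 -> 28 -> 14 over the first five layers) halve at each stage, which is consistent with standard VGG-style same-padded 3x3 convolutions separated by 2x2 stride-2 pooling; the pooling is an assumption here, since the patent lists only the feature-map sizes. The helper below reproduces the progression with the usual convolution-arithmetic formula.

```python
# Reproducing the quoted VGG stage sizes with the standard output-size formula.
def conv_out(size, kernel, stride=1, pad=0):
    """Output size of a conv/pool layer: floor((n - k + 2p) / s) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

size = 224
sizes = [size]
for _ in range(4):                          # four assumed 2x2 stride-2 poolings
    assert conv_out(size, 3, 1, 1) == size  # same-padded 3x3 conv keeps size
    size = conv_out(size, 2, 2, 0)          # 2x2 pool, stride 2, halves it
    sizes.append(size)
print(sizes)  # [224, 112, 56, 28, 14]
```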
After the feature pyramid network is constructed, it is combined with the parallel classification-and-localization network to locate and track the target in real time. The parallel network is composed of a candidate-box subnet, a classifier subnet, and a localization regression subnet, which generate candidate boxes, confidence scores, and coordinate offsets, respectively; the classifier subnet and the localization regression subnet are executed in parallel.
In the embodiment of the invention, the two VGG-based subnets are a target subnet and a search subnet, which perform feature extraction on the target image and the search image, respectively, and share the same weights and biases. Both subnets are composed of eleven convolutional layers: the first layer consists of 2 convolution units, the second of 2, the third of 3, the fourth of 3, the fifth of 3, the sixth of 1, the seventh of 1, the eighth of 2, the ninth of 2, the tenth of 2, and the eleventh of 2 convolution units.
The feature pyramid network is composed of 6 feature layers obtained from the target subnet and the search subnet: the first layer is the tenth feature layer of the target subnet; the second is the tenth feature layer of the search subnet; the third is the seventh feature layer of the target subnet; the fourth is the sixth feature layer of the search subnet; the fifth is the fourth feature layer of the target subnet; and the sixth is the third feature layer of the search subnet.
The parallel classification-and-localization network consists of a candidate-box subnet, a classifier subnet, and a localization regression subnet. The candidate-box subnet consists of candidate boxes and serves to propose possible target regions; the classifier subnet consists of a normalized exponential function (softmax) classifier and serves to distinguish target from non-target; the localization regression subnet consists of a 3x3 convolution kernel and serves to locate the target. The candidate boxes divide each layer of image features into n x n grids, where n is a positive integer; each grid cell generates 6 candidate boxes of fixed size, and each candidate box then yields a confidence score via the classifier subnet and a coordinate offset via the localization regression subnet.
In the embodiment of the invention, the concrete process of training the pyramid twin network model comprises the following steps:
acquiring an original video sequence from a video database, and carrying out image preprocessing on the video sequence to obtain a target training set and a search training set, wherein targets of the training sets are all in the center of an image;
after the training set image processing is completed, inputting paired target training set and search training set pictures into subnets corresponding to the twin network to obtain a target characteristic layer and a search characteristic layer, extracting characteristic layers with different levels and different scales to construct a pyramid network, and constructing candidate frames with different positions and different sizes in each layer of characteristic layer of the pyramid network based on a candidate frame size formula and a position formula;
inputting each layer of feature layer of the pyramid network into a classification positioning parallel network to obtain an output result of the parallel network, and performing similarity matching on the output result and a label real value to obtain a positive sample and a negative sample;
calculating the error between the matching result and the ground-truth label value by using a target loss function, back-propagating the error layer by layer to the input layer, and adjusting the weights and biases in the network based on a mini-batch stochastic gradient descent optimization algorithm to obtain an optimal error value, thereby completing one round of network model training;
and repeating the above steps until the error value of the target loss function converges to its minimum.
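The training loop in the steps above can be sketched with a toy model standing in for the pyramid twin network (everything below is illustrative: a single linear model replaces the network and squared error replaces the target loss function, but the forward pass, mini-batch stochastic gradient descent update and convergence check follow the same pattern):

```python
import random

def train_until_converged(data, lr=0.01, batch_size=4, tol=1e-12, max_epochs=3000):
    """Mini-batch SGD loop mirroring the training steps above:
    forward pass -> error vs. ground-truth label -> gradient ->
    weight/bias update, repeated until the loss stops improving."""
    w, b = 0.0, 0.0                       # "weights and biases" of the toy model
    prev_loss = float("inf")
    for _ in range(max_epochs):
        random.shuffle(data)              # draw mini-batches in random order
        for k in range(0, len(data), batch_size):
            batch = data[k:k + batch_size]
            # gradient of the mean squared error over the mini-batch
            gw = sum(2 * (w * x + b - y) * x for x, y in batch) / len(batch)
            gb = sum(2 * (w * x + b - y) for x, y in batch) / len(batch)
            w -= lr * gw                  # adjust weight along the gradient
            b -= lr * gb                  # adjust bias along the gradient
        loss = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
        if abs(prev_loss - loss) < tol:   # error value has converged
            break
        prev_loss = loss
    return w, b, loss

# noise-free samples of y = 2x + 1; training should recover w ~ 2, b ~ 1
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]
w, b, loss = train_until_converged(list(data))
```

In the patent's setting the gradients come from back-propagation through the twin network rather than the closed-form expressions used here.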
In the embodiment of the invention, the public security alarm unit comprises a public security database module and a public security automatic alarm module, the two being connected; the public security database module is used for storing the face data of identified persons held by the public security bureau together with the associated annotated case information, and the public security automatic alarm module is used for raising an automatic alarm and notifying the police to dispatch officers.
An application of the security cloud system: it is applied to a school security system, a hospital security system, a community security system or an industrial park security system. In the hospital setting the main users are security staff, who can promptly reach the scene according to the comparison results and take corresponding measures against medical disturbances and illegal accompanying persons; in schools, the identity of the person picking up a child can be confirmed, preventing a child from being taken away by someone posing as a relative; in communities, the system can be used to monitor and warn against illegal persons, habitual thieves, unstable persons and the like. A specific application is tracking and analysing the movement track of a particular person.
Public security organs provide portrait data of various fugitives, while hospitals can provide portrait data of violent psychiatric patients (communities, schools, gas stations and 4S car dealerships each pay attention to people along their own dimensions), jointly building multi-level, multi-dimensional portrait big data. When a fugitive visits a hospital, the hospital flags that person just as a public security organ would. When a known troublemaker goes to the hospital, the hospital itself may not be concerned with that person, but the community is. The system provides an entrance and a set of auditing mechanisms for society-wide, fine-grained big data, ensures that every social level keeps track of the people it is concerned with, and customizes different business strategies for different groups of people.
While there have been shown and described what are at present considered the fundamental principles and essential features of the invention and its advantages, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing exemplary embodiments, but is capable of other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and amendments can be made without departing from the principle of the present invention, and these modifications and amendments should also be considered as the protection scope of the present invention.

Claims (3)

1. A security cloud system, characterized by comprising a public security alarm unit, a security early warning unit, a face recognition positioning training unit and a face snapshot positioning tracking unit, wherein the public security alarm unit and the security early warning unit are connected with the face recognition positioning training unit, and the face recognition positioning training unit is connected with the face snapshot positioning tracking unit; the public security alarm unit is used for providing identified face data from each scene to the face recognition positioning training unit, and meanwhile receiving the alarm signal from the face recognition positioning training unit to raise an automatic alarm and display alarm information; the face snapshot positioning tracking unit is used for acquiring face data and transmitting it to the face recognition positioning training unit, and for tracking the corresponding face according to the tracking data transmitted by the face recognition positioning training unit; the face recognition positioning training unit is used for receiving the face data acquired by the face snapshot positioning tracking unit and comparing it with the identified face data provided by the public security alarm unit, and, when identical face data are found, transmitting an early warning or alarm signal to the security early warning unit and/or the public security alarm unit; the security early warning unit is used for receiving the early warning signal transmitted by the face recognition positioning training unit and notifying security personnel;
the face capturing, positioning and tracking unit comprises a face acquisition module and a face tracking and positioning module, the face acquisition module is connected with the face tracking and positioning module, and the face tracking and positioning module controls the face acquisition module to rotate to track a face;
the face recognition positioning training unit comprises a data storage module, a face recognition module and a face positioning training module, wherein the data storage module is connected with the face recognition module and the face recognition module is connected with the face positioning training module; the data storage module is used for storing the identified face data provided by the public security alarm unit; the face recognition module is used for comparing the face data acquired by the face acquisition module with the identified face data and, when identical or similar data are found, sending an early warning or alarm signal to the public security alarm unit and/or the security early warning unit; the face positioning training module is used for training a positioning model and providing it to the face tracking positioning module for tracking and positioning;
the specific process of training the positioning model by the face positioning training module is as follows:
constructing a pyramid twin network model, training the pyramid twin network model, testing the pyramid twin network model, and transmitting the trained and tested pyramid twin network model to a face tracking and positioning module to realize face tracking;
the pyramid twin network model consists of a twin network, a feature pyramid network and a classification and positioning parallel network, wherein the twin network consists of two VGG-based subnets that share the same parameters and are used for extracting features from the target image and the search image respectively; after the twin network finishes feature extraction on the target image and the search image, target feature layers and search feature layers of different scales are obtained, and 6 feature layers of different levels and scales are extracted to construct the pyramid network;
after the characteristic pyramid network is constructed, the characteristic pyramid network is combined with a classification and positioning parallel network and used for positioning and tracking a target in real time, the classification and positioning parallel network consists of a candidate frame subnet, a classifier subnet and a positioning regression subnet, the candidate frame subnet, the classifier subnet and the positioning regression subnet respectively generate a candidate frame, a confidence coefficient and a coordinate offset, and the classifier subnet and the positioning regression subnet are executed in parallel;
the two VGG-based subnets are a target subnet and a search subnet, which perform feature extraction on the target image and the search image respectively and share the same weights and biases; the target subnet and the search subnet each consist of eleven convolutional layers, containing 2, 2, 3, 3, 3, 1, 1, 2, 2, 2 and 2 convolution units, respectively;
the feature pyramid network is composed of feature layers obtained from the target subnet and the search subnet, six layers in total: the first layer is the eleventh feature layer of the target subnet; the second layer is the tenth feature layer of the search subnet; the third layer is the seventh feature layer of the target subnet; the fourth layer is the sixth feature layer of the search subnet; the fifth layer is the fourth feature layer of the target subnet; and the sixth layer is the third feature layer of the search subnet;
the classification and positioning parallel network consists of a candidate frame subnet, a classifier subnet and a positioning regression subnet, wherein the candidate frame subnet consists of candidate frames and is used for predicting candidate target regions, the classifier subnet consists of a normalized exponential function (softmax) classifier and is used for distinguishing targets from non-targets, and the positioning regression subnet consists of 3x3 convolution kernels and is used for locating the target; the candidate frames divide each layer of image features into n x n grids, where n is a positive integer, each grid generates 6 candidate frames of fixed size, and for each candidate frame the classifier subnet and the positioning regression subnet generate a confidence score and coordinate offsets, respectively;
the specific process of training the pyramid twin network model comprises the following steps:
acquiring an original video sequence from a video database, and carrying out image preprocessing on the video sequence to obtain a target training set and a search training set, wherein targets of the training sets are all in the center of an image;
after the training set image processing is completed, inputting paired target training set and search training set pictures into subnets corresponding to the twin network to obtain a target characteristic layer and a search characteristic layer, extracting characteristic layers with different levels and different scales to construct a pyramid network, and constructing candidate frames with different positions and different sizes in each layer of characteristic layer of the pyramid network based on a candidate frame size formula and a position formula;
inputting each layer of feature layer of the pyramid network into a classification positioning parallel network to obtain an output result of the parallel network, and performing similarity matching on the output result and a label real value to obtain a positive sample and a negative sample;
calculating the error between the matching result and the ground-truth label value by using a target loss function, back-propagating the error layer by layer to the input layer, and adjusting the weights and biases in the network based on a mini-batch stochastic gradient descent optimization algorithm to obtain an optimal error value, thereby completing one round of network model training;
and repeating the above steps until the error value of the target loss function converges to its minimum.
2. The security cloud system of claim 1, wherein the public security alarm unit comprises a public security database module and a public security automatic alarm module, the public security database module is connected with the public security automatic alarm module, the public security database module is used for storing the face data of identified persons held by the public security bureau together with the associated annotated case information, and the public security automatic alarm module is used for raising an automatic alarm and notifying the police to dispatch officers.
3. Use of the security cloud system according to claim 1 or 2 in a school security system, a hospital security system, a community security system or an industrial park security system.
CN201910469910.7A 2019-05-31 2019-05-31 Security cloud system and application thereof Active CN110197158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910469910.7A CN110197158B (en) 2019-05-31 2019-05-31 Security cloud system and application thereof


Publications (2)

Publication Number Publication Date
CN110197158A CN110197158A (en) 2019-09-03
CN110197158B true CN110197158B (en) 2023-04-18

Family

ID=67753645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910469910.7A Active CN110197158B (en) 2019-05-31 2019-05-31 Security cloud system and application thereof

Country Status (1)

Country Link
CN (1) CN110197158B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781778B (en) * 2019-10-11 2021-04-20 珠海格力电器股份有限公司 Access control method and device, storage medium and home system
CN111161891B (en) * 2019-12-31 2023-06-30 重庆亚德科技股份有限公司 Traditional Chinese medicine information management platform
CN113158933A (en) * 2021-04-28 2021-07-23 广州瀚信通信科技股份有限公司 Method, system, device and storage medium for identifying lost personnel

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101901340A (en) * 2010-08-04 2010-12-01 惠州市华阳多媒体电子有限公司 Suspect tracking method and system
US10902615B2 (en) * 2017-11-13 2021-01-26 Qualcomm Incorporated Hybrid and self-aware long-term object tracking
CN108052925B (en) * 2017-12-28 2021-08-03 江西高创保安服务技术有限公司 Intelligent management method for community personnel files
CN108200405A (en) * 2018-02-05 2018-06-22 成都伦索科技有限公司 A kind of video monitoring system based on recognition of face
CN109711320B (en) * 2018-12-24 2021-05-11 兴唐通信科技有限公司 Method and system for detecting violation behaviors of staff on duty
CN109685066B (en) * 2018-12-24 2021-03-09 中国矿业大学(北京) Mine target detection and identification method based on deep convolutional neural network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 530007, Rooms 302 and 305, 3rd Floor, Block A, Shennengda Technology Incubation Park, R&D Building, No. 2, East Section of Gaoxin Avenue, High tech Zone, Nanning City, Guangxi Zhuang Autonomous Region

Patentee after: Guangxi Aige Software Technology Co.,Ltd.

Address before: 530007, Rooms 302 and 305, 3rd Floor, Block A, Shennengda Technology Incubation Park, R&D Building, No. 2, East Section of Gaoxin Avenue, High tech Zone, Nanning City, Guangxi Zhuang Autonomous Region

Patentee before: GUANGXI NANNING BORUITONG SOFTWARE TECHNOLOGY Co.,Ltd.
