CN112804491A - Campus security supervision method, system, server and storage medium - Google Patents
- Publication number
- CN112804491A (application number CN202011633227.1A)
- Authority
- CN
- China
- Prior art keywords
- information
- identification
- conflict
- acquiring
- action
- Prior art date: 2020-12-31
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- G06K7/10009, G06K7/10019, G06K7/10079, G06K7/10089, G06K7/10099: Sensing record carriers by electromagnetic radiation with wavelengths larger than 0.1 mm (e.g. RFID), including resolving collisions between simultaneously interrogated record carriers, spatial-domain collision resolution, directional interrogation antennas, and pinpointing the location of an RFID tag among a plurality of tags
- G06N3/02, G06N3/08: Neural networks; learning methods
- G06Q50/10, G06Q50/20: ICT specially adapted for specific business sectors; services; education
- G06V40/10, G06V40/16, G06V40/174: Recognition of human bodies and body parts in image or video data; human faces; facial expression recognition
- G06V40/20, G06V40/28: Recognition of movements or behaviour, e.g. gesture recognition; recognition of hand or arm movements
- H04L67/01, H04L67/02, H04L67/025: Network protocols based on web technology (HTTP), including remote control or remote monitoring of applications
- H04L67/50, H04L67/52: Network services; network services specially adapted for the location of the user terminal
Abstract
The application relates to a campus security supervision method, system, server and storage medium in the field of Internet-of-Things teaching management. The method comprises the following steps: identifying person action information in an acquired current monitoring image; judging whether the person action information is conflict action information according to a conflict algorithm; if so, feeding back conflict prompt information; identifying the person identity information corresponding to the person action information in the current monitoring image, and taking the objects corresponding to the person identity information as participating objects; feeding back the person identity information and the number of pieces of person identity information; and feeding back the acquired consumption information corresponding to the participating objects. The method and system help a supervisor or parent discover conflict events occurring on campus in time.
Description
Technical Field
The application relates to the field of campus management of the Internet of things, in particular to a campus security supervision method, a campus security supervision system, a server and a storage medium.
Background
Campus bullying is aggressive behavior that occurs inside and outside a campus with students as the participants. It takes many forms, such as physical violence, verbal abuse, property damage and social isolation; physical violence is the most serious form and verbal abuse the most frequent. Because campus bullying is covert, it is hard for a supervisor or parent to discover in time, so a student who has been bullied may suffer great psychological or physical harm.
Disclosure of Invention
In order to help a supervisor or parent discover conflict events occurring on campus in time, the application provides a campus security supervision method, system, server and storage medium.
In a first aspect, the present application provides a campus security supervision method, which adopts the following technical solution:
A campus security supervision method, comprising:
identifying person action information in an acquired current monitoring image;
judging whether the person action information is conflict action information according to a conflict algorithm;
if so, feeding back conflict prompt information;
identifying person identity information corresponding to the person action information in the current monitoring image, and taking the objects corresponding to the person identity information as participating objects;
feeding back the person identity information and the number of pieces of person identity information; and,
feeding back the acquired consumption information corresponding to the participating objects.
By adopting the above technical solution, a plurality of cameras installed in the school acquire the current monitoring images. If the person action information in a monitoring image is identified as conflict action information, it is judged that a campus conflict event or campus bullying event has occurred, and conflict prompt information is fed back to prompt the supervisor; the person identity information of the conflicting objects is also fed back, so the supervisor can conveniently deal with the participants in the conflict or bullying event.
After the conflict event ends, the victim may be threatened by the perpetrator again in ways other than direct physical conflict. By collecting the consumption information of the perpetrator and the victim, including their medical, living and dietary consumption on campus, their living situations can be analyzed and it can be judged whether an implicit conflict event without physical contact exists after the original conflict, which helps the supervisor monitor and manage bullying events.
Optionally, after the conflict prompt information is fed back, the method further includes:
acquiring current position information corresponding to the person identity information;
forming an observation range according to the current position information and observation distance information;
acquiring to-be-positioned position information within the observation range;
feeding back identification information corresponding to the to-be-positioned position information;
and taking the objects corresponding to the identification information as participating objects.
By adopting the above technical solution: because of the uncertainty of the camera's installation position and angle and of the positions of the objects involved in the conflict, and because of the error margin of person identity recognition, the identity information of some persons may not be recognized. After at least one person is recognized, all identification information that may belong to participating objects can be further determined from the position of that person and the other objects within the observation range, which improves the accuracy of detecting the identities of the objects participating in the conflict event.
Optionally, after the conflict prompt information is fed back, the method further includes:
using the identification information and/or the person identity information as an identification group;
acquiring prejudgment position information of any identification information and/or person identity information;
judging whether the identification group appears within a prejudgment range formed by the prejudgment position information and prejudgment distance information;
if so, feeding back prejudgment prompt information.
By adopting the above technical solution, all participating objects that may have taken part in the conflict event are treated as an identification group. If objects of the identification group appear within the same prejudgment range again, a secondary conflict between them is likely; feeding back the prejudgment prompt information prompts the supervisor to prevent the potential conflict event in time.
Optionally, after the prejudgment prompt information is fed back, the method further includes:
acquiring related objects corresponding to the participating objects according to a relation algorithm;
and adding the related objects to the identification group.
By adopting the above technical solution: the related objects are closely associated with the participating objects; a related object of the victim may also become a victim, and a related object of the perpetrator may become a perpetrator in a secondary conflict. Therefore, if objects closely related to the perpetrator and the victim appear within the same prejudgment range, the conflict event may occur again, and the supervisor can be prompted by the prejudgment prompt information, achieving the effect of prevention in advance.
Optionally, the relation algorithm includes:
acquiring identification position information of each piece of identification information;
forming an identification range from the identification position information and identification distance information;
acquiring other identification information within the identification range;
acquiring, as an effective count, the number of times the other identification information appears within the identification range;
judging whether the effective count is greater than a threshold count;
and if so, taking the persons of the other identification information corresponding to the effective count as related objects.
By adopting the above technical solution, the social situation of each object is observed: if the number of times an observed object appears within the identification range together with another object (or objects) is greater than the threshold count, the encounters are not coincidental and a strong correlation exists, so that object (or those objects) is taken as a related object of the observed object.
Optionally, after the conflict prompt information is fed back, the method further includes:
forming question information according to the identification group and the conflict prompt information;
feeding back the question information to the participating objects;
and acquiring biological characteristic information while a participating object responds to the question information, so as to acquire lie-detection judgment information.
By adopting the above technical solution, the question information is formed according to the time, place, participating objects and other details of the conflict event, so the questions are more targeted at the participating objects; the lie-detection judgment information assists in judging whether a participating object answers the questions truthfully and whether potential hidden information exists, making it easier for the supervisor to judge whether a hidden conflict event exists.
Optionally, the conflict algorithm includes:
acquiring body-range overlap information from the person action information of the judgment objects;
acquiring arm action information from the person action information of the judgment objects;
acquiring leg action information from the person action information of the judgment objects;
acquiring body orientation information from the person action information of the judgment objects;
obtaining current posture information according to the body-range overlap information, the arm action information, the leg action information and the body orientation information;
matching the current posture information against the conflict action information;
and if the matching succeeds, determining that the current posture information is matched conflict action information.
In a second aspect, the present application provides a campus security auxiliary monitoring system, which adopts the following technical solution:
A campus security auxiliary monitoring system, comprising:
an input module, used to acquire a current monitoring image;
a judgment module, connected to the input module and used to identify the person action information in the acquired current monitoring image, judge whether the person action information is conflict action information according to a conflict algorithm, and if so, feed back conflict prompt information;
an analysis module, connected to the judgment module and used to identify the person identity information corresponding to the person action information in the current monitoring image and take the objects corresponding to the person identity information as participating objects;
a feedback module, connected to the analysis module and used to feed back the person identity information and the number of pieces of person identity information, and to feed back the acquired consumption information corresponding to the participating objects.
By adopting the above technical solution, a plurality of cameras installed in the school acquire the current monitoring images. If the person action information in a monitoring image is identified as conflict action information, it is judged that a campus conflict event or campus bullying event has occurred, and conflict prompt information is fed back to prompt the supervisor; the person identity information of the conflicting objects is also fed back, so the supervisor can conveniently deal with the participants in the conflict or bullying event.
After the conflict event ends, the victim may be threatened by the perpetrator again in ways other than direct physical conflict. By collecting the consumption information of the perpetrator and the victim, including their medical, living and dietary consumption on campus, their living situations can be analyzed and it can be judged whether an implicit conflict event without physical contact exists after the original conflict, which helps the supervisor monitor and manage bullying events.
In a third aspect, the present application provides a server, which adopts the following technical solutions:
A server comprising a memory and a processor, the memory having stored thereon a computer program that can be loaded by the processor to execute any of the campus security supervision methods described above.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
A computer-readable storage medium storing a computer program that can be loaded by a processor to execute any of the campus security supervision methods described above.
Drawings
FIG. 1 is a block flow diagram of an embodiment of the present application;
FIG. 2 is a block flow diagram of an embodiment of the present application;
fig. 3 is a system block diagram of an embodiment of the present application.
Description of reference numerals: 2001, input module; 2002, judgment module; 2003, analysis module; 2004, feedback module.
Detailed Description
The present application is described in further detail below with reference to figures 1-3.
The embodiments are only intended to explain the present application and do not limit it; after reading this specification, a person skilled in the art may make modifications to the embodiments that involve no inventive contribution as required, and such modifications remain protected by patent law within the scope of the claims of the present application.
When a conflict event occurs on campus, the campus security management methods of the related art generally investigate and punish the participating objects after the event, but provide no effective supervision method for the potential subsequent influence of the conflict event.
The embodiment of the application discloses a campus security supervision method.
With reference to figure 1 of the drawings,
Step 100: identifying the person action information in the acquired current monitoring image.
The current monitoring image is an image information stream acquired by the cameras; the stream consists of continuously acquired multi-frame images. A plurality of cameras are distributed over multiple scenes in the campus, such as classrooms, the playground, the area outside the teaching building, shaded campus paths, the dining room and the surroundings of the dormitory buildings, and are used to acquire the behavior of the objects in each scene. The acquisition interval of a single frame image may be 0.05 s, 0.1 s, …, so that the person action information in each single frame can be analyzed later.
When a person is involved in a conflict, the body actions change much more than in daily behavior and the facial expression changes correspondingly; the person action information may therefore be human skeleton information or human facial expression information.
An OpenPose network is adopted to obtain the human skeleton information. OpenPose is an open-source library based on convolutional neural networks and supervised learning, built on the CAFFE framework; it can track a person's facial expression, trunk, limbs and even fingers, works for both single-person and multi-person scenes, and has good robustness.
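As a concrete illustration of the data this step produces, the sketch below shows one possible per-person keypoint layout; the extract_skeletons wrapper is hypothetical, since the text does not name a specific OpenPose binding, and the joint names are assumptions used only so that later sketches can refer to them.

```python
from typing import Dict, List, Tuple

# Assumed layout: one dict per detected person, mapping a joint name to
# (x, y, confidence) in image coordinates. Joint names are illustrative.
Skeleton = Dict[str, Tuple[float, float, float]]

JOINTS = ["neck", "r_shoulder", "r_elbow", "r_wrist", "l_shoulder",
          "r_hip", "r_knee", "r_ankle", "l_hip", "l_knee", "l_ankle"]

def joint_xy(skel: Skeleton, name: str, min_conf: float = 0.3):
    """Return (x, y) for a joint if it was detected confidently, else None."""
    kp = skel.get(name)
    return (kp[0], kp[1]) if kp and kp[2] >= min_conf else None

def extract_skeletons(frame) -> List[Skeleton]:
    """Hypothetical wrapper around an OpenPose-style multi-person 2D pose
    estimator; replace with the actual binding used in deployment."""
    raise NotImplementedError("plug in the pose estimator here")
```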
With reference to figure 1 of the drawings,
Step 200: judging whether the person action information is conflict action information according to a conflict algorithm.
The conflict action information is a base of skeleton-point position information, collected in advance, that represents the positions of human skeleton points during a physical conflict.
Whether the human skeleton information is conflict action information may be judged with a trained deep-learning model; the embodiment of the application judges according to a conflict algorithm.
The conflict algorithm is as follows:
acquiring body-range overlap information from the person action information of the judgment objects;
acquiring arm action information from the person action information of the judgment objects;
acquiring leg action information from the person action information of the judgment objects;
acquiring body orientation information from the person action information of the judgment objects;
obtaining current posture information according to the body-range overlap information, the arm action information, the leg action information and the body orientation information;
matching the current posture information against the conflict action information;
and if the matching succeeds, determining that the current posture information is matched conflict action information.
When a conflict event occurs, at least two persons participate; if more than two persons participate, any two of them are selected as the judgment objects.
After the current monitoring image is obtained, a two-dimensional coordinate system is established in the monitoring image. Whether the body ranges of the two pieces of person action information overlap is judged either from whether the human skeleton information is occluded (collecting the occluded area) or from the human-shaped bounding boxes produced by a target detection algorithm; if the body ranges overlap, body-range overlap information is output.
The arm action information is the included angle between the upper arm and the forearm in the human skeleton information; this angle is calculated from the collected coordinates of the shoulder point, elbow point and wrist point.
The leg action information includes first leg action information and second leg action information: the first includes the included angle between the thigh and the calf and the lateral distance between the two ankle joints in the human skeleton information; the second includes the included angle between the torso and the thigh in the human skeleton information.
The body orientation information is the positional relationship between the abscissa of an elbow joint and the abscissas of the shoulder joints in the human skeleton information when the two bodies face each other.
The current posture information of the two judgment objects is matched against the conflict action information as follows: [body-range overlap information exists] AND [the arm angle is less than 145° or greater than 40°] AND [(the lateral distance between the two ankle joints is greater than 1.2 times the shoulder width AND the thigh-to-calf angle of at least two knee joints is less than 165°) OR (the torso-to-thigh angle is less than 150°)] AND [the abscissa of at least one elbow joint of the two judgment objects lies between the abscissas of the shoulder joints of the two judgment objects].
The last condition, an elbow abscissa lying between the shoulder abscissas of the two judgment objects, indicates that arm contact may exist between the two persons and that they are facing each other.
Step 201: if so, feeding back conflict prompt information.
Step 202: if the matching is unsuccessful, continuing to identify the person action information of the current monitoring image.
A plurality of cameras installed in the school acquire the current monitoring images; if the human body information in a monitoring image is recognized as conflict action information, it is judged that a campus conflict event or campus bullying event has occurred, and conflict prompt information is fed back to prompt the supervisor. The person identity information of the conflicting objects is also fed back, so the supervisor can conveniently deal with the participants in the conflict or bullying event.
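The matching rule of step 200 can be sketched in code as follows, assuming each judgment object is a dict of joint name -> (x, y) as in the layout shown earlier. The thresholds come from the description above; reading the arm condition as the elbow angle lying between 40° and 145°, checking both knees of one person for the "at least two knee joints" clause, and using the right shoulders and elbows for the orientation test are simplifying interpretations, not statements of the patent.

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c, each an (x, y) pair."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 180.0
    cos_t = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

def is_conflict_pose(p1, p2, body_ranges_overlap):
    """Sketch of the matching rule; p1/p2 map joint names to (x, y)."""
    def arm_ok(p):   # elbow angle read as lying between 40 and 145 degrees
        return 40.0 < joint_angle(p["r_shoulder"], p["r_elbow"], p["r_wrist"]) < 145.0

    def legs_ok(p):
        shoulder_w = abs(p["r_shoulder"][0] - p["l_shoulder"][0])
        ankle_dist = abs(p["r_ankle"][0] - p["l_ankle"][0])
        knees_bent = (joint_angle(p["r_hip"], p["r_knee"], p["r_ankle"]) < 165.0 and
                      joint_angle(p["l_hip"], p["l_knee"], p["l_ankle"]) < 165.0)
        torso_thigh = joint_angle(p["neck"], p["r_hip"], p["r_knee"]) < 150.0
        return (ankle_dist > 1.2 * shoulder_w and knees_bent) or torso_thigh

    def facing(pa, pb):  # an elbow abscissa between the two persons' shoulder abscissas
        lo = min(pa["r_shoulder"][0], pb["r_shoulder"][0])
        hi = max(pa["r_shoulder"][0], pb["r_shoulder"][0])
        return any(lo < p["r_elbow"][0] < hi for p in (pa, pb))

    return (body_ranges_overlap
            and (arm_ok(p1) or arm_ok(p2))
            and (legs_ok(p1) or legs_ok(p2))
            and facing(p1, p2))
```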
With reference to figure 1 of the drawings,
Step 300: identifying the person identity information in the current monitoring image, taking the objects corresponding to the person identity information as participating objects, and feeding back the person identity information and the number of pieces of person identity information.
The person identity information may be acquired with a neural network using a target detection algorithm or a classification algorithm; the embodiment of the application takes a target detection algorithm as an example.
Target detection extracts targets from an image. Current target detection algorithms include R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD, FPN and Mask R-CNN.
The embodiment of the application may adopt an SSD model based on the VGG-16 network as the training basis. The classic SSD model is introduced as follows: SSD is a single-shot deep detection network that combines the regression idea of YOLO with the anchor mechanism of Faster R-CNN. The regression idea simplifies the computational complexity of the network and improves the real-time performance of the algorithm; the anchor mechanism extracts features with different aspect ratios and sizes, and compared with YOLO's extraction of global features at a given position, this local feature extraction is more reasonable and effective for recognition. In addition, because features at different scales express different characteristics, SSD extracts target features at multiple scales, a design that improves the robustness of detecting targets of different scales.
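The patent does not name a framework. As one possibility, torchvision ships an SSD300 detector with a VGG-16 backbone that could serve as a starting point for locating persons before identity matching against the object library; the snippet below is a sketch under that assumption (the weights argument varies with the torchvision version; older releases use pretrained=True).

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained SSD300 with a VGG-16 backbone (COCO weights); person is class id 1.
model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")
model.eval()

def detect_persons(frame_rgb, score_thr=0.5):
    """Return [x1, y1, x2, y2] boxes of persons detected in one RGB frame."""
    with torch.no_grad():
        out = model([to_tensor(frame_rgb)])[0]
    keep = (out["labels"] == 1) & (out["scores"] > score_thr)
    return out["boxes"][keep].tolist()
```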
Which object in the object library each conflict participant in the current monitoring image corresponds to is identified; the participants include the perpetrator and the victim, and the supervisor can conduct a specific investigation once the identities of the conflict participants are determined.
In addition, when a conflict event occurs, errors exist whether person re-identification or face recognition is used; in particular, under the influence of the camera's installation position and angle and the uncertainty of the positions of the objects involved, verifying human features from images may carry a certain misjudgment rate, so the accuracy of identifying each participating object needs to be improved further.
Step 301: acquiring current position information corresponding to the person identity information.
The current position information is the position of a participating object on campus, and at least one piece of person identity information has been acquired.
The current position information may be acquired in the following ways.
Mode 1: a plurality of RFID readers are arranged in the campus; each object carries a unique electronic tag, the electronic tags correspond one-to-one to the person identity information, and the objects are students.
Mode 2: each object carries a terminal with a positioning module; the ID of the positioning module corresponds one-to-one to the person identity information, and the positioning module may use GPS or BeiDou positioning.
Step 302: forming an observation range according to the current position information and the observation distance information.
The observation range may be formed as a circle with the current position information as the center and the observation distance information as the radius/diameter, or as a sector or another regular/irregular shape containing the observation region.
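A minimal sketch of forming the observation range and collecting the candidates described in the next steps, under the circular-range reading and assuming planar campus coordinates in consistent units; tag_positions would come from the RFID readers or positioning terminals described above.

```python
import math

def in_observation_range(centre, point, observe_dist):
    """True if point lies within a circle of radius observe_dist around centre."""
    return math.hypot(point[0] - centre[0], point[1] - centre[1]) <= observe_dist

def candidate_participants(current_pos, tag_positions, observe_dist):
    """tag_positions: identification id -> (x, y).
    Returns the ids whose positions fall inside the observation range."""
    return [tag for tag, pos in tag_positions.items()
            if in_observation_range(current_pos, pos, observe_dist)]
```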
Step 303: and acquiring information of the position to be positioned in the observation range.
The to-be-positioned position information is the position information of the other objects within the observation area, apart from the current position information, that carry RFID electronic tags or positioning-module terminals.
Step 304: feeding back the identification information corresponding to the to-be-positioned position information, and taking the objects corresponding to the identification information as participating objects.
The identification information is the ID of the positioning module or the ID of the electronic tag carried by a participating object; the person identity information corresponds one-to-one to the identification information.
After at least one piece of person identity information is recognized, all identification information that may belong to participating objects can be further determined from the position of the recognized person and the other objects within the observation range, which improves the accuracy of detecting the identities of the objects participating in the conflict event.
With reference to figure 1 of the drawings,
Step 400: feeding back the acquired consumption information corresponding to the participating objects.
The consumption information includes, but is not limited to, medical consumption information, living consumption information and dietary consumption information. Besides the teaching system, the campus comprises life-service infrastructure such as the canteen, the medical office and the supermarket; the consumption devices arranged on campus, such as POS machines, are wirelessly connected with the card-swiping systems of the canteen, medical office, supermarket and so on. Each object holds a unique all-in-one campus card whose ID corresponds one-to-one to the RFID electronic tag, so when an object consumes on campus through the card, its consumption situation on campus can be acquired.
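A minimal sketch of how the consumption feed might be summarized per participating object; the record layout (card_id, category, amount) and the mapping from card IDs to person identities are assumptions.

```python
from collections import defaultdict

def consumption_summary(records, cards_of_participants):
    """records: iterable of (card_id, category, amount) tuples, where
    category is e.g. 'meal', 'medical' or 'living'.
    Returns card_id -> {category: total amount} for the participating objects."""
    summary = defaultdict(lambda: defaultdict(float))
    for card_id, category, amount in records:
        if card_id in cards_of_participants:
            summary[card_id][category] += amount
    return {card: dict(cats) for card, cats in summary.items()}
```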
Since the bullying behavior of the perpetrator may not be limited to physical conflict but may also include economic extortion, there may be some correlation in consumption between the perpetrator and the victim, for example: the victim's dietary consumption far exceeds the amount needed for daily meals while the victim's overall consumption is clearly lower than usual; or the victim has limb injuries and generates medical consumption at the medical office, and so on. Therefore, the supervisor pays close attention to the consumption information of the participating objects, analyzes the living situations of the perpetrator and the victim, and judges from this information whether an implicit, non-physical conflict event exists after the conflict event, which helps the supervisor monitor and manage bullying events.
After a conflict occurs, even if the supervisor stops it in time, the objects may clash again out of a desire for revenge; therefore, the participating objects need further supervision after the conflict event.
With reference to figure 2 of the drawings,
Step 500: taking the identification information and/or the person identity information as an identification group, acquiring the prejudgment position information of any identification information and/or person identity information, and judging whether the identification group appears within a prejudgment range formed by the prejudgment position information and the prejudgment distance information, so as to decide whether to feed back prejudgment prompt information.
The prejudgment position information is the position on campus of the RFID electronic tag or positioning terminal corresponding to the identification information/person identity information.
All participating objects that may have taken part in the conflict event are treated as an identification group; if objects of the identification group appear within the same prejudgment range again, a secondary conflict between them is likely, and feeding back the prejudgment prompt information prompts the supervisor to prevent the potential conflict event in time.
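As a minimal sketch of this check, before the yes/no branch in the next steps: reading "the identification group appears within the prejudgment range" as "at least two group members lie inside one circle of radius equal to the prejudgment distance, centred on a member's position" is an interpretation, and planar campus coordinates are assumed.

```python
import math

def group_in_prejudge_range(positions, group_ids, prejudge_dist):
    """positions: id -> (x, y) for tags/terminals currently on campus.
    Returns True if two or more identification-group members fall inside
    one prejudgment circle centred on any member's position."""
    members = [(i, positions[i]) for i in group_ids if i in positions]
    for _, (cx, cy) in members:
        inside = sum(1 for _, (x, y) in members
                     if math.hypot(x - cx, y - cy) <= prejudge_dist)
        if inside >= 2:  # the centre member plus at least one other
            return True
    return False
```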
Step 5010: if so, feeding back the prejudgment prompt information.
Step 5011: if not, continuing to judge whether the identification group appears.
In addition, after the conflict event, an object may still organize conflicts through its own social network; therefore, the behavior of the related objects in the participating objects' social networks needs further monitoring.
Step 5020: acquiring the related objects corresponding to the participating objects according to a relation algorithm.
Step 5021: adding the related objects to the identification group.
The related objects are close contacts in the social networks of the participating objects.
Because a related object of the victim may also become a victim, and a related object of the perpetrator may become a perpetrator in a secondary conflict, if objects closely related to the perpetrator and the victim appear within the same prejudgment range, the conflict event may occur again; the supervisor can then be prompted by the prejudgment prompt information, achieving the effect of prevention in advance.
The relation algorithm comprises:
Step 5030: acquiring the identification position information of each piece of identification information.
The identification position information is the position of the identification information on campus.
Step 5031: forming an identification range from the identification position information and the identification distance information.
Step 5032: acquiring other identification information within the identification range.
The other identification information is the identification information, other than that corresponding to the identification position information, found within the range.
Step 5033: acquiring, as the effective count, the number of times the other identification information appears within the identification range.
Step 5034: judging whether the effective count is greater than the threshold count.
Step 5035: if so, taking the persons of the other identification information corresponding to the effective count as related objects.
Step 5036: if not, outputting information that no related object exists.
By observing the social situation of each object: if an observed object and another object (or objects) appear within the identification range together more than the threshold number of times, their encounters are probably not coincidental and a strong correlation exists, so that object (or those objects) is taken as a related object of the observed object.
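A sketch of the relation algorithm as co-occurrence counting; the snapshot layout (one id -> (x, y) map per sampling instant) and the sampling interval are assumptions not given in the text.

```python
import math
from collections import Counter

def related_objects(snapshots, subject_id, ident_dist, threshold_count):
    """snapshots: iterable of dicts mapping tag id -> (x, y), e.g. one per
    minute. Counts, as the effective count, how often each other id appears
    within ident_dist of subject_id, and returns the ids whose count
    exceeds threshold_count."""
    counts = Counter()
    for snap in snapshots:
        if subject_id not in snap:
            continue
        sx, sy = snap[subject_id]
        for other, (x, y) in snap.items():
            if other != subject_id and math.hypot(x - sx, y - sy) <= ident_dist:
                counts[other] += 1
    return [other for other, c in counts.items() if c > threshold_count]
```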
Because bullying takes place not only through physical and economic behavior but also through verbal abuse and similar conduct, it can cause psychological trauma to the victim, and the victim may be unable to explain it to the supervisor, parents and others because of stress. To help the supervisor understand the psychological health of the participating objects more comprehensively, the embodiment of the application further provides a lie-detection device on campus. Its principle is as follows: the psychological changes that occur when a person lies inevitably cause changes in physiological parameters (such as skin conductance, heartbeat, blood pressure, respiration, brain waves and voice), and these are usually governed only by the autonomic nervous system and are difficult to control consciously.
Related-art methods mainly present questions that stimulate the tested person to different degrees. Specific test methods include the control-question test, the crime-scenario (guilty-knowledge) test, the peak-of-tension test, the relevant/irrelevant question cross test, the suspected-knowledge test, the silence test and the true/false comparison test (forced-admission test); at present, the internationally common methods are the crime-scenario test and the control-question test.
With reference to figure 2 of the drawings,
Step 600: forming question information according to the identification group and the conflict prompt information, feeding the question information back to the participating objects, and acquiring biological characteristic information while a participating object responds to the question information, so as to obtain lie-detection judgment information.
The biological characteristic information is collected by acquisition equipment, which may be arranged at each object's desk and chair or at the medical office.
The question information is formed according to the time, place, participating objects and other details of the conflict event, so it is more targeted at the participating objects; the feedback assists in judging whether a participating object answers the questions truthfully and whether potential hidden information exists, making it easier for the supervisor to judge whether a hidden conflict event exists.
Relying on lie detection alone may involve a large error, so the following step further assists the analysis of the objects' current situation.
With reference to figure 2 of the drawings,
Step 700: dividing the objects into a plurality of independent object groups, presetting a personal evaluation information table corresponding one-to-one to each object, feeding back in turn, within the object group to which each object belongs, the evaluation information tables corresponding to the objects in that group, and acquiring the manual evaluation information filled into the personal evaluation information tables.
The object groups may be divided by class. The personal evaluation information table of each object holds evaluation items about that object filled in by the other objects; mutual manual evaluation among the objects helps detect abnormalities in an object's daily behavior and assists the supervisor in collecting all-round evaluation information about the object. Because a preset personal evaluation table is distributed to every object and all objects are evaluated rather than only the participating objects, the process is less likely to add psychological pressure to the participating objects.
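A minimal sketch of the table distribution in step 700, assuming the object groups are classes and the evaluation template is a mapping of item to blank score; every member evaluates every classmate, so the participating objects are not singled out.

```python
def distribute_evaluation_tables(class_groups, template):
    """class_groups: group_id -> list of object ids.
    Returns one blank evaluation table per (evaluator, target) pair
    within each group, keyed by that pair."""
    tables = {}
    for members in class_groups.values():
        for evaluator in members:
            for target in members:
                if evaluator != target:
                    tables[(evaluator, target)] = dict(template)
    return tables
```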
With reference to figure 3 of the drawings,
the embodiment of the present application further provides a campus security auxiliary monitoring system, including:
an input module 2001, configured to obtain a current monitoring image;
a judging module 2002 connected to the input module 2001, configured to identify the acquired person motion information in the current monitored image, judge whether the person motion information is collision motion information according to a collision algorithm, and if yes, feed back collision prompt information;
an analysis module 2003 connected to the judgment module 2002 for identifying the person identity information corresponding to the person action information in the current monitored image, and taking the object corresponding to the person identity information as a participating object;
a feedback module 2004 connected to the analysis module 2003 for feeding back the person identification information and the number of the person identification information; and feeding back the obtained consumption information corresponding to the participating object.
The embodiment of the present application further provides a server, which includes a memory and a processor; the memory stores a computer program that can be loaded by the processor to execute any of the above campus security supervision methods.
It will be appreciated that the memory in the embodiments of the subject application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
The non-volatile memory may be ROM, Programmable Read Only Memory (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory.
Volatile memory can be RAM, which acts as external cache memory. There are many different types of RAM, such as Static Random Access Memory (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), synclink DRAM (SLDRAM), and direct memory bus RAM.
The processor mentioned above may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits that control program execution of the above methods. The processing unit and the storage unit may be decoupled and arranged on different physical devices, connected in a wired or wireless manner to implement their respective functions, so as to support the system chip in implementing the various functions of the foregoing embodiments; alternatively, the processing unit and the memory may be coupled on the same device.
The embodiment of the present application further provides a computer-readable storage medium, which stores a computer program that can be loaded by a processor and execute any one of the above campus security administration methods.
The computer-readable storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The embodiments are preferred embodiments of the present application, and the scope of the present application is not limited by the embodiments, so: all equivalent changes made according to the structure, shape and principle of the present application shall be covered by the protection scope of the present application.
Claims (10)
1. A campus security supervision method, comprising:
identifying person action information in an acquired current monitoring image;
judging whether the person action information is conflict action information according to a conflict algorithm;
if so, feeding back conflict prompt information;
identifying person identity information corresponding to the person action information in the current monitoring image, and taking the objects corresponding to the person identity information as participating objects;
feeding back the person identity information and the number of pieces of person identity information; and,
feeding back the acquired consumption information corresponding to the participating objects.
2. The campus security supervision method according to claim 1, further comprising, after the conflict prompt information is fed back:
acquiring current position information corresponding to the person identity information;
forming an observation range according to the current position information and observation distance information;
acquiring to-be-positioned position information within the observation range;
feeding back identification information corresponding to the to-be-positioned position information;
and taking the objects corresponding to the identification information as participating objects.
3. The campus security supervision method according to claim 2, further comprising, after the conflict prompt information is fed back:
using the identification information and/or the person identity information as an identification group;
acquiring prejudgment position information of any identification information and/or person identity information;
judging whether the identification group appears within a prejudgment range formed by the prejudgment position information and prejudgment distance information;
if so, feeding back prejudgment prompt information.
4. The campus security supervision method according to claim 3, further comprising, after the prejudgment prompt information is fed back:
acquiring related objects corresponding to the participating objects according to a relation algorithm;
and adding the related objects to the identification group.
5. The campus security supervision method according to claim 4, wherein the relation algorithm comprises:
acquiring identification position information of each piece of identification information;
forming an identification range from the identification position information and identification distance information;
acquiring other identification information within the identification range;
acquiring, as an effective count, the number of times the other identification information appears within the identification range;
judging whether the effective count is greater than a threshold count;
and if so, taking the persons of the other identification information corresponding to the effective count as related objects.
6. The campus security supervision method according to claim 3, further comprising, after the conflict prompt information is fed back:
forming question information according to the identification group and the conflict prompt information;
feeding back the question information to the participating objects;
and acquiring biological characteristic information while a participating object responds to the question information, so as to acquire lie-detection judgment information.
7. The campus security administration method of claim 1, wherein the collision algorithm comprises:
acquiring body range overlapping information from the person action information of a judgment object;
acquiring arm action information from the person action information of the judgment object;
acquiring leg action information from the person action information of the judgment object;
acquiring body orientation information from the person action information of the judgment object;
obtaining current posture information according to the body range overlapping information, the arm action information, the leg action information and the body orientation information;
matching the current posture information with conflict action information;
and if the matching succeeds, determining that the current posture information is the matched conflict action information.
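A sketch of how the four cues in claim 7 could be combined and matched; the posture templates, labels and overlap threshold below are assumptions for illustration, not values disclosed by the patent.

```python
from typing import Tuple

Box = Tuple[float, float, float, float]          # person bounding box (x1, y1, x2, y2)

def body_range_overlap(a: Box, b: Box) -> float:
    """Intersection-over-union of two person boxes (body range overlapping information)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

# Assumed, hand-labelled templates: (body range, arm action, leg action, orientation)
CONFLICT_POSTURES = {
    ("overlap", "arm_swing", "leg_still", "facing"),
    ("overlap", "arm_raised", "leg_kick", "facing"),
}

def is_conflict_posture(body_overlap: float, arm_action: str, leg_action: str,
                        orientation: str, overlap_threshold: float = 0.2) -> bool:
    """Assemble current posture information from the four cues and match it
    against the conflict-action templates."""
    posture = ("overlap" if body_overlap >= overlap_threshold else "apart",
               arm_action, leg_action, orientation)
    return posture in CONFLICT_POSTURES
```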
8. A campus security supervision system, comprising:
an input module (2001) for acquiring a current monitoring image;
a judgment module (2002), connected with the input module (2001), for identifying person action information in the acquired current monitoring image, judging whether the person action information is conflict action information according to a conflict algorithm, and if so, feeding back conflict prompt information;
an analysis module (2003), connected with the judgment module (2002), for identifying person identity information corresponding to the person action information in the current monitoring image and taking an object corresponding to the person identity information as a participating object;
and a feedback module (2004), connected with the analysis module (2003), for feeding back the person identity information and the number of pieces of person identity information, and feeding back the acquired consumption information corresponding to the participating object.
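For orientation, the module split of claim 8 maps naturally onto four small classes; the camera, pose model, face database and consumption database used below are hypothetical dependencies, not components specified by the patent.

```python
class InputModule:                      # 2001: acquires the current monitoring image
    def __init__(self, camera):
        self.camera = camera            # hypothetical camera object with a .read() method
    def current_frame(self):
        return self.camera.read()

class JudgmentModule:                   # 2002: action recognition + conflict algorithm
    def __init__(self, pose_model, conflict_fn):
        self.pose_model, self.conflict_fn = pose_model, conflict_fn
    def conflicts(self, frame):
        return [a for a in self.pose_model.detect(frame) if self.conflict_fn(a)]

class AnalysisModule:                   # 2003: identities of the participating objects
    def __init__(self, face_db):
        self.face_db = face_db
    def participants(self, frame, conflict_actions):
        return [self.face_db.identify(frame, a) for a in conflict_actions]

class FeedbackModule:                   # 2004: identities, their count, consumption info
    def __init__(self, consumption_db):
        self.consumption_db = consumption_db
    def report(self, participants):
        return {"participants": participants,
                "count": len(participants),
                "consumption": {p: self.consumption_db.get(p, []) for p in participants}}
```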
9. A server, comprising a memory and a processor, the memory having stored thereon a computer program that can be loaded by the processor and executed to perform the campus security supervision method according to any one of claims 1 to 7.
10. A storage medium having stored thereon a computer program that can be loaded by a processor and executed to perform the campus security supervision method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011633227.1A CN112804491A (en) | 2020-12-31 | 2020-12-31 | Campus security supervision method, system, server and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011633227.1A CN112804491A (en) | 2020-12-31 | 2020-12-31 | Campus security supervision method, system, server and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112804491A true CN112804491A (en) | 2021-05-14 |
Family
ID=75808573
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011633227.1A Pending CN112804491A (en) | 2020-12-31 | 2020-12-31 | Campus security supervision method, system, server and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112804491A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114186896A (en) * | 2021-12-16 | 2022-03-15 | 东莞先知大数据有限公司 | Corridor safety supervision method, electronic equipment and storage medium |
CN114881305A (en) * | 2022-04-25 | 2022-08-09 | 成都诺识信息技术有限公司 | Prediction early warning system and prediction method for canteen |
CN115223099A (en) * | 2022-08-02 | 2022-10-21 | 上海三力信息科技有限公司 | Intelligent monitoring method for safety of entrusting child in school |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107545699A (en) * | 2016-10-31 | 2018-01-05 | 郑州蓝视科技有限公司 | A kind of Intelligent campus safety-protection system |
CN108734618A (en) * | 2018-05-16 | 2018-11-02 | 秦勇 | Campus Security prompt management system and method |
CN109147267A (en) * | 2018-10-16 | 2019-01-04 | 温州洪启信息科技有限公司 | Intelligent campus big data safe early warning platform based on cloud platform |
US20200160690A1 (en) * | 2018-11-21 | 2020-05-21 | Hemal B. Kurani | Methods and systems of smart campus security shield |
CN111324772A (en) * | 2019-07-24 | 2020-06-23 | 杭州海康威视系统技术有限公司 | Personnel relationship determination method and device, electronic equipment and storage medium |
CN111738044A (en) * | 2020-01-06 | 2020-10-02 | 西北大学 | Campus violence assessment method based on deep learning behavior recognition |
CN111985428A (en) * | 2020-08-27 | 2020-11-24 | 上海商汤智能科技有限公司 | Security detection method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210514 |