CN114511817A - Micro-space-oriented intelligent supervision system for panoramic portrait of personnel behaviors - Google Patents


Info

Publication number
CN114511817A
CN114511817A (application CN202111665098.9A)
Authority
CN
China
Prior art keywords
submodule, module, personnel, micro, space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111665098.9A
Other languages
Chinese (zh)
Inventor
朱俊丰
苏林媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Liantu Information Technology Co ltd
Original Assignee
Chongqing Liantu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Liantu Information Technology Co ltd filed Critical Chongqing Liantu Information Technology Co ltd
Priority to CN202111665098.9A priority Critical patent/CN114511817A/en
Publication of CN114511817A publication Critical patent/CN114511817A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 — Information retrieval of structured data, e.g. relational data
    • G06F 16/29 — Geographical information databases
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 — Geographic models

Abstract

The invention discloses a micro-space-oriented intelligent supervision system for panoramic portraits of personnel behavior, comprising a ubiquitous sensing module, an intelligent learning and training module, an image deconstruction module, an audio deconstruction module, a panoramic three-dimensional management module, a comprehensive portrait module, and a micro-space early-warning and supervision module. Based on real-time ubiquitous sensing and intelligent big-data analysis and computation, the system monitors the supervised environment and personnel around the clock; through automated, computer-driven early warning and handling, it can greatly reduce officers' workload, lower human-resource input, and improve management efficiency.

Description

Micro-space-oriented intelligent supervision system for panoramic portrait of personnel behaviors
Technical Field
The invention relates to the technical field of supervising special personnel subject to compulsory measures, and in particular to a micro-space-oriented intelligent supervision system for panoramic portraits of personnel behavior.
Background
Places of compulsory measures, represented by detention houses, custody houses, prisons and detention centers, hold various special groups of persons suspected or convicted of crimes. Owing to the complex origins, diverse hierarchical structures and special identities of these groups, the supervision environment has distinctive characteristics: in a centrally managed micro-space scene, failed communication easily causes loss of emotional control, producing conflicts and rule violations. At present these scenes are managed mainly by deploying cameras and voice-acquisition devices for real-time monitoring, combined with the daily management of supervising officers in a "human defense plus technical defense" mode, but this approach has many shortcomings:
First, emotional management of persons under compulsory measures can rely only on the personal experience of supervising officers. Because each officer often manages more than 50 persons, differences and limits in individual energy and experience make it difficult to monitor and effectively defuse every person's emotions in time; even video monitoring solves only the information-acquisition channel and provides no timely emotion-control capability.
Second, although many management rules have been established for daily work and control capability has been improved through "collective self-management of inmates plus dynamic officer patrols", the rules often remain superficial for lack of intelligent supervision and early-warning analysis, and patrols leave many blind spots. All-weather, blind-spot-free, precise control of personnel behavior in the micro-space scene therefore cannot be achieved, a large amount of police strength is consumed, and management efficiency is low.
With the development of modern information technologies such as cloud computing and big data, artificial-intelligence systems built on big data and computer algorithms are moving from experimental research into practical application. The invention therefore makes full use of big data, the Internet of Things, spatio-temporal information and artificial intelligence to provide a micro-space-oriented intelligent supervision system for panoramic portraits of personnel behavior, aiming at panoramic, intelligent supervision of the emotional changes and behavior of special personnel in micro-space scenes, so as to improve the management efficiency and level of places of compulsory measures.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a micro-space-oriented intelligent supervision system for panoramic portraits of personnel behavior, so as to effectively solve the problem of real-time, all-round supervision of personnel and their behavior in the micro-space environment of places of compulsory measures, improve supervision efficiency, reduce management cost and raise service level.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the utility model provides a personnel's action panorama portrait intelligence supervisory systems towards little space, its key lies in: including ubiquitous perception module, intelligent learning and training module, image deconstruction module, audio frequency deconstruction module, panorama three-dimensional management module, synthesize and portrait module, little space early warning and supervision module, wherein:
the ubiquitous sensing module is used for comprehensively sensing and acquiring information of the environment state of the micro-space in the monitoring place, and accessing external information of a service management system related to the monitored personnel, so that real-time comprehensive sensing acquisition and storage management of behavior and activity of the monitored personnel and the space environment condition are realized;
the intelligent learning and training module is used for classifying, analyzing, identifying and extracting the audio and video information acquired by the ubiquitous sensing module, analyzing and summarizing the emotional characteristics of people through the trained image, voice and behavior characteristics of the people, and managing and verifying learning and training results;
the image deconstruction module is used for automatically extracting and identifying, in real time, human facial features, behavior dynamics and the various articles in the supervision place from the video images acquired by the ubiquitous sensing module, based on the training library provided by the intelligent learning and training module, and for classifying and deconstructing the extracted and identified elements according to the image classes of the training library;
the audio deconstruction module is used for automatically extracting and identifying dynamic voice information of persons in real time from the audio acquired by the ubiquitous sensing module, based on the training library provided by the intelligent learning and training module; it performs selection, matching, classification and deconstruction according to the voice classes in the training library and pre-acquired voiceprint information, and pairs the acquired data with the supervised person;
the panoramic three-dimensional management module is used for performing realistic three-dimensional modeling and management on the micro-space of the supervision place according to the three-dimensional GIS technology;
the comprehensive portrait module is used for integrating, person by person, the information processed by the image and audio deconstruction modules on the basis of the gathered basic information of supervised persons, and for carrying out behavior and emotion analysis supported by the training library of the intelligent learning and training module, so as to produce a panoramic real-time portrait of each individual; it also produces a group portrait of each micro-space scene with the cell as the unit, so that the comprehensive portrait reflects the all-round state of individuals, groups and the micro-space scene;
the micro-space early warning and supervision module is used for supporting data of the intelligent learning and training module and the comprehensive portrait module, monitoring and analyzing individual and group conditions in the micro-space in real time based on real-time dynamic personnel portrait and environment monitoring information, and performing classified and graded early warning.
Further, the ubiquitous sensing module comprises an image sensing submodule, an audio sensing submodule, an environment sensing submodule and a data gathering and managing submodule, wherein:
the image sensing submodule is used for monitoring the whole micro-space scene of the supervision place without blind spots and for storing and managing the resulting video image data;
the audio sensing submodule is used for collecting, with high fidelity, the speech of persons in the micro-space of the supervision place and for storing and managing the collected audio;
the environment perception submodule is used for collecting temperature and humidity data of a micro space of a supervision place and storing and managing collected environment perception information;
the data aggregation and management submodule is used for aggregating, integrating, converting and storing the data provided by the image, audio and environment sensing submodules; it also interfaces with external service management systems and aggregates personnel and service information from external sources.
Further, the intelligent learning and training module comprises a feature dictionary classification submodule, a manually extracted feature classification submodule, a feature deep-learning classification submodule, a comprehensive analysis submodule and a feature library management submodule, wherein:
the feature dictionary classification submodule is used for dividing, classifying and analyzing image information and audio information by adopting a pre-established dictionary library, and realizing classification and extraction of image and audio features of people's expressions, behaviors, voices and articles in a micro space through multi-granularity combined calculation;
the manual extraction feature classification submodule is used for selecting certain personnel images, article images and audio information as analysis samples, manually extracting expression, behavior, voice and article features in the analysis samples, and automatically classifying and extracting implicit features of other images and audio information except the analysis samples by adopting an algorithm model;
the characteristic deep learning classification submodule is used for establishing a massive characteristic big database of expressions, behaviors, voices and articles through big data acquisition and induction, and then carrying out automatic classification and extraction on implicit characteristics of image and audio information;
the comprehensive analysis submodule is used for analyzing, by multi-factor weighted calculation, the classification features that the feature dictionary, manually extracted feature and feature deep-learning classification submodules extract for the same object; it retains the three sets of extraction results, derives a comprehensive result, and outputs it to the feature library management submodule;
the characteristic library management sub-module is used for realizing the function of managing the classified characteristic data output by the comprehensive analysis sub-module.
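The multi-factor weighted fusion performed by the comprehensive analysis submodule could be sketched as below. The function name `fuse_scores`, the weight values and the per-class score dictionaries are illustrative assumptions, not details given in the patent.

```python
# Illustrative sketch of multi-factor weighted fusion of the three
# classification submodules' per-class scores. All names and weight
# values are assumptions, not taken from the patent.

def fuse_scores(dictionary_scores, manual_scores, deep_scores,
                weights=(0.2, 0.3, 0.5)):
    """Combine per-class confidence scores from the feature dictionary,
    manually extracted feature, and deep-learning submodules into one
    comprehensive score per class; the three inputs stay available
    unchanged, matching the 'keep three types of extraction results'
    requirement."""
    w_dict, w_manual, w_deep = weights
    classes = set(dictionary_scores) | set(manual_scores) | set(deep_scores)
    return {cls: (w_dict * dictionary_scores.get(cls, 0.0)
                  + w_manual * manual_scores.get(cls, 0.0)
                  + w_deep * deep_scores.get(cls, 0.0))
            for cls in classes}

fused = fuse_scores({"anger": 0.6}, {"anger": 0.8}, {"anger": 0.9, "calm": 0.1})
best_class = max(fused, key=fused.get)
```

Here the comprehensive result is simply the highest fused score; a deployed system would tune the weights against verified training results.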
Further, the image deconstruction module comprises a person expression analysis submodule, a person behavior analysis submodule and an article analysis submodule, wherein:
the human expression analysis sub-module is used for adopting the facial expression feature classification extraction feature library provided by the intelligent learning and training module to dynamically analyze the human expression in the video perception image acquired by the ubiquitous perception module and outputting an analysis result to the comprehensive portrait module;
the personnel behavior analysis submodule is used for adopting a behavior dynamic characteristic classification extraction feature library provided by the intelligent learning and training module to analyze personnel behavior dynamics in the video perception image acquired by the ubiquitous perception module and outputting an analysis result to the comprehensive portrait module;
the article analysis submodule is used for analyzing and tracking articles in the video images collected by the ubiquitous sensing module, using the article feature classification library provided by the intelligent learning and training module, with emphasis on accurately extracting, dynamically marking and tracking prohibited articles, and for outputting the analysis result to the micro-space early-warning and supervision module.
Furthermore, the person expression analysis submodule divides expression features on two levels: by emotion polarity into positive and negative expression classes, and by fine-grained emotion into the classes of happiness, joy, tension, fear, hatred, indifference, anger and sadness;
the person behavior analysis submodule likewise divides behavior features on two levels, into positive and negative classes by emotion polarity, and into happiness, joy, tension, fear, hatred, indifference, anger and sadness by fine-grained emotion.
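The two-level taxonomy could be encoded as a simple mapping from fine-grained class to polarity. Note the polarity assignment below is an assumption; the patent names the fine-grained classes but does not state which count as positive.

```python
# Hypothetical encoding of the two-level emotion taxonomy:
# fine-grained class -> polarity. The polarity assignment is an
# assumption; the patent only names the classes.
EMOTION_POLARITY = {
    "happiness": "positive", "joy": "positive",
    "tension": "negative", "fear": "negative", "hatred": "negative",
    "indifference": "negative", "anger": "negative", "sadness": "negative",
}

def polarity_of(emotion):
    """First-level (polarity) label for a second-level emotion class."""
    return EMOTION_POLARITY.get(emotion, "unknown")
```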
Further, the audio deconstruction module comprises an audio selection submodule, a voiceprint matching submodule and a voice analysis submodule, wherein:
the audio selection submodule is used for selecting, from the audio collected by the ubiquitous sensing module, the speech and special sounds produced by persons, using the audio feature library provided by the intelligent learning and training module;
the voiceprint matching submodule is used for accessing the person voiceprint information extracted by the ubiquitous sensing module, matching the person speech selected by the audio selection submodule against it, and binding the sensed speech to the corresponding person;
the voice analysis submodule is used for analyzing the person speech in the audio according to the audio feature classification library provided by the intelligent learning and training module, classifying the speech features, and outputting the result to the comprehensive portrait module.
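One common way to realize the voiceprint matching described above is cosine similarity between speaker embeddings. This is a sketch under that assumption; the embedding model, the function names and the threshold value are not specified by the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_speaker(segment_embedding, enrolled_voiceprints, threshold=0.75):
    """Bind a speech segment to the enrolled person whose voiceprint is
    most similar, or return None when no match clears the threshold
    (threshold value is an assumption)."""
    best_id, best_sim = None, threshold
    for person_id, voiceprint in enrolled_voiceprints.items():
        sim = cosine_similarity(segment_embedding, voiceprint)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id
```

Returning None for unmatched segments lets the system flag speech that cannot be paired with any supervised person.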
Further, the panoramic three-dimensional management module comprises a GIS engine submodule, a micro-space model management submodule, a micro-space grid management submodule, a micro-space scene management submodule and a panoramic three-dimensional service interface submodule, wherein:
the GIS engine submodule is used for providing management, analysis and calculation of spatial data;
the micro-space model management submodule realizes the addition, deletion, modification and coding of the three-dimensional model in the micro-space based on the basic function of the GIS engine submodule;
the micro-space grid management submodule carries out gridding segmentation on a micro-space plane based on the basic function of the GIS engine submodule and manages grid units formed by segmentation;
the micro-space scene management submodule performs addition, deletion and coding management on GIS space objects except for the three-dimensional model object in the micro-space based on the basic function of the GIS engine submodule;
the panoramic three-dimensional service interface submodule is used for opening up, as modular interfaces in common formats, the basic functions of the GIS engine submodule together with the micro-space model, grid and scene data.
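The grid management idea can be illustrated with a minimal sketch: partition a rectangular floor plan into square cells and locate a point within them. The cell size, the row-column coding scheme and the function names are assumptions for illustration.

```python
import math

def make_grid(width_m, depth_m, cell_m):
    """Split a rectangular micro-space floor plan into square cells and
    return them keyed by a row-column code, each cell stored as
    (x_min, y_min, x_max, y_max) in metres."""
    cols = math.ceil(width_m / cell_m)
    rows = math.ceil(depth_m / cell_m)
    return {
        f"R{r}C{c}": (c * cell_m, r * cell_m,
                      min((c + 1) * cell_m, width_m),
                      min((r + 1) * cell_m, depth_m))
        for r in range(rows) for c in range(cols)
    }

def cell_of(x, y, cell_m):
    """Grid-cell code containing a point, e.g. a tracked person's
    position in the micro-space."""
    return f"R{int(y // cell_m)}C{int(x // cell_m)}"
```

Edge cells are clipped to the plan boundary, so the grid tiles the plane exactly even when the room size is not a multiple of the cell size.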
Further, the comprehensive portrait module comprises a personnel basic information management sub-module, a behavior and emotion analysis sub-module, a personnel portrait sub-module and a micro-space portrait sub-module, wherein:
the personnel basic information management submodule is used for uniformly managing the basic information of the monitored personnel by gathering the personnel information of the external service management system;
the behavior and emotion analysis submodule is used for integrating the received personnel expression, behavior and voice characteristic classification information analyzed by the image deconstruction module and the audio deconstruction module, and performing comprehensive analysis by combining the micro-space environment state and the personnel basic information to obtain a behavior and emotion analysis result taking the individual of the monitored personnel as a unit;
the person portrait submodule is used for adopting behavior and emotion analysis results to conduct portrait management on a person to be monitored;
the micro-space portrait sub-module is used for taking a single micro-space as a unit and taking a supervised person group in each micro-space as a whole to portrait group behaviors, emotions and stability change conditions.
Furthermore, the person portrait submodule manages the portrait of a supervised person in two modes, static and dynamic: the static portrait characterizes the person's character, emotion and behavior, formed from basic information and long-term dynamic monitoring and analysis results; the dynamic portrait characterizes the person's real-time emotion and behavior according to real-time ubiquitous-sensing information and analysis results.
Further, the micro-space early-warning and supervision module comprises a person abnormality early-warning submodule, a group abnormality early-warning submodule, a prohibited-article dynamic monitoring submodule and a comprehensive supervision submodule, wherein:
the person abnormality early-warning submodule is used for dividing a person's abnormality, by adjustable thresholds applied to the dynamic portrait information of the comprehensive portrait module, into four levels of green, orange, yellow and red, representing the four state intervals of stable, unstable, easily agitated and agitated;
the group abnormality early-warning submodule is used for monitoring and analyzing group abnormality with a single cell micro-space as the unit, based on the dynamic information of the comprehensive portrait module and the person abnormality early-warning submodule, likewise dividing it by adjustable thresholds into the four levels of green, orange, yellow and red, representing the group states of stable, unstable, prone to deviation and deviated;
the prohibited-article dynamic monitoring submodule is used for dynamically detecting and tracking prohibited articles that may be present in the micro-space;
the comprehensive supervision submodule uses the three-dimensional GIS scene as a carrier to carry out comprehensive, visual supervision of persons and groups in the micro-space; it interfaces with external service management systems and outputs the relevant analysis results to them.
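The four-level threshold scheme can be sketched as a mapping from a composite abnormality index to a warning colour. The cut-off values below are assumptions: the patent says only that the thresholds are adjustable.

```python
# Illustrative threshold mapping of a composite abnormality index in
# [0, 1] to the four warning levels; the cut-offs (0.25 / 0.5 / 0.75)
# are assumptions, since the patent states only that thresholds are
# adjustable.
THRESHOLDS = [(0.25, "green"), (0.50, "orange"), (0.75, "yellow")]

def warning_level(index):
    """green = stable, orange = unstable, yellow = easily agitated,
    red = agitated."""
    for upper, colour in THRESHOLDS:
        if index < upper:
            return colour
    return "red"
```

Keeping the cut-offs in a table rather than hard-coded branches makes the thresholds adjustable at runtime, as the text requires.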
The invention has the following remarkable effects:
First, big data and artificial intelligence are fully utilized to change the traditional supervision mode that relies mainly on officers' personal experience; informatization and intelligent means consolidate personal experience into collective experience, providing more scientific, accurate and comprehensive supervision support.
Second, by combining GIS spatial analysis with various deep-learning algorithms, traditional qualitative analysis is refined into quantitative analysis, providing a more accurate basis for judgment in both macroscopic and microscopic analysis.
Third, based on real-time ubiquitous sensing and intelligent big-data analysis and computation, the supervised environment and personnel are monitored around the clock; automated, computer-driven early warning and handling greatly reduces officers' workload, lowers human-resource input and improves management efficiency.
Fourth, the accumulation of long-term sensing and monitoring data and of AI analysis results continuously improves the ability to control and predict personnel behavior and emotion, strengthens emergency prevention, and shifts the working mode from traditional "handling after the event" to "prevention before the event".
Drawings
FIG. 1 is a block diagram of the system architecture of the present invention;
FIG. 2 is a schematic diagram of the structure of a ubiquitous sensing module;
FIG. 3 is a schematic diagram of a structure of an intelligent learning and training module;
FIG. 4 is a flow chart of feature extraction for the feature deep learning classification sub-module;
FIG. 5 is a schematic diagram of the structure of an image deconstruction module;
FIG. 6 is a schematic diagram of the structure of an audio deconstruction module;
FIG. 7 is a schematic structural diagram of a panoramic three-dimensional management module;
FIG. 8 is a diagram illustrating a composite image module;
fig. 9 is a schematic structural diagram of the micro-space warning and supervision module.
Detailed Description
The following provides a more detailed description of the embodiments and the operation of the present invention with reference to the accompanying drawings.
As shown in Figure 1, a micro-space-oriented intelligent supervision system for panoramic portraits of personnel behavior consists of a ubiquitous sensing module, an intelligent learning and training module, an image deconstruction module, an audio deconstruction module, a panoramic three-dimensional management module, a comprehensive portrait module, and a micro-space early-warning and supervision module, wherein:
the ubiquitous sensing module mainly uses Internet-of-Things technology to comprehensively sense and acquire the environmental state of the micro-space in the supervision place, including video, audio, temperature and humidity, and accesses external information from service management systems related to the supervised persons, realizing real-time, comprehensive acquisition and storage of personnel behavior and spatial environment conditions;
the external information refers to information held in other related personnel, case and business management systems, such as basic personal and family information in the household registration system, basic personal and residence information in the floating population management system, and case information in the case management system.
The intelligent learning and training module classifies, analyzes, recognizes and extracts the audio and video information acquired by the ubiquitous sensing module in three ways: feature dictionary classification, manually extracted feature classification, and feature deep-learning classification. It analyzes and summarizes personnel emotional characteristics from the trained image, voice and behavior features, manages and verifies the learning and training results, forms a closed-loop automatic machine-learning system, and supports the intelligent operation of the system's other functional modules.
The image deconstruction module, for the video images acquired by cameras in the micro-space and supported by the training library of the intelligent learning and training module, automatically extracts and identifies facial features, behavior dynamics and the various articles in the supervision place in real time; it classifies and deconstructs the extracted and identified elements according to the image classes of the training library, turning continuous, dynamic, unstructured video data into discrete, standard, structured data that supports analysis, early-warning and supervision work.
In this embodiment, the image deconstruction module extracts and identifies facial features, behavior dynamics and articles as follows. For the different facial features, behavior dynamics and articles, mature image-recognition algorithms dynamically extract the various video images and build them into an image library according to the classified and deconstructed image data; the face and behavior image library is stored per individual, and the article image library per independent micro-space. In the initial state, the system extracts, identifies and stores all image information not previously acquired; as the classified and deconstructed image data become richer and more complete, subsequent extraction stores only images whose features change beyond a certain threshold (for example, 30%). For recognizing facial features and behavior dynamics, mature big-data recognition methods can be used directly. For article recognition, because the articles in a place of compulsory measures are relatively fixed and foreseeable, a feature-image dictionary of articles can first be established as a full library of the articles that may appear, after which articles are extracted and identified by pixel matching.
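The store-only-on-change rule above can be sketched as follows. Representing an image by a feature vector and measuring relative L1 change is an assumption; the patent gives only the "30%" threshold example.

```python
def should_store(new_features, last_stored, threshold=0.30):
    """Decide whether to store a newly extracted image. In the initial
    state (nothing stored yet) everything is stored; afterwards an
    image is stored only when its feature vector differs from the last
    stored one by more than the threshold. Relative L1 change as the
    distance measure is an assumption for illustration."""
    if last_stored is None:          # initial state: store everything
        return True
    diff = sum(abs(a - b) for a, b in zip(new_features, last_stored))
    base = sum(abs(b) for b in last_stored) or 1.0
    return diff / base > threshold
```

This keeps the per-individual library compact once it is well populated, since near-duplicate frames are skipped.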
The method of classifying and deconstructing the extracted and recognized elements according to the image classes in the training library refers to the process of classifying and analyzing the extracted image data and building structured data from it. Because extraction and recognition must refer to the existing classification and deconstruction data, when corresponding data exist, the extracted images are directly tagged with class and code labels according to the classification result, and a classification-deconstruction index table is built. For image data with no reference in the classification-deconstruction data, the computer automatically matches and pre-judges according to the feature-point rules of image recognition and gives a preliminary result, after which the supervising officers make the final classification by combining it with personal experience. For example, in facial expression recognition the computer extracts facial feature points from the extracted face image and classifies the expression as smiling, crying, apathetic, excited, angry and so on according to the distance proportions among several feature points; the supervising officers then confirm the classification. Once classification-deconstruction data are established for an expression, similar expressions within a threshold range are classified and deconstructed by the computer automatically.
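The distance-proportion idea can be illustrated with one toy ratio. The landmark names, the particular ratio and the threshold are all assumptions; a real expression classifier would combine many such proportions.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_openness_ratio(landmarks):
    """One distance proportion: mouth opening relative to inter-eye
    distance, so the ratio is invariant to face size in the image.
    Landmark names are illustrative assumptions."""
    eye_span = dist(landmarks["left_eye"], landmarks["right_eye"])
    mouth_gap = dist(landmarks["upper_lip"], landmarks["lower_lip"])
    return mouth_gap / eye_span

def classify_expression(ratio, open_threshold=0.35):
    """Crude two-way split on a single ratio, for illustration only."""
    return "excited" if ratio > open_threshold else "calm"
```

Normalizing by the inter-eye span is what makes the proportion comparable across persons at different distances from the camera.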
The audio deconstruction module, for the audio acquired by audio sensing in the micro-space and supported by the training library of the intelligent learning and training module, automatically extracts and identifies dynamic person speech in real time; it selects, matches, classifies and deconstructs according to the voice classes in the training library and pre-acquired voiceprint information, turning mixed, dynamic, unstructured audio into discrete, standard, structured data paired with the supervised person, so as to support analysis, early-warning and supervision work.
the panoramic three-dimensional management module mainly uses three-dimensional GIS technology to perform realistic three-dimensional modeling and management of the micro-space of the supervision place, provides related data and functional interfaces, and supports other modules in calling the data and functions of the micro-space scene, so that various service functions can be integrated and applied within the three-dimensional scene;
the comprehensive portrait module, taking the gathered basic information of the supervised personnel as its basis, integrates the information processed by the image deconstruction module and the audio deconstruction module, and performs personnel behavior analysis and emotion analysis supported by the intelligent learning and training module, so as to realize a panoramic real-time portrait of each individual; it also performs group portraits taking the micro-space of a prison room as the unit, so that the comprehensive portrait reflects the all-round state of individuals, groups and micro-space scenes;
the micro-space early warning and supervision module, relying on the support of the intelligent learning and training module and the comprehensive portrait module, monitors and analyzes the individual and group conditions in the micro-space in real time based on the real-time dynamic personnel portraits and the environment monitoring information, performs classified and graded early warning according to defined index thresholds, and provides intelligent, automated information support for the comprehensive supervision work of managers.
The classification and grading early warning mechanism is as follows. The mechanism is divided into an individual early warning mechanism, which takes each person as the unit, and a group early warning mechanism, which takes a single micro-space as the unit. For an individual, a comprehensive individual emotion index value is obtained on the basis of the classification of facial expressions and body movements in the real-time dynamic personnel portrait, combined with micro-space environmental factors. The index value is divided into four early warning levels: green, orange, yellow and red. Green represents a stable state with no warning; orange represents an unstable state, with a warning reminding attention; yellow represents an excitable state, with a warning requiring constant attention and sounding an alarm; red represents an agitated state, directly raising an alarm and automatically dispatching a manual handling task. For a group, a comprehensive group emotion index value is obtained by comprehensive calculation over the individual emotion indexes, taking into account the concentration state and concentration areas of personnel within the grid units into which the micro-space is divided; the calculation may use a simple average or a weighted average with a normal distribution. After the index value is calculated, early warning is handled according to the same four-level green, orange, yellow and red division.
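The four-level mechanism and the two composite-calculation modes described above can be sketched as follows; the index scale in [0, 1] and the threshold values are illustrative assumptions:

```python
def warning_level(index, thresholds=(0.25, 0.5, 0.75)):
    """Map a comprehensive emotion index in [0, 1] to the four described
    early warning levels; the threshold values are placeholders."""
    t1, t2, t3 = thresholds
    if index < t1:
        return "green"   # stable, no warning
    if index < t2:
        return "orange"  # unstable, warning reminds attention
    if index < t3:
        return "yellow"  # excitable, watch constantly and sound alarm
    return "red"         # agitated, alarm and dispatch a handling task

def group_index(individual_indices, weights=None):
    """Comprehensive group index: plain mean, or a weighted mean when
    weights (e.g. a normal-distribution profile over grid units) are given."""
    if weights is None:
        return sum(individual_indices) / len(individual_indices)
    return sum(i * w for i, w in zip(individual_indices, weights)) / sum(weights)
```

The group value is then passed through the same `warning_level` mapping as the individual values.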
In this example, the ubiquitous sensing module is composed of an image sensing sub-module, an audio sensing sub-module, an environment sensing sub-module, and a data aggregation and management sub-module, as shown in fig. 2, where:
the image perception submodule comprises cameras together with video data transmission and storage functions; two or more high-definition wide-angle cameras are generally deployed at suitable positions in the micro-space scene to realize video monitoring of the whole scene without dead angles, and the image and video information collected in real time is stored and managed uniformly through the video data transmission and storage functions;
the audio perception submodule comprises audio perception devices together with audio data transmission and storage functions; two or more high-fidelity, deep-noise-reduction sound collection devices are generally deployed at suitable positions in the micro-space scene to collect the voice information of personnel in the scene with high fidelity, and the audio information collected in real time is stored and managed uniformly through the audio data transmission and storage functions;
the environment perception submodule comprises temperature sensing devices together with environment perception data transmission and storage functions; a group of temperature and humidity sensing devices is deployed centrally in the micro-space scene to collect environment perception information, sensing units monitoring other indexes can be added to the devices according to actual application requirements, and the environment perception information collected in real time is stored and managed uniformly through the data transmission and storage functions;
the data aggregation and management submodule provides management functions such as aggregation, integration, channel conversion, storage and processing of all kinds of data, and interfaces with external related service management systems to gather personnel and service information from external sources beyond the image perception submodule, the audio perception submodule and the environment perception submodule.
In this example, the intelligent learning and training module is composed of a feature dictionary classification submodule, a manual feature extraction classification submodule, a feature deep learning classification submodule, a comprehensive analysis submodule, and a feature library management submodule, as shown in fig. 3, wherein:
the feature dictionary classification submodule divides, classifies and analyzes image information and audio information based on pre-established dictionary libraries of expressions, behaviors, voices, articles and the like, and realizes the classification and extraction of image and audio features of personnel expressions, behaviors, voices and articles in the micro-space through multi-granularity combined calculation;
the specific method of the dictionary classification is as follows. Taking the facial expression dictionary library as an example, a generic facial image is selected and the face is divided into parts such as forehead, eyebrows, eyes, cheeks, nose and mouth; the facial expression parameters for each emotion are then solidified according to the change rate of the distance proportions between these parts under different emotions such as happiness, joy, tension and anger, and each set of solidified expression part parameters forms one facial expression image dictionary item. In actual use, the facial image to be judged is compared with the dictionary items to determine the facial expression emotion classification of the person.
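A minimal sketch of the dictionary lookup: each dictionary item holds solidified per-part change-rate parameters, and a face to be judged is matched to the closest item. The part names and numeric values below are placeholders, not values from the patent:

```python
# Illustrative dictionary items: per-part change rates of the distance
# proportions under each emotion (all values are placeholders).
EXPRESSION_DICTIONARY = {
    "happy": {"eyebrow": 0.05, "eye": -0.10, "mouth": 0.30},
    "tense": {"eyebrow": -0.15, "eye": 0.10, "mouth": -0.05},
    "angry": {"eyebrow": -0.25, "eye": 0.15, "mouth": -0.20},
}

def match_dictionary(observed):
    """Return the dictionary item whose solidified parameters are closest
    (by sum of absolute differences) to the observed change rates."""
    def score(entry):
        return sum(abs(observed.get(part, 0.0) - v) for part, v in entry.items())
    return min(EXPRESSION_DICTIONARY, key=lambda k: score(EXPRESSION_DICTIONARY[k]))
```

A production system would compare per-part parameters for all face parts rather than this reduced set.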
The manual feature extraction classification submodule comprises functions such as sample selection, manual feature extraction and model calculation; certain images and audio information of personnel and articles are selected as analysis samples through sample selection, the features of expressions, behaviors, voices, articles and the like in the sample information are then extracted through the manual feature extraction function, and implicit features of the remaining images and audio information are automatically classified and extracted by means of calculation models built with algorithms such as regression and support vector machines;
the process of the manual feature extraction classification method is as follows. Taking the extraction of facial expression features as an example, a certain proportion of sample persons (for example, 10% of those in custody) is drawn from the total number of persons in custody, their facial expressions are classified by emotion into classes such as happiness, tension, joy and anger through manual judgment, the emotion-classified facial images are divided into parts such as forehead, eyebrows, eyes, cheeks, nose and mouth, and the changes in the distance proportions between these parts are calculated across the different samples under each emotion class, yielding a group of interval range values per emotion that form the sample reference values. In actual use, the facial expression image of the person to be judged is compared with the sample reference values to determine the facial expression emotion classification of the person.
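The interval-range reference values described above can be sketched for a single distance-proportion feature as follows (a real system would keep one interval per face part and per proportion; the numbers are illustrative):

```python
def build_reference_intervals(samples):
    """samples: {emotion: [ratio, ...]} from the manually judged sample set
    (e.g. ~10% of persons in custody); returns {emotion: (min, max)}
    interval reference values."""
    return {emo: (min(vals), max(vals)) for emo, vals in samples.items()}

def classify_by_interval(ratio, intervals):
    """Return the emotions whose sample interval contains the ratio; an
    empty list means the image falls outside all reference intervals and
    needs manual judgment."""
    return [emo for emo, (lo, hi) in intervals.items() if lo <= ratio <= hi]
```

Overlapping intervals would return several candidate emotions, which the model calculation or a supervising officer then disambiguates.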
The feature deep learning classification submodule, based on deep network models, establishes a massive feature database of expressions, behaviors, voices and articles through big data collection and induction by means of machine learning algorithms, and then automatically classifies and extracts the implicit features of image and audio information in an artificial intelligence manner;
the process of the feature deep learning classification method is as follows. Taking the extraction of facial expression features as an example, a massive facial emotional expression feature database is established from the persons in panoramic custody together with collected facial expression image data of persons not in custody. When each facial expression is judged, a judgment result is obtained through big data matching calculation based on a mature machine learning algorithm, and the judgment result is incorporated into the feature database, so that the accuracy of the big data matching calculation continuously improves as data accumulates in the database.
The feature extraction steps of the feature deep learning classification submodule are shown in fig. 4. The expressions, behaviors and voices of the persons in panoramic custody, together with image data of prohibited articles that may exist in compulsory measure places, are collected to establish a base image and voice database, and mature machine learning and recognition algorithms for facial and body behaviors, voices, articles and the like are used to extract and establish a large image and voice feature database corresponding to the various emotions. When a new object in custody needs to be identified and classified, a classification and identification result is obtained based on the established image and voice feature database and the image and voice feature matching identification method, and the results are stored synchronously into the database to support subsequent analysis and judgment.
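The accumulate-as-you-classify loop of the steps above can be sketched with a toy nearest-neighbour store; the patent specifies only that a mature machine learning and recognition algorithm is used, so this is purely an illustration of the data flow:

```python
class FeatureDatabase:
    """Toy stand-in for the image/voice feature big database: stores
    labelled feature vectors, classifies new ones by nearest neighbour,
    and stores each result back so matching can draw on more data over time."""

    def __init__(self):
        self.entries = []  # list of (label, feature_vector)

    def add(self, label, vec):
        self.entries.append((label, vec))

    def classify_and_store(self, vec):
        # Nearest neighbour by squared Euclidean distance.
        label = min(
            self.entries,
            key=lambda e: sum((a - b) ** 2 for a, b in zip(e[1], vec)),
        )[0]
        self.add(label, vec)  # synchronously store the result for later judgments
        return label
```

Each classification enlarges the store, mirroring the claim that accuracy improves as the database accumulates data.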
The comprehensive analysis submodule analyzes and calculates, in a multi-factor weighted manner, the classification features of the same object extracted by the feature dictionary classification submodule, the manual feature extraction classification submodule and the feature deep learning classification submodule, obtains a comprehensive analysis result while retaining the three types of extraction results, and outputs the comprehensive analysis result to the feature library management submodule;
specifically, the comprehensive analysis submodule performs reliability analysis on the classification features of the same object extracted by the three submodules according to factors such as sample size, timeliness and degree of manual intervention, and, while retaining the three types of extraction results, selects the result with the highest reliability for output to the feature library management submodule. The reliability analysis proceeds as follows: if the extraction results of the three submodules are the same, that result is adopted; if two submodules give the same result and the third differs, the result of the two agreeing submodules is adopted; if all three results differ, the result of the feature deep learning classification submodule is selected and marked as a doubtful result, which is then corrected through manual post-judgment. The corrected result is taken as the accurate result and fed back to the feature dictionary classification submodule as a dictionary item, to the manual feature extraction classification submodule as sample data, and to the feature deep learning classification submodule to enrich the image and voice feature big data.
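The reliability rule described above reduces to a two-of-three vote with a doubtful-result fallback; a minimal sketch:

```python
def fuse_results(dict_res, manual_res, deep_res):
    """Fuse the three submodules' classification results per the rule above.
    Returns (result, doubtful): unanimous or two-of-three results win;
    when all three differ, the deep learning result is returned flagged
    as doubtful for manual post-judgment."""
    results = [dict_res, manual_res, deep_res]
    for r in results:
        if results.count(r) >= 2:
            return r, False      # agreed result, not doubtful
    return deep_res, True        # all differ: doubtful, needs manual review
```

The corrected manual judgment would then be fed back to all three submodules as described.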
The feature library management submodule realizes the management of the classification features of expressions, behaviors, voices, articles and the like obtained by the various classification extraction methods, including functions such as input, output, storage and retrieval of feature information.
In this example, the image deconstruction module is composed of a human expression analysis submodule, a human behavior analysis submodule, and an article analysis submodule, as shown in fig. 5, where:
the personnel expression analysis submodule, supported by the facial expression feature classification extraction feature library of the intelligent learning and training module, dynamically analyzes personnel expressions in the video perception images and divides the expression features at two levels: by emotion polarity, into the two classes of positive emotion and negative emotion; and by multi-class emotion, into classified expressions such as happy, joyful, tense, fearful, disgusted, indifferent, angry and sad; the analysis results are output to the comprehensive portrait module;
the personnel behavior analysis submodule, supported by the behavior dynamic feature classification extraction feature library of the intelligent learning and training module, analyzes personnel behavior dynamics in the video perception images and likewise divides the behavior features at two levels: by emotion polarity, into the two classes of positive emotion and negative emotion; and by multi-class emotion, into classes such as happy, joyful, tense, fearful, disgusted, indifferent, angry and sad; the analysis results are output to the comprehensive portrait module;
the article analysis submodule, supported by the article feature classification extraction feature library of the intelligent learning and training module, analyzes and tracks articles in the video perception images, accurately extracts prohibited articles and dynamically marks and tracks them, and outputs the analysis results to the micro-space early warning and supervision module.
In this example, the audio deconstruction module is composed of an audio picking submodule, a voiceprint matching submodule, and a voice analysis submodule, as shown in fig. 6, wherein:
the audio picking submodule, supported by the audio feature library of the intelligent learning and training module, picks out the voices and special audio information produced by personnel from the mixed audio information collected in the micro-space by the audio perception submodule;
the voiceprint matching submodule accesses the personnel voiceprint information of the data aggregation and management submodule and, based on a mature voiceprint matching algorithm, matches and analyzes the personnel voice information picked out by the audio picking submodule against the personnel voiceprint information, thereby binding each perceived voice to a person;
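The matching step can be sketched with cosine similarity over voiceprint feature vectors. The vectors and the threshold are illustrative assumptions; the patent specifies only that a mature voiceprint matching algorithm is used:

```python
import math

def cosine(a, b):
    """Cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def bind_speaker(segment_vec, enrolled, threshold=0.8):
    """Match a picked-out voice segment against pre-collected voiceprint
    vectors and bind it to the best-matching person, or return None when
    no enrolled print exceeds the similarity threshold."""
    best, best_sim = None, -1.0
    for person, vec in enrolled.items():
        sim = cosine(segment_vec, vec)
        if sim > best_sim:
            best, best_sim = person, sim
    return best if best_sim >= threshold else None
```

An unbound segment would be left unpaired rather than attributed to the wrong supervised person.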
the voice analysis submodule, supported by the audio feature classification extraction feature library of the intelligent learning and training module, analyzes the personnel voice information in the audio perception data and divides the personnel voice features at two levels: by emotion polarity, into the two classes of positive emotion and negative emotion; and by multi-class emotion, into classified voices such as happy, joyful, tense, fearful, disgusted, indifferent, angry and sad; the analysis results are output to the comprehensive portrait module;
in this example, the panoramic three-dimensional management module is composed of a GIS engine submodule, a micro-space model management submodule, a micro-space grid management submodule, a micro-space scene management submodule, and a panoramic three-dimensional service interface submodule, as shown in fig. 7, wherein:
the GIS engine submodule adopts mature commercial three-dimensional GIS software and, through secondary development and integration, provides basic GIS functions such as spatial data management, analysis and calculation;
the micro-space model management submodule realizes the functions of adding, deleting, modifying, encoding and the like of the three-dimensional model in the micro-space based on the basic function of the GIS engine submodule;
the micro-space grid management submodule, based on the basic functions of the GIS engine submodule, divides the micro-space plane appropriately and manages the grid units thus formed; the division of the micro-space grid mainly facilitates the spatial correlation analysis of personnel behavior and emotion;
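A minimal sketch of the grid division, assuming a rectangular micro-space plane and square cells (dimensions and cell size are illustrative, and a real system would use the GIS engine's own spatial primitives):

```python
def divide_grid(width, height, cell):
    """Divide a rectangular micro-space plane into square grid cells;
    returns a list of cell bounding boxes (xmin, ymin, xmax, ymax)
    clipped to the plane."""
    cells = []
    y = 0.0
    while y < height:
        x = 0.0
        while x < width:
            cells.append((x, y, min(x + cell, width), min(y + cell, height)))
            x += cell
        y += cell
    return cells

def cell_of(point, cell):
    """(row, column) index of the cell containing a point, for associating
    a person's position with a grid unit during spatial correlation analysis."""
    x, y = point
    return int(y // cell), int(x // cell)
```

Per-cell person counts and emotion indexes can then feed the group early warning calculation.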
the micro-space scene management submodule, based on the basic functions of the GIS engine submodule, manages GIS spatial objects in the micro-space other than three-dimensional model objects, such as icons, text, graphics and various labels and annotations, including their addition, deletion and encoding;
the panoramic three-dimensional service interface submodule opens functions and data such as the basic GIS functions, micro-space model data, micro-space grid data and micro-space scene data as modular interfaces in generic formats such as Web services and REST services, supporting integrated calls by the other functional modules of the system;
in this example, the comprehensive portrait module is composed of a personnel basic information management sub-module, a behavior and emotion analysis sub-module, a personnel portrait sub-module, and a micro-space portrait sub-module, as shown in fig. 8, where:
the personnel basic information management submodule uniformly manages the basic information of the supervised personnel by converging the personnel information of the external service system, and has the functions of adding, modifying, deleting, synchronizing and the like of the personnel basic information;
the behavior and emotion analysis submodule integrates the received personnel expression, behavior and voice characteristic classification information analyzed by the image deconstruction module and the audio deconstruction module, and performs comprehensive analysis by combining the micro-space environment state and the personnel basic information to obtain behavior and emotion analysis results taking the individuals of the monitored personnel as units;
the personnel portrait submodule performs portrait management of the supervised personnel based on the analysis results of the behavior and emotion analysis submodule, comprising two modes, static portrait and dynamic portrait: the static portrait characterizes a person's character, emotion and behavior features formed from the basic personnel information and long-term dynamic monitoring and analysis results, while the dynamic portrait characterizes the person's real-time emotion and behavior features from the ubiquitous real-time dynamic information and analysis results;
the micro-space portrait sub-module takes a single micro-space as a unit, takes a supervised person group in each micro-space as a whole, and portrays the group behavior, emotion and stability change conditions;
in this example, the micro-space early warning and supervision module is composed of a personnel abnormity early warning submodule, a group abnormity early warning submodule, a prohibited item dynamic monitoring submodule, and a comprehensive supervision submodule, as shown in fig. 9, wherein:
the personnel abnormality early warning submodule, according to the dynamic personnel portrait information of the comprehensive portrait module, divides personnel abnormality conditions through adjustable thresholds into four levels of green, orange, yellow and red, representing the state intervals stable, unstable, excitable and agitated respectively, and managers make supervision decisions based on the abnormality early warning information;
the group abnormality early warning submodule monitors and analyzes group abnormality conditions taking a single supervision room micro-space as the unit, based on the dynamic information of the comprehensive portrait module and the personnel abnormality early warning submodule; group abnormality conditions are divided through adjustable thresholds into four levels of green, orange, yellow and red, representing the four state intervals of group stable, group unstable, group prone to agitation and group agitated respectively, and managers make supervision decisions based on the abnormality early warning information;
the prohibited article dynamic monitoring submodule, supported by the analysis functions of the intelligent learning and training module, dynamically detects and tracks prohibited articles that may exist in the micro-space;
the comprehensive supervision submodule, taking the three-dimensional GIS scene as its carrier, performs comprehensive, visual supervision of personnel and groups in the micro-space, comprising functions such as index monitoring and analysis, micro-space state inspection, personnel information query, prison scheduling, personnel allocation management and perception equipment state monitoring; it interfaces with external service systems, outputs related analysis results to them, and supports the operation and management work of other service systems.
The operation flow of the functional modules in the system is as follows. The ubiquitous sensing module is the input source of the whole system: it collects all kinds of perception information, gathers external service system information, and outputs the perception data to the intelligent learning and training module, the image deconstruction module and the audio deconstruction module. The intelligent learning and training module supports the analysis and calculation of the image deconstruction module and the audio deconstruction module; their analysis and calculation results are in turn fed back to the intelligent learning and training module to supplement the feature library information, so that its analysis and calculation accuracy continuously improves through machine learning algorithms. The analysis results of the image deconstruction module and the audio deconstruction module, combined with the environment perception information of the ubiquitous sensing module, converge to the comprehensive portrait module to realize comprehensive portrait management of personnel and micro-spaces. The comprehensive portrait module outputs its results to the micro-space early warning and supervision module to support comprehensive, visual early warning analysis and supervision work. The micro-space early warning and supervision module is the display, application and output terminal of the whole system: management officers operate this module through PC terminals, mobile APPs and the command and dispatch large screen to schedule and use the whole system, and it connects with external systems to output management, analysis and decision information to external service systems. The panoramic three-dimensional management module provides the basic three-dimensional GIS capability supporting the operation of the intelligent learning and training module, the comprehensive portrait module and the micro-space early warning and supervision module. The system can thus effectively solve the difficult problem of real-time, all-round supervision of personnel and personnel behavior in the micro-space environments of all kinds of compulsory measure places, improving supervision efficiency, reducing management cost and raising service level.
The technical solution provided by the present invention is described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors, characterized by comprising a ubiquitous sensing module, an intelligent learning and training module, an image deconstruction module, an audio deconstruction module, a panoramic three-dimensional management module, a comprehensive portrait module, and a micro-space early warning and supervision module, wherein:
the ubiquitous sensing module is used for comprehensively sensing and acquiring information of the environment state of the micro space of the monitoring place, and accessing external information of a service management system related to the monitored personnel to realize real-time comprehensive sensing, acquisition, storage and management of behavior and activity of the monitored personnel and the space environment condition;
the intelligent learning and training module is used for classifying, analyzing, identifying and extracting the audio and video information acquired by the ubiquitous sensing module, analyzing and summarizing the emotional characteristics of people through the trained image, voice and behavior characteristics of the people, and managing and verifying learning and training results;
the image deconstruction module is used for automatically extracting and identifying various articles in human facial features, behavior dynamics and supervision places in real time based on the training library support provided by the intelligent learning and training module aiming at the video image information acquired by the ubiquitous sensing module, and classifying and deconstructing the extracted and identified elements according to the image classification in the training library;
the audio deconstruction module is used for automatically extracting and recognizing the dynamic voice information of personnel in real time from the audio information collected by the ubiquitous sensing module, based on the training library support provided by the intelligent learning and training module, performing picking, matching, classification and deconstruction according to the voice classification in the training library and the pre-collected personnel voiceprint information, and pairing the obtained data with the supervised person;
the panoramic three-dimensional management module is used for performing realistic three-dimensional modeling and management of the micro-space of the supervision place using three-dimensional GIS technology;
the comprehensive portrait module is used for integrating the information processed by the image deconstruction module and the audio deconstruction module on the basis of the gathered basic information of the supervised person and by taking the person as a unit, carrying out personnel behavior analysis and emotion analysis on the basis of the training library support provided by the intelligent learning and training module so as to realize panoramic real-time portrait of the individual person, simultaneously carrying out group portrait on a micro-space scene by taking a prison room as a unit, and reflecting the omnibearing state of the individual person, the group and the micro-space scene through the comprehensive portrait;
the micro-space early warning and supervision module is used for, relying on data support from the intelligent learning and training module and the comprehensive portrait module, monitoring and analyzing the individual and group conditions in the micro-space in real time based on the real-time dynamic personnel portraits and environment monitoring information, and performing classified and graded early warning.
2. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 1, characterized in that the ubiquitous sensing module comprises an image perception submodule, an audio perception submodule, an environment perception submodule and a data aggregation and management submodule, wherein:
the image perception submodule is used for monitoring the whole scene of the micro space of the supervision place without dead angles and storing and managing the obtained video image data;
the audio perception submodule is used for highly restoring and collecting the personnel voice information in the micro space of the supervision place and storing and managing the collected audio information;
the environment perception submodule is used for collecting temperature and humidity data of a micro space of a supervision place and storing and managing collected environment perception information;
the data aggregation and management submodule is used for aggregating, integrating, channel-converting and storing all kinds of data provided by the image perception submodule, the audio perception submodule and the environment perception submodule, and for interfacing with external related service management systems to gather personnel and service information from external sources.
3. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 1, characterized in that the intelligent learning and training module comprises a feature dictionary classification submodule, a manual feature extraction classification submodule, a feature deep learning classification submodule, a comprehensive analysis submodule and a feature library management submodule, wherein:
the feature dictionary classification submodule is used for dividing, classifying and analyzing image information and audio information by adopting a pre-established dictionary library, and realizing classification and extraction of image and audio features of people's expressions, behaviors, voices and articles in a micro space through multi-granularity combined calculation;
the manual extraction feature classification submodule is used for selecting certain personnel images, article images and audio information as analysis samples, manually extracting expression, behavior, voice and article features in the analysis samples, and automatically classifying and extracting implicit features of other images and audio information except the analysis samples by adopting an algorithm model;
the characteristic deep learning classification submodule is used for establishing a massive characteristic big database of expressions, behaviors, voices and articles through big data acquisition and induction, and then carrying out automatic classification and extraction on implicit characteristics of image and audio information;
the comprehensive analysis submodule is used for analyzing, by multi-factor weighted calculation, the classification features that the feature dictionary classification submodule, the manual extraction feature classification submodule and the feature deep learning classification submodule extract for the same object, obtaining a comprehensive analysis result while retaining the three types of extraction results, and outputting that result to the feature library management submodule;
the feature library management submodule is used for managing the classified feature data output by the comprehensive analysis submodule.
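The multi-factor weighted fusion performed by the comprehensive analysis submodule can be sketched as follows; the weight values and per-label score dictionaries are illustrative assumptions, not specified in the patent:

```python
def fuse_classifications(dict_scores, manual_scores, deep_scores,
                         weights=(0.2, 0.3, 0.5)):
    """Weighted fusion of per-label confidence scores from the three
    classification submodules (dictionary, manual, deep learning).
    The weights are hypothetical tuning parameters."""
    labels = set(dict_scores) | set(manual_scores) | set(deep_scores)
    fused = {}
    for label in labels:
        fused[label] = (weights[0] * dict_scores.get(label, 0.0)
                        + weights[1] * manual_scores.get(label, 0.0)
                        + weights[2] * deep_scores.get(label, 0.0))
    # Retain the three raw extraction results alongside the fused output,
    # as the claim requires.
    return {"dictionary": dict_scores, "manual": manual_scores,
            "deep": deep_scores, "fused": fused}
```

The fused scores would then be handed to the feature library management submodule for storage.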
4. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 1, characterized in that: the image deconstruction module comprises a personnel expression analysis submodule, a personnel behavior analysis submodule and an article analysis submodule, wherein:
the personnel expression analysis submodule is used for dynamically analyzing personnel expressions in the video perception images acquired by the ubiquitous perception module, using the facial expression feature classification extraction feature library provided by the intelligent learning and training module, and outputting the analysis results to the comprehensive portrait module;
the personnel behavior analysis submodule is used for analyzing personnel behavior dynamics in the video perception images acquired by the ubiquitous perception module, using the behavior dynamic feature classification extraction feature library provided by the intelligent learning and training module, and outputting the analysis results to the comprehensive portrait module;
the article analysis submodule is used for analyzing and tracking articles in the video perception images collected by the ubiquitous perception module, using the article feature classification extraction feature library provided by the intelligent learning and training module, with emphasis on accurately extracting, dynamically marking and tracking prohibited items, and outputting the analysis results to the micro-space early warning and supervision module.
5. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 4, characterized in that: the personnel expression analysis submodule further divides expression features at two levels, namely, by emotion polarity, into positive emotion and negative emotion classes, and, by multi-class emotion, into expression classes including happiness, joy, tension, fear, hate, apathy, anger and sadness;
the personnel behavior analysis submodule likewise divides behavior features at two levels, namely, by emotion polarity, into positive and negative behavior classes, and, by multi-class emotion, into behavior classes corresponding to happiness, joy, tension, fear, hate, apathy, anger and sadness.
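The two-level classification in claim 5 can be sketched as a mapping from the multi-class emotion labels down to the polarity level; the grouping of labels into positive and negative sets is an illustrative assumption:

```python
# Hypothetical grouping of the claim's multi-class emotion labels
# into the two polarity classes.
POSITIVE = {"happiness", "joy"}
NEGATIVE = {"tension", "fear", "hate", "apathy", "anger", "sadness"}

def polarity(emotion: str) -> str:
    """Collapse a multi-class emotion label to its polarity class."""
    if emotion in POSITIVE:
        return "positive"
    if emotion in NEGATIVE:
        return "negative"
    return "unknown"
```

Keeping both levels lets downstream modules consume either the coarse polarity or the fine-grained label, as the claim describes.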
6. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 1, characterized in that: the audio deconstruction module comprises an audio picking submodule, a voiceprint matching submodule and a voice analysis submodule, wherein:
the audio picking submodule is used for picking out the voice and special audio information produced by personnel from the audio information collected by the ubiquitous perception module, using the audio feature library provided by the intelligent learning and training module;
the voiceprint matching submodule is used for accessing the personnel voiceprint information extracted by the ubiquitous perception module, matching the personnel voice information picked out by the audio picking submodule against the personnel voiceprint information, and binding the perceived voice to the corresponding person;
the voice analysis submodule is used for analyzing the personnel voice information in the audio information according to the audio feature classification extraction feature library provided by the intelligent learning and training module, classifying the personnel voice features, and outputting the analysis results to the comprehensive portrait module.
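The voiceprint matching and binding step can be sketched as a nearest-neighbor search over enrolled voiceprint embeddings; the vector representation, similarity measure and threshold are all assumptions, since the patent does not specify a matching algorithm:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def bind_speech_to_person(speech_embedding, enrolled, threshold=0.75):
    """Match a picked-out speech segment's embedding against enrolled
    personnel voiceprints; return the best-matching person ID, or None
    if no match clears the (hypothetical) threshold."""
    best_id, best_score = None, threshold
    for person_id, voiceprint in enrolled.items():
        score = cosine_similarity(speech_embedding, voiceprint)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

A None result would leave the audio segment unbound rather than attributing it to the wrong person.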
7. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 1, characterized in that: the panoramic three-dimensional management module comprises a GIS engine submodule, a micro-space model management submodule, a micro-space grid management submodule, a micro-space scene management submodule and a panoramic three-dimensional service interface submodule, wherein:
the GIS engine submodule is used for providing management, analysis and calculation of spatial data;
the micro-space model management submodule realizes the addition, deletion, modification and coding of the three-dimensional model in the micro-space based on the basic function of the GIS engine submodule;
the micro-space grid management submodule carries out gridding segmentation on a micro-space plane based on the basic function of the GIS engine submodule and manages grid units formed by segmentation;
the micro-space scene management submodule performs addition, deletion and coding management on GIS space objects except for the three-dimensional model object in the micro-space based on the basic function of the GIS engine submodule;
the panoramic three-dimensional service interface submodule is used for exposing the basic functions of the GIS engine submodule and the micro-space model, grid and scene data as modular interfaces in common-format APIs.
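The gridding segmentation performed by the micro-space grid management submodule can be sketched for a rectangular floor plan; the row-column cell coding scheme is an illustrative assumption:

```python
import math

def grid_cells(width, height, cell_size):
    """Split a rectangular micro-space floor plan into coded grid cells.
    Edge cells are clipped to the plan boundary. The "RxxCyy" coding
    scheme is hypothetical, not from the patent."""
    cols = math.ceil(width / cell_size)
    rows = math.ceil(height / cell_size)
    cells = {}
    for r in range(rows):
        for c in range(cols):
            code = f"R{r:02d}C{c:02d}"
            cells[code] = {
                "x_min": c * cell_size,
                "y_min": r * cell_size,
                "x_max": min((c + 1) * cell_size, width),
                "y_max": min((r + 1) * cell_size, height),
            }
    return cells
```

Each coded cell then becomes a management unit to which perception events and GIS objects can be attached.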
8. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 1, characterized in that: the comprehensive portrait module comprises a personnel basic information management submodule, a behavior and emotion analysis submodule, a personnel portrait submodule and a micro-space portrait submodule, wherein:
the personnel basic information management submodule is used for uniformly managing the basic information of supervised personnel by aggregating personnel information from external service management systems;
the behavior and emotion analysis submodule is used for integrating the received personnel expression, behavior and voice characteristic classification information analyzed by the image deconstruction module and the audio deconstruction module, and performing comprehensive analysis by combining the micro-space environment state and the personnel basic information to obtain a behavior and emotion analysis result taking the individual of the monitored personnel as a unit;
the personnel portrait submodule is used for conducting portrait management of supervised personnel based on the behavior and emotion analysis results;
the micro-space portrait submodule is used for portraying, with a single micro-space as the unit, the group behavior, emotion and stability changes of the supervised personnel in each micro-space, treated as a whole.
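The group-level portrait of claim 8 can be sketched as an aggregation over individual scores; treating the mean as the group measure and the maximum as a worst-case indicator is an illustrative assumption:

```python
def group_portrait(individual_scores):
    """Aggregate per-person emotion/stability scores (0 = calm, 1 = agitated)
    into one group-level measure for a single micro-space. The choice of
    mean plus max is hypothetical, not specified in the patent."""
    if not individual_scores:
        return {"mean": 0.0, "worst": 0.0, "count": 0}
    return {
        "mean": sum(individual_scores) / len(individual_scores),
        "worst": max(individual_scores),
        "count": len(individual_scores),
    }
```

Tracking this aggregate over time would yield the stability-change curve the micro-space portrait submodule describes.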
9. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 8, characterized in that: the personnel portrait submodule conducts portrait management of supervised personnel in two modes, static portrait and dynamic portrait: the static portrait characterizes a person's character, emotion and behavior traits, formed from basic personnel information and long-term dynamic monitoring and analysis results; the dynamic portrait characterizes a person's real-time emotion and behavior according to ubiquitous real-time dynamic information and analysis results.
10. The micro-space-oriented intelligent supervision system for panoramic portraits of personnel behaviors according to claim 1, characterized in that: the micro-space early warning and supervision module comprises a personnel anomaly early-warning submodule, a group anomaly early-warning submodule, a prohibited item dynamic monitoring submodule and a comprehensive supervision submodule, wherein:
the personnel anomaly early-warning submodule is used for grading personnel anomaly conditions, based on the dynamic personnel portrait information of the comprehensive portrait module and adjustable thresholds, into four levels of green, orange, yellow and red, representing the four state intervals of stable, unstable, easily agitated and agitated respectively;
the group anomaly early-warning submodule is used for monitoring and analyzing group anomaly conditions with a single supervision-room micro-space as the unit, based on the dynamic information of the comprehensive portrait module and the personnel anomaly early-warning submodule, and grading them through adjustable thresholds into four levels of green, orange, yellow and red, representing the four state intervals of group stable, group unstable, group prone to unrest and group in unrest respectively;
the prohibited item dynamic monitoring submodule is used for dynamically detecting and tracking prohibited items that may be present in the micro space;
the comprehensive supervision submodule, with a three-dimensional GIS scene as the carrier, conducts comprehensive, visualized supervision of personnel and groups in the micro space, interfaces with external service management systems, and outputs relevant analysis results to those external systems.
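The threshold-adjustable four-level grading in claim 10 can be sketched as a simple interval mapping; the numeric thresholds below are illustrative and, per the claim, would be adjustable in practice:

```python
def anomaly_level(score, thresholds=(0.25, 0.5, 0.75)):
    """Map a dynamic-portrait anomaly score in [0, 1] to the claim's
    four levels (ordered as in the claim: green, orange, yellow, red).
    Threshold values are hypothetical and adjustable."""
    t1, t2, t3 = thresholds
    if score < t1:
        return "green"    # stable
    if score < t2:
        return "orange"   # unstable
    if score < t3:
        return "yellow"   # easily agitated
    return "red"          # agitated
```

The group anomaly early-warning submodule would apply the same interval mapping to a group-level score.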
CN202111665098.9A 2021-12-31 2021-12-31 Micro-space-oriented intelligent supervision system for panoramic portrait of personnel behaviors Pending CN114511817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111665098.9A CN114511817A (en) 2021-12-31 2021-12-31 Micro-space-oriented intelligent supervision system for panoramic portrait of personnel behaviors

Publications (1)

Publication Number Publication Date
CN114511817A true CN114511817A (en) 2022-05-17

Family

ID=81548548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111665098.9A Pending CN114511817A (en) 2021-12-31 2021-12-31 Micro-space-oriented intelligent supervision system for panoramic portrait of personnel behaviors

Country Status (1)

Country Link
CN (1) CN114511817A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117271905A (en) * 2023-11-21 2023-12-22 杭州小策科技有限公司 Crowd image-based lateral demand analysis method and system
CN117271905B (en) * 2023-11-21 2024-02-09 杭州小策科技有限公司 Crowd image-based lateral demand analysis method and system

Similar Documents

Publication Publication Date Title
US20210096911A1 (en) Fine granularity real-time supervision system based on edge computing
CN109858365B (en) Special crowd gathering behavior analysis method and device and electronic equipment
US10579877B2 (en) System and method for selective image processing based on type of detected object
CN108062349A (en) Video frequency monitoring method and system based on video structural data and deep learning
CN110738127A (en) Helmet identification method based on unsupervised deep learning neural network algorithm
KR101765722B1 (en) System and method of generating narrative report based on cognitive computing for recognizing, tracking, searching and predicting vehicles and person attribute objects and events
CN108564052A (en) Multi-cam dynamic human face recognition system based on MTCNN and method
CN110516529A (en) It is a kind of that detection method and system are fed based on deep learning image procossing
CN102348101A (en) Examination room intelligence monitoring system and method thereof
CN109993946A (en) A kind of monitoring alarm method, camera, terminal, server and system
CN115272037A (en) Smart city region public security management early warning method and system based on Internet of things
KR20190079047A (en) A supporting system and method that assist partial inspections of suspicious objects in cctv video streams by using multi-level object recognition technology to reduce workload of human-eye based inspectors
CN112581015B (en) Consultant quality assessment system and assessment method based on AI (advanced technology attachment) test
CN107277470A (en) A kind of network-linked management method and digitlization police service linkage management method
CN110166734A (en) A kind of Intelligence In Baogang Kindergarten monitoring method and system
CN113411542A (en) Intelligent working condition monitoring equipment
CN111079694A (en) Counter assistant job function monitoring device and method
CN115002414A (en) Monitoring method, monitoring device, server and computer readable storage medium
KR20200052418A (en) Automated Violence Detecting System based on Deep Learning
CN114187541A (en) Intelligent video analysis method and storage device for user-defined service scene
CN111754669A (en) College student management system based on face recognition technology
CN113269039A (en) On-duty personnel behavior identification method and system
CN114511817A (en) Micro-space-oriented intelligent supervision system for panoramic portrait of personnel behaviors
CN111191498A (en) Behavior recognition method and related product
WO2022114895A1 (en) System and method for providing customized content service by using image information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination