LU502463B1 - Monitoring and alarming device for recognizing dynamic face and emotion - Google Patents
Info
- Publication number
- LU502463B1 (application LU502463A)
- Authority
- LU
- Luxembourg
- Prior art keywords
- monitoring
- emotion
- recognition assembly
- alarm
- dynamic face
- Prior art date
Links
- 238000012544 monitoring process Methods 0.000 title claims abstract description 40
- 230000008451 emotion Effects 0.000 title claims abstract description 18
- 230000008909 emotion recognition Effects 0.000 claims abstract description 19
- 230000002159 abnormal effect Effects 0.000 abstract description 8
- 238000000605 extraction Methods 0.000 description 22
- 239000000284 extract Substances 0.000 description 5
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/803—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Alarm Systems (AREA)
Abstract
The present invention discloses a monitoring and alarming device for recognizing a dynamic face and an emotion, including a monitoring mechanism and an alarm mechanism. The monitoring mechanism includes a dynamic face recognition assembly, an emotion recognition assembly, a body gesture recognition assembly, and a state analysis module. The dynamic face recognition assembly, the emotion recognition assembly, and the body gesture recognition assembly are each communicated with or electrically connected to the state analysis module. The alarm mechanism is communicated with or electrically connected to the monitoring mechanism and is configured to issue an alarm. Via the state analysis module, the device receives identity information of a sample object from the dynamic face recognition assembly, an emotion category output result from the emotion recognition assembly, and an action of the sample object recognized by the body gesture recognition assembly, and analyzes a real-time state of the sample object, such as a normal state or an abnormal state, to realize real-time and automatic monitoring of a prisoner. The device alarms on an abnormal state of the prisoner via the alarm mechanism, realizing automatic alarm and timely notification of a prison guard.
Description
MONITORING AND ALARMING DEVICE FOR RECOGNIZING DYNAMIC FACE AND EMOTION
TECHNICAL FIELD The present invention belongs to the field of a safety apparatus, and relates to a monitoring and alarming device for recognizing a dynamic face and an emotion.
BACKGROUND ART A traditional monitoring apparatus in a prison is generally a camera, which only has a function of passive monitoring. The camera requires a prison guard to watch in real time, so the monitoring effect depends largely on human attention. In addition, when prisoners have conflicts or sudden illnesses, the prison guard may not detect the situation in time, resulting in more serious consequences. The present invention effectively solves this problem.
SUMMARY OF THE INVENTION In order to overcome deficiencies in the prior art, the present invention provides a monitoring and alarming device for recognizing a dynamic face and an emotion. In order to achieve the above objective, the present invention adopts the following technical solution: a monitoring and alarming device for recognizing a dynamic face and an emotion includes a monitoring mechanism and an alarm mechanism. The monitoring mechanism includes a dynamic face recognition assembly, an emotion recognition assembly, a body gesture recognition assembly, and a state analysis module. The dynamic face recognition assembly, the emotion recognition assembly, and the body gesture recognition assembly are each communicated with or electrically connected to the state analysis module. The alarm mechanism is communicated with or electrically connected to the monitoring mechanism. The alarm mechanism includes an alarm controller and an alarm device. The alarm controller is communicated with or electrically connected to the state analysis module, and the alarm controller is communicated with or electrically connected to the alarm device. Further, the monitoring mechanism also includes a rotating assembly. The rotating assembly is connected to a monitoring shell to control the monitoring shell to rotate within a set range. To sum up, the present invention has the following beneficial effects: 1) In the present invention, identity information of a sample object from the dynamic face recognition assembly, an emotion category output result of the emotion recognition assembly, and an action of the sample object recognized by the body gesture recognition assembly are received via the state analysis module. A real-time state of the sample object, such as a normal state or an abnormal state, is analyzed to realize real-time and automatic monitoring of a prisoner. 2) In the present invention, an alarm is issued for the abnormal state of the prisoner via the alarm mechanism, so as to realize functions of automatic alarm and timely notification of a prison guard.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic diagram of a device of the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS As shown in FIG. 1, a monitoring and alarming device for recognizing a dynamic face and an emotion includes a monitoring mechanism and an alarm mechanism. The monitoring mechanism includes a dynamic face recognition assembly, an emotion recognition assembly, a body gesture recognition assembly, and a state analysis module. The dynamic face recognition assembly, the emotion recognition assembly, and the body gesture recognition assembly are each communicated with or electrically connected to the state analysis module. The monitoring mechanism is configured to recognize the dynamic face, the emotion, and a body gesture of a prisoner. The alarm mechanism is communicated with or electrically connected to the monitoring mechanism. The alarm mechanism is configured to issue an alarm for an abnormal state of the prisoner.
The monitoring mechanism includes a monitoring shell, the dynamic face recognition assembly, the emotion recognition assembly, the body gesture recognition assembly, the state analysis module, and a rotating assembly. The dynamic face recognition assembly and the body gesture recognition assembly are mounted in the monitoring shell. The rotating assembly is connected to the monitoring shell to control the monitoring shell to rotate within a set range, so as to recognize the dynamic face, the emotion and the body gesture of the prisoner in a designated region.
The dynamic face recognition assembly is configured to recognize the face and the emotion of the prisoner in the designated region. The body gesture recognition assembly is configured to recognize the gesture of the prisoner in the designated region.
The dynamic face recognition assembly includes a target face acquisition module, a face feature extraction module, a face feature storage module, and a sample face acquisition module. The target face acquisition module is communicated with or electrically connected to the face feature extraction module and the face feature storage module. The face feature extraction module is communicated with or electrically connected to the face feature storage module.
The sample face acquisition module is communicated with or electrically connected to the face feature extraction module.
The target face acquisition module is configured to acquire a target face image.
When the prisoner is in prison, the target face image is input into the target face acquisition module.
The target face acquisition module transmits the target face image to the face feature storage module.
The face feature extraction module receives the target face image and extracts feature information of the target face image, which specifically includes a first resolution feature and a second resolution feature.
A first resolution corresponding to the first resolution feature is lower than a predetermined resolution.
A second resolution corresponding to the second resolution feature is higher than the predetermined resolution.
By setting the first resolution feature and the second resolution feature, a more reasonable and accurate recognition result can be obtained.
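As an illustrative sketch only (not taken from the disclosure), the following Python example shows one way a two-scale face feature could be extracted: a coarse block-average feature below the predetermined resolution and a finer gradient-based feature above it. The grid sizes (16 and 128), the threshold constant, and the function names are assumptions.

```python
import numpy as np

PREDETERMINED_RESOLUTION = 64  # assumed boundary between the coarse and fine feature scales

def block_mean(img: np.ndarray, grid: int) -> np.ndarray:
    """Average a 2-D image over a grid x grid layout of equal blocks (image >= grid pixels)."""
    h, w = img.shape
    img = img[: h - h % grid, : w - w % grid]  # trim so the blocks divide evenly
    return img.reshape(grid, img.shape[0] // grid,
                       grid, img.shape[1] // grid).mean(axis=(1, 3))

def extract_face_features(face_img: np.ndarray) -> dict:
    """Return one coarse and one fine feature vector for a grayscale face crop."""
    # First resolution feature: coarser than PREDETERMINED_RESOLUTION (overall appearance).
    first = block_mean(face_img, 16).ravel()
    # Second resolution feature: finer than PREDETERMINED_RESOLUTION, computed on the
    # gradient magnitude so it captures local facial detail.
    gy, gx = np.gradient(face_img)
    second = block_mean(np.hypot(gx, gy), 128).ravel()
    return {"first_resolution": first, "second_resolution": second}

# Example with a synthetic 256 x 256 face crop.
features = extract_face_features(np.random.rand(256, 256))
```

The two vectors can then be stored alongside the target's identity information, mirroring the face feature storage module described above.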
The face feature storage module receives the first resolution feature and the second resolution feature that are extracted by the face feature extraction module, so that the face feature storage module is built-in with the target face image, the first resolution feature and the second resolution feature of the target face image, and identity information of a target.
The identity information includes a name, an age, a prison code and other information.
The sample face acquisition module acquires the face image of the prisoner in the designated region, that is, a sample face image.
The sample face acquisition module transmits the sample face image to the face feature extraction module.
The face feature extraction module extracts the first resolution feature and the second resolution feature of the sample face image and transmits the first resolution feature and the second resolution feature to the face feature storage module.
The first resolution feature and the second resolution feature of the sample face image are compared in turn with the first resolution feature and the second resolution feature of the target face image until the difference between the features of the sample face image and the corresponding features of the target face image is within a set value.
In this way, the identity information of the prisoner (that is, the sample object) can be confirmed.
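A possible sketch of this comparison step, assuming the feature dictionaries produced by the extract_face_features() example above; the set value of 0.1 and the mean-absolute-difference metric are assumptions.

```python
import numpy as np

SET_VALUE = 0.1  # assumed maximum allowed feature difference for a match

def match_identity(sample_feats: dict, stored_targets: dict):
    """Compare the sample's two features in turn against every stored target.

    stored_targets maps identity information (e.g. a prison code) to the feature
    dictionary stored for that target's face image.  Returns the matching identity,
    or None when no target is within the set value.
    """
    for identity, target_feats in stored_targets.items():
        diff_first = np.mean(np.abs(sample_feats["first_resolution"]
                                    - target_feats["first_resolution"]))
        diff_second = np.mean(np.abs(sample_feats["second_resolution"]
                                     - target_feats["second_resolution"]))
        if diff_first <= SET_VALUE and diff_second <= SET_VALUE:
            return identity
    return None

# Example: a sample that matches the stored features of prisoner "A-1024" exactly.
target = {"first_resolution": np.zeros(256), "second_resolution": np.zeros(16384)}
print(match_identity(target, {"A-1024": target}))  # A-1024
```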
The emotion recognition assembly includes a target voice acquisition module, a voice feature extraction module, a voice feature storage module, a sample voice acquisition module, a voice text extraction module, a text storage module, and an emotion recognition module.
The target voice acquisition module is communicated with or electrically connected to the voice feature extraction module and the voice feature storage module. The voice feature extraction module is communicated with or electrically connected to the voice feature storage module. The sample voice acquisition module is communicated with or electrically connected to the voice feature extraction module. The voice text extraction module is communicated with or electrically connected to the sample voice acquisition module. The voice text extraction module is communicated with or electrically connected to the text storage module. The emotion recognition module is communicated with or electrically connected to the face feature storage module, the voice feature storage module, and the text storage module.
The target voice acquisition module is configured to acquire target voice data. When the prisoner is in prison, the target voice data are input into the target voice acquisition module. The target voice acquisition module transmits the target voice data to the voice feature storage module. The voice feature extraction module receives the target voice data and extracts feature information of the target voice. The voice feature storage module receives the feature information of the target voice extracted by the voice feature extraction module. The voice feature storage module is built-in with the target voice data, the feature information of the target voice, and the identity information of the target. The identity information includes the name, the age, the prison code, etc.
The sample voice acquisition module acquires voice data of the prisoner in the designated region, that is, sample voice data. The sample voice acquisition module transmits the sample voice data to the voice feature extraction module. The voice feature extraction module extracts feature information of the sample voice data and transmits it to the voice feature storage module. The feature information of the sample voice data and the feature information of the target voice data include a voice rate, etc. The feature information of the sample voice data is compared in turn with the feature information of the target voice data. A sample object of the sample voice acquisition module corresponds to a sample object of the sample face acquisition module, and real-time voice information of the sample object is obtained.
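One way such voice feature information could be sketched, using only an energy-based voice rate; the frame lengths, the energy threshold, and the comparison tolerance are illustrative assumptions.

```python
import numpy as np

def voice_features(samples: np.ndarray, sr: int = 16_000) -> dict:
    """Tiny voice-feature sketch: an energy-based voice rate and a mean energy value.

    samples: mono waveform as a float array in [-1, 1], assumed non-empty.
    """
    frame = int(0.025 * sr)   # 25 ms analysis frames
    hop = int(0.010 * sr)     # 10 ms hop
    n_frames = max(1, 1 + (len(samples) - frame) // hop)
    energy = np.array([np.mean(samples[i * hop:i * hop + frame] ** 2)
                       for i in range(n_frames)])
    voiced = energy > 0.1 * energy.max()                     # crude voiced / unvoiced decision
    voice_rate = float(voiced.sum() / (len(samples) / sr))   # voiced frames per second
    return {"voice_rate": voice_rate, "mean_energy": float(energy.mean())}

def same_speaker(sample: dict, target: dict, tolerance: float = 0.25) -> bool:
    """Compare sample feature information with the stored target feature information."""
    return abs(sample["voice_rate"] - target["voice_rate"]) <= tolerance * max(target["voice_rate"], 1e-6)

# Example with one second of synthetic audio.
print(voice_features(np.random.uniform(-1, 1, 16_000)))
```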
The voice text extraction module receives the sample voice data of the sample voice acquisition module and extracts text data. The text storage module is built-in with the text data. The voice text extraction module inputs the text data into the text storage module, and compares the text data with the built-in text data to obtain real-time text information of the sample object.
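A hedged sketch of the text comparison step, assuming the speech-to-text conversion has already produced a transcript and that the built-in text data are short reference phrases grouped by situation; the phrase lists and the similarity threshold are assumptions.

```python
from difflib import SequenceMatcher

# Assumed built-in text data: reference phrases the text storage module already holds,
# grouped by the kind of situation they indicate.
BUILT_IN_TEXT = {
    "distress": ["help", "i can't breathe", "call the guard"],
    "conflict": ["back off", "let go of me"],
}

def classify_text(sample_text: str, threshold: float = 0.6) -> list:
    """Compare the extracted text with the built-in text data and return matching categories."""
    sample = sample_text.lower()
    categories = []
    for category, phrases in BUILT_IN_TEXT.items():
        for phrase in phrases:
            fuzzy = SequenceMatcher(None, phrase, sample).ratio()
            if phrase in sample or fuzzy >= threshold:
                categories.append(category)
                break
    return categories

print(classify_text("please somebody call the guard"))  # ['distress']
```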
The emotion recognition module receives a comparison result of the face feature storage module, a comparison result of the voice feature storage module, and a comparison result of the text storage module to obtain an output result of an emotion category, so as to realize emotion recognition of the sample object.
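A rule-based sketch of how the three comparison results might be fused into one emotion category; the label names and voting rules are assumptions, and a trained classifier could replace them.

```python
def recognize_emotion(face_result: str, voice_result: str, text_categories: list) -> str:
    """Fuse the three comparison results into a single emotion category.

    face_result and voice_result are labels such as 'calm' or 'agitated' produced by the
    face and voice comparisons; text_categories is a list such as ['distress'].
    """
    votes = {"calm": 0, "agitated": 0, "distressed": 0}
    votes["agitated"] += int(face_result == "agitated") + int(voice_result == "agitated")
    votes["distressed"] += int("distress" in text_categories)
    votes["agitated"] += int("conflict" in text_categories)
    votes["calm"] += int(face_result == "calm" and voice_result == "calm"
                         and not text_categories)
    if not any(votes.values()):
        return "unknown"                 # no evidence from any of the three sources
    return max(votes, key=votes.get)     # category with the most supporting evidence

print(recognize_emotion("agitated", "calm", ["distress"]))  # 'agitated' (ties resolve in dict order)
```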
The body gesture recognition assembly is configured to sense a gesture of the prisoner in the designated region. The body gesture recognition assembly senses a gesture of the sample object recognized by the dynamic face recognition assembly and the emotion recognition assembly, and generates a first gesture sensing signal. The body gesture recognition assembly is built-in with action data corresponding to different gestures, and recognizes an action of the sample object via the first gesture sensing signal.
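A simple nearest-template sketch of mapping a gesture sensing signal to built-in action data; the four-dimensional pose vectors and the action names are assumptions.

```python
import numpy as np

# Assumed built-in action data: one reference pose vector (e.g. normalized joint angles)
# per action; real data would come from the gesture sensor calibration.
ACTION_TEMPLATES = {
    "standing":        np.array([0.0, 0.0, 0.0, 0.0]),
    "sitting":         np.array([0.2, 0.9, 0.9, 0.1]),
    "lying_on_ground": np.array([1.0, 1.0, 0.2, 0.2]),
    "striking":        np.array([0.1, 0.1, 1.0, 0.8]),
}

def recognize_action(gesture_signal: np.ndarray) -> str:
    """Map the first gesture sensing signal to the closest built-in action."""
    distances = {name: float(np.linalg.norm(gesture_signal - reference))
                 for name, reference in ACTION_TEMPLATES.items()}
    return min(distances, key=distances.get)

print(recognize_action(np.array([0.95, 0.9, 0.25, 0.3])))  # lying_on_ground
```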
The dynamic face recognition assembly, the emotion recognition assembly, and the body gesture recognition assembly are each communicated with or electrically connected to the state analysis module. The state analysis module receives the identity information of the sample object from the dynamic face recognition assembly, the emotion category output result of the emotion recognition assembly, and the action of the sample object recognized by the body gesture recognition assembly. A real-time state of the sample object, such as a normal state or an abnormal state (for example, the prisoner falls to the ground or has a physical conflict with another prisoner), is analyzed so as to realize real-time and automatic monitoring of the prisoner.
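A minimal sketch of the state analysis step, assuming the emotion and action labels produced by the sketches above; which labels count as abnormal is an assumption.

```python
ABNORMAL_EMOTIONS = {"agitated", "distressed"}
ABNORMAL_ACTIONS = {"lying_on_ground", "striking"}

def analyse_state(identity, emotion: str, action: str) -> dict:
    """Combine the three recognition results into one real-time state record."""
    abnormal = action in ABNORMAL_ACTIONS or emotion in ABNORMAL_EMOTIONS
    return {
        "identity": identity or "unidentified",
        "emotion": emotion,
        "action": action,
        "state": "abnormal" if abnormal else "normal",
    }

print(analyse_state("A-1024", "distressed", "lying_on_ground"))
```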
The alarm mechanism includes an alarm controller and an alarm device. The alarm controller is communicated with or electrically connected to the state analysis module. The alarm controller is communicated with or electrically connected to the alarm device. The alarm controller receives the real-time state of the sample object output by the state analysis module. If the sample object is in a normal state, the alarm device remains idle or in a closed state. If the sample object is in an abnormal state, the alarm controller controls the alarm device to issue an alarm to realize automatic alarm.
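A sketch of the alarm control logic, assuming an alarm device object exposing an activate() method; the console stand-in device and the message format are illustrative.

```python
import time

class AlarmController:
    """Receives the real-time state and keeps the alarm device idle for normal states."""

    def __init__(self, alarm_device):
        self.alarm_device = alarm_device        # any object exposing activate(message)

    def on_state(self, state_record: dict) -> None:
        if state_record["state"] == "abnormal":
            message = (f"{time.strftime('%Y-%m-%d %H:%M:%S')} "
                       f"{state_record['identity']}: {state_record['action']} / "
                       f"{state_record['emotion']}")
            self.alarm_device.activate(message)  # automatic alarm, notifying the guard
        # Normal state: do nothing, so the alarm device stays idle / closed.

class ConsoleAlarm:
    """Stand-in alarm device that simply prints the alert."""
    def activate(self, message: str) -> None:
        print("ALARM:", message)

controller = AlarmController(ConsoleAlarm())
controller.on_state({"state": "abnormal", "identity": "A-1024",
                     "action": "lying_on_ground", "emotion": "distressed"})
```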
Claims (2)
1. A monitoring and alarming device for recognizing a dynamic face and an emotion, comprising a monitoring mechanism and an alarm mechanism, wherein the monitoring mechanism comprises a dynamic face recognition assembly, an emotion recognition assembly, a body gesture recognition assembly, and a state analysis module, the dynamic face recognition assembly, the emotion recognition assembly, and the body gesture recognition assembly are communicated with or electrically connected to the state analysis module, respectively, the alarm mechanism is communicated with or electrically connected to the monitoring mechanism, the alarm mechanism comprises an alarm controller and an alarm device, the alarm controller is communicated with or electrically connected to the state analysis module, and the alarm controller is communicated with or electrically connected to the alarm device.
2. The monitoring and alarming device for recognizing the dynamic face and the emotion according to claim 1, wherein the monitoring mechanism further comprises a rotating assembly, and the rotating assembly is connected to a monitoring shell to control the monitoring shell to rotate within a set range.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
LU502463A LU502463B1 (en) | 2022-07-06 | 2022-07-06 | Monitoring and alarming device for recognizing dynamic face and emotion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
LU502463A LU502463B1 (en) | 2022-07-06 | 2022-07-06 | Monitoring and alarming device for recognizing dynamic face and emotion |
Publications (1)
Publication Number | Publication Date |
---|---|
LU502463B1 (en) | 2023-01-06
Family
ID=84817649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
LU502463A LU502463B1 (en) | 2022-07-06 | 2022-07-06 | Monitoring and alarming device for recognizing dynamic face and emotion |
Country Status (1)
Country | Link |
---|---|
LU (1) | LU502463B1 (en) |
- 2022-07-06: LU LU502463A patent/LU502463B1/en, active, IP Right Grant
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | FG | Patent granted | Effective date: 20230106 |