CN114973490A - Monitoring and early warning system based on face recognition - Google Patents
- Publication number
- CN114973490A CN114973490A CN202210579014.8A CN202210579014A CN114973490A CN 114973490 A CN114973490 A CN 114973490A CN 202210579014 A CN202210579014 A CN 202210579014A CN 114973490 A CN114973490 A CN 114973490A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/30—Individual registration on entry or exit not involving the use of a pass
- G07C9/32—Individual registration on entry or exit not involving the use of a pass in combination with an identity check
- G07C9/37—Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/22—Electrical actuation
Abstract
The invention discloses a monitoring and early warning system based on face recognition, comprising: an information acquisition module for acquiring image feature information through near-infrared face recognition technology and voiceprint feature information through the Mel-frequency cepstrum coefficient principle, and storing both in an information storage module; a feature fusion module for fusing the image feature information and the voiceprint feature information to obtain target feature information; a feature matching module for comparing the target feature information with the original feature information to obtain a verification video; and an early warning management module, connected with the feature matching module, for comparing the monitoring value of the verification video with a set monitoring threshold to generate an early warning signal. By adopting infrared recognition technology, the invention largely compensates for insufficient illumination in face recognition, and by fusing image feature information with voiceprint feature information it enables face recognition to achieve a higher recognition rate.
Description
Technical Field
The invention belongs to the field of security monitoring, and particularly relates to a monitoring and early warning system based on face recognition.
Background
With the rapid development of modern science and technology, the concept of artificial intelligence has become increasingly familiar, and face recognition is one of its most representative technologies. As research has progressed, face recognition has moved out of the laboratory and achieved great value in people's daily life and production. However, its application in surveillance video faces the challenges of many uncertain factors in unconstrained scenes: the recognized person does not actively cooperate by facing the camera, so the face appears in many different poses and may even be occluded by glasses, hats and other accessories, and changes of illumination in the monitored scene also affect recognition. Among these factors, light intensity has the greatest influence on face recognition, so the adaptability and controllability of face recognition technology with respect to the external illumination environment has become a key technical bottleneck restricting its popularization. Meanwhile, the data quality of surveillance video is limited by hardware conditions, and the low resolution of face images in video further increases the difficulty of the face recognition task. Researching, designing and realizing a face recognition system for surveillance scenes is therefore very challenging.
Disclosure of Invention
The invention aims to provide a monitoring and early warning system based on face recognition, which is used for solving the problems in the prior art.
In order to achieve the above object, the present invention provides a monitoring and early warning system based on face recognition, comprising:
the information acquisition module is used for acquiring image characteristic information and voiceprint characteristic information;
the information storage module is connected with the information acquisition module and is used for storing the image characteristic information and the voiceprint characteristic information;
the characteristic fusion module is connected with the information storage module and is used for carrying out characteristic fusion on the image characteristic information and the voiceprint characteristic information to obtain target characteristic information;
the feature matching module is connected with the feature fusion module and used for comparing the target feature information with the original feature information to obtain a verification video;
and the early warning management module is connected with the characteristic matching module and used for comparing the monitoring value of the verification video with a set monitoring threshold value to generate an early warning signal.
Preferably, the information acquisition module includes:
the image acquisition unit is used for acquiring image information through a near-infrared face recognition technology;
the image preprocessing unit is used for processing the image information through a face detection method and a deep learning method to obtain image characteristic information;
and the voiceprint acquisition unit is used for acquiring voiceprint characteristic information through a Mel frequency cepstrum coefficient characteristic principle.
Preferably, the image acquisition unit includes:
the infrared transmitting unit is used for transmitting infrared rays to the human face through the infrared camera;
the infrared receiving unit is used for receiving infrared rays reflected by the human face;
and the acquisition unit is used for acquiring the image information of the human face through the camera.
Preferably, the image preprocessing unit includes:
the modeling unit is used for establishing a local gray scale model reflecting the gray scale distribution rule of the human face target and a shape statistical model reflecting the shape change rule of the human face target;
the detection unit is used for carrying out approximate expression on the image information by utilizing the local gray scale model and the shape statistical model to obtain target image information;
and the feature extraction unit is used for extracting the features of the target image information through a deep learning algorithm to obtain image feature information.
Preferably, the feature fusion module comprises:
the normalization unit is used for normalizing the image feature information and the voiceprint feature information;
and the fusion unit is used for performing characteristic fusion on the image characteristic information and the voiceprint characteristic information through a traversal weighting method to obtain target characteristic information.
Preferably, the feature matching module comprises:
the feature matching unit is used for matching the target feature information with the original feature information; if the matching is successful, a safety signal is generated, and if the matching is unsuccessful, an outsider signal is generated;
the signal analysis unit is used for analyzing the outsider signal, generating a start-marking instruction and an end-marking instruction and sending them to the monitoring unit;
and the monitoring unit is used for generating a verification video upon receiving the start-marking instruction and the end-marking instruction.
Preferably, the early warning management module includes:
the early warning unit is used for comparing the monitoring value of the verification video with a set monitoring threshold; if the monitoring value of the verification video is greater than or equal to the set monitoring threshold, an early warning signal is generated and the corresponding verification video is marked as an early warning video;
and the early warning video storage unit is used for storing the early warning video and dispatching security personnel to review it.
The invention has the technical effects that:
(1) Existing face recognition systems are easily affected by ambient light; although illumination preprocessing algorithms can eliminate the influence of illumination to some extent, they also cause the image to lose useful information. To solve this problem, the invention adopts near-infrared recognition technology when acquiring image information, which largely compensates for insufficient illumination in face recognition, so that the acquired image has suitable, uniform brightness and suitable contrast without overexposure.
(2) The invention fuses image feature information and voiceprint feature information, so that face recognition achieves a higher recognition rate, and lays a solid foundation for applying face recognition technology in the field of intelligent security monitoring.
(3) Through information recognition and early warning management, the invention can monitor outside personnel and give early warnings in real time; the whole process is unattended and can run around the clock, and when a special situation occurs, evidence is preserved through the early warning management module, which facilitates investigation by the relevant personnel.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
FIG. 1 is a flow chart of a system in an embodiment of the invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
Example one
As shown in fig. 1, the present embodiment provides a monitoring and early warning system based on face recognition, including: the system comprises an information acquisition module, an information storage module, a feature fusion module, a feature matching module and an early warning management module.
The information acquisition module is used for acquiring image feature information and voiceprint feature information, and comprises an image acquisition unit for acquiring face image information. The key to face recognition is extracting the identity-related essential features of a subject's face data while eliminating the parts affected by non-identity factors, which typically include ambient lighting, pose, expression and accessories. Among these, lighting matters most in practical applications, and a face recognition system is usually required to adapt to different lighting environments. A typical face recognition system uses ordinary visible-light face images for recognition and is therefore susceptible to ambient light, so preprocessing algorithms are usually applied to handle illumination before recognition. Although an illumination preprocessing algorithm can eliminate the effect of illumination to some extent, it also causes the image to lose useful information.
Therefore, the invention adopts near-infrared face recognition: using an active near-infrared light source stronger than the ambient light together with a filter of the corresponding band, a face image independent of the environment can be obtained. To reduce the influence of ambient light on the face image during recognition, the intensity of the active light source must exceed that of the ambient light, but strong visible light dazzles the human eye and reduces user comfort, prolonged exposure to ultraviolet light can cause permanent damage to human skin and eyes, and imaging in the far-infrared band loses most of the surface information of the object and is generally not used for object imaging. The near-infrared band is therefore the best choice, for example 780 nm.
The image acquisition unit comprises an infrared emission unit, an infrared receiving unit and an acquisition unit. The acquisition unit adopts a color camera for acquiring face image information; the infrared emission unit adopts infrared LEDs for emitting infrared rays toward the face; the infrared receiving unit adopts an infrared camera for receiving the infrared rays reflected by the face. The infrared LED is a near-infrared light-emitting device that converts electrical energy into light energy, with the advantages of small size, low power consumption and good directivity.
Furthermore, the information acquisition module also comprises a power supply module and a display screen. The power supply module supplies power to the display screen, the infrared emission unit, the acquisition unit and the infrared receiving unit; the display screen receives the data acquired by the image acquisition unit and displays it as an image.
The transmitting lens of the infrared emission unit and the receiving lens of the infrared receiving unit are arranged beside the lens of the acquisition unit and face the same direction. While being scanned, a person can watch the image on the display screen to judge whether the face is aligned with the lenses, avoiding the large amount of time otherwise spent on alignment and thus improving recognition efficiency and accuracy.
The information acquisition module also comprises an image preprocessing unit for performing face detection on the acquired image data. First, a shape statistical model reflecting the shape variation of the face target and a local gray-scale model reflecting its gray-scale distribution are established, the local gray-scale model being obtained by training. Then the local gray-scale model is used for face searching; the searched shape is approximated with the shape statistical model while its rationality is judged, and unreasonable shapes are adjusted to ensure plausibility in the statistical sense, yielding the target image data to be detected. Finally, a deep learning algorithm extracts the image features of the target image data, which are stored in the storage module.
The information acquisition module also comprises a voiceprint acquisition unit, which collects audio information, extracts voiceprint features using Mel-frequency cepstrum coefficients (MFCC) and stores them in the storage module. A Mel triangular filter bank is constructed to simulate the auditory characteristics of the human ear, improving the recognition rate and robustness of the speech recognition system. The MFCC extraction process is as follows:
(1) Pre-emphasis. The collected speech is passed through a high-pass filter H(z) = 1 - μz^(-1), where μ is the pre-emphasis coefficient with a value between 0.9 and 1. After pre-emphasis, the high-frequency part of the speech signal is enhanced.
(2) Framing. Since a speech signal is stationary over short periods, it is segmented into frames of 20-30 ms each. To guarantee continuity between frames, a frame shift is applied during framing, i.e. an overlap region is set between adjacent frames.
(3) Windowing. A Hamming window is commonly applied to reduce edge effects and increase the continuity at both ends of each speech frame.
(4) Fast Fourier transform (FFT). The speech signal is converted from the time domain to the frequency domain; the resulting spectrogram reveals the energy distribution, so the characteristics of the speech signal can be observed more clearly.
(5) Triangular band-pass filtering. The speech spectrum is passed through a set of Mel-scale triangular filters, which smooths the spectrum, suppresses the influence of harmonics, highlights the formants of the original speech and reduces the overall amount of computation.
(6) Logarithmic energy. The logarithm of each filter output of step (5) is taken to obtain the log energy spectrum.
(7) Discrete cosine transform (DCT). Applying the DCT to the log energies converts the signal from the frequency domain into the cepstral domain, yielding the MFCC coefficients, i.e. the MFCC features.
(8) Dynamic difference parameters. Besides the static characteristics reflected by the MFCC, a speech signal contains dynamic characteristics, which can be described by difference spectra of the static features; first-order and second-order differences are commonly used to reflect them.
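The eight-step pipeline above can be sketched in Python with NumPy. This is an illustrative implementation, not the patented system's code: the sample rate, frame length, filter count and pre-emphasis coefficient are typical values assumed for the example, and the dynamic-difference step (8) is omitted for brevity.

```python
import numpy as np

def mfcc(signal, sr=16000, frame_ms=25, shift_ms=10,
         n_filters=26, n_ceps=13, mu=0.97, nfft=512):
    """Steps (1)-(7) of the MFCC extraction process; step (8) (deltas) omitted."""
    # (1) Pre-emphasis: y[n] = x[n] - mu*x[n-1], i.e. H(z) = 1 - mu*z^-1
    sig = np.append(signal[0], signal[1:] - mu * signal[:-1])
    # (2) Framing with a frame shift so adjacent frames overlap
    flen, fshift = sr * frame_ms // 1000, sr * shift_ms // 1000
    n_frames = 1 + (len(sig) - flen) // fshift
    frames = np.stack([sig[i * fshift: i * fshift + flen] for i in range(n_frames)])
    # (3) Hamming window to smooth frame edges
    frames = frames * np.hamming(flen)
    # (4) FFT -> power spectrum (zero-padded to nfft points)
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft
    # (5) Mel-scale triangular band-pass filter bank
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    inv = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    edges = inv(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((nfft + 1) * edges / sr).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for j in range(1, n_filters + 1):
        l, c, r = bins[j - 1], bins[j], bins[j + 1]
        fbank[j - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[j - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # (6) Logarithm of each filter-bank output -> log energy spectrum
    log_e = np.log(np.maximum(power @ fbank.T, 1e-10))
    # (7) DCT-II of the log energies -> first n_ceps MFCC coefficients
    k, n = np.arange(n_ceps)[:, None], np.arange(n_filters)[None, :]
    dct = np.cos(np.pi * k * (2 * n + 1) / (2 * n_filters))
    return log_e @ dct.T
```

For a one-second 16 kHz signal this yields 98 frames of 13 coefficients each; a real system would append the first- and second-order differences of step (8) to each frame.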
The feature fusion module performs feature fusion on the image feature information and the voiceprint feature information. Image feature extraction and voiceprint feature extraction are two relatively independent processes that use different methods and belong to different biometric modalities, so normalization must be introduced before the two kinds of features are fused, bringing their feature vectors into the same range, which benefits the subsequent comprehensive analysis.
In this embodiment, z-score normalization is adopted, which standardizes the data using its mean and standard deviation, as shown in formula (1):

x_new = (x - μ) / σ    (1)

where x is the matrix composed of the face (or voiceprint) feature vectors, μ is the mean of the matrix, σ is its standard deviation, and x_new is the normalized data. After normalization, the face features and voiceprint features are integrated into a consistent interval.
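Formula (1) corresponds to the following sketch; the feature matrices are hypothetical values invented for the example to show two modalities on very different scales being brought into one range.

```python
import numpy as np

def z_score(x):
    """z-score normalization: x_new = (x - mu) / sigma, applied per feature column."""
    mu, sigma = x.mean(axis=0), x.std(axis=0)
    return (x - mu) / np.where(sigma == 0, 1, sigma)  # guard against zero variance

# Hypothetical face and voiceprint feature matrices with very different scales
face = np.array([[120.0, 80.0], [130.0, 90.0], [110.0, 70.0]])
voice = np.array([[0.02, 0.5], [0.05, 0.9], [0.03, 0.1]])
face_n, voice_n = z_score(face), z_score(voice)
# After normalization both feature sets have zero mean and unit variance per column
```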
In this embodiment, a traversal weighting method is adopted to fuse the face and voiceprint features, and the weights are determined by comparing the recognition rate of each weight combination. The weights of the face and the voiceprint always sum to 1 and vary only between 0.1 and 0.9 with a step of 0.1, as shown in formula (2):

w_f + w_s = 1,  w_f = 0.1, 0.2, …, 0.9    (2)

where w_f is the weight of the face and w_s is the weight of the voiceprint. The two weights change in opposite directions: as w_f goes from 0.1 to 0.9, w_s goes from 0.9 down to 0.1. When a weight a is selected for the face feature vectors, the face features of all categories are weighted by a and all voiceprint features by 1 - a.
As the weights are traversed from 0.1 to 0.9, the recognition rate of each weight combination is calculated as:

R = ((L - LR) + (F - FR)) / (L + F)

where R is the system recognition rate; L and F are the total numbers of attempts by legitimate users and illegitimate persons, respectively; and LR and FR are the numbers of false rejections and false acceptances, respectively. The weight that maximizes R is selected as the optimal combination and used as the final weight for the face and voiceprint features.
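The weight traversal of formula (2) together with the recognition-rate criterion can be sketched as a simple grid search. The score arrays, labels and acceptance threshold below are hypothetical stand-ins for real matcher outputs, not part of the patented system.

```python
import numpy as np

def best_fusion_weight(face_scores, voice_scores, labels, threshold=0.5):
    """Traverse w_f = 0.1 ... 0.9 (with w_s = 1 - w_f) and keep the weight
    giving the highest recognition rate R = correct decisions / total attempts."""
    best_wf, best_r = 0.0, -1.0
    for wf in np.round(np.arange(0.1, 1.0, 0.1), 1):
        fused = wf * face_scores + (1.0 - wf) * voice_scores
        decisions = fused >= threshold       # accept iff fused score clears threshold
        r = np.mean(decisions == labels)     # both false rejects and false accepts count as errors
        if r > best_r:
            best_wf, best_r = wf, r
    return best_wf, best_r

# Hypothetical match scores: the face modality is discriminative here, the voice is noisy
face_s = np.array([0.9, 0.8, 0.2, 0.1])
voice_s = np.array([0.4, 0.6, 0.6, 0.4])
labels = np.array([True, True, False, False])   # True = legitimate user
wf, r = best_fusion_weight(face_s, voice_s, labels)
```

With these made-up scores the search settles on a face-dominated weight, illustrating how the traversal automatically favors the more reliable modality.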
The original feature information is obtained by a deep learning method, using massive image and video resources as its foundation. The fused feature information is input to the feature matching module and compared with the original feature information base. The feature matching module comprises a feature matching unit, a signal analysis unit and a monitoring unit. The feature matching unit compares the fused feature information with the data in the original feature information base and checks whether they match; if the matching succeeds, a safety signal is generated, and if it fails, an outsider signal is generated. The signal analysis unit analyzes the outsider signal, generates a start-marking instruction and an end-marking instruction, and sends them to the monitoring unit. Upon receiving the start-marking instruction, the monitoring unit marks the video captured by the monitoring camera, and stops marking upon receiving the end-marking instruction; the video between the start and end marks is marked as a verification video, unmarked video is marked as normal video, and the verification video is further analyzed to obtain its monitoring value.
The monitoring unit adopts an intelligent face recognition video monitoring and storage device. Its hardware core is an efficient, high-speed embedded digital signal processor (DSP); its software core uses optimized H.264 encoding/decoding and an intelligent face recognition algorithm based on the Gabor wavelet transform, principal component analysis (PCA) and a support vector machine (SVM) classifier. This DSP-based intelligent face recognition video monitoring and storage system is powerful, easy to install and debug, and suitable for security video monitoring in various places.
The early warning management module is connected with the feature matching module and compares the monitoring value of the verification video with the set monitoring threshold to generate an early warning signal. The early warning unit performs the comparison; if the monitoring value of the verification video is greater than or equal to the set monitoring threshold, an early warning signal is generated, the corresponding verification video is marked as an early warning video, and the early warning video is transmitted to the early warning video storage unit for convenient review by security personnel.
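The threshold comparison performed by the early warning unit amounts to a simple filter over the verification videos. The function name and data below are a hypothetical sketch of that logic, not the patented implementation.

```python
def early_warning(verification_videos, threshold):
    """Return the IDs of verification videos whose monitoring value reaches the
    set monitoring threshold; these are the ones marked as early warning videos."""
    return [vid for vid, value in verification_videos if value >= threshold]

# Hypothetical monitoring values for three verification videos
alerts = early_warning([("v1", 0.82), ("v2", 0.31), ("v3", 0.55)], threshold=0.5)
# alerts lists the videos to store and dispatch security personnel to review
```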
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (7)
1. A monitoring and early warning system based on face recognition, characterized by comprising:
the information acquisition module is used for acquiring image characteristic information and voiceprint characteristic information;
the information storage module is connected with the information acquisition module and used for storing the image characteristic information and the voiceprint characteristic information;
the characteristic fusion module is connected with the information storage module and is used for carrying out characteristic fusion on the image characteristic information and the voiceprint characteristic information to obtain target characteristic information;
the feature matching module is connected with the feature fusion module and used for comparing the target feature information with the original feature information to obtain a verification video;
and the early warning management module is connected with the characteristic matching module and used for comparing the monitoring value of the verification video with a set monitoring threshold value to generate an early warning signal.
2. The monitoring and early warning system based on face recognition, wherein the information acquisition module comprises:
the image acquisition unit, used for acquiring image information through near-infrared face recognition technology;
the image preprocessing unit, used for processing the image information through face detection and deep learning methods to obtain image feature information;
and the voiceprint acquisition unit, used for acquiring voiceprint feature information based on Mel-frequency cepstral coefficient (MFCC) features.
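The MFCC-based voiceprint feature extraction named in claim 2 can be sketched in a minimal single-frame form. This is illustrative only (not the patent's implementation); the sample rate, frame size, filter count, and test signal are all assumptions, and a real system would process many overlapping frames.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: one windowed frame -> cepstral coefficients."""
    frame = signal[:n_fft] * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frame)) ** 2 / n_fft
    # Triangular mel filterbank spanning 0 .. sr/2
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    log_energy = np.log(fbank @ power + 1e-10)
    # DCT-II decorrelates the log filterbank energies into cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ log_energy

# One second of a 440 Hz tone as a stand-in for a speech sample.
t = np.linspace(0, 1, 16000, endpoint=False)
coeffs = mfcc(np.sin(2 * np.pi * 440 * t))
```

The resulting 13-dimensional vector is the kind of voiceprint feature the voiceprint acquisition unit would store per frame.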
3. The monitoring and early warning system based on face recognition, wherein the image acquisition unit comprises:
the infrared emitting unit, used for emitting infrared light toward the face through the infrared camera;
the infrared receiving unit, used for receiving the infrared light reflected by the face;
and the acquisition unit, used for acquiring image information of the face through the camera.
4. The monitoring and early warning system based on face recognition, wherein the image preprocessing unit comprises:
the modeling unit, used for establishing a local gray-scale model reflecting the gray-scale distribution of the face target and a shape statistical model reflecting the shape variation of the face target;
the detection unit, used for approximating the image information with the local gray-scale model and the shape statistical model to obtain target image information;
and the feature extraction unit, used for extracting features from the target image information through a deep learning algorithm to obtain image feature information.
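A shape statistical model of the kind claim 4 describes is commonly built by PCA over aligned landmark sets (as in Active Shape Models). The sketch below is an assumption about the technique, not the patent's method; the landmark count, synthetic training shapes, and number of retained modes are all illustrative.

```python
import numpy as np

# Hypothetical training data: 50 faces, each with 5 (x, y) landmarks,
# flattened to 10-dim shape vectors (assumed already aligned).
rng = np.random.default_rng(0)
mean_shape = np.array([10, 10, 30, 10, 20, 20, 12, 30, 28, 30], float)
shapes = mean_shape + rng.normal(0, 1.0, size=(50, 10))

# Shape statistical model: mean shape plus principal modes of variation.
mu = shapes.mean(axis=0)
centered = shapes - mu
cov = centered.T @ centered / (len(shapes) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
modes = eigvecs[:, order[:3]]  # keep the top 3 modes of shape variation

# A new shape is approximated as mu + modes @ b, where b are mode weights;
# this is the "approximate expression" step the detection unit performs.
b = modes.T @ (shapes[0] - mu)
approx = mu + modes @ b
```

Because the projection discards only off-subspace variation, the approximation is always at least as close to the input shape as the bare mean shape is.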
5. The monitoring and early warning system based on face recognition as claimed in claim 1, wherein the feature fusion module comprises:
the normalization unit, used for normalizing the image feature information and the voiceprint feature information;
and the fusion unit, used for fusing the image feature information and the voiceprint feature information through a traversal weighting method to obtain target feature information.
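One plausible reading of "traversal weighting" in claim 5 is a grid scan over candidate modality weights, keeping the weight that best separates a genuine pair from an impostor pair. That interpretation, and all the names, dimensions, and synthetic features below, are assumptions for illustration.

```python
import numpy as np

def zscore(v):
    # Normalization step: zero mean, unit variance per modality.
    return (v - v.mean()) / (v.std() + 1e-10)

def fuse(img_feat, voice_feat, w):
    # Weighted concatenation of the two normalized modalities.
    return np.concatenate([w * img_feat, (1.0 - w) * voice_feat])

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-10))

rng = np.random.default_rng(1)
img_a, voice_a = zscore(rng.normal(size=64)), zscore(rng.normal(size=32))
img_b = zscore(img_a + rng.normal(scale=0.1, size=64))      # same person, noisy
voice_b = zscore(voice_a + rng.normal(scale=0.1, size=32))
img_c, voice_c = zscore(rng.normal(size=64)), zscore(rng.normal(size=32))  # impostor

# Traversal weighting: scan candidate weights, keep the one with the
# largest margin between genuine-pair and impostor-pair similarity.
best_w, best_margin = None, -np.inf
for w in np.linspace(0.1, 0.9, 9):
    genuine = cos(fuse(img_a, voice_a, w), fuse(img_b, voice_b, w))
    impostor = cos(fuse(img_a, voice_a, w), fuse(img_c, voice_c, w))
    if genuine - impostor > best_margin:
        best_w, best_margin = w, genuine - impostor

target = fuse(img_a, voice_a, best_w)  # fused target feature information
```

The fused vector `target` plays the role of the target feature information handed to the feature matching module.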
6. The monitoring and early warning system based on face recognition, wherein the feature matching module comprises:
the feature matching unit, used for matching the target feature information against the original feature information; if the match succeeds, a safety signal is generated; if the match fails, an external signal is generated;
the signal analysis unit, used for analyzing the external signal, generating a start marking instruction and an end marking instruction, and sending them to the monitoring unit;
and the monitoring unit, used for generating a verification video upon receiving the start marking instruction and the end marking instruction.
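The safety-versus-external decision and the start/end marking instructions of claim 6 can be sketched as below. The similarity measure (cosine), the match threshold, and the fixed clip length are assumptions; the patent does not specify them.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed similarity threshold

def match(target, enrolled, threshold=MATCH_THRESHOLD):
    """Compare the fused target features against each enrolled template;
    return 'safety' on a hit, otherwise 'external'."""
    for template in enrolled:
        sim = float(target @ template /
                    (np.linalg.norm(target) * np.linalg.norm(template) + 1e-10))
        if sim >= threshold:
            return "safety"
    return "external"

def handle(signal, t_now, clip_len=10.0):
    # On an external signal, emit start/end marking instructions so the
    # monitoring unit can cut a verification video around time t_now.
    if signal == "external":
        return {"start_mark": t_now, "end_mark": t_now + clip_len}
    return None  # safety signal: no verification video needed

enrolled = [np.ones(8)]          # hypothetical enrolled template
known = np.ones(8) + 0.01        # near-duplicate of the template
stranger = -np.ones(8)           # dissimilar feature vector
```

Only the external branch produces marking instructions, which is why a matched (safety) subject never generates a verification video.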
7. The monitoring and early warning system based on face recognition, wherein the early warning management module comprises:
the early warning unit, used for comparing the monitoring value of the verification video with a set monitoring threshold; if the monitoring value is greater than or equal to the threshold, an early warning signal is generated and the corresponding verification video is marked as an early warning video;
and the early warning video storage unit, used for storing the early warning video and dispatching security personnel to review it.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210579014.8A CN114973490A (en) | 2022-05-26 | 2022-05-26 | Monitoring and early warning system based on face recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114973490A true CN114973490A (en) | 2022-08-30 |
Family
ID=82956244
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210579014.8A Pending CN114973490A (en) | 2022-05-26 | 2022-05-26 | Monitoring and early warning system based on face recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114973490A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116563797A (en) * | 2023-07-10 | 2023-08-08 | 安徽网谷智能技术有限公司 | Monitoring management system for intelligent campus |
CN116563797B (en) * | 2023-07-10 | 2023-10-27 | 安徽网谷智能技术有限公司 | Monitoring management system for intelligent campus |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034288A (en) * | 2010-12-09 | 2011-04-27 | 江南大学 | Multiple biological characteristic identification-based intelligent door control system |
CN106127156A (en) * | 2016-06-27 | 2016-11-16 | 上海元趣信息技术有限公司 | Robot interactive method based on vocal print and recognition of face |
CN109614880A (en) * | 2018-11-19 | 2019-04-12 | 国家电网有限公司 | A kind of multi-modal biological characteristic fusion method and device |
CN110108704A (en) * | 2019-05-10 | 2019-08-09 | 合肥学院 | A kind of automatic monitoring and pre-alarming method of cyanobacteria and its automatic monitoring and alarming system |
CN111311809A (en) * | 2020-02-21 | 2020-06-19 | 南京理工大学 | Intelligent access control system based on multi-biological-feature fusion |
CN111611977A (en) * | 2020-06-05 | 2020-09-01 | 吉林求是光谱数据科技有限公司 | Face recognition monitoring system and recognition method based on spectrum and multiband fusion |
CN113158727A (en) * | 2020-12-31 | 2021-07-23 | 长春理工大学 | Bimodal fusion emotion recognition method based on video and voice information |
CN113343860A (en) * | 2021-06-10 | 2021-09-03 | 南京工业大学 | Bimodal fusion emotion recognition method based on video image and voice |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210034864A1 (en) | Iris liveness detection for mobile devices | |
CN108470169A (en) | Face identification system and method | |
WO2019210796A1 (en) | Speech recognition method and apparatus, storage medium, and electronic device | |
CN110705392A (en) | Face image detection method and device and storage medium | |
CN111563422B (en) | Service evaluation acquisition method and device based on bimodal emotion recognition network | |
TWI318108B (en) | A real-time face detection under complex backgrounds | |
JP2002269564A (en) | Iris recognition method using daubechies wavelet transform | |
CN104239766A (en) | Video and audio based identity authentication method and system for nuclear power plants | |
KR101937323B1 (en) | System for generating signcription of wireless mobie communication | |
CN112069891B (en) | Deep fake face identification method based on illumination characteristics | |
CN208351494U (en) | Face identification system | |
CN112651319B (en) | Video detection method and device, electronic equipment and storage medium | |
CN109325462A (en) | Recognition of face biopsy method and device based on iris | |
CN114022726A (en) | Personnel and vehicle monitoring method and system based on capsule network | |
CN114973490A (en) | Monitoring and early warning system based on face recognition | |
CN115035052A (en) | Forged face-changing image detection method and system based on identity difference quantification | |
KR20200119425A (en) | Apparatus and method for domain adaptation-based object recognition | |
CN108073873A (en) | Human face detection and tracing system based on high-definition intelligent video camera | |
CN116883900A (en) | Video authenticity identification method and system based on multidimensional biological characteristics | |
CN111880848A (en) | Switching method and device of operating system, terminal and readable storage medium | |
CN110135362A (en) | A kind of fast face recognition method based under infrared camera | |
KR102463353B1 (en) | Apparatus and method for detecting fake faces | |
Chetty et al. | Multimedia sensor fusion for retrieving identity in biometric access control systems | |
CN114647829A (en) | Identity authentication method and device, storage medium and electronic equipment | |
CN114170662A (en) | Face recognition method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20220830 |