CN109993020B - Human face distribution alarm method and device - Google Patents

Human face distribution alarm method and device

Info

Publication number
CN109993020B
CN109993020B (application CN201711469543.8A)
Authority
CN
China
Prior art keywords
feature
face
similarity
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711469543.8A
Other languages
Chinese (zh)
Other versions
CN109993020A (en
Inventor
莫耀奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Yushi Intelligent Technology Co.,Ltd.
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201711469543.8A priority Critical patent/CN109993020B/en
Publication of CN109993020A publication Critical patent/CN109993020A/en
Application granted granted Critical
Publication of CN109993020B publication Critical patent/CN109993020B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • G06Q50/265Personal security, identity or safety
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Abstract

The invention provides a face deployment-and-control alarm method and device, applied to a server. The server prestores a plurality of different feature models and the face features of a target deployed person under each feature model. The method comprises the following steps: acquiring a person image to be recognized, and performing feature extraction on it through each feature model to obtain the corresponding face features; comparing each face feature of the person image with the corresponding face feature of the target deployed person to obtain the feature similarity between the two under each feature model; and comparing the feature similarity under each feature model with a preset similarity threshold to obtain each target feature similarity that is not less than the threshold, then raising a deployment alarm based on the distribution among those target feature similarities. The method consumes little manpower, has a wide application range, and provides a face deployment alarm service with a low false-alarm rate, high precision, and high efficiency.

Description

Human face distribution alarm method and device
Technical Field
The invention relates to the technical field of face recognition, in particular to a face control alarm method and device.
Background
With the continuous development of science and technology, face recognition is being applied ever more widely, and face recognition deployment-and-control (watch-list surveillance) is an important branch of it. Existing face deployment-and-control technology directly compares imported face photos with the face photo of a target person and raises an alarm when the photo similarity obtained by the comparison exceeds a threshold, thereby keeping the target person under surveillance.
However, this kind of face deployment technology issues incorrect alarms (false alarms) when the quality of the imported photos is low. Existing technology therefore usually performs photo-quality detection on each imported face photo before comparison, removes face photos whose quality score is below a score threshold, and compares only the remaining photos, thereby reducing the false-alarm rate. Determining an accurate score threshold, however, requires repeated manual analysis of the false-alarm cases arising across many deployments. This process consumes significant human resources and analysis time, and the resulting score threshold is not very accurate, so both the efficiency and the accuracy of the deployment alarm are low.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide a human face deployment and control alarm method and a human face deployment and control alarm device.
As for the method, a preferred embodiment of the present invention provides a method for alarming for human face deployment, where the method is applied to a server, and the server prestores a plurality of different feature models for performing face feature extraction, and face features corresponding to target deployment characters under each feature model, and the method includes:
acquiring a figure image to be recognized, and performing feature extraction on the figure image to be recognized through each feature model to obtain corresponding face features of the figure image to be recognized under each feature model;
comparing each face feature of the figure image to be recognized with the corresponding face feature of the target control figure respectively to obtain feature similarity between the face feature of the figure image to be recognized and the face feature corresponding to the target control figure under different feature models;
and comparing the feature similarity under each feature model with a preset similarity threshold, respectively, to obtain each target feature similarity that is not less than the preset similarity threshold, and raising a deployment alarm based on the distribution among those target feature similarities.
The method obtains, through a plurality of different feature models, the face features of the current person image to be recognized under each feature model, and compares each of those face features one by one with the face features of the target deployed person under the corresponding model to obtain the feature similarities between them under the different feature models. It then raises a deployment alarm according to the distribution among the target feature similarities whose values are not less than the preset similarity threshold. This reduces human resource consumption, lowers the false-alarm rate, and provides a face deployment alarm service with high alarm precision and high alarm efficiency.
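The three steps above can be sketched end-to-end. This is a minimal illustration under stated assumptions, not the patent's implementation: the feature models, the similarity measure, and the distribution-based decision rule are all left as pluggable placeholders.

```python
from typing import Callable, List, Sequence


def deployment_alarm(
    image,
    models: Sequence[Callable],    # each model maps an image to one face feature
    target_features: Sequence,     # pre-stored features of the deployed person,
                                   # one per model, in the same order as `models`
    similarity: Callable,          # (feature_a, feature_b) -> similarity score
    sim_threshold: float,          # the preset similarity threshold
    decide: Callable,              # decision rule over the target similarities
) -> bool:
    # Step 1: extract one face feature per feature model.
    features = [m(image) for m in models]
    # Step 2: compare model-by-model against the deployed person's features.
    sims = [similarity(f, t) for f, t in zip(features, target_features)]
    # Step 3: keep only similarities at or above the threshold (the "target
    # feature similarities") and alarm based on their distribution.
    targets = [s for s in sims if s >= sim_threshold]
    return decide(targets)
```

Any concrete system would substitute real feature extractors and a real distribution test for the placeholders.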
In terms of an apparatus, a preferred embodiment of the present invention provides a face-controlled alarm apparatus, where the apparatus is applied to a server, and the server prestores a plurality of different feature models for performing face feature extraction, and face features corresponding to target-controlled characters under each feature model, and the apparatus includes:
the characteristic extraction module is used for acquiring a figure image to be recognized and extracting the characteristics of the figure image to be recognized through each characteristic model to obtain the corresponding face characteristics of the figure image to be recognized under each characteristic model;
the characteristic comparison module is used for comparing each face characteristic of the figure image to be recognized with the corresponding face characteristic of the target control figure respectively to obtain the characteristic similarity between the face characteristic of the figure image to be recognized and the face characteristic corresponding to the target control figure under different characteristic models;
and the control alarm module is used for comparing the feature similarity under each feature model with a preset similarity threshold respectively to obtain the feature similarity of each target which is not less than the preset similarity threshold, and carrying out control alarm based on the distribution condition among the feature similarities of each target.
Compared with the prior art, the face deployment-and-control alarm method and device provided by the preferred embodiments of the invention have the following beneficial effects: the method consumes few human resources, has a wide application range, reduces the false-alarm rate, and provides a face deployment alarm service with high alarm precision and efficiency. The method is applied to a server that prestores a plurality of different feature models for extracting face features and the corresponding face features of the target deployed person under each feature model. First, the method acquires a person image to be recognized and performs feature extraction on it through each feature model to obtain its corresponding face features under each model. Then, it compares each face feature of the person image with the corresponding face feature of the target deployed person to obtain the feature similarities between the two under the different feature models. Finally, it compares the feature similarity under each feature model with a preset similarity threshold to obtain each target feature similarity not smaller than the threshold, and raises a deployment alarm based on the distribution among those target feature similarities. Comparing multiple face features of the person image with the corresponding features of the target deployed person widens the method's application range, and alarming according to the distribution among the qualifying target feature similarities yields high alarm precision and efficiency while reducing the false-alarm rate.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments are briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the claims of the present invention, and it is obvious for those skilled in the art that other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a block diagram of a server according to a preferred embodiment of the present invention.
Fig. 2 is a schematic flow chart of a face alarm deployment method according to a preferred embodiment of the present invention.
Fig. 3 is a flowchart illustrating the sub-steps included in step S210 shown in fig. 2.
Fig. 4 is a flowchart illustrating the sub-steps included in step S230 shown in fig. 2.
Fig. 5 is another schematic flow chart of a face alarm deployment method according to a preferred embodiment of the present invention.
Fig. 6 is a block diagram of the face alarm apparatus shown in fig. 1 according to a preferred embodiment of the present invention.
FIG. 7 is a block diagram of the deployment alarm module shown in FIG. 6.
Fig. 8 is another block diagram of the face alarm apparatus shown in fig. 1 according to the preferred embodiment of the present invention.
Icon: 10-a server; 11-a memory; 12-a processor; 13-a communication unit; 100-a human face control alarm device; 110-a feature extraction module; 120-a feature comparison module; 130-deploying and controlling an alarm module; 131-a number statistics submodule; 132-a normalization alarm sub-module; 140-configuration module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
How to provide a face deployment-and-control alarm method and device with low human resource consumption, a wide application range, a low false-alarm rate, high alarm precision, and high alarm efficiency is a technical problem that urgently needs to be solved by those skilled in the art.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a block diagram of a server 10 according to a preferred embodiment of the present invention. In the embodiment of the present invention, the server 10 is configured to perform face recognition and deployment on a target deployed person, and send an alarm signal when it is determined that an image of a person to be recognized matches the target deployed person, where the image of the person to be recognized is an image photo acquired by the server 10 and needing deployment and control comparison. In this embodiment, the server 10 may be, but is not limited to, a cloud server, a cluster server, a distribution server, and the like.
In this embodiment, the server 10 includes a face alarm device 100, a memory 11, a processor 12 and a communication unit 13. The memory 11, the processor 12 and the communication unit 13 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
In this embodiment, the Memory 11 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 11 may store a plurality of different feature models for extracting face features, where each feature model excels at a different feature-extraction direction, so the face features of the same face image under each feature model may differ. For example, some feature models are good at extracting features from color photos, some when the illumination intensity is strong, some from the frontal angle of the face, some from the side angle, some when light transmittance is high, and some when an interfering object is present in the photo; accordingly, the face features extracted from the same face photo under each feature model may differ from one another. The memory 11 may also store the face features of target deployed persons under each feature model. There may be multiple target deployed persons; the face features of the same target deployed person under the various feature models may be stored in association with their corresponding feature models, or the face features may be stored in association with the different target deployed persons.
In this embodiment, the memory 11 is further configured to store a program, and the processor 12 executes the program after receiving the execution instruction.
In this embodiment, the processor 12 may be an integrated circuit chip having signal processing capabilities. The Processor 12 may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), and the like. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In this embodiment, the communication unit 13 is configured to establish a communication connection between the server 10 and other external devices through a wireless network or a wired network, and perform data transmission through the wireless network or the wired network. The server 10 may obtain the image of the person to be identified from an external Device through the wireless network or the wired network, wherein the external Device may be, but is not limited to, a monitoring Device, a tablet computer, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), and the like. Optionally, the external device is a face capture machine included in the monitoring device.
The face deployment alarm device 100 includes at least one software functional module that can be stored in the memory 11 in the form of software or firmware. The processor 12 executes the executable modules stored in the memory 11 that correspond to the device 100, such as its software functional modules and computer programs. In this embodiment, the device 100 may obtain, through the plurality of different feature models, the face features of the person image to be recognized under each feature model, and compare each of them one by one with the face features of the target deployed person under the corresponding model to obtain the feature similarities between the two under the different feature models, so as to determine, based on those feature similarities, whether to raise an alarm for the current person image with high alarm accuracy and efficiency.
It is to be understood that the block diagram shown in fig. 1 is merely a schematic diagram of one structural component of the server 10, and that the server 10 may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Fig. 2 is a schematic flow chart of a face monitoring alarm method according to a preferred embodiment of the present invention. In the embodiment of the present invention, the face deployment alarm method is applied to the server 10, and is used for performing a face deployment alarm service with low deployment false alarm rate, high alarm accuracy, and high alarm efficiency for a target deployment figure, where the server 10 prestores a plurality of different feature models for performing face feature extraction, and face features corresponding to the target deployment figure under each feature model. The following explains the specific flow and steps of the human face control alarm method shown in fig. 2 in detail.
In the embodiment of the invention, the human face control alarm method comprises the following steps:
step S210, obtaining the figure image to be recognized, and performing feature extraction on the figure image to be recognized through each feature model to obtain the corresponding face features of the figure image to be recognized under each feature model.
In this embodiment, the server 10 may acquire a person image to be recognized from a communicatively connected external device through a wireless or wired network, and perform feature extraction on it with the plurality of feature models pre-stored in the server 10 to obtain its corresponding face features under each feature model. The feature-extraction directions that different feature models excel at may be different expressions of the same reference parameter: for example, one model excels at extracting face features from black-and-white photos and another from color photos, and both directions relate to the same reference parameter, namely photo color. They may instead correspond to different reference parameters: for example, one model excels at extracting face features from a frontal view while another excels when the illumination intensity is strong; their reference parameters are extraction angle and illumination intensity, respectively.
Optionally, please refer to fig. 3, which is a flowchart illustrating the sub-steps included in step S210 shown in fig. 2. In this embodiment, the step of extracting features of the to-be-recognized personal image through each feature model in the step S210 to obtain the corresponding face features of the to-be-recognized personal image under each feature model may include the substeps S211 and the substep S212:
and a substep S211 of performing image extraction on the face region in the figure image to be recognized to obtain a corresponding face image in the figure image to be recognized.
In this embodiment, the server 10 may determine the face region in the person image to be recognized by performing image recognition on the acquired image, where the face region is the area of the image in which a face is present. The server 10 then obtains the corresponding face image by extracting the image units within that face region. If the person image contains multiple face regions, the server 10 may, under operator control, select one of them as the face region for the current deployment alarm.
And a substep S212, respectively extracting the features of the face image based on each feature model to obtain the face features matched with the corresponding feature models in the face image.
In this embodiment, the server 10 extracts, from the face image, the face features under the corresponding feature models through the feature models, and performs association storage on the extracted face features belonging to one face image.
Step S220, comparing each face feature of the to-be-identified person image with the corresponding face feature of the target control person respectively to obtain feature similarity between the face feature of the to-be-identified person image and the face feature corresponding to the target control person under different feature models.
In this embodiment, the server 10 obtains the feature similarity between the face features of the person image to be recognized and the corresponding face features of the target deployed person under each feature model by comparing the two sets of features under the same feature model. The face features of the target deployed person pre-stored under each feature model may themselves have been obtained and stored by the server 10 by extracting features from a face image of the target deployed person through each feature model.
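The patent does not fix a similarity measure; as an assumption, cosine similarity over feature vectors, a common choice for face features, is used in this sketch of the model-by-model comparison:

```python
import math
from typing import List, Sequence


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def per_model_similarities(
    query_feats: Sequence[Sequence[float]],
    target_feats: Sequence[Sequence[float]],
) -> List[float]:
    # Compare features model-by-model: index i on both sides comes from the
    # same feature model, as the comparison step in the text requires.
    return [cosine_similarity(q, t) for q, t in zip(query_feats, target_feats)]
```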
And step S230, comparing the feature similarity under each feature model with a preset similarity threshold respectively to obtain the feature similarity of each target not less than the preset similarity threshold, and performing control alarm based on the distribution condition among the feature similarities of each target.
In this embodiment, the preset similarity threshold is used to judge the degree of match between the person image to be recognized and the target deployed person under the corresponding feature model. If the feature similarity under a feature model is greater than or equal to the preset similarity threshold, the server 10 may determine that the person image closely matches the target deployed person under that model, and that feature similarity becomes a target feature similarity. By comparing the feature similarity under each feature model with the preset similarity threshold, the server 10 obtains every target feature similarity whose value is not less than the threshold, and raises a deployment alarm based on the distribution among those target feature similarities. The preset similarity threshold may be, for example, 85%, 88%, or 90%, and may be set differently according to actual requirements.
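Filtering the per-model similarities against the preset threshold, with 0.88 as an illustrative default drawn from the values the text mentions, can be sketched as:

```python
from typing import List, Sequence


def target_similarities(sims: Sequence[float], threshold: float = 0.88) -> List[float]:
    # Keep only the "target feature similarities": values not less than the
    # preset similarity threshold (e.g. 85%, 88%, or 90% per the text).
    return [s for s in sims if s >= threshold]
```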
Optionally, please refer to fig. 4, which is a flowchart illustrating the sub-steps included in step S230 shown in fig. 2. In this embodiment, the step of performing a deployment alarm based on the distribution situation between the similarity of the target features in step S230 may include sub-step S231 and sub-step S232:
and a substep S231 of counting the total number of the similarity degrees of the obtained target feature similarity degrees and comparing the counted total number of the similarity degrees with a preset number of the similarity degrees.
In this embodiment, the preset similarity number represents the reference precision of the current face deployment alarm. The server 10 counts the target feature similarities (those not less than the preset similarity threshold) corresponding to the person image and compares this total with the preset similarity number to determine whether the image meets the reference precision required for an alarm. The preset similarity number is at least one and no greater than the number of feature models stored in the server 10; for example, if the server 10 stores 10 feature models, the preset similarity number may be 1, 5, or 8, and may be set differently according to actual requirements.
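The count check amounts to a minimum vote requirement: the image only proceeds if at least the preset number of models produced a qualifying similarity. A minimal sketch:

```python
from typing import Sequence


def meets_reference_precision(target_sims: Sequence[float], preset_count: int) -> bool:
    # preset_count must be >= 1 and <= the number of stored feature models;
    # the image qualifies only when enough models "agree" on the match.
    return len(target_sims) >= preset_count
```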
And a substep S232, if the total number of the similarity degrees is greater than or equal to the preset number of the similarity degrees, normalizing the similarity degrees of the target features, and alarming based on a preset alarm threshold value and the normalized similarity degrees of the target features.
In this embodiment, if the total number of the similarities is smaller than the preset number of the similarities, the server 10 may determine that the to-be-identified person image does not meet the reference precision for the server 10 to perform the alarm, and the to-be-identified person image is not matched with the target controlled person.
In this embodiment, if the total number of similarities is greater than or equal to the preset number of similarities, normalization processing is performed on the target feature similarities so as to bring them onto the same scale. The server 10 may normalize the target feature similarities by mapping each of them onto any one of the feature models. For example, suppose the server 10 stores N feature models, and the feature similarities between the to-be-identified person image and the target controlled person under the N feature models are S1, S2, ..., SN-1, SN respectively, of which S1, S2, ..., Sm are the target feature similarities not less than the preset similarity threshold. Mapping each target feature similarity onto the N-th feature model then yields the normalized target feature similarities S1/SN, S2/SN, ..., Sm/SN.
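The mapping described above reduces to dividing each target similarity by the reference model's similarity. The sketch below is an illustrative reading of that step; the values are assumed, and a nonzero reference similarity is assumed since a zero divisor would make the mapping undefined.

```python
# Sketch of the normalization in sub-step S232: express each target feature
# similarity relative to one chosen reference model (the N-th model, SN,
# in the example above) by dividing through by that model's similarity.
def normalize_to_reference(target_sims, reference_sim):
    """Return each target similarity divided by the reference similarity.

    Assumes reference_sim is nonzero; a zero similarity under the reference
    model would leave the mapping undefined.
    """
    return [s / reference_sim for s in target_sims]

sims = [0.9, 0.85, 0.8]   # target similarities S1..Sm (illustrative)
sn = 0.8                  # similarity under the N-th (reference) model
normalized = normalize_to_reference(sims, sn)
print(normalized)
```

After this step all scores are expressed on the reference model's scale, which is what allows the variance comparison that follows to use a single preset alarm threshold.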
In this embodiment, after the server 10 completes the normalization of the target feature similarities, the server 10 alarms based on a preset alarm threshold and the normalized target feature similarities. The preset alarm threshold is a positive constant that characterizes the distribution density among the target feature similarities, and the server 10 determines whether to alarm by means of this threshold.
Wherein the alarming based on the preset alarming threshold value and the normalized feature similarity of each target comprises:
carrying out variance operation on the normalized feature similarity of each target to obtain corresponding variances;
and comparing the obtained variance with the preset alarm threshold value, and alarming when the variance is smaller than the preset alarm threshold value.
In this embodiment, after the server 10 performs average calculation on the normalized feature similarity of each target, a variance calculation formula may be used to calculate a corresponding variance, where a specific calculation formula is as follows:
Sa = (S1 + S2 + ... + Sm) / m

σ² = [(S1 - Sa)² + (S2 - Sa)² + ... + (Sm - Sa)²] / m

Wherein the former of the two formulas is the average calculation formula, Si represents the i-th normalized target feature similarity, and Sa is the average of the normalized target feature similarities; the latter of the two formulas is the variance calculation formula, and σ² represents the variance of the normalized target feature similarities.
In the present embodiment, the server 10 compares the calculated variance σ² with the preset alarm threshold. If the comparison result shows that the variance σ² is greater than or equal to the preset alarm threshold, the server 10 judges the case to be a false alarm and does not alarm; if the comparison result shows that the variance σ² is smaller than the preset alarm threshold, the server 10 gives an alarm, which indicates that the to-be-identified person image matches the target controlled person.
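The decision rule above follows directly from the two formulas: compute the average Sa, then the variance σ², and alarm only when the variance is below the threshold (tightly clustered normalized scores suggest a genuine match). The threshold value below is an assumed illustration, not one specified by the embodiment.

```python
# Sketch of the variance-based alarm decision: compute the average Sa and
# the variance sigma^2 of the normalized target similarities, and alarm
# only when the variance falls below the preset alarm threshold.
ALARM_THRESHOLD = 0.01   # preset alarm threshold (illustrative positive constant)

def should_alarm(normalized_sims, threshold=ALARM_THRESHOLD):
    m = len(normalized_sims)
    sa = sum(normalized_sims) / m                               # average Sa
    variance = sum((s - sa) ** 2 for s in normalized_sims) / m  # sigma^2
    return variance < threshold

print(should_alarm([1.00, 1.01, 0.99]))  # tightly clustered -> alarm
print(should_alarm([1.00, 1.50, 0.50]))  # scattered -> judged a false alarm
```

Note that the population variance (division by m) is used here, matching the formula in the description, rather than the sample variance (division by m-1).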
Fig. 5 is a schematic flow chart of a face deployment alarm method according to a preferred embodiment of the present invention. In this embodiment of the present invention, before step S210, the face deployment alarm method may further include:
step S209, a preset similarity threshold, a preset similarity number, a preset alarm threshold and each characteristic model are configured.
In this embodiment, the server 10 may configure the preset similarity threshold, the preset number of similarities, the preset alarm threshold, and each feature model directly under the control of an operator, or may obtain these configured items from other external devices and store and operate on them accordingly. Through the plurality of stored feature models, the server 10 widens the range of application scenarios of the face deployment alarm and correspondingly reduces the limitation of the single-model comparison approach of the prior art.
Fig. 6 is a block diagram of the face deployment alarm apparatus 100 shown in fig. 1 according to a preferred embodiment of the present invention. In the embodiment of the present invention, the face deployment alarm apparatus 100 includes a feature extraction module 110, a feature comparison module 120, and a deployment alarm module 130.
The feature extraction module 110 is configured to obtain a to-be-identified person image, and perform feature extraction on the to-be-identified person image through each feature model to obtain a face feature of the to-be-identified person image corresponding to each feature model.
In this embodiment, the feature extraction module 110 may execute the step S210 shown in fig. 2, and the sub-steps S211 and S212 shown in fig. 3, and the specific execution process may refer to the above detailed description of the step S210, the sub-step S211 and the sub-step S212.
The feature comparison module 120 is configured to compare each face feature of the to-be-identified person image with a corresponding face feature of a target control person, so as to obtain feature similarity between the face feature of the to-be-identified person image and the face feature corresponding to the target control person in different feature models.
In this embodiment, the feature comparison module 120 may execute step S220 shown in fig. 2, and the specific execution process may refer to the above detailed description of step S220.
The deployment alarm module 130 is configured to compare the feature similarity of each feature model with a preset similarity threshold, to obtain feature similarities of each target that are not less than the preset similarity threshold, and to perform deployment alarm based on a distribution condition between the feature similarities of each target.
In this embodiment, the deployment alarm module 130 may execute step S230 shown in fig. 2, and the specific execution process may refer to the above detailed description of step S230.
Optionally, please refer to fig. 7, which is a block diagram of the deployment alarm module 130 shown in fig. 6. In this embodiment, the deployment alarm module 130 may include a number statistics sub-module 131 and a normalization alarm sub-module 132.
The number counting submodule 131 is configured to count the total number of the obtained target feature similarities, and to compare the counted total number against the preset number of similarities.
In this embodiment, the number statistics sub-module 131 may perform the sub-step S231 shown in fig. 4, and the detailed implementation process may refer to the detailed description of the sub-step S231 above.
The normalizing alarm sub-module 132 is configured to, if the total number of the similarities is greater than or equal to the preset number of the similarities, perform normalization processing on the feature similarities of the targets, and alarm based on a preset alarm threshold and the normalized feature similarities of the targets.
In this embodiment, the manner of alarming by the normalization alarm sub-module 132 based on the preset alarm threshold and the normalized similarity of the target features includes:
carrying out variance operation on the normalized feature similarity of each target to obtain corresponding variances;
and comparing the obtained variance with the preset alarm threshold value, and alarming when the variance is smaller than the preset alarm threshold value.
The normalization alarm sub-module 132 may perform the sub-step S232 shown in fig. 4, and the detailed implementation process may refer to the detailed description of the sub-step S232 above.
Fig. 8 is a schematic block diagram of the face alarm apparatus 100 shown in fig. 1 according to another preferred embodiment of the present invention. In the embodiment of the present invention, the face alarm apparatus 100 may further include a configuration module 140.
The configuration module 140 is configured to configure a preset similarity threshold, a preset number of similarities, a preset alarm threshold, and each feature model.
In this embodiment, the configuration module 140 may execute step S209 shown in fig. 5, and the specific execution process may refer to the above detailed description of step S209.
In summary, the face deployment alarm method and apparatus provided in the preferred embodiments of the present invention consume little manpower, have a wide application range, reduce the deployment false alarm rate, and provide a face deployment alarm service with high alarm precision and high alarm efficiency. The method is applied to a server in which a plurality of different feature models for extracting face features, together with the face features of the target controlled person corresponding to each feature model, are prestored.

The method first acquires a to-be-identified person image and performs feature extraction on it through each feature model to obtain the corresponding face features of the image under each feature model. It then compares each face feature of the to-be-identified person image with the corresponding face feature of the target controlled person, obtaining the feature similarity between the two under each feature model. Finally, it compares the feature similarity under each feature model with a preset similarity threshold to obtain the target feature similarities that are not smaller than the preset similarity threshold, and performs a deployment alarm based on the distribution among these target feature similarities.

By comparing multiple face features of the to-be-identified person image against the corresponding face features of the target controlled person, the method widens its application range; by alarming according to the distribution among the target feature similarities not smaller than the preset similarity threshold, it achieves high alarm precision and high alarm efficiency and reduces the deployment false alarm rate. Through the plurality of prestored feature models, the server reduces the limitation of the single-model comparison approach of the prior art and correspondingly widens the range of application scenarios of the face deployment alarm.
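Putting the summarized steps together, the claimed flow from per-model comparison (S220) through the variance decision (S232) can be sketched end to end. All constants, and the choice of the last model as the normalization reference, are illustrative assumptions rather than the patent's implementation; feature extraction and comparison are stubbed out as a list of precomputed per-model similarities.

```python
# End-to-end sketch: one similarity per feature model in, alarm decision out.
SIM_THRESHOLD, MIN_SIM_COUNT, ALARM_THRESHOLD = 0.8, 2, 0.01  # assumed values

def face_deployment_alarm(model_similarities):
    """model_similarities: feature similarity under each feature model (S220)."""
    # S230/S231: keep target similarities and check the reference precision
    targets = [s for s in model_similarities if s >= SIM_THRESHOLD]
    if len(targets) < MIN_SIM_COUNT:
        return False  # too few models agree: no match, no alarm
    # S232: normalize onto the last model's similarity, then use the variance
    reference = model_similarities[-1]
    norm = [s / reference for s in targets]
    mean = sum(norm) / len(norm)
    variance = sum((s - mean) ** 2 for s in norm) / len(norm)
    return variance < ALARM_THRESHOLD  # tight clustering triggers the alarm

print(face_deployment_alarm([0.92, 0.90, 0.91]))  # consistent scores
print(face_deployment_alarm([0.90, 0.50, 0.40]))  # only one model agrees
```

In the first call all three models agree and the normalized scores cluster tightly, so the alarm fires; in the second only one model passes the threshold, so the image is rejected before the variance step.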
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A human face control alarm method is characterized in that the method is applied to a server, a plurality of different feature models for extracting human face features and corresponding human face features of a target control figure under each feature model are prestored in the server, and the method comprises the following steps:
acquiring a figure image to be recognized, and performing feature extraction on the figure image to be recognized through each feature model to obtain corresponding face features of the figure image to be recognized under each feature model;
comparing the face features of the figure image to be recognized under the same feature model with the face features of the target control figure to obtain feature similarity between the face features of the figure image to be recognized under different feature models and the face features corresponding to the target control figure;
respectively comparing the feature similarity under each feature model with a preset similarity threshold to obtain the feature similarity of each target not less than the preset similarity threshold, and performing control alarm based on the distribution condition among the feature similarities of each target;
the step of performing deployment alarm based on the distribution situation among the target feature similarities comprises the following steps:
counting the total similarity number of the obtained target feature similarities, and comparing the counted total similarity number with a preset similarity number;
and if the total number of the similarity degrees is greater than or equal to the preset number of the similarity degrees, normalizing the similarity degrees of the features of the targets, and alarming based on a preset alarm threshold value and the normalized similarity degrees of the features of the targets.
2. The method of claim 1, wherein the step of extracting the features of the to-be-recognized character image through each feature model to obtain the corresponding facial features of the to-be-recognized character image under each feature model comprises:
extracting the face area in the figure image to be recognized to obtain a corresponding face image in the figure image to be recognized;
and respectively extracting the features of the face image based on each feature model to obtain the face features matched with the corresponding feature models in the face image.
3. The method of claim 1, wherein the step of alarming based on the preset alarm threshold and the normalized similarity of the target features comprises:
carrying out variance operation on the normalized feature similarity of each target to obtain corresponding variances;
and comparing the obtained variance with the preset alarm threshold value, and alarming when the variance is smaller than the preset alarm threshold value.
4. The method according to any one of claims 1 to 3, wherein before the step of acquiring the image of the person to be recognized, the method further comprises:
and configuring a preset similarity threshold, a preset similarity number, a preset alarm threshold and each characteristic model.
5. A face deployment alarm apparatus, characterized in that the apparatus is applied to a server, a plurality of different feature models for extracting face features and the face features of a target controlled person corresponding to each feature model are prestored in the server, and the apparatus comprises:
the characteristic extraction module is used for acquiring a figure image to be recognized and extracting the characteristics of the figure image to be recognized through each characteristic model to obtain the corresponding face characteristics of the figure image to be recognized under each characteristic model;
the characteristic comparison module is used for comparing the face characteristics of the figure image to be identified under the same characteristic model with the face characteristics of the target control figure to obtain the characteristic similarity between the face characteristics of the figure image to be identified under different characteristic models and the face characteristics corresponding to the target control figure;
the control alarm module is used for comparing the feature similarity under each feature model with a preset similarity threshold respectively to obtain the feature similarity of each target not less than the preset similarity threshold, and carrying out control alarm based on the distribution condition among the feature similarities of each target;
wherein, the control alarm module includes:
the number counting submodule is used for counting the total similarity number of the obtained target feature similarity and comparing the counted total similarity number with the preset similarity number;
and the normalizing alarm sub-module is used for normalizing the similarity of the characteristics of the targets if the total number of the similarities is greater than or equal to the preset number of the similarities and giving an alarm based on a preset alarm threshold and the normalized similarity of the characteristics of the targets.
6. The apparatus of claim 5, wherein the feature extraction module performs feature extraction on the to-be-recognized character image through each feature model, and the manner of obtaining the face features of the to-be-recognized character image under each feature model comprises:
extracting the face area in the figure image to be recognized to obtain a corresponding face image in the figure image to be recognized;
and respectively extracting the features of the face image based on each feature model to obtain the face features matched with the corresponding feature models in the face image.
7. The apparatus of claim 5, wherein the manner of alarming by the normalization alarm sub-module based on the preset alarm threshold and the normalized similarity of the target features comprises:
carrying out variance operation on the normalized feature similarity of each target to obtain corresponding variances;
and comparing the obtained variance with the preset alarm threshold value, and alarming when the variance is smaller than the preset alarm threshold value.
8. The apparatus of any one of claims 5-7, further comprising:
and the configuration module is used for configuring a preset similarity threshold value, a preset similarity number, a preset alarm threshold value and each characteristic model.
CN201711469543.8A 2017-12-29 2017-12-29 Human face distribution alarm method and device Active CN109993020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711469543.8A CN109993020B (en) 2017-12-29 2017-12-29 Human face distribution alarm method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711469543.8A CN109993020B (en) 2017-12-29 2017-12-29 Human face distribution alarm method and device

Publications (2)

Publication Number Publication Date
CN109993020A CN109993020A (en) 2019-07-09
CN109993020B true CN109993020B (en) 2021-08-31

Family

ID=67109494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711469543.8A Active CN109993020B (en) 2017-12-29 2017-12-29 Human face distribution alarm method and device

Country Status (1)

Country Link
CN (1) CN109993020B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751095A (en) * 2019-10-21 2020-02-04 中国民航信息网络股份有限公司 Identity recognition method, system and readable storage medium
CN111144241B (en) * 2019-12-13 2023-06-20 深圳奇迹智慧网络有限公司 Target identification method and device based on image verification and computer equipment
CN111291627B (en) * 2020-01-16 2024-04-19 广州酷狗计算机科技有限公司 Face recognition method and device and computer equipment
CN113326714B (en) * 2020-02-28 2024-03-22 杭州海康威视数字技术股份有限公司 Target comparison method, target comparison device, electronic equipment and readable storage medium
CN111753756A (en) * 2020-06-28 2020-10-09 浙江大华技术股份有限公司 Object identification-based deployment alarm method and device and storage medium
CN112487997B (en) * 2020-12-01 2024-04-09 航天信息股份有限公司 Portrait feature extraction method and device
CN113032758B (en) * 2021-03-26 2023-06-16 平安银行股份有限公司 Identification method, device, equipment and storage medium for video question-answering flow
CN113378622A (en) * 2021-04-06 2021-09-10 青岛以萨数据技术有限公司 Specific person identification method, device, system and medium
CN114116849A (en) * 2021-12-01 2022-03-01 南威软件股份有限公司 Secondary studying and judging method and device for portrait early warning information

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101310885B1 (en) * 2012-05-31 2013-09-25 주식회사 에스원 Face identification method based on area division and apparatus thereof
CN104992155A (en) * 2015-07-02 2015-10-21 广东欧珀移动通信有限公司 Method and apparatus for acquiring face positions
CN106709468A (en) * 2016-12-31 2017-05-24 北京中科天云科技有限公司 City region surveillance system and device
CN106878670A (en) * 2016-12-24 2017-06-20 深圳云天励飞技术有限公司 A kind of method for processing video frequency and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101310885B1 (en) * 2012-05-31 2013-09-25 주식회사 에스원 Face identification method based on area division and apparatus thereof
CN104992155A (en) * 2015-07-02 2015-10-21 广东欧珀移动通信有限公司 Method and apparatus for acquiring face positions
CN106878670A (en) * 2016-12-24 2017-06-20 深圳云天励飞技术有限公司 A kind of method for processing video frequency and device
CN106709468A (en) * 2016-12-31 2017-05-24 北京中科天云科技有限公司 City region surveillance system and device

Also Published As

Publication number Publication date
CN109993020A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN109993020B (en) Human face distribution alarm method and device
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
WO2019011165A1 (en) Facial recognition method and apparatus, electronic device, and storage medium
KR20210110823A (en) Image recognition method, training method of recognition model, and related devices and devices
US20150262068A1 (en) Event detection apparatus and event detection method
CN109117857B (en) Biological attribute identification method, device and equipment
CN108108711B (en) Face control method, electronic device and storage medium
CN111339979B (en) Image recognition method and image recognition device based on feature extraction
CN114140713A (en) Image recognition system and image recognition method
CN113222942A (en) Training method of multi-label classification model and method for predicting labels
CN114140712A (en) Automatic image recognition and distribution system and method
CN111291887A (en) Neural network training method, image recognition method, device and electronic equipment
CN110826372A (en) Method and device for detecting human face characteristic points
CN114666473A (en) Video monitoring method, system, terminal and storage medium for farmland protection
CN114511583A (en) Image definition detection method, image definition detection device, electronic device, and storage medium
CN110928889A (en) Training model updating method, device and computer storage medium
CN115457466A (en) Inspection video-based hidden danger detection method and system and electronic equipment
CN114663726A (en) Training method of target type detection model, target detection method and electronic equipment
CN114139016A (en) Data processing method and system for intelligent cell
CN111709404B (en) Machine room legacy identification method, system and equipment
CN114095734A (en) User data compression method and system based on data processing
CN114549884A (en) Abnormal image detection method, device, equipment and medium
CN112241671B (en) Personnel identity recognition method, device and system
CN114173086A (en) User data screening method based on data processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220328

Address after: 250101 floor 3b, building A2-5, Hanyu Jingu, high tech Zone, Jinan City, Shandong Province

Patentee after: Jinan Yushi Intelligent Technology Co.,Ltd.

Address before: 310000 1-11 / F, South Block, building 10, No. 88, Jiangling Road, Xixing street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: ZHEJIANG UNIVIEW TECHNOLOGIES Co.,Ltd.