CN111539271B - Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense - Google Patents

Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense

Info

Publication number
CN111539271B
CN111539271B
Authority
CN
China
Prior art keywords
sub
face
block
original image
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010277581.9A
Other languages
Chinese (zh)
Other versions
CN111539271A (en)
Inventor
车国锋
张麟瑞
杨海红
武轩
徐丹丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Xinguang Photoelectric Technology Co ltd
Original Assignee
Harbin Xinguang Photoelectric Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Xinguang Photoelectric Technology Co ltd filed Critical Harbin Xinguang Photoelectric Technology Co ltd
Priority to CN202010277581.9A priority Critical patent/CN111539271B/en
Publication of CN111539271A publication Critical patent/CN111539271A/en
Application granted granted Critical
Publication of CN111539271B publication Critical patent/CN111539271B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the field of face recognition, and in particular to a face recognition method based on a wearable device, comprising the following steps: dividing an original image into five sub-blocks, wherein the first to fourth sub-blocks can be stitched together to form the original image and the overlapping area of any two mutually overlapping sub-blocks among them matches the minimum detection precision of a preset face recognition algorithm, and the fifth sub-block is a reduced image obtained by halving the length and width of the original image; recognizing the five sub-blocks in parallel to obtain recognition results; merging the recognition results over the overlapping areas through coordinate correction, and marking them on the original image; extracting face contours from the marked original image to obtain a feature vector representing the facial features; and comparing the feature vector with the target vectors of a preset face feature library to obtain a comparison result. The invention also includes a wearable face detection device for frontier defense. The invention suits frontier defense scenarios, handles recognition of both large and small faces, and can recognize and detect accurately and quickly.

Description

Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense
Technical Field
The invention relates to the fields of augmented reality and face recognition, and in particular to a face recognition method based on a wearable device and a wearable face detection device for frontier defense.
Background
With economic development accelerating and information technology advancing rapidly, frontier regions face dense populations and a growing floating population. Frontier management problems in border construction, such as traffic, public security, protection of key areas and increasingly prominent cybercrime, make the construction of modern management means and network information inevitably as important in the future as economic construction itself.
In recent years the social crime rate has tended to rise year by year. Cybercrime in particular has grown more serious, fugitives wanted online appear frequently, and criminals' methods have become more concealed and sophisticated, increasing the difficulty for law enforcement personnel investigating cases. Meanwhile, occasional violent incidents have generally reduced people's sense of safety at ports of entry and exit and similar places. At the same time, manual screening of suspects by frontier personnel is like fishing for a needle in the sea: the success rate is extremely low and the effect is not obvious. The main practical problems in the prior art are as follows:
First, as the number of criminals keeps growing, finding a criminal suspect in a photo library of millions of people is time-consuming and labor-intensive, may lead to omissions, and greatly reduces the efficiency of solving cases. Second, most existing frontier investigation still relies on after-the-fact searching and wanted notices, and the losses already caused by a case are hard to compensate effectively. Finally, if an incident could be prevented while it is happening, the loss could be contained within a minimum range at the first moment.
Intelligent frontier defense construction builds on the existing video surveillance and informatization of key checkpoints, which already hold a large amount of video image resources and related valuable pictures. For personnel screening, however, identity confirmation still depends on technical or network investigation means, and the video image resources cannot be fully exploited to locate a person's identity quickly. Even when large numbers of officers are deployed, this "human-wave tactic" is limited by the labor intensity of naked-eye identification; manual screening is inefficient, video capture is affected by uncertain factors such as lighting and tilted viewing angles, and the accuracy and timeliness of a search cannot be guaranteed. Especially when a sudden emergency case occurs, the best time to solve it is often missed.
Disclosure of Invention
The invention aims to overcome the defects of prior-art face recognition methods: low processing speed, susceptibility to missed detections, and difficulty in accurately finding a target within a large flow of people.
According to a first aspect of the present invention, there is provided a face recognition method based on a wearable device, comprising: dividing an original image into five sub-blocks, wherein the first to fourth sub-blocks can be stitched together to form the original image, and the size of the overlapping area of any two mutually overlapping sub-blocks among the first to fourth sub-blocks matches the minimum detection precision of a preset face recognition algorithm, the fifth sub-block being a reduced image obtained by halving the length and width of the original image; recognizing the five sub-blocks in parallel with the preset face recognition algorithm to obtain recognition results identifying the regions where faces lie; merging the recognition results over the overlapping areas through coordinate correction, and marking the corrected results on the original image; extracting face contours from the marked original image to obtain a feature vector representing the face features; and comparing the feature vector with the target vectors of a preset face feature library to obtain a comparison result.
Preferably, if the width of the original image is w, its height is h, and g is twice the minimum detection precision of the preset face recognition algorithm, then (regions given as [x-range] × [y-range]): the area occupied by the first sub-block in the original image is [0, w/2 + g/2] × [0, h/2 + g/2]; the second sub-block occupies [w/2 - g/2, w] × [0, h/2 + g/2]; the third sub-block occupies [0, w/2 + g/2] × [h/2 - g/2, h]; and the fourth sub-block occupies [w/2 - g/2, w] × [h/2 - g/2, h].
Preferably, the coordinate correction process is: let the position coordinates of the recognition result within each sub-block be (x, y); the recognition result in the first sub-block is mapped to (x, y) of the original image; the recognition result in the second sub-block is mapped to (x + w/2 - g/2, y); the recognition result in the third sub-block is mapped to (x, y + h/2 - g/2); and the recognition result in the fourth sub-block is mapped to (x + w/2 - g/2, y + h/2 - g/2).
Preferably, the overlapping-region merging process is: obtain the center position coordinates (x_c, y_c) of each recognition result; calculate the distance Dcc between the center coordinates of every pair of recognition results; merge the regions whose Dcc is smaller than a preset value, taking the maximal outer region of the overlapping regions; and replace the original regions with the merged region.
Preferably, the preset face recognition algorithm is a HOG & SVM target detection algorithm.
Preferably, the face contour extraction is implemented by a GBDT forest based on boosted residuals.
Preferably, the face feature library is constructed by the following process: let the face feature library contain M feature vectors, each of dimension n; for each dimension k, form an array L from the k-th components of all feature vectors, sort L, and compute the largest gap Ld between two adjacent numbers after sorting; over all dimensions, record the dimension k with the largest gap Ld and the mean value Lk of the two adjacent numbers forming that gap, and divide the M feature vectors into two subsets according to k and Lk; repeat these steps until the number of vectors in each subset is not greater than a preset value.
Preferably, the preset value is 8.
Preferably, the specific process of comparing the vector to be detected with the feature vectors in the preset face feature library to obtain the comparison result is: select one feature vector from each subset of the feature library and compute its vector inner product with the vector to be detected; select the subset with the largest result; compute the vector inner product of each feature vector in that subset with the vector to be detected; and select the feature vector with the largest result as the comparison result, taking the maximum inner product value as the similarity.
According to a second aspect of the present invention, there is provided a wearable face detection device for frontier defense, comprising: a shooting device for acquiring an original image within the field of view; a storage device for storing a face feature library containing preset feature vectors capable of representing face features; and a processor for implementing, by executing computer program instructions, the face recognition method based on the wearable device according to the first aspect of the invention, so as to recognize whether a face matching the features stored in the feature library is present in the original image.
The beneficial effects of the invention are as follows:
1. the overlapping areas between adjacent sub-blocks improve the recognition rate of small faces in the field of view;
2. the fifth sub-block prevents inaccurate detection when a large face appears on a sub-block dividing line;
3. dividing the facial feature vectors of the feature library into subsets greatly reduces the matching time;
4. the method suits the frontier defense scenario, handles recognition of both large and small faces, and can reach a decision before the target leaves the wearer's field of view.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a face recognition method based on a wearable device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of sub-block partitioning;
FIG. 3 (a) is a schematic diagram of one embodiment of the array L formed from the k-th components of each vector when dividing the feature vectors of the feature library into subsets; FIG. 3 (b) shows the two numbers with the largest gap found in the array L; FIG. 3 (c) is a schematic diagram of the feature library divided into subsets according to the dimension k and the mean value Lk;
fig. 4 is a block schematic diagram of a wearable face detection apparatus for security as an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
The face recognition method based on a wearable device provided by the invention performs face matching recognition on the current picture and can be used in a wearable device, for example AR glasses made for frontier defense personnel. In use, the image acquisition device of the glasses captures images within the field of view; a memory and a processor are arranged in the AR glasses, the memory stores preset target face information, and the processor executes the method: the images acquired in real time are processed in parallel to identify face regions, which are then matched against the information in the memory to obtain a matching result. The invention is described below through various embodiments.
< method >
As shown in the flowchart of fig. 1, the method of the present embodiment comprises:
step S1: dividing an original image into five sub-blocks, wherein the first sub-block to the fourth sub-block can be spliced to form the original image, and the size of an overlapping area of any two mutually overlapped sub-blocks in the first sub-block to the fourth sub-block accords with the minimum detection precision of a preset face recognition algorithm; the fifth sub-block is a reduced image obtained by halving the original image length and width.
For step S1, as shown in fig. 2, the four sub-blocks A1 to A4 represent the upper-left, upper-right, lower-left and lower-right regions of the image respectively, and A0 is the image obtained by scaling the original image down to half its length and width. Processing the five sub-blocks in parallel with a face recognition algorithm increases the processing speed. Adjacent sub-blocks share an overlapping area, so each of A1 to A4 is slightly larger than a quarter of the original image. This arrangement addresses the missed detections that occur when a small face lies near a sub-block dividing line. To further reduce misses when a face happens to straddle a dividing line, the fifth sub-block is added: the original image with its length and width halved, so its area is one quarter of the original. Detection on it is faster, and a face lying exactly on a boundary can still be detected; the drawback is that the size reduction makes recognition of smaller faces inaccurate. The fifth sub-block therefore cooperates with the other four so that face regions are identified more accurately and efficiently.
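A minimal sketch of this division follows, assuming the image is a NumPy array in OpenCV's (height, width) layout; the function name and the choice of g are illustrative only, not details fixed by the description.

    import cv2
    import numpy as np

    def split_into_subblocks(image: np.ndarray, g: int):
        # Split into the four overlapping quarter blocks A1-A4 of fig. 2 plus
        # the half-size block A0; each of A1-A4 exceeds a quarter of the image
        # by g/2 in each direction, so adjacent blocks overlap by width g.
        h, w = image.shape[:2]
        hw, hh, e = w // 2, h // 2, g // 2
        a1 = image[0:hh + e, 0:hw + e]      # upper left
        a2 = image[0:hh + e, hw - e:w]      # upper right
        a3 = image[hh - e:h, 0:hw + e]      # lower left
        a4 = image[hh - e:h, hw - e:w]      # lower right
        a0 = cv2.resize(image, (hw, hh))    # length and width halved
        return [a1, a2, a3, a4, a0]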
Step S2: recognizing the five sub-blocks in parallel with the preset face recognition algorithm to obtain recognition results identifying the regions where faces lie.
For step S2, the five sub-blocks can be processed in parallel through multithreading, and the position of each face is marked within its sub-block. The "recognition result" can be marked with a rectangular box; current target recognition algorithms commonly mark results this way. The data representation of the rectangular box may be four vertex coordinates, two diagonal vertices, or another form. The face recognition algorithm of this embodiment can be chosen from the prior art, as long as it can frame the approximate region where a face lies. Because the parallel algorithm processes smaller images, this step effectively shortens the processing time.
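As one possible realization of this step, the sketch below uses dlib's frontal face detector, which is itself a HOG & SVM detector of the kind named later in this description, together with a thread pool for the parallelism; both choices are assumptions, and any detector returning rectangles could be substituted.

    from concurrent.futures import ThreadPoolExecutor
    import dlib

    detector = dlib.get_frontal_face_detector()   # HOG & SVM based

    def detect_faces(sub_block):
        # Boxes are (left, top, right, bottom), local to the sub-block.
        return [(r.left(), r.top(), r.right(), r.bottom())
                for r in detector(sub_block)]

    def detect_in_parallel(sub_blocks):
        with ThreadPoolExecutor(max_workers=len(sub_blocks)) as pool:
            return list(pool.map(detect_faces, sub_blocks))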
Step S3: merging the recognition results over the overlapping areas through coordinate correction, and marking the corrected results on the original image.
In step S3, coordinate correction is needed because step S2 performs face recognition on each sub-block separately, so the coordinates of a marked rectangular box are relative to its sub-block and must be mapped to coordinates in the original image. Overlapping-region merging is needed because the sub-blocks overlap: different sub-blocks may each detect the same face inside an overlapping region, so the repeatedly detected face regions must be merged into one.
Step S4: extracting face contours from the marked original image to obtain a feature vector representing the face features.
For step S4: the face recognition of step S2 only marks an approximate region with a rectangular box, which amounts to target position recognition. In this step, feature vectors are extracted so that it can later be judged whether the feature vector to be detected and a stored feature vector come from the same person.
Step S5: comparing the feature vector with the target vectors of a preset face feature library to obtain a comparison result.
For step S5, a specific operation must be performed between the vector to be detected and each vector in the feature library to obtain a similarity, and a similarity above a certain threshold is taken as the detection result. The feature library stores preprocessed feature vectors representing the face information of different subjects. For example, when the method is used for matching and identifying frontier defense suspects, the face information of wanted persons can be processed in advance to extract and store feature vectors; this avoids directly storing pictures, which occupy more space, and makes processing faster.
In one embodiment, if the width of the original image is w, its height is h, and g is twice the minimum detection precision of the preset face recognition algorithm, then (regions given as [x-range] × [y-range]):
the first sub-block occupies the region [0, w/2 + g/2] × [0, h/2 + g/2] of the original image;
the second sub-block occupies the region [w/2 - g/2, w] × [0, h/2 + g/2] of the original image;
the third sub-block occupies the region [0, w/2 + g/2] × [h/2 - g/2, h] of the original image;
the fourth sub-block occupies the region [w/2 - g/2, w] × [h/2 - g/2, h] of the original image.
Viewed on the image, g is the overlapping width of two adjacent sub-blocks. g needs to be twice the detection precision of the face recognition algorithm, so that a face falling within the band of width g/2 on either side of a dividing line can still be detected by some sub-block; this avoids missed detections to the greatest extent while keeping the detection speed in mind. The detection precision of face recognition is an inherent attribute of each algorithm, determined by the minimum size its model can detect; g can be determined from each algorithm's minimum precision, or, for a given g and specific requirements, an algorithm meeting the condition can be sought. For example, if the minimum size some object recognition algorithm can detect is 10 × 10 pixels, g may be 20 pixels.
Correspondingly, with this sub-block division, the coordinate correction process comprises the following steps:
The position coordinates of the recognition result within each sub-block are set to (x, y).
The recognition result in the first sub-block is mapped to (x, y) of the original image.
The recognition result in the second sub-block is mapped to (x + w/2 - g/2, y) in the original image.
The recognition result in the third sub-block is mapped to (x, y + h/2 - g/2) in the original image.
The recognition result in the fourth sub-block is mapped to (x + w/2 - g/2, y + h/2 - g/2) in the original image.
That is, the coordinate correction process maps the coordinates of the recognition result (e.g., the rectangular box coordinates) in each sub-block to coordinates in the original image. In this example the coordinate notation of the first sub-block coincides with the original image, so no change is required; mapping the coordinates of the second sub-block to the original image corresponds to a translation of w/2 - g/2 pixels to the right; mapping the coordinates in the third sub-block corresponds to a translation of h/2 - g/2 pixels downward; and mapping the coordinates in the fourth sub-block corresponds to a translation of w/2 - g/2 pixels to the right and h/2 - g/2 pixels downward.
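A sketch of this correction follows; the handling of the fifth, half-size sub-block, whose coordinates must be doubled to return to the original scale, is an assumption the text above does not spell out.

    def correct_coordinates(box, block_index, w, h, g):
        # Map a (left, top, right, bottom) box from sub-block coordinates
        # back into original-image coordinates.
        x1, y1, x2, y2 = box
        dx, dy = w // 2 - g // 2, h // 2 - g // 2
        offsets = {1: (0, 0), 2: (dx, 0), 3: (0, dy), 4: (dx, dy)}
        if block_index in offsets:                # sub-blocks A1 to A4
            ox, oy = offsets[block_index]
            return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)
        return (2 * x1, 2 * y1, 2 * x2, 2 * y2)   # assumed: A0 scales up by 2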
Overlapping-region merging works on the center coordinates of the rectangular boxes. After coordinate correction, all the target boxes identified in the sub-blocks have been mapped into the original image; some boxes may have centers very close together, and these can be regarded as the same face target and merged. The specific process is: obtain the center position coordinates (x_c, y_c) of each recognition result; calculate the distance Dcc between the center coordinates of every pair of recognition results; merge the regions whose Dcc is smaller than a preset value, taking the maximal outer region of the overlapping regions; and replace the original regions with the merged region. The "maximal outer region of the overlapping regions" may be the intersection of the two regions; the union of the two regions may also be selected as desired.
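The following sketch merges boxes by center distance, taking the union (bounding box) of merged regions as one of the two options just mentioned; the threshold value is an assumed parameter.

    import math

    def merge_boxes(boxes, dcc_threshold):
        # Boxes whose center distance Dcc falls below the threshold are
        # treated as the same face and merged into their bounding box.
        merged = []
        for box in boxes:
            cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
            for i, kept in enumerate(merged):
                kx, ky = (kept[0] + kept[2]) / 2, (kept[1] + kept[3]) / 2
                if math.hypot(cx - kx, cy - ky) < dcc_threshold:
                    merged[i] = (min(box[0], kept[0]), min(box[1], kept[1]),
                                 max(box[2], kept[2]), max(box[3], kept[3]))
                    break
            else:
                merged.append(box)
        return merged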
The face recognition algorithm can be the HOG & SVM target detection algorithm; other kinds of target detection algorithm can also be chosen, as long as the position region where a face lies can be obtained. Face contour extraction is implemented through a GBDT forest based on boosted residuals; the algorithm exists in the prior art, and its output is a feature vector.
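By way of illustration, dlib's shape predictor is one prior-art realization of such a residual-boosting tree ensemble for contour fitting; the model file and the flattening of the centered landmarks into a feature vector are assumptions made for this sketch, not details fixed by the description.

    import dlib
    import numpy as np

    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def extract_feature_vector(image, box):
        # Fit the facial contour inside the detected box and flatten the
        # centered landmark coordinates into an n-dimensional vector.
        x1, y1, x2, y2 = box
        shape = predictor(image, dlib.rectangle(x1, y1, x2, y2))
        pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)
        pts -= pts.mean(axis=0)       # translate to the contour centroid
        return pts.flatten()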
The embodiment also provides a method for constructing the face feature library, which specifically comprises the following steps:
Let the face feature library contain M feature vectors, each of dimension n. For each dimension k, the k-th components of all feature vectors form an array L; L is sorted, and the largest gap Ld between two adjacent numbers after sorting is computed. Over all dimensions, the largest gap Ld is found, and the corresponding dimension k and the mean value Lk of the two adjacent numbers forming that gap are recorded; the M feature vectors are divided into two subsets according to k and Lk. These steps are repeated until the number of vectors in each subset is not greater than a preset value. For matching, one feature vector is then selected from each subset of the feature library and its vector inner product with the vector to be detected is computed; the subset with the largest result is selected; the vector inner product of each feature vector in that subset with the vector to be detected is computed; and the feature vector with the largest result is selected as the comparison result, with the maximum inner product value taken as the similarity.
As shown in fig. 3 (a), the M feature vectors represent M pre-stored items of face information. The k-th components of all M vectors are sorted to form the array L, and the two adjacent numbers with the largest gap are found; in the example shown in fig. 3 (b) these are L(j-1) and L(j). Their mean is denoted Lk and their difference Ld. Taking k from 1 to n, all values of Ld are computed, the maximum of Ld is found, and the mean Lk of the corresponding pair L(j-1), L(j) is computed. The M feature vectors are then divided into subsets at the dimension corresponding to that maximum, the recorded values k and Lk serving as the index of the division. For example, in fig. 3 (c) the gap between L(j-1) and L(j) in the k-th dimension is found to be the largest, so vectors 1 to j-1 are placed in the first subset and vectors j to M in the second subset (the vectors being ordered here by the size of their k-th component). With Lk the mean of L(j-1) and L(j), this first division position can be recorded as the value Lk in dimension k: the k-th component of each vector is examined, vectors whose k-th component is greater than or equal to Lk go into one subset, and those whose k-th component is smaller go into the other. For each subset, if the number of feature vectors it contains is greater than a preset value, the division step is repeated until every subset holds no more than the preset value. In a preferred embodiment the preset value may be 8, i.e., after repeated division each resulting subset contains at most 8 feature vectors.
After this division, vectors that differ greatly lie in different subsets. During matching, one vector can be selected from each subset and the vector inner product between it and the feature vector to be detected computed; the subset corresponding to the largest result is selected, the inner product of the vector to be detected with each vector in that subset is computed, and the vector with the largest value is taken as the matching result. For example, suppose the feature library has 3 subsets of 5 feature vectors each, one vector in each subset is marked A, B and C respectively, and the feature vector to be detected is marked K. The inner products of K with A, B and C are computed; if the inner product of K and B is the largest, the inner product of K with each vector of B's subset is computed, and the ID corresponding to the largest value is the face recognition result. Together with the parallel execution described above, this greatly reduces the time needed for face matching.
Before the inner product calculation, the vectors in the feature library and the feature vector to be detected can first be normalized. The specific process is: for a vector V, find its Euclidean distance Vd = sqrt(V·V) from the origin and assign V×10/Vd to V, i.e., V = V×10/Vd. The vector V obtained in this way is normalized and can be used to calculate the subsequent inner product values.
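A sketch of the normalization and the two-stage matching follows; taking the first vector of each subset as that subset's representative is an assumption, since the description only requires "one feature vector from each subset".

    import numpy as np

    def normalize(v: np.ndarray) -> np.ndarray:
        vd = np.sqrt(np.dot(v, v))        # Euclidean distance Vd from origin
        return v * 10.0 / vd              # V = V * 10 / Vd

    def match(probe: np.ndarray, subsets):
        # Stage 1: pick the subset whose representative has the largest
        # inner product with the probe; stage 2: search inside that subset.
        probe = normalize(probe)
        reps = [normalize(s[0]) for s in subsets]
        best = subsets[int(np.argmax([np.dot(probe, r) for r in reps]))]
        sims = [np.dot(probe, normalize(v)) for v in best]
        i = int(np.argmax(sims))
        return best[i], float(sims[i])    # matched vector and its similarity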
At this point it can be seen that the main flow of this embodiment divides into four stages: 1. parallel block-wise processing of the original image to quickly identify the face region in each sub-block, mainly using target detection technology; 2. integration of the face regions identified in the sub-blocks back into the original image, mainly using coordinate correction and overlapping-region merging; 3. contour extraction on the face image in each box of the integrated image to obtain a feature vector, mainly using the residual-boosting-based face contour extraction algorithm; 4. vector inner product of the extracted feature vector with the face feature vectors pre-stored in the feature library, the largest result value being taken as the final matching result.
< apparatus >
The present embodiment provides a wearable face detection apparatus for frontier defense, as shown in fig. 4, including:
the shooting device 101 is used for acquiring an original image in a view; the storage device 103 is used for storing a face feature library, wherein the feature library comprises preset feature vectors capable of representing face features; the processor 102 is configured to implement the aforementioned face recognition method based on the wearable device by executing computer program instructions, so as to recognize whether a face identical to a feature stored in the feature library exists in the original image.
The device of this embodiment may be AR glasses, a head-mounted AR device, or another wearable device equipped with a camera, a memory and a processor. AR glasses or a head-mounted AR device ensure that the picture captured by the camera is close to what the wearer observes, making it convenient for the wearer to adjust pose and angle for a better view. Depending on the scenario, the shooting device may capture several frames per second or one frame every few seconds, and the acquired frames serve as the processor's input for recognition and matching.
A preferred application scenario of this embodiment is intelligent frontier defense: the device is worn by frontier defense personnel, and the camera acquires images of the field of view. The memory stores preset suspect information, such as information on wanted persons or on persons placed under monitoring for other purposes. Images acquired by the camera are sent to the processor for matching recognition; if the matching similarity is above a set threshold, a target is considered present in the field of view, and the device prompts the wearer.
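The sketch below composes the earlier sketches into the device loop of fig. 4, assuming an OpenCV camera as the shooting device; the similarity threshold, the Dcc threshold of g, and the alert hook are all assumptions.

    import cv2

    SIM_THRESHOLD = 90.0   # assumed: inner products peak at 100 after scaling

    def alert_wearer(box, similarity):
        # Placeholder alert; a real device would overlay the box in the AR view.
        print(f"target spotted at {box}, similarity {similarity:.1f}")

    def recognize_faces(frame, g):
        # Steps S1-S3: split, detect in parallel, correct coordinates, merge.
        sub_blocks = split_into_subblocks(frame, g)
        h, w = frame.shape[:2]
        boxes = []
        for idx, dets in enumerate(detect_in_parallel(sub_blocks), start=1):
            boxes += [correct_coordinates(b, idx, w, h, g) for b in dets]
        return merge_boxes(boxes, dcc_threshold=g)   # assumed threshold

    def run_device(subsets, g=20):
        camera = cv2.VideoCapture(0)                 # assumed shooting device
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            for box in recognize_faces(frame, g):
                vec = extract_feature_vector(frame, box)   # step S4
                target, sim = match(vec, subsets)          # step S5
                if sim > SIM_THRESHOLD:
                    alert_wearer(box, sim)
        camera.release()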
The detection method and device of the invention differ from existing face matching technology in being specifically optimized for frontier defense face recognition: 1. faces appearing in the field of view may vary widely in size, i.e., faces closer to the wearer are larger and distant ones smaller; the sub-block division therefore includes overlapping areas, solving the inaccurate detection caused by a small face lying near a dividing line; 2. to address the difficulty of detecting a larger face close to the wearer, which is easily cut apart by the sub-block division, the fifth sub-block is added, i.e., the image with halved length and width, which can detect larger faces unaffected by sub-block boundaries; 3. the feature vectors of the feature library are divided into subsets in advance, with similar features grouped into the same subset, reducing the matching time to the greatest extent; 4. parallel sub-block processing combined with subset matching in the feature library allows the device to judge at the fastest speed whether a person resembling a target in the feature library has appeared in the field of view, so a decision can be made before the suspected person moves away from the frontier defense officer; this makes it convenient for the officer to take corresponding measures quickly once a suspect is found, ensuring frontier security, and avoids the suspect going undetected in time because matching took too long.
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (6)

1. A face recognition method based on a wearable device, characterized by comprising the following steps:
dividing an original image into five sub-blocks, wherein the first to fourth sub-blocks can be stitched together to form the original image, and the size of the overlapping area of any two mutually overlapping sub-blocks among the first to fourth sub-blocks matches the minimum detection precision of a preset face recognition algorithm; the fifth sub-block is a reduced image obtained by halving the length and width of the original image;
recognizing the five sub-blocks in parallel with the preset face recognition algorithm to obtain recognition results identifying the regions where faces lie;
merging the recognition results over the overlapping areas through coordinate correction, and marking the corrected results on the original image;
extracting face contours from the marked original image to obtain a vector to be detected representing the face features;
comparing the vector to be detected with the feature vectors in a preset face feature library to obtain a comparison result;
let the width of the original image be w, the height be h, and g be twice the minimum detection precision of the preset face recognition algorithm; then (regions given as [x-range] × [y-range]):
the first sub-block occupies the region [0, w/2 + g/2] × [0, h/2 + g/2] of the original image;
the second sub-block occupies the region [w/2 - g/2, w] × [0, h/2 + g/2] of the original image;
the third sub-block occupies the region [0, w/2 + g/2] × [h/2 - g/2, h] of the original image;
the fourth sub-block occupies the region [w/2 - g/2, w] × [h/2 - g/2, h] of the original image;
the coordinate correction process comprises the following steps:
setting the position coordinates of the recognition result in each sub-block, within that sub-block, as (x, y),
mapping the recognition result in the first sub-block to (x, y) of the original image;
mapping the recognition result in the second sub-block to (x + w/2 - g/2, y) in the original image;
mapping the recognition result in the third sub-block to (x, y + h/2 - g/2) in the original image;
mapping the recognition result in the fourth sub-block to (x + w/2 - g/2, y + h/2 - g/2) in the original image;
the overlapping-region merging process is as follows:
obtaining the center position coordinates (x_c, y_c) of each recognition result;
calculating the distances Dcc between the center position coordinates of all the recognition results;
merging the regions whose Dcc is smaller than a preset value, in the manner of selecting the maximal outer region of the overlapping regions;
replacing the original regions with the merged region;
the face feature library is constructed through the following processes:
setting M feature vectors in a face feature library, wherein the dimension of each feature vector is n;
for each dimension k, forming an array L from the k-th components of all the feature vectors, sorting L, and calculating the largest gap Ld between two adjacent numbers after sorting;
recording the dimension k with the largest gap Ld over all dimensions and the mean value Lk of the two adjacent numbers forming that gap, and dividing the M feature vectors into two subsets according to k and Lk;
repeating the steps until the number of vectors in each subset is not greater than a preset value.
2. The face recognition method based on the wearable device according to claim 1, wherein the preset face recognition algorithm is a HOG & SVM target detection algorithm.
3. The face recognition method based on the wearable device according to claim 1, wherein the face contour extraction is implemented through a GBDT forest based on boosted residuals.
4. The face recognition method based on the wearable device according to claim 1, wherein the preset value is 8.
5. The face recognition method based on the wearable device according to claim 1, wherein the specific process of comparing the vector to be detected with the feature vectors in the preset face feature library to obtain the comparison result is as follows:
selecting one feature vector from each subset of the feature library and performing a vector inner product operation with the vector to be detected; selecting the subset with the largest result; performing a vector inner product operation between each feature vector in that subset and the vector to be detected; and selecting the feature vector with the largest result as the comparison result, taking the maximum value of the vector inner product as the similarity.
6. A wearable face detection device for frontier defense, characterized by comprising:
a shooting device for acquiring an original image within the field of view;
a storage device for storing a face feature library, the feature library containing preset feature vectors capable of representing face features; and
a processor for implementing, by executing computer program instructions, the face recognition method based on the wearable device according to any one of claims 1 to 5, so as to recognize whether a face matching the features stored in the feature library is present in the original image.
CN202010277581.9A 2020-04-10 2020-04-10 Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense Active CN111539271B (en)


Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010277581.9A CN111539271B (en) 2020-04-10 2020-04-10 Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense

Publications (2)

Publication Number Publication Date
CN111539271A CN111539271A (en) 2020-08-14
CN111539271B true CN111539271B (en) 2023-05-02

Family

ID=71974907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010277581.9A Active CN111539271B (en) 2020-04-10 2020-04-10 Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense

Country Status (1)

Country Link
CN (1) CN111539271B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7961937B2 (en) * 2005-10-26 2011-06-14 Hewlett-Packard Development Company, L.P. Pre-normalization data classification

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447441A (en) * 2015-03-19 2016-03-30 北京天诚盛业科技有限公司 Face authentication method and device
CN106096537A (en) * 2016-06-06 2016-11-09 山东大学 A kind of micro-expression automatic identifying method based on multi-scale sampling
CN106599871A (en) * 2016-12-23 2017-04-26 济南大学 Two-dimensional face feature classification method
KR20180094453A (en) * 2017-02-15 2018-08-23 동명대학교산학협력단 FACE RECOGNITION Technique using Multi-channel Gabor Filter and Center-symmetry Local Binary Pattern
CN106886771A (en) * 2017-03-15 2017-06-23 同济大学 The main information extracting method of image and face identification method based on modularization PCA
CN109740572A (en) * 2019-01-23 2019-05-10 浙江理工大学 A kind of human face in-vivo detection method based on partial color textural characteristics
CN110147776A (en) * 2019-05-24 2019-08-20 北京百度网讯科技有限公司 The method and apparatus for determining face key point position
CN110473169A (en) * 2019-07-10 2019-11-19 哈尔滨新光光电科技股份有限公司 A kind of emulation picture confidence evaluation method
CN110765951A (en) * 2019-10-24 2020-02-07 西安电子科技大学 Remote sensing image airplane target detection method based on bounding box correction algorithm

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Gender recognition based on face image using reinforced local binary patterns; Selvam, I.R.P. et al.; IET Computer Vision, Vol. 11, No. 6, pp. 415-425 *
Fast human body component division based on face detection and key-point recognition; Ma Xuan et al.; Computer Applications and Software, No. 01, pp. 273-276, 324 *
Research on a face recognition algorithm based on image-block sparse representation; Chen Hui; Journal of Xi'an University of Arts and Science (Natural Science Edition), No. 06, pp. 27-32 *
A multi-scale collaborative-representation face recognition algorithm based on local structure; Liu Yukai et al.; Computer Engineering and Applications, Vol. 54, No. 17, pp. 151-157 *
Research on face recognition methods based on non-negative matrix factorization; Wang Lei; China Masters' Theses Full-text Database (Electronic Journal), Information Science and Technology Series, No. 2, pp. I138-1909 *

Also Published As

Publication number Publication date
CN111539271A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
Yang et al. A multi-scale cascade fully convolutional network face detector
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
KR101781358B1 (en) Personal Identification System And Method By Face Recognition In Digital Image
Tiwari et al. A computer vision based framework for visual gun detection using SURF
Uliyan et al. Copy move image forgery detection using Hessian and center symmetric local binary pattern
KR101788225B1 (en) Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing
Chuang et al. Supervised and unsupervised feature extraction methods for underwater fish species recognition
CN110110755B (en) Pedestrian re-identification detection method and device based on PTGAN region difference and multiple branches
CN111833380B (en) Multi-view image fusion space target tracking system and method
Avula et al. A novel forest fire detection system using fuzzy entropy optimized thresholding and STN-based CNN
CN111091098A (en) Training method and detection method of detection model and related device
CN102156881B (en) Method for detecting salvage target based on multi-scale image phase information
Prasad et al. Passive copy-move forgery detection using SIFT, HOG and SURF features
Ticay-Rivas et al. Pollen classification based on geometrical, descriptors and colour features using decorrelation stretching method
Lai et al. Robust little flame detection on real-time video surveillance system
CN111539271B (en) Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense
Li et al. Crowd density estimation: An improved approach
CA3011713A1 (en) Hash-based appearance search
Singh et al. Template matching for detection & recognition of frontal view of human face through Matlab
Cai et al. Man-made object detection based on texture clustering and geometric structure feature extracting
Jacques et al. Head-shoulder human contour estimation in still images
Fatichah et al. Optical flow feature based for fire detection on video data
Lin et al. Refining PRNU-based detection of image forgeries
Fang et al. A fire detection and localisation method based on keyframes and superpixels for large-space buildings
Southey et al. Object discovery through motion, appearance and shape

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant