CN105184261A - Rapid video face identification method based on big data processing - Google Patents

Rapid video face identification method based on big data processing

Info

Publication number
CN105184261A
CN105184261A
Authority
CN
China
Prior art keywords
judged result
result
cluster
yes
perform step
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510577530.7A
Other languages
Chinese (zh)
Other versions
CN105184261B (en)
Inventor
陈文�
何明建
舒宇
顾莲军
黄华杰
陈志顺
徐世斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUIZHOU HUACHENG BUILDING TECHNOLOGY Co Ltd
Original Assignee
GUIZHOU HUACHENG BUILDING TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUIZHOU HUACHENG BUILDING TECHNOLOGY Co Ltd filed Critical GUIZHOU HUACHENG BUILDING TECHNOLOGY Co Ltd
Priority to CN201510577530.7A priority Critical patent/CN105184261B/en
Publication of CN105184261A publication Critical patent/CN105184261A/en
Application granted granted Critical
Publication of CN105184261B publication Critical patent/CN105184261B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The invention relates to a rapid video face identification method based on big data processing, belongs to the field of video face identification, and solves the problems of low identification speed and low identification accuracy in prior-art face identification methods. In the method, a face recognition database is established, and the collected face images together with the corresponding local facial feature data strings are stored in an image database; one local facial feature data string is generated from each image, and these strings are stored dispersed across different memories. When face identification is required, a face image is collected, the corresponding local facial feature data string is generated from it, and matching and identification are carried out level by level. The method is suitable for rapid face identification.

Description

Rapid video face identification method based on big data processing
Technical field
The present invention relates to the field of video face identification.
Background technology
Biometric identification technology has been widely applied in the security field in recent years, and face recognition, as one of its most efficient means, has been extensively studied. Compared with other biometric techniques, such as iris recognition and fingerprint recognition, face recognition is friendly and convenient, which has made it a focus of both research and practical use. The first main direction of current face recognition research is accuracy.
For ordinary face recognition, existing research at home and abroad has largely solved this problem; see, for example, "Research on realistic three-dimensional face modeling based on multi-angle photos", published by Huang Fu, Pan Guangzhen et al. in Electronic Testing in 2010,
and "A survey of specific three-dimensional face modeling", published by Zhao Xiaogang et al. in Computer and Digital Engineering in 2009.
For more special cases, however, such as distinguishing the faces of multiple-birth siblings or recognizing blurred faces, accuracy is still not high.
The other important factor is recognition speed.
At present, the speed of face image recognition depends on the computing capacity of the machine; for massive data, recognition remains time-consuming, which seriously constrains the speed and quality of security, criminal investigation, and similar work.
Summary of the invention
To solve the problems of low recognition speed and low recognition accuracy in existing face identification methods, the present invention provides a rapid video face identification method based on big data processing.
The rapid video face identification method based on big data processing comprises the following steps:
Step 1: establish a face recognition database. The database comprises an image database and P sub-databases, where P is a positive integer; the P sub-databases are embedded in P memories, respectively.
The image database stores the collected face images and the corresponding local facial feature data strings.
Each sub-database stores local facial feature data strings; each local facial feature data string in each sub-database is prepared as follows:
Step A1: read one face image from the image database.
Step A2: establish a three-dimensional rectangular coordinate system on the face image, with the pixel at the mid-point between the two eyebrows as the origin, the horizontal direction as the X axis, the vertical direction as the Y axis, and the direction perpendicular to the plane formed by the X and Y axes as the Z axis. Then determine on the face image the left-eye center pixel coordinate (LeX, LeY, LeZ), the right-eye center pixel coordinate (ReX, ReY, ReZ), the nose center pixel coordinate (NX, NY, NZ), the mouth center pixel coordinate (MX, MY, MZ), the left-eyebrow center pixel coordinate (LbX, LbY, LbZ), and the right-eyebrow center pixel coordinate (RbX, RbY, RbZ).
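As an illustration only, the step-A2 coordinate setup can be sketched in Python; the landmark names, the dictionary representation, and the function below are assumptions for illustration, not part of the patent.

```python
# Hypothetical sketch of step A2: the origin is the mid-brow pixel, and six
# facial landmarks are stored as (x, y, z) tuples in face coordinates.

LANDMARKS = ("left_eye", "right_eye", "nose", "mouth", "left_brow", "right_brow")

def to_face_coords(landmarks_px, origin_px):
    """Translate pixel-space landmark coordinates so the mid-brow point
    becomes the origin of the face coordinate system."""
    ox, oy, oz = origin_px
    return {name: (x - ox, y - oy, z - oz)
            for name, (x, y, z) in landmarks_px.items()}
```

How the landmarks themselves are detected is outside the scope of this sketch.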
Step A3: convert the face image into a grayscale map and compare the gray value of each pixel with a preset gray threshold one by one; set each pixel whose gray value is greater than the threshold to 1 and name it a significant point, and set each pixel whose gray value is not greater than the threshold to 0.
Step A4: for each significant point in turn, form a nine-square grid from the point and its eight nearest neighbouring pixels, and judge whether any of the other eight points in the grid is also a significant point. If so, name the point an effective pixel and proceed to step A5; if not, set the point to 0.
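Steps A3 and A4 amount to thresholding followed by an eight-neighbour validity check. A minimal Python sketch, assuming the grayscale image is a 2-D list of intensities (the function names are illustrative):

```python
# Sketch of steps A3-A4: pixels above the threshold become significant
# points (1); a significant point survives as an "effective" pixel only if
# at least one of its eight neighbours is also significant.

def binarize(gray, threshold):
    return [[1 if v > threshold else 0 for v in row] for row in gray]

def effective_points(binary):
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            # scan the eight neighbours in the 3x3 grid around (x, y)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w \
                            and binary[y + dy][x + dx]:
                        out[y][x] = 1  # keep as effective pixel
    return out
```

An isolated significant point with no significant neighbour is discarded, as step A4 prescribes.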
Step A5: record each region enclosed by effective pixels as a feature region, obtaining V feature regions in total, where V is a positive integer.
Step A6: for each feature region, obtain the coordinate (TX, TY, TZ) of each of its pixels, and judge one by one whether any pixel coordinate is identical to the left-eye, right-eye, nose, mouth, left-eyebrow, or right-eyebrow center pixel coordinate determined in step A2. If so, proceed to step A7; if not, proceed to step A9.
Step A7: fit a curve to the edge of each feature region that contains one of the six center pixel coordinates, obtaining the contour curve of each such feature region.
Step A8: for each feature region from step A7, compute the similarity between its contour curve and the axisymmetric or centrosymmetric figure closest to it, and judge whether this similarity is greater than a preset similarity threshold. If so, set all pixels in that feature region to 0; if not, proceed to step A9.
Step A9: judge one by one whether the Z-axis coordinate of each pixel in each feature region is greater than a set threshold; if so, set all pixels in that feature region to 0. Sort the remaining V1 feature regions (V1 ≤ V) by the number of pixels they contain, and accordingly label them the level-i local facial feature regions, i = 1, 2, ..., V1; then proceed to step A10.
Step A10: convert the image of each of the V1 local facial feature regions from step A9 into a corresponding binary data group one by one, and join the V1 data groups in order of level to obtain the local facial feature data string, with a flag bit separating adjacent data groups.
The level-1 local facial feature region has the highest rank; the level-V1 local facial feature region has the lowest.
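Steps A9 and A10 can be sketched as follows, under assumptions: each feature region is represented by its list of pixel values, each region is serialized as 8-bit binary groups, and a "||" string stands in for the flag bit between adjacent groups. All of these representations are illustrative, not prescribed by the patent.

```python
# Sketch of steps A9-A10: rank regions by pixel count (rank 1 = largest),
# serialize each region to a binary data group, and join the groups with a
# flag separator to form the local facial feature data string.

FLAG = "||"  # stand-in for the flag bit separating adjacent data groups

def build_feature_string(regions):
    # rank 1 = region with the most pixels, descending from there
    ranked = sorted(regions, key=len, reverse=True)
    groups = ["".join(format(v, "08b") for v in region) for region in ranked]
    return FLAG.join(groups)
```

Because the groups are ordered by rank, a prefix of the string always covers the largest (most informative) regions first, which is what the level-by-level matching in later steps relies on.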
Step A11: store the local facial feature data string obtained in step A10 in one of the sub-databases.
Step 2: equip the P memories with P wireless communication devices respectively, forming P wireless access points (APs), and network the P APs as follows:
Step B1: form the C APs located in the same communication cell into one cluster, where C is a positive integer; the APs in the cluster jointly elect one AP as the cluster head, and the other C-1 APs serve as cluster members.
The cluster heads of the communication cells can communicate with one another; cluster members belonging to different clusters cannot.
The image database is likewise equipped with a wireless communication device, forming an image database AP that can communicate with every AP in every communication cell.
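A toy model of the step-B1 topology, assuming a lowest-id election rule (the patent does not specify how the cluster head is elected; the class and field names are illustrative):

```python
# Sketch of the cluster topology: APs in the same communication cell form a
# cluster and elect one member as cluster head; cluster heads can talk to
# each other, ordinary members of different clusters cannot.

class Cluster:
    def __init__(self, ap_ids):
        self.members = list(ap_ids)
        self.head = min(ap_ids)  # toy election rule: lowest id wins

def can_communicate(cluster_a, ap_a, cluster_b, ap_b):
    if cluster_a is cluster_b:
        return True  # same cell: members communicate directly
    # across cells, only head-to-head communication is possible
    return ap_a == cluster_a.head and ap_b == cluster_b.head
```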
Step 3: collect the face image to be identified and process it by the method of steps A1 to A10 above, obtaining the local facial feature data string to be identified.
Step 4: transmit the local facial feature data string obtained in step 3 to one of the APs, record that AP as the initiating AP, and perform step C1.
Step C1: match the local facial feature data string to be identified in the sub-database of the initiating AP, and judge whether a consistent local facial feature data string exists. If so, read the corresponding face image from the image database via the AP and output it as the current recognition result; if not, proceed to step C2.
Step C2: take the first Q data groups of the local facial feature data string to be identified, with Q initially 1, match them in the sub-database of the current AP, and judge whether a matching local facial feature data string exists. If so, proceed to step C3; if not, proceed to step 5.
Step C3: judge whether the number of matching results is 1. If not, proceed to step C4; if so, proceed to step C5.
Step C4: increase Q by 1 and judge whether Q is greater than or equal to the number of data groups in the local facial feature data string to be identified. If so, proceed to step 5; if not, return to step C2.
Step C5: judge whether the current value of Q is greater than the set retrieval threshold. If so, proceed to step C6; if not, proceed to step C7.
Step C6: take the matched local facial feature data string as the face recognition result, generate a second-level result identification packet, and proceed to step 8.
Step C7: take the matched local facial feature data string as a candidate face recognition result, generate a third-level result identification packet, and proceed to step 8.
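The step-C1-to-C7 loop is, in effect, a prefix search that grows Q until the prefix isolates a single record. A hedged Python sketch, treating each stored feature string as a list of data groups (all names and the return convention are illustrative):

```python
# Sketch of steps C1-C7: try a full match first; otherwise grow a prefix of
# Q data groups until it matches exactly one record. If the isolating prefix
# is longer than the retrieval threshold, report a (second-level) result;
# otherwise only a (third-level) candidate. No match hands off to step 5.

def match_feature_string(query, database, retrieval_threshold):
    if query in database:
        return ("exact", query)            # step C1: full match found
    for q in range(1, len(query)):         # steps C2-C4: grow the prefix
        hits = [rec for rec in database if rec[:q] == query[:q]]
        if not hits:
            return ("none", None)          # hand off to the cluster (step 5)
        if len(hits) == 1:                 # steps C5-C7: unique hit
            level = "second" if q > retrieval_threshold else "third"
            return (level, hits[0])
    return ("none", None)                  # Q reached the group count (step 5)
```

Because the data groups are ordered by region rank, an early unique hit means the largest feature regions already discriminate the face, which is why a larger isolating Q earns only candidate status.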
Step 5: the initiating AP sends a wireless request signal to the cluster head of its cluster, which is recorded as the initiating cluster head, and judges whether a recognition result from the initiating cluster head is received within a set time period. If so, proceed to step 8; if not, proceed to step 9.
Within the time period, the initiating cluster head performs steps D1 to D7 in turn:
Step D1: match the local facial feature data string to be identified in the sub-database of the initiating cluster head, and judge whether a consistent local facial feature data string exists. If not, proceed to step D2; if so, read the corresponding face image from the image database, take it as the current recognition result, send it to the initiating AP, and end this face recognition.
Step D2: take the first Q data groups of the local facial feature data string to be identified, with Q initially 1, match them in the sub-database of the initiating cluster head, and judge whether a matching local facial feature data string exists. If so, proceed to step D3; if not, proceed to step 6.
Step D3: judge whether the number of matching results is 1. If not, proceed to step D4; if so, proceed to step D5.
Step D4: increase Q by 1 and judge whether Q is greater than or equal to the number of data groups in the local facial feature data string to be identified. If so, proceed to step 6; if not, return to step D2.
Step D5: judge whether the current value of Q is greater than the set retrieval threshold. If so, proceed to step D6; if not, proceed to step D7.
Step D6: read the corresponding face image from the image database, take it as the current recognition result, generate a second-level result identification packet, and proceed to step 7.
Step D7: take the matched local facial feature data string as a candidate face recognition result, generate a third-level result identification packet, and proceed to step 7.
Step 6: the initiating cluster head broadcasts the data packet from the initiating AP to every cluster head and waits for one time period.
Step E1: each cluster head broadcasts the local facial feature data string to be identified within its cluster; within the time period, every AP in the cluster performs steps E2 to E8.
Step E2: each AP matches the local facial feature data string to be identified in its own sub-database and judges whether a matching local facial feature data string exists. If so, it reads the corresponding face image from the image database, takes it as the current recognition result, sends a first-level result identification packet to the cluster head of its cluster, and proceeds to step E9; if not, it proceeds to step E3.
Step E3: take the first Q data groups of the local facial feature data string to be identified, with Q initially 1, match them in the AP's own sub-database, and judge whether a matching local facial feature data string exists. If so, proceed to step E4; if not, end face recognition for this time period.
Step E4: judge whether the number of matching results is 1. If not, proceed to step E5; if so, proceed to step E6.
Step E5: increase Q by 1 and judge whether Q is greater than or equal to the number of data groups in the local facial feature data string to be identified. If not, return to step E3; if so, end face recognition for this time period.
Step E6: judge whether the current value of Q is greater than the set retrieval threshold. If so, proceed to step E7; if not, proceed to step E8.
Step E7: read the corresponding face image from the image database, take it as the current recognition result, send a second-level result identification packet to the cluster head of this cluster, and proceed to step E9.
Step E8: take the matched local facial feature data string as a candidate face recognition result, send a third-level result identification packet to the cluster head of this cluster, and proceed to step E9.
Step E9: the cluster head of this cluster receives the first-level, second-level, or third-level result identification packets from each AP and judges whether any first-level result identification packet exists. If so, it sends that first-level packet to the initiating cluster head; if not, it proceeds to step E10.
Step E10: the cluster head judges whether any second-level result identification packet exists. If not, proceed to step E12; if so, proceed to step E11.
Step E11: judge whether the number of second-level result identification packets is 1. If so, send that packet to the initiating cluster head; if not, compare the Q values in the second-level packets and send the packet with the largest Q to the initiating cluster head. Then proceed to step 7.
Step E12: the cluster head judges whether any third-level result identification packet exists. If not, proceed to step E14; if so, proceed to step E13.
Step E13: judge whether the number of third-level result identification packets is 1. If so, send that packet to the initiating cluster head; if not, compare the Q values in the third-level packets and send the packet with the largest Q to the initiating cluster head. Then proceed to step 7.
Step E14: end face recognition for this time period.
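The arbitration performed in steps E9 to E14 (and repeated at the initiating cluster head and initiating AP in steps 7 and 8) can be sketched as follows, modelling each result identification packet as a (level, Q, payload) tuple, which is an assumption for illustration:

```python
# Sketch of the result-packet arbitration: a first-level packet wins
# outright; otherwise the second-level packet with the largest Q wins;
# otherwise the third-level packet with the largest Q; otherwise no result.

def arbitrate(packets):
    firsts = [p for p in packets if p[0] == 1]
    if firsts:
        return firsts[0]                   # exact match beats everything
    for level in (2, 3):
        candidates = [p for p in packets if p[0] == level]
        if candidates:
            return max(candidates, key=lambda p: p[1])  # largest Q wins
    return None                            # no packet: recognition ends
```

Preferring the largest Q selects the candidate whose matching prefix covered the most data groups, i.e. the most specific partial match available.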
Step 7: the initiating cluster head judges whether any second-level result identification packet exists. If not, proceed to step F2; if so, proceed to step F1.
Step F1: judge whether the number of second-level result identification packets is 1. If so, send that packet to the initiating AP; if not, compare the Q values in the second-level packets and send the packet with the largest Q to the initiating AP. Then proceed to step 8.
Step F2: the initiating cluster head judges whether any third-level result identification packet exists. If not, proceed to step F4; if so, proceed to step F3.
Step F3: judge whether the number of third-level result identification packets is 1. If so, send that packet to the initiating AP; if not, compare the Q values in the third-level packets and send the packet with the largest Q to the initiating AP. Then proceed to step 8.
Step F4: end face recognition for this time period.
Step 8: the initiating AP judges whether any second-level result identification packet exists. If not, proceed to step G2; if so, proceed to step G1.
Step G1: judge whether the number of second-level result identification packets is 1. If so, output that packet as the recognition result; if not, compare the Q values in the second-level packets and output the packet with the largest Q.
Step G2: the initiating AP judges whether any third-level result identification packet exists. If not, proceed to step G4; if so, proceed to step G3.
Step G3: judge whether the number of third-level result identification packets is 1. If so, output that packet as the recognition result; if not, compare the Q values in the third-level packets and output the packet with the largest Q.
Step G4: end face recognition for this time period.
Step 9: output the recognition result obtained in step C6 or step C7 as the final recognition result.
The present invention establishes a face recognition database in which the image database stores the collected face images and the corresponding local facial feature data strings; a local facial feature data string is generated for every face image, and these strings are stored dispersed across different memories. When face recognition is needed, a face image is collected, the corresponding local facial feature data string is generated from it, and matching and identification proceed level by level.
Tests show that the recognition speed of the present invention is nearly double that of existing face identification methods, and that when the maximum number of features is intercepted, the recognition accuracy approaches 99%.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the present invention.
Fig. 2 is a schematic diagram of communication between the clusters of the present invention; in the figure, XQ denotes a communication cell and 101 denotes a cluster head.
Embodiment
Embodiment one, fast video face identification method based on large data processing, it comprises the following steps:
Step one, set up face recognition database; Described database comprises image data base and P subdata base, and described P is positive integer; A described P subdata base embeds in the storer of P respectively;
Described image data base is for storing the facial image and corresponding local facial feature's serial data that collect;
For storing local facial feature's serial data in each subdata base, the preparation method of each local facial feature's serial data in each subdata base is:
A width facial image in steps A 1, reads image data storehouse;
Steps A 2, with the pixel at center between face two eyebrow on facial image for initial point, being X-axis with horizontal direction, take vertical direction as Y-axis, and the in-plane formed with vertical X axis and Y-axis sets up three-dimensional cartesian coordinate system for Z axis; And determine left eye center pixel coordinate (LeX on facial image, LeY, LeZ), right eye center pixel coordinate (ReX, ReY, ReZ), nose center pixel coordinate (NX, NY, NZ), mouth center pixel coordinate (MX, MY, MZ), left eyebrow center pixel coordinate (LbX, LbY, and right eyebrow center pixel coordinate (LbX, LbY, LbZ) LbZ);
Steps A 3, facial image is converted into gray-scale map, one by one the gray-scale value of pixel each in gray-scale map and the gray threshold preset is compared, will the pixel set of default gray threshold be greater than, and called after significant point; The pixel reset of standard value will be not more than;
Steps A 4, one by one by each significant point with surround eight the most contiguous pixels of this significant point and form nine grids, and judge whether have significant point in other eight points in these nine grids; If judged result is yes, then by this significant point called after effective pixel points, perform steps A 5; If judged result is no, then by this significant point reset;
Steps A 5, region effective pixel points enclosed are designated as characteristic area, and obtain X characteristic area altogether, X is positive integer;
Steps A 6, for each characteristic area, provide the coordinate (TX of each pixel, TY, TZ), and the coordinate judging each pixel one by one whether with left eye center pixel coordinate (LeX, LeY, LeZ), right eye center pixel coordinate (ReX, ReY, ReZ), nose center pixel coordinate (NX, NY, NZ), mouth center pixel coordinate (MX, MY, MZ), left eyebrow center pixel coordinate (LbX, LbY, or right eyebrow center pixel coordinate (LbX LbZ), LbY, LbZ) identical, if judged result is yes, then perform steps A 7, if judged result is no, then perform steps A 9,
Steps A 7, to including left eye center pixel coordinate (LeX, LeY, LeZ), right eye center pixel coordinate (ReX, ReY, ReZ), nose center pixel coordinate (NX, NY, NZ), mouth center pixel coordinate (MX, MY, MZ), left eyebrow center pixel coordinate (LbX, LbY, or right eyebrow center pixel coordinate (LbX LbZ), LbY, LbZ) one of the edge of characteristic area carry out curve fitting, obtain each characteristic area contour curve;
Steps A 8, for the characteristic area in steps A 7, judge the similarity of the zhou duicheng tuxing that this contour curve is the most close with it or centrosymmetric image, and judge whether this similarity is greater than default similarity threshold, if judged result is yes, then to all pixel resets in this characteristic area; If judged result is no, then perform steps A 9;
Whether the Z axis coordinate figure of each pixel in steps A 9, one by one judging characteristic region is greater than the threshold value of setting, if judged result is yes, then by all pixel resets in this characteristic area, if judged result is no, then remaining whole V1 characteristic area arranges by the number comprising pixel, V1 is less than or equal to V, and is set to i level face local characteristic region according to this; I=1,2 ... V1, then performs steps A 10;
Steps A 10, the image of each face local characteristic region in the image of the V1 in steps A 9 is converted to corresponding binary data group one by one, and this V1 data group is fitted together according to the size groups of rank, obtain local facial feature's serial data, separate with zone bit between adjacent two data group;
1st grade of face local characteristic region is that rank is maximum; V1 level face local characteristic region is that rank is minimum;
Steps A 11, by the local facial feature's serial data in steps A 10 stored in one of them subdata base;
Step 2, the storer of P is configured P radio communication device respectively, form P wireless access point AP; And by P wireless access point AP networking, concrete grammar is:
Step B1, C the wireless access point AP being positioned at same communication cell is formed one bunch, C is positive integer, and in this bunch, each wireless access point AP elects a wireless access point AP as a bunch head jointly, and other C-1 wireless access point AP is a bunch member;
Bunch head in each communication cell bunch can intercom mutually, and bunch member being positioned at different bunches can not intercom mutually;
Image data base is configured a radio communication device; Form image data base WAP, described image data base WAP can intercom with each wireless contact AP phase in each communication cell;
Step 3, gather facial image to be identified, described facial image to be identified is processed to the method for steps A 10 according to above-mentioned steps A1, obtains local facial feature's serial data to be identified;
Step 4, one of them wireless access point AP of local facial feature's serial date transfer to be identified step 3 obtained, be designated as initiation wireless access point AP by this wireless access point AP, and perform step C1;
Step C1, described local facial feature's serial data to be identified is being initiated to mate in the subdata base in wireless access point AP, and judge whether to mate consistent local facial feature's serial data, if the judgment is Yes, corresponding facial image then in reads image data storehouse in WAP, exports this facial image as when previous recognition result; If judged result is no, then perform step C2;
Step C2, in local facial feature's serial data to be identified intercept before Q data group, the initial value of Q is 1; And mate in subdata base in current wireless access point AP, and judge whether to mate consistent local facial feature's serial data, if judged result is yes, then perform step C3; If judged result is no, then perform step 5;
Whether step C3, the quantity judging matching result are 1; If judged result is yes, then perform step C4; If judged result is no, then perform step C5;
Step C4, make the value of Q add 1, and judge whether the value of Q is more than or equal to the quantity of data group in local facial feature's serial data to be identified, if judged result is yes, then perform step 5; If judged result is no, then returns and perform step C2;
Step C5, judge whether the value of current Q is greater than setting retrieval threshold, if judged result is yes, then perform step C6; If judged result is no, then perform step C7;
Step C6, local facial feature's serial data of this being recognized as face recognition result, and generate second level result identification bag, and perform step 8;
Step C7, the local facial feature's serial data alternatively face recognition result this recognized, generate third level result identification bag, and perform step 8;
Step 5: the initiating wireless access point AP sends a wireless request signal to the cluster head of the cluster it belongs to, this cluster head being denoted the initiating cluster head; the initiating wireless access point AP judges whether a recognition result is received from the initiating cluster head within a set time period; if yes, step 8 is performed; if no, step 9 is performed;
Within the time period, the initiating cluster head performs steps D1 to D7 in turn:
Step D1: the facial local-feature data string to be identified is matched against the sub-database of the initiating cluster head, and it is judged whether a fully matching facial local-feature data string exists; if no, step D2 is performed; if yes, the corresponding facial image is read from the image database, taken as the current recognition result and sent to the initiating wireless access point AP, and this face recognition ends;
Step D2: the first Q data groups of the facial local-feature data string to be identified are intercepted, the initial value of Q being 1, and matched in the sub-database of the initiating cluster head, and it is judged whether a matching facial local-feature data string exists; if yes, step D3 is performed; if no, step 6 is performed;
Step D3: it is judged whether the number of matching results is 1; if no, step D4 is performed; if yes, step D5 is performed;
Step D4: the value of Q is increased by 1, and it is judged whether Q is greater than or equal to the number of data groups in the facial local-feature data string to be identified; if yes, step 6 is performed; if no, the flow returns to step D2;
Step D5: it is judged whether the current value of Q exceeds the set retrieval threshold; if yes, step D6 is performed; if no, step D7 is performed;
Step D6: the corresponding facial image is read from the image database and taken as the current recognition result, a second-level result identification packet is generated, and step 7 is performed;
Step D7: the recognized facial local-feature data string is taken as a candidate face recognition result, a third-level result identification packet is generated, and step 7 is performed;
Step 6: the initiating cluster head broadcasts the data packet from the initiating wireless access point AP to each cluster head and waits for one time period;
Step E1: each cluster head broadcasts the facial local-feature data string to be identified within its own cluster; within one time period, every wireless access point AP in the cluster performs steps E2 to E8;
Step E2: each wireless access point AP matches the facial local-feature data string to be identified against its own sub-database and judges whether a matching facial local-feature data string exists; if yes, it reads the corresponding facial image from the image database, takes it as the current recognition result, sends a first-level result identification packet to the cluster head of its cluster, and performs step E9; if no, it performs step E3;
Step E3: the first Q data groups of the facial local-feature data string to be identified are intercepted, the initial value of Q being 1, and matched in the AP's own sub-database, and it is judged whether a matching facial local-feature data string exists; if yes, step E4 is performed; if no, face recognition for this time period ends;
Step E4: it is judged whether the number of matching results is 1; if no, step E5 is performed; if yes, step E6 is performed;
Step E5: the value of Q is increased by 1, and it is judged whether Q is greater than or equal to the number of data groups in the facial local-feature data string to be identified; if no, the flow returns to step E3; if yes, face recognition for this time period ends;
Step E6: it is judged whether the current value of Q exceeds the set retrieval threshold; if yes, step E7 is performed; if no, step E8 is performed;
Step E7: the corresponding facial image is read from the image database and taken as the current recognition result, a second-level result identification packet is sent to the cluster head of the cluster, and step E9 is performed;
Step E8: the recognized facial local-feature data string is taken as a candidate face recognition result, a third-level result identification packet is sent to the cluster head of the cluster, and step E9 is performed;
Step E9: the cluster head of the cluster receives the first-level, second-level or third-level result identification packets from the wireless access points AP and judges whether a first-level result identification packet exists; if yes, it sends this first-level result identification packet to the initiating cluster head; if no, it performs step E10;
Step E10: the cluster head judges whether a second-level result identification packet exists; if no, step E12 is performed; if yes, step E11 is performed;
Step E11: it is judged whether the number of second-level result identification packets is 1; if yes, this second-level result identification packet is sent to the initiating cluster head; if no, the Q values in the second-level result identification packets are compared and the packet with the largest Q is sent to the initiating cluster head; step 7 is then performed;
Step E12: the cluster head judges whether a third-level result identification packet exists; if no, step E14 is performed; if yes, step E13 is performed;
Step E13: it is judged whether the number of third-level result identification packets is 1; if yes, this third-level result identification packet is sent to the initiating cluster head; if no, the Q values in the third-level result identification packets are compared and the packet with the largest Q is sent to the initiating cluster head; step 7 is then performed;
Step E14: face recognition for this time period ends;
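The packet selection of steps E9 to E13 (and likewise steps F1 to F3 and G1 to G3) can be sketched as follows; representing a result identification packet as a dictionary with `level` and `Q` fields is an assumption made here for illustration:

```python
def arbitrate(packets):
    """Sketch of steps E9-E13: a cluster head forwards the best packet it
    received in one time period. Each packet is a dict with 'level'
    (1, 2 or 3) and, for levels 2 and 3, the matched prefix length 'Q'.
    """
    for level in (1, 2, 3):
        candidates = [p for p in packets if p["level"] == level]
        if not candidates:
            continue
        if level == 1 or len(candidates) == 1:
            return candidates[0]              # exact match, or unique packet
        # Several packets at the same level: keep the largest Q (E11/E13).
        return max(candidates, key=lambda p: p["Q"])
    return None                               # step E14: nothing to forward
```

The same preference order, first level before second level before third level, and largest Q within a level, is applied again by the initiating cluster head in step 7 and by the initiating wireless access point AP in step 8.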
Step 7: the initiating cluster head judges whether a second-level result identification packet exists; if no, step F2 is performed; if yes, step F1 is performed;
Step F1: it is judged whether the number of second-level result identification packets is 1; if yes, this second-level result identification packet is sent to the initiating wireless access point AP; if no, the Q values in the second-level result identification packets are compared and the packet with the largest Q is sent to the initiating wireless access point AP; step 8 is then performed;
Step F2: the initiating cluster head judges whether a third-level result identification packet exists; if no, step F4 is performed; if yes, step F3 is performed;
Step F3: it is judged whether the number of third-level result identification packets is 1; if yes, this third-level result identification packet is sent to the initiating wireless access point AP; if no, the Q values in the third-level result identification packets are compared and the packet with the largest Q is sent to the initiating wireless access point AP; step 8 is then performed;
Step F4: face recognition for this time period ends;
Step 8: the initiating wireless access point AP judges whether a second-level result identification packet exists; if no, step G2 is performed; if yes, step G1 is performed;
Step G1: it is judged whether the number of second-level result identification packets is 1; if yes, this second-level result identification packet is output as the recognition result; if no, the Q values in the second-level result identification packets are compared and the packet with the largest Q is output;
Step G2: the initiating wireless access point AP judges whether a third-level result identification packet exists; if no, step G4 is performed; if yes, step G3 is performed;
Step G3: it is judged whether the number of third-level result identification packets is 1; if yes, this third-level result identification packet is output as the recognition result; if no, the Q values in the third-level result identification packets are compared and the packet with the largest Q is output;
Step G4: face recognition for this time period ends;
Step 9: the recognition result obtained in step C6 or step C7 is output as the final recognition result.
In this embodiment, the method of determining, in step A2, the left-eye centre pixel coordinate (LeX, LeY, LeZ), right-eye centre pixel coordinate (ReX, ReY, ReZ), nose centre pixel coordinate (NX, NY, NZ), mouth centre pixel coordinate (MX, MY, MZ), left-eyebrow centre pixel coordinate (LbX, LbY, LbZ) and right-eyebrow centre pixel coordinate (RbX, RbY, RbZ) on the facial image is as follows:
For the left-eye centre pixel coordinate (LeX, LeY, LeZ) and the right-eye centre pixel coordinate (ReX, ReY, ReZ), the centre of the eyeball is chosen as the eye centre. For the nose centre pixel coordinate (NX, NY, NZ), mouth centre pixel coordinate (MX, MY, MZ), left-eyebrow centre pixel coordinate (LbX, LbY, LbZ) and right-eyebrow centre pixel coordinate (RbX, RbY, RbZ), the centre is chosen as the intersection of the line joining the feature's leftmost and rightmost points with the line joining its highest and lowest points.
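As a hedged illustration of the intersection rule above, the centre of a nose, mouth or eyebrow region can be computed from its extreme pixels. The 2-D simplification (the Z coordinate is omitted) and the function name are assumptions:

```python
def region_center(pixels):
    """Sketch of the rule in this embodiment: the centre is the intersection
    of the line joining the leftmost and rightmost pixels with the line
    joining the highest and lowest pixels. `pixels` is a list of (x, y)
    image coordinates.
    """
    left  = min(pixels, key=lambda p: p[0])
    right = max(pixels, key=lambda p: p[0])
    top   = min(pixels, key=lambda p: p[1])
    bot   = max(pixels, key=lambda p: p[1])
    # Solve for the intersection of the two lines (standard 2x2 system).
    (x1, y1), (x2, y2) = left, right
    (x3, y3), (x4, y4) = top, bot
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        raise ValueError("degenerate region: the two lines are parallel")
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)
```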
In this embodiment, the effective pixel points of steps A3 and A4 may also be chosen in the following way:
Step H1: centred on any pixel, a local region and a local background are expanded, and the local background is partitioned into regions so that it forms a nine-square grid with the local region at its centre; the intensity contrast between the local region and the local background yields the saliency value of the pixel;
Step H2: the saliency values of all pixels of the original image are obtained by step H1, and the grey value of each pixel of the original face image is then replaced by its saliency value, yielding a saliency map;
Step H3: the positions whose saliency value exceeds a set threshold T are taken as candidate targets.
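Steps H1 and H2 can be sketched as follows; the absolute difference of mean intensities as the contrast measure and the window sizes are assumptions, as the patent does not fix them:

```python
import numpy as np

def saliency_map(gray, k=3):
    """Sketch of steps H1-H2: the saliency of a pixel is the intensity
    contrast between a local region (k x k) and the surrounding local
    background (a 3k x 3k window, i.e. a nine-square grid with the local
    region at its centre).
    """
    h, w = gray.shape
    pad = (3 * k) // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    r_in, r_out = k // 2, (3 * k) // 2
    for y in range(h):
        for x in range(w):
            cy, cx = y + pad, x + pad
            inner = padded[cy - r_in:cy + r_in + 1, cx - r_in:cx + r_in + 1]
            outer = padded[cy - r_out:cy + r_out + 1, cx - r_out:cx + r_out + 1]
            ring_sum = outer.sum() - inner.sum()   # background ring only
            ring_n = outer.size - inner.size
            out[y, x] = abs(inner.mean() - ring_sum / ring_n)
    return out  # step H2: grey values replaced by saliency values
```

Step H3 then reduces to thresholding this map, e.g. `saliency_map(gray) > T`.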
In this embodiment, the facial local-feature information is stored in multiple sub-databases and recognition proceeds hierarchically over the network. The computational load is thereby spread across the different servers connected to the database, distributed recognition is carried out simultaneously, and the information is screened and fed back to yield the final recognition result. This departs from the existing image recognition pattern and greatly increases recognition speed.
In this embodiment, data are exchanged by means of network clustering, which keeps closely grouped ad hoc nodes from dropping off the network. The cluster head monitors the state of the nodes in its cluster and ensures that every node remains capable of data communication. This prevents recognitions from being missed as far as possible, guarantees that a single recognition can perform the maximum amount of data matching, and ensures the accuracy of the recognition result.
Embodiment two. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment one in that, in step B1, the principle by which the C wireless access points AP in each cluster jointly elect one wireless access point AP as cluster head is: the wireless access point AP with the highest preset security-protection rank is the cluster head.
A typical application scenario of this embodiment is a public security system: the child servers are installed in public security bureaus at every level and store the facial local-feature data strings of local residents, and the server deployed at the highest-ranking public security authority (e.g. the general public security bureau) is elected cluster head.
Embodiment three. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment one in that, in step 5, the information in the wireless request signal comprises the ID of the initiating wireless access point AP and the facial local-feature data string to be identified.
Embodiment four. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment one in that the information in the second-level result identification packet of step C6 comprises the ID of the wireless access point AP, the recognized facial local-feature data string, the recognition level and the value of Q.
Embodiment five. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment one in that the information in the third-level result identification packet of step C7 comprises the ID of the wireless access point AP, the recognized facial local-feature data string, the recognition level and the value of Q.
Embodiment six. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment one in that, in step A4, the method of forming a nine-square grid from each salient point and the eight nearest pixels surrounding it is:
the salient point and its eight nearest neighbours, namely the upper, upper-right, right, lower-right, lower, lower-left, left and upper-left pixels, form a 3 × 3 array.
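The salient-point screening of steps A3 and A4 with the 3 × 3 array of embodiment six might look as follows; the boolean-grid representation of the image is an assumption:

```python
def effective_points(gray, threshold):
    """Sketch of steps A3-A4: pixels above the grey threshold become
    salient points; a salient point is kept as an effective pixel point
    only if at least one of the eight neighbours in its nine-square grid
    is also salient, otherwise it is reset.
    `gray` is a 2-D list of grey values; returns a 2-D list of booleans.
    """
    h, w = len(gray), len(gray[0])
    salient = [[gray[y][x] > threshold for x in range(w)] for y in range(h)]
    keep = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not salient[y][x]:
                continue
            # Scan the eight neighbours of the 3 x 3 nine-square grid.
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w \
                            and salient[y + dy][x + dx]:
                        keep[y][x] = True
    return keep
```

An isolated salient point is thus discarded as noise, while adjacent salient points survive to be grouped into feature regions in step A5.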
Embodiment seven. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment one in that, in step A9, the set threshold is 0.1.
Embodiment eight. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment one in that, after the remaining V1 feature regions have been sorted by number of pixels in step A9, the following operations are performed:
Step I1: it is judged whether the number of remaining feature regions exceeds a set value; if yes, step I2 is performed; if no, step A10 is performed;
Step I2: the first Q1 feature regions are kept, the remaining feature regions are set to 0, and step A10 is performed.
Embodiment nine. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment eight in that Q1 = 30.
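Steps I1 and I2 amount to capping the number of feature regions carried into step A10; a minimal sketch, assuming regions are represented simply as lists of pixel coordinates:

```python
def cap_regions(regions, max_regions=30):
    """Sketch of steps I1-I2 (embodiments eight and nine): if more than
    `max_regions` feature regions survive step A9, keep only the largest
    ones by pixel count and discard (zero out) the rest.
    """
    if len(regions) <= max_regions:
        return regions                        # step I1: nothing to trim
    ranked = sorted(regions, key=len, reverse=True)
    return ranked[:max_regions]               # step I2: first Q1 regions
```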
In this embodiment, 30 feature regions are used as the parameters for face recognition, which further increases speed; it suits face recognition scenarios with relatively low accuracy requirements.
Embodiment ten. This embodiment differs from the rapid video face recognition method based on big data processing of embodiment one in that, in step A8, the nearest axisymmetric or centrosymmetric figures comprise: the isosceles triangle, isosceles trapezoid, rectangle, rhombus, square, ellipse, semicircle, circle and regular polygons.
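The similarity judgment of step A8 is not specified in detail; as one hedged possibility, a centrosymmetry score for a contour could be computed as follows (the metric itself is entirely an assumption, not the patent's method):

```python
import math

def centrosymmetry_score(points):
    """Crude similarity between a contour and its nearest centrosymmetric
    figure: one minus the normalised distance between each contour point
    and the contour reflected through the centroid. A score of 1.0 means
    the contour is perfectly centrosymmetric.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # Reflect every point through the centroid.
    mirrored = [(2 * cx - x, 2 * cy - y) for x, y in points]
    scale = max(math.hypot(x - cx, y - cy) for x, y in points) or 1.0
    err = 0.0
    for x, y in points:
        err += min(math.hypot(x - mx, y - my) for mx, my in mirrored)
    return 1.0 - err / (n * scale)
```

Under step A8, a region whose score exceeds the preset similarity threshold would be treated as a regular, non-discriminative shape and have all its pixels reset.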

Claims (10)

1. A rapid video face recognition method based on big data processing, characterized in that it comprises the following steps:
Step 1: a face recognition database is established; the database comprises an image database and P sub-databases, P being a positive integer; the P sub-databases are embedded in P memories respectively;
the image database stores the collected facial images and the corresponding facial local-feature data strings;
each sub-database stores facial local-feature data strings, and each facial local-feature data string in each sub-database is obtained as follows:
Step A1: one facial image is read from the image database;
Step A2: a three-dimensional rectangular coordinate system is established with the pixel at the midpoint between the two eyebrows of the face on the facial image as the origin, the horizontal direction as the X axis, the vertical direction as the Y axis, and the direction perpendicular to the plane formed by the X and Y axes as the Z axis; and the left-eye centre pixel coordinate (LeX, LeY, LeZ), right-eye centre pixel coordinate (ReX, ReY, ReZ), nose centre pixel coordinate (NX, NY, NZ), mouth centre pixel coordinate (MX, MY, MZ), left-eyebrow centre pixel coordinate (LbX, LbY, LbZ) and right-eyebrow centre pixel coordinate (RbX, RbY, RbZ) on the facial image are determined;
Step A3: the facial image is converted into a grey-scale map, and the grey value of each pixel in the grey-scale map is compared one by one with a preset grey threshold; the pixels whose grey value exceeds the preset grey threshold are set and named salient points, and the pixels whose grey value does not exceed the threshold are reset;
Step A4: each salient point and the eight nearest pixels surrounding it are formed one by one into a nine-square grid, and it is judged whether any of the other eight points in this grid is also a salient point; if yes, this salient point is named an effective pixel point and step A5 is performed; if no, this salient point is reset;
Step A5: each region enclosed by effective pixel points is denoted a feature region, X feature regions being obtained in total, X being a positive integer;
Step A6: for each feature region, the coordinate (TX, TY, TZ) of each pixel is given, and it is judged one by one whether the coordinate of any pixel equals the left-eye centre pixel coordinate (LeX, LeY, LeZ), right-eye centre pixel coordinate (ReX, ReY, ReZ), nose centre pixel coordinate (NX, NY, NZ), mouth centre pixel coordinate (MX, MY, MZ), left-eyebrow centre pixel coordinate (LbX, LbY, LbZ) or right-eyebrow centre pixel coordinate (RbX, RbY, RbZ); if yes, step A7 is performed; if no, step A9 is performed;
Step A7: the edge of each feature region containing one of the left-eye centre pixel coordinate (LeX, LeY, LeZ), right-eye centre pixel coordinate (ReX, ReY, ReZ), nose centre pixel coordinate (NX, NY, NZ), mouth centre pixel coordinate (MX, MY, MZ), left-eyebrow centre pixel coordinate (LbX, LbY, LbZ) or right-eyebrow centre pixel coordinate (RbX, RbY, RbZ) is fitted with a curve, yielding the contour curve of each such feature region;
Step A8: for each feature region of step A7, the similarity between its contour curve and the nearest axisymmetric or centrosymmetric figure is evaluated, and it is judged whether this similarity exceeds a preset similarity threshold; if yes, all pixels in this feature region are reset; if no, step A9 is performed;
Step A9: it is judged one by one whether the Z-axis coordinate of each pixel in a feature region exceeds the set threshold; if yes, all pixels in this feature region are reset; if no, the remaining V1 feature regions are sorted by number of pixels, V1 being no greater than X, and accordingly designated level-i facial local-feature regions, i = 1, 2, …, V1; step A10 is then performed;
Step A10: the image of each of the V1 facial local-feature regions of step A9 is converted one by one into a corresponding binary data group, and the V1 data groups are assembled in order of level into a facial local-feature data string, adjacent data groups being separated by a flag bit;
the level-1 facial local-feature region has the highest level and the level-V1 facial local-feature region the lowest;
Step A11: the facial local-feature data string of step A10 is stored in one of the sub-databases;
Step 2: the P memories are fitted with P wireless communication devices respectively, forming P wireless access points AP, and the P wireless access points AP are networked as follows:
Step B1: the C wireless access points AP located in one communication cell are formed into a cluster, C being a positive integer; the wireless access points AP in the cluster jointly elect one wireless access point AP as cluster head, the other C−1 wireless access points AP being cluster members;
the cluster heads of the clusters in the communication cells can communicate with one another, while cluster members belonging to different clusters cannot;
the image database is fitted with a wireless communication device, forming an image-database wireless access point that can communicate with every wireless access point AP in every communication cell;
Step 3: a facial image to be identified is collected and processed by the methods of steps A1 to A10 above, yielding a facial local-feature data string to be identified;
Step 4: the facial local-feature data string to be identified obtained in step 3 is transmitted to one of the wireless access points AP, this wireless access point AP is denoted the initiating wireless access point AP, and step C1 is performed;
Step C1: the facial local-feature data string to be identified is matched against the sub-database in the initiating wireless access point AP, and it is judged whether a fully matching facial local-feature data string exists; if yes, the corresponding facial image is read from the image database through the wireless access point AP and output as the current recognition result; if no, step C2 is performed;
Step C2: the first Q data groups of the facial local-feature data string to be identified are intercepted, the initial value of Q being 1, and matched in the sub-database of the current wireless access point AP, and it is judged whether a matching facial local-feature data string exists; if yes, step C3 is performed; if no, step 5 is performed;
Step C3: it is judged whether the number of matching results is 1; if yes, step C5 is performed; if no, step C4 is performed;
Step C4: the value of Q is increased by 1, and it is judged whether Q is greater than or equal to the number of data groups in the facial local-feature data string to be identified; if yes, step 5 is performed; if no, the flow returns to step C2;
Step C5: it is judged whether the current value of Q exceeds the set retrieval threshold; if yes, step C6 is performed; if no, step C7 is performed;
Step C6: the recognized facial local-feature data string is taken as the face recognition result, a second-level result identification packet is generated, and step 8 is performed;
Step C7: the recognized facial local-feature data string is taken as a candidate face recognition result, a third-level result identification packet is generated, and step 8 is performed;
Step 5: the initiating wireless access point AP sends a wireless request signal to the cluster head of the cluster it belongs to, this cluster head being denoted the initiating cluster head; the initiating wireless access point AP judges whether a recognition result is received from the initiating cluster head within a set time period; if yes, step 8 is performed; if no, step 9 is performed;
within the time period, the initiating cluster head performs steps D1 to D7 in turn:
Step D1: the facial local-feature data string to be identified is matched against the sub-database of the initiating cluster head, and it is judged whether a fully matching facial local-feature data string exists; if no, step D2 is performed; if yes, the corresponding facial image is read from the image database, taken as the current recognition result and sent to the initiating wireless access point AP, and this face recognition ends;
Step D2: the first Q data groups of the facial local-feature data string to be identified are intercepted, the initial value of Q being 1, and matched in the sub-database of the initiating cluster head, and it is judged whether a matching facial local-feature data string exists; if yes, step D3 is performed; if no, step 6 is performed;
Step D3: it is judged whether the number of matching results is 1; if no, step D4 is performed; if yes, step D5 is performed;
Step D4: the value of Q is increased by 1, and it is judged whether Q is greater than or equal to the number of data groups in the facial local-feature data string to be identified; if yes, step 6 is performed; if no, the flow returns to step D2;
Step D5: it is judged whether the current value of Q exceeds the set retrieval threshold; if yes, step D6 is performed; if no, step D7 is performed;
Step D6: the corresponding facial image is read from the image database and taken as the current recognition result, a second-level result identification packet is generated, and step 7 is performed;
Step D7: the recognized facial local-feature data string is taken as a candidate face recognition result, a third-level result identification packet is generated, and step 7 is performed;
Step 6: the initiating cluster head broadcasts the data packet from the initiating wireless access point AP to each cluster head and waits for one time period;
Step E1: each cluster head broadcasts the facial local-feature data string to be identified within its own cluster; within one time period, every wireless access point AP in the cluster performs steps E2 to E8;
Step E2: each wireless access point AP matches the facial local-feature data string to be identified against its own sub-database and judges whether a matching facial local-feature data string exists; if yes, it reads the corresponding facial image from the image database, takes it as the current recognition result, sends a first-level result identification packet to the cluster head of its cluster, and performs step E9; if no, it performs step E3;
Step E3: the first Q data groups of the facial local-feature data string to be identified are intercepted, the initial value of Q being 1, and matched in the AP's own sub-database, and it is judged whether a matching facial local-feature data string exists; if yes, step E4 is performed; if no, face recognition for this time period ends;
Step E4: it is judged whether the number of matching results is 1; if no, step E5 is performed; if yes, step E6 is performed;
Step E5: the value of Q is increased by 1, and it is judged whether Q is greater than or equal to the number of data groups in the facial local-feature data string to be identified; if no, the flow returns to step E3; if yes, face recognition for this time period ends;
Step E6: it is judged whether the current value of Q exceeds the set retrieval threshold; if yes, step E7 is performed; if no, step E8 is performed;
Step E7: the corresponding facial image is read from the image database and taken as the current recognition result, a second-level result identification packet is sent to the cluster head of the cluster, and step E9 is performed;
Step E8: the recognized facial local-feature data string is taken as a candidate face recognition result, a third-level result identification packet is sent to the cluster head of the cluster, and step E9 is performed;
Step E9: the cluster head of the cluster receives the first-level, second-level or third-level result identification packets from the wireless access points AP and judges whether a first-level result identification packet exists; if yes, it sends this first-level result identification packet to the initiating cluster head; if no, it performs step E10;
Step E10: the cluster head judges whether a second-level result identification packet exists; if no, step E12 is performed; if yes, step E11 is performed;
Step E11: it is judged whether the number of second-level result identification packets is 1; if yes, this second-level result identification packet is sent to the initiating cluster head; if no, the Q values in the second-level result identification packets are compared and the packet with the largest Q is sent to the initiating cluster head; step 7 is then performed;
Step E12: the cluster head judges whether a third-level result identification packet exists; if no, step E14 is performed; if yes, step E13 is performed;
Step E13: it is judged whether the number of third-level result identification packets is 1; if yes, this third-level result identification packet is sent to the initiating cluster head; if no, the Q values in the third-level result identification packets are compared and the packet with the largest Q is sent to the initiating cluster head; step 7 is then performed;
Step E14: face recognition for this time period ends;
Step 7: the initiating cluster head judges whether a second-level result identification packet exists; if no, step F2 is performed; if yes, step F1 is performed;
Step F1: it is judged whether the number of second-level result identification packets is 1; if yes, this second-level result identification packet is sent to the initiating wireless access point AP; if no, the Q values in the second-level result identification packets are compared and the packet with the largest Q is sent to the initiating wireless access point AP; step 8 is then performed;
Step F2: the initiating cluster head judges whether a third-level result identification packet exists; if no, step F4 is performed; if yes, step F3 is performed;
Step F3: it is judged whether the number of third-level result identification packets is 1; if yes, this third-level result identification packet is sent to the initiating wireless access point AP; if no, the Q values in the third-level result identification packets are compared and the packet with the largest Q is sent to the initiating wireless access point AP; step 8 is then performed;
Step F4: face recognition for this time period ends;
Step 8: the initiating wireless access point AP judges whether a second-level result identification packet exists; if no, step G2 is performed; if yes, step G1 is performed;
Step G1: it is judged whether the number of second-level result identification packets is 1; if yes, this second-level result identification packet is output as the recognition result; if no, the Q values in the second-level result identification packets are compared and the packet with the largest Q is output;
Step G2: the initiating wireless access point AP judges whether a third-level result identification packet exists; if no, step G4 is performed; if yes, step G3 is performed;
Step G3: it is judged whether the number of third-level result identification packets is 1; if yes, this third-level result identification packet is output as the recognition result; if no, the Q values in the third-level result identification packets are compared and the packet with the largest Q is output;
Step G4: face recognition for this time period ends;
Step 9: the recognition result obtained in step C6 or step C7 is output as the final recognition result.
2. The fast video face identification method based on big data processing according to claim 1, characterized in that in step B1, the principle by which the C wireless access points AP in each cluster jointly elect one wireless access point AP as the cluster head is: the wireless access point AP with the highest preset security-protection rank serves as the cluster head.
3. The fast video face identification method based on big data processing according to claim 1, characterized in that in step 5, the information in the radio request signal comprises the ID of the initiating wireless access point AP and the local facial-feature data string to be recognized.
4. The fast video face identification method based on big data processing according to claim 1, characterized in that the information in the second-level result identification packet described in step C6 comprises the ID of this wireless access point AP, the recognized local facial-feature data string, the recognition rank and the Q value.
5. The fast video face identification method based on big data processing according to claim 1, characterized in that the information in the third-level result identification packet described in step C7 comprises the ID of this wireless access point AP, the recognized local facial-feature data string, the recognition rank and the Q value.
6. The fast video face identification method based on big data processing according to claim 1, characterized in that in step A3, the method of forming a nine grid from each significant point and the eight pixels nearest to and surrounding that significant point is:
the significant point and its eight nearest neighbouring pixels (top, top-right, right, bottom-right, bottom, bottom-left, left and top-left) form a 3 × 3 array.
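As a rough illustration of the 3 × 3 nine-grid construction in claim 6, assuming the image is held as a NumPy array and the significant point does not lie on the image border (the claim requires eight surrounding pixels, so border points are excluded by assumption):

```python
import numpy as np


def nine_grid(image, r, c):
    """Return the 3x3 'nine grid' centred on the significant point (r, c).

    The grid consists of the point itself and its eight nearest neighbours:
    top, top-right, right, bottom-right, bottom, bottom-left, left, top-left.
    Assumes 0 < r < image.shape[0] - 1 and 0 < c < image.shape[1] - 1.
    """
    return image[r - 1:r + 2, c - 1:c + 2]


img = np.arange(25).reshape(5, 5)   # toy 5x5 image
grid = nine_grid(img, 2, 2)         # 3x3 block centred on the middle pixel
```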
7. The fast video face identification method based on big data processing according to claim 1, characterized in that in step A9, the set threshold is 0.1.
8. The fast video face identification method based on big data processing according to claim 1, characterized in that after all the remaining V1 feature regions are ordered by the number of pixels they contain in step A9, the following operations are performed:
Step I1: judge whether the number of remaining V1 feature regions exceeds a set value; if yes, perform step I2; if no, perform step A10;
Step I2: keep the first Q1 feature regions, set the other feature regions to 0, and perform step A10.
9. The fast video face identification method based on big data processing according to claim 8, characterized in that Q1 = 30.
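The truncation of steps I1–I2 (with Q1 = 30 per claim 9) can be sketched as below. This is a hedged illustration, not the patent's code; representing each V1 feature region by its pixel count, already sorted in descending order, is an assumption of the example.

```python
def truncate_regions(region_sizes, limit, q1=30):
    """Steps I1-I2: cap the number of retained V1 feature regions.

    region_sizes: pixel counts of the remaining V1 feature regions,
                  ordered by size as in step A9.
    limit: the set value of step I1; q1: Q1 from claim 9 (default 30).
    """
    if len(region_sizes) <= limit:       # step I1: within the set value
        return region_sizes              # proceed to step A10 unchanged
    # step I2: keep the first Q1 regions, zero out the rest
    return region_sizes[:q1] + [0] * (len(region_sizes) - q1)


sizes = sorted(range(1, 41), reverse=True)   # 40 regions, sizes 40..1
kept = truncate_regions(sizes, limit=35)     # exceeds 35, so truncate to Q1
```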
10. The fast video face identification method based on big data processing according to claim 1, characterized in that in step A8, the closest axisymmetric or centrosymmetric figures comprise: isosceles triangle, isosceles trapezoid, rectangle, rhombus, square, ellipse, semicircle, circle and regular polygon.
CN201510577530.7A 2015-09-11 2015-09-11 Fast video face identification method based on big data processing Active CN105184261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510577530.7A CN105184261B (en) 2015-09-11 2015-09-11 Fast video face identification method based on big data processing

Publications (2)

Publication Number Publication Date
CN105184261A true CN105184261A (en) 2015-12-23
CN105184261B CN105184261B (en) 2016-05-18

Family

ID=54906330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510577530.7A Active CN105184261B (en) Fast video face identification method based on big data processing

Country Status (1)

Country Link
CN (1) CN105184261B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101329722A (en) * 2007-06-21 2008-12-24 上海北控智能科技有限公司 Human face recognition method for performing recognition algorithm based on neural network
CN101510255A (en) * 2009-03-30 2009-08-19 北京中星微电子有限公司 Method for identifying and positioning human face, apparatus and video processing chip
CN101819628A (en) * 2010-04-02 2010-09-01 清华大学 Method for performing face recognition by combining rarefaction of shape characteristic
CN102147862A (en) * 2011-05-26 2011-08-10 电子科技大学 Face feature extracting method based on survival exponential entropy

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019027766A1 (en) * 2017-08-01 2019-02-07 Motorola Solutions, Inc. Distributed biometric identification system for a mobile environment
US10528713B2 (en) 2017-08-01 2020-01-07 Motorola Solutions, Inc. Distributed biometric identification system for a mobile environment
GB2578074A (en) * 2017-08-01 2020-04-15 Motorola Solutions Inc Distributed biometric identification system for a mobile environment
GB2578074B (en) * 2017-08-01 2021-03-31 Motorola Solutions Inc Distributed biometric identification system for a mobile environment
CN109104494A (en) * 2018-09-07 2018-12-28 孙思涵 Children loss or loss localization method and system under wireless sensor network based on DNA
CN109271917A (en) * 2018-09-10 2019-01-25 广州杰赛科技股份有限公司 Face identification method, device, computer equipment and readable storage medium storing program for executing
CN109271917B (en) * 2018-09-10 2021-03-02 广州杰赛科技股份有限公司 Face recognition method and device, computer equipment and readable storage medium
CN111382649A (en) * 2018-12-31 2020-07-07 南京拓步智能科技有限公司 Face image recognition system and method based on nine-grid principle
CN110929557A (en) * 2019-09-25 2020-03-27 四川大学锦城学院 Intelligent security method, system and processing device based on in-vivo detection
CN113239774A (en) * 2021-05-08 2021-08-10 重庆第二师范学院 Video face recognition system and method
CN115050131A (en) * 2022-08-15 2022-09-13 珠海翔翼航空技术有限公司 Airport permission setting method and system based on face feature abstract and cloud mapping

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant