CN110188703A - A kind of information push and drainage method based on recognition of face - Google Patents
A kind of information push and drainage method based on recognition of face
- Publication number
- CN110188703A (application number CN201910472923.XA)
- Authority
- CN
- China
- Prior art keywords
- face
- current
- gender
- age bracket
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
- G06Q30/0271—Personalized advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/178—Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Business, Economics & Management (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Strategic Management (AREA)
- General Health & Medical Sciences (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Accounting & Taxation (AREA)
- General Engineering & Computer Science (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- Game Theory and Decision Science (AREA)
- Image Processing (AREA)
Abstract
The invention belongs to the field of Internet media technology and discloses an information push and drainage (traffic diversion) method based on face recognition. The method comprises: judging in real time whether a human body is present in a preset area and, if so, triggering portrait capture; capturing a face image and identifying it with a gender identification model and an age-bracket identification model to output the gender and age bracket of the face in the current face image; loading the promotional file corresponding to the current gender and age bracket and outputting it to the human-machine interface; and judging in real time whether a trigger action occurs in the preset area and, if so, loading the link corresponding to the current advertisement file and outputting the page corresponding to the current link to the current human-machine interface. The invention achieves precise information push and efficient drainage conversion: the advertisements played are matched more accurately to valid users, information is conveyed more efficiently, and the advertisement can interact with the user in real time during playback, making the method suitable for popularization and use.
Description
Technical field
The invention belongs to the field of Internet media technology, and in particular relates to an information push and drainage method based on face recognition.
Background art
The Internet has developed rapidly in recent years, and its audience and coverage now rival those of traditional media. Online media businesses have grown with it and, compared with traditional media, have inherent advantages in interactive marketing. In fierce competition, online advertising can flexibly adjust its content to satisfy different customer demands, and it offers wide coverage, strong initiative and engagement, relatively low cost and high cost-effectiveness, whereas traditional media still require a large amount of manual work in the advertisement placement chain, which is unfavorable to the promotion and development of the advertising business.
However, existing Internet-based online advertising still plays advertisements in a rolling loop. Such untargeted playback reaches very few effective viewers and wastes advertising resources. Moreover, although online advertisement formats are diverse, they remain one-way broadcasts that lack interaction with the user: the user merely watches the advertisement, valid users cannot be precisely drained (converted), consumers' attention is not captured, and the promotional effect of the advertisement is reduced.
Summary of the invention
In order to solve the above problems in the prior art, the object of the present invention is to provide an information push and drainage method based on face recognition, so that advertisements are played in a more targeted manner while interaction between the advertisement being played and the user is enabled, thereby improving the drainage conversion and promotional effect of advertising resources.
The technical solution adopted by the present invention is as follows:
An information push and drainage method based on face recognition, comprising the following steps:
S1. Judge in real time whether a human body is present in a preset area; if so, trigger portrait capture.
S2. Capture a face image, then identify it with a gender identification model and an age-bracket identification model respectively, and output the gender and age bracket of the face in the current face image.
S3. Load the promotional file corresponding to the current gender and age bracket, and output the promotional file to the human-machine interface.
S4. Capture in real time the face video of the viewer currently watching the human-machine interface, then judge whether a trigger action occurs in the preset area; if so, load the link corresponding to the current advertisement file and output the page corresponding to the current link to the current human-machine interface.
S5. Judge in real time whether a human body is present in the preset area; if so, repeat steps S2-S4; if not, close portrait capture and the human-machine interface and repeat step S1.
Preferably, the method further comprises the following steps:
S6. After the current page has been output to the human-machine interface, capture a face image to be judged, extract the facial features in the current face image, and match the facial features of the current portrait against the facial features of the registered users in a registration database.
S7. Judge whether the current portrait matches an entry in the registration database; if so, output the current advertisement file and the corresponding link to the account of the matched registered user; if not, output a registration-prompt page to the human-machine interface.
Preferably, in step S2 the gender identification model is trained as follows:
A1. Collect, by means of a web crawler, 200,000 or more face pictures of different genders, and use all the face pictures as samples after applying a cleaning operation and a cropping operation to each of them, wherein the cleaning operation comprises face-deflection correction, picture-brightness adjustment and grayscale conversion, and the cropping operation is used to enhance the robustness of training.
A2. Train on the samples using the Nesterov (accelerated gradient) algorithm to obtain the gender identification model.
Preferably, in step S2, when the gender identification model is used to identify the face image, the specific steps are as follows:
S201a. Detect face information in the face image using a cascade classifier in OpenCV, and box-select the face region.
S202a. Input the box-selected face image into the gender identification model; after processing, output one group of probabilities representing the two genders.
S203a. Repeat step S202a to obtain multiple groups of probabilities, accumulate them to obtain two final probabilities, and compare them; the gender corresponding to the larger final probability is taken as the gender of the face in the current face image.
Preferably, in step S2 the age-bracket identification model is trained as follows:
B1. Take multiple pictures containing faces as a training library, and arrange the pictures in the training library in order of age bracket to obtain multiple picture groups.
B2. Extract several kinds of facial features from every picture in each picture group, then extract the initial feature vector of each kind of facial feature for every picture in the group, compute a weighted average of the initial feature vectors of each kind of facial feature over all pictures in the group, and take the resulting average as the feature vector of that kind of facial feature for the current age bracket.
B3. Repeat step B2 until the feature vectors of all facial features in all picture groups have been obtained, collect the feature vectors of each picture group into one set, and sort the sets by the age bracket of the corresponding picture group, thereby obtaining a face age-bracket identification model containing multiple groups of facial-feature vectors.
Preferably, in step S2, when the age-bracket identification model is used to identify the face image, the specific steps are as follows:
S201b. Detect face information in the face image using a cascade classifier in OpenCV, and box-select the face region.
S202b. Extract the to-be-judged feature vector of each kind of facial feature in the face region, then match each to-be-judged feature vector for similarity against the feature vectors of the age-bracket identification model.
S203b. Compute a weighted average over the age brackets of the feature vectors matched by the to-be-judged feature vectors of the face region, to obtain the age bracket corresponding to the current face region.
S204b. Take the age bracket corresponding to the current face region as the age bracket of the face in the current face image.
Preferably, in step S3 the promotional file corresponding to the current gender and age bracket is loaded as follows:
S301. Judge whether the genders corresponding to the multiple face regions in the current face image are consistent; if so, output gender-determined information; if not, count the number of face regions corresponding to each gender to form a first to-be-judged set.
S302. Judge whether the face-region count of any gender in the first to-be-judged set is a strict maximum; if so, output gender-determined information; if not, output gender-failure information.
S303. After the gender-determined information is obtained, judge whether the age brackets corresponding to the multiple face regions of the current gender are consistent; if so, load the promotional file of the corresponding age bracket; if not, count the number of image regions corresponding to each age bracket to form a second to-be-judged set.
S304. After the gender-failure information is obtained, judge whether the age brackets corresponding to the multiple face regions in the current face image are consistent; if so, load the promotional file of the corresponding age bracket; if not, count the number of image regions corresponding to each age bracket to form the second to-be-judged set.
S305. Judge whether the face-region count of any age bracket in the second to-be-judged set is a strict maximum; if so, load the promotional file of the age bracket with the maximum count; if not, load the promotional file of the age bracket corresponding to the main consumer group of the region where the current human-machine interface is located.
Preferably, in step S4, the trigger action includes a voice operation, a human-machine interactive operation and/or a head gesture operation.
Preferably, in step S4, after the face video of the viewer currently watching the human-machine interface has been captured, when the trigger action is a head gesture operation the specific steps are as follows:
S401. Track consecutive video frames, record the center coordinates of the same face region in the consecutive frames, and obtain the head gesture operation of the face corresponding to the current face region by comparing the center coordinates of that face region across the consecutive frames.
S402. Judge whether the head gesture of the current face is a nodding action; if so, load the link corresponding to the current advertisement file and output the page corresponding to the current link to the current human-machine interface; if not, judge whether the head gesture of the current face is a head-shaking action; if so, end the output of the current advertisement file and output the next promotional file corresponding to the current gender and age bracket to the human-machine interface; if not, repeat step S4.
Preferably, in steps S1 and S5, whether a human body is present in the preset area is detected as follows:
SA1. Judge in real time whether the power of the microwave signal emitted toward the area in front of the current human-machine interface changes; if so, acquire a preliminary image of the region where the microwave signal power changed.
SA2. Judge whether the current preliminary image contains a face; if so, determine that a human body is present in the preset area; if not, repeat step SA1.
The beneficial effects of the invention are as follows:
Precise information push and efficient drainage conversion are achieved. The age-bracket- and gender-oriented advertisement playback mode matches the type of advertisement played more accurately to valid users and conveys information more efficiently, and because advertisement playback is only triggered after a person is recognized, the resources wasted by playing advertisements when nobody is present are avoided. Meanwhile, the advertisement can interact with the user in real time during playback: by judging trigger actions, valid users can be drained precisely, improving the user conversion rate after the advertisement is delivered, and the registration prompt after drainage can quickly increase the number of registered users, which facilitates subsequent customer development. In addition, data such as region, human-machine interface device, advertisement playback duration and the number of times each advertisement is played can be counted, providing a data basis for later advertisement delivery reference, data analysis and big-data model optimization, so the method is suitable for popularization and use.
Detailed description of the invention
Fig. 1 is a flow diagram of the invention.
Specific embodiment
The present invention is further explained below with reference to the accompanying drawing and a specific embodiment.
Embodiment 1:
As shown in Fig. 1, this embodiment provides an information push and drainage method based on face recognition, comprising the following steps:
S1. Judge in real time whether a human body is present in a preset area; if so, trigger portrait capture. This avoids the waste of resources caused by broadcasting information when nobody is present, and ensures that as soon as someone appears in the preset area the presence is sensed and the following steps are triggered immediately.
S2. Capture a face image, then identify it with a gender identification model and an age-bracket identification model respectively, and output the gender and age bracket of the face in the current face image. Through the dual judgment of age bracket and gender, the information matched to the current user is more accurate, improving the drainage efficiency for users.
In this embodiment, the gender identification model is trained as follows:
A1. Collect, by means of a web crawler, 200,000 or more face pictures of different genders, and use all the face pictures as samples after applying a cleaning operation and a cropping operation to each of them, wherein the cleaning operation comprises face-deflection correction, picture-brightness adjustment and grayscale conversion, and the cropping operation is used to enhance the robustness of training. Each gender has 100,000 face pictures taken from different angles, and the 200,000 face pictures should include pictures of the same face under different lighting and pictures of the same person at different ages.
A2. Train on the samples using the Nesterov (accelerated gradient) algorithm to obtain the gender identification model. The Nesterov algorithm accelerates model training. In one preferred implementation, the gender identification model has 16 layers, specifically: the first layer is a Data layer that supplies the sample data and labels; the second layer is a Convolution1 layer that applies 24 different 5x5 convolution kernels and outputs 24 convolved images; the third layer is a ReLU1 layer that applies a nonlinear function to adjust the neuron gain and produce a sparse activation; the fourth layer is a Pool1 layer that sub-samples the image by 2x2 using neighborhood-pixel averaging; the fifth layer is a Convolution2 layer that applies 48 different 3x3 convolution kernels and outputs 48 convolved images; the sixth layer is a ReLU2 layer; the seventh layer is a Pool2 layer that performs 2x2 average-pooling sub-sampling; the eighth layer is a Convolution3 layer that applies 3x3 convolution kernels and outputs 72 convolved images; the ninth layer is a ReLU3 layer; the tenth layer is a Pool3 layer that performs 2x2 average-pooling sub-sampling; the eleventh layer is a Convolution4 layer that applies 96 different 3x3 convolution kernels and outputs 96 convolved images; the twelfth and thirteenth layers are INNER_PRODUCT (fully connected) layers which, given the properties of fully connected and convolutional layers, each output 160 nodes; the fourteenth layer is an Eltwise layer that adds the two resulting vectors element-wise and outputs 160 nodes; the fifteenth layer is an INNER_PRODUCT (fully connected) layer that outputs 2 results, namely the probabilities of the two genders; and the sixteenth layer is a softmax layer that compares the labels with the results, computes the loss value and adjusts the model parameters by back-propagation. With this 16-layer gender identification model the deployed model occupies little space and can conveniently be added to various embedded platforms, and the feature extraction of the algorithm needs only 160 dimensions, which improves the efficiency of training and detection.
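As an illustration only, the 16-layer description above can be mapped onto a conventional CNN definition. The following PyTorch sketch mirrors the stated topology (24/48/72/96 average-pooled convolution stages, two 160-node fully connected branches summed element-wise, and a 2-way output trained with softmax loss and Nesterov momentum); the 48x48 grayscale input size, learning rate and momentum value are assumptions, not values given in the patent.

```python
# Hypothetical PyTorch sketch of the 16-layer gender network described above.
# Input size (1x48x48 grayscale), learning rate and momentum are assumptions.
import torch
import torch.nn as nn

class GenderNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 24, 5), nn.ReLU(inplace=True), nn.AvgPool2d(2),   # Conv1/ReLU1/Pool1
            nn.Conv2d(24, 48, 3), nn.ReLU(inplace=True), nn.AvgPool2d(2),  # Conv2/ReLU2/Pool2
            nn.Conv2d(48, 72, 3), nn.ReLU(inplace=True), nn.AvgPool2d(2),  # Conv3/ReLU3/Pool3
            nn.Conv2d(72, 96, 3),                                          # Conv4
        )
        flat = 96 * 2 * 2                 # a 48x48 input shrinks to 96 maps of 2x2
        self.fc_a = nn.Linear(flat, 160)  # INNER_PRODUCT branch 1 (160 nodes)
        self.fc_b = nn.Linear(flat, 160)  # INNER_PRODUCT branch 2 (160 nodes)
        self.out = nn.Linear(160, 2)      # final INNER_PRODUCT: 2 gender scores

    def forward(self, x):
        x = self.features(x).flatten(1)
        x = self.fc_a(x) + self.fc_b(x)   # Eltwise sum of the two 160-node vectors
        return self.out(x)                # softmax + loss applied by the criterion

model = GenderNet()
criterion = nn.CrossEntropyLoss()         # softmax-with-loss layer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)  # "Nesterov algorithm"

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()                       # back-propagation adjusts the model parameters
    optimizer.step()
    return loss.item()
```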
Based on the above, when the gender identification model is used to identify the face image, the specific steps are as follows:
S201a. Detect face information in the face image using a cascade classifier in OpenCV, and box-select the face region. The cascade classifier in OpenCV can be, but is not limited to, a Haar-feature detection algorithm. In one preferred implementation, when face information is detected, skin-color detection is first applied to the face image to obtain a grayscale image; illumination adjustment is then applied to the grayscale image to reduce the influence of background lighting; next, the face information in the two consecutive frames in which a face was detected is compared and their overlap ratio is calculated to judge how much the face has drifted, and face information with large drift is filtered out; the retained face information is then adjusted according to the positions of standard facial features, such as the eyes and mouth, so that the face region is corrected to the same proportions as used by the trained face recognition model; finally, the DLIB library is called and the Haar-feature detection algorithm is used to detect the face.
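A minimal OpenCV sketch of step S201a is given below for illustration; the stock cascade file, the grayscale/histogram-equalization preprocessing and the detection parameters are assumptions rather than values specified in the patent.

```python
# Hypothetical OpenCV sketch of step S201a: detect and box-select the face region.
import cv2

# A stock Haar cascade shipped with OpenCV; the patent only requires "a cascade classifier".
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_regions(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)    # grayscale conversion
    gray = cv2.equalizeHist(gray)                         # crude illumination adjustment
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(60, 60))
    # Return the cropped (box-selected) face regions.
    return [gray[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```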
S202a. Input the box-selected face image into the gender identification model; after processing, output one group of probabilities representing the two genders. When the box-selected face image is fed into the gender identification model, the trained model classifies the target face image, and an image processor (GPU) is called to accelerate the deep-recognition classification; after the input face image has passed through the 16 layers, two different outputs are obtained, namely the group of probabilities representing the two genders.
S203a. Repeat step S202a to obtain multiple groups of probabilities, accumulate them to obtain two final probabilities, and compare them; the gender corresponding to the larger final probability is taken as the gender of the face in the current face image.
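A minimal sketch of the accumulation in S202a/S203a, assuming the `GenderNet` model sketched above, a hypothetical `augment()` callable that produces the repeated preprocessed inputs (for example slightly shifted crops), and an assumed number of repetitions and label order.

```python
# Hypothetical sketch of S202a/S203a: accumulate several probability groups and compare.
import torch
import torch.nn.functional as F

def predict_gender(model, face_gray, augment, repeats=5):
    totals = torch.zeros(2)                       # running sum of the two gender probabilities
    for _ in range(repeats):                      # repeat S202a several times
        x = augment(face_gray)                    # 1x1x48x48 tensor (assumed preprocessing)
        with torch.no_grad():
            probs = F.softmax(model(x), dim=1)[0] # one group of two-gender probabilities
        totals += probs
    # The larger accumulated probability decides; the index-to-label order is an assumption.
    return "female" if totals[0] > totals[1] else "male"
```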
In this embodiment, the age-bracket identification model is trained as follows:
B1. Take multiple pictures containing faces as a training library, and arrange the pictures in the training library in order of age bracket to obtain multiple picture groups. Each age bracket corresponds to exactly one picture group.
B2. Extract several kinds of facial features from every picture in each picture group, then extract the initial feature vector of each kind of facial feature for every picture in the group, compute a weighted average of the initial feature vectors of each kind of facial feature over all pictures in the group, and take the resulting average as the feature vector of that kind of facial feature for the current age bracket. Extracting feature vectors quantifies the different facial features and facilitates subsequent calculation. The extraction can use different existing techniques for different features: texture features can be quantified by the amount and depth of texture; brightness can be quantified by comparing the brightness of the picture with a reference brightness; and color features can be quantified by comparing the skin color of the face with a reference skin color, or by counting the number of age spots on the face.
B3. Repeat step B2 until the feature vectors of all facial features in all picture groups have been obtained, collect the feature vectors of each picture group into one set, and sort the sets by the age bracket of the corresponding picture group, thereby obtaining a face age-bracket identification model containing multiple groups of facial-feature vectors. The identification model of each age bracket contains at least three feature-vector sets, which provides a reliable technical foundation for accurately identifying the age bracket.
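A minimal sketch of B1-B3 under the assumption that each kind of facial feature is already available as a numeric vector per picture (the hypothetical `extract_features()` below stands in for the texture/brightness/color quantification described above) and that all pictures in a group are weighted equally.

```python
# Hypothetical sketch of B1-B3: build per-age-bracket reference feature vectors.
import numpy as np

def extract_features(picture):
    """Placeholder: return {"texture": vec, "brightness": vec, "skin_color": vec}."""
    raise NotImplementedError

def build_age_model(picture_groups):
    """picture_groups: {"0-18": [pic, ...], "19-35": [...], ...}, one group per bracket."""
    model = {}
    for bracket, pictures in sorted(picture_groups.items()):
        per_kind = {}
        for pic in pictures:
            for kind, vec in extract_features(pic).items():
                per_kind.setdefault(kind, []).append(np.asarray(vec, dtype=float))
        # Weighted average (equal weights assumed) of each kind of facial feature.
        model[bracket] = {kind: np.mean(np.stack(vecs), axis=0)
                          for kind, vecs in per_kind.items()}
    return model
```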
Based on the above, when the age-bracket identification model is used to identify the face image, the specific steps are as follows:
S201b. Detect face information in the face image using a cascade classifier in OpenCV, and box-select the face region; the cascade classifier in OpenCV can be, but is not limited to, a Haar-feature detection algorithm.
S202b. Extract the to-be-judged feature vector of each kind of facial feature in the face region, then match each to-be-judged feature vector for similarity against the feature vectors of the age-bracket identification model.
S203b. Compute a weighted average over the age brackets of the feature vectors matched by the to-be-judged feature vectors of the face region, to obtain the age bracket corresponding to the current face region.
S204b. Take the age bracket corresponding to the current face region as the age bracket of the face in the current face image.
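A sketch of S202b-S204b, reusing the hypothetical `extract_features()` and the `build_age_model()` output from the previous sketch, and assuming cosine similarity as the (unspecified) similarity measure, equal weights per feature kind, and age brackets represented by their midpoints so that they can be averaged; all of these are illustrative choices, not requirements of the patent.

```python
# Hypothetical sketch of S202b-S204b: match features and average the matched age brackets.
import numpy as np

BRACKET_MIDPOINTS = {"0-18": 12, "19-35": 27, "36-55": 45, "56+": 65}  # assumed brackets

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def predict_age_bracket(face_region, age_model):
    probe = extract_features(face_region)              # to-be-judged feature vectors
    matched_midpoints = []
    for kind, vec in probe.items():
        # S202b: similarity match against every bracket's reference vector of this kind.
        best = max(age_model, key=lambda b: cosine(vec, age_model[b][kind]))
        matched_midpoints.append(BRACKET_MIDPOINTS[best])
    # S203b: (equal-)weighted average of the matched brackets' midpoints.
    avg = float(np.mean(matched_midpoints))
    # S204b: map the average back to the nearest bracket.
    return min(BRACKET_MIDPOINTS, key=lambda b: abs(BRACKET_MIDPOINTS[b] - avg))
```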
S3. Load the promotional file corresponding to the current gender and age bracket, and output the promotional file to the human-machine interface.
In this embodiment, the promotional file corresponding to the current gender and age bracket is loaded in step S3 as follows:
S301. Judge whether the genders corresponding to the multiple face regions in the current face image are consistent; if so, output gender-determined information; if not, count the number of face regions corresponding to each gender to form a first to-be-judged set. When the current face image contains several faces, each face region contained in the current face image is cropped out to obtain multiple face regions, and steps S201a-S203a and S201b-S204b are then performed on each face region separately.
S302. Judge whether the face-region count of any gender in the first to-be-judged set is a strict maximum; if so, output gender-determined information; if not, output gender-failure information.
S303. After the gender-determined information is obtained, judge whether the age brackets corresponding to the multiple face regions of the current gender are consistent; if so, load the promotional file of the corresponding age bracket; if not, count the number of image regions corresponding to each age bracket to form a second to-be-judged set.
S304. After the gender-failure information is obtained, judge whether the age brackets corresponding to the multiple face regions in the current face image are consistent; if so, load the promotional file of the corresponding age bracket; if not, count the number of image regions corresponding to each age bracket to form the second to-be-judged set.
S305. Judge whether the face-region count of any age bracket in the second to-be-judged set is a strict maximum; if so, load the promotional file of the age bracket with the maximum count; if not, load the promotional file of the age bracket corresponding to the main consumer group of the region where the current human-machine interface is located. The age bracket corresponding to the main consumer group of the region where the current human-machine interface is located is preset in the system by the advertiser.
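The selection logic of S301-S305 is essentially a majority vote over the detected face regions with two fall-backs. The Python sketch below is one possible reading of it and is not taken from the patent: the per-region `(gender, age_bracket)` tuples, the tie handling in `strict_majority()` and the `default_bracket` fall-back are assumptions.

```python
# Hypothetical sketch of S301-S305: choose an advertisement target by majority vote.
from collections import Counter

def strict_majority(values):
    """Return the single most common value, or None if there is no strict maximum."""
    if not values:
        return None
    counts = Counter(values).most_common()
    if len(counts) == 1 or counts[0][1] > counts[1][1]:
        return counts[0][0]
    return None

def choose_ad_target(regions, default_bracket):
    """regions: list of (gender, age_bracket) tuples, one per detected face region."""
    genders = [g for g, _ in regions]
    gender = genders[0] if len(set(genders)) == 1 else strict_majority(genders)  # S301/S302
    if gender is not None:
        brackets = [b for g, b in regions if g == gender]       # S303: regions of that gender
    else:
        brackets = [b for _, b in regions]                      # S304: gender failed, use all
    bracket = brackets[0] if len(set(brackets)) == 1 else strict_majority(brackets)  # S305
    if bracket is None:
        bracket = default_bracket     # main consumer group of the venue, preset by advertiser
    return gender, bracket
```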
S4. Capture in real time the face video of the viewer currently watching the human-machine interface, then judge whether a trigger action occurs in the preset area; if so, load the link corresponding to the current advertisement file and output the page corresponding to the current link to the current human-machine interface. By judging trigger actions, valid users can be drained precisely, so that the user conversion rate after advertisement delivery improves, and the registration prompt after drainage can quickly increase the number of registered users, which facilitates subsequent customer development.
In this embodiment, in step S4 the trigger action includes a voice operation, a human-machine interactive operation and/or a head gesture operation.
In this embodiment, in step S4, after the face video of the viewer currently watching the human-machine interface has been captured, when the trigger action is a head gesture operation the specific steps are as follows:
S401. Track consecutive video frames, record the center coordinates of the same face region in the consecutive frames, and obtain the head gesture operation of the face corresponding to the current face region by comparing the center coordinates of that face region across the consecutive frames. This multi-frame comparison quickly identifies the displacement of the face region and improves the recognition accuracy of the head gesture operation.
S402. Judge whether the head gesture of the current face is a nodding action; if so, load the link corresponding to the current advertisement file and output the page corresponding to the current link to the current human-machine interface; if not, judge whether the head gesture of the current face is a head-shaking action; if so, end the output of the current advertisement file and output the next promotional file corresponding to the current gender and age bracket to the human-machine interface; if not, repeat step S4.
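One plausible way to turn the face-center trajectory of S401 into the nod/shake decision of S402 is to compare the vertical and horizontal spread of the recorded center coordinates; the spread-based rule and the pixel threshold below are assumptions, not part of the patent.

```python
# Hypothetical sketch of S401/S402: classify a head gesture from face-center coordinates.
def classify_head_gesture(centers, min_travel=15):
    """centers: [(x, y), ...] of the same face region over consecutive video frames."""
    if len(centers) < 2:
        return None
    xs = [x for x, _ in centers]
    ys = [y for _, y in centers]
    horiz = max(xs) - min(xs)          # left-right travel of the face center
    vert = max(ys) - min(ys)           # up-down travel of the face center
    if vert > min_travel and vert > horiz:
        return "nod"                   # S402: load the link of the current advertisement
    if horiz > min_travel and horiz > vert:
        return "shake"                 # S402: skip to the next matching advertisement
    return None                        # no gesture detected; keep monitoring (repeat S4)
```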
S5. Judge in real time whether a human body is present in the preset area; if so, repeat steps S2-S4; if not, close portrait capture and the human-machine interface and repeat step S1. This effectively avoids the waste caused by playing advertisements while nobody is present.
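Putting steps S1-S5 together gives a simple control loop. The outline below is illustrative only; every capability is injected as a callable so that the sketch stays self-contained, and all the callable names are hypothetical.

```python
# Hypothetical outline of the S1-S5 control loop.
def run_kiosk(present, capture_faces, classify_face, choose_ad,
              play_ad, watch_gesture, open_link, play_next_ad, close_display):
    while True:
        if not present():                       # S1/S5: presence check (SA1/SA2 below)
            close_display()
            continue
        regions = capture_faces()               # S2: capture and box-select face regions
        targets = [classify_face(r) for r in regions]   # (gender, age bracket) per region
        gender, bracket = choose_ad(targets)    # S3: majority-vote advertisement selection
        play_ad(gender, bracket)
        gesture = watch_gesture()               # S4: monitor the viewer during playback
        if gesture == "nod":
            open_link()                         # drainage: open the advertisement's page
        elif gesture == "shake":
            play_next_ad(gender, bracket)
```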
In this embodiment, in steps S1 and S5, whether a human body is present in the preset area is detected as follows:
SA1. Judge in real time whether the power of the microwave signal emitted toward the area in front of the current human-machine interface changes; if so, acquire a preliminary image of the region where the microwave signal power changed. The microwave signal can be, but is not limited to being, emitted by a microwave sensor; combining it with the image check effectively prevents false detections.
SA2. Judge whether the current preliminary image contains a face; if so, determine that a human body is present in the preset area; if not, repeat step SA1.
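A sketch of SA1/SA2, assuming the microwave sensor exposes a power reading through some driver (the `read_power()` callable below is hypothetical), that the face check reuses an OpenCV Haar cascade as in S201a, and an assumed power-change threshold.

```python
# Hypothetical sketch of SA1/SA2: microwave power change gated by a face check.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def human_present(read_power, grab_frame, power_delta=3.0):
    """read_power(): microwave sensor power (driver-specific, assumed);
    grab_frame(): preliminary image of the monitored region."""
    baseline = read_power()
    while True:
        power = read_power()
        if abs(power - baseline) < power_delta:      # SA1: no power change yet
            baseline = power
            continue
        gray = cv2.cvtColor(grab_frame(), cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) > 0:                           # SA2: a face confirms a human body
            return True
        # Otherwise treat it as a false trigger and keep monitoring (repeat SA1).
```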
In this embodiment, the method further comprises the following steps:
S6. After the current page has been output to the human-machine interface, capture a face image to be judged, extract the facial features in the current face image, and match the facial features of the current portrait against the facial features of the registered users in a registration database. In this way the precise push of information additionally achieves drainage of valid users, improving user stickiness while improving the user experience.
S7. Judge whether the current portrait matches an entry in the registration database; if so, output the current advertisement file and the corresponding link to the account of the matched registered user; if not, output a registration-prompt page to the human-machine interface. The registration prompt after drainage can quickly increase the number of registered users, which facilitates subsequent customer development.
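A minimal sketch of S6/S7, assuming registered users are stored as numeric feature vectors keyed by account ID, a Euclidean-distance threshold as the (unspecified) matching criterion, and hypothetical `push_to_account()` / `show_registration_prompt()` actions.

```python
# Hypothetical sketch of S6/S7: match the current portrait against registered users.
import numpy as np

def match_registered_user(portrait_vec, registry, threshold=0.6):
    """registry: {account_id: reference feature vector}; returns the matched account or None."""
    best_id, best_dist = None, float("inf")
    for account_id, ref_vec in registry.items():
        dist = float(np.linalg.norm(np.asarray(portrait_vec) - np.asarray(ref_vec)))
        if dist < best_dist:
            best_id, best_dist = account_id, dist
    return best_id if best_dist < threshold else None

def after_page_output(portrait_vec, registry, push_to_account, show_registration_prompt,
                      current_ad, current_link):
    account = match_registered_user(portrait_vec, registry)      # S6
    if account is not None:                                       # S7: matched -> push
        push_to_account(account, current_ad, current_link)
    else:                                                         # S7: not matched -> prompt
        show_registration_prompt()
```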
The present invention is not limited to the above optional embodiment; anyone may derive products in various other forms under the inspiration of the present invention. The above specific embodiment should not be understood as limiting the protection scope of the present invention, which shall be defined by the claims, and the specification may be used to interpret the claims.
Claims (10)
1. An information push and drainage method based on face recognition, characterized by comprising the following steps:
S1. judging in real time whether a human body is present in a preset area and, if so, triggering portrait capture;
S2. capturing a face image, identifying the face image with a gender identification model and an age-bracket identification model respectively, and outputting the gender and age bracket of the face in the current face image;
S3. loading a promotional file corresponding to the current gender and age bracket, and outputting the promotional file to a human-machine interface;
S4. capturing in real time the face video of the viewer currently watching the human-machine interface, judging whether a trigger action occurs in the preset area and, if so, loading the link corresponding to the current advertisement file and outputting the page corresponding to the current link to the current human-machine interface;
S5. judging in real time whether a human body is present in the preset area; if so, repeating steps S2-S4; if not, closing portrait capture and the human-machine interface and repeating step S1.
2. The information push and drainage method based on face recognition according to claim 1, characterized by further comprising the following steps:
S6. after the current page has been output to the human-machine interface, capturing a face image to be judged, extracting the facial features in the current face image, and matching the facial features of the current portrait against the facial features of the registered users in a registration database;
S7. judging whether the current portrait matches an entry in the registration database; if so, outputting the current advertisement file and the corresponding link to the account of the matched registered user; if not, outputting a registration-prompt page to the human-machine interface.
3. The information push and drainage method based on face recognition according to claim 1, characterized in that in step S2 the gender identification model is trained as follows:
A1. collecting, by means of a web crawler, 200,000 or more face pictures of different genders, and using all the face pictures as samples after applying a cleaning operation and a cropping operation to each of them, wherein the cleaning operation comprises face-deflection correction, picture-brightness adjustment and grayscale conversion;
A2. training on the samples using the Nesterov algorithm to obtain the gender identification model.
4. The information push and drainage method based on face recognition according to claim 3, characterized in that in step S2, when the gender identification model is used to identify the face image, the specific steps are as follows:
S201a. detecting face information in the face image using a cascade classifier in OpenCV, and box-selecting the face region;
S202a. inputting the box-selected face image into the gender identification model and, after processing, outputting one group of probabilities representing the two genders;
S203a. repeating step S202a to obtain multiple groups of probabilities, accumulating them to obtain two final probabilities, and comparing them, the gender corresponding to the larger final probability being taken as the gender of the face in the current face image.
5. The information push and drainage method based on face recognition according to claim 4, characterized in that in step S2 the age-bracket identification model is trained as follows:
B1. taking multiple pictures containing faces as a training library, and arranging the pictures in the training library in order of age bracket to obtain multiple picture groups;
B2. extracting several kinds of facial features from every picture in each picture group, then extracting the initial feature vector of each kind of facial feature for every picture in the group, computing a weighted average of the initial feature vectors of each kind of facial feature over all pictures in the group, and taking the resulting average as the feature vector of that kind of facial feature for the current age bracket;
B3. repeating step B2 until the feature vectors of all facial features in all picture groups have been obtained, collecting the feature vectors of each picture group into one set, and sorting the sets by the age bracket of the corresponding picture group, thereby obtaining a face age-bracket identification model containing multiple groups of facial-feature vectors.
6. The information push and drainage method based on face recognition according to claim 5, characterized in that in step S2, when the age-bracket identification model is used to identify the face image, the specific steps are as follows:
S201b. detecting face information in the face image using a cascade classifier in OpenCV, and box-selecting the face region;
S202b. extracting the to-be-judged feature vector of each kind of facial feature in the face region, then matching each to-be-judged feature vector for similarity against the feature vectors of the age-bracket identification model;
S203b. computing a weighted average over the age brackets of the feature vectors matched by the to-be-judged feature vectors of the face region, to obtain the age bracket corresponding to the current face region;
S204b. taking the age bracket corresponding to the current face region as the age bracket of the face in the current face image.
7. The information push and drainage method based on face recognition according to claim 6, characterized in that in step S3 the promotional file corresponding to the current gender and age bracket is loaded as follows:
S301. judging whether the genders corresponding to the multiple face regions in the current face image are consistent; if so, outputting gender-determined information; if not, counting the number of face regions corresponding to each gender to form a first to-be-judged set;
S302. judging whether the face-region count of any gender in the first to-be-judged set is a strict maximum; if so, outputting gender-determined information; if not, outputting gender-failure information;
S303. after the gender-determined information is obtained, judging whether the age brackets corresponding to the multiple face regions of the current gender are consistent; if so, loading the promotional file of the corresponding age bracket; if not, counting the number of image regions corresponding to each age bracket to form a second to-be-judged set;
S304. after the gender-failure information is obtained, judging whether the age brackets corresponding to the multiple face regions in the current face image are consistent; if so, loading the promotional file of the corresponding age bracket; if not, counting the number of image regions corresponding to each age bracket to form the second to-be-judged set;
S305. judging whether the face-region count of any age bracket in the second to-be-judged set is a strict maximum; if so, loading the promotional file of the age bracket with the maximum count; if not, loading the promotional file of the age bracket corresponding to the main consumer group of the region where the current human-machine interface is located.
8. The information push and drainage method based on face recognition according to any one of claims 1 to 7, characterized in that in step S4 the trigger action includes a voice operation, a human-machine interactive operation and/or a head gesture operation.
9. The information push and drainage method based on face recognition according to claim 8, characterized in that in step S4, after the face video of the viewer currently watching the human-machine interface has been captured, when the trigger action is a head gesture operation the specific steps are as follows:
S401. tracking consecutive video frames, recording the center coordinates of the same face region in the consecutive frames, and obtaining the head gesture operation of the face corresponding to the current face region by comparing the center coordinates of that face region across the consecutive frames;
S402. judging whether the head gesture of the current face is a nodding action; if so, loading the link corresponding to the current advertisement file and outputting the page corresponding to the current link to the current human-machine interface; if not, judging whether the head gesture of the current face is a head-shaking action; if so, ending the output of the current advertisement file and outputting the next promotional file corresponding to the current gender and age bracket to the human-machine interface; if not, repeating step S4.
10. The information push and drainage method based on face recognition according to claim 1, characterized in that in steps S1 and S5 whether a human body is present in the preset area is detected as follows:
SA1. judging in real time whether the power of the microwave signal emitted toward the area in front of the current human-machine interface changes; if so, acquiring a preliminary image of the region where the microwave signal power changed;
SA2. judging whether the current preliminary image contains a face; if so, determining that a human body is present in the preset area; if not, repeating step SA1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910472923.XA CN110188703A (en) | 2019-05-31 | 2019-05-31 | A kind of information push and drainage method based on recognition of face |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910472923.XA CN110188703A (en) | 2019-05-31 | 2019-05-31 | A kind of information push and drainage method based on recognition of face |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110188703A true CN110188703A (en) | 2019-08-30 |
Family
ID=67719690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910472923.XA Pending CN110188703A (en) | 2019-05-31 | 2019-05-31 | A kind of information push and drainage method based on recognition of face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110188703A (en) |
- 2019-05-31: CN application CN201910472923.XA filed; published as CN110188703A (en); status: Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102542252A (en) * | 2011-11-18 | 2012-07-04 | 江西财经大学 | Intelligent advertisement delivery system |
CN103514242A (en) * | 2012-12-19 | 2014-01-15 | Tcl集团股份有限公司 | Intelligent interaction method and system for electronic advertising board |
CN104732413A (en) * | 2013-12-20 | 2015-06-24 | 中国科学院声学研究所 | Intelligent individuation video advertisement pushing method and system |
CN104598869A (en) * | 2014-07-25 | 2015-05-06 | 北京智膜科技有限公司 | Intelligent advertisement pushing method based on human face recognition device |
CN105975916A (en) * | 2016-04-28 | 2016-09-28 | 西安电子科技大学 | Age estimation method based on multi-output convolution neural network and ordered regression |
CN106296307A (en) * | 2016-08-24 | 2017-01-04 | 郑州天迈科技股份有限公司 | Electronic stop plate advertisement delivery effect based on recognition of face analyzes method |
CN107506737A (en) * | 2017-08-29 | 2017-12-22 | 四川长虹电器股份有限公司 | Face gender identification method |
US20190108404A1 (en) * | 2017-10-10 | 2019-04-11 | Weixin Xu | Consumer Camera System Design for Globally Optimized Recognition |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110598638A (en) * | 2019-09-12 | 2019-12-20 | Oppo广东移动通信有限公司 | Model training method, face gender prediction method, device and storage medium |
CN111310743A (en) * | 2020-05-11 | 2020-06-19 | 腾讯科技(深圳)有限公司 | Face recognition method and device, electronic equipment and readable storage medium |
CN111310743B (en) * | 2020-05-11 | 2020-08-25 | 腾讯科技(深圳)有限公司 | Face recognition method and device, electronic equipment and readable storage medium |
CN112132616A (en) * | 2020-09-23 | 2020-12-25 | 范玲珍 | Mobile multimedia advertisement intelligent pushing management system based on big data |
CN112132616B (en) * | 2020-09-23 | 2021-06-01 | 广东省广汽车数字营销有限公司 | Mobile multimedia advertisement intelligent pushing management system based on big data |
CN113095672A (en) * | 2021-04-09 | 2021-07-09 | 公安部物证鉴定中心 | Method and system for evaluating face image comparison algorithm |
CN113095672B (en) * | 2021-04-09 | 2024-06-07 | 公安部物证鉴定中心 | Evaluation method and system for facial image comparison algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110188703A (en) | A kind of information push and drainage method based on recognition of face | |
CN109829443B (en) | Video behavior identification method based on image enhancement and 3D convolution neural network | |
CN112560810B (en) | Micro-expression recognition method based on multi-scale space-time characteristic neural network | |
CN106529467B (en) | Group behavior recognition methods based on multi-feature fusion | |
CN102081918B (en) | Video image display control method and video image display device | |
CN108399628A (en) | Method and system for tracking object | |
CN109902558B (en) | CNN-LSTM-based human health deep learning prediction method | |
CN106446015A (en) | Video content access prediction and recommendation method based on user behavior preference | |
US20090290791A1 (en) | Automatic tracking of people and bodies in video | |
CN106682108A (en) | Video retrieval method based on multi-modal convolutional neural network | |
CN110287777B (en) | Golden monkey body segmentation algorithm in natural scene | |
CN111860390A (en) | Elevator waiting number detection and statistics method, device, equipment and medium | |
CN105956570B (en) | Smiling face's recognition methods based on lip feature and deep learning | |
CN107146096A (en) | Intelligent video advertisement display method and device | |
CN113221655A (en) | Face spoofing detection method based on feature space constraint | |
CN111476178A (en) | Micro-expression recognition method based on 2D-3D CNN | |
CN113158983A (en) | Airport scene activity behavior recognition method based on infrared video sequence image | |
CN112766021A (en) | Method for re-identifying pedestrians based on key point information and semantic segmentation information of pedestrians | |
CN110096945A (en) | Indoor Video key frame of video real time extracting method based on machine learning | |
CN115376202A (en) | Deep learning-based method for recognizing passenger behaviors in elevator car | |
CN111724199A (en) | Intelligent community advertisement accurate delivery method and device based on pedestrian active perception | |
CN117218709A (en) | Household old man real-time state monitoring method based on time deformable attention mechanism | |
CN114550270A (en) | Micro-expression identification method based on double-attention machine system | |
CN114937298A (en) | Micro-expression recognition method based on feature decoupling | |
CN111612090B (en) | Image emotion classification method based on content color cross correlation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 20200422; Address after: 201499 1st floor, No. 1990, Jinbi Road, Fengxian District, Shanghai; Applicant after: Shanghai Haoyun Technology Co., Ltd; Address before: 510000 Room 1005, East 8 Pazhou Avenue, Haizhu District, Guangzhou City, Guangdong Province; Applicant before: Guangzhou Soft Ying Technology Co.,Ltd. |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190830 |