CN108595628A - Method and apparatus for pushing information - Google Patents
- Publication number: CN108595628A (application number CN201810371349.4A)
- Authority: CN (China)
- Prior art keywords: facial image, matched, information, user, face
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the present application discloses a method and apparatus for pushing information. One specific implementation of the method includes: in response to receiving a user's operation request for a target page, acquiring a facial image of the user; inputting the facial image into a pre-trained face recognition model to generate feature information characterizing the user's facial features, where the face recognition model characterizes the correspondence between facial images and feature information characterizing facial features; based on the generated feature information, matching a target facial image from a preset set of facial images to be matched, where each facial image to be matched in the set is associated with a piece of preset information in a preset information set; and determining the preset information in the preset information set associated with the target facial image as target information and pushing it. This embodiment improves the targeting of information pushing.
Description
Technical field
The present application relates to the field of computer technology, and more particularly to a method and apparatus for pushing information.
Background technology
With the development of science and technology, web-based information pushing has increasingly become the main way of disseminating information over networks.
Currently, the presentation forms of information to be pushed generally include the following two: one is a unified template set by the platform to which the webpage belongs, with the information then presented in the form of that template; the other is a presentation form designed by third-party personnel of the platform, with the information then presented in the form designed by those third-party personnel.
Summary of the invention
Embodiments of the present application propose a method and apparatus for pushing information.
In a first aspect, an embodiment of the present application provides a method for pushing information, the method including: in response to receiving a user's operation request for a target page, acquiring a facial image of the user; inputting the facial image into a pre-trained face recognition model to generate feature information characterizing the user's facial features, where the face recognition model characterizes the correspondence between facial images and feature information characterizing facial features; based on the generated feature information, matching a target facial image from a preset set of facial images to be matched, where each facial image to be matched in the set is associated with a piece of preset information in a preset information set; and determining the preset information in the preset information set associated with the target facial image as target information and pushing it.
In some embodiments, matching the target facial image from the preset set of facial images to be matched based on the generated feature information includes: for each facial image to be matched in the set, acquiring feature information to be matched that is predetermined for that image, and, based on the acquired feature information to be matched and the generated feature information, performing a similarity calculation between that facial image to be matched and the facial image of the user, obtaining a calculation result; and comparing the numerical values of the obtained calculation results and determining the facial image to be matched corresponding to the numerically largest calculation result as the target facial image.
In some embodiments, gender information is preset for each facial image to be matched in the set of facial images to be matched; and matching the target facial image from the set based on the generated feature information includes: acquiring the gender information of the user; and matching the target facial image from the set of facial images to be matched based on the generated feature information, the gender information corresponding to each facial image to be matched in the set, and the gender information of the user.
In some embodiments, acquiring the gender information of the user includes: inputting the facial image of the user into a pre-trained gender recognition model to obtain the gender information of the user, where the gender recognition model characterizes the correspondence between a user's facial image and the user's gender information.
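A minimal sketch of this gender-filtered matching, assuming each record in the set pairs an identifier, a preset gender label, and a numeric feature vector; the record layout, identifiers, and the choice of cosine similarity are illustrative assumptions, not part of the claims:

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_with_gender(user_vec, user_gender, candidates):
    """Restrict the set to images whose preset gender matches the user's,
    then pick the most similar one. `candidates` is a list of
    (identifier, gender, feature_vector) tuples (an assumed layout).
    Raises ValueError if no image shares the user's gender."""
    same_gender = [c for c in candidates if c[1] == user_gender]
    best = max(same_gender, key=lambda c: cosine(user_vec, c[2]))
    return best[0]

candidates = [
    ("cand_a", "female", [14.0, 7.0, 1.0]),
    ("cand_b", "male", [17.0, 6.0, 2.0]),
    ("cand_c", "male", [10.0, 5.0, 0.5]),
]
target = match_with_gender([16.0, 6.0, 2.0], "male", candidates)
```

Filtering by gender before the similarity step shrinks the candidate pool and so also reduces the number of similarity calculations needed.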
In some embodiments, the face recognition model is obtained by training as follows: acquiring multiple sample facial images of multiple sample users, and acquiring pre-labeled sample feature information corresponding to each of the sample facial images, where the sample feature information characterizes the facial features of the sample user corresponding to the sample facial image; and, using a machine learning method, training the face recognition model with each of the multiple sample facial images as input and the pre-labeled sample feature information corresponding to that sample facial image as output.
In a second aspect, an embodiment of the present application provides an apparatus for pushing information, the apparatus including: an acquisition unit configured to acquire a facial image of a user in response to receiving the user's operation request for a target page; a generation unit configured to input the facial image into a pre-trained face recognition model and generate feature information characterizing the user's facial features, where the face recognition model characterizes the correspondence between facial images and feature information characterizing facial features; a matching unit configured to match a target facial image from a preset set of facial images to be matched based on the generated feature information, where each facial image to be matched in the set is associated with a piece of preset information in a preset information set; and a push unit configured to determine the preset information in the preset information set associated with the target facial image as target information and push it.
In some embodiments, the matching unit includes: a calculation module configured to, for each facial image to be matched in the set, acquire the feature information to be matched predetermined for that image and, based on the acquired feature information to be matched and the generated feature information, perform a similarity calculation between that facial image to be matched and the facial image of the user, obtaining a calculation result; and a comparison module configured to compare the numerical values of the obtained calculation results and determine the facial image to be matched corresponding to the numerically largest calculation result as the target facial image.
In some embodiments, gender information is preset for each facial image to be matched in the set of facial images to be matched; and the matching unit further includes: an acquisition module configured to acquire the gender information of the user; and a matching module configured to match the target facial image from the set of facial images to be matched based on the generated feature information, the gender information corresponding to each facial image to be matched in the set, and the gender information of the user.
In some embodiments, the acquisition module is further configured to input the facial image of the user into a pre-trained gender recognition model to obtain the gender information of the user, where the gender recognition model characterizes the correspondence between a user's facial image and the user's gender information.
In some embodiments, the face recognition model is obtained by training as follows: acquiring multiple sample facial images of multiple sample users, and acquiring pre-labeled sample feature information corresponding to each of the sample facial images, where the sample feature information characterizes the facial features of the sample user corresponding to the sample facial image; and, using a machine learning method, training the face recognition model with each of the multiple sample facial images as input and the pre-labeled sample feature information corresponding to that sample facial image as output.
In a third aspect, an embodiment of the present application provides a terminal, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the above embodiments of the method for pushing information.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any of the above embodiments of the method for pushing information.
The method and apparatus for pushing information provided by the embodiments of the present application acquire a facial image of a user in response to receiving the user's operation request for a target page; input the facial image into a pre-trained face recognition model to generate feature information characterizing the user's facial features, where the face recognition model characterizes the correspondence between facial images and feature information characterizing facial features; match a target facial image from a preset set of facial images to be matched based on the generated feature information, where each facial image to be matched in the set is associated with a piece of preset information in a preset information set; and determine the preset information in the preset information set associated with the target facial image as target information and push it. The user's facial features are thus effectively associated with the pushed information, improving the targeting of information pushing.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for pushing information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for pushing information according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for pushing information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for pushing information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted for implementing a terminal device of embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and are not a limitation of the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for pushing information or the apparatus for pushing information of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as web browser applications, shopping applications, search applications, instant messaging tools, email clients and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers and so on. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example a backend web server supporting the webpages displayed on the terminal devices 101, 102, 103. The backend web server may analyze and otherwise process received data such as operation requests for target pages, and feed the processing result (for example, target information) back to the terminal devices.
It should be noted that the method for pushing information provided by the embodiments of the present application may be executed by the server 105 or by the terminal devices 101, 102, 103; correspondingly, the apparatus for pushing information may be set in the server 105 or in the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative; there may be any number of terminal devices, networks and servers as required by the implementation. In cases where the data used in pushing information or in generating information to be pushed need not be obtained remotely, the system architecture may include only a terminal device or only a server, without the network.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for pushing information according to the present application is shown. The method for pushing information includes the following steps:
Step 201: in response to receiving a user's operation request for a target page, acquire a facial image of the user.
In the present embodiment, the execution body of the method for pushing information (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may, in response to receiving the user's operation request for the target page, acquire the facial image of the user by means of a wired or wireless connection. Here, the target page is a preset page on which the user is to operate, and the operation request may be an access request, a browse request, or the like. It should be noted that the execution body may acquire a locally pre-stored facial image of the user; or photograph the user to obtain the facial image of the user; or acquire the facial image of the user sent by another electronic device (for example, the server 105 shown in Fig. 1) in communication connection with the execution body.
Step 202: input the facial image into a pre-trained face recognition model to generate feature information characterizing the user's facial features.
In the present embodiment, based on the facial image acquired in step 201, the execution body may input the facial image into the pre-trained face recognition model to generate feature information characterizing the facial features of the user. The feature information may include, but is not limited to, at least one of the following: text, numbers, symbols, pictures, tables, vectors. Illustratively, the feature information may be "skin color: black; face length: 15cm".
In the present embodiment, the face recognition model may characterize the correspondence between facial images and feature information characterizing facial features. Specifically, the face recognition model may be a mapping table pre-established by technicians based on statistics over a large number of facial images and pieces of feature information, storing the correspondences between multiple facial images and feature information; or it may be a model obtained by training an image-processing model (for example, a convolutional neural network (CNN)) on training samples in advance using a machine learning method.
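The mapping-table variant of the model can be sketched as a plain lookup; the image identifiers and feature strings below are invented for illustration, and a real table would key on image content rather than a string id:

```python
# A pre-established mapping table: each known facial image (identified here
# by a placeholder key) maps to its feature information, standing in for the
# statistics-based table described in the text.
face_feature_table = {
    "face_img_001": "skin color: black; face length: 15cm",
    "face_img_002": "skin color: yellow; face length: 14cm",
}

def recognize(face_image_id):
    # Look up the feature information for a known facial image;
    # return None when the image is not in the table.
    return face_feature_table.get(face_image_id)

info = recognize("face_img_001")
```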
In some optional implementations of the present embodiment, the execution body or another electronic device may train the face recognition model as follows. First, acquire multiple sample facial images of multiple sample users, and acquire pre-labeled sample feature information corresponding to each of the sample facial images, where the sample feature information characterizes the facial features of the sample user corresponding to the sample facial image. Then, using a machine learning method, train an initial model (for example, a convolutional neural network, a support vector machine, or the like) with each of the multiple sample facial images as input and the pre-labeled sample feature information corresponding to that sample facial image as output, obtaining the face recognition model.
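Under the assumption that both the images and the pre-labeled feature information are represented as numeric vectors, this input/output supervised training can be sketched with a tiny linear model fitted by gradient descent; the dimensions and data are synthetic, and the linear model stands in for the CNN or SVM the text names:

```python
import random

random.seed(0)

# Synthetic training set: each "sample facial image" is a 4-dimensional pixel
# vector, each pre-labeled "sample feature information" a 2-dimensional
# feature vector (say, face length and eye distance). Purely illustrative.
true_w = [[0.5, -0.2, 0.1, 0.3],
          [0.0, 0.4, -0.1, 0.2]]
images = [[random.random() for _ in range(4)] for _ in range(50)]
labels = [[sum(w * x for w, x in zip(row, img)) for row in true_w]
          for img in images]

W = [[0.0] * 4 for _ in range(2)]  # the model to be trained: features = W @ image
lr = 0.1

def mean_squared_error(W):
    total = 0.0
    for img, lab in zip(images, labels):
        pred = [sum(w * x for w, x in zip(row, img)) for row in W]
        total += sum((p - t) ** 2 for p, t in zip(pred, lab))
    return total / len(images)

loss_before = mean_squared_error(W)
for _ in range(200):                      # training loop: image in, labeled features out
    for img, lab in zip(images, labels):
        pred = [sum(w * x for w, x in zip(row, img)) for row in W]
        for i in range(len(W)):           # gradient step per output dimension
            grad = pred[i] - lab[i]
            for j in range(len(W[i])):
                W[i][j] -= lr * grad * img[j]
loss_after = mean_squared_error(W)
```

After training, the fitted weights map a new image vector to predicted feature information, which is the role the face recognition model plays in step 202.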
Step 203: based on the generated feature information, match a target facial image from a preset set of facial images to be matched.
In the present embodiment, based on the feature information generated in step 202, the execution body may match a target facial image from the preset set of facial images to be matched. A facial image to be matched may be a facial image preset by technicians for matching against the facial image of the user, or a facial image uploaded in advance at registration by the user to be matched to whom it corresponds. The target facial image is a facial image to be matched in the set that shares some commonality with the facial image of the user (for example, one or more identical facial features).
As an example, the execution body may match the target facial image from the preset set of facial images to be matched as follows. First, for each facial image to be matched in the set, the execution body may determine the feature information to be matched of that image. Then, the execution body may generate the feature vector corresponding to the generated feature information and the feature vectors to be matched corresponding to the feature information to be matched. Next, the execution body may match the generated feature vector against the feature vectors to be matched and determine a target feature vector among them, where a certain component of the target feature vector is numerically equal to the corresponding component of the feature vector. The execution body may then determine the facial image to be matched corresponding to the target feature vector as the target facial image. Since that component of the target feature vector and the corresponding component of the feature vector characterize the same facial feature, for example skin color, their equality indicates that the target facial image and the acquired facial image of the user have some commonality.
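This component-equality matching can be sketched as follows; the vector layout [face length, eye distance, skin color] follows the worked example given later in the text, while the identifiers and record format are illustrative assumptions:

```python
def match_by_component(user_vec, candidates, component=2):
    """Return the ids of facial images to be matched whose chosen component
    equals the user's. Component 2 is the skin-color feature under the
    assumed layout [face length, eye distance, skin color]; `candidates`
    is a list of (identifier, feature_vector) tuples."""
    return [cid for cid, vec in candidates
            if vec[component] == user_vec[component]]

candidates = [
    ("cand_a", [17.0, 6.0, 2.0]),
    ("cand_b", [14.0, 7.0, 1.0]),
    ("cand_c", [15.0, 6.5, 2.0]),
]
# User with skin-color value 2.0 shares that feature with cand_a and cand_c.
targets = match_by_component([16.0, 6.0, 2.0], candidates)
```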
It should be noted that, in this example, the execution body may input a facial image to be matched into the face recognition model to determine its feature information to be matched; alternatively, the execution body may acquire feature information to be matched that is predetermined for the facial image to be matched.
In some optional implementations of the present embodiment, the execution body may also match the target facial image from the preset set of facial images to be matched by the following steps:
First, for each facial image to be matched in the set, the execution body may acquire the feature information to be matched predetermined for that image, and, based on the acquired feature information to be matched and the generated feature information, perform a similarity calculation between that facial image to be matched and the facial image of the user, obtaining a calculation result.
Specifically, based on the acquired feature information to be matched and the generated feature information, the execution body may generate, respectively, the feature vector to be matched corresponding to the feature information to be matched and the feature vector corresponding to the generated feature information, and then perform a similarity calculation between the two vectors, obtaining the calculation result.
Illustratively, suppose the feature information to be matched of a facial image to be matched is "face length: 17cm; eye distance: 6cm; skin color: black", and the feature information of the user's facial image is "face length: 14cm; eye distance: 7cm; skin color: yellow". The execution body may then generate the feature vector to be matched [17, 6, 2] corresponding to the feature information to be matched, where the value "17" in the first column characterizes the face-length feature, the value "6" in the second column the eye-distance feature, and the value "2" in the third column the skin-color feature. Correspondingly, the execution body may generate the feature vector [14, 7, 1] corresponding to the user's facial image. It should be noted that here the skin-color feature may be represented, for example, by a value between zero and two inclusive, with larger values characterizing darker skin. Based on the obtained feature vector to be matched [17, 6, 2] and feature vector [14, 7, 1], the execution body may apply any of various similarity measures (the Euclidean distance method, the cosine similarity method, the Pearson correlation coefficient method, and so on) to the two vectors, obtaining the calculation result.
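The worked example can be checked numerically; here two of the listed similarity measures are applied to the exact vectors from the text (the helper names are just for illustration):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = a.b / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Vectors from the text: [face length, eye distance, skin color].
to_match = [17.0, 6.0, 2.0]   # candidate: face 17cm, eyes 6cm, black skin
user = [14.0, 7.0, 1.0]       # user: face 14cm, eyes 7cm, yellow skin

cos_sim = cosine_similarity(to_match, user)   # close to 1: similar direction
dist = euclidean_distance(to_match, user)     # sqrt(9 + 1 + 1) = sqrt(11)
```

Note the two measures point in opposite directions: a larger cosine similarity means more similar, while a larger Euclidean distance means less similar, so the comparison step must treat them accordingly.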
It should be noted that the way the characteristic values in the feature vector and the feature vector to be matched are determined may be preset by technicians. As an example, besides being represented by a value between zero and two inclusive, skin color may also be represented by other values; no limitation is made here.
Then, the execution body may compare the numerical values of the obtained calculation results and determine the facial image to be matched corresponding to the numerically largest calculation result as the target facial image. It can be understood that this is the facial image to be matched in the set with the highest similarity to the facial image of the user.
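Selecting the facial image with the numerically largest calculation result amounts to an argmax over the similarity scores; a sketch, with an illustrative candidate set and cosine similarity as the assumed measure:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def pick_target(user_vec, candidates):
    # Score every facial image to be matched against the user's feature
    # vector, then keep the one whose calculation result is largest.
    scores = {cid: cosine(user_vec, vec) for cid, vec in candidates}
    return max(scores, key=scores.get)

candidates = [
    ("cand_a", [17.0, 6.0, 2.0]),
    ("cand_b", [14.0, 7.0, 1.0]),
    ("cand_c", [5.0, 9.0, 0.0]),
]
# A user identical to cand_b scores cosine 1.0 there, so cand_b wins.
target = pick_target([14.0, 7.0, 1.0], candidates)
```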
In the present embodiment, each facial image to be matched in the set of facial images to be matched may be associated with a piece of preset information in a preset information set. Here, the preset information may include, but is not limited to, at least one of the following: text, numbers, symbols, pictures, video, audio, links.
Illustratively, the preset information may be the contact method (for example, telephone number or email address) of the user to be matched corresponding to the facial image to be matched, or a self-shot video uploaded by that user.
Step 204: determine the preset information in the preset information set associated with the target facial image as target information and push it.
In the present embodiment, based on the target facial image obtained in step 203, the execution body may determine the preset information in the preset information set associated with the target facial image as the target information, i.e. the information to be pushed to the user. Specifically, the execution body may push the target information to the target page for display.
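Steps 201 through 204 can be sketched end to end; the association tables, identifiers, and contact strings are invented, and the identity-function "model" stands in for a trained face recognition model under the assumption that the input is already a feature vector:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Preset associations: each facial image to be matched is linked to a piece
# of preset information (here a contact method, as in the example above).
candidate_vectors = {
    "cand_a": [17.0, 6.0, 2.0],
    "cand_b": [14.0, 7.0, 1.0],
}
preset_info = {
    "cand_a": "phone: 555-0100",
    "cand_b": "phone: 555-0101",
}

def fake_face_model(face_image):
    # Stand-in for the pre-trained model: the "image" here is already a
    # feature vector, so the model is simply the identity function.
    return face_image

def handle_request(face_image):
    # Steps 201-204: acquire image, generate features, match the target
    # facial image, then return its associated preset info as target info.
    features = fake_face_model(face_image)
    target = max(candidate_vectors,
                 key=lambda cid: cosine(features, candidate_vectors[cid]))
    return preset_info[target]

pushed = handle_request([13.5, 7.0, 1.0])  # closest to cand_b's vector
```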
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for pushing information according to the present embodiment. In the application scenario of Fig. 3, a user accesses a target page with a mobile phone, as shown by reference numeral 301; the phone may photograph the user and obtain the facial image of the user, as shown by reference numeral 302; then the phone may input the facial image of the user into a pre-trained face recognition model to generate feature information characterizing the facial features of the user, match a target facial image from a preset set of facial images to be matched based on the generated feature information, and determine the preset information in the preset information set associated with the target facial image as target information and push it, as shown by reference numeral 303.
The method provided by the above embodiment of the present application obtains a facial image of a user in response to receiving an operation request of the user for a target page; inputs the facial image into a pre-trained face recognition model to generate feature information characterizing the facial features of the user; matches, based on the generated feature information, a target facial image from a preset set of facial images to be matched, where the facial images to be matched in the set are associated with preset information in a preset information set; and determines the preset information associated with the target facial image in the preset information set as target information and pushes it. The facial features of the user are thereby effectively associated with the pushed information, improving the targeting of information pushing.
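The flow summarized above can be illustrated with a purely non-limiting sketch. The feature vectors, image identifiers, preset-information mapping, and the use of cosine similarity below are hypothetical stand-ins; the disclosure does not prescribe a particular feature representation or similarity measure.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors in [−1, 1]; higher means more alike.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def push_for_user(user_features, to_match_set, preset_info):
    """Match the target facial image and return the associated preset information.

    to_match_set: {image_id: feature_vector} -- the set of facial images to be matched
    preset_info:  {image_id: information}    -- the preset information set
    """
    target_id = max(to_match_set,
                    key=lambda i: cosine_similarity(user_features, to_match_set[i]))
    return preset_info[target_id]  # the target information to be pushed

# Hypothetical data: two facial images to be matched, each associated with preset info.
to_match = {"img_a": [1.0, 0.0], "img_b": [0.6, 0.8]}
info = {"img_a": "promo A", "img_b": "promo B"}
print(push_for_user([0.5, 0.9], to_match, info))  # -> promo B
```

In this sketch the user's feature vector is closest to that of `img_b`, so the information associated with `img_b` is selected for pushing.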
With further reference to Fig. 4, a flow 400 of another embodiment of the method for pushing information is illustrated. The flow 400 of the method for pushing information includes the following steps:
Step 401: in response to receiving an operation request of a user for a target page, obtain a facial image of the user.
In the present embodiment, the execution body of the method for pushing information (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may, in response to receiving the operation request of the user for the target page, obtain the facial image of the user through a wired or wireless connection. Here, the target page is a preset page on which the user is to operate. The operation request may be an access request, a browse request, or the like. It should be noted that the execution body may obtain a locally pre-stored facial image of the user; alternatively, it may photograph the user to obtain the facial image of the user; or it may obtain the facial image of the user sent by a server in communication connection with the execution body (for example, the server 105 shown in Fig. 1).
Step 402: input the facial image into a pre-trained face recognition model to generate feature information characterizing the facial features of the user.
In the present embodiment, based on the facial image obtained in step 401, the execution body may input the facial image into the pre-trained face recognition model to generate feature information characterizing the facial features of the user. The feature information may include, but is not limited to, at least one of the following: text, numbers, symbols, pictures, tables, vectors.
In the present embodiment, the face recognition model may be used to characterize the correspondence between a facial image and feature information characterizing facial features.
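This correspondence can be illustrated with a toy stand-in for the face recognition model: a function that maps an image (here a flat list of pixel intensities) to a fixed-length feature vector via weighted sums. The weights below are hypothetical placeholders; in practice the model would be, for example, a trained convolutional neural network.

```python
def extract_features(pixels, weights):
    """Return one feature per weight row: a weighted sum over pixel intensities."""
    return [sum(w * p for w, p in zip(row, pixels)) for row in weights]

# Hypothetical projection producing 2 features from a 3-"pixel" image.
weights = [[0.5, 0.5, 0.0],
           [0.0, 0.5, 0.5]]
features = extract_features([0.2, 0.4, 0.6], weights)
print([round(f, 3) for f in features])  # -> [0.3, 0.5]
```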
Step 403: obtain gender information of the user.
In the present embodiment, the execution body may obtain the gender information of the user through a wired or wireless connection. The gender information may be used to characterize the gender of the user, and may include, but is not limited to, at least one of the following: text, numbers, symbols. For example, the gender information may be "gender: male". Specifically, the execution body may obtain locally pre-stored gender information of the user; alternatively, it may obtain the gender information of the user sent by a server (for example, the server 105 shown in Fig. 1); or it may determine the gender information of the user from the feature information obtained in step 402.
In some optional implementations of the present embodiment, the execution body may also input the facial image obtained in step 401 into a pre-trained gender identification model to obtain the gender information of the user. Here, the gender identification model may be used to characterize the correspondence between a facial image of a user and the gender information of the user. Specifically, the gender identification model may be a mapping table pre-established by a technician based on statistics of a large number of facial images and gender information, storing the correspondences between multiple facial images and gender information; it may also be a model obtained by training a model for image processing (for example, a convolutional neural network) on training samples in advance using a machine learning method.
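The mapping-table variant of the gender identification model can be sketched as follows. The table entries and the identifiers used to key known facial images are hypothetical; the alternative described above, a trained classifier, is not shown.

```python
# Hypothetical pre-established mapping table from known facial images
# (keyed here by an image identifier) to gender information.
gender_table = {
    "face_001": "gender: male",
    "face_002": "gender: female",
}

def identify_gender(face_id, table=gender_table):
    # Look up the gender information for a facial image; fall back to "unknown".
    return table.get(face_id, "gender: unknown")

print(identify_gender("face_002"))  # -> gender: female
```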
Step 404: match a target facial image from the set of facial images to be matched, based on the generated feature information, the gender information corresponding to each facial image to be matched in the set, and the gender information of the user.
In the present embodiment, based on the feature information generated in step 402, the gender information of the user obtained in step 403, and the gender information corresponding to each facial image to be matched in the set of facial images to be matched, the execution body may match a target facial image from the set of facial images to be matched. Here, for each facial image to be matched in the set, gender information corresponding to that facial image is set in advance. The matched target facial image may be the facial image to be matched with the highest similarity among those whose gender information differs from that corresponding to the facial image of the user; or it may be the facial image to be matched with the highest similarity among those whose gender information is the same as that corresponding to the facial image of the user.
As an example, when a target facial image whose gender information differs from that of the user is to be matched, the execution body may first select, from the set of facial images to be matched, the facial images to be matched whose gender information differs from that of the user as candidate facial images, generating a set of candidate facial images. Then, the execution body may match the feature information of each candidate facial image in the determined set against the feature information corresponding to the user, obtain the candidate facial image with the highest similarity to the facial image of the user, and determine the obtained candidate facial image as the target facial image.
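The opposite-gender matching example above can be sketched as a filter step followed by a best-match step. The candidate data and the dot-product similarity used here are illustrative placeholders, not the claimed similarity calculation.

```python
def match_target(user_feat, user_gender, candidates):
    """candidates: list of (feature_vector, gender, image_id) tuples."""
    # Step 1: keep only candidates whose gender differs from the user's.
    pool = [c for c in candidates if c[1] != user_gender]

    # Step 2: pick the candidate facial image most similar to the user's features.
    def sim(c):
        return sum(a * b for a, b in zip(user_feat, c[0]))
    return max(pool, key=sim)[2]

# Hypothetical set of facial images to be matched.
cands = [([1.0, 0.0], "male",   "img_a"),
         ([0.0, 1.0], "female", "img_b"),
         ([0.7, 0.7], "female", "img_c")]
print(match_target([0.9, 0.4], "male", cands))  # -> img_c
```

For a male user, `img_a` is filtered out and `img_c` wins on similarity, so it becomes the target facial image.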
Step 405: determine the preset information associated with the target facial image in the preset information set as target information and push it.
In the present embodiment, based on the target facial image obtained in step 404, the execution body may determine the preset information associated with the target facial image in the preset information set as the target information and push it. Here, the target information is the information to be pushed to the user. Specifically, the execution body may push the target information to the target page for display.
Steps 401, 402, and 405 above are consistent with steps 201, 202, and 204 in the previous embodiment, respectively. The description above regarding steps 201, 202, and 204 also applies to steps 401, 402, and 405, and is not repeated here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for pushing information in the present embodiment highlights the step of determining the target facial image from the set of facial images to be matched based on gender information. The scheme described in the present embodiment can thus introduce more data related to facial features, further improving the targeting of information pushing.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for pushing information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for pushing information of the present embodiment includes an acquiring unit 501, a generation unit 502, a matching unit 503, and a push unit 504. The acquiring unit 501 is configured to obtain a facial image of a user in response to receiving an operation request of the user for a target page. The generation unit 502 is configured to input the facial image into a pre-trained face recognition model to generate feature information characterizing the facial features of the user, where the face recognition model is used to characterize the correspondence between a facial image and feature information characterizing facial features. The matching unit 503 is configured to match, based on the generated feature information, a target facial image from a preset set of facial images to be matched, where the facial images to be matched in the set are associated with preset information in a preset information set. The push unit 504 is configured to determine the preset information associated with the target facial image in the preset information set as target information and push it.
In the present embodiment, the acquiring unit 501 of the apparatus 500 for pushing information may, in response to receiving the operation request of the user for the target page, obtain the facial image of the user through a wired or wireless connection. Here, the target page is a preset page on which the user is to operate. The operation request may be an access request, a browse request, or the like. It should be noted that the acquiring unit 501 may obtain a locally pre-stored facial image of the user; alternatively, it may photograph the user to obtain the facial image; or it may obtain the facial image of the user sent by a server in communication connection with the acquiring unit 501 (for example, the server 105 shown in Fig. 1).
In the present embodiment, based on the facial image obtained by the acquiring unit 501, the generation unit 502 may input the facial image into the pre-trained face recognition model to generate feature information characterizing the facial features of the user. The feature information may include, but is not limited to, at least one of the following: text, numbers, symbols, pictures, tables, vectors.
In the present embodiment, the face recognition model may be used to characterize the correspondence between a facial image and feature information characterizing facial features. Specifically, the face recognition model may be a mapping table pre-established by a technician based on statistics of a large number of facial images and feature information, storing the correspondences between multiple facial images and feature information; it may also be a model obtained by training a model for image processing on training samples in advance using a machine learning method.
In the present embodiment, based on the feature information generated by the generation unit 502, the matching unit 503 may match a target facial image from the preset set of facial images to be matched. Here, a facial image to be matched may be a facial image preset by a technician for matching against the facial image of the user, or a facial image uploaded in advance by the pre-registered to-be-matched user to whom that facial image corresponds. The target facial image may be the facial image to be matched in the set with the highest similarity to the facial image of the user; or it may be a facial image to be matched in the set that shares certain commonalities with the facial image of the user (for example, one or more identical facial features).
In the present embodiment, the facial images to be matched in the set of facial images to be matched may be associated with the preset information in the preset information set. Here, the preset information may include, but is not limited to, at least one of the following: text, numbers, symbols, pictures, videos, audio, links.
In the present embodiment, based on the target facial image obtained by the matching unit 503, the push unit 504 may determine the preset information associated with the target facial image in the preset information set as target information and push it. Here, the target information is the information to be pushed to the user. Specifically, the push unit 504 may push the target information to the target page for display.
In some optional implementations of the present embodiment, the matching unit 503 may include: a computing module (not shown in the figure), configured to, for each facial image to be matched in the set of facial images to be matched, obtain predetermined feature information to be matched for that facial image and, based on the obtained feature information to be matched and the generated feature information, perform a similarity calculation between that facial image to be matched and the facial image of the user to obtain a calculation result; and a comparison module (not shown in the figure), configured to compare the numerical values of the obtained calculation results and determine the facial image to be matched corresponding to the calculation result with the largest numerical value as the target facial image.
In some optional implementations of the present embodiment, for each facial image to be matched in the set of facial images to be matched, gender information corresponding to that facial image is set in advance; and the matching unit 503 may further include: an acquisition module (not shown in the figure), configured to obtain the gender information of the user; and a matching module (not shown in the figure), configured to match a target facial image from the set of facial images to be matched based on the generated feature information, the gender information corresponding to each facial image to be matched in the set, and the gender information of the user.
In some optional implementations of the present embodiment, the acquisition module may be further configured to input the facial image of the user into a pre-trained gender identification model to obtain the gender information of the user, where the gender identification model may be used to characterize the correspondence between a facial image of a user and the gender information of the user.
In some optional implementations of the present embodiment, the face recognition model may be obtained by training as follows: obtain multiple sample facial images of multiple sample users, and obtain pre-calibrated sample feature information corresponding to each of the multiple sample facial images, where the sample feature information may be used to characterize the facial features of the sample user to whom the sample facial image corresponds; then, using a machine learning method, take each sample facial image in the multiple sample facial images as input, take the pre-calibrated sample feature information corresponding to each sample facial image as output, and train to obtain the face recognition model.
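The input/output roles in this training procedure can be sketched with a deliberately tiny stand-in: one scalar "pixel" per sample image, one scalar of pre-calibrated feature information, and gradient descent on a single linear weight. A production face recognition model would be a deep network; only the training setup described above is illustrated, and all data are hypothetical.

```python
def train(samples, labels, lr=0.1, epochs=200):
    """Fit a single weight so that w * image approximates the calibrated feature info."""
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = w * x              # sample facial image as input
            w -= lr * (pred - y) * x  # gradient of squared error vs. calibrated output
    return w

# One-"pixel" sample images with calibrated feature information y = 2x.
w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))  # -> 2.0
```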
In the apparatus 500 provided by the above embodiment of the present application, the acquiring unit 501 obtains a facial image of a user in response to receiving an operation request of the user for a target page; the generation unit 502 inputs the facial image into a pre-trained face recognition model to generate feature information characterizing the facial features of the user, where the face recognition model is used to characterize the correspondence between a facial image and feature information characterizing facial features; the matching unit 503 matches, based on the generated feature information, a target facial image from a preset set of facial images to be matched, where the facial images to be matched in the set are associated with preset information in a preset information set; and the push unit 504 determines the preset information associated with the target facial image in the preset information set as target information and pushes it. The facial features of the user are thereby effectively associated with the pushed information, improving the targeting of information pushing.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 suitable for implementing the terminal device of the embodiments of the present application is shown. The terminal device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including, for example, a cathode ray tube (CRT) or liquid crystal display (LCD), and a loudspeaker; a storage section 608 including a hard disk and the like; and a communications section 609 including a network interface card such as a LAN card or a modem. The communications section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communications section 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the method of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, and the like, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software or hardware. The described units may also be arranged in a processor; for example, a processor may be described as comprising an acquiring unit, a generation unit, a matching unit, and a push unit. The names of these units do not, under certain circumstances, constitute a limitation of the units themselves; for example, the acquiring unit may also be described as "a unit for obtaining the facial image of a user".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being incorporated into the apparatus. The above computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: obtain a facial image of a user in response to receiving an operation request of the user for a target page; input the facial image into a pre-trained face recognition model to generate feature information characterizing the facial features of the user, where the face recognition model is used to characterize the correspondence between a facial image and feature information characterizing facial features; match, based on the generated feature information, a target facial image from a preset set of facial images to be matched, where the facial images to be matched in the set are associated with preset information in a preset information set; and determine the preset information associated with the target facial image in the preset information set as target information and push it.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of invention involved in the present application is not limited to technical schemes formed by the specific combination of the above technical features, but should also cover, without departing from the above inventive concept, other technical schemes formed by any combination of the above technical features or their equivalent features, for example, technical schemes formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (12)
1. A method for pushing information, comprising:
in response to receiving an operation request of a user for a target page, obtaining a facial image of the user;
inputting the facial image into a pre-trained face recognition model to generate feature information characterizing the facial features of the user, wherein the face recognition model is used to characterize the correspondence between a facial image and feature information characterizing facial features;
matching, based on the generated feature information, a target facial image from a preset set of facial images to be matched, wherein the facial images to be matched in the set of facial images to be matched are associated with preset information in a preset information set; and
determining the preset information associated with the target facial image in the preset information set as target information and pushing it.
2. The method according to claim 1, wherein the matching, based on the generated feature information, a target facial image from a preset set of facial images to be matched comprises:
for each facial image to be matched in the set of facial images to be matched, obtaining predetermined feature information to be matched for that facial image, and performing, based on the obtained feature information to be matched and the generated feature information, a similarity calculation between that facial image to be matched and the facial image of the user to obtain a calculation result; and
comparing the numerical values of the obtained calculation results, and determining the facial image to be matched corresponding to the calculation result with the largest numerical value as the target facial image.
3. The method according to claim 1, wherein, for each facial image to be matched in the set of facial images to be matched, gender information corresponding to that facial image is set in advance; and
the matching, based on the generated feature information, a target facial image from the set of facial images to be matched comprises:
obtaining gender information of the user; and
matching a target facial image from the set of facial images to be matched based on the generated feature information, the gender information corresponding to each facial image to be matched in the set of facial images to be matched, and the gender information of the user.
4. The method according to claim 3, wherein the obtaining gender information of the user comprises:
inputting the facial image of the user into a pre-trained gender identification model to obtain the gender information of the user, wherein the gender identification model is used to characterize the correspondence between a facial image of a user and the gender information of the user.
5. The method according to one of claims 1-4, wherein the face recognition model is obtained by training as follows:
obtaining multiple sample facial images of multiple sample users, and obtaining pre-calibrated sample feature information corresponding to each sample facial image in the multiple sample facial images, wherein the sample feature information is used to characterize the facial features of the sample user to whom the sample facial image corresponds; and
using a machine learning method, taking each sample facial image in the multiple sample facial images as input, taking the pre-calibrated sample feature information corresponding to each sample facial image as output, and training to obtain the face recognition model.
6. An apparatus for pushing information, comprising:
an acquiring unit, configured to obtain a facial image of a user in response to receiving an operation request of the user for a target page;
a generation unit, configured to input the facial image into a pre-trained face recognition model to generate feature information characterizing the facial features of the user, wherein the face recognition model is used to characterize the correspondence between a facial image and feature information characterizing facial features;
a matching unit, configured to match, based on the generated feature information, a target facial image from a preset set of facial images to be matched, wherein the facial images to be matched in the set of facial images to be matched are associated with preset information in a preset information set; and
a push unit, configured to determine the preset information associated with the target facial image in the preset information set as target information and push it.
7. The apparatus according to claim 6, wherein the matching unit comprises:
a computing module, configured to, for each facial image to be matched in the set of facial images to be matched, obtain predetermined feature information to be matched for that facial image, and perform, based on the obtained feature information to be matched and the generated feature information, a similarity calculation between that facial image to be matched and the facial image of the user to obtain a calculation result; and
a comparison module, configured to compare the numerical values of the obtained calculation results, and determine the facial image to be matched corresponding to the calculation result with the largest numerical value as the target facial image.
8. The apparatus according to claim 6, wherein, for each facial image to be matched in the set of facial images to be matched, gender information corresponding to that facial image is set in advance; and
the matching unit further comprises:
an acquisition module, configured to obtain gender information of the user; and
a matching module, configured to match a target facial image from the set of facial images to be matched based on the generated feature information, the gender information corresponding to each facial image to be matched in the set of facial images to be matched, and the gender information of the user.
9. The apparatus according to claim 8, wherein the acquisition module is further configured to:
input the facial image of the user into a pre-trained gender identification model to obtain the gender information of the user, wherein the gender identification model is used to characterize the correspondence between a facial image of a user and the gender information of the user.
10. The apparatus according to one of claims 6-9, wherein the face recognition model is obtained by training as follows:
obtaining multiple sample facial images of multiple sample users, and obtaining pre-calibrated sample feature information corresponding to each sample facial image in the multiple sample facial images, wherein the sample feature information is used to characterize the facial features of the sample user to whom the sample facial image corresponds; and
using a machine learning method, taking each sample facial image in the multiple sample facial images as input, taking the pre-calibrated sample feature information corresponding to each sample facial image as output, and training to obtain the face recognition model.
11. A terminal, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-5.
12. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810371349.4A CN108595628A (en) | 2018-04-24 | 2018-04-24 | Method and apparatus for pushed information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108595628A true CN108595628A (en) | 2018-09-28 |
Family
ID=63614794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810371349.4A Pending CN108595628A (en) | 2018-04-24 | 2018-04-24 | Method and apparatus for pushed information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108595628A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150009118A1 (en) * | 2013-07-03 | 2015-01-08 | Nvidia Corporation | Intelligent page turner and scroller |
CN105160312A (en) * | 2015-08-27 | 2015-12-16 | 南京信息工程大学 | Recommendation method for star face make up based on facial similarity match |
CN106504104A (en) * | 2016-10-27 | 2017-03-15 | 江西瓷肌电子商务有限公司 | A kind of method of social activity of being made friends based on face recognition |
CN107302492A (en) * | 2017-06-28 | 2017-10-27 | 歌尔科技有限公司 | Friend-making requesting method, server, client terminal device and the system of social software |
CN107563336A (en) * | 2017-09-07 | 2018-01-09 | 廖海斌 | Human face similarity degree analysis method, the device and system of game are matched for famous person |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109559193A (en) * | 2018-10-26 | 2019-04-02 | 深圳壹账通智能科技有限公司 | Product method for pushing, device, computer equipment and the medium of intelligent recognition |
CN109389182A (en) * | 2018-10-31 | 2019-02-26 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating information |
CN111259698B (en) * | 2018-11-30 | 2023-10-13 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring image |
CN111259695A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring information |
CN111259698A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring image |
CN111259695B (en) * | 2018-11-30 | 2023-08-29 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring information |
CN109766774A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | User information collection method, apparatus, computer equipment and storage medium |
CN109933723A (en) * | 2019-03-07 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | Method and apparatus for pushed information |
CN110135889A (en) * | 2019-04-15 | 2019-08-16 | 深圳壹账通智能科技有限公司 | Method, server and the storage medium of intelligent recommendation book list |
CN110458647A (en) * | 2019-07-05 | 2019-11-15 | 深圳壹账通智能科技有限公司 | Product method for pushing, device, computer equipment and storage medium |
CN110442783A (en) * | 2019-07-05 | 2019-11-12 | 深圳壹账通智能科技有限公司 | Information-pushing method, device based on recognition of face, computer equipment |
WO2021004137A1 (en) * | 2019-07-05 | 2021-01-14 | 深圳壹账通智能科技有限公司 | Information pushing method and apparatus based on face recognition and computer device |
CN110362752A (en) * | 2019-08-12 | 2019-10-22 | 珠海格力电器股份有限公司 | Information pushing method and device and computer readable storage medium |
US11210563B2 (en) | 2019-08-27 | 2021-12-28 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for processing image |
CN110516099A (en) * | 2019-08-27 | 2019-11-29 | 北京百度网讯科技有限公司 | Image processing method and device |
CN111062995A (en) * | 2019-11-28 | 2020-04-24 | 重庆中星微人工智能芯片技术有限公司 | Method and device for generating face image, electronic equipment and computer readable medium |
CN110942033B (en) * | 2019-11-28 | 2023-05-26 | 重庆中星微人工智能芯片技术有限公司 | Method, device, electronic equipment and computer medium for pushing information |
CN110942033A (en) * | 2019-11-28 | 2020-03-31 | 重庆中星微人工智能芯片技术有限公司 | Method, apparatus, electronic device and computer medium for pushing information |
CN111062995B (en) * | 2019-11-28 | 2024-02-23 | 重庆中星微人工智能芯片技术有限公司 | Method, apparatus, electronic device and computer readable medium for generating face image |
CN111626521A (en) * | 2020-06-02 | 2020-09-04 | 上海商汤智能科技有限公司 | Tour route generation method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108595628A (en) | Method and apparatus for pushed information | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN108830235A (en) | Method and apparatus for generating information | |
CN109086719A (en) | Method and apparatus for output data | |
CN109993150A (en) | The method and apparatus at age for identification | |
CN108989882A (en) | Method and apparatus for exporting the snatch of music in video | |
CN108960316A (en) | Method and apparatus for generating model | |
CN109255337A (en) | Face critical point detection method and apparatus | |
CN109815365A (en) | Method and apparatus for handling video | |
CN109299477A (en) | Method and apparatus for generating text header | |
CN108280200A (en) | Method and apparatus for pushed information | |
CN108335390A (en) | Method and apparatus for handling information | |
CN108960110A (en) | Method and apparatus for generating information | |
CN110084317A (en) | The method and apparatus of image for identification | |
CN109446442A (en) | Method and apparatus for handling information | |
CN108776692A (en) | Method and apparatus for handling information | |
CN110046571A (en) | The method and apparatus at age for identification | |
CN109117758A (en) | Method and apparatus for generating information | |
CN109862100A (en) | Method and apparatus for pushed information | |
CN108595448A (en) | Information-pushing method and device | |
CN109408748A (en) | Method and apparatus for handling information | |
CN109389182A (en) | Method and apparatus for generating information | |
CN108629011A (en) | Method and apparatus for sending feedback information | |
CN108446658A (en) | The method and apparatus of facial image for identification | |
CN108573054A (en) | Method and apparatus for pushed information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20180928 |