CN110472087A - Facial expression image recommendation method, apparatus, device and medium - Google Patents

Facial expression image recommendation method, apparatus, device and medium

Info

Publication number
CN110472087A
CN110472087A (application CN201910725995.0A)
Authority
CN
China
Prior art keywords
facial expression
expression image
target object
vector
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910725995.0A
Other languages
Chinese (zh)
Inventor
刘龙坡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201910725995.0A
Publication of CN110472087A
Legal status: Pending

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L51/10 - Multimedia information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This application discloses a facial expression image recommendation method, apparatus, device and medium. The method comprises: generating a target bipartite topology graph of a target object; determining a feature vector of the target object based on the target bipartite topology graph; determining the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in a set of facial expression images to be recommended; and recommending facial expression images to the target object based on the vector similarity. With this application, the real needs of the user can be taken into account comprehensively, and the effect of cold-start facial expression image recommendation can be improved effectively.

Description

Facial expression image recommendation method, apparatus, device and medium
Technical field
This application relates to the field of computer technology, and in particular to a facial expression image recommendation method, apparatus, device and medium.
Background technique
With the development of Internet technology, the number of facial expression images contained in the expression resources of applications on a terminal keeps growing. Faced with more and more facial expression images, some of them are usually recommended to a target user, for example the facial expression images used by other users with similar interests, or currently popular facial expression images.
However, existing facial expression image recommendation methods suffer from the cold-start problem and do not take the real needs of the user into account, so the recommendation effect still needs to be improved.
Summary of the invention
This application provides a facial expression image recommendation method, apparatus, device and medium, to solve at least one of the technical problems above.
In one aspect, this application provides a facial expression image recommendation method, comprising:
generating a target bipartite topology graph of a target object based on historical-interest facial expression image information of the target object and the similarity between the facial expression images in the historical-interest facial expression image information;
determining a feature vector of the target object based on the target bipartite topology graph of the target object, the feature vector of the target object characterizing features of the facial expression images the target object is interested in;
determining the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in a set of facial expression images to be recommended;
recommending facial expression images to the target object based on the vector similarity.
In another aspect, this application further provides a facial expression image recommendation apparatus, comprising:
a topology graph generation module, configured to generate a target bipartite topology graph of a target object based on historical-interest facial expression image information of the target object and the similarity between the facial expression images in the historical-interest facial expression image information;
a vector determination module, configured to determine a feature vector of the target object based on the target bipartite topology graph of the target object, the feature vector of the target object characterizing features of the facial expression images the target object is interested in;
a similarity determination module, configured to determine the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in a set of facial expression images to be recommended;
a recommendation module, configured to recommend facial expression images to the target object based on the vector similarity.
In another aspect, a device is further provided. The device comprises a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement any facial expression image recommendation method described above.
In another aspect, a computer storage medium is further provided. The storage medium stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to perform any facial expression image recommendation method described above.
The facial expression image recommendation method, apparatus, device and medium provided by this application have the following technical effects:
An embodiment of this application generates a target bipartite topology graph of a target object based on historical-interest facial expression image information of the target object and the similarity between the facial expression images in that information; determines, based on the target bipartite topology graph, a feature vector of the target object that characterizes the features of the facial expression images the target object is interested in; determines the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in a set of facial expression images to be recommended; and recommends facial expression images to the target object based on the vector similarity. The method makes effective use of both the content features of the target object's historical-interest expressions and the topological-structure information between those expressions, and combines the two to determine the target object's node information in the target bipartite topology graph, so that the real needs of the user are considered comprehensively and the effect of cold-start facial expression image recommendation is improved effectively.
Detailed description of the invention
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of this application;
Fig. 2 is a schematic flowchart of a facial expression image recommendation method provided by an embodiment of this application;
Fig. 3 is a schematic flowchart of generating a target bipartite topology graph provided by an embodiment of this application;
Fig. 4a is a schematic structural diagram of a target bipartite topology graph provided by an embodiment of this application;
Fig. 4b is a schematic structural diagram of another target bipartite topology graph provided by an embodiment of this application;
Fig. 5 is a schematic diagram of the training of a feature vector model provided by an embodiment of this application;
Fig. 6 is a block diagram of a facial expression image recommendation apparatus provided by an embodiment of this application;
Fig. 7 is a schematic diagram of the hardware structure of a device for implementing the method provided by an embodiment of this application.
Specific embodiment
To make a person skilled in the art better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
To make the objectives, technical solutions and advantages of this application clearer, the implementations of this application are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, it shows a schematic diagram of an implementation environment provided by an embodiment of this application. The implementation environment may include a terminal 120 and a server 140 connected to the terminal 120 through a network. The terminal 120 may specifically include software running on a physical device, such as an application installed on the device, and may also include at least one of physical devices such as a smartphone, a desktop computer, a tablet computer, a laptop computer, a digital assistant or a smart wearable device on which the application is installed. Specifically, an operating system runs on the terminal; the operating system may be a desktop operating system such as Windows, Linux or Mac OS, or a mobile operating system such as iOS or Android.
The server 140 may be an independent server, a server cluster composed of multiple independent servers, or a cloud computing service center.
It should be understood that the implementation environment shown in Fig. 1 is only one application environment of the solutions of this application and does not limit their application environments; other application environments may include more or fewer computer devices than shown in the figure, or different network connection relationships between the devices.
The execution subject of the facial expression image recommendation method of this application may be the server 140. The server 140 may recommend facial expression images according to a user's request, or recommend facial expression images to the user automatically. The number of terminals is not limited to that shown in Fig. 1 and may be multiple; correspondingly, the number of users may also be multiple, and facial expression images to be recommended may be determined and recommended for multiple users at the same time.
It should be noted that, if the terminal 120 has sufficient processing and storage capacity, the facial expression image recommendation method of this application may also be performed by the terminal 120 alone, or jointly by the terminal 120 and the server 140.
A specific embodiment of the facial expression image recommendation method of this application is introduced below. Fig. 2 is a schematic flowchart of a facial expression image recommendation method provided by an embodiment of this application. This application provides the method operation steps described in the embodiments or flowcharts, but more or fewer operation steps may be included without creative effort. The order of the steps listed in the embodiments is only one of many possible execution orders and does not represent the only execution order. As shown in Fig. 2, the execution subject of the method is the server 140 in Fig. 1, and the method may include:
S201: generate a target bipartite topology graph of a target object based on historical-interest facial expression image information of the target object and the similarity between the facial expression images in the historical-interest facial expression image information.
In this embodiment of this application, the target object may be a user for whom recommended expressions need to be obtained. Specifically, the target object may be a user who is using a certain application on a terminal and needs to obtain recommended expressions from the expression resource or expression library of that application. The application includes, but is not limited to, an instant messaging program (such as QQ, WeChat or Weibo), a video program, a short video program, a chat tool, an interactive program, or any other application that uses facial expression images.
The historical-interest facial expression image information may include at least one of the following: information on facial expression images the user has used (expressions used in the application or on the terminal), information on facial expression images the user has collected, and information on facial expression images involved in other historical operations of the user (such as likes, comments, complaints or reports). A facial expression image may be a static facial expression image, a dynamic facial expression image, or an expression pack composed of multiple facial expression images. The facial expression image information may include the relationship between a facial expression image and a user (whether it is a facial expression image a certain user is interested in), the content of the facial expression image, the category it belongs to, the label corresponding to the expression, and other feature data of the facial expression image.
A bipartite topology graph is a relational network graph that characterizes nodes and the relationships between them. It can be expressed as G = (V, E), where V denotes the node set of the bipartite topology graph (for example, V = v1, ..., vn), each node corresponding to an entity (for example, a user or an expression), and E denotes the set of connection edges between associated nodes in the bipartite topology graph (for example, E = e1, ..., en), each connection edge reflecting the association between the two nodes it connects.
The historical-interest facial expression image information of each target object contains multiple facial expression images, each different from the others. Whether an association exists between two expressions can be determined by computing the similarity between the facial expression images: based on the similarity, it is decided whether the two facial expression images are associated, and if an association is determined, the two are connected in the bipartite topology graph. When the bipartite topology graph is later constructed, a connection edge can be set between the nodes corresponding to associated facial expression images.
In an embodiment, Fig. 3 is a schematic flowchart of generating a target bipartite topology graph provided by an embodiment of this application. As shown in Fig. 3, the method may include:
S301: generate a first bipartite topology graph of the target object based on the target object and the historical-interest facial expression image information of the target object.
Specifically, the first bipartite topology graph includes a target object node corresponding to the target object and facial expression image nodes corresponding to the facial expression images in the historical-interest facial expression image information. The first bipartite topology graph also includes connection edges between the target object node and the facial expression image nodes; such a connection edge indicates that the facial expression image corresponding to the facial expression image node is one the target object corresponding to the target object node is interested in, that is, the target object has used, collected or otherwise operated on that facial expression image.
S303: determine the connection relationship between every two facial expression image nodes based on the similarity between the two nodes and a first similarity threshold.
Specifically, the similarity between two facial expression image nodes is the similarity between the corresponding facial expression images; it can be obtained by computing the vector distance (such as the cosine distance) between the expression vectors corresponding to the two nodes and using it as the similarity between the two facial expression image nodes. Then the similarity between every two facial expression image nodes is compared in turn with the first similarity threshold, and the connection relationship between the two nodes is determined based on the comparison result.
If the similarity between two facial expression image nodes is greater than or equal to the first similarity threshold, it is determined that the facial expression images corresponding to the two nodes are associated, that is, the two nodes have a connection relationship (a connection edge exists); otherwise, it is determined that the corresponding facial expression images are not associated and no connection relationship exists between the two nodes. The first similarity threshold is a preset similarity threshold and can be adjusted appropriately according to the actual recommendation effect or human experience.
S305: determine the target bipartite topology graph of the target object based on the determined connection relationships between every two facial expression image nodes and the first bipartite topology graph.
Specifically, after the first bipartite topology graph is constructed and the connection relationships between every two facial expression image nodes in it are determined, the network structure of the bipartite topology graph can be rebuilt based on these connection relationships, i.e., the facial expression image nodes that have a connection relationship are connected, thereby obtaining the target bipartite topology graph of the target object. The target bipartite topology graph contains the target object node, the facial expression image nodes, the connection edges between the target object node and the facial expression image nodes, and also the connection edges between facial expression image nodes.
Fig. 4a is a schematic structural diagram of a target bipartite topology graph provided by an embodiment of this application. As shown in Fig. 4a, the target bipartite topology graph includes user node u1, facial expression image nodes t1, t2, t3 and t4, connection edges u1t1 to u1t4, and connection edges t1t2 and t3t4.
Fig. 4b is a schematic structural diagram of another target bipartite topology graph provided by an embodiment of this application. As shown in Fig. 4b, this target bipartite topology graph includes user nodes u2 and u3, facial expression image nodes t1 to t6, connection edges u2t1 to u2t3 and u3t4 to u3t6, and connection edges t2t3, t3t5 and t5t6.
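As an illustration of steps S301 to S305, the following is a minimal sketch that builds such a target bipartite topology graph. It assumes an embedding is already available for each historical-interest facial expression image; the use of networkx, the function names and the threshold value are illustrative assumptions, not prescribed by this application.

```python
# Sketch of S301-S305: build the target bipartite topology graph.
# Assumption: each historical-interest expression image already has an
# embedding (e.g. a 512-dim numpy vector); names here are illustrative only.
import numpy as np
import networkx as nx

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def build_target_graph(user_id, interest_images, first_sim_threshold=0.8):
    """interest_images: dict mapping image_id -> embedding (np.ndarray)."""
    g = nx.Graph()
    g.add_node(("user", user_id))
    # First bipartite topology graph: one edge per user / interest-image pair.
    for img_id in interest_images:
        g.add_node(("img", img_id))
        g.add_edge(("user", user_id), ("img", img_id))
    # S303/S305: connect image nodes whose similarity reaches the first threshold.
    ids = list(interest_images)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            sim = cosine(interest_images[ids[i]], interest_images[ids[j]])
            if sim >= first_sim_threshold:
                g.add_edge(("img", ids[i]), ("img", ids[j]), weight=sim)
    return g
```

The graph in Fig. 4a, for example, would be obtained for a user u1 with four interest images where only the pairs (t1, t2) and (t3, t4) reach the threshold.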
Specifically, a user may send a request for recommended expressions to the server by triggering an expression recommendation function key or an expression search function key of a certain application on the terminal. The server responds to the request and generates the target bipartite topology graph of the target object according to the user's historical-interest facial expression image information and the similarity between the facial expression images in that information, so that the feature vector of the target object can subsequently be extracted based on the target bipartite topology graph and the corresponding facial expression images can then be recommended.
S203: determine the feature vector of the target object based on the target bipartite topology graph of the target object; the feature vector of the target object characterizes the features of the facial expression images the target object is interested in.
In this embodiment of this application, the feature vector of the target object is feature data that characterizes the facial expression images the target object is interested in. The feature data includes at least the content feature data of the interest facial expression images and the topological-structure feature data between the interest facial expression images.
In an embodiment, determining the feature vector of the target object based on the target bipartite topology graph of the target object may include:
S501: perform feature-vector representation learning on the target bipartite topology graph of the target object based on a feature vector model, to obtain the feature vector of the target object.
Exemplarily, the feature vector model is obtained by training an initial neural network model with a training sample set consisting of training objects and the positive and negative facial expression images corresponding to the training objects; the positive and negative facial expression images include interest facial expression images (i.e. positive sample images) and non-interest facial expression images (i.e. negative sample images). The initial neural network model includes, but is not limited to, a common convolutional neural network model, for example a graph neural network (GNN) model such as a graph convolutional neural network (at least one of GCN, GAN, GAE, GGN, GSTN). The training process of the feature vector model is described in detail later.
By constructing the feature vector model, the content features of the target object's historical-interest facial expression images and the topological-structure information between those images can be used effectively, and the two are combined to learn the target object's node information and make recommendations, further improving the effect of cold-start recommendation.
In another embodiment, each node of the target bipartite topology graph can be mapped to a vector in a space by a network mapping technique, and the vectors corresponding to all the nodes related to a certain target object node are used as the feature vector of the target object corresponding to that node. The network mapping technique includes, but is not limited to, graph representation learning algorithms such as DeepWalk and Node2vec. A minimal DeepWalk-style sketch of this alternative is given below.
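The following sketch illustrates that network-mapping alternative: uniform random walks over the bipartite topology graph are fed to a skip-gram model, yielding one vector per node. The walk parameters, the embedding dimension and the use of gensim's Word2Vec are assumptions for illustration and are not taken from this application.

```python
# Sketch of the network-mapping alternative: DeepWalk-style embeddings for
# the nodes of the bipartite topology graph built above.
import random
from gensim.models import Word2Vec

def random_walks(g, num_walks=10, walk_length=20):
    walks = []
    nodes = list(g.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(g.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(n) for n in walk])       # Word2Vec expects strings
    return walks

def node_embeddings(g, dim=128):
    walks = random_walks(g)
    model = Word2Vec(walks, vector_size=dim, window=5, min_count=0, sg=1)
    return {n: model.wv[str(n)] for n in g.nodes()}
```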
S205: determine the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in the set of facial expression images to be recommended.
In this embodiment of this application, the set of facial expression images to be recommended may be stored in the form of a database on the local server, or on at least one other server, cloud or storage node. The set contains multiple facial expression images to be recommended; each facial expression image to be recommended may be a single static image or dynamic image, or multiple static images and/or dynamic images (such as an expression pack).
After the facial expression images to be recommended are obtained, the expression vector that characterizes the image features of each facial expression image to be recommended can be extracted. Then, based on a preset similarity algorithm, the vector similarity between the feature vector and the expression vector is computed. The similarity algorithm includes, but is not limited to, the cosine similarity algorithm.
In one embodiment, determining the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in the set of facial expression images to be recommended comprises:
S601: obtain the key image corresponding to each facial expression image to be recommended in the set of facial expression images to be recommended.
Specifically, if the facial expression image to be recommended is a static image, the static image itself is used as the key image. If the expression to be recommended is an expression pack, one facial expression image can be extracted from the pack: if the extracted facial expression image is a static image, it is used as the key image; if the extracted facial expression image is a dynamic image, a key frame of the dynamic image needs to be extracted as the key image. The format of the key image can be a common picture format (such as jpg, tif or img).
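A minimal sketch of step S601 follows, assuming the candidate expression is available as an image file; using PIL and taking the first frame of an animated image as the key frame are assumptions made here for illustration.

```python
# Sketch of S601: pick a key image for a candidate expression image.
# A static file is used directly; for a GIF one frame is saved as a JPEG.
from PIL import Image

def key_image(path, out_path="key.jpg"):
    img = Image.open(path)
    if getattr(img, "is_animated", False):
        img.seek(0)              # take one frame of the animation as the key frame
    img.convert("RGB").save(out_path, "JPEG")
    return out_path
```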
S603: extract the image features of the key image corresponding to each facial expression image to be recommended, to obtain the expression vector of each facial expression image to be recommended.
Specifically, the image features of the key image corresponding to each facial expression image to be recommended can be extracted with a convolutional network model, to obtain the expression vector that characterizes the image features of each facial expression image to be recommended. The convolutional network model includes, but is not limited to, models such as resnet18, resnet50 and VGG. The image features include image content features, image attribute features, image category features, image label features, and so on.
S605: determine the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended; the feature vector and the expression vector have the same vector dimension.
Specifically, the vector distance (such as the cosine distance) between the feature vector of the target object and the extracted expression vector of each facial expression image to be recommended is computed, and this vector distance is used as the vector similarity between the feature vector and the expression vector.
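The following sketch covers S603 and S605, assuming a torchvision resnet18 with the final fully connected layer removed produces the 512-dimensional expression vector and that the target object's feature vector is a tensor of the same dimension, as required above; the ImageNet preprocessing values are conventional assumptions, not taken from this application.

```python
# Sketch of S603-S605: 512-dim expression vector from a resnet18 backbone
# and the cosine similarity against the user's feature vector.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

backbone = torch.nn.Sequential(
    *list(models.resnet18(weights="IMAGENET1K_V1").children())[:-1])  # drop fc layer
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def expression_vector(jpg_path):
    x = preprocess(Image.open(jpg_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        v = backbone(x).flatten(1)                       # shape (1, 512)
    return v.squeeze(0)

def vector_similarity(user_vec, expr_vec):
    return F.cosine_similarity(user_vec.unsqueeze(0), expr_vec.unsqueeze(0)).item()
```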
S207: recommend facial expression images to the target object based on the vector similarity.
In this embodiment of this application, at least one target recommended facial expression image corresponding to the largest vector similarity values is determined based on the vector similarity; the determined target recommended facial expression images are then recommended to the target object. Specifically, after at least one target recommended facial expression image is determined, it can be recommended to the user corresponding to the identifier of the target object or to the terminal where the target object is located.
In another embodiment, recommending facial expression images to the target object based on the vector similarity may include:
sorting the multiple facial expression images to be recommended in the set to be recommended based on the vector similarity, to obtain a facial expression image recommendation list;
recommending, based on the facial expression image recommendation list, a preset number of top-ranked facial expression images to the target object; and/or
determining at least one target recommended facial expression image whose vector similarity is greater than a second similarity threshold;
and recommending the determined target recommended facial expression images to the target object. A sketch of both variants is given after this paragraph.
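The sketch below illustrates both variants of S207, reusing the vector_similarity helper from the previous sketch; the list length and the second similarity threshold value are illustrative assumptions.

```python
# Sketch of S207: rank candidates by vector similarity and recommend either
# the top-N of the recommendation list or every candidate above a second
# similarity threshold.
def recommend(user_vec, candidates, top_n=5, second_sim_threshold=None):
    """candidates: dict mapping image_id -> expression vector (torch tensor)."""
    scored = [(img_id, vector_similarity(user_vec, v)) for img_id, v in candidates.items()]
    scored.sort(key=lambda item: item[1], reverse=True)        # recommendation list
    if second_sim_threshold is not None:
        return [img_id for img_id, s in scored if s > second_sim_threshold]
    return [img_id for img_id, _ in scored[:top_n]]
```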
In this embodiment of this application, a target bipartite topology graph of the target object is generated based on the historical-interest facial expression image information of the target object and the similarity between the facial expression images in that information; the feature vector of the target object, which characterizes the features of the facial expression images the target object is interested in, is determined based on the target bipartite topology graph; the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in the set of facial expression images to be recommended is determined; and facial expression images are recommended to the target object based on the vector similarity. The method makes effective use of the content features of the target object's historical-interest expressions and the topological-structure information between them, and combines the two to determine the target object's node information in the target bipartite topology graph, so that the real needs of the user are considered comprehensively and the effect of cold-start facial expression image recommendation is improved effectively.
The training embodiment of the feature vector model provided by this application is introduced below. Fig. 5 is a schematic diagram of the training of a feature vector model provided by an embodiment of this application. The training steps can be executed by the server 140 in Fig. 1, or by another device, in which case the server 140 only obtains the constructed feature vector model. As shown in Fig. 5, taking a graph convolutional neural network model (GCN model) as the initial neural network model as an example, the training process of the feature vector model comprises:
S701: construct a training sample set, the training sample set including several training objects and the positive and negative facial expression image information of the training objects after a preset time; the positive and negative facial expression image information includes interest facial expression images and non-interest facial expression images.
Specifically, taking expression packs as the facial expression images as an example, a GIF image is randomly selected from each expression pack, a key frame of that image is extracted and saved as a jpg image, which serves as the representative image of the expression pack. Feature extraction is performed on the image with resnet18 to obtain a 512-dimensional feature vector as the expression vector of the expression pack, so that a feature vector representation of every expression pack is obtained. The positive and negative facial expression image information corresponding to each training object (user) is then obtained: if the training object downloaded expression pack i after time τ, then i is considered an interest facial expression image and positive facial expression image information (a positive sample) is obtained; an expression pack j is then randomly selected from the expression packs the training object has not downloaded or used, j is considered a non-interest facial expression image, and negative facial expression image information (a negative sample) is obtained. The training sample set is thus constructed; a sketch follows.
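A minimal sketch of this training-sample construction, under the assumption that download records are available as simple Python structures; the data-access layout and field names are hypothetical.

```python
# Sketch of S701: build (user, positive pack, negative pack) training triples.
# A pack downloaded after time tau is a positive sample; a randomly chosen
# pack the user never downloaded or used is a negative sample.
import random

def build_training_samples(users, downloads, all_packs, tau):
    """downloads: dict mapping user -> list of (pack_id, download_time)."""
    samples = []
    for user in users:
        used = {p for p, _ in downloads.get(user, [])}
        positives = [p for p, t in downloads.get(user, []) if t > tau]
        unused = [p for p in all_packs if p not in used]
        for pos in positives:
            if unused:
                samples.append((user, pos, random.choice(unused)))
    return samples
```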
S703: generate a sample bipartite topology graph of the training object based on the interest facial expression image information of the training object before the preset time and the similarity between the facial expression images in that information.
After the training sample set is constructed, the sample bipartite topology graph can be built. The interest facial expression image information here includes the facial expression images the training object downloaded, collected or operated on before time τ; that information and the similarity between the facial expression images in it are used to construct the sample bipartite topology graph of the training object.
Specifically, generating the sample bipartite topology graph of the training object based on the interest facial expression image information of the training object before the preset time and the similarity between the facial expression images in it comprises:
S7031: generate a first sample bipartite topology graph of the training object based on the training object and the interest facial expression image information of the training object before the preset time.
Specifically, the first sample bipartite topology graph includes a training object node corresponding to the training object and facial expression image nodes corresponding to the facial expression images in the interest facial expression image information. The first sample bipartite topology graph also includes connection edges between the training object node and the facial expression image nodes; such a connection edge indicates that the facial expression image corresponding to the facial expression image node is one the training object corresponding to the training object node is interested in.
S7033: determine the connection relationship between every two facial expression image nodes based on the similarity between the two nodes and the first similarity threshold.
Specifically, the similarity between two facial expression image nodes is the similarity between the corresponding facial expression images; it can be obtained by computing the vector distance (such as the cosine distance) between the expression vectors corresponding to the two nodes and using it as the similarity between the two facial expression image nodes. Then the similarity between every two facial expression image nodes is compared in turn with the first similarity threshold, and the connection relationship between the two nodes is determined based on the comparison result.
If the similarity between two facial expression image nodes is greater than or equal to the first similarity threshold, it is determined that the facial expression images corresponding to the two nodes are associated, that is, the two nodes have a connection relationship (a connection edge exists); otherwise, it is determined that the corresponding facial expression images are not associated and no connection relationship exists between the two nodes. The first similarity threshold is a preset similarity threshold λ and can be adjusted appropriately according to the actual recommendation effect or human experience.
S7035: determine the sample bipartite topology graph of the training object based on the determined connection relationships between every two facial expression image nodes and the first sample bipartite topology graph.
Specifically, after the first sample bipartite topology graph is constructed and the connection relationships between every two facial expression image nodes in it are determined, the network structure of the bipartite topology graph can be rebuilt based on these connection relationships, i.e., the facial expression image nodes that have a connection relationship are connected, thereby obtaining the sample bipartite topology graph of the training object. The sample bipartite topology graph contains the training object node, the facial expression image nodes, the connection edges between the training object node and the facial expression image nodes, and also the connection edges between facial expression image nodes.
In practical applications, each training object (user) is first taken as a training object node, and each collected expression pack is taken as a facial expression image node. A connection edge exists between a training object node and the facial expression image nodes corresponding to the expression packs it is interested in, and whether two facial expression image nodes are connected is determined by whether the similarity between the expression packs collected by the training object is greater than λ, so that the sample bipartite topology graph G = (V, E) is constructed.
S705: input the sample bipartite topology graph of the training object into an initial graph convolutional neural network model for propagation learning, to obtain the initial feature vector of the training object.
In an embodiment, inputting the sample bipartite topology graph of the training object into the initial graph convolutional neural network model for propagation learning, to obtain the initial feature vector of the training object, comprises:
obtaining the node feature data of each node of the sample bipartite topology graph of the training object;
obtaining the connection feature data between the nodes that have connection edges among the nodes of the sample bipartite topology graph of the training object;
performing propagation learning on the sample bipartite topology graph with the initial graph convolutional neural network model based on the node feature data and the connection feature data, to obtain the initial feature vector of the training object.
The node feature data above can serve as the model input matrix and the connection feature data as the adjacency matrix, so that propagation learning is performed on the sample bipartite topology graph with the initial graph convolutional neural network model and an initial feature vector of the training object covering richer features is obtained.
In practical applications, a GCN model is used for training. For the constructed sample bipartite topology graph G = (V, E), the input is a matrix X of dimension N × F, where N is the number of nodes of the network and F is the input feature dimension of each node (for example, 512); the adjacency matrix of the sample bipartite topology graph is an N × N matrix A. The GCN algorithm then propagates over the bipartite graph with the propagation rule f(H, A) = σ(AHW), where σ is the ReLU activation function and W is a weight matrix of dimension F_i × F_{i+1}, with F_0 = 512. A minimal implementation sketch follows.
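The sketch below is a minimal two-layer version of the propagation rule f(H, A) = σ(AHW) stated above, written in PyTorch; the layer widths and whether A is normalised or given self-loops are assumptions left to the caller, since this application does not specify them.

```python
# Sketch of S705: a two-layer graph convolution following sigma(A H W),
# with X (N x F0, F0 = 512) as input features and A as the adjacency matrix
# of the sample bipartite topology graph.
import torch
import torch.nn as nn

class SimpleGCN(nn.Module):
    def __init__(self, in_dim=512, hidden_dim=256, out_dim=128):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, out_dim, bias=False)
        self.act = nn.ReLU()

    def forward(self, x, adj):
        # adj: dense N x N adjacency matrix (self-loops / normalisation by caller)
        h = self.act(adj @ self.w1(x))     # sigma(A X W0)
        h = adj @ self.w2(h)               # A H W1 -> one feature vector per node
        return h
```

The row of the output corresponding to a training object node is that object's initial feature vector.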
S707: initial characteristics vector, the positive and negative facial expression image corresponding with the trained object of the trained object are determined The sample vector similarity of the expression vector of facial expression image in information.
S709: the image type based on the sample vector similarity and its corresponding facial expression image optimizes initial picture scroll The model parameter of product neural network model obtains the picture scroll product neural network model.
In realistic model training, divides topological diagram according to the interest facial expression image building sample two of training object, be input to GCN (for example, two-tier network structure) is propagated, and propagates the initial characteristics vector of training object learn, and with trained sample Expression vector corresponding to the positive negative sample facial expression image information of this concentration carries out vector similarity calculating, and vector similarity is corresponding Score the score of the expression packet whether can be downloaded as end user, and then according to the figure of the positive negative sample facial expression image information As the model of type (for example, interest facial expression image or non-interest facial expression image) Lai Youhua initial graph convolutional neural networks model is joined Number, to achieve the purpose that model training.
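The sketch below shows one training step for S707 and S709: positive and negative expression packs are scored by the sample vector similarity with the training object's vector, and the model is optimised against the image type labels. Using a binary cross-entropy loss and the exact optimiser settings are assumptions, since this application does not specify them.

```python
# Sketch of S707-S709: one optimisation step over (user, positive, negative)
# node-index triples derived from the training sample set built earlier.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, adj, triples):
    """triples: list of (user_node_idx, pos_node_idx, neg_node_idx)."""
    model.train()
    optimizer.zero_grad()
    h = model(x, adj)                                        # node feature vectors
    users = torch.tensor([u for u, _, _ in triples])
    pos = torch.tensor([p for _, p, _ in triples])
    neg = torch.tensor([n for _, _, n in triples])
    pos_score = F.cosine_similarity(h[users], h[pos])        # sample vector similarity
    neg_score = F.cosine_similarity(h[users], h[neg])
    scores = torch.cat([pos_score, neg_score])
    labels = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
    loss = F.binary_cross_entropy_with_logits(scores, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```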
In this embodiment of this application, by using a GCN model, the topological-structure information between the user (target object or training object) and the interest facial expression images, and between the interest facial expression images themselves, can be learned through the GCN.
In addition, the bipartite graph in the GCN model can be represented by the adjacency matrix A, and each node of the input matrix X is either a training object node or a facial expression image node. As for the input features, the user features are randomly initialized, while the expression features are those extracted by a convolutional neural network model (such as resnet18). The adjacency matrix connects training objects with their interest facial expression images, and connects facial expression image nodes whose similarity exceeds the preset threshold. Every connected pair of nodes (training object node to facial expression image node, or facial expression image node to facial expression image node) undergoes feature aggregation and nonlinear transformation through the GCN, so the node features can be learned better; this topological-structure information is exactly what a deep learning model without the graph cannot learn. Moreover, connecting similar interest expression nodes adds the information that two expressions are similar to the adjacency matrix in the form of connection edges, which increases the information in the data and benefits the model's optimization of the node features.
The following is an apparatus embodiment of this application, which can be used to execute the method embodiments of this application. For details not disclosed in the apparatus embodiment, please refer to the method embodiments of this application.
Referring to Fig. 6, it shows a block diagram of a facial expression image recommendation apparatus provided by an embodiment of this application. The apparatus has the function of implementing the server side of the method examples above; the function can be implemented by hardware, or by hardware executing corresponding software. The apparatus 60 may include:
a topology graph generation module 61, configured to generate a target bipartite topology graph of a target object based on historical-interest facial expression image information of the target object and the similarity between the facial expression images in the historical-interest facial expression image information;
a vector determination module 62, configured to determine a feature vector of the target object based on the target bipartite topology graph of the target object, the feature vector of the target object characterizing features of the facial expression images the target object is interested in;
a similarity determination module 63, configured to determine the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in a set of facial expression images to be recommended;
a recommendation module 64, configured to recommend facial expression images to the target object based on the vector similarity.
In some embodiments, the topology graph generation module may include:
a first topology graph generation unit, configured to generate a first bipartite topology graph of the target object based on the target object and the historical-interest facial expression image information of the target object, the first bipartite topology graph including a target object node corresponding to the target object and facial expression image nodes corresponding to the facial expression images in the historical-interest facial expression image information;
a connection relationship determination unit, configured to determine the connection relationship between every two facial expression image nodes based on the similarity between the two nodes and a first similarity threshold;
a target topology graph generation unit, configured to determine the target bipartite topology graph of the target object based on the determined connection relationships between every two facial expression image nodes and the first bipartite topology graph.
In some embodiments, the vector determination module includes:
a vector determination unit, configured to perform feature-vector representation learning on the target bipartite topology graph of the target object based on a feature vector model, to obtain the feature vector of the target object;
the feature vector model being obtained by training an initial neural network model with a training sample set consisting of training objects and the positive and negative facial expression images corresponding to the training objects, the positive and negative facial expression images including interest facial expression images and non-interest facial expression images.
In some embodiments, the initial neural network model includes a graph convolutional neural network model, and the feature vector model is trained in the following manner:
constructing a training sample set, the training sample set including several training objects and the positive and negative facial expression image information of the training objects after a preset time, the positive and negative facial expression image information including interest facial expression images and non-interest facial expression images;
generating a sample bipartite topology graph of each training object based on the interest facial expression image information of the training object before the preset time and the similarity between the facial expression images in that information;
inputting the sample bipartite topology graph of the training object into an initial graph convolutional neural network model for propagation learning, to obtain the initial feature vector of the training object;
determining the sample vector similarity between the initial feature vector of the training object and the expression vectors of the facial expression images in the positive and negative facial expression image information corresponding to the training object;
optimizing the model parameters of the initial graph convolutional neural network model based on the sample vector similarity and the image type of the corresponding facial expression images, to obtain the graph convolutional neural network model.
In some embodiments, inputting the sample bipartite topology graph of the training object into the initial graph convolutional neural network model for propagation learning, to obtain the initial feature vector of the training object, comprises:
obtaining the node feature data of each node of the sample bipartite topology graph of the training object;
obtaining the connection feature data between the nodes that have connection edges among the nodes of the sample bipartite topology graph of the training object;
performing propagation learning on the sample bipartite topology graph with the initial graph convolutional neural network model based on the node feature data and the connection feature data, to obtain the initial feature vector of the training object.
In some embodiments, the similarity determination module includes:
an image acquisition unit, configured to obtain the key image corresponding to each facial expression image to be recommended in the set of facial expression images to be recommended;
an expression vector determination unit, configured to extract the image features of the key image corresponding to each facial expression image to be recommended, to obtain the expression vector of each facial expression image to be recommended;
a similarity determination unit, configured to determine the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended, the feature vector and the expression vector having the same vector dimension.
In some embodiments, the recommendation module includes:
a first target expression determination unit, configured to determine, based on the vector similarity, at least one target recommended facial expression image corresponding to the largest vector similarity values;
a first expression recommendation unit, configured to recommend the determined target recommended facial expression images to the target object; and/or
a first recommendation list determination unit, configured to sort, based on the vector similarity, the multiple facial expression images to be recommended in the set to be recommended, to obtain a facial expression image recommendation list;
a second expression recommendation unit, configured to recommend, based on the facial expression image recommendation list, a preset number of top-ranked facial expression images to the target object; and/or
a second target expression determination unit, configured to determine at least one target recommended facial expression image whose vector similarity is greater than a second similarity threshold;
a third expression recommendation unit, configured to recommend the determined target recommended facial expression images to the target object.
An embodiment of this application provides a server. The server may include a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement the facial expression image recommendation method provided by the method embodiments above.
An embodiment of this application further provides a storage medium. The storage medium stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to perform any facial expression image recommendation method described above.
Further, Fig. 7 shows a kind of hardware knot of equipment for realizing method provided by the embodiment of the present application Structure schematic diagram, the equipment can be terminal, mobile terminal or other equipment, the equipment may also participate in composition or Include device provided by the embodiment of the present application.As shown in fig. 7, terminal 10 may include one or more (adopts in figure With 102a, 102b ... ..., 102n is shown) processor 102 (processor 102 can include but is not limited to Micro-processor MCV or The processing unit of programmable logic device FPGA etc.), memory 104 for storing data and the biography for communication function Defeated device 106.It in addition to this, can also include: display, input/output interface (I/O interface), universal serial bus (USB) Port (a port that can be used as in the port of I/O interface is included), network interface, power supply and/or camera.This field is general Logical technical staff is appreciated that structure shown in Fig. 7 is only to illustrate, and does not cause to limit to the structure of above-mentioned electronic device. For example, terminal 10 may also include the more perhaps less component than shown in Fig. 7 or have different from shown in Fig. 7 Configuration.
It is to be noted that said one or multiple processors 102 and/or other data processing circuits lead to herein Can often " data processing circuit " be referred to as.The data processing circuit all or part of can be presented as software, hardware, firmware Or any other combination.In addition, data processing circuit for single independent processing module or all or part of can be integrated to meter In any one in other elements in calculation machine terminal 10 (or mobile device).As involved in the embodiment of the present application, The data processing circuit controls (such as the selection for the variable resistance end path connecting with interface) as a kind of processor.
Memory 104 can be used for storing the software program and module of application software, as described in the embodiment of the present application Corresponding program instruction/the data storage device of method, the software program that processor 102 is stored in memory 104 by operation And module realizes a kind of above-mentioned Processing with Neural Network method thereby executing various function application and data processing.It deposits Reservoir 104 may include high speed random access memory, may also include nonvolatile memory, as one or more magnetic storage fills It sets, flash memory or other non-volatile solid state memories.In some instances, memory 104 can further comprise relative to place The remotely located memory of device 102 is managed, these remote memories can pass through network connection to terminal 10.Above-mentioned network Example include but is not limited to internet, intranet, local area network, mobile radio communication and combinations thereof.
The transmission device 106 is used to receive or send data via a network. A specific example of the above network may include a wireless network provided by the communication provider of the terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 may be a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
The display may be, for example, a touch-screen liquid crystal display (LCD), which allows the user to interact with the user interface of the terminal 10 (or mobile device).
It should be understood that the ordering of the above embodiments of the present application is for description only and does not represent the relative merits of the embodiments. The specific embodiments of the present application have been described above; other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The various embodiments in the present application are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, for the apparatus and server embodiments, since they are substantially similar to the method embodiments, the description is relatively brief, and reference may be made to the corresponding descriptions of the method embodiments for relevant details.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of protection of the present application.

Claims (10)

1. A facial expression image recommendation method, characterized by comprising:
generating a target bipartite topology graph of a target object based on historical interest facial expression image information of the target object and similarities between facial expression images in the historical interest facial expression image information;
determining a feature vector of the target object based on the target bipartite topology graph of the target object, the feature vector of the target object characterizing features of the facial expression images of interest to the target object;
determining a vector similarity between the feature vector of the target object and an expression vector of each facial expression image to be recommended in a set of facial expression images to be recommended; and
recommending a facial expression image to the target object based on the vector similarity.
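By way of illustration only (not part of the claims), the following is a minimal sketch of the recommendation step of claim 1, assuming the target object's feature vector and the candidate expression vectors have already been computed; the use of cosine similarity as the vector similarity and the function names are assumptions made for this sketch.

    import numpy as np

    def cosine_similarity(u, v):
        # Vector similarity between the target feature vector and an expression vector.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    def recommend(target_feature_vec, candidate_expression_vecs, top_n=5):
        # Score every facial expression image to be recommended, then return the
        # indices of the top_n highest-scoring candidates.
        scores = [cosine_similarity(target_feature_vec, v) for v in candidate_expression_vecs]
        order = np.argsort(scores)[::-1]
        return order[:top_n].tolist()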
2. The method according to claim 1, wherein generating the target bipartite topology graph of the target object based on the historical interest facial expression image information of the target object and the similarities between facial expression images in the historical interest facial expression image information comprises:
generating a first bipartite topology graph of the target object based on the target object and the historical interest facial expression image information of the target object, the first bipartite topology graph comprising a target object node corresponding to the target object and facial expression image nodes corresponding to the facial expression images in the historical interest facial expression image information;
determining a connection relationship between every two facial expression image nodes based on the similarity between the two facial expression image nodes and a first similarity threshold; and
obtaining the target bipartite topology graph of the target object based on the determined connection relationships between the facial expression image nodes and the first bipartite topology graph.
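By way of illustration only, a minimal sketch of the graph construction in claim 2, assuming a precomputed pairwise similarity matrix over the historical interest facial expression images; the use of networkx and the concrete threshold value are assumptions, not specified by the claim.

    import networkx as nx

    def build_target_bipartite_graph(target_id, image_ids, sim_matrix, first_sim_threshold=0.8):
        g = nx.Graph()
        # First bipartite topology graph: one target-object node connected to every
        # historical interest facial expression image node.
        g.add_node(("object", target_id))
        for img in image_ids:
            g.add_node(("image", img))
            g.add_edge(("object", target_id), ("image", img))
        # Connect two image nodes when their similarity reaches the first
        # similarity threshold, yielding the target bipartite topology graph.
        for i in range(len(image_ids)):
            for j in range(i + 1, len(image_ids)):
                if sim_matrix[i][j] >= first_sim_threshold:
                    g.add_edge(("image", image_ids[i]), ("image", image_ids[j]),
                               weight=sim_matrix[i][j])
        return g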
3. The method according to claim 1, wherein determining the feature vector of the target object based on the target bipartite topology graph of the target object comprises:
performing feature vector representation learning on the target bipartite topology graph of the target object by using a feature vector model, to obtain the feature vector of the target object;
wherein the feature vector model is obtained by training an initial neural network model with a training sample set consisting of training objects and positive and negative facial expression images corresponding to the training objects, the positive and negative facial expression images including facial expression images of interest and facial expression images of non-interest.
4. The method according to claim 3, characterized in that the initial neural network model comprises a graph convolutional neural network model, and the feature vector model is obtained by training in the following manner:
constructing a training sample set, the training sample set comprising several training objects and positive and negative facial expression image information of the training objects after a preset time, the positive and negative facial expression image information including facial expression images of interest and facial expression images of non-interest;
generating a sample bipartite topology graph of each training object based on the interest facial expression image information of the training object before the preset time and the similarities between the facial expression images therein;
inputting the sample bipartite topology graph of the training object into an initial graph convolutional neural network model for propagation learning, to obtain an initial feature vector of the training object;
determining a sample vector similarity between the initial feature vector of the training object and the expression vector of each facial expression image in the positive and negative facial expression image information corresponding to the training object; and
optimizing model parameters of the initial graph convolutional neural network model based on the sample vector similarity and the image type of the corresponding facial expression image, to obtain the graph convolutional neural network model.
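By way of illustration only, a minimal sketch of one training step for claim 4, assuming a graph convolutional encoder gcn_model that maps a sample bipartite graph to an initial feature vector, precomputed expression vectors for the positive (interest) and negative (non-interest) images, and a binary-cross-entropy loss on the sample vector similarities; the loss choice and the interface of gcn_model are assumptions.

    import torch
    import torch.nn.functional as F

    def train_step(gcn_model, optimizer, sample_graph, expr_vecs, labels):
        # expr_vecs: (N, d) expression vectors of the positive/negative images
        # labels:    (N,)   1.0 for interest images, 0.0 for non-interest images
        optimizer.zero_grad()
        init_feature_vec = gcn_model(sample_graph)       # (d,) initial feature vector
        sims = F.cosine_similarity(init_feature_vec.unsqueeze(0), expr_vecs)  # (N,)
        # Map the similarity in [-1, 1] to a probability and optimize with BCE,
        # so interest images are pulled closer and non-interest images pushed away.
        loss = F.binary_cross_entropy(torch.sigmoid(sims), labels)
        loss.backward()
        optimizer.step()
        return loss.item()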
5. The method according to claim 4, characterized in that inputting the sample bipartite topology graph of the training object into the initial graph convolutional neural network model for propagation learning to obtain the initial feature vector of the training object comprises:
obtaining node feature data of each node in the sample bipartite topology graph of the training object;
obtaining connection feature data between nodes that are joined by an edge in the sample bipartite topology graph of the training object; and
performing propagation learning on the sample bipartite topology graph by using the initial graph convolutional neural network model based on the node feature data and the connection feature data, to obtain the initial feature vector of the training object.
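By way of illustration only, a minimal sketch of one propagation step over the sample bipartite graph of claim 5, assuming the node feature data are stacked in a matrix and the connection feature data are stored as edge weights of the adjacency matrix; the symmetric normalization below is a common graph-convolution convention and is an assumption here, not a limitation of the claim.

    import torch
    import torch.nn as nn

    class GraphConvLayer(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, node_feats, adj):
            # node_feats: (num_nodes, in_dim) node feature data
            # adj:        (num_nodes, num_nodes) weighted adjacency built from the
            #             connection feature data (edge weights), zeros elsewhere
            adj_hat = adj + torch.eye(adj.size(0))       # add self-loops
            deg = adj_hat.sum(dim=1)
            d_inv_sqrt = torch.diag(deg.pow(-0.5))
            norm_adj = d_inv_sqrt @ adj_hat @ d_inv_sqrt  # symmetric normalization
            return torch.relu(self.linear(norm_adj @ node_feats))

The initial feature vector of the training object can then be read from the output row corresponding to the target-object node.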
6. The method according to claim 1, wherein determining the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended in the set of facial expression images to be recommended comprises:
obtaining a key image corresponding to each facial expression image to be recommended in the set of facial expression images to be recommended;
extracting image features of the key image corresponding to each facial expression image to be recommended, to obtain the expression vector of each facial expression image to be recommended; and
determining the vector similarity between the feature vector of the target object and the expression vector of each facial expression image to be recommended, wherein the feature vector and the expression vector have the same vector dimension.
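By way of illustration only, a minimal sketch of claim 6, assuming a hypothetical extract_key_image helper that selects a representative frame of a facial expression image and a hypothetical image_encoder that outputs a vector of the same dimension as the target object's feature vector; both helpers are assumptions for this sketch.

    import numpy as np

    def expression_vector(expr_image, image_encoder, extract_key_image):
        # Key image of the facial expression image (e.g. a representative frame).
        key_image = extract_key_image(expr_image)
        # Image features of the key image form the expression vector; the encoder
        # must produce the same dimension as the target object's feature vector.
        return np.asarray(image_encoder(key_image), dtype=np.float32)

    def vector_similarities(target_feature_vec, expr_vecs):
        # Cosine similarity between the feature vector and every expression vector.
        t = target_feature_vec / (np.linalg.norm(target_feature_vec) + 1e-12)
        m = expr_vecs / (np.linalg.norm(expr_vecs, axis=1, keepdims=True) + 1e-12)
        return m @ t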
7. The method according to claim 1, wherein recommending a facial expression image to the target object based on the vector similarity comprises:
determining, based on the vector similarity, at least one target recommended facial expression image corresponding to the largest vector similarity value, and
recommending the determined at least one target recommended facial expression image to the target object; and/or
ranking the facial expression images to be recommended in the set of facial expression images to be recommended based on the vector similarity, to obtain a facial expression image recommendation list, and
recommending, based on the facial expression image recommendation list, a preset number of top-ranked facial expression images to the target object; and/or
determining at least one target recommended facial expression image whose vector similarity is greater than a second similarity threshold, and
recommending the determined at least one target recommended facial expression image to the target object.
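By way of illustration only, a minimal sketch of the three recommendation strategies of claim 7 (best match, a preset number of top-ranked candidates, and filtering by the second similarity threshold); the concrete threshold and preset quantity are assumptions.

    import numpy as np

    def best_match(scores):
        # Strategy 1: the facial expression image(s) with the largest similarity value.
        scores = np.asarray(scores)
        return np.flatnonzero(scores == scores.max()).tolist()

    def top_preset(scores, preset_quantity=3):
        # Strategy 2: rank all candidates and keep the top preset_quantity.
        return np.argsort(scores)[::-1][:preset_quantity].tolist()

    def above_threshold(scores, second_sim_threshold=0.6):
        # Strategy 3: every candidate whose similarity exceeds the second threshold.
        return np.flatnonzero(np.asarray(scores) > second_sim_threshold).tolist()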
8. A facial expression image recommendation apparatus, characterized by comprising:
a topology graph generation module, configured to generate a target bipartite topology graph of a target object based on historical interest facial expression image information of the target object and similarities between facial expression images in the historical interest facial expression image information;
a vector determination module, configured to determine a feature vector of the target object based on the target bipartite topology graph of the target object, the feature vector of the target object characterizing features of the facial expression images of interest to the target object;
a similarity determination module, configured to determine a vector similarity between the feature vector of the target object and an expression vector of each facial expression image to be recommended in a set of facial expression images to be recommended; and
a recommendation module, configured to recommend a facial expression image to the target object based on the vector similarity.
9. A device, characterized in that the device comprises a processor and a memory, the memory storing at least one instruction, at least one program segment, a code set, or an instruction set, which is loaded and executed by the processor to implement the facial expression image recommendation method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that the storage medium stores at least one instruction, at least one program segment, a code set, or an instruction set, which is loaded and executed by a processor to perform the facial expression image recommendation method according to any one of claims 1 to 7.
CN201910725995.0A 2019-08-07 2019-08-07 A kind of facial expression image recommended method, device, equipment and medium Pending CN110472087A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910725995.0A CN110472087A (en) 2019-08-07 2019-08-07 A kind of facial expression image recommended method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910725995.0A CN110472087A (en) 2019-08-07 2019-08-07 A kind of facial expression image recommended method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN110472087A true CN110472087A (en) 2019-11-19

Family

ID=68510332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910725995.0A Pending CN110472087A (en) 2019-08-07 2019-08-07 A kind of facial expression image recommended method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110472087A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104599A (en) * 2019-12-23 2020-05-05 北京百度网讯科技有限公司 Method and apparatus for outputting information
CN111506751A (en) * 2020-04-20 2020-08-07 创景未来(北京)科技有限公司 Method and device for searching mechanical drawing
CN112801191A (en) * 2021-02-02 2021-05-14 中国石油大学(北京) Intelligent recommendation method, device and equipment for pipeline accident handling
CN113139654A (en) * 2021-03-18 2021-07-20 北京三快在线科技有限公司 Method and device for training neural network model
CN113286200A (en) * 2020-02-20 2021-08-20 佛山市云米电器科技有限公司 Program recommendation method, cloud server, television, system and storage medium
WO2023045378A1 (en) * 2021-09-24 2023-03-30 北京沃东天骏信息技术有限公司 Method and device for recommending item information to user, storage medium, and program product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004318597A (en) * 2003-04-17 2004-11-11 Kyodo Printing Co Ltd Recommendation system
CN105913296A (en) * 2016-04-01 2016-08-31 北京理工大学 Customized recommendation method based on graphs
CN108401005A (en) * 2017-02-08 2018-08-14 腾讯科技(深圳)有限公司 A kind of expression recommendation method and apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004318597A (en) * 2003-04-17 2004-11-11 Kyodo Printing Co Ltd Recommendation system
CN105913296A (en) * 2016-04-01 2016-08-31 北京理工大学 Customized recommendation method based on graphs
CN108401005A (en) * 2017-02-08 2018-08-14 腾讯科技(深圳)有限公司 A kind of expression recommendation method and apparatus

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104599A (en) * 2019-12-23 2020-05-05 北京百度网讯科技有限公司 Method and apparatus for outputting information
CN111104599B (en) * 2019-12-23 2023-08-18 北京百度网讯科技有限公司 Method and device for outputting information
CN113286200A (en) * 2020-02-20 2021-08-20 佛山市云米电器科技有限公司 Program recommendation method, cloud server, television, system and storage medium
CN111506751A (en) * 2020-04-20 2020-08-07 创景未来(北京)科技有限公司 Method and device for searching mechanical drawing
CN112801191A (en) * 2021-02-02 2021-05-14 中国石油大学(北京) Intelligent recommendation method, device and equipment for pipeline accident handling
CN112801191B (en) * 2021-02-02 2023-11-21 中国石油大学(北京) Intelligent recommendation method, device and equipment for handling pipeline accidents
CN113139654A (en) * 2021-03-18 2021-07-20 北京三快在线科技有限公司 Method and device for training neural network model
WO2023045378A1 (en) * 2021-09-24 2023-03-30 北京沃东天骏信息技术有限公司 Method and device for recommending item information to user, storage medium, and program product

Similar Documents

Publication Publication Date Title
CN110472087A (en) A kind of facial expression image recommended method, device, equipment and medium
WO2017166449A1 (en) Method and device for generating machine learning model
WO2017181612A1 (en) Personalized video recommendation method and device
CN111708876B (en) Method and device for generating information
CN108833458A (en) A kind of application recommended method, device, medium and equipment
CN110413867B (en) Method and system for content recommendation
JP5880101B2 (en) Information processing apparatus, information processing method, and program
CN108629608A (en) User data processing method and processing device
CN110276406A (en) Expression classification method, apparatus, computer equipment and storage medium
CN112100221B (en) Information recommendation method and device, recommendation server and storage medium
CN110334544A (en) Federal model degeneration processing method, device, federal training system and storage medium
US10313457B2 (en) Collaborative filtering in directed graph
CN109117442A (en) A kind of application recommended method and device
Ang et al. Enhancing STEM education using augmented reality and machine learning
CN110069997B (en) Scene classification method and device and electronic equipment
Fernández‐Palacios et al. ARCube—The Augmented Reality Cube for Archaeology
CN111405314A (en) Information processing method, device, equipment and storage medium
CN112801053B (en) Video data processing method and device
CN112084412A (en) Information pushing method, device, equipment and storage medium
WO2017200586A1 (en) Prioritizing topics of interest determined from product evaluations
CN115115901A (en) Method and device for acquiring cross-domain learning model
CN111949860B (en) Method and apparatus for generating a relevance determination model
CN113641246A (en) Method and device for determining user concentration degree, VR equipment and storage medium
CN112199571A (en) Artificial intelligence information processing system, method and readable storage medium
CN112115703A (en) Article evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination