WO2018145590A1 - Emoticon picture recommendation method, apparatus, server cluster and storage medium - Google Patents

Emoticon picture recommendation method, apparatus, server cluster and storage medium

Info

Publication number
WO2018145590A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, emoticon, user, emoticons, style
Application number
PCT/CN2018/074561
Other languages
English (en)
French (fr)
Inventor
万伟
李霖
刘龙坡
陈谦
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2018145590A1
Priority to US16/432,386 (US10824669B2)

Classifications

    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • G06F16/535 Querying: Filtering based on additional data, e.g. user or group profiles
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/5866 Retrieval characterised by using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N20/20 Ensemble learning
    • G06N3/045 Neural networks: Combinations of networks
    • G06N5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • H04L51/10 User-to-user messaging in packet-switching networks: Multimedia information
    • H04L51/52 User-to-user messaging for supporting social networking services

Definitions

  • The present application relates to the field of network application technologies, and in particular, to an emoticon picture recommendation method, apparatus, server cluster, and storage medium.
  • Publishing emoticon pictures is one of the functions that users frequently use in social network applications.
  • In addition to built-in emoticons, a social network application can also provide the user with emoticon pictures developed by third-party developers, which the user selects and downloads.
  • The social network application usually recommends currently popular emoticon pictures to the user, or recommends emoticon pictures frequently used by other users who share common features or common interests with the user, from which the user chooses favorites.
  • However, the emoticon pictures recommended in this way are oriented to the general public or to other users who share features or hobbies with the user, and may not include the emoticon pictures that the current user actually likes; as a result, the recommendation effect is poor.
  • The embodiments of the present application provide an emoticon picture recommendation method, device, server cluster, and storage medium, which can solve the problem in the related art that the emoticon pictures recommended to the user by a social network application may not include, or may fail to recommend, the emoticon pictures that the current user likes, resulting in a poor recommendation effect.
  • the technical solution is as follows:
  • an emoticon picture recommendation method, for use in a server cluster, the method comprising:
  • obtaining a usage record of each set of emoticons used by the user, where each set of emoticons corresponds to at least one image style, and each set of emoticons includes at least one picture;
  • obtaining a pre-correction recommendation index of a specified emoticon picture and obtaining the image style of the specified emoticon picture, the recommendation index being used to indicate the priority of recommending the specified emoticon picture to the user;
  • correcting the pre-correction recommendation index according to the usage record, the image style of each set of emoticons, and the image style of the specified emoticon picture, to obtain a corrected recommendation index; and
  • when the corrected recommendation index satisfies a recommendation condition, recommending the specified emoticon picture to the user.
  • an emoticon recommendation device comprising:
  • a record obtaining module configured to obtain a usage record of each set of emoticons used by the user, where each set of emoticons corresponds to at least one image style, and each set of emoticons includes at least one picture;
  • a recommendation index obtaining module configured to obtain a pre-correction recommendation index of the specified emoticon picture and obtain the image style of the specified emoticon picture, the recommendation index being used to indicate the priority of recommending the specified emoticon picture to the user;
  • a correction module configured to correct the pre-correction recommendation index according to the usage record, the image style of each set of emoticons, and the image style of the specified emoticon picture, to obtain a corrected recommendation index; and
  • a recommendation module configured to recommend the specified emoticon picture to the user when the corrected recommendation index satisfies a recommendation condition.
  • By correcting the recommendation index of the specified emoticon picture according to the usage record of each set of emoticons used by the user, the image style of each set of emoticons, and the image style of the specified emoticon picture, and then recommending emoticon pictures to the user according to the corrected recommendation index, the user's preference for the image styles of emoticon pictures is taken into account. This realizes emoticon picture recommendation personalized to the user's own preferences and improves the recommendation effect for a single user.
  • FIG. 1 is a schematic structural diagram of an emoticon picture recommendation system according to an exemplary embodiment
  • FIG. 2 is a flowchart of an emoticon picture recommendation method according to an exemplary embodiment
  • FIG. 3 is a schematic diagram of a function curve involved in the embodiment shown in FIG. 2;
  • FIG. 4 is a flowchart of an expression picture recommendation method according to an exemplary embodiment
  • FIG. 5 is a schematic diagram of an implementation process of a server cluster recommending an emoticon to a user
  • FIG. 6 is a structural block diagram of an emoticon picture recommending apparatus according to an exemplary embodiment
  • FIG. 7 is a schematic structural diagram of a server cluster according to an exemplary embodiment.
  • FIG. 1 is a schematic structural diagram of an emoticon picture recommendation system according to an exemplary embodiment of the present application.
  • the system includes a number of user terminals 120 and a server cluster 140.
  • The user terminal 120 can be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, and so on.
  • the user terminal 120 and the server cluster 140 are connected by a communication network.
  • the communication network is a wired network or a wireless network.
  • Server cluster 140 is a server, or a number of servers, or a virtualization platform, or a cloud computing service center.
  • the server cluster 140 may include a server for implementing the emoticon management platform 142.
  • Optionally, the server cluster 140 further includes a server for implementing the social network platform 144; optionally, the server cluster 140 further includes a user operation record management server 146.
  • the emoticon management platform 142 includes: a server for implementing emoticon image recommendation and a server for implementing emoticon image download management.
  • The social network platform 144 includes: a server for implementing social information transceiving, a server for managing and storing each user account, a server for managing and storing each group account, and a server for managing each user account or group account.
  • the social network platform 144 is connected to the user operation record management server 146 via a communication network.
  • the user operation record management server 146 includes: a server for collecting historical usage records of the user's expression pictures, and a server for storing historical usage records of the user's expression pictures.
  • Optionally, the user operation record management server 146 may obtain the user's emoticon picture operation record data from the local social network platform 144 or other related social network application platforms, subject to the user's authorization, and compile the user's historical usage record of emoticon pictures from the obtained operation records.
  • the system may further include a management device 160, which is connected to the server cluster 140 through a communication network.
  • the communication network is a wired network or a wireless network.
  • The wireless or wired network described above uses standard communication techniques and/or protocols.
  • The network is usually the Internet, but can also be any other network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wired or wireless network, a private network, a virtual private network, or any combination thereof.
  • In some embodiments, data exchanged over the network is represented using techniques and/or formats including Hyper Text Markup Language (HTML), Extensible Markup Language (XML), and the like.
  • In addition, conventional encryption techniques such as Secure Socket Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec) can be used to encrypt all or some of the links.
  • In other embodiments, the data communication techniques described above may also be replaced or supplemented by custom and/or dedicated data communication techniques.
  • In the embodiment of the present application, a usage record of each set of emoticons used by the user may be obtained, where each set of emoticons corresponds to at least one image style and includes at least one picture; a pre-correction recommendation index of a specified emoticon picture and the image style of the specified emoticon picture are obtained, where the recommendation index is used to indicate the priority of recommending the specified emoticon picture to the user; the pre-correction recommendation index is corrected according to the usage record, the image style of each set of emoticons, and the image style of the specified emoticon picture to obtain a corrected recommendation index; and when the corrected recommendation index satisfies the recommendation condition, the specified emoticon picture is recommended to the user.
  • The usage record of each set of emoticons used by the user and the image style of each set of emoticons indicate, to some extent, the user's preference for emoticon pictures of each image style. Therefore, in the above solution, the recommendation index of the specified emoticon picture is corrected according to the usage record of each set of emoticons used by the user, the image style of each set of emoticons, and the image style of the specified emoticon picture, and emoticon pictures are then recommended to the user according to the corrected recommendation index. Since the user's preference for the image styles of emoticon pictures is taken into account, emoticon picture recommendation personalized to the user's own preferences is realized, and the recommendation effect for a single user is improved.
  • the "user” in this article can also be understood as a “user account” in the network system, corresponding to the users in the real world.
  • FIG. 2 is a flowchart of an expression picture recommendation method according to an exemplary embodiment of the present application, which is applied to a server cluster in the system shown in FIG. 1 , and the expression picture recommendation method may include the following step:
  • Step 201: Obtain a usage record of each set of emoticons used by the user, where each set of emoticons corresponds to at least one image style, and each set of emoticons includes at least one picture.
  • In the embodiment of the present application, the server cluster may preset a classification system of the image styles of emoticon pictures.
  • The image styles in the classification system include at least one of: a cute 2D expression, a funny 2D expression, a cute 3D expression, a funny 3D expression, a real human expression, a real animal expression, and an artistic word expression.
  • The embodiments of the present application are described using only the image styles included in the foregoing classification system. In practical applications, more or fewer image styles may be set in the server cluster according to actual application requirements; the types of image styles in the classification system are not limited here.
  • Each set of emoticons usually contains at least two pictures, and the image styles of the pictures within one set are usually consistent. Therefore, in the embodiment of the present application, each set of emoticons can be treated as a set of emoticons of one image style.
  • In addition, the user may also create an emoticon picture by doodling, collect emoticon pictures posted by other users in the social network application, or copy emoticon pictures from other social network applications.
  • Such emoticon pictures are also widely used in current social network applications. They are usually independent and do not appear together with other emoticon pictures; for such emoticon pictures, each single picture can be treated as one group.
  • In the embodiment of the present application, each set of emoticons may correspond to only one image style, or may correspond to two or more image styles.
  • For example, a set of emoticons may correspond only to the cute 3D expression, or may correspond to both the cute 3D expression and the real human expression.
  • In other words, the image style of a set of emoticons is the image style of each picture in the set.
  • For example, if a set of emoticons contains 10 pictures and the image style of the set is the cute 3D expression, then the image style of each of the 10 pictures is the cute 3D expression; if the image style of the set is both the cute 3D expression and the real human expression, then each of the 10 pictures has both the cute 3D expression style and the real human expression style.
  • The step of obtaining the usage record of each set of emoticons used by the user may be performed by the user operation record management server 146 in the server cluster shown in FIG. 1. For example, the user operation record management server 146 may obtain, from the social network platform 144, the user's operation records in the social network application, extract the operation records in which the user used emoticon pictures, and statistically generate the user's usage record for each set of emoticons.
  • Specifically, each time the user operation record management server 146 collects an emoticon picture used by the user, it increases by one the usage count, in the usage record, of the set to which that picture belongs, and finally obtains the number of times the user has used each set of emoticons.
  • The usage record obtained in step 201 is the current user's own usage record for each of the above sets of emoticons; that is, it reflects how this specific user has used each set of emoticons.
  • Here, the user using an emoticon picture may refer to the user posting the emoticon picture through a social application message in the social network application, where the social application message may be an instant messaging message or a personal update; that is, the user may publish emoticon pictures via instant messaging windows, group chat windows, or personal updates.
  • In practice, during the process of publishing emoticon pictures through social application messages, a user may post two or more identical emoticon pictures in the same social application message (for example, in one message or one personal update).
  • In order to count the frequency of the user's use of each set of emoticons accurately, when the user operation record management server 146 counts the emoticon pictures used by the user, the several identical emoticon pictures posted in the same social application message may, for that message, increase the usage count of the set containing that emoticon picture by only one.
  • For example, if the user posts emoticon picture 1 three times and emoticon picture 2 twice in one personal update, then when the user operation record management server 146 counts the usage record, the usage count of the set containing emoticon picture 1 is increased by 1 for emoticon picture 1 in that update, and the usage count of the set containing emoticon picture 2 is increased by 1 for emoticon picture 2.
  • Alternatively, the user posting two or more identical emoticon pictures in the same social application message may also be regarded as indicating that the user prefers that emoticon picture. Therefore, when the user operation record management server 146 counts the usage record, the usage count of the set containing an emoticon picture may, for a given social application message, be increased by the number of times that emoticon picture appears in the message.
  • In that case, for the same example, when the user operation record management server 146 counts the usage record, the usage count of the set containing emoticon picture 1 is increased by 3 for emoticon picture 1 in the personal update, and the usage count of the set containing emoticon picture 2 is increased by 2 (both counting variants are sketched below).
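  • As an illustration only (not part of the patent disclosure), the two counting variants described above can be sketched as follows in Python; the message structure, identifiers, and function names are assumptions:

    from collections import Counter

    def count_usage(messages, set_of, count_repeats=False):
        """Return {set_id: usage count} for one user.
        count_repeats=False: identical pictures repeated in one message add 1.
        count_repeats=True: every occurrence is counted."""
        usage = Counter()
        for _msg_id, emoticons in messages:
            occurrences = emoticons if count_repeats else set(emoticons)
            for emoticon_id in occurrences:
                usage[set_of[emoticon_id]] += 1
        return dict(usage)

    # Example mirroring the personal-update example above:
    messages = [("m1", ["pic1", "pic1", "pic1", "pic2", "pic2"])]
    set_of = {"pic1": "set_A", "pic2": "set_B"}
    print(count_usage(messages, set_of))                      # {'set_A': 1, 'set_B': 1}
    print(count_usage(messages, set_of, count_repeats=True))  # {'set_A': 3, 'set_B': 2}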
  • The image styles of the emoticon pictures preferred by the user may have a certain periodicity.
  • For example, in one period the user may prefer to use cute 2D expressions, while in another period the user may prefer to use real human expressions.
  • Therefore, the usage record of each set of emoticons may be obtained only for a period of predetermined length closest to the current time, so that the subsequent recommendation results are closer to the user's preferences in the most recent period.
  • The length of the predetermined period may be set manually by an administrator, or may be a fixed length set by default in the user operation record management server 146. For example, if the fixed length is one month, then when obtaining the usage record of each set of emoticons used by the user, the user operation record management server 146 acquires the usage record of each set of emoticons in the month closest to the current time.
  • Step 202 Acquire a recommended index before the correction of the specified emoticon image, and obtain an image style of the specified emoticon image.
  • The recommendation index may be used to indicate the priority of recommending the specified emoticon picture to the user; for example, the higher the value of the recommendation index, the more likely the specified emoticon picture is to be accepted by the user when it is recommended.
  • the specified emoticon picture can also be called: target emoticon picture, preset emoticon picture and other possible names.
  • In a possible implementation, the server cluster may calculate the pre-correction recommendation index of the specified emoticon picture according to the collaborative filtering based recommendation algorithm.
  • Alternatively, the pre-correction recommendation index of the specified emoticon picture may be calculated based on other recommendation algorithms.
  • The pre-correction recommendation index can also be called the initial recommendation index.
  • the step 202 can be performed by the emoticon management platform 142 in the server cluster shown in FIG. 1.
  • the recommendation algorithm based on collaborative filtering is a general term for a type of recommendation algorithm.
  • Its core idea is to construct a user-item matrix, that is, a matrix in which each user scores the items (in this embodiment, each set of emoticons). Based on the constructed user-item matrices, the collaborative filtering based recommendation algorithm finds other users whose preferences are similar to the current user's (i.e. other users who use the same emoticon pictures as the current user), and predicts the emoticon pictures the current user may like from the user-item matrices of those similar users.
  • In the embodiment of the present application, the emoticon picture management platform 142 constructs, for each user, a user-item matrix from that user's usage frequency of each set of emoticons; each element of the matrix corresponds to the frequency with which the user uses one set of emoticons.
  • The emoticon picture management platform 142 then normalizes the constructed user-item matrix of each user. Normalization here means that the user-item matrix corresponding to each user is normalized separately; that is, for each user, the value of each element is the frequency with which the user uses the corresponding set of emoticons divided by the maximum frequency among all sets of emoticons used by that user, i.e. UImatrix[u, e] = f(u, e) / max{f(u, e_1), f(u, e_2), ..., f(u, e_K)},
  • where f(u, e) is the frequency with which user u uses the set of emoticons e, K is the total number of sets of emoticons, and UImatrix[u, e] is the normalized value of the element corresponding to the set of emoticons e in the user-item matrix of user u (a minimal sketch of this construction follows).
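  • A minimal sketch of this construction and per-user normalization, assuming the usage counts have already been aggregated into a {user: {set_id: frequency}} mapping (the data layout and names are illustrative, not taken from the patent):

    import numpy as np

    def build_ui_matrix(freq, users, emoticon_sets):
        """Rows are users, columns are emoticon sets; each row is divided by
        that user's maximum frequency, as in UImatrix[u, e] = f(u, e) / max_k f(u, e_k)."""
        m = np.zeros((len(users), len(emoticon_sets)))
        for i, u in enumerate(users):
            for j, e in enumerate(emoticon_sets):
                m[i, j] = freq.get(u, {}).get(e, 0)
            row_max = m[i].max()
            if row_max > 0:
                m[i] /= row_max
        return m

    freq = {"user1": {"set_A": 10, "set_B": 5}, "user2": {"set_B": 2}}
    ui = build_ui_matrix(freq, ["user1", "user2"], ["set_A", "set_B", "set_C"])
    print(ui)  # [[1.  0.5 0. ]
               #  [0.  1.  0. ]]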
  • Based on the user-item matrix of each user, the emoticon picture management platform 142 finds, through the collaborative filtering based recommendation algorithm, other users whose preferences are similar to the current user's, and, combining the user-item matrices of those similar users, calculates predicted values for the elements of the current user's user-item matrix whose value is 0 (i.e. the sets of emoticons the current user has not used).
  • The emoticon picture corresponding to such an element may be the specified emoticon picture, and the recommendation index of the specified emoticon picture may be obtained from the predicted value of that element.
  • For example, the predicted value of the element may be used directly as the pre-correction recommendation index of the specified emoticon picture; that is, for each user, a recommendation index is calculated for each set of emoticons that the user has not used, as the basis for recommendation (one possible variant is sketched below).
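  • The exact collaborative filtering variant is not specified above, so the following sketch uses one common user-based variant as an assumption: it scores each unused set of emoticons by a similarity-weighted average over the most similar other users; all names are illustrative:

    import numpy as np

    def predict_scores(ui, user_idx, top_k=5):
        """Predict values for the zero entries of row `user_idx` of the
        normalized user-item matrix `ui`; the predictions can serve as the
        pre-correction recommendation indexes."""
        target = ui[user_idx]
        sims = []
        for i, row in enumerate(ui):
            if i == user_idx:
                continue
            denom = np.linalg.norm(target) * np.linalg.norm(row)
            sim = float(target @ row / denom) if denom else 0.0
            sims.append((i, sim))
        sims.sort(key=lambda t: t[1], reverse=True)
        neighbors = sims[:top_k]                       # most similar other users
        scores = {}
        for j in np.where(target == 0)[0]:             # sets the user has not used
            num = sum(s * ui[i, j] for i, s in neighbors)
            den = sum(abs(s) for _, s in neighbors)
            scores[int(j)] = num / den if den else 0.0
        return scores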
  • The foregoing solution obtains the pre-correction recommendation index according to the collaborative filtering based recommendation algorithm.
  • In practical applications, the server cluster may also use other types of recommendation algorithms to obtain the pre-correction recommendation index according to actual application requirements, for example a content-based recommendation algorithm, an association-rule-based recommendation algorithm, or a knowledge-based recommendation algorithm; the server cluster may even combine two or more recommendation algorithms to obtain the pre-correction recommendation index. The recommendation algorithm used by the server cluster to obtain the pre-correction recommendation index is not limited in this embodiment.
  • Alternatively, the server cluster may obtain the pre-correction recommendation index without using any specific recommendation algorithm. For example, the server cluster may set the pre-correction recommendation index of the specified emoticon picture to a fixed value, so that the pre-correction recommendation index of every emoticon picture to be recommended is the same; that is, the server cluster does not combine other recommendation algorithms and recommends emoticon pictures to the user only according to the usage record of each set of emoticons used by the user.
  • Step 203 correct the recommended index before the correction according to the usage record, the image style of each set of emoticons, and the image style of the specified emoticon to obtain the revised recommendation index;
  • this step includes the following two sub-steps:
  • Sub-step 1: generate the user's interest vector according to the usage record of each set of emoticons used by the user and the image style of each set of emoticons;
  • Sub-step 2: correct the recommendation index of the specified emoticon picture according to the interest vector, the image style of the specified emoticon picture, and a preset correction formula, to obtain the corrected recommendation index.
  • The technical solution corresponding to sub-step 1 includes the following content:
  • Each element in the interest vector indicates the number of times the user has used emoticon pictures of one image style.
  • Optionally, the process of generating the user's interest vector may be as shown in steps a to c below.
  • Step a: An initialization interest vector is generated according to the number of types of image styles, and the value of the element corresponding to each image style in the initialization interest vector is 1.
  • Initializing every element to 1 avoids the interest vector containing too many 0 values and being too sparse, which improves the accuracy of subsequent calculations; it is equivalent to assuming by default that the user has used emoticon pictures of each image style at least once.
  • Denote the initialization interest vector as v; the number of elements in v is the number of types of image styles in the above classification system (assumed to be n), and each element of v corresponds to one image style.
  • Step b: For each set of emoticons, superimpose the number of times the user has used that set onto the value of the element, in the initialization interest vector, that corresponds to the set's image style, to obtain the superposed vector.
  • Specifically, the server cluster can obtain from the usage record every set of emoticons used by the user and its use frequency. For each set of emoticons with use frequency m, the server cluster determines which element of the vector v corresponds to the set's image style, say the i-th element, and then adds m to the i-th element of v, that is, v[i] = v[i] + m.
  • For example, the first element in the vector v corresponds to the cute 2D expression, the second element corresponds to the funny 2D expression, and so on, with the last element corresponding to the artistic word expression.
  • When a set of emoticons corresponds to two or more image styles, each use of that set adds 1 to the usage count of each of those image styles. For example, if a certain set of emoticons corresponds to both the cute 2D expression and the real human expression, and the user has used emoticons of that set 10 times, then the number of times the user uses the cute 2D expression and the number of times the user uses the real human expression are each increased by 10.
  • Step c: The superposed vector is normalized to obtain the user's interest vector, for example by dividing each element by the length of the superposed vector so that the interest vector has length 1: v[i] = v[i] / sqrt(v[1]^2 + v[2]^2 + ... + v[n]^2), where v[i] is the i-th element in the vector v, v[j] is the j-th element in the vector v, and n is the length of the vector v (a compact sketch of steps a to c follows).
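  • A compact sketch of steps a to c, assuming the seven image styles listed earlier and unit-length normalization in step c (the normalization choice and all names are assumptions for illustration):

    import numpy as np

    STYLES = ["cute_2d", "funny_2d", "cute_3d", "funny_3d",
              "real_human", "real_animal", "artistic_word"]

    def interest_vector(usage, styles_of):
        """usage: {set_id: number of uses}; styles_of: {set_id: [style, ...]}."""
        v = np.ones(len(STYLES))              # step a: every element starts at 1
        for set_id, m in usage.items():
            for style in styles_of[set_id]:   # step b: a multi-style set adds its
                v[STYLES.index(style)] += m   #         count to each of its styles
        return v / np.linalg.norm(v)          # step c: normalize to unit length (assumed)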
  • In the preset correction formula, rp(u, e) is the corrected recommendation index, cf(u, e) is the pre-correction recommendation index, v(u) is the user's interest vector, and frq(u) is the average number of times the user uses emoticon pictures per unit time period.
  • the unit time period may be a fixed period of time set in advance, for example, the unit time period may be one day or one hour or the like.
  • The average number of times the user uses emoticon pictures per unit time period is obtained by averaging the total number of uses of all sets of emoticons counted in the usage record over the unit time periods covered by the record. For example, if the usage record covers the user's usage within 30 days, the unit time period is 1 day, and the total number of emoticon uses counted in the record is 10000, then the average number of uses per unit time period is 10000/30 ≈ 333.
  • FIG. 3 is a schematic diagram of the function curve according to an embodiment of the present application.
  • As shown in FIG. 3, the function applied to the average number of times the user uses emoticon pictures per unit time period is a half S-shaped function. When the user's average usage is 0, the value of the function is 0, indicating that the credibility of correcting the recommendation index with the user's interest vector is 0, i.e. the interest vector is completely unusable. As the average usage per unit time period increases, the function first rises significantly and then gradually flattens, with a limit value of 1.0; this trend indicates that, as the user's average emoticon usage frequency increases, using the user's interest vector to correct the recommendation index becomes more and more credible (one function with this shape is sketched below).
  • Var(v(u)) in the above formula is the variance of the elements of the user's interest vector. The larger the variance, the more concentrated the image styles of the emoticon pictures used by the user, and the larger the correction to the recommendation index; the smaller the variance, the smaller the correction.
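  • The exact half S-shaped function is not reproduced above; one function with the stated shape (value 0 at zero usage, limit 1.0), used here purely as an illustrative assumption, is a shifted logistic curve:

    import math

    def credibility(frq, steepness=0.05):
        """Half S-shaped weighting: credibility(0) == 0 and the limit is 1.0."""
        return 2.0 / (1.0 + math.exp(-steepness * frq)) - 1.0

    print(credibility(0))    # 0.0
    print(credibility(333))  # ~1.0 for the ~333 uses/day example above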
  • sim(e, v(u)) in the above formula is the cosine similarity between the image style vector of the specified emoticon picture and the interest vector.
  • Specifically, the server cluster may determine an image style vector corresponding to the specified emoticon picture. Like the user's interest vector, the image style vector contains n elements, each corresponding to one image style; the value of the element corresponding to the image style of the specified emoticon picture is 1, and all other elements are 0.
  • The server cluster multiplies the corresponding elements of the image style vector and the user's interest vector, sums the products, and then divides by the product of the lengths of the two vectors; the result is the above cosine similarity.
  • When the specified emoticon picture corresponds to only one image style, since the user's interest vector v(u) is normalized and the two vectors lie in the same dimensional vector space, the image style vector of the specified emoticon picture has the value 1 on only one element (assume the i-th element) and 0 elsewhere; in that case only one pair of corresponding elements in the two vectors is non-zero and both vectors have length 1, so the value of the i-th element of the user's interest vector is the above cosine similarity.
  • Therefore, the cosine similarity can be calculated by first determining which element of the user's interest vector v(u) corresponds to the image style of the specified emoticon picture (assumed to be the i-th element), and then taking the value of the i-th element of v(u) as sim(e, v(u)) (a sketch follows).
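  • A sketch of sim(e, v(u)) for a single-style emoticon picture: the image style vector is one-hot, so with a unit-length interest vector the cosine similarity reduces to the matching interest-vector element, as described above (names are illustrative):

    import numpy as np

    def style_similarity(style_index, interest):
        """Cosine similarity between a one-hot image style vector and the
        interest vector; with a unit-length interest vector this is simply
        interest[style_index]."""
        style_vec = np.zeros(len(interest))
        style_vec[style_index] = 1.0
        denom = np.linalg.norm(style_vec) * np.linalg.norm(interest)
        return float(style_vec @ interest / denom) if denom else 0.0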
  • Step 204 When the revised recommendation index satisfies the recommendation condition, recommend the specified expression picture to the user.
  • The above recommendation condition may include at least one of the following conditions: the corrected recommendation index is higher than a preset index threshold; the corrected recommendation index ranks, among the recommendation indexes of the emoticon pictures to be recommended, above a preset ranking threshold.
  • That is, the server cluster may determine whether to recommend the specified emoticon picture directly according to its corrected recommendation index. For example, an index threshold (which may be a value set manually by an administrator) is set in advance in the server cluster, and when the corrected recommendation index is higher than the index threshold, the server cluster may recommend the specified emoticon picture to the user.
  • Alternatively, the server cluster may determine whether to recommend the specified emoticon picture in combination with the recommendation indexes of the other emoticon pictures to be recommended. For example, the server cluster may rank all of the emoticon pictures to be recommended by their corrected recommendation indexes and recommend to the user those ranked in the top x positions; if the specified emoticon picture is ranked within the top x positions, the server cluster recommends it to the user. Here x is the above ranking threshold, which can be a value set by an administrator in the server cluster or a default value of the server cluster.
  • The server cluster may also combine the ranking of the recommendation index with its specific value. For example, the server cluster may rank the emoticon pictures to be recommended by their corrected recommendation indexes; if the specified emoticon picture is ranked within the top x positions and its corrected recommendation index is higher than the index threshold, the server cluster recommends the specified emoticon picture to the user. Otherwise, if the specified emoticon picture is ranked outside the top x positions, or its corrected recommendation index is not higher than the index threshold, the server cluster does not recommend it to the user.
  • The specified emoticon picture may be any one of the emoticon pictures to be recommended to the user; that is, the method of the embodiment of the present application can correct the recommendation index of any set of emoticons.
  • Alternatively, the specified emoticon picture may be any one of the several sets of emoticons with the highest pre-correction recommendation indexes among the emoticon pictures to be recommended to the user (a minimal sketch of the recommendation condition follows).
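  • A minimal sketch of the combined recommendation condition (threshold plus top-x ranking); the threshold and x values are administrator-set assumptions:

    def recommend(corrected, index_threshold=0.5, x=10):
        """corrected: {set_id: corrected recommendation index}. Returns the ids
        of the sets to recommend; both the threshold and x are values an
        administrator would set."""
        ranked = sorted(corrected, key=corrected.get, reverse=True)[:x]
        return [e for e in ranked if corrected[e] > index_threshold]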
  • Optionally, the server cluster may perform the foregoing steps 201 to 204 periodically, for example in a cycle of half a month or one month, to periodically update the emoticon pictures recommended to the user.
  • In summary, in the emoticon picture recommendation method provided by this embodiment, the recommendation index of the specified emoticon picture is corrected by using the usage record of each set of emoticons used by the user, the image style of each set of emoticons, and the image style of the specified emoticon picture, and emoticon pictures are then recommended to the user according to the corrected recommendation index. Since the user's preference for the image styles of emoticon pictures is taken into account, emoticon picture recommendation personalized to the user's own preferences is realized, and the recommendation effect for a single user is improved.
  • In addition, the method calculates the pre-correction recommendation index using the collaborative filtering based recommendation algorithm and corrects it in combination with the user's preference for the image styles of emoticon pictures, thereby recommending emoticon pictures to the user by combining the user's personal preference with collaborative filtering.
  • Furthermore, when generating the user's interest vector, the method first generates an initialization interest vector in which the element of every image style has the value 1, superimposes the number of times the user has used each image style on that basis to obtain the superposed vector, and then normalizes it, which avoids the interest vector containing too many 0 values and being too sparse and improves the accuracy of subsequent calculations.
  • Finally, when correcting the recommendation index of the specified emoticon picture, the method comprehensively considers the average number of times the user uses emoticon pictures per unit time period and the degree of concentration of the user's interest, which improves the accuracy of the correction.
  • FIG. 4 is a flowchart of an expression picture recommendation method according to an exemplary embodiment, which is applied to a server cluster in the system shown in FIG. 1 , and the expression picture recommendation method may include the following steps:
  • Step 401: Acquire emoticon picture samples corresponding to each image style, where the emoticon picture samples are some of the emoticon pictures in the expression library that have been labeled with the corresponding image style.
  • The expression library contains each set of emoticons used by the user and the specified emoticon picture.
  • In the embodiment of the present application, the expression library may be a database, stored in the emoticon picture management platform 142, for storing emoticon pictures; the expression library stores the various sets of emoticons existing in the system.
  • Optionally, the expression library also stores specified emoticon pictures that the user has not used and that may be recommended to the user.
  • The emoticon picture samples may be labeled manually by an administrator, and the server cluster obtains the manually labeled emoticon picture samples corresponding to each image style.
  • The emoticon picture samples may also be labeled by set; that is, the image styles of some sets of emoticons in the expression library may first be labeled, with at least one set labeled under each image style.
  • The number of sets of emoticons labeled under each image style can be determined by the administrator according to the actual situation. For example, balancing labor cost against the subsequent machine training effect, about 50 sets of emoticons can be labeled under each image style in a classification system of ordinary difficulty.
  • Step 402 Extract image feature information of an emoticon image sample corresponding to each image style.
  • the server cluster may perform image classification by a machine learning classification model suitable for image classification.
  • the machine learning classification model for image classification is mainly divided into two categories, one is the traditional machine learning classification model, such as SVM (Support Vector Machine) model, maximum entropy model or random forest model, etc.
  • the other type is a deep neural network model, such as a convolutional neural network model.
  • the extraction of image feature information required by these two types of machine learning classification models is also different.
  • For example, the server cluster can extract image feature information from the emoticon picture samples corresponding to each image style using image feature extraction algorithms such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), HOG (Histogram of Oriented Gradients), and LBP (Local Binary Patterns).
  • Alternatively, the server cluster may extract the RGB (Red, Green, Blue) color value at each pixel of the emoticon picture samples corresponding to each image style as the image feature information (a minimal sketch follows).
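  • A minimal sketch of the simplest feature option mentioned above, taking the per-pixel RGB values as the feature vector; resizing to a common shape is an added assumption so that all feature vectors have the same length:

    import numpy as np
    from PIL import Image

    def rgb_features(path, size=(64, 64)):
        """Flattened per-pixel RGB values; images are resized to a common shape
        so every feature vector has the same length."""
        img = Image.open(path).convert("RGB").resize(size)
        return np.asarray(img, dtype=np.float32).flatten() / 255.0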
  • Step 403 Perform machine learning training on the image feature information and the image style corresponding to the image feature information to obtain a machine learning classification model.
  • That is, the server cluster may input the image feature information and the image styles corresponding to the image feature information into the selected machine learning model for training, to obtain a machine learning classification model for classifying emoticon pictures.
  • Step 404 Input image feature information of the unclassified emoticon image into the machine learning classification model to obtain an image style corresponding to the unclassified emoticon image.
  • For the unclassified emoticon pictures, the server cluster may extract the image feature information of each set of emoticons in the manner corresponding to the machine learning classification model.
  • The image feature information of each set of emoticons is then input into the machine learning classification model, and the machine learning classification model outputs the image style corresponding to each set of emoticons (a combined sketch of steps 403 and 404 follows).
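  • A combined sketch of steps 403 and 404 using an SVM classifier (one of the traditional models named above); the data layout and names are illustrative assumptions:

    import numpy as np
    from sklearn.svm import SVC

    def train_and_classify(labeled, unlabeled):
        """labeled: list of (feature_vector, style_label); unlabeled: list of
        feature vectors for the sets whose style is still unknown."""
        X = np.stack([features for features, _ in labeled])
        y = [style for _, style in labeled]
        model = SVC(kernel="rbf")                     # step 403: train the classifier
        model.fit(X, y)
        return model.predict(np.stack(unlabeled))     # step 404: predicted image styles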
  • Step 405: Acquire a usage record of each set of emoticons used by the user, where each set of emoticons corresponds to at least one image style, and each set of emoticons includes at least one picture.
  • Step 406 Acquire a recommended index before the correction of the specified emoticon image, and obtain an image style of the specified emoticon image.
  • Step 407 Generate a user's interest vector according to the usage record of each group of emoticons used by the user and the image style of each set of emoticons.
  • Step 408 Correct the recommendation index of the specified expression picture according to the interest vector, the image style of the specified expression picture, and the preset correction formula, and obtain the corrected recommendation index.
  • Step 409 when the revised recommendation index satisfies the recommendation condition, recommend the specified expression picture to the user.
  • In summary, in the emoticon picture recommendation method provided by this embodiment, the recommendation index of the specified emoticon picture is corrected by using the usage record of each set of emoticons used by the user, the image style of each set of emoticons, and the image style of the specified emoticon picture, and emoticon pictures are then recommended to the user according to the corrected recommendation index. Since the user's preference for the image styles of emoticon pictures is taken into account, emoticon picture recommendation personalized to the user's own preferences is realized, and the recommendation effect for a single user is improved.
  • In addition, the method extracts the image feature information of the emoticon picture samples corresponding to each image style, performs machine learning training on the image feature information and the corresponding image styles to obtain a machine learning classification model, and then inputs the image feature information of the unclassified emoticon pictures into the model to obtain their image styles, thereby realizing automatic classification of the image style of each set of emoticons.
  • FIG. 5 illustrates a schematic diagram of an implementation process in which a server cluster recommends an emoticon to a user.
  • the above implementation process may be implemented by the emoticon management platform 142, the social network platform 144, and the user operation record management server 146 in the server cluster shown in FIG. 1.
  • In the above implementation, the administrator uses the management device 160 to label the image styles of some of the emoticon pictures in the expression library of the emoticon picture management platform 142, labeling 50 sets of emoticons under each image style.
  • The emoticon picture management platform 142 obtains the emoticon pictures labeled by the administrator as the emoticon picture samples corresponding to each image style, extracts image feature information from the samples, and performs machine learning training on the extracted image feature information and the corresponding image styles (i.e. the labeled image styles) to obtain a machine learning classification model.
  • The emoticon picture management platform 142 then uses the same method to extract image feature information from each set of emoticons in the expression library whose image style has not been labeled, inputs the image feature information of each set into the machine learning classification model, and the model outputs the image style of each set, thereby obtaining the image styles corresponding to all of the sets of emoticons managed in the emoticon picture management platform 142.
  • User A uses the social network application through the user terminal 120, including using emoticon pictures in the social network application.
  • The social network platform 144 collects user A's operation records and transmits them to the user operation record management server 146. At every predetermined period, for example every month, the user operation record management server 146 generates, from user A's operation records, a usage record of each set of emoticons in the most recent month; the usage record contains user A's number of uses of each set of emoticons. The user operation record management server 146 then supplies the generated usage record to the emoticon picture management platform 142.
  • The emoticon picture management platform 142 obtains user A's usage record for the most recent month and generates user A's interest vector from the usage record and the image style of each set of emoticons. At the same time, using the usage records of each set of emoticons of each user in the most recent month, the emoticon picture management platform 142 calculates, with the collaborative filtering based recommendation algorithm, the pre-correction recommendation index of each set of emoticons not used by user A. The emoticon picture management platform 142 then corrects the pre-correction recommendation indexes using user A's interest vector, sorts the sets of emoticons that user A has not used according to the corrected recommendation indexes, and, according to the ranking result, recommends to user A the several sets of emoticons with the highest corrected recommendation indexes.
  • FIG. 6 is a structural block diagram of an emoticon picture recommending apparatus according to an exemplary embodiment.
  • the emoticon recommendation device may be implemented as part or all of the server cluster by hardware or a combination of hardware and software to perform all or part of the steps in the embodiment shown in FIG. 2 or FIG. 4.
  • the emoticon recommendation device may include a record acquisition module 601, a recommendation index acquisition module 602, a correction module 603, and a recommendation module 604.
  • The record obtaining module 601 is configured to obtain a usage record of each set of emoticons used by the user, where each set of emoticons corresponds to at least one image style, and each set of emoticons includes at least one picture.
  • a recommendation index obtaining module 602 configured to acquire a pre-corrected recommendation index of the specified emoticon image, and obtain an image style of the specified emoticon image, the recommendation index being used to indicate a priority of recommending the specified emoticon image to the user .
  • the correction module 603 is configured to correct the recommended index before the correction according to the usage record, the image style of each set of emoticons, and the image style of the specified emoticon to obtain a corrected recommendation index.
  • the recommendation module 604 is configured to recommend the specified emoticon picture to the user when the corrected recommendation index satisfies the recommendation condition.
  • the modification module 603 includes: a vector generation unit and a correction unit.
  • A vector generating unit configured to generate the user's interest vector according to the usage record of each set of emoticons and the image style of each set of emoticons, where each element in the interest vector indicates the number of times the user has used emoticon pictures of one image style.
  • a correction unit configured to correct a recommendation index of the specified emoticon image according to the interest vector, an image style of the specified emoticon image, and a preset correction formula.
  • For the specific steps performed by the correction unit, refer to the description in step 204 in FIG. 2 above; details are not described herein again.
  • Optionally, the usage record includes a usage count of each set of emoticons, and the vector generating unit includes: a generating subunit, a superposing subunit, and a normalization subunit.
  • Generating a subunit configured to generate an initialization interest vector according to the number of types of the image style, wherein the value of the element corresponding to each image style in the initialization interest vector is 1;
  • A superposing subunit configured to superimpose, in the initialization interest vector, the usage count of each set of emoticons onto the value of the element corresponding to that set's image style, to obtain the superposed vector;
  • a normalization subunit is configured to normalize the superposed vector to obtain an interest vector of the user.
  • Optionally, in the preset correction formula, rp(u, e) is the corrected recommendation index, cf(u, e) is the pre-correction recommendation index, frq(u) is the average number of times the user uses emoticon pictures per unit time period, v(u) is the interest vector, var(v(u)) is the variance of the elements of the interest vector, and sim(e, v(u)) is the cosine similarity between the image style vector of the specified emoticon picture and the interest vector.
  • Optionally, the recommendation index obtaining module 602 is configured to calculate the pre-correction recommendation index of the specified emoticon picture according to the collaborative filtering based recommendation algorithm.
  • the recommended condition includes at least one of the following conditions:
  • the corrected recommendation index is higher than a preset index threshold
  • the corrected recommendation index ranks higher in the recommendation index of the expression image to be recommended than the preset ranking threshold.
  • the device further includes: a sample acquisition module, a feature extraction module, a training module, and an image style acquisition module.
  • A sample obtaining module configured to acquire, before the record obtaining module acquires the usage record of each set of emoticons used by the user, emoticon picture samples corresponding to each image style, where the emoticon picture samples are some of the emoticon pictures in the expression library that have been labeled with the corresponding image style, and the expression library contains each set of emoticons used by the user and the specified emoticon picture.
  • a feature extraction module configured to extract image feature information of the emoticon image sample corresponding to each of the image styles.
  • a training module configured to perform machine learning training on the image feature information and the image style corresponding to the image feature information, to obtain a machine learning classification model.
  • An image style obtaining module configured to input image feature information of an unclassified emoticon image into the machine learning classification model, to obtain an image style corresponding to the unclassified emoticon image, wherein the unclassified emoticon image is the expression database Other expression pictures other than the expression picture samples.
  • For the specific steps performed by the image style obtaining module, refer to the description in step 404 in FIG. 4 above; details are not described herein again.
  • the emoticon picture recommendation apparatus specifies the emoticon image by using the record of each set of emoticon images used by the user, the image style of each set of emoticon images, and the image style of the specified emoticon image. After the recommendation index is corrected, the emoticon image recommended to the user according to the revised recommendation index. Since the user's preference for the image style of the emoticon image is comprehensively considered, the emoticon image recommendation personalized to the user in combination with the user's personal preference is realized, and the recommendation effect for the emoticon image of the single user is improved.
  • In addition, the apparatus provided by the embodiments of the present application calculates the pre-correction recommendation index with a collaborative-filtering-based recommendation algorithm and then corrects that index using the user's preference for the image styles of emoticon images, so that emoticon images are recommended to the user by combining the user's personal preference with collaborative filtering.
  • Furthermore, when generating the user's interest vector, the apparatus first generates an initialization interest vector in which the element for each image style has a value of 1, superimposes onto it the number of times the user has used each image style, and normalizes the resulting vector. This avoids the interest vector containing too many zero values and becoming overly sparse, which improves the accuracy of subsequent calculations.
  • Moreover, when correcting the recommendation index of the specified emoticon image, the apparatus takes into account both the average number of times the user uses emoticon images in each unit time period and the degree to which the user's interests are concentrated, which improves the accuracy of the recommendation index correction.
  • Finally, the apparatus extracts image feature information from the emoticon image samples of each image style, performs machine learning training on that feature information and the corresponding image styles to obtain a machine learning classification model, and then feeds the image feature information of unclassified emoticon images into the model to obtain their image styles, thereby classifying the image style of each set of emoticon images automatically.
  • FIG. 7 is a schematic structural diagram of a server cluster according to an exemplary embodiment.
  • The server cluster 700 includes at least one server, which includes a central processing unit (CPU) 701, a system memory 704 including a random access memory (RAM) 702 and a read-only memory (ROM) 703, and a system bus 705 connecting the system memory 704 and the central processing unit 701.
  • The server cluster 700 also includes a basic input/output system (I/O system) 706 that facilitates the transfer of information between devices within the computer, and a mass storage device 707 for storing the operating system 713, applications 714, and other program modules 715.
  • the basic input/output system 706 includes a display 708 for displaying information and an input device 709 such as a mouse or keyboard for user input of information.
  • the display 708 and input device 709 are both connected to the central processing unit 701 via an input and output controller 710 that is coupled to the system bus 705.
  • the basic input/output system 706 can also include an input and output controller 710 for receiving and processing input from a plurality of other devices, such as a keyboard, mouse, or electronic stylus.
  • input and output controller 710 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 707 is connected to the central processing unit 701 by a mass storage controller (not shown) connected to the system bus 705.
  • the mass storage device 707 and its associated computer readable medium provide non-volatile storage for the server cluster 700. That is, the mass storage device 707 can include a computer readable medium (not shown) such as a hard disk or a CD-ROM drive.
  • the computer readable medium can include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid state storage technologies, CD-ROM, DVD or other optical storage, tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices.
  • (RAM: random access memory; ROM: read-only memory; EPROM: erasable programmable read-only memory; EEPROM: electrically erasable programmable read-only memory.)
  • According to various embodiments of the present application, the server cluster 700 can also run by connecting to a remote computer over a network such as the Internet. That is, the server cluster 700 can be connected to the network 712 through the network interface unit 711 connected to the system bus 705, or the network interface unit 711 can be used to connect to other types of networks or remote computer systems (not shown).
  • The memory further includes one or more programs stored in the memory, and the central processing unit 701 implements the emoticon image recommendation method shown in FIG. 2 or FIG. 4 by executing the one or more programs.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as a memory comprising instructions executable by a processor of a server to perform the emoticon image recommendation method shown in the various embodiments of the present application.
  • For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

Abstract

一种表情图片推荐方法、装置、服务器集群(140,700)及存储介质。该方法包括:获取用户使用过的各组表情图片的使用记录,各组表情图片中的每组表情图片对应至少一种图像风格(201),获取指定表情图片的修正前的推荐指数,并获取指定表情图片的图像风格(202),根据使用记录,每组表情图片的图像风格,以及指定表情图片的图像风格,对修正前的推荐指数进行修正,获得修正后的推荐指数(203),当修正后的推荐指数满足推荐条件时,向用户推荐指定表情图片(204)。本方法向用户推荐的表情图片是综合考虑了用户对表情图片的图像风格的喜好程度的表情图片,提高了表情图片的推荐效果。

Description

表情图片推荐方法、装置、服务器集群及存储介质
本申请要求于2017年02月13日提交中国国家知识产权局、申请号为201710075877.0、申请名称为“表情图片推荐方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及网络应用技术领域,特别涉及一种表情图片推荐方法、装置、服务器集群及存储介质。
背景技术
发布表情图片是用户使用社交网络应用时常用的功能之一。在社交网络应用中,除了内置的表情图片之外,社交网络应用还可以向用户提供第三方开发者开发的表情图片,由用户选择下载和使用。
随着第三方开发者开发的表情图片的增加,用户自己很难从海量的表情图片中快速选择出喜欢的表情图片。对此,在相关技术中,社交网络应用通常会向用户推荐当前较为热门的表情图片,或者,向用户推荐与该用户有共同特征或共同爱好的其他用户使用较多的表情图片,由用户从中选择自己喜欢的表情图片。
在上述相关技术中,社交网络应用向用户推荐的表情图片是大众用户或与该用户有共同特征或共同爱好的其他用户使用较多的表情图片,其中可能并不包括当前用户喜欢的表情图片,从而导致表情图片的推荐效果较差。
申请内容
本申请实施例提供了一种表情图片推荐方法、装置、服务器集群及存储介质,可以解决相关技术中社交网络应用向用户推荐的表情图片中可能并不包括当前用户喜欢的表情图片,或者,当前用户喜欢的表情图片没有被推荐,从而导致表情图片的推荐效果较差的问题,技术方案如下:
根据本申请实施例的一个方面,提供了一种表情图片推荐方法,用于服务器集群中,所述方法包括:
获取用户使用过的各组表情图片的使用记录,所述各组表情图片中的每组表情图片对应至少一种图像风格,所述每组表情图片中包含至少一个图片;
获取指定表情图片的修正前的推荐指数,并获取所述指定表情图片的图像风格,所述推荐指数用于指示向所述用户推荐所述指定表情图片的优先程度;
根据所述使用记录、所述每组表情图片的图像风格和所述指定表情图片的图像风格,对所述修正前的推荐指数进行修正,获得修正后的推荐指数;
当所述修正后的推荐指数满足推荐条件时,向所述用户推荐所述指定表情图片。
根据本申请实施例的另一方面,提供了一种表情图片推荐装置,所述装置包括:
记录获取模块,用于获取用户使用过的各组表情图片的使用记录,所述各组表情图片中的每组表情图片对应至少一种图像风格,所述每组表情图片中包含至少一个图片;
推荐指数获取模块,用于获取指定表情图片的修正前的推荐指数,并获取所述指定表情图片的图像风格,所述推荐指数用于指示向所述用户推荐所述指定表情图片的优先程度;
修正模块,用于根据所述使用记录,所述每组表情图片的图像风格,以及所述指定表情图片的图像风格,对所述修正前的推荐指数进行修正,获得修正后的推荐指数;
推荐模块,用于当所述修正后的推荐指数满足推荐条件时,向所述用户推荐所述指定表情图片。
本申请实施例提供的技术方案可以包括以下有益效果:
通过获取用户使用过的各组表情图片的使用记录以及各组表情图片的图像风格,以及指定表情图片的图像风格,对指定表情图片的推荐指数进行修正后,按照修正后的推荐指数向用户推荐的表情图片。由于综合考虑了用户对表情图片的图像风格的喜好程度,从而实现了结合用户的个人喜好向用户进行个性化的表情图片推荐,提高了针对单个用户的表情图片的推荐效果。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本申请。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。
图1是根据一示例性实施例示出的一种表情图片推荐系统的结构示意图;
图2是根据一示例性实施例示出的一种表情图片推荐方法的流程图;
图3是图2所示的实施例涉及的一种函数图形示意图;
图4是根据一示例性实施例示出的一种表情图片推荐方法的流程图;
图5是一种服务器集群向用户推荐表情图片的实现过程的示意图;
图6是根据一示例性实施例示出的一种表情图片推荐装置的结构方框图;
图7是根据一示例性实施例示出的一种服务器集群的结构示意图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
图1是本申请一示例性实施例示出的一种表情图片推荐系统的结构示意图。该系统包括:若干个用户终端120和服务器集群140。
用户终端120可以是手机、平板电脑、电子书阅读器、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、膝上型便携计算机和台式计算机等等。
用户终端120与服务器集群140之间通过通信网络相连。可选的,通信网络是有线网络或无线网络。
服务器集群140是一台服务器,或者由若干台服务器,或者是一个虚拟化平台,或者是一个云计算服务中心。可选的,服务器集群140可以包括用于实现表情图片管理平台142的服务器,可选的,服务器集群140还包括用于实现社交网络平台144的服务器;可选的,服务器集群140还包括用户操作记录管 理服务器146。
可选的,表情图片管理平台142包括:用于实现表情图片推荐的服务器以及用于实现表情图片下载管理的服务器。
可选的,社交网络平台144包括:用于实现社交信息收发的服务器、用于管理和存储各个用户账号的服务器、用于管理和存储各个群组账号的服务器、用于管理各个用户账号或群组账号的联系人列表的服务器。社交网络平台144与用户操作记录管理服务器146之间通过通信网络相连。
可选的,用户操作记录管理服务器146包括:用于统计用户对表情图片的历史使用记录的服务器、用于存储用户对表情图片的历史使用记录的服务器。
可选的,用户操作记录管理服务器146在用户授权认可的前提下,可以从本地的社交网络平台144,或者,从其它关联的社交网络应用平台中获取用户对表情图片的操作记录数据,并根据获取到的操作记录统计用户对表情图片的历史使用记录。
可选的,该系统还可以包括管理设备160,该管理设备160与服务器集群140之间通过通信网络相连。可选的,通信网络是有线网络或无线网络。
可选的,上述的无线网络或有线网络使用标准通信技术和/或协议。网络通常为因特网、但也可以是任何网络,包括但不限于局域网(Local Area Network,LAN)、城域网(Metropolitan Area Network,MAN)、广域网(Wide Area Network,WAN)、移动、有线或者无线网络、专用网络或者虚拟专用网络的任何组合)。在一些实施例中,使用包括超文本标记语言(Hyper Text Mark-up Language,HTML)、可扩展标记语言(Extensible Markup Language,XML)等的技术和/或格式来代表通过网络交换的数据。此外还可以使用诸如安全套接字层(Secure Socket Layer,SSL)、传输层安全(Trassport Layer Security,TLS)、虚拟专用网络(Virtual Private Network,VPN)、网际协议安全(Internet Protocol Security,IPsec)等常规加密技术来加密所有或者一些链路。在另一些实施例中,还可以使用定制和/或专用数据通信技术取代或者补充上述数据通信技术。
用户通常是因为表情图片的图像风格而喜欢某个或某些表情图片,并且,不同的用户所喜欢的表情图片的图像风格也各不相同。在本申请实施例中,在向用户推荐表情图片时,可以获取用户使用过的各组表情图片的使用记录,各 组表情图片中的每组表情图片对应至少一种图像风格,每组表情图片中包含至少一个图片,获取指定表情图片的修正前的推荐指数,并获取指定表情图片的图像风格,其中,该推荐指数用于指示向用户推荐指定表情图片的优先程度,根据使用记录,以及指定表情图片的图像风格,对修正前的推荐指数进行修正,获得修正后的推荐指数,当修正后的推荐指数满足推荐条件时,向用户推荐指定表情图片。
由于用户对表情图片的使用次数,以及用户使用过的表情图片的图像风格,可以在一定程度上指示用户对各种图像风格的表情图片的偏好程度。因此,在上述方案中,通过用户使用过的各组表情图片的使用记录、各组表情图片的图像风格和指定表情图片的图像风格,对指定表情图片的推荐指数进行修正后,按照修正后的推荐指数向用户推荐的表情图片。由于综合考虑了用户对表情图片的图像风格的喜好程度,从而实现了结合用户的个人喜好向用户进行个性化的表情图片推荐,提高了针对单个用户的表情图片的推荐效果。
本文中的“用户”也可以理解成为在网络系统中的“用户帐号”,与真实世界中的用户所对应。
图2是根据本申请一示例性实施例示出的一种表情图片推荐方法的流程图,以应用于如图1所示的系统中的服务器集群为例,该表情图片推荐方法可以包括如下几个步骤:
步骤201,获取用户使用过的各组表情图片的使用记录,各组表情图片中的每组表情图片对应至少一种图像风格,每组表情图片中包含至少一个图片。
在本申请实施例中,服务器集群可以预先设置表情图片的图像风格的分类体系。
示意性的,一个分类体系中的图像风格包括:可爱2D表情、搞怪2D表情、可爱3D表情、搞怪3D表情、真人表情、真实动物表情以及艺术字表情中的至少一种。本申请实施例仅以上述分类体系中包含的各种图像风格为例进行说明,在实际应用中,服务器集群中可以按照实际应用的需求来设置更多或者更少的图像风格,本申请实施例对于上述分类体系中的图像风格的种类不做限定。
由于第三方开发者在开发表情图片时,大多是成套的开发表情图片。每一套表情图片中通常包含至少两个图片,且同一套表情图片中的各个图片的图像 风格通常是保持一致的。因此,在本申请实施例中,可以将每套表情图片作为一种图像风格的一组表情图片。
可选的,用户在使用社交网络应用的过程中,也可能会通过涂鸦方式创造表情图片,或者,收藏其他用户在社交网络应用中发布的表情图片,或者,也可以从其它社交网络应用中复制表情图片并在当前的社交网络应用中使用等等。上述这些表情图片通常是独立存在的,并不与其它的表情图片成套出现,对于此类表情图片,可以将这些表情图片中的每个图片单独作为一组进行处理。
在实际应用中,每组表情图片可以只对应一种图像风格,也可以对应两种或两种以上的图像风格,比如,以上述的分类体系为例,一组表情图片可以只对应可爱3D表情这一种图像风格,也可以同时对应可爱3D表情和真人表情这两种图像风格。
对于一组表情图片,当该组表情图片中只包含一个图片时,该组表情图片的图像风格就是这一个图片的图像风格。而当该组表情图片中包含至少两个图片时,该组表情图片的图像风格是该组表情图片中每个图片的图像风格。比如,一组表情图片中包含10个图片,则该组表情图片的图像风格为可爱3D表情,即表示上述10个图片中每个图片的图像风格都是可爱3D表情,而该组表情图片的图像风格为可爱3D表情和真人表情这两种图像风格,即表示上述10个图片中每个图片的图像风格都是可爱3D表情和真人表情。
可选的,上述获取用户使用过的各组表情图片的使用记录的步骤可以由图1所示的服务器集群中的用户操作记录管理服务器146来执行,比如,用户操作记录管理服务器146可以从社交网络平台144中获取用户的在社交网络应用中的操作记录,从中提取出用户使用表情图片的操作记录,并统计生成用户对上述各组表情图片的使用记录。
示意性的,以上述使用记录中包含用户对各组表情图片的使用次数为例,用户操作记录管理服务器146统计用户每一次使用的表情图片,并将该表情图片所在的那一组表情图片对应的使用次数加1,最后获得用户对各组表情图片的使用次数。
可选的,在本申请实施例中,上述用户使用过的各组表情图片的使用记录,具体是指该用户对其使用过的各组表情图片的使用记录,即上述步骤201中获取到的使用记录,是当前用户对上述各组表情图片的使用记录。
可选的,上述用户使用表情图片可以是指用户在社交网络应用中,通过社交应用消息发布表情图片,其中,社交应用消息可以是即时通讯消息或者个人动态,即用户使用表情图片可以是指用户通过即时通讯窗口、群组聊天窗口或者个人动态发布表情图片。
在实际应用中,用户在使用社交应用消息发布表情图片的过程中,可能会在同一个社交应用消息中发布两个或两个以上相同的表情图片(比如,用户在一条消息或一条个人动态中发布多个完全一样的表情图片),为了排除这种情况的干扰,准确统计用户使用各组表情图片的频次,当用户操作记录管理服务器146统计用户每一次使用的表情图片时,对于在同一社交应用消息中发布的多个相同的表情图片,对应该社交应用消息,将该表情图片所在的那一组表情图片对应的使用次数加1。
比如,以某一组表情图片中包含表情图片1和表情图片2为例,用户在发布个人动态时,在一条个人动态中发布3个表情图片1和2个表情图片2,则用户操作记录管理服务器146统计用户的表情图片的使用记录时,针对该条个人动态中的表情图片1,将该组表情图片对应的使用次数加1,并针对该条个人动态中的表情图片2,将该组表情图片对应的使用次数再加1。
或者,在另一种可能的实现方式中,用户在同一条社交应用消息中发布两个或两个以上相同的表情图片,也可以看作是用户更偏爱该表情图片,因此,当用户操作记录管理服务器146统计用户每一次使用的表情图片时,对于在同一社交应用消息中发布的多个相同的表情图片,对应该社交应用消息,将该表情图片所在的那一组表情图片对应的使用次数加上该表情图片在社交应用消息中的个数。
比如,以某一组表情图片中包含表情图片1和表情图片2为例,用户在发布个人动态时,在一条个人动态中发布3个表情图片1和2个表情图片2,则用户操作记录管理服务器146统计用户的表情图片的使用记录时,针对该条个人动态中的表情图片1,将该组表情图片对应的使用次数加3,并针对该条个人动态中的表情图片2,将该组表情图片对应的使用次数再加2。
可选的,由于用户偏好的表情图片的图像风格可能会有一定的周期性,比如,在曾经的一段时间内,用户使用可爱2D表情较多,而在另一段时间内,用户可能更喜欢使用真人表情。对此,为了提高推荐的准确性,在本申请实施例中,用户操作记录管理服务器146获取用户对各组表情图片的使用记录时, 可以获取距离当前时间最近的预定长度时间段内,用户对各组表情图片的使用记录,从而使得后续的推荐结果能够更接近用户最近一段时间内的偏好。其中,该预定长度时间段的时间长度可以是由管理人员手动设置,或者,用户操作记录管理服务器146中默认设置的一个固定时间长度;比如,假设该固定时间长度为1个月,则用户操作记录管理服务器146在获取用户使用过的各组表情图片的使用记录时,可以获取该用户在距离当前时间最近的一个月内,对各组表情图片的使用记录。
步骤202,获取指定表情图片的修正前的推荐指数,并获取指定表情图片的图像风格。
其中,推荐指数可以用于指示向用户推荐指定表情图片的优先程度。比如,该推荐指数的数值越高,向上述用户推荐该指定表情图片时,该指定表情图片越容易被用户所接受。
指定表情图片也可称为:目标表情图片、预设表情图片等其它可能的名称。
可选的,服务器集群可以根据基于协同过滤的推荐算法计算获得指定表情图片的修正前的推荐指数。或者,基于其它推荐算法计算得到指定表情图片的修正前的推荐指数。修正前的推荐指数也可称为初始推荐指数。
具体的,该步骤202可以由图1所示的服务器集群中的表情图片管理平台142来执行。其中,基于协同过滤的推荐算法是一类推荐算法的统称,其核心思路为:先构建user-item矩阵,即各个用户对项目(本申请实施例中为每一组表情图片)评分的矩阵,然后再根据构建的user-item矩阵,基于协同过滤的推荐算法找到与当前用户具有相似喜好的其他用户(即与当前用户偏好相同的表情图片的其他用户)的user-item矩阵,并根据与当前用户具有相似喜好的其他用户的user-item矩阵来预测用户可能喜欢的表情图片。
示例性的,假设用户使用某一组表情图片的频次越高,用户对这组表情图片的喜好程度就越高,即通过用户对各组表情图片的使用频次来代替一般意义上的用户评分,表情图片管理平台142针对每个用户对各组表情图片的使用频次来构建各个用户对应的user-item矩阵,在一个用户对应的user-item矩阵中,每个元素对应用户使用一组表情图片的频次。由于不同用户的使用习惯不同,上述构建的不同用户对应的user-item矩阵中,表示用户对各组表情的使用频次的原始数值差异会非常大,因此,为了降低后续计算过程的复杂度,在进一步处理之前,表情图片管理平台142对上述构建的各个用户的user-item矩阵进行 归一化。此处归一化是指对每个用户对应的user-item矩阵单独进行归一化,即对于每个用户,该用户的user-item矩阵中每个元素的值等于用户使用该元素对应的一组表情图片的频次除以该用户使用的各组表情图片分别对应的频次中的最大值,其公式如下:
uimatrix[u,e] = f(u,e) / max(f(u,1), f(u,2), …, f(u,K))
其中f(u,e)为用户u使用一组表情图片e的频次,K为所有表情图片的组数。uimatrix[u,e]为用户对应的user-item矩阵中对应该组表情图片e的元素归一化之后的数值。用户未使用过的各组表情图片(即f(u,e)=0的各组表情图片),就是推荐算法需要预测打分(也就是上述推荐指数)的表情图片。
在对各个用户的user-item矩阵进行归一化之后,表情图片管理平台142基于各个用户的user-item矩阵,通过基于协同过滤的推荐算法找出与当前用户具有相似喜好的其他用户的user-item矩阵,并结合与当前用户具有相似喜好的其他用户的user-item矩阵计算当前用户的user-item矩阵中为0的元素的预测值,其中,当前用户的user-item矩阵中为0的元素所对应的表情图片即可以是上述指定表情图片,通过该元素的预测值可以获得该指定表情图片的推荐指数,比如,可以直接将该元素的预测值作为该指定表情图片的推荐指数,即针对每一个用户,对该用户未使用过的各组表情图片计算一个推荐指数,以此作为推荐的依据。
可选的,上述方案以根据基于协同过滤的推荐算法获取修正前的推荐指数为例进行说明,在实际应用中,服务器集群也可以根据实际的应用需求,使用其它类型的推荐算法来获取修正前的推荐指数,比如,其它类型的推荐算法可以有基于内容的推荐算法、基于关联规则的推荐算法以及基于知识的推荐算法等,甚至于,服务器集群还可以根据两种或两种以上的推荐算法的组合来获取上述修正前的推荐系数。对于服务器集群获取修正前的推荐系数所使用的推荐算法,本申请实施例不做限定。
可选的,在另一种可能的实现方式中,服务器集群也可以不使用具体的推荐算法来获取修正前的推荐指数,比如,服务器集群可以将指定表情图片的修正前的推荐指数设置为一个固定的数值,所有待推荐的表情图片的修正前的推荐指数都相同,即服务器集群不结合其它的推荐算法,只根据用户对其使用过的各组表情图片的使用记录向用户推荐表情图片。
步骤203,根据使用记录、每组表情图片的图像风格和指定表情图片的图像风格,对所述修正前的推荐指数进行修正,获得修正后的推荐指数;
可选地,本步骤包括如下两个子步骤:
子步骤一,根据用户使用过的各组表情图片的使用记录,以及每组表情图片的图像风格,生成用户的兴趣向量;
子步骤二,根据兴趣向量、指定表情图片的图像风格以及预设的修正公式,对指定表情图片的推荐指数进行修正,获得修正后的推荐指数。
可选地,针对子步骤一对应的技术方案,包括如下内容:
其中,上述兴趣向量中的每一个元素指示用户使用一种图像风格的表情图片的次数。
可选的,当使用记录中包含用户对各组表情图片中的每组表情图片的使用次数时,生成用户的兴趣向量的过程可以如下述的步骤a至步骤c所示。
步骤a,根据图像风格的种类数生成初始化兴趣向量,初始化兴趣向量中对应每一种图像风格的元素的值均为1。
由于用户在使用表情图片时,通常只会使用少部分的图像风格的表情图片,而其它大部分图像风格的表情图片用户可能从不使用。为了避免用户的兴趣向量中出现过多的0值而导致向量过于稀疏,提高后续计算的准确性,本申请实施例所示的方法,可以默认用户对每一种图像风格的表情图片都至少使用了一次。即上述初始化兴趣向量是一个全1向量v=[1,1,…,1],向量v中的元素的个数为上述表情图像的分类体系中的图像风格的种类数(假设为n),且向量v中的每个元素对应其中一种图像风格。
步骤b,对于每组表情图片,在初始化兴趣向量中,将每组表情图片的图像风格对应的元素的值叠加上用户对每组表情图片的使用次数,获得叠加后的向量。
服务器集群可以根据上述使用记录,统计用户使用的所有表情图片及其频次。对每组表情图片和对应的使用频次m,确定该组表情图片对应的图像风格对应在上述向量v中的元素,比如第i个元素,然后在向量v的第i个元素上加上m,即:
v[i]=v[i]+m。
比如,假设上述向量v中的第1个元素对应可爱2D表情,第2个元素对应搞怪2D表情,最后一个元素对应艺术字表情,根据上述使用记录统计确定 用户数对可爱2D表情的使用次数为200次,对搞怪2D表情的使用次数为0次,对艺术字表情的使用次数为30次,则叠加后的向量v=[201,1,…,31]。
可选的,当一组表情图片对应两种或两种以上图像风格时,则对于用户每次使用该组表情图片的使用记录,将用户对该两种或两种以上图像风格的表情图片的使用次数分别加1。
比如,某一组表情图片同时对应可爱2D表情和真人表情,且用户对该组表情图片的使用次数为10次,则在统计用户对可爱2D表情和真人表情的使用次数时,根据用户对该组表情图片的使用记录,将用户对可爱2D表情和真人表情的使用次数分别加上10。
步骤c,对叠加后的向量进行归一化,获得用户的兴趣向量。
服务器集群在对叠加后的向量进行归一化时,可以将向量v的所有元素都除以向量v的长度,使得归一化后获得的用户的兴趣向量的长度为1,归一化公式如下:
v[i] = v[i] / √(v[1]² + v[2]² + … + v[n]²)
其中,v[i]为向量v中的第i个元素,v[j]为向量v中的第j个元素,n为向量v的长度。
针对子步骤2对应的技术方案,包括如下内容:
可选的,上述修正公式为:
Figure PCTCN2018074561-appb-000003
其中,rp(u,e)为修正后的推荐指数,cf(u,e)为修正前的推荐指数,v(u)是用户的兴趣向量。
另外,frq(u)是用户在每个单位时间段内使用表情图片的平均次数。上述单位时间段可以是预先设置的固定时长的时间段,比如,上述单位时间段可以是一天或一小时等等。用户在每个单位时间段内使用表情图片的平均次数是上述使用记录中的统计出的用户使用各组表情图片的总次数平均到上述使用记录对应的各个单位时间段上的使用次数。比如,假设上述使用记录是用户在30天内的使用记录,单位时间段是1天,上述使用记录中统计出的用户使用表情图片的总次数为10000,则用户在每个单位时间段内使用表情图片的平均次数为10000/30≈333次。
上述公式中的
Figure PCTCN2018074561-appb-000004
是与用户在每个单位时间段内使用表情 图片的平均次数相关的一个函数。
图3示出了本申请实施例涉及的一种函数图形示意图,如图3所示,上述与用户在每个单位时间段内使用表情图片的平均次数相关的函数是半个s形函数,用户单位时间段内的平均使用频次为0时,该函数值为0,表示使用用户的兴趣向量对推荐指数进行修正的可信度为0,完全不可用;随着用户单位时间段内的平均使用频次的增大,该函数先明显增大,然后逐渐平缓,其极限值为1.0,这种变化趋势表示,随着用户单位时间段内平均使用表情图片频次的增大,使用用户的兴趣向量对推荐指数进行修正也越来越可信。
上述公式中的var(v(u))是用户的兴趣向量中各个元素的方差值。方差值越大,说明该用户使用的表情图片的图像风格越集中,反之,若方差值越小,则表示该用户使用的表情图片的图像风格越分散。其中,用户使用表情图片的图像风格越集中,则该公式对上述推荐指数修正的幅度越大,反之,该公式对上述推荐指数修正的幅度越小。
上述公式中的sim(e,v(u))是指定表情图片与兴趣向量之间的余弦相似度。
其中,在计算上述余弦相似度时,服务器集群可以确定指定表情图片对应的图像风格向量,该图像风格向量与用户的兴趣向量类似,也包含n个元素,每个元素对应一种图像风格,且其中对应于指定表情图片的图像风格的元素的值为1,其它元素均为0,服务器集群取出图像风格向量与用户的兴趣向量中对应位置都不为0的元素相乘并求和,再除以2个向量的长度,获得的结果即为上述余弦相似度。
可选的,当指定表情图片只对应一种图像风格时,由于用户的兴趣向量v(u)经过了归一化处理,而在同样维度的向量空间内,指定表情图片只有在一个元素(假设为第i个元素)上的值为1,其余都为0,此时图像风格向量与用户的兴趣向量中只有1组对应位置都不为0的元素,而且两个向量长度都为1,因此,根据上述余弦相似度计算公式,用户的兴趣向量中的第i个元素的值即为上述余弦相似度。即当指定表情图片只对应一种图像风格时,上述余弦相似度的计算方法为:首先确定指定表情图片的图像风格对应在用户的兴趣向量v(u)中的元素(假设为第i个元素),然后将sim(e,v(u))的值确定为向量v(u)的第i个元素的值。
步骤204,当修正后的推荐指数满足推荐条件时,向用户推荐指定表情图片。
其中,上述推荐条件可以包括以下条件中的至少一种:
修正后的推荐指数高于预设的指数阈值,以及,修正后的推荐指数在待推荐的表情图片的推荐指数中的排名高于预设的排名阈值。
在本申请实施例中,服务器集群对向用户推荐指定表情图片的推荐指数进行修正后,可以只根据该指定表情图片的修正后的推荐指数确定是否推荐该指定表情图片,比如,服务器集群中可以预先设置上述指数阈值(可以是由管理人员手动设置的一个数值),当修正后的推荐指数高于该指数阈值时,服务器集群即可以向用户推荐该指定表情图片。
或者,服务器集群也可以结合其它待推荐的表情图片的推荐指数确定是否推荐该指定表情图片,比如,服务器集群可以按照各个待推荐的表情图片各自对应的修正后的推荐指数,对各个待推荐的表情图片进行排名,并将排名在前x位的表情图片推荐给用户,如果该指定表情图片排名在前x位,则服务器集群向用户推荐该指定表情图片;其中,x就是上述的排名阈值,其可以是管理人员在服务器集群中设置的数值,或者,也可以是服务器集群中默认设置的数值。
或者,服务器集群也可以结合推荐指数的排名以及推荐指数的具体数值来确定是否推荐该指定表情图片。比如,服务器集群可以按照各个待推荐的表情图片各自对应的修正后的推荐指数,对各个待推荐的表情图片进行排名,若该指定表情图片排名在前x位,且该指定表情图片的修正后的推荐指数高于指数阈值,则服务器集群向用户推荐该指定表情图片,反之,若该指定表情图片排名在前x位之外,或者,该指定表情图片的修正后的推荐指数不高于指数阈值,则服务器集群不向用户推荐该指定表情图片。
可选的,在本申请实施例中,上述指定表情图片可以是待向用户推荐的表情图片中的任意一组图片,相当于对表情图片进行分类后,通过本申请实施例所示的方法,可以对任意一组表情图片进行推荐指数的修正。或者,上述指定表情图片也可以是待向用户推荐的表情图片中,对应的修正前的推荐指数最高的若干组表情图片中的任意一组图片。
可选的,在本申请实施例中,服务器集群可以按照定期执行上述步骤201至步骤205,比如,以半个月或一个月为周期执行上述方案,以定期更新向用户推荐的表情图片。
综上所述,本申请实施例提供的表情图片推荐方法,通过用户对其使用过 的各组表情图片的使用记录以及各组表情图片的图像风格,以及指定表情图片的图像风格,对指定表情图片的推荐指数进行修正后,按照修正后的推荐指数向用户推荐的表情图片。由于综合考虑了用户对表情图片的图像风格的喜好程度,从而实现了结合用户的个人喜好向用户进行个性化的表情图片推荐,提高了针对单个用户的表情图片的推荐效果。
另外,本申请实施例提供的方法,通过基于协同过滤的推荐算法计算获得修正前的推荐指数,并结合用户对表情图片的图像风格的个人喜好对修正前的推荐指数进行修正,实现了结合用户的个人喜好和基于协同过滤的推荐算法向用户推荐表情图片。
此外,本申请实施例提供的方法,在生成用户的兴趣向量时,首先生成对应每一种图像风格的元素的值均为1的初始化兴趣向量,并在初始化兴趣向量的基础上叠加用户对各个图像风格的表情图片的使用次数,获得叠加后的向量并进行归一化处理,避免用户的兴趣向量中出现过多的0值而导致向量过于稀疏的问题,从而提高后续计算的准确性。
另外,本申请实施例提供的方法,在对指定表情图片的推荐指数进行修正时,综合考虑了用户在每个单位时间段内使用表情图片的平均次数以及用户的兴趣集中程度对修正的影响,提高了推荐指数修正的准确性。
图4是根据一示例性实施例示出的一种表情图片推荐方法的流程图,以应用于如图1所示的系统中的服务器集群为例,该表情图片推荐方法可以包括如下几个步骤:
步骤401,获取每一种图像风格对应的表情图片样本,该表情图片样本是表情库中被指定了对应的图像风格的部分表情图片。
其中,表情库中包含用户使用过的各组表情图片以及指定表情图片。比如,以图1所示的系统为例,该表情库可以是设置在表情图片管理平台142中的一个用于存储表情图片的数据库,该表情库中存储有系统中已有的各组表情图片。对于单个用户而言,该表情库中除了存储有该用户使用过的各组表情图片之外,还存储有该用户未使用过,且可能向该用户推荐的指定表情图片。
在本申请实施例中,上述表情图片样本可以由管理人员进行人工标注,服务器集群获取人工标注的,每一种图像风格对应的表情图片样本。
其中,上述表情图片样本也可以按照组别进行标注,比如,管理人员构建 表情图片的图像风格的分类体系后,可以先在表情库中所有的各组表情图片中,对应每一种图像风格标注出至少一组表情图片,其中,每一种图像风格标注出的表情图片的组数可以由管理人员按照实际情况自行决定。比如,综合人工成本和后续的机器训练效果的考虑,在一般难度的分类体系下,每个图像风格下可以标注50组左右的表情图片。
步骤402,提取每一种图像风格对应的表情图片样本的图像特征信息。
在本申请实施例中,服务器集群可以通过适用于图像分类的机器学习分类模型来进行图像分类。
目前,适用于图像分类的机器学习分类模型主要分为两大类,一类是传统的机器学习分类模型,比如SVM(Support Vector Machine,支持向量机)模型、最大熵模型或者随机森林模型等等,另一类是深度神经网络模型,比如卷积神经网络模型。这两类机器学习分类模型所需要的图像特征信息的提取方式也不相同。
具体的,若使用传统的机器学习分类模型,则服务器集群可以通过SIFT(Scale-invariant feature transform,尺度不变特征变换)、SURF(Speed-up robust features,加速健壮特征)、ORB(Oriented fast and rotated brief,快速导向和短暂旋转)、HOG(Histogram of Oriented Gradient,方向梯度直方图)、LBP(Local Binary Pattern,局部二值模式)等图像特征提取算法从每一种图像风格对应的表情图片样本中提取图像特征信息。
若使用深度神经网络模型,则服务器集群可以将每一种图像风格对应的表情图片样本中,每个像素点上的RGB(Red Green Blue,红绿蓝)颜色值提取为上述图像特征信息。
步骤403,对图像特征信息以及图像特征信息对应的图像风格进行机器学习训练,获得机器学习分类模型。
在上述步骤中提取每一种图像风格对应的表情图片样本的图像特征信息后,服务器集群即可以将图像特征信息以及图像特征信息对应的图像风格输入选择的机器模型进行训练,以获得用于对表情图片进行分类的机器学习分类模型。
步骤404,将未分类的表情图片的图像特征信息输入机器学习分类模型,获得未分类的表情图片对应的图像风格。
在训练好上述机器学习分类模型后,对于其它未分类(即管理人员未标注 图像风格)的各组表情图片,服务器集群可以按照对应的机器学习分类模型,提取各组表情图片的图像特征信息,并将各组表情图片的图像特征信息输入上述机器学习分类模型,该机器学习分类模型即可以输出各组表情图片分别对应的图像风格。
步骤405,获取用户使用过的各组表情图片的使用记录,各组表情图片中的每组表情图片对应至少一种图像风格,每组表情图片中包含至少一个图片。
步骤406,获取指定表情图片的修正前的推荐指数,并获取指定表情图片的图像风格。
步骤407,根据用户使用过的各组表情图片的使用记录,以及每组表情图片的图像风格,生成用户的兴趣向量。
步骤408,根据兴趣向量、指定表情图片的图像风格以及预设的修正公式,对指定表情图片的推荐指数进行修正,获得修正后的推荐指数。
步骤409,当修正后的推荐指数满足推荐条件时,向用户推荐指定表情图片。
其中,上述步骤405至步骤409的执行过程可以参考上述图2所示实施例中的步骤201至步骤205中对应的描述,此处不再赘述。
综上所述,本申请实施例提供的表情图片推荐方法,通过用户对其使用过的各组表情图片的使用记录、各组表情图片的图像风格和指定表情图片的图像风格,对指定表情图片的推荐指数进行修正后,按照修正后的推荐指数向用户推荐的表情图片。由于综合考虑了用户对表情图片的图像风格的喜好程度,从而实现了结合用户的个人喜好向用户进行个性化的表情图片推荐,提高了针对单个用户的表情图片的推荐效果。。
此外,本申请实施例提供的方法,通过提取每一种图像风格对应的表情图片样本的图像特征信息,对所述图像特征信息以及图像特征信息对应的图像风格进行机器学习训练,获得机器学习分类模型,并将未分类的表情图片的图像特征信息输入机器学习分类模型,获得未分类的表情图片对应的图像风格,实现对各组表情图片的图像风格的自动分类。
具体的,请参考图5,其示出了一种服务器集群向用户推荐表情图片的实现过程的示意图。以上述实现过程可以由图1所示的服务器集群中的表情图片管理平台142、社交网络平台144以及用户操作记录管理服务器146实现为例, 如图5所示,管理人员通过管理设备160对表情图片管理平台142的表情库中的部分表情图片进行图像风格标注,每种图像风格标注50组左右的表情图片。表情图片管理平台142将管理人员标注了图像风格的表情图片获取为每种图像风格分别对应的表情图片样本,并对表情图片样本进行图像特征信息的提取,对提取到的表情样本的图像特征信息以及图像特征信息对应的图像风格(即用户标注的图像风格)进行机器学习,获得机器学习分类模型。表情图片管理平台142采样同样的方法提取表情库中其它未标注图像风格的各组表情图片的图像特征信息,并将各组表情图片的图像特征信息输入上述机器学习分类模型,并输出各组表情图片各自对应的图像风格,从而获得表情图片管理平台142中管理的各组表情图片各自对应的图像风格。
另一方面,用户A通过用户终端120使用社交网络应用,包括在社交网络应用中使用表情图片。社交网络平台144收集该用户A的操作记录,并将收集到的用户A的操作记录发送给用户操作记录管理服务器146。每隔预定周期,比如每隔一个月,用户操作记录管理服务器146根据用户A的操作记录,生成用户A在最近一个月时间内对各组表情图片的使用记录,该使用记录中包括用户A对各组表情图片的使用次数,用户操作记录管理服务器146将生成的使用记录提供给表情图片管理平台142。
表情图片管理平台142获得用户A在最近一个月时间内对各组表情图片的使用记录后,结合该使用记录以及各组表情图片的图像风格生成用户A的兴趣向量。同时,表情图片管理平台142还根据各个用户在最近一个月时间内对各组表情图片的使用记录,通过基于协同过滤的推荐算法计算出向用户A推荐该用户A未使用过的各组表情图片的修正前的推荐指数。表情图片管理平台142使用上述用户A的兴趣向量对该修正前的推荐指数进行修正,并按照修正后的推荐指数对用户A未使用过的各组表情图片进行排序,根据排序结果向用户A推荐其中推荐指数最高的几组表情图片。
图6是根据一示例性实施例示出的一种表情图片推荐装置的结构方框图。该表情图片推荐装置可以通过硬件或者软硬结合的方式实现为服务器集群中的部分或全部,以执行图2或图4所示实施例中的全部或者部分步骤。该表情图片推荐装置可以包括:记录获取模块601、推荐指数获取模块602、修正模块603以及推荐模块604。
记录获取模块601,用于获取用户使用过的各组表情图片的使用记录,所述各组表情图片中的每组表情图片对应至少一种图像风格,所述每组表情图片中包含至少一个图片。
其中,记录获取模块601所执行的具体步骤可以参考上述图2中步骤201下的描述,此处不再赘述。
推荐指数获取模块602,用于获取指定表情图片的修正前的推荐指数,并获取所述指定表情图片的图像风格,所述推荐指数用于指示向所述用户推荐所述指定表情图片的优先程度。
修正模块603,用于根据所述使用记录、所述每组表情图片的图像风格和所述指定表情图片的图像风格,对所述修正前的推荐指数进行修正,获得修正后的推荐指数。
推荐模块604,用于当所述修正后的推荐指数满足推荐条件时,向所述用户推荐所述指定表情图片。
其中,推荐模块604所执行的具体步骤可以参考上述图2中步骤205下的描述,此处不再赘述。
可选的,所述修正模块603,包括:向量生成单元以及修正单元。
向量生成单元,用于根据所述各组表情图片的使用记录,以及所述每组表情图片的图像风格,生成所述用户的兴趣向量,所述兴趣向量中的每一个元素指示所述用户使用一种图像风格的表情图片的次数。
修正单元,用于根据所述兴趣向量、所述指定表情图片的图像风格以及预设的修正公式,对所述指定表情图片的推荐指数进行修正。
其中,修正单元所执行的具体步骤可以参考上述图2中步骤204下的描述,此处不再赘述。
可选的,所述使用记录中包含用户对所述各组表情图片中的每组表情图片的使用次数,所述向量生成单元,包括:生成子单元、叠加子单元以及归一化子单元。
生成子单元,用于根据图像风格的种类数生成初始化兴趣向量,所述初始化兴趣向量中对应每一种图像风格的元素的值均为1;
叠加子单元,用于对于所述每组表情图片,在所述初始化兴趣向量中,将所述每组表情图片的图像风格对应的元素的值叠加上用户对所述每组表情图片的使用次数,获得叠加后的向量;
归一化子单元,用于对所述叠加后的向量进行归一化,获得所述用户的兴趣向量。
其中,向量生成单元所执行的具体步骤可以参考上述图2中步骤203下的描述,此处不再赘述。
可选的,所述修正公式为:
Figure PCTCN2018074561-appb-000005
其中,rp(u,e)为所述修正后的推荐指数,cf(u,e)为所述修正前的推荐指数,frp(u)是所述用户在每个单位时间段内使用表情图片的平均次数,v(u)是所述兴趣向量,var(v(u))是所述兴趣向量中各个元素的方差值,sim(e,v(u))是所述指定表情图片与所述兴趣向量之间的余弦相似度。
可选的,所述推荐指数获取模块602,用于根据基于协同过滤的推荐算法计算获得所述指定表情的修正前的推荐指数。
其中,推荐指数获取模块602所执行的具体步骤可以参考上述图2中步骤202下的描述,此处不再赘述。
可选的,所述推荐条件包括以下条件中的至少一种:
所述修正后的推荐指数高于预设的指数阈值;
所述修正后的推荐指数在待推荐的表情图片的推荐指数中的排名高于预设的排名阈值。
可选的,所述装置还包括:样本获取模块、特征提取模块、训练模块以及图像风格获取模块。
样本获取模块,用于在所述记录获取模块获取用户使用过的各组表情图片的使用记录之前,获取每一种图像风格对应的表情图片样本,所述表情图片样本是表情库中被指定了对应的图像风格的部分表情图片,所述表情库中包含所述用户使用过的各组表情图片以及所述指定表情图片。
其中,样本获取模块所执行的具体步骤可以参考上述图4中步骤401下的描述,此处不再赘述。
特征提取模块,用于提取所述每一种图像风格对应的表情图片样本的图像特征信息。
其中,特征提取模块所执行的具体步骤可以参考上述图4中步骤402下的描述,此处不再赘述。
训练模块,用于对所述图像特征信息以及所述图像特征信息对应的图像风 格进行机器学习训练,获得机器学习分类模型。
其中,训练模块所执行的具体步骤可以参考上述图4中步骤403下的描述,此处不再赘述。
图像风格获取模块,用于将未分类的表情图片的图像特征信息输入所述机器学习分类模型,获得所述未分类的表情图片对应的图像风格,所述未分类的表情图片是所述表情库中除所述表情图片样本之外的其它表情图片。
其中,图像风格获取模块所执行的具体步骤可以参考上述图4中步骤404下的描述,此处不再赘述。
综上所述,本申请实施例提供的表情图片推荐装置,通过用户对其使用过的各组表情图片的使用记录、各组表情图片的图像风格和指定表情图片的图像风格,对指定表情图片的推荐指数进行修正后,按照修正后的推荐指数向用户推荐的表情图片。由于综合考虑了用户对表情图片的图像风格的喜好程度,从而实现了结合用户的个人喜好向用户进行个性化的表情图片推荐,提高了针对单个用户的表情图片的推荐效果。
另外,本申请实施例提供的装置,通过基于协同过滤的推荐算法计算获得修正前的推荐指数,并结合用户对表情图片的图像风格的个人喜好对修正前的推荐指数进行修正,实现了结合用户的个人喜好和基于协同过滤的推荐算法向用户推荐表情图片。
此外,本申请实施例提供的装置,在生成用户的兴趣向量时,首先生成对应每一种图像风格的元素的值均为1的初始化兴趣向量,并在初始化兴趣向量的基础上叠加用户对各个图像风格的表情图片的使用次数,获得叠加后的向量并进行归一化处理,避免用户的兴趣向量中出现过多的0值而导致向量过于稀疏的问题,从而提高后续计算的准确性。
另外,本申请实施例提供的装置,在对指定表情图片的推荐指数进行修正时,综合考虑了用户在每个单位时间段内使用表情图片的平均次数以及用户的兴趣集中程度对修正的影响,提高了推荐指数修正的准确性。
此外,本申请实施例提供的装置,通过提取每一种图像风格对应的表情图片样本的图像特征信息,对所述图像特征信息以及图像特征信息对应的图像风格进行机器学习训练,获得机器学习分类模型,并将未分类的表情图片的图像特征信息输入机器学习分类模型,获得未分类的表情图片对应的图像风格,实现对各组表情图片的图像风格的自动分类。
图7是根据一示例性实施例示出的一种服务器集群的结构示意图。所述服务器集群700包括至少一台服务器,所述服务器集群700包括中央处理单元(CPU)701、包括随机存取存储器(RAM)702和只读存储器(ROM)703的系统存储器704,以及连接系统存储器704和中央处理单元701的系统总线705。所述服务器集群700还包括帮助计算机内的各个器件之间传输信息的基本输入/输出系统(I/O系统)706,和用于存储操作系统713、应用程序714和其他程序模块715的大容量存储设备707。
所述基本输入/输出系统706包括有用于显示信息的显示器708和用于用户输入信息的诸如鼠标、键盘之类的输入设备709。其中所述显示器708和输入设备709都通过连接到系统总线705的输入输出控制器710连接到中央处理单元701。所述基本输入/输出系统706还可以包括输入输出控制器710以用于接收和处理来自键盘、鼠标、或电子触控笔等多个其他设备的输入。类似地,输入输出控制器710还提供输出到显示屏、打印机或其他类型的输出设备。
所述大容量存储设备707通过连接到系统总线705的大容量存储控制器(未示出)连接到中央处理单元701。所述大容量存储设备707及其相关联的计算机可读介质为服务器集群700提供非易失性存储。也就是说,所述大容量存储设备707可以包括诸如硬盘或者CD-ROM驱动器之类的计算机可读介质(未示出)。
不失一般性,所述计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、EPROM、EEPROM、闪存或其他固态存储其技术,CD-ROM、DVD或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然,本领域技术人员可知所述计算机存储介质不局限于上述几种。上述的系统存储器704和大容量存储设备707可以统称为存储器。
根据本申请的各种实施例,所述服务器集群700还可以通过诸如因特网等网络连接到网络上的远程计算机运行。也即服务器集群700可以通过连接在所述系统总线705上的网络接口单元711连接到网络712,或者说,也可以使用网络接口单元711来连接到其他类型的网络或远程计算机系统(未示出)。
所述存储器还包括一个或者一个以上的程序,所述一个或者一个以上程序存储于存储器中,中央处理器701通过执行该一个或一个以上程序来实现图2或图4所示的表情图片推荐方法。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器,上述指令可由服务器的处理器执行以完成本申请各个实施例所示的表情图片推荐方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
本领域技术人员在考虑说明书及实践这里公开的申请后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由下面的权利要求指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (22)

  1. 一种表情图片推荐方法,其特征在于,用于服务器集群中,所述方法包括:
    获取用户使用过的各组表情图片的使用记录,所述各组表情图片中的每组表情图片对应至少一种图像风格,所述每组表情图片中包含至少一个图片;
    获取指定表情图片的修正前的推荐指数,并获取所述指定表情图片的图像风格,所述推荐指数用于指示向所述用户推荐所述指定表情图片的优先程度;
    根据所述使用记录、所述每组表情图片的图像风格和所述指定表情图片的图像风格,对所述修正前的推荐指数进行修正,获得修正后的推荐指数;
    当所述修正后的推荐指数满足推荐条件时,向所述用户推荐所述指定表情图片。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述使用记录、所述每组表情图片对应的图像风格和所述指定表情图片的图像风格,对所述修正前的推荐指数进行修正,获得修正后的推荐指数,包括:
    根据所述各组表情图片的使用记录,以及所述每组表情图片的图像风格,生成所述用户的兴趣向量,所述兴趣向量中的每一个元素指示所述用户使用一种图像风格的表情图片的次数;
    根据所述兴趣向量、所述指定表情图片的图像风格以及预设的修正公式,对所述指定表情图片的推荐指数进行修正。
  3. 根据权利要求2所述的方法,其特征在于,所述使用记录中包含用户对所述各组表情图片中的每组表情图片的使用次数,所述根据所述用户使用过的各组表情图片的使用记录,以及所述每组表情图片的图像风格,生成所述用户的兴趣向量,包括:
    根据图像风格的种类数生成初始化兴趣向量,所述初始化兴趣向量中对应每一种图像风格的元素的值均为预设值;
    对于所述每组表情图片,在所述初始化兴趣向量中将所述每组表情图片的图像风格对应的元素的值叠加上用户对所述每组表情图片的使用次数,获得叠加后的向量;
    对所述叠加后的向量进行归一化,获得所述用户的兴趣向量。
  4. 根据权利要求2所述的方法,其特征在于,所述修正公式为:
    Figure PCTCN2018074561-appb-100001
    其中,rp(u,e)为所述修正后的推荐指数,cf(u,e)为所述修正前的推荐指数,frq(u)是所述用户在每个单位时间段内使用表情图片的平均次数,v(u)是所述兴趣向量,var(v(u))是所述兴趣向量中各个元素的方差值,sim(e,v(u))是所述指定表情图片与所述兴趣向量之间的余弦相似度。
  5. 根据权利要求1所述的方法,其特征在于,所述获取指定表情图片的修正前的推荐指数,包括:
    根据基于协同过滤的推荐算法计算获得所述指定表情的修正前的推荐指数。
  6. 根据权利要求1所述的方法,其特征在于,所述推荐条件包括以下条件中的至少一种:
    所述修正后的推荐指数高于预设的指数阈值;
    所述修正后的推荐指数在待推荐的表情图片的推荐指数中的排名高于预设的排名阈值。
  7. 根据权利要求1至6任一所述的方法,其特征在于,在获取用户使用过的各组表情图片的使用记录之前,所述方法还包括:
    获取每一种图像风格对应的表情图片样本,所述表情图片样本是表情库中已标注了图像风格的部分表情图片,所述表情库中包含所述用户使用过的各组表情图片以及所述指定表情图片;
    提取所述每一种图像风格对应的表情图片样本的图像特征信息;
    对所述图像特征信息以及所述图像特征信息对应的图像风格进行机器学习训练,获得机器学习分类模型;
    将未分类的表情图片的图像特征信息输入所述机器学习分类模型,获得所述未分类的表情图片对应的图像风格,所述未分类的表情图片是所述表情库中除所述表情图片样本之外的其它表情图片。
  8. 一种表情图片推荐装置,其特征在于,所述装置包括:
    记录获取模块,用于获取用户使用过的各组表情图片的使用记录,所述各组表情图片中的每组表情图片对应至少一种图像风格,所述每组表情图片中包含至少一个图片;
    推荐指数获取模块,用于获取指定表情图片的修正前的推荐指数,并获取所述指定表情图片的图像风格,所述推荐指数用于指示向所述用户推荐所述指定表情图片的优先程度;
    修正模块,用于根据所述使用记录、所述每组表情图片的图像风格和所述指定表情图片的图像风格,对所述修正前的推荐指数进行修正,获得修正后的推荐指数;
    推荐模块,用于当所述修正后的推荐指数满足推荐条件时,向所述用户推荐所述指定表情图片。
  9. 根据权利要求8所述的装置,其特征在于,所述修正模块,包括:
    向量生成单元,用于根据所述各组表情图片的使用记录,以及所述每组表情图片的图像风格,生成所述用户的兴趣向量,所述兴趣向量中的每一个元素指示所述用户使用一种图像风格的表情图片的次数;
    修正单元,用于根据所述兴趣向量、所述指定表情图片的图像风格以及预设的修正公式,对所述指定表情图片的推荐指数进行修正。
  10. 根据权利要求9所述的装置,其特征在于,所述使用记录中包含用户对所述各组表情图片中的每组表情图片的使用次数,所述向量生成单元,包括:
    生成子单元,用于根据图像风格的种类数生成初始化兴趣向量,所述初始化兴趣向量中对应每一种图像风格的元素的值均为1;
    叠加子单元,用于对于所述每组表情图片,在所述初始化兴趣向量中,将所述每组表情图片的图像风格对应的元素的值叠加上用户对所述每组表情图片的使用次数,获得叠加后的向量;
    归一化子单元,用于对所述叠加后的向量进行归一化,获得所述用户的兴趣向量。
  11. 根据权利要求9所述的装置,其特征在于,所述修正公式为:
    Figure PCTCN2018074561-appb-100002
    其中,rp(u,e)为所述修正后的推荐指数,cf(u,e)为所述修正前的推荐指数,frq(u)是所述用户在每个单位时间段内使用表情图片的平均次数,v(u)是所述兴趣向量,var(r(u))是所述兴趣向量中各个元素的方差值,sim(e,v(u))是所述指定表情图片与所述兴趣向量之间的余弦相似度。
  12. 根据权利要求8所述的装置,其特征在于,
    所述推荐指数获取模块,用于根据基于协同过滤的推荐算法计算获得所述指定表情的修正前的推荐指数。
  13. 根据权利要求8所述的装置,其特征在于,所述推荐条件包括以下条件中的至少一种:
    所述修正后的推荐指数高于预设的指数阈值;
    所述修正后的推荐指数在待推荐的表情图片的推荐指数中的排名高于预设的排名阈值。
  14. 根据权利要求8至13任一所述的装置,其特征在于,所述装置还包括:
    样本获取模块,用于在所述记录获取模块获取用户使用过的各组表情图片的使用记录之前,获取每一种图像风格对应的表情图片样本,所述表情图片样本是表情库中被指定了对应的图像风格的部分表情图片,所述表情库中包含所述用户使用过的各组表情图片以及所述指定表情图片;
    特征提取模块,用于提取所述每一种图像风格对应的表情图片样本的图像特征信息;
    训练模块,用于对所述图像特征信息以及所述图像特征信息对应的图像风格进行机器学习训练,获得机器学习分类模型;
    图像风格获取模块,用于将未分类的表情图片的图像特征信息输入所述机器学习分类模型,获得所述未分类的表情图片对应的图像风格,所述未分类的表情图片是所述表情库中除所述表情图片样本之外的其它表情图片。
  15. 一种服务器集群,所述服务器集群包括:处理器和存储器;所述存储 器用于存储一个或多个程序,所述一个或多个程序被所述处理器执行时用于实现如下步骤:
    获取用户使用过的各组表情图片的使用记录,所述各组表情图片中的每组表情图片对应至少一种图像风格,所述每组表情图片中包含至少一个图片;
    获取指定表情图片的修正前的推荐指数,并获取所述指定表情图片的图像风格,所述推荐指数用于指示向所述用户推荐所述指定表情图片的优先程度;
    根据所述使用记录、所述每组表情图片的图像风格和所述指定表情图片的图像风格,对所述修正前的推荐指数进行修正,获得修正后的推荐指数;
    当所述修正后的推荐指数满足推荐条件时,向所述用户推荐所述指定表情图片。
  16. 根据权利要求15所述的服务器集群,其特征在于,所述一个或多个程序被所述处理器执行时还用于实现如下步骤:
    根据所述各组表情图片的使用记录,以及所述每组表情图片的图像风格,生成所述用户的兴趣向量,所述兴趣向量中的每一个元素指示所述用户使用一种图像风格的表情图片的次数;
    根据所述兴趣向量、所述指定表情图片的图像风格以及预设的修正公式,对所述指定表情图片的推荐指数进行修正。
  17. 根据权利要求16所述的服务器集群,其特征在于,所述使用记录中包含用户对所述各组表情图片中的每组表情图片的使用次数,所述一个或多个程序被所述处理器执行时还用于实现如下步骤:
    根据图像风格的种类数生成初始化兴趣向量,所述初始化兴趣向量中对应每一种图像风格的元素的值均为预设值;
    对于所述每组表情图片,在所述初始化兴趣向量中将所述每组表情图片的图像风格对应的元素的值叠加上用户对所述每组表情图片的使用次数,获得叠加后的向量;
    对所述叠加后的向量进行归一化,获得所述用户的兴趣向量。
  18. 根据权利要求16所述的服务器集群,其特征在于,所述修正公式为:
    Figure PCTCN2018074561-appb-100003
    其中,rp(u,e)为所述修正后的推荐指数,cf(u,e)为所述修正前的推荐指数,frq(u)是所述用户在每个单位时间段内使用表情图片的平均次数,v(u)是所述兴趣向量,var(v(u))是所述兴趣向量中各个元素的方差值,sim(e,v(u))是所述指定表情图片与所述兴趣向量之间的余弦相似度。
  19. 根据权利要求15所述的服务器集群,其特征在于,所述一个或多个程序被所述处理器执行时还用于实现如下步骤:
    根据基于协同过滤的推荐算法计算获得所述指定表情的修正前的推荐指数。
  20. 根据权利要求15所述的服务器集群,其特征在于,所述推荐条件包括以下条件中的至少一种:
    所述修正后的推荐指数高于预设的指数阈值;
    所述修正后的推荐指数在待推荐的表情图片的推荐指数中的排名高于预设的排名阈值。
  21. 根据权利要求15至20任一所述的服务器集群,其特征在于,所述一个或多个程序被所述处理器执行时还用于实现如下步骤:
    获取每一种图像风格对应的表情图片样本,所述表情图片样本是表情库中已标注了图像风格的部分表情图片,所述表情库中包含所述用户使用过的各组表情图片以及所述指定表情图片;
    提取所述每一种图像风格对应的表情图片样本的图像特征信息;
    对所述图像特征信息以及所述图像特征信息对应的图像风格进行机器学习训练,获得机器学习分类模型;
    将未分类的表情图片的图像特征信息输入所述机器学习分类模型,获得所述未分类的表情图片对应的图像风格,所述未分类的表情图片是所述表情库中除所述表情图片样本之外的其它表情图片。
  22. 一种非临时性计算机可读存储介质,其特征在于,所述存储介质存储有一个或多个指令,所述一个或多个指令被处理器执行时用于实现如权利要求1至7任一所述的表情图片推荐方法。
PCT/CN2018/074561 2017-02-13 2018-01-30 表情图片推荐方法、装置、服务器集群及存储介质 WO2018145590A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/432,386 US10824669B2 (en) 2017-02-13 2019-06-05 Sticker recommendation method and apparatus, server cluster, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710075877.0A CN108287857B (zh) 2017-02-13 2017-02-13 表情图片推荐方法及装置
CN201710075877.0 2017-02-13

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/432,386 Continuation US10824669B2 (en) 2017-02-13 2019-06-05 Sticker recommendation method and apparatus, server cluster, and storage medium

Publications (1)

Publication Number Publication Date
WO2018145590A1 true WO2018145590A1 (zh) 2018-08-16

Family

ID=62831525

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/074561 WO2018145590A1 (zh) 2017-02-13 2018-01-30 表情图片推荐方法、装置、服务器集群及存储介质

Country Status (3)

Country Link
US (1) US10824669B2 (zh)
CN (1) CN108287857B (zh)
WO (1) WO2018145590A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108287857B (zh) 2017-02-13 2021-02-26 腾讯科技(深圳)有限公司 表情图片推荐方法及装置
US10776656B2 (en) * 2017-12-22 2020-09-15 Laurent Francois MARTIN Methods and systems for applying content aware stickers onto a layout
CN110764627B (zh) * 2018-07-25 2023-11-10 北京搜狗科技发展有限公司 一种输入方法、装置和电子设备
CN109214374B (zh) * 2018-11-06 2020-12-18 北京达佳互联信息技术有限公司 视频分类方法、装置、服务器及计算机可读存储介质
CN109784382A (zh) * 2018-12-27 2019-05-21 广州华多网络科技有限公司 标注信息处理方法、装置以及服务器
CN109918675A (zh) * 2019-03-15 2019-06-21 福建工程学院 一种上下文感知的网络表情图片自动生成方法及装置
CN110516099A (zh) * 2019-08-27 2019-11-29 北京百度网讯科技有限公司 图像处理方法和装置
CN112492389B (zh) * 2019-09-12 2022-07-19 上海哔哩哔哩科技有限公司 视频推送方法、视频播放方法、计算机设备和存储介质
US11252274B2 (en) * 2019-09-30 2022-02-15 Snap Inc. Messaging application sticker extensions
CN112818146B (zh) * 2021-01-26 2022-12-02 山西三友和智慧信息技术股份有限公司 一种基于产品图像风格的推荐方法
CN112991160B (zh) * 2021-05-07 2021-08-20 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440335A (zh) * 2013-09-06 2013-12-11 北京奇虎科技有限公司 视频推荐方法及装置
US20140016822A1 (en) * 2012-07-10 2014-01-16 Yahoo Japan Corporation Information providing device and information providing method
CN104394057A (zh) * 2013-11-04 2015-03-04 贵阳朗玛信息技术股份有限公司 表情推荐方法及装置
CN105975563A (zh) * 2016-04-29 2016-09-28 腾讯科技(深圳)有限公司 表情推荐方法及装置
CN106355429A (zh) * 2016-08-16 2017-01-25 北京小米移动软件有限公司 图像素材的推荐方法及装置

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9075882B1 (en) * 2005-10-11 2015-07-07 Apple Inc. Recommending content items
US20090163183A1 (en) * 2007-10-04 2009-06-25 O'donoghue Hugh Recommendation generation systems, apparatus and methods
US9215423B2 (en) * 2009-03-30 2015-12-15 Time Warner Cable Enterprises Llc Recommendation engine apparatus and methods
US20160034970A1 (en) * 2009-10-13 2016-02-04 Luma, Llc User-generated quick recommendations in a media recommendation system
JP5440394B2 (ja) * 2010-05-31 2014-03-12 ソニー株式会社 評価予測装置、評価予測方法、及びプログラム
CN102611785B (zh) * 2011-01-20 2014-04-02 北京邮电大学 面向手机的移动用户个性化新闻主动推荐服务系统及方法
WO2013010024A1 (en) * 2011-07-12 2013-01-17 Thomas Pinckney Recommendations in a computing advice facility
US9208155B2 (en) * 2011-09-09 2015-12-08 Rovi Technologies Corporation Adaptive recommendation system
CN102663092B (zh) * 2012-04-11 2015-01-28 哈尔滨工业大学 一种基于服装组图的风格元素挖掘和推荐方法
US20140096018A1 (en) * 2012-09-28 2014-04-03 Interactive Memories, Inc. Methods for Recognizing Digital Images of Persons known to a Customer Creating an Image-Based Project through an Electronic Interface
US20140310304A1 (en) * 2013-04-12 2014-10-16 Ebay Inc. System and method for providing fashion recommendations
CN103793498B (zh) * 2014-01-22 2017-08-25 百度在线网络技术(北京)有限公司 图片搜索方法、装置以及搜索引擎
US10013601B2 (en) * 2014-02-05 2018-07-03 Facebook, Inc. Ideograms for captured expressions
CN104834677A (zh) * 2015-04-13 2015-08-12 苏州天趣信息科技有限公司 一种基于属性类别的表情图片显示方法、装置和终端
CN104881798A (zh) * 2015-06-05 2015-09-02 北京京东尚科信息技术有限公司 基于商品图像特征的个性化搜索装置及方法
US10191949B2 (en) * 2015-06-18 2019-01-29 Nbcuniversal Media, Llc Recommendation system using a transformed similarity matrix
US10114824B2 (en) * 2015-07-14 2018-10-30 Verizon Patent And Licensing Inc. Techniques for providing a user with content recommendations
US11393007B2 (en) * 2016-03-31 2022-07-19 Under Armour, Inc. Methods and apparatus for enhanced product recommendations
CN108287857B (zh) 2017-02-13 2021-02-26 腾讯科技(深圳)有限公司 表情图片推荐方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140016822A1 (en) * 2012-07-10 2014-01-16 Yahoo Japan Corporation Information providing device and information providing method
CN103440335A (zh) * 2013-09-06 2013-12-11 北京奇虎科技有限公司 视频推荐方法及装置
CN104394057A (zh) * 2013-11-04 2015-03-04 贵阳朗玛信息技术股份有限公司 表情推荐方法及装置
CN105975563A (zh) * 2016-04-29 2016-09-28 腾讯科技(深圳)有限公司 表情推荐方法及装置
CN106355429A (zh) * 2016-08-16 2017-01-25 北京小米移动软件有限公司 图像素材的推荐方法及装置

Also Published As

Publication number Publication date
CN108287857B (zh) 2021-02-26
US10824669B2 (en) 2020-11-03
US20190286648A1 (en) 2019-09-19
CN108287857A (zh) 2018-07-17

Similar Documents

Publication Publication Date Title
WO2018145590A1 (zh) 表情图片推荐方法、装置、服务器集群及存储介质
US20210342745A1 (en) Artificial intelligence model and data collection/development platform
US10614381B2 (en) Personalizing user experiences with electronic content based on user representations learned from application usage data
US8768863B2 (en) Adaptive ranking of news feed in social networking systems
US9582786B2 (en) News feed ranking model based on social information of viewer
US10229357B2 (en) High-capacity machine learning system
WO2018145577A1 (zh) 表情推荐方法和装置
US10740802B2 (en) Systems and methods for gaining knowledge about aspects of social life of a person using visual content associated with that person
TWI658420B (zh) 融合時間因素之協同過濾方法、裝置、伺服器及電腦可讀存儲介質
US20190392049A1 (en) System for classification based on user actions
US10445558B2 (en) Systems and methods for determining users associated with devices based on facial recognition of images
JP2017535857A (ja) 変換されたデータを用いた学習
US11915298B2 (en) System and method for intelligent context-based personalized beauty product recommendation and matching
KR20160083900A (ko) 얼굴 표현을 위한 시스템 및 방법
US11126826B1 (en) Machine learning system and method for recognizing facial images
US10606910B2 (en) Ranking search results using machine learning based models
US11789980B2 (en) Method, system, and non-transitory computer readable record medium for providing multi profile
Chamoso et al. Social computing for image matching
US11645563B2 (en) Data filtering with fuzzy attribute association
US20230385903A1 (en) System and method for intelligent context-based personalized beauty product recommendation and matching at retail environments
CN112449217B (zh) 一种推送视频的方法、装置、电子设备和计算机可读介质
US11520838B2 (en) System and method for providing recommendations of documents
US20160042277A1 (en) Social action and social tie prediction
WO2023158446A1 (en) Privacy-enhanced training and deployment of machine learning models using client-side and server-side data
Wu et al. A hybrid approach based on collaborative filtering to recommending mobile apps

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18751167

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18751167

Country of ref document: EP

Kind code of ref document: A1