WO2021044460A1 - User/product map estimation device, user/product map estimation method, and user/product map estimation program


Info

Publication number
WO2021044460A1
Authority
WO
WIPO (PCT)
Prior art keywords
product
user
feature vector
hidden feature
word
Prior art date
Application number
PCT/JP2019/034346
Other languages
English (en)
Japanese (ja)
Inventor
幸史 市川
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to PCT/JP2019/034346 (WO2021044460A1)
Priority to JP2021543616A (JP7310899B2)
Publication of WO2021044460A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 — Commerce
    • G06Q30/02 — Marketing; Price estimation or determination; Fundraising

Definitions

  • the present invention relates to a user / product map estimation device that maps the estimated user / product relationship in space, a user / product map estimation method, and a user / product map estimation program.
  • the "map space" is the space in which products and users are displayed in association with each other (the destination space of the mapping).
  • the map space is an arbitrary vector space
  • each product and user is represented by a vector on the map space.
  • map space is not limited to the vector space, and may be defined as a module, for example.
  • users are arranged in the map space based on their purchasing behavior toward products (for example, beer). For example, a user who often buys "sharp" beer is placed near that "sharp" beer.
  • Non-Patent Documents 1 to 3 describe techniques for mapping a user and a product in the same space, respectively.
  • the device described in Non-Patent Document 1 estimates the vectors of users and products in the map space based on user behavior data, as follows.
  • the vector on the map space will be referred to as a hidden feature vector.
  • a set of products that the user likes (hereinafter referred to as positive example products) and a set of products that the user does not like (hereinafter referred to as negative example products) are defined.
  • the positive example products and negative example products can be defined using the behavior data of the user of interest.
  • for example, the set of products that the user of interest has purchased is defined as the positive example products of that user.
  • a product that the user of interest has not purchased is defined as a negative example product of that user.
  • the distances from each user's hidden feature vector to the hidden feature vectors of that user's positive example products and negative example products are calculated.
  • the constraint is assumed that the distance between each user's hidden feature vector and the hidden feature vectors of that user's positive example products is smaller than the distance to the hidden feature vectors of that user's negative example products. The hidden feature vectors of users and products are then estimated so as to satisfy this constraint as far as possible.
  • the device described in Non-Patent Document 1 estimates a vector on the user's map space and a vector on the product's map space based on the user's behavior data and product features.
  • the features of the product are also mapped to the space in which the hidden feature vector of the user and the product is defined.
  • image data, tags, and the like are assumed as features of each product.
  • Product features are transformed into a single vector in map space by any function.
  • a function that converts a product feature into one vector on the map space is referred to as an encoder.
  • as the encoder, for example, an affine transformation or a neural network is assumed.
  • the hidden feature vectors of users and products are estimated under both the above constraint based on user behavior and a constraint that the distance between a product's hidden feature vector and the product feature vector projected by the encoder is small.
  • the above-mentioned encoder parameters are also estimated at the same time.
  • the device described in Non-Patent Document 2 estimates the hidden feature vector of the user and the hidden feature vector of the product based on the user's behavior data and the product feature, similarly to the device described in Non-Patent Document 1.
  • the device described in Non-Patent Document 2 learns a function that converts a point on the map space to the product feature space.
  • this function is referred to as a decoder.
  • the decoder makes it possible to interpret, for example, what kind of image data each point in the map space corresponds to in a situation where product features are input as image data.
  • the device described in Non-Patent Document 3 estimates the hidden feature vectors of users and products as follows. First, distributed representations of words are learned using external data. Through this learning, a vector is assigned to each word, estimated from the semantic closeness of the words. The vectors of words used in similar contexts, such as "Shepherd", "Doberman", and "Akita Inu", are estimated to be close to one another, while the vectors of words used in completely different contexts, such as "Shepherd" and "windbreak", are estimated to be far apart. In the following, each word's resulting vector is referred to as a word vector obtained using external data, and the vector space in which these word vectors are defined is referred to as the word space.
  • each product is given a word set obtained by decomposing the product's text into words by morphological analysis.
  • it is assumed that the set of words over all products is a subset of the words obtained from the external data. If some words possessed by the products do not appear in the external data, this assumption can be satisfied by removing those words.
  • the hidden feature vector of each product is defined as the average of that product's word vectors. Further, the hidden feature vector of each user is defined as the average of the hidden feature vectors of that user's positive example products.
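The averaging scheme above (Non-Patent Document 3 style) can be sketched as follows; the toy 2-D word vectors are hypothetical stand-ins for vectors learned from external data.

```python
import numpy as np

# Hypothetical 2-D word vectors; real ones would come from external data.
word_vec = {
    "rich":   np.array([1.0, 0.0]),
    "sharp":  np.array([0.0, 1.0]),
    "fruity": np.array([1.0, 1.0]),
}

def product_vector(words):
    """Hidden feature vector of a product: the average of its word vectors."""
    return np.mean([word_vec[w] for w in words], axis=0)

def user_vector(positive_products):
    """Hidden feature vector of a user: the average of the hidden feature
    vectors of the user's positive example products."""
    return np.mean([product_vector(ws) for ws in positive_products], axis=0)

beer_a = product_vector(["rich", "fruity"])           # mean of two word vectors
user_u = user_vector([["rich", "fruity"], ["sharp"]]) # mean of two products
```

Note that any two products with exactly the same word set land on the same point, which is the limitation the text discusses next.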
  • in the device described in Non-Patent Document 3, products and users thus have vectors defined in the word space. For example, by adding natural-language-based features such as "cold", "red", and "carbonated" to a product, it becomes possible to identify similar products and the target users of such products.
  • the device described in Non-Patent Document 1 has a problem that it is difficult to interpret what kind of features each point in the embedded space represents. That is, the device described in Non-Patent Document 1 outputs sets of products commonly preferred by multiple users as clusters in the map space. However, what commonality each cluster has can be interpreted only after examining the properties of each product.
  • in the device described in Non-Patent Document 2, for the above problem, the interpretability of the map space is improved by simultaneously learning a function that projects each point in the map space onto the product feature space.
  • however, the product feature space is limited to the defined features of the products. Therefore, for example, it is not possible to perform operations such as adding a new feature (characteristic) or subtracting an undefined concept. Specifically, in the beer example above, one cannot freely perform operations such as adding the characteristic "lemon flavor" to a product with a certain "sharpness", or subtracting the characteristic "beer" and adding the characteristic "black tea". That is, in the device described in Non-Patent Document 2, the hidden feature vector of a product in the map space cannot be manipulated by adding or subtracting features expressed in natural language.
  • in the device described in Non-Patent Document 3, the word space learned using external data is used, and the hidden feature vectors of users and products are defined in that word space.
  • it can be assumed that the external data is built from a huge corpus and that word vectors are assigned to most words in everyday use. Therefore, the device described in Non-Patent Document 3 can perform natural-language-based addition and subtraction of features, such as adding the feature "lemon flavor" to the product with "sharpness" shown above, or subtracting the feature "beer" and adding the feature "black tea".
  • however, in the device described in Non-Patent Document 3, the hidden feature vector of a product is represented by the sum of its word vectors. Therefore, for example, when there are multiple products having only the word "sharp", they are projected onto exactly the same point in the map space. As a result, the effects of features that do not appear in a product's text cannot be reflected. For example, if a product with "sharpness" also has a characteristic "scent of wheat" that is not described in its text, it is plausible that the product should not be placed at the same position as the word vector of "sharp" in the map space. Nevertheless, products having the same words are projected onto the same point in the map space.
  • such a situation can occur when the product information is not very substantial.
  • it is assumed, for example, when only short information, such as the description on an EC (Electronic Commerce) site, a product's catch phrase, or a product's category, is recorded as data.
  • as described above, the device described in Non-Patent Document 1 has a problem that it is difficult to interpret what kind of features each point in the map space represents. Further, the device described in Non-Patent Document 2 has a problem that features outside the set of features defined over all products cannot be added to or subtracted from the hidden feature vectors of products or users in the map space. Further, the device described in Non-Patent Document 3 has problems that the effects of features not appearing in a product's text cannot be incorporated and that products having the same words are projected onto the same position. Therefore, it is desirable to be able to estimate, from user behavior data, features that do not appear in the text describing a product or user, and to embed them in a map space in which the features of products and users can be manipulated.
  • an object of the present invention is to provide a user/product map estimation device, a user/product map estimation method, and a user/product map estimation program capable of mapping the relationship between users and products in a space, taking their characteristics into consideration, even when the characteristics of the products or users do not appear in the text describing them.
  • the user/product map estimation device according to the present invention includes an input unit that inputs learning data representing products targeted by actions according to users' preferences, product information representing features of the products, and word information representing relationships between words, and an estimation unit that estimates, based on the learning data, a hidden feature vector representing a position in the map space for each user and each product. The estimation unit estimates the hidden feature vectors so that the distance between a user's hidden feature vector and a product's hidden feature vector reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the word information is, the smaller the distance between the product's hidden feature vector and the word vector estimated based on the words indicating the product's features represented by the product information.
  • another user/product map estimation device according to the present invention includes an input unit that inputs learning data representing products targeted by actions according to users' preferences, user information representing features of the users, and word information representing relationships between words, and an estimation unit that estimates, based on the learning data, a hidden feature vector representing a position in the map space for each user and each product. The estimation unit estimates the hidden feature vectors so that the distance between a user's hidden feature vector and a product's hidden feature vector reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the user information is, the smaller the distance between the user's hidden feature vector and the word vector estimated based on the words indicating the user's features represented by the user information.
  • in the user/product map estimation method according to the present invention, learning data representing products targeted by actions according to users' preferences, product information representing features of the products, and word information representing relationships between words are input, and a hidden feature vector representing a position in the map space is estimated for each user and each product. In the estimation, the hidden feature vectors are estimated so that the distance between a user's hidden feature vector and a product's hidden feature vector reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the word information is, the smaller the distance between the product's hidden feature vector and the word vector estimated based on the words indicating the product's features represented by the product information.
  • the user/product map estimation program according to the present invention causes a computer to execute an input process of inputting learning data representing products targeted by actions according to users' preferences, product information representing features of the products, and word information representing relationships between words, and an estimation process of estimating, based on the learning data, a hidden feature vector representing a position in the map space for each user and each product. In the estimation process, the hidden feature vectors are estimated so that the distance between a user's hidden feature vector and a product's hidden feature vector reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the word information is, the smaller the distance between the product's hidden feature vector and the word vector estimated based on the words indicating the product's features represented by the product information.
  • according to the present invention, the relationship between users and products can be mapped in a space, taking their characteristics into consideration.
  • the user/product map estimation device in the present invention is a device that displays the estimated relationships between users and products in association with each other.
  • FIG. 1 is a block diagram showing a configuration example of the first embodiment of the user / product map estimation device according to the present invention.
  • in the present embodiment, the distance relationships between users and products are constrained based on the user's behavior mechanism and on word information indicating relationships between words, and the positions of users and products in the word space are estimated.
  • in the following, product purchasing (that is, a purchasing mechanism) is assumed as the user's behavior.
  • however, the behavior according to the user's preference is not limited to purchasing, and includes, for example, evaluating, referencing, searching for, and displaying one product from among many products.
  • the vector representing the position of the user in the map space is indicated by P
  • the vector representing the position of the product in the map space is indicated by Q
  • the vector P may be referred to as a user's hidden feature vector
  • the vector Q may be referred to as a product hidden feature vector.
  • the distance between the vector P and the vector Q is represented by d(P, Q). This distance d is calculated using, for example, the Euclidean distance or an absolute-value (L1) distance.
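As a concrete reading of d(P, Q), a minimal sketch using the Euclidean distance (swapping in the L1 / absolute-value distance would be a one-line change):

```python
import numpy as np

def d(p, q):
    """Distance d(P, Q) between two points in the map space.
    Euclidean here; np.sum(np.abs(p - q)) would give the L1 variant."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

dist = d([0.0, 0.0], [3.0, 4.0])  # 3-4-5 triangle -> 5.0
```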
  • a vector representing the semantic content of the words possessed by (describing) each product or each user is referred to as a word vector and is represented by V.
  • this vector is defined by the semantic closeness of the words and is estimated from the word information. Words used in similar contexts, such as "Shepherd", "Doberman", and "Akita Inu", are placed close together, while words used in completely different contexts, such as "Shepherd" and "windbreak", are placed far apart.
  • such word vector estimation can be realized by widely known estimation techniques such as word2vec, fastText, and GloVe. Then, by mapping into the word space, natural-language-based operations on the hidden feature vectors of users and products become possible.
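Those natural-language-based operations amount to vector arithmetic in the word space. A sketch with hypothetical one-hot-like word vectors (real vectors would come from word2vec, fastText, or GloVe):

```python
import numpy as np

# Hypothetical word vectors standing in for learned embeddings.
word_vec = {
    "beer":  np.array([1.0, 0.0, 0.0]),
    "tea":   np.array([0.0, 1.0, 0.0]),
    "sharp": np.array([0.0, 0.0, 1.0]),
}

# A product embedded in the word space can be manipulated with
# natural-language features: start from a "sharp beer", then
# subtract "beer" and add "tea".
sharp_beer = word_vec["sharp"] + word_vec["beer"]
sharp_tea = sharp_beer - word_vec["beer"] + word_vec["tea"]
```

The nearest word vectors to the resulting point (here, "sharp" and "tea") identify the concept the manipulated product now resembles.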
  • the object here is to estimate the user's hidden feature vector P and the product's hidden feature vector Q described above.
  • the user/product map estimation device 100 of the present embodiment includes a product information input unit 10, a word information input unit 20, a learning data input unit 30, an estimation unit 40, an output unit 50, and a storage unit 60.
  • the storage unit 60 stores various parameters used for processing by the estimation unit 40, which will be described later. Further, the storage unit 60 may store the information received as input by the product information input unit 10, the word information input unit 20, and the learning data input unit 30.
  • the storage unit 60 is realized by, for example, a magnetic disk or the like.
  • the product information input unit 10 accepts input of product information representing the characteristics (attributes) of the product.
  • the product information input unit 10 may directly accept the input of the attributes of each product, or may accept the product information including the product attributes. Examples of the product information include a description given to the product.
  • the product information input unit 10 extracts words related to the product attribute from the product information.
  • the method of extracting the word related to the product attribute is arbitrary, and the product information input unit 10 may extract the word related to the product attribute from the product information by, for example, morphological analysis.
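As an illustration of this extraction step, a minimal stand-in: a regex tokenizer instead of a true morphological analyzer, with a hypothetical vocabulary. Dropping out-of-vocabulary words mirrors the assumption described earlier for the external word data.

```python
import re

def extract_attribute_words(description, vocabulary):
    """Stand-in for morphological analysis: tokenize the product description
    and keep only words present in the known word vocabulary (words absent
    from the external word data are removed)."""
    tokens = re.findall(r"[a-z]+", description.lower())
    return [t for t in tokens if t in vocabulary]

vocab = {"rich", "sharp", "fruity", "beer"}
words = extract_attribute_words("A rich, fruity beer.", vocab)
```

For Japanese text a real morphological analyzer (e.g. a MeCab-style tool) would replace the regex step; the filtering logic stays the same.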
  • the product information input unit 10 may accept user information as input instead of the product information.
  • User information includes, for example, the profession and interests of the user.
  • the product information input unit 10 extracts words related to the user attribute from the user information.
  • when the product information input unit 10 accepts user information as input instead of product information, the same effects can be obtained by reading "product" as "user" and "user" as "product" in the following description.
  • the product information input unit 10 can be called a user information input unit. The same applies to the following embodiments.
  • the word information input unit 20 accepts input of word information.
  • the word information input unit 20 may directly accept the input of the word vector indicated by each word as the word information, or may accept the set of sentences including the words. Examples of a set of sentences including words include a dictionary of words, a product description, a review sentence, and posting on SNS (Social Networking Service).
  • the word information input unit 20 estimates the word vector of each word from the set of sentences including words.
  • the word information input unit 20 may use a word vector estimation technique such as word2vec, fastText, or GloVe as the estimation method.
  • the learning data input unit 30 inputs the learning data used by the estimation unit 40, which will be described later, for estimating the vector P and the vector Q.
  • the learning data is data showing the relationships between users and products; specifically, it is data representing the products targeted by actions according to each user's preference. For example, when focusing on purchasing as the user's behavior, purchase data (a purchase history) recording purchases made according to the user's preference may be used as the learning data.
  • the estimation unit 40 estimates the hidden feature vector P of each user and the hidden feature vector Q of each product corresponding to the product information based on the product information, the learning data, and the word information.
  • the distance relationship between the user and the product is restricted from the learning data and the word information, and the estimation unit 40 estimates the position of the user and the product in the word space.
  • the estimation unit 40 may estimate the vector P and the vector Q by, for example, calculating the P and Q that minimize (optimize) the loss function illustrated in Equation 1 below, which combines the two terms described next as L(P, Q, Y) + λL(Q, V).
  • in Equation 1, L(P, Q, Y) is a term calculated based on the distance relationships between users and products derived from the purchase data.
  • Y represents the learning data (purchase data).
  • L(P, Q, Y) is defined, for example, to take a larger value the farther a user's hidden feature vector P is from the hidden feature vectors of the user's positive example products relative to the hidden feature vectors of the negative example products.
  • for example, the set of products purchased by a user may be treated as that user's positive example set, and the remaining set of products not purchased may be treated as the negative example set.
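A minimal sketch of this positive/negative split (the purchase-data layout is a hypothetical example):

```python
def split_examples(purchases, all_products):
    """Build each user's positive example set (purchased products) and
    negative example set (all remaining products) from purchase data."""
    positives = {u: set(items) for u, items in purchases.items()}
    negatives = {u: set(all_products) - positives[u] for u in purchases}
    return positives, negatives

purchases = {"u1": ["beer_a", "beer_b"], "u2": ["beer_c"]}
pos, neg = split_examples(purchases, ["beer_a", "beer_b", "beer_c"])
```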
  • the estimation unit 40 estimates the hidden feature vectors so that the distance between a user's hidden feature vector P and a product's hidden feature vector Q reflects the user's preference for the product indicated by the learning data Y. Specifically, the estimation unit 40 may calculate L(P, Q, Y) by Equation 2 below.
  • in Equation 2, that is, L(P, Q, Y) = Σ_u Σ_{i∈I_u+} Σ_{j∈I_u−} w_{u,i,j} h(m + d(P_u, Q_i) − d(P_u, Q_j)), P_u is the hidden feature vector of user u, and Q_i and Q_j represent the hidden feature vectors of positive example product i and negative example product j, respectively. Further, I_u+ represents the set of positive example products of user u, and I_u− represents the set of negative example products of user u. The function h returns the same value as its argument when the argument is positive and returns 0 when the argument is negative. Also, m is a hyperparameter (margin) that adjusts the separation between positive and negative examples.
  • w_{u,i,j} is a weight defined for user u, positive example product i, and negative example product j, and adjusts the contribution of the term when the distance to the positive example product's hidden feature vector is larger than the distance to the negative example product's hidden feature vector.
  • the same value may be defined for w_{u,i,j} over all triples, or w_{u,i,j} may be set larger for positive example products presumed to reflect a stronger preference.
  • that is, the estimation unit 40 uses the user's positive example products and negative example products, derived from the learning data, as the user's preference for products. The estimation unit 40 may then estimate the hidden feature vectors so as to minimize a loss function including a term defined by the distances between the user's hidden feature vector and the hidden feature vectors of the positive and negative example products.
  • further, the estimation unit 40 estimates the hidden feature vectors so that the closer the relationship indicated by the word information is, the smaller the distance between a product's hidden feature vector and that product's word vector.
  • L(Q, V) in Equation 1 is a term that, based on each product's hidden feature vector Q, each product's attributes, and the word vectors V, takes a smaller value the closer each product's hidden feature vector is to the word vectors linked to that product's attributes.
  • the estimation unit 40 may calculate L(Q, V) by Equation 3 illustrated below. That is, the estimation unit 40 may estimate the hidden feature vectors so as to minimize a loss function including a term defined by the distances between the word vectors and the products' hidden feature vectors.
  • in Equation 3, that is, L(Q, V) = Σ_i Σ_k w_ik d(Q_i, V_k), i represents the index of a product and k represents the index of a product attribute. Further, w_ik is a weight indicating whether product i has product attribute k, and may be a binary value of 0 or 1, or a positive real number indicating degree. λ is a hyperparameter that adjusts the relative contributions of L(P, Q, Y) and L(Q, V).
  • the estimation unit 40 may calculate the vector P and the vector Q by a method that minimizes (optimizes) the loss function of Equation 1 above. In this case, the estimation unit 40 may calculate the P and Q that minimize the loss function by the steepest descent method or Newton's method.
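A sketch of the word-anchoring term and its minimization by steepest descent; the squared Euclidean distance and single-product setup are assumptions for illustration, and the actual device minimizes L(P, Q, Y) + λL(Q, V) jointly rather than this term alone.

```python
import numpy as np

def loss_qv(Q, V, w):
    """Sketch of Equation 3: pulls each product's hidden feature vector Q_i
    toward the word vectors V_k of the attributes the product has
    (w[i][k] marks whether product i has attribute k)."""
    return sum(w_ik * float(np.sum((Q[i] - V[k]) ** 2))
               for i, wk in w.items() for k, w_ik in wk.items())

# One product with the single word "sharp": minimizing L(Q, V) by plain
# gradient descent drives Q toward that word vector.
V = {"sharp": np.array([0.0, 1.0])}
w = {"beer_a": {"sharp": 1.0}}
Q = {"beer_a": np.array([2.0, 3.0])}
for _ in range(200):
    grad = 2.0 * (Q["beer_a"] - V["sharp"])  # gradient of squared distance
    Q["beer_a"] = Q["beer_a"] - 0.1 * grad   # steepest descent step
final_loss = loss_qv(Q, V, w)
```

In the joint objective the Equation-2 term would pull Q away from this word-vector anchor, which is exactly why the estimated product position is not a simple average of its word vectors.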
  • the output unit 50 outputs the hidden feature vector of each user and the hidden feature vector of each product in the map space.
  • FIG. 2 is an explanatory diagram showing an example of an output result.
  • FIG. 2 shows an example in which users, products, and words are mapped into the same space.
  • the triangular marks illustrated in FIG. 2 indicate word vectors.
  • the symbols shown in area R1 indicate hidden feature vectors of products.
  • the symbols in area R2 indicate hidden feature vectors of users.
  • the output unit 50 may accept a designated user, product, or word, and output the users, products, or words in the vicinity of the designated one.
  • the product information input unit 10, the word information input unit 20, the learning data input unit 30, the estimation unit 40, and the output unit 50 are realized by a processor of a computer, for example a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), that operates according to a program (the user/product map estimation program).
  • for example, the program may be stored in the storage unit 60, and the processor may read the program and operate as the product information input unit 10, the word information input unit 20, the learning data input unit 30, the estimation unit 40, and the output unit 50 according to the program. Further, the functions of the user/product map estimation device 100 may be provided in a SaaS (Software as a Service) format.
  • the product information input unit 10, the word information input unit 20, the learning data input unit 30, the estimation unit 40, and the output unit 50 may each be realized by dedicated hardware. Further, a part or all of each component of each device may be realized by a general-purpose or dedicated circuit (circuitry), a processor, or a combination thereof. These may be composed of a single chip or may be composed of a plurality of chips connected via a bus. A part or all of each component of each device may be realized by a combination of the above-mentioned circuit or the like and a program.
  • when a part or all of the components of the user/product map estimation device 100 are realized by a plurality of information processing devices and circuits, the plurality of information processing devices and circuits may be arranged centrally or in a distributed manner.
  • the information processing device, the circuit, and the like may be realized as a form in which each of the client-server system, the cloud computing system, and the like is connected via a communication network.
  • FIG. 3 is a flowchart showing an operation example of the user / product map estimation device 100 of the present embodiment.
  • the product information input unit 10 inputs product information (step S11).
  • the word information input unit 20 inputs word information (step S12).
  • the learning data input unit 30 inputs the learning data (step S13).
  • the estimation unit 40 estimates the hidden feature vector P of the user and the hidden feature vector Q of the product based on the product information, the word information, and the learning data (step S14).
  • for example, the estimation unit 40 may estimate the user's hidden feature vector P and the product's hidden feature vector Q by minimizing the loss function. The estimation unit 40 then performs a convergence test on the estimation process (step S15). The estimation unit 40 may determine that the process has converged when, for example, the amount of change in the value being minimized, such as the loss function value, falls below a predetermined value or ratio. When it determines that the process has converged (Yes in step S15), the estimation unit 40 ends the estimation process. Otherwise (No in step S15), the estimation unit 40 repeats the processing from step S14.
  • as described above, in the present embodiment, the learning data input unit 30 inputs the learning data, and the estimation unit 40 estimates a hidden feature vector for each user and each product based on the product information, the word information, and the learning data.
  • at that time, the estimation unit 40 estimates the hidden feature vectors so that the distance between a user's hidden feature vector and a product's hidden feature vector reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the word information is, the smaller the distance between the product's hidden feature vector and the word vector representing the product's features. Therefore, even if a product's characteristics do not appear in the text describing the product, the relationship between users and products can be mapped in a space that takes those characteristics into consideration.
  • similarly, when user information is input, the estimation unit 40 estimates a hidden feature vector for each user and each product based on the user information, the word information, and the learning data. At that time, the estimation unit 40 estimates the hidden feature vectors so that the distance between a user's hidden feature vector and a product's hidden feature vector reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the user information is, the smaller the distance between the user's hidden feature vector and the word vector representing the user's features. Therefore, even if a user's characteristics do not appear in the text describing the user, the relationship between users and products can be mapped in a space that takes those characteristics into consideration.
  • the estimation unit 40 estimates the hidden feature vector P of the user and the hidden feature vector Q of the product based on the product information, the word information, and the learning data.
  • a positive example product of the user is placed in the vicinity of the user through the estimation of the vector P and the vector Q based on the learning data.
  • a negative example product of the user is placed at a position away from the user.
  • a restriction is imposed on arranging the hidden feature vector Q of the product and the word vector V based on the product information in the vicinity.
  • the hidden feature vector Q of the product is arranged near the word vectors based on the product information. Therefore, the position of the product in the map space reflects the semantic positions of the words the product has. Because of the constraint on Q based on the learning data described above, the hidden feature vector Q of each product is not a simple average of the product's word vectors, but a position that also reflects the users' preferences.
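The interplay of the two constraints (user-product attraction from the learning data, and product-word anchoring from the product information) can be illustrated with a toy quadratic loss; all vectors, names, and weights here are hypothetical and do not reproduce the patent's Equations 1 to 3:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 2

# Hypothetical fixed word vectors (prepared in advance).
V = {"rich": np.array([0.0, 1.0]), "fragrance": np.array([1.0, 0.0])}
attrs = {"beerA": ["rich"], "beerB": ["fragrance"]}   # product -> attribute words

P = {"user1": rng.normal(size=dim)}                   # user hidden vectors
Q = {k: rng.normal(size=dim) for k in attrs}          # product hidden vectors
positives = [("user1", "beerA"), ("user1", "beerB")]  # toy learning data

def total_loss(lam=1.0):
    pull = sum(((P[u] - Q[i]) ** 2).sum() for u, i in positives)
    anchor = sum(((Q[i] - V[w]) ** 2).sum()
                 for i, ws in attrs.items() for w in ws)
    return pull + lam * anchor

init = total_loss()
lr = 0.05
for _ in range(2000):
    gP = {u: np.zeros(dim) for u in P}
    gQ = {i: np.zeros(dim) for i in Q}
    for u, i in positives:            # user-product attraction
        gP[u] += 2 * (P[u] - Q[i])
        gQ[i] += 2 * (Q[i] - P[u])
    for i, ws in attrs.items():       # product-word anchoring
        for w in ws:
            gQ[i] += 2 * (Q[i] - V[w])
    for u in P:
        P[u] = P[u] - lr * gP[u]
    for i in Q:
        Q[i] = Q[i] - lr * gQ[i]
final = total_loss()
```

At the optimum of this toy problem, the hidden feature vector of beerB settles between the word vector of "fragrance" and the user (who also likes the "rich" beerA), rather than exactly on the word vector, which mirrors the behavior described above.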
  • FIG. 4 is an explanatory diagram showing an example of the relationship between hidden feature vectors of users, products, and words.
  • beer A having the word “rich” as product information
  • beer B having the word “fragrance” as product information.
  • beer A is mapped to the same position as the word vector of "rich”
  • beer B is mapped to the same position as the word vector of "fragrance”.
  • the constraint shown in Equation 2 tries to bring the user and the hidden feature vector of beer B closer to each other.
  • the positive example products of each user are connected by solid lines, and the words possessed by each product are indicated by dotted lines.
  • an attractive force computed by Equations 2 and 3 above acts along these lines, and an accurate position of the hidden feature vector of each product is estimated.
  • the position of the user is estimated by also taking into account the repulsive force from negative example products.
  • the estimated hidden feature vector of beer B is positioned at a point shifted from the word vector of "fragrance" in the direction of the word vector of "rich". Therefore, according to the present embodiment, it is possible to obtain a map that estimates hidden characteristics of a product (in this example, the "richness" of beer B).
  • the output hidden feature vector P of the user and hidden feature vector Q of the product are maps on the word space. Therefore, a new hidden feature vector can be calculated by freely adding or subtracting word vectors. For example, adding "lemon flavor" to a certain beer, or subtracting the feature "beer" and adding the feature "tea", can be computed as operations between the hidden feature vector and word vectors.
  • the output unit 50 may enumerate the users, products, or words positioned within a certain predetermined distance from the hidden feature vector Q of the product after the operation.
  • the same operations can be performed on users. For example, it is possible to add the feature "marriage" to a certain user, or to subtract the feature "student" and add the feature "IT work", by computation between the hidden feature vector and word vectors.
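The add-and-subtract operations and the neighborhood enumeration described above can be sketched as follows; the vectors and item names are made up for illustration:

```python
import numpy as np

# Hypothetical vectors in a shared 2-D map space.
words = {"beer": np.array([1.0, 0.0]),
         "tea": np.array([0.0, 1.0]),
         "lemon flavor": np.array([0.5, 0.5])}
products = {"drink X": np.array([1.1, 0.1]),
            "drink Y": np.array([0.1, 1.1])}

# "drink X" - "beer" + "tea": swap one feature for another.
q_new = products["drink X"] - words["beer"] + words["tea"]

# Enumerate entities within a fixed radius of the new vector.
def neighbors(vec, pool, radius=0.5):
    return sorted(name for name, v in pool.items()
                  if np.linalg.norm(vec - v) <= radius)

near = neighbors(q_new, {**words, **products})
```

With these toy coordinates, the new vector lands near "tea" and the tea-like "drink Y", which is the kind of listing the output unit 50 would produce.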
  • the prepared word space is a word space learned from external data. The external data is assumed to be built from a huge corpus, so that most words used in everyday life are also assigned word vectors. Therefore, features can be added and subtracted more flexibly than with only the word set that the set of products or users has as attributes.
  • as the learning data of this embodiment, user behavior data, review data, and the like existing in ID-POS (point of sale) systems, EC sites, video viewing sites, Web navigation logs, and so on can be used.
  • as the word information, word vectors obtained from a word dictionary, product descriptions, review sentences, posts on SNS, and the like can be used.
  • product attributes that are not clearly stated can be estimated and used for promotion and product development. Further, according to the present embodiment, it is possible to output a target user, a similar product, or an associated word of a new product obtained when an attribute is changed based on a natural language starting from a certain product. Therefore, it is possible to grasp the target of new product development and devise promotion measures. In addition, even for changes in user characteristics such as life events, more effective product recommendation and promotion will be possible by changing the attributes of users based on natural language.
  • Embodiment 2 Next, a second embodiment of the user / product map estimation device according to the present invention will be described.
  • the hidden feature vectors of the user and the product are estimated based on the word information prepared in advance.
  • the word vector representing the semantic relationship of the words prepared in advance in this way may not always hold in terms of the relationship between the user and the product.
  • words such as “spicy” and “sweet” are assumed to exist in close positions in the word space.
  • the reason is that the contexts in which these words appear are similar. That is, since a sentence such as "this curry is spicy" can have "spicy" replaced by "sweet", as in "this curry is sweet", "sweet" and "spicy" are assumed to exist in the vicinity of each other as word vectors.
  • the user groups who prefer sweet-tasting products and spicy-tasting products are different.
  • because the word vectors of "sweet" and "spicy" are close, a user who prefers spicy taste may be placed in the vicinity of a sweet product in the map obtained by the first embodiment.
  • FIG. 5 is an explanatory diagram showing an example of the relationship between the hidden feature vector of the user, the product, and the word when the word space is not converted.
  • FIG. 5 illustrates a map of users and goods in the vicinity of the words “spicy,” “sweet,” and “cake.”
  • “spicy” and “sweet” are arranged in the vicinity as word vectors, and "cake” is arranged in the distance.
  • the neighborhood of a product having the attribute "sweet" is indicated by a circle. In this case, users who prefer "cake" and products having the characteristic "cake" hardly appear in the neighborhood of the attribute "sweet".
  • instead, users who prefer "spicy" products and the "spicy" products themselves are mixed into that neighborhood.
  • FIG. 6 is a block diagram showing a configuration example of a second embodiment of the user / product map estimation device according to the present invention.
  • the user / product map estimation device 200 of the present embodiment includes a product information input unit 10, a word information input unit 20, a learning data input unit 30, an estimation unit 42, an output unit 52, and a storage unit 60. That is, compared with the user / product map estimation device 100 of the first embodiment, the user / product map estimation device 200 of the present embodiment differs in that it includes the estimation unit 42 and the output unit 52 instead of the estimation unit 40 and the output unit 50.
  • the function f is arbitrary, and the function f may be a function determined by a certain parameter ⁇ .
  • FIG. 7 is an explanatory diagram showing an example of converting the word space by the function f.
  • “spicy” and “sweet” are arranged in the vicinity as word vectors, and “cake” is arranged in the distance.
  • the function f is defined as a transformation in which "sweet” and “cake” are placed close to each other as the converted word vector, and "spicy” is placed far away from “sweet” and “cake”.
  • the estimation unit 42 estimates the hidden feature vector P of each user, the hidden feature vector Q of each product, and the parameter θ of the conversion f, based on the product information, the learning data, and the word information. As in the first embodiment, the estimation unit 42 constrains the distance relationship between users and products from the learning data and the word information, and estimates the positions of the users and products in the word space. Specifically, the estimation unit 42 may estimate the vector P, the vector Q, and the parameter θ by calculating P, Q, and θ that minimize (optimize) the loss function illustrated in Equation 4 below.
  • L(P, Q, Y) is a term calculated based on the distance relationship between users and products derived from the purchase data, as in the first embodiment. Further, as in the first embodiment, L(P, Q, Y) may be defined so as to take a larger value as the distance to the hidden feature vector of a positive example product becomes larger relative to the distance to the hidden feature vector of a negative example product. Specifically, L(P, Q, Y) may be defined as in Equation 2 described above.
  • L(Q, V, θ) is a term defined by the distance between the hidden feature vector Q of the product and the vector obtained by converting, with the function f, the word vectors of the words representing the attributes of the product; it is calculated based on the hidden feature vector Q of each product, the attributes of each product, the word vectors, and the parameter θ of the function f. The estimation unit 42 may estimate the hidden feature vectors so as to minimize a loss function including this term. Specifically, the estimation unit 42 may calculate L(Q, V, θ) by Equation 5 illustrated below.
  • in Equation 5, the contents of i, k, w_ik, and δ are the same as those in Equation 3 described above.
  • a specific example of the function f is an affine transformation.
  • in this case, f(V_k, θ) is expressed as V_k A + b using a matrix A and a vector b.
  • the parameter ⁇ is each element of the matrix A and each element of the vector b.
  • the estimation unit 42 may calculate the vector P, the vector Q, and the parameter θ by minimizing (optimizing) the loss function of Equation 4 above. That is, the estimation unit 42 minimizes the loss function including the term defined by the distance between the vector obtained by converting the word vector V with the function f and the hidden feature vector Q of the product, and thereby estimates the hidden feature vectors and the parameter θ of the function f. In this case, the estimation unit 42 may calculate P and Q that minimize the loss function by the steepest descent method or Newton's method.
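A minimal sketch of estimating P, Q, and the affine parameters θ = (A, b) together, assuming a hinge-plus-anchor loss in the spirit of Equations 4 and 5 and a simple numerical-gradient descent (the actual device would use analytic gradients with steepest descent or Newton's method; all sizes and values here are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
V = np.array([[0.0, 1.0],    # word 0, e.g. "sweet"
              [1.0, 0.0]])   # word 1, e.g. "spicy"

def unpack(x):
    # x holds: user vector p, product vectors q0 and q1, matrix A, offset b.
    p = x[0:2]; q0 = x[2:4]; q1 = x[4:6]
    A = x[6:10].reshape(2, 2); b = x[10:12]
    return p, q0, q1, A, b

def loss(x, lam=1.0):
    p, q0, q1, A, b = unpack(x)
    fV = V @ A + b                                  # transformed word space f(V, theta)
    pos = ((p - q0) ** 2).sum()                     # user likes product 0
    neg = max(0.0, 1.0 - ((p - q1) ** 2).sum())     # user dislikes product 1
    anchor = ((q0 - fV[0]) ** 2).sum() + ((q1 - fV[1]) ** 2).sum()
    return pos + neg + lam * anchor

x = rng.normal(size=12)
x[6:10] = np.eye(2).ravel()      # start f at the identity map

def num_grad(x, h=1e-5):
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x); e[k] = h
        g[k] = (loss(x + e) - loss(x - e)) / (2 * h)
    return g

init = loss(x)
for _ in range(3000):
    x -= 0.02 * num_grad(x)      # joint descent on P, Q, and theta = (A, b)
final = loss(x)
```

Because A and b are free variables of the loss, the word space itself is bent so that the anchored products can satisfy the user's preferences, which is the point of the second embodiment.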
  • the output unit 52 outputs the hidden feature vector of each user, the hidden feature vector of each product, and the word vector converted by the function f. Further, the output unit 52 may output the parameter of the function f.
  • FIG. 8 is an explanatory diagram showing an example of the output result. The example shown in FIG. 8 shows an example in which users, products, and words are mapped in the same space.
  • the output unit 52 may accept the user, the product, or the word specified by the user and output the user, the product, or the word in the vicinity of the designated user, the product, or the word.
  • the product information input unit 10, the word information input unit 20, the learning data input unit 30, the estimation unit 42, and the output unit 52 are realized by a computer processor that operates according to a program (user / product map estimation program).
  • FIG. 9 is a flowchart showing an operation example of the user / product map estimation device 200 of the present embodiment.
  • the processes from step S11 to step S13 for inputting the product information, the word information, and the learning data are the same as the processes illustrated in FIG.
  • the estimation unit 42 estimates the hidden feature vector P of the user, the hidden feature vector Q of the product, and the parameter ⁇ of the function f that transforms the word space, based on the product information, the word information, and the learning data (step S24).
  • the estimation unit 42 may estimate the hidden feature vector P of the user and the hidden feature vector Q of the product by minimizing (optimizing) the loss function.
  • in step S25, the estimation unit 42 performs a convergence test in the same manner as in step S15 in FIG. That is, when convergence is determined (Yes in step S25), the estimation unit 42 ends the estimation process. On the other hand, if convergence is not determined (No in step S25), the estimation unit 42 repeats the processing from step S24.
  • the estimation unit 42 estimates the hidden feature vectors (and the parameter θ of the function f) by minimizing the loss function including the term defined by the distance between the vector obtained by converting the word vectors with the function f and the hidden feature vector Q of the product. Therefore, in addition to the effects of the first embodiment, the word space can be corrected to suit the users' preferences.
  • the estimation unit 42 estimates the hidden feature vector P of the user, the hidden feature vector Q of the product, and the parameter θ of the function f that transforms the word space, based on the product information, the word information, and the learning data.
  • a positive example product of the user is placed in the vicinity of the user through the estimation of the vector P and the vector Q based on the learning data.
  • a negative example product of the user is placed at a position away from the user. As a result, the similarity between products based on user behavior is reflected in the map space.
  • a restriction is imposed on arranging the hidden feature vector Q of the product and the word vector V based on the product information in the vicinity.
  • the hidden feature vector Q of the product is arranged near the word vectors based on the product information. Therefore, the position of the product in the map space reflects the semantic positions of the words the product has. Because of the constraint on Q based on the learning data described above, the hidden feature vector Q of each product is not a simple average of the product's word vectors, but a position that also reflects the users' preferences.
  • FIG. 10 is an explanatory diagram showing an example of the relationship between the hidden feature vector of the user, the product, and the word when the word space is transformed.
  • FIG. 10 illustrates how the user and product maps in the vicinity of the words “spicy”, “sweet” and “cake” are corrected by the process according to this embodiment.
  • “spicy” and “sweet” are placed nearby as word vectors, and "cake” is placed at points away from the "spicy” and “sweet” word vectors.
  • the function f places "sweet" and "cake" in the vicinity of each other as converted word vectors, and places the vector of "spicy" far from "sweet" and "cake".
  • as a result, users near the attribute "sweet", users who prefer "cake", and products having the characteristic "cake" appear in the neighborhood.
  • users who prefer “spicy” products and “spicy” products are less likely to be mixed.
  • the output hidden feature vector of the user and the product is a map on the word space converted by the function f.
  • the original word vector is associated with the vector of the converted word space by the function f. Therefore, also in this embodiment, a new hidden feature vector can be calculated by freely adding or subtracting a word vector. Specifically, when adding a certain word vector to a certain vector in the map space, the vector obtained by converting the word vector by the function f may be added to the vector in the map space.
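The rule above (convert a word vector with f before adding it to a vector in the map space) can be written as a one-liner; the affine parameters and coordinates below are arbitrary examples, not values from the disclosure:

```python
import numpy as np

# Hypothetical converted word space: f is an affine map with parameters (A, b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([0.5, -0.5])

def f(v):
    # The learned transformation from the original word space to the map space.
    return v @ A + b

word_v = np.array([1.0, 1.0])   # original word vector
x_map = np.array([0.0, 0.0])    # some vector already in the map space

# To add the word's feature in the map space, first convert it with f.
x_new = x_map + f(word_v)
```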
  • the word space corrected by the user's preference can be obtained as an operable map space. It also makes it possible to obtain maps of users and products in that space.
  • Embodiment 3 Next, a third embodiment of the user / product map estimation device according to the present invention will be described.
  • the hidden feature vector of the user and the hidden feature vector of the product are output.
  • the output user and product hidden feature vectors are maps on the word space. Therefore, a new hidden feature vector can be calculated by freely adding or subtracting word vectors. For example, it is possible to add a feature of "lemon flavor" to a certain beer, or subtract a feature of "beer” to add a feature of "tea” by calculation between a hidden feature vector and a word vector. However, observing such calculations and results is not always an intuitive operation for the user of the device.
  • FIG. 11 is a block diagram showing a configuration example of a third embodiment of the user / product map estimation device according to the present invention.
  • the user / product map estimation device 300 of the present embodiment includes a product information input unit 10, a word information input unit 20, a learning data input unit 30, an estimation unit 42, an output unit 52, a storage unit 60, and an output operation unit 70. That is, the user / product map estimation device 300 of the present embodiment differs from the user / product map estimation device 200 of the second embodiment in that it includes the output operation unit 70.
  • the estimation unit 42 and the output unit 52 may be realized by the estimation unit 40 and the output unit 50 in the first embodiment, respectively.
  • the output operation unit 70 receives information on the product or user for which the hidden feature vector is output.
  • the output operation unit 70 may accept, for example, a user ID or a name as user information.
  • the output operation unit 70 outputs a hidden feature vector of the corresponding product or user based on the received input.
  • the output operation unit 70 accepts the input of any word and operation defined in the word space.
  • the output operation unit 70 may accept operations between vectors such as addition and subtraction, and may accept numerical values indicating the degree of addition and subtraction.
  • the output operation unit 70 calculates a new hidden feature vector from the hidden feature vector specified by the product or user information described above, the hidden feature vector of the word to be used in the calculation, and the input operation.
  • for example, the output operation unit 70 performs an operation of subtracting the hidden feature vector of "caffeine" from the hidden feature vector of "product A". The output operation unit 70 then calculates the distance between the hidden feature vector after the operation and the hidden feature vectors of the users, products, or words arranged in the map space, and identifies the users, products, or words whose hidden feature vectors are close to it.
  • the output operation unit 70 may perform this distance calculation for all users, products, and words, or only for a range specified by the user in advance (for example, only users, only products, or only products in a specific category).
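A sketch of the operation flow described above ("product A" minus "caffeine", then a nearest-entity search optionally restricted to one kind of entity); all names and coordinates here are invented for illustration:

```python
import numpy as np

# Hypothetical map space: users, products, and words share one 2-D space.
vectors = {
    "product A": np.array([1.0, 1.0]),
    "product B": np.array([0.0, 0.9]),
    "caffeine":  np.array([1.0, 0.0]),
    "user u1":   np.array([0.2, 1.0]),
}
kinds = {"product A": "product", "product B": "product",
         "caffeine": "word", "user u1": "user"}

# "product A" minus the "caffeine" feature.
target = vectors["product A"] - vectors["caffeine"]

# Distance to every entity, optionally restricted to one kind.
def nearest(target, kind=None):
    cands = [(np.linalg.norm(target - v), name)
             for name, v in vectors.items()
             if kind is None or kinds[name] == kind]
    return sorted(cands)[0][1]

best_product = nearest(target, kind="product")
```

Restricting the candidate pool by kind corresponds to the pre-specified range ("only products", "only products in a specific category") mentioned above.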
  • the output operation unit 70 outputs the hidden feature vector of the specified user, product, or word in a manner that is easy for the user to see.
  • the output operation unit 70 may display the product name and the product image side by side in the order of the distance from the hidden feature vector after the above calculation.
  • alternatively, the output operation unit 70 may highlight the users, products, or words determined to be at or near the point of the hidden feature vector obtained in the map space, displaying them on a map space projected into a lower dimension by a method such as principal component analysis or t-SNE.
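As one possible projection method, a PCA-style reduction to two dimensions can be computed with an SVD (t-SNE would need an external library, so this is only a stand-in under that assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical high-dimensional hidden feature vectors (rows = entities).
X = rng.normal(size=(20, 10))

# Project onto the top-2 principal components for display.
Xc = X - X.mean(axis=0)                 # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T                      # 2-D coordinates for plotting / highlighting
```

The rows of X2 can then be scattered on screen, with the entities near the computed hidden feature vector highlighted.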
  • the product information input unit 10, the word information input unit 20, the learning data input unit 30, the estimation unit 42, the output unit 52, and the output operation unit 70 are realized by a computer processor that operates according to a program (user / product map estimation program).
  • FIG. 12 is a flowchart showing an operation example of the user / product map estimation device 300 of the present embodiment.
  • the processing from step S11 to step S25 for inputting various information and estimating the hidden feature vector and the parameter ⁇ of the function f is the same as the processing illustrated in FIG.
  • the output operation unit 70 receives the product or user information and the input of any word and operation defined in the word space (step S36).
  • the output operation unit 70 calculates a new hidden feature vector from the hidden feature vector specified by the input product or user information, the hidden feature vector assigned to the input word, and the input operation (step S37). Then, the output operation unit 70 calculates the distance between the calculated hidden feature vector and the hidden feature vectors of the users, products, or words arranged in the map space (step S38).
  • the output operation unit 70 outputs a hidden feature vector of a nearby user, product, or word in a manner that is easy for the user to see based on the above-mentioned distance calculation (step S39).
  • the output operation unit 70 receives the information of the product or user whose hidden feature vector is to be output, the word to be used in the calculation, and the operation to be applied, and outputs the result of performing the received operation between the hidden feature vector of the product or user and the hidden feature vector of the received word.
  • the output operation unit 70 outputs the result of a natural-language-based operation on a certain product or user based on the user's input. Therefore, in addition to the effects of the first and second embodiments, in the present embodiment the result can be observed more intuitively by performing natural-language-based operations on the output hidden feature vectors.
  • FIG. 13 is a block diagram showing an outline of the user / product map estimation device according to the present invention.
  • the user / product map estimation device 80 (for example, the user / product map estimation device 100) according to the present invention includes an input unit 81 (for example, the learning data input unit 30) that inputs learning data (for example, purchase data) representing products targeted by actions according to users' preferences, and an estimation unit 82 (for example, the estimation unit 40 or the estimation unit 42) that estimates, for each of the users and the products, a hidden feature vector representing a position on the map space based on product information representing the features of the products, word information representing the relationships between words, and the learning data.
  • the estimation unit 82 estimates the hidden feature vectors (for example, using Equation 1) so that the distance between the hidden feature vector of the user (for example, the hidden feature vector P) and the hidden feature vector of the product (for example, the hidden feature vector Q) reflects the user's preference for the product indicated by the training data.
  • the estimation unit 82 may estimate the hidden feature vectors so as to minimize a loss function (for example, Equation 1 above) including a term (for example, Equation 3 above) defined by the distance between the word vector and the hidden feature vector of the product.
  • the estimation unit 82 may use the user's positive example products or negative example products based on the learning data as the user's preference for the products, and may estimate the hidden feature vectors so as to minimize a loss function (for example, Equation 1 above) including a term (for example, Equation 2 above) defined by the distance between the user's hidden feature vector and the hidden feature vectors of the positive example or negative example products.
  • the estimation unit 82 may estimate the hidden feature vectors so as to minimize a loss function (for example, Equation 4 above) including a term defined by the distance between the vector obtained by converting the word vector with the conversion function (for example, the function f) and the hidden feature vector of the product.
  • the estimation unit 82 may estimate the hidden feature vectors and the parameter of the conversion function (for example, the parameter θ) by minimizing the loss function including the term defined by the distance between the vector obtained by converting the word vector with the conversion function and the hidden feature vector of the product.
  • the user / product map estimation device 80 may include an output unit (for example, an output unit 52) that outputs the parameters of the conversion function.
  • the user / product map estimation device 80 may include an output unit (for example, an output unit 50) that outputs a hidden feature vector of each user and a hidden feature vector of each product in the map space.
  • the user / product map estimation device 80 may include an output operation unit (for example, the output operation unit 70) that receives the information of the product or user whose hidden feature vector is to be output, the word to be used in the calculation, and the operation to be applied, and that outputs the result of performing the operation between the hidden feature vector of the product or user and the hidden feature vector of the received word.
  • the output operation unit may output a user, a product, or a word arranged in the vicinity of the vector obtained as a result of the calculation.
  • the estimation unit 82 may estimate a hidden feature vector representing a position on the map space in which the feature of the product can be manipulated for each of the user and the product.
  • the user / product map estimation device 80 may estimate the hidden feature vector using the user information instead of the product information or together with the product information.
  • the input unit 81 (for example, the learning data input unit 30) inputs learning data (for example, purchase data) representing products targeted by actions according to users' preferences, and the estimation unit 82 (for example, the estimation unit 40 or the estimation unit 42) estimates, for each of the users and the products, a hidden feature vector representing a position in the map space based on user information representing the features the users have, word information representing the relationships between words, and the learning data.
  • the estimation unit 82 estimates the hidden feature vectors so that the distance between the hidden feature vector of the user (for example, the hidden feature vector P) and the hidden feature vector of the product (for example, the hidden feature vector Q) reflects the user's preference for the product indicated by the learning data.
  • a user / product map estimation device including an estimation unit that estimates, for each of a user and a product, a hidden feature vector representing a position on a map space based on the learning data, wherein the estimation unit estimates the hidden feature vectors so that the distance between the hidden feature vector of the user and the hidden feature vector of the product reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the word information is, the closer the hidden feature vector of the product is to the word vector estimated based on the word indicating the feature of the product represented by the product information.
  • the estimation unit estimates the hidden feature vectors so as to minimize a loss function including a term defined by the distance between the word vector and the hidden feature vector of the product.
  • the user / product map estimation device according to Supplementary note 1 or Supplementary note 2, wherein the estimation unit uses the user's positive example products or negative example products based on the learning data as the user's preference for the products, and estimates the hidden feature vectors so as to minimize a loss function including a term defined by the distance between the user's hidden feature vector and the hidden feature vectors of the positive example or negative example products.
  • the user / product map estimation device according to any one of Supplementary note 1 to Supplementary note 3, wherein the estimation unit estimates the hidden feature vectors so as to minimize a loss function including a term defined by the distance between the vector obtained by converting the word vector with the conversion function and the hidden feature vector of the product.
  • the user / product map estimation device according to any one of Supplementary note 1 to Supplementary note 4, wherein the estimation unit estimates the hidden feature vectors and the parameter of the conversion function by minimizing the loss function including the term defined by the distance between the vector obtained by converting the word vector with the conversion function and the hidden feature vector of the product.
  • Appendix 6 The user / product map estimation device according to Appendix 5, which includes an output unit that outputs parameters of a conversion function.
  • Supplementary note 7 The user / product map estimation device according to any one of Supplementary note 1 to Supplementary note 6, which includes an output unit for outputting a hidden feature vector of each user and a hidden feature vector of each product in the map space.
  • the user / product map estimation device according to Supplementary note 8, wherein the output operation unit outputs the users, products, or words arranged in the vicinity of the vector obtained as a result of the operation.
  • the user / product map estimation device according to any one of Supplementary note 1 to Supplementary note 9, wherein the estimation unit estimates, for each of the user and the product, a hidden feature vector representing a position on a map space in which the features of the product can be manipulated.
  • a user / product map estimation device including an estimation unit that estimates, for each of a user and a product, a hidden feature vector representing a position on the map space based on the learning data, wherein the estimation unit estimates the hidden feature vectors so that the distance between the hidden feature vector of the user and the hidden feature vector of the product reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the word information is, the closer the hidden feature vector of the user is to the word vector estimated based on the word indicating the user's feature represented by the user information.
  • a user / product map estimation method characterized in that learning data representing products targeted by actions according to a user's preference is input; a hidden feature vector representing a position on the map space is estimated for each of the user and the products based on product information representing the features of the products, word information representing the relationships between words, and the learning data; and, at the time of the estimation, the hidden feature vectors are estimated so that the distance between the hidden feature vector of the user and the hidden feature vector of the product reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the word information is, the closer the hidden feature vector of the product is to the word vector estimated based on the word indicating the feature of the product represented by the product information.
  • Appendix 13: The user/product map estimation method according to Appendix 12, wherein the hidden feature vectors are estimated so as to minimize a loss function including a term defined by the distance between the word vector and the product's hidden feature vector.
  • A user/product map estimation program that causes a computer to execute an estimation process of estimating, for each of the user and the product, a hidden feature vector representing a position in the map space, wherein, in the estimation process, the hidden feature vectors are estimated so that the distance between the user's hidden feature vector and the product's hidden feature vector reflects the user's preference for the product indicated by the learning data, and so that the closer the relationship indicated by the word information, the closer the distance between the product's hidden feature vector and a word vector estimated based on a word, represented by the product information, indicating the feature of the product.
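The loss function referred to in Appendix 13 can be made concrete with a small numerical sketch. This is not the claimed implementation: the toy data, the hinge margin of 2.0, the weight `alpha`, the 2-D map space, and the use of plain numerical gradient descent are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (all values hypothetical): 2 users, 3 products, 2-D map space.
# pref[u, p] = 1 if user u acted on (e.g. purchased) product p.
pref = np.array([[1, 1, 0],
                 [0, 0, 1]], dtype=float)

# word_vec[p] stands in for the word vector estimated from the words that
# describe product p (a real system might obtain it with word2vec).
word_vec = np.array([[0.0, 0.0],
                     [0.1, 0.1],
                     [2.0, 2.0]])

def loss(params, alpha=1.0):
    """Preference term plus word-vector proximity term (cf. Appendix 13)."""
    U = params[:4].reshape(2, 2)   # user hidden feature vectors
    P = params[4:].reshape(3, 2)   # product hidden feature vectors
    d = np.linalg.norm(U[:, None, :] - P[None, :, :], axis=2)
    # Preferred pairs should be close; non-preferred pairs pushed out to a margin.
    pref_term = (pref * d**2 + (1 - pref) * np.maximum(0, 2.0 - d)**2).sum()
    # Each product's hidden feature vector should stay near its word vector.
    word_term = ((P - word_vec)**2).sum()
    return pref_term + alpha * word_term

# Plain numerical gradient descent, sufficient for a toy example.
params = rng.normal(size=10)
for _ in range(500):
    grad = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params); e[i] = 1e-5
        grad[i] = (loss(params + e) - loss(params - e)) / 2e-5
    params -= 0.01 * grad

U, P = params[:4].reshape(2, 2), params[4:].reshape(3, 2)
d = np.linalg.norm(U[:, None, :] - P[None, :, :], axis=2)
# User 0 preferred products 0 and 1, so they should end up nearer than product 2.
print(d[0, 0] < d[0, 2], d[0, 1] < d[0, 2])
```

The word-vector term keeps each product's hidden feature vector near the vector of the words that describe it, so products described by related words end up close together on the map while the preference term places users near the products they acted on.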

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

In the present invention, an input unit 81 inputs learning data representing a product that has been the target of an action according to a user's preference. Based on product information representing a feature of the product, word information representing a relationship between words, and the learning data, an estimation unit 82 estimates, for each of the user and the product, a hidden feature vector representing a position on a map space. The estimation unit 82 estimates the hidden feature vectors such that the distance between the user's hidden feature vector and the product's hidden feature vector reflects the user's preference for the product indicated by the learning data, and such that the closer the relationship indicated by the word information, the closer the distance between the product's hidden feature vector and a word vector estimated based on a word indicating the product feature represented by the product information.
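Once the hidden feature vectors have been estimated, the map space can be queried directly, since distance reflects preference. A minimal sketch with hypothetical vectors and names (none taken from the application):

```python
import numpy as np

# Hypothetical learned hidden feature vectors in a 2-D map space.
users = {"user_a": np.array([0.1, 0.2]), "user_b": np.array([1.9, 2.1])}
products = {"novel": np.array([0.0, 0.3]),
            "cookbook": np.array([0.4, 0.1]),
            "camera": np.array([2.0, 2.0])}

def rank_products(user_vec, products):
    """Rank products for a user by distance in the map space (closer = preferred)."""
    return sorted(products, key=lambda p: np.linalg.norm(user_vec - products[p]))

print(rank_products(users["user_a"], products))  # → ['novel', 'cookbook', 'camera']
```

Because users and products share one space, the same distance computation supports both recommendation (nearest products to a user) and visualization of the user/product map.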
PCT/JP2019/034346 2019-09-02 2019-09-02 User/product map estimation device, method, and program WO2021044460A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2019/034346 WO2021044460A1 (fr) 2019-09-02 2019-09-02 User/product map estimation device, method, and program
JP2021543616A JP7310899B2 (ja) 2019-09-02 2019-09-02 User/product map estimation device, method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/034346 WO2021044460A1 (fr) 2019-09-02 2019-09-02 User/product map estimation device, method, and program

Publications (1)

Publication Number Publication Date
WO2021044460A1 true WO2021044460A1 (fr) 2021-03-11

Family

ID=74852034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/034346 WO2021044460A1 (fr) 2019-09-02 2019-09-02 User/product map estimation device, method, and program

Country Status (2)

Country Link
JP (1) JP7310899B2 (fr)
WO (1) WO2021044460A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011257811A (ja) * 2010-06-04 2011-12-22 Shinshu Univ Product search system and product search method in a product search system
JP2013105309A (ja) * 2011-11-14 2013-05-30 Sony Corp Information processing device, information processing method, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KOHTANI, HIROTSUGU ET AL.: "A note on visualization of user preference based on purchase data analysis", ITE TECHNICAL REPORT, vol. 35, no. 9, 14 February 2011 (2011-02-14), pages 199 - 202 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7239759B1 (ja) 2022-03-17 2023-03-14 Yahoo Japan Corporation Information providing device, information providing method, and information providing program
JP2023137093A (ja) Information providing device, information providing method, and information providing program

Also Published As

Publication number Publication date
JPWO2021044460A1 (fr) 2021-03-11
JP7310899B2 (ja) 2023-07-19

Similar Documents

Publication Publication Date Title
US11836780B2 (en) Recommendations based upon explicit user similarity
CN103443787B (zh) System for identifying text relationships
US8392360B1 (en) Providing an answer to a question left unanswered in an electronic forum
Cuesta-Valino et al. Word of mouth and digitalization in small retailers: Tradition, authenticity, and change
US9449073B2 (en) Measuring and displaying facets in context-based conformed dimensional data gravity wells
KR20180091043A (ko) Method and apparatus for obtaining a user portrait
US20140019285A1 (en) Dynamic Listing Recommendation
CN116127020A (zh) Generative large language model training method and model-based search method
Hauff et al. Importance and performance in PLS-SEM and NCA: Introducing the combined importance-performance map analysis (cIPMA)
WO2021044460A1 (fr) User/product map estimation device, method, and program
Chen et al. Data acquisition: A new frontier in data-centric AI
TWI506569B (zh) Image tagging method capable of identifying the position ranges and behavioral relationships of objects in an image
JP6527257B1 (ja) Providing device, providing method, and providing program
US11605100B1 (en) Methods and systems for determining cadences
JP7139270B2 (ja) Estimation device, estimation method, and estimation program
JP2003167920A (ja) Needs information construction method, needs information construction device, needs information construction program, and recording medium recording the program
JP6680725B2 (ja) Category selection device, advertisement distribution system, category selection method, and program
JP6809148B2 (ja) Program and combination extraction system
US8538813B2 (en) Method and system for providing an SMS-based interactive electronic marketing offer search and distribution system
KR20090029220A (ko) Computer-implemented method for providing information to a user, computer-implemented method for providing multi-currency information to a user, and user interface
JP6506839B2 (ja) Complaint information processing device and system
JP2019149200A (ja) Providing device, providing method, and providing program
JP2020035072A (ja) Information processing device, information processing method, and information processing program
US20230067824A1 (en) Preference inference device, preference inference method, and preference inference program
JP5775241B1 (ja) Information processing system, information processing method, and information processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19944498

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021543616

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19944498

Country of ref document: EP

Kind code of ref document: A1