CN115034803A - New article mining method and device and storage medium - Google Patents

New article mining method and device and storage medium

Info

Publication number
CN115034803A
Authority
CN
China
Prior art keywords
article
preset
target
item
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210384684.4A
Other languages
Chinese (zh)
Inventor
罗飞
胡炜
王答明
易津锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN202210384684.4A priority Critical patent/CN115034803A/en
Publication of CN115034803A publication Critical patent/CN115034803A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q30/0202 Market predictions or forecasting for commercial activities

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Genetics & Genomics (AREA)
  • Physiology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides a new article mining method, a device, and a storage medium, relating to the field of computers. The method comprises: obtaining a plurality of article attribute combinations of a preset article category; evaluating each article attribute combination against a plurality of preset targets using an article multi-target evaluation model, where the model is obtained by learning, for the plurality of preset targets, from historical data of existing articles of the preset category; determining a preferred article attribute combination according to the evaluation results of the plurality of preset targets over the plurality of article attribute combinations; and determining a new article of the preset category according to the preferred article attribute combination. New articles are mined automatically by a machine learning model, reducing the labor, financial, and material cost of new article design. The multi-target learning model can output article attribute combinations that satisfy multiple targets simultaneously; interaction among the targets helps the search jump out of the local optimum of any single target, and shared bottom-layer features reduce the model parameters to be tuned and the repeated computation.

Description

New article mining method and device and storage medium
Technical Field
The present disclosure relates to the field of computers, and in particular to a method and an apparatus for mining new articles and a storage medium.
Background
With the rapid development of the e-commerce industry and the improving efficiency of supply-chain production, the iteration of articles keeps accelerating and the consumption of new articles keeps growing; new articles have become an important means for enterprises to attract consumers, seize the market first, and improve brand value.
In most existing new-article design methods, a product manager collects user requirements through market research and market analysis according to the strategic needs of the enterprise, mines users' attention to and demands on the various attributes of an article through methods such as comment mining, changes article attributes based on experience to design new articles, and finally determines the design scheme of a new article through trial production and trial sale.
Disclosure of Invention
The embodiments of the present disclosure provide a scheme for automatically mining new articles through a machine learning model, which assists the design of new articles and reduces the labor, financial, and material cost of that design. The model is a multi-target learning model that can output article attribute combinations satisfying multiple targets simultaneously; during learning, the interaction among the targets helps the search jump out of the local optimum of any single target, and because the bottom-layer features of the multi-target learning model are shared, the model parameters to be tuned and the repeated computation are reduced. In addition, article attribute combinations can be obtained more comprehensively through a heuristic search method, avoiding missed combinations and allowing more new articles to be mined.
Some embodiments of the present disclosure provide a new article mining method, including:
acquiring a plurality of article attribute combinations of a preset article category;
evaluating a plurality of preset targets of each article attribute combination by using an article multi-target evaluation model, wherein the article multi-target evaluation model is obtained by learning historical data of existing articles of preset categories according to the plurality of preset targets;
determining a preferred article attribute combination according to the evaluation results of a plurality of preset targets of a plurality of article attribute combinations;
and determining the new articles of the preset categories according to the preferred article attribute combination.
In some embodiments, the obtaining of the plurality of article attribute combinations of the preset category includes: searching the article attributes of the preset category by using a heuristic search method to obtain the plurality of article attribute combinations of the preset category.
In some embodiments, the article multi-target evaluation model is obtained by learning historical data of existing articles of the preset category for the preset targets based on a multi-gate mixed expert multi-target learning model, a progressive hierarchical extraction multi-target learning model or a subnet routing multi-target learning model.
In some embodiments, in a case where the item multi-objective evaluation model is obtained based on a multi-gated hybrid expert multi-objective learning model, the item multi-objective evaluation model includes an input layer, a multi-gated hybrid expert layer, a tower layer, and an output layer, which are sequentially cascaded, wherein:
the multi-gated hybrid expert layer comprises a plurality of feedforward neural networks which are called experts and connected with the input layer, and a plurality of gated networks which are connected with the input layer and the experts, wherein each gated network comprises a gate connected with the input layer and a weighted summation calculator, the weighted summation calculator in each gated network carries out weighted summation operation on the output result of each expert according to weight information provided by the gate in the gated network, and each gated network corresponds to a preset target;
the tower layer comprises a plurality of sub-tower layers, wherein each sub-tower layer is a fully connected neural network and is connected with one gating network;
the output layer comprises a plurality of sub-output layers, and each sub-output layer is a fully connected neural network and is connected with one sub-tower layer.
In some embodiments, the method further comprises performing one or more of the following operations prior to learning: initializing the weights and biases of each expert and each gate with a uniform-distribution initializer; initializing the weights of the fully connected neural network of each sub-tower layer with a normal-distribution initializer; and initializing the weights of the fully connected neural network of each sub-output layer with a normal-distribution initializer.
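The three initialization choices above can be sketched with NumPy; the shapes and ranges below (8 input features, 32 hidden units, a ±0.05 uniform limit, a 0.05 standard deviation) are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_init(shape, limit=0.05):
    # Uniform-distribution initializer for expert and gate weights/biases.
    return rng.uniform(-limit, limit, size=shape)

def normal_init(shape, stddev=0.05):
    # Normal-distribution initializer for the fully connected
    # tower and output sub-layers.
    return rng.normal(0.0, stddev, size=shape)

expert_w = uniform_init((8, 32))   # 8 input features, 32 hidden units
tower_w = normal_init((32, 16))
```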
In some embodiments, learning historical data of existing articles of the preset category for the plurality of preset targets to obtain the article multi-target evaluation model comprises: determining a training data set according to historical data of existing articles of the preset article class, inputting training characteristic data in the training data set into an article multi-target evaluation model, determining total loss of a plurality of preset targets according to difference information between a predicted value of each preset target output by the article multi-target evaluation model and a true value of each preset target in the training data set, and training the article multi-target evaluation model according to the total loss of the plurality of preset targets.
In some embodiments, learning historical data of existing articles of the preset category for the plurality of preset targets to obtain the article multi-target evaluation model comprises: and determining a verification data set according to the historical data of the existing articles of the preset article type, verifying the article multi-target evaluation model by using the verification data set every time the article multi-target evaluation model is trained for one turn, and evaluating the training effect by using a preset evaluation function according to the verification result.
In some embodiments, learning historical data of existing articles of the preset category for the plurality of preset targets to obtain the article multi-target evaluation model comprises: determining a test data set according to the historical data of the existing articles of the preset article class, testing the article multi-target evaluation model by using the test data set, and using the article multi-target evaluation model passing the test for mining new articles.
In some embodiments, determining the preferred item attribute combination based on the evaluation of the plurality of preset goals for the plurality of item attribute combinations comprises:
calculating a comprehensive evaluation value of a plurality of preset targets of each article attribute combination according to the evaluation value of each preset target of each article attribute combination;
and determining a preferred article attribute combination from the plurality of article attribute combinations according to the comprehensive evaluation value of each article attribute combination.
In some embodiments, the heuristic search method comprises: genetic algorithm, particle swarm algorithm, ant colony algorithm, tabu search algorithm and simulated annealing algorithm.
In some embodiments, the plurality of preset goals comprises: sales volume and unique visitor conversion rate.
In some embodiments, the historical data of the existing item includes: historical sales data of the existing item, wherein the historical sales data of the existing item includes: intrinsic attributes and sales information of the existing item.
Some embodiments of the present disclosure provide a new article mining device, including:
an attribute combination acquisition unit configured to acquire a plurality of item attribute combinations of a preset item class;
the evaluation unit is configured to evaluate a plurality of preset targets of each item attribute combination by utilizing an item multi-target evaluation model, and the item multi-target evaluation model is obtained by learning historical data of existing items of the preset categories aiming at the plurality of preset targets;
a preferred attribute combination determination unit configured to determine a preferred item attribute combination according to evaluation results of a plurality of preset targets of a plurality of item attribute combinations;
and the new item determining unit is configured to determine the new item of the preset item type according to the preferred item attribute combination.
In some embodiments, the device further comprises: a model learning unit configured to learn, based on a multi-gated hybrid expert multi-target learning model, a progressive layered extraction multi-target learning model, or a sub-network routing multi-target learning model, historical data of existing articles of the preset category for the plurality of preset targets to obtain the article multi-target evaluation model.
In some embodiments, the attribute combination obtaining unit is configured to search the article attributes of the preset category by using a heuristic search method to obtain a plurality of article attribute combinations of the preset category.
Some embodiments of the present disclosure provide a new article mining device, including: a memory; and a processor coupled to the memory, the processor configured to perform the new article mining method of the various embodiments based on instructions stored in the memory.
Some embodiments of the present disclosure provide a non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of the new item mining method of the various embodiments.
Drawings
The drawings that will be used in the description of the embodiments or the related art will be briefly described below. The present disclosure can be understood more clearly from the following detailed description, which proceeds with reference to the accompanying drawings.
It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without undue inventive faculty.
Fig. 1 illustrates a flow diagram of a new item mining method of some embodiments of the present disclosure.
Fig. 2A and 2B show schematic diagrams of chromosome mutation and crossover of each individual in the genetic algorithm of some embodiments of the present disclosure.
Fig. 3 shows a flow diagram of a new item mining method based on genetic algorithms according to some embodiments of the present disclosure.
Fig. 4 is a schematic diagram of an item multi-objective assessment model obtained based on a multi-gated hybrid expert multi-objective learning model according to some embodiments of the present disclosure.
FIG. 5 illustrates a schematic diagram of a learning method for an item multi-objective assessment model, according to some embodiments of the present disclosure.
Fig. 6 illustrates a schematic view of a new article mining apparatus according to some embodiments of the present disclosure.
Fig. 7 shows a schematic view of a new article mining apparatus according to further embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
Fig. 1 illustrates a flow diagram of a new item mining method of some embodiments of the present disclosure.
As shown in fig. 1, the new article mining method of this embodiment includes the following steps.
In step 110, a plurality of article attribute combinations of the preset article category are obtained.
In some embodiments, a heuristic search method is used to search the article attributes of the preset categories to obtain a plurality of article attribute combinations of the preset categories. The heuristic search methods include, for example, but are not limited to: genetic algorithm, particle swarm algorithm, ant colony algorithm, tabu search algorithm and simulated annealing algorithm. The various heuristics differ primarily in the manner in which they operate and the stopping criteria when searching. In the following, a brief explanation of each heuristic algorithm is provided, and the detailed description may refer to the related art.
The tabu search algorithm maintains a tabu list during the search, storing the results (i.e., article attribute combinations) visited earlier in the search; these results are not considered in subsequent iterations, preventing the search from cycling and falling into a local optimum.
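A minimal sketch of such a tabu search over discrete attribute combinations might look as follows; the `score` function stands in for the composite multi-objective evaluation value, and all parameters are illustrative assumptions:

```python
import random

def tabu_search(domains, score, iters=200, tabu_size=20, seed=0):
    """Toy tabu search over discrete attribute combinations.

    domains: list of lists, the value range of each attribute.
    score:   evaluation function to maximize (standing in for the
             composite multi-objective evaluation value).
    """
    rnd = random.Random(seed)
    current = tuple(rnd.choice(d) for d in domains)
    best = current
    tabu = [current]                       # tabu list of visited combinations
    for _ in range(iters):
        # neighbours: change one attribute to another value in its range
        neighbours = [
            current[:i] + (v,) + current[i + 1:]
            for i, d in enumerate(domains) for v in d
            if v != current[i]
        ]
        candidates = [n for n in neighbours if n not in tabu]
        if not candidates:
            break                          # all neighbours are tabu
        current = max(candidates, key=score)
        tabu.append(current)
        tabu = tabu[-tabu_size:]           # keep the tabu list bounded
        if score(current) > score(best):
            best = current
    return best
```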
The simulated annealing algorithm is derived from the solid annealing principle: the internal energy E is modeled as the objective function value f, and the temperature T becomes the control parameter t. Starting from an initial solution i (an initial article attribute combination) and an initial control-parameter value t, the iteration "generate new solution → compute objective-function difference → accept or discard" is repeated on the current solution while t decays gradually; the current solution when the algorithm terminates is the approximate optimal solution (the preferred article attribute combination). When a new search result is worse than the previous one, it is still accepted with a certain probability depending on the temperature, which helps jump out of local optima. The higher the temperature, the higher this acceptance probability, so a high temperature approximates random search and a low temperature approximates local search; the temperature is reduced by a fixed proportion after each round of search.
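The "generate new solution → compute objective difference → accept or discard" loop with a decaying temperature can be sketched as follows (a toy maximization; `score`, the decay rate, and the temperature bounds are illustrative assumptions):

```python
import math
import random

def simulated_annealing(domains, score, t0=1.0, decay=0.95,
                        t_min=1e-3, seed=0):
    """Toy simulated annealing over discrete attribute combinations.

    Maximizes `score`; a worse neighbour is accepted with probability
    exp(delta / t), so a high temperature behaves like random search
    and a low temperature like local search.
    """
    rnd = random.Random(seed)
    current = [rnd.choice(d) for d in domains]
    best = list(current)
    t = t0
    while t > t_min:
        # generate a new solution: perturb one randomly chosen attribute
        i = rnd.randrange(len(domains))
        candidate = list(current)
        candidate[i] = rnd.choice(domains[i])
        delta = score(candidate) - score(current)
        if delta >= 0 or rnd.random() < math.exp(delta / t):
            current = candidate            # accept (always if not worse)
        if score(current) > score(best):
            best = list(current)
        t *= decay                          # attenuate the temperature
    return best
```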
The ant colony algorithm is a probabilistic algorithm for finding an optimized path. A feasible solution of the problem to be optimized is represented by the path an ant walks, and all paths of the whole colony form the solution space of the problem (i.e., the article attribute combinations). Ants on shorter paths release a larger amount of pheromone, so as time advances the pheromone concentration accumulated on shorter paths gradually increases and more ants choose those paths. Finally, under this positive feedback, the whole colony concentrates on the best path, and the corresponding solution is the optimal solution (the preferred article attribute combination) of the problem to be optimized.
The particle swarm algorithm simulates the behavior of a bird flock randomly searching for food. In particle swarm optimization, each potential solution of the optimization problem is a bird in the search space, called a "particle". Every particle has a fitness value determined by the function to be optimized, and also a velocity that determines the direction and distance of its "flight". The algorithm is initialized with a random population of particles (initial article attribute combinations) and then searches for the optimal solution (the preferred article attribute combination) iteratively. In each iteration, a particle updates itself by tracking two extrema: the first is the best solution found by the particle itself, called the individual extremum; the second is the best solution found so far by the whole population, called the global extremum. Instead of the entire population, a part of it may be used as the particle's neighborhood, yielding a local extremum.
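A toy version of this individual-extremum/global-extremum update, sketched under the assumption of a continuous [0, 1] search space and an illustrative `score` function:

```python
import random

def particle_swarm(score, dim, n_particles=10, iters=50,
                   w=0.5, c1=1.5, c2=1.5, seed=0):
    """Toy PSO maximizing `score` over [0, 1]^dim.

    Each particle tracks its individual extremum (pbest); the swarm
    tracks the global extremum (gbest), as described above.
    """
    rnd = random.Random(seed)
    pos = [[rnd.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    gbest = max(pbest, key=score)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # velocity pulled toward both extrema
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            if score(pos[i]) > score(pbest[i]):
                pbest[i] = list(pos[i])
                if score(pbest[i]) > score(gbest):
                    gbest = list(pbest[i])
    return gbest
```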
The genetic algorithm is inspired by the laws of heredity in nature. Each population includes a number of individuals (Individual, corresponding to the article attribute combinations of this embodiment) of varying quality. During reproduction, as shown in fig. 2A and 2B, the chromosome of each individual (corresponding to the article attributes of this embodiment) undergoes operations such as Mutation and Crossover to generate a new population; then, following the survival-of-the-fittest criterion, superior individuals are selected by computing individual fitness (corresponding, in this embodiment, to evaluating with the article multi-target evaluation model and selecting the preferred article attribute combinations).
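A toy genetic loop with selection, single-point crossover, and mutation might be sketched as follows; `fitness` stands in for the composite evaluation value produced by the article multi-target evaluation model, and all hyperparameters are illustrative:

```python
import random

def genetic_search(domains, fitness, pop_size=20, generations=40,
                   mutation_rate=0.1, seed=0):
    """Toy genetic algorithm over attribute combinations.

    domains: list of lists, the value range of each attribute.
    fitness: function to maximize (standing in for the composite
             multi-objective evaluation value).
    """
    rnd = random.Random(seed)
    pop = [[rnd.choice(d) for d in domains] for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half as parents (elitist)
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rnd.sample(parents, 2)
            cut = rnd.randrange(1, len(domains))      # single-point crossover
            child = p1[:cut] + p2[cut:]
            for i, d in enumerate(domains):           # mutation
                if rnd.random() < mutation_rate:
                    child[i] = rnd.choice(d)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```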
As an example, a new item mining method based on a genetic algorithm is described in detail later with reference to fig. 3.
In step 120, a plurality of preset targets of each article attribute combination are evaluated using an article multi-target evaluation model. The article multi-target evaluation model is obtained by learning historical data of existing articles of the preset category for the plurality of preset targets; the learning process is described in detail later.
The article multi-target evaluation model is a machine learning model, specifically a multi-target learning model. In some embodiments, the article multi-target evaluation model is obtained by learning historical data of existing articles of the preset category for the preset targets based on, for example, a Multi-gate Mixture-of-Experts (MMoE) multi-target learning model, a Progressive Layered Extraction (PLE) multi-target learning model, or a Sub-Network Routing (SNR) multi-target learning model.
As an example, the multi-objective evaluation model of the article based on the multi-gated hybrid expert multi-objective learning model is described in detail with reference to fig. 4.
In some embodiments, the plurality of preset targets comprises sales volume and Unique Visitor (UV) conversion rate. The UV conversion rate is, within a statistical period, the ratio of the number of completed conversion behaviors to the total number of clicks on the promotion information. For example, if 100 users see the promotion information of an article, 10 of them click the information and jump to the target website, and 3 of them perform the conversion behavior of purchasing the article, the UV conversion rate of the promotion information is (3/10) × 100% = 30%.
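The arithmetic of the example above, as a one-line sketch (the function name is illustrative):

```python
def uv_conversion_rate(conversions, clicks):
    """UV conversion rate: completed conversions as a share of the
    total clicks on the promotion information, per the definition above."""
    return conversions / clicks

# 100 users see the promotion, 10 click through, 3 purchase:
rate = uv_conversion_rate(3, 10)   # 0.3, i.e. 30%
```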
Sales volume and UV conversion rate are two correlated targets. A traditional single-target optimization method cannot reflect the mutual influence between different targets and therefore cannot readily find an optimal combination that satisfies multiple targets at once. The new article mining method based on multi-target optimization can predict the two new-article-related tasks, sales volume and UV conversion rate, simultaneously; since the local optima of different tasks lie at different positions, the interaction between the tasks helps the search escape the local-optimum dilemma.
In some embodiments, the historical data of the existing item includes: historical sales data of the existing item, the historical sales data of the existing item comprising: intrinsic attributes and sales information of the existing item. The intrinsic properties of the article include, for example, appearance properties, functional properties, mode properties, and the like of the article. Taking a refrigerator as an example, the refrigerator has appearance attributes (such as length, width, height, panel material, color, door opening mode, liquid crystal screen and the like), function attributes (such as preservation, multi-cycle, dry-wet separate storage, intelligent application, door-in-door, ice making and the like), mode attributes (such as refrigeration mode, temperature control mode, energy efficiency grade, defrosting mode, fixed frequency/variable frequency and the like). The sales information of the article includes, for example, price, shelf-time, and the like.
In step 130, a preferred item attribute combination is determined based on the evaluation of the plurality of preset objectives for the plurality of item attribute combinations.
In some embodiments, determining the preferred article attribute combination based on the evaluation results of the plurality of preset targets of the plurality of article attribute combinations comprises: calculating a comprehensive evaluation value of the plurality of preset targets for each article attribute combination from the evaluation value of each preset target of that combination, for example by weighted summation, where the weight of each preset target can be set as needed (for example, the weight of sales volume can be set higher than the weight of UV conversion rate); and determining the preferred article attribute combinations from the plurality of article attribute combinations according to the comprehensive evaluation values, for example by selecting the several combinations with the largest comprehensive evaluation value, where the number selected can also be set as needed.
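The weighted-summation and top-k selection described above can be sketched as follows; the weights, evaluation values, and combination labels are illustrative assumptions:

```python
def composite_score(evaluations, weights):
    """Weighted sum of per-target evaluation values for one
    attribute combination."""
    return sum(w * v for w, v in zip(weights, evaluations))

def top_k(combos, scores, k):
    # keep the k combinations with the largest composite value
    ranked = sorted(zip(combos, scores), key=lambda t: t[1], reverse=True)
    return [c for c, _ in ranked[:k]]

# sales volume weighted higher than UV conversion rate, as in the text;
# evaluation values per combination are [sales, uv_rate], both illustrative
weights = [0.7, 0.3]
scores = [composite_score(e, weights)
          for e in [[0.9, 0.2], [0.4, 0.8], [0.6, 0.6]]]
best = top_k(["A", "B", "C"], scores, k=2)
```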
In step 140, a new item of the predetermined category is determined according to the preferred item attribute combination.
If none of the preferred article attribute combinations appears among the existing articles, all of them can be taken as new articles of the preset category. If some of the preferred article attribute combinations already appear among the existing articles, the combinations that do not appear are taken as the new articles of the preset category.
The embodiments of the present disclosure provide a scheme for automatically mining new articles through a machine learning model. It assists the design of new articles and reduces the labor, financial, and material cost of that design; in particular, when there are many article attributes and combinations of them, such an expensive design project can hardly be completed manually, and some article attribute combinations are easily missed. The model is a multi-target learning model that can output article attribute combinations satisfying multiple targets simultaneously; during learning, the interaction among the targets helps the search jump out of the local optimum of any single target, and because the bottom-layer features of the multi-target learning model are shared, the model parameters to be tuned and the repeated computation are reduced. In addition, article attribute combinations can be obtained more comprehensively through a heuristic search method, avoiding missed combinations and allowing more new articles to be mined. Furthermore, the new article mining method of the disclosed embodiments does not rely on time-series factors. Mining new articles with a time-series prediction method depends on characteristics, trends, and periodicity of the articles' sales in the time dimension; when a new article lacks such characteristics, errors accumulate over time even if it is predicted from historical attribute combinations.
Fig. 3 shows a flow diagram of a new item mining method based on genetic algorithms according to some embodiments of the present disclosure. As shown in fig. 3, the new item mining method based on the genetic algorithm of this embodiment includes the following steps.
In step 310, maximizing the comprehensive evaluation value of the plurality of preset targets is set as the mining objective, and the value range of each article attribute is set.
Let the set of article attributes be x = [x_1, x_2, …, x_d, …, x_D], where D is the number of article attributes and x_d is the d-th article attribute, d = 1, 2, …, D; the value of each attribute x_d lies within its value range.
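Under these definitions, an article attribute combination is one point in the Cartesian product of the D value ranges. For instance (the attribute names and value ranges below are hypothetical, taking the refrigerator example from earlier):

```python
import itertools

# Hypothetical value ranges for D = 3 refrigerator attributes
domains = {
    "color": ["white", "silver", "black"],
    "door_mode": ["single", "french"],
    "energy_grade": [1, 2, 3],
}

# An attribute combination x = [x_1, x_2, x_3] is one element of the
# Cartesian product; exhaustive enumeration is the baseline that the
# heuristic search in step 320 avoids.
all_combos = list(itertools.product(*domains.values()))
```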
At step 320, several initial article attribute combinations are randomly generated with the genetic algorithm as the initial population, each article attribute combination serving as an Individual.
In step 330, each iteration begins by evaluating the fitness of the individuals and performing selection according to their ranking; some of the better article attribute combinations are selected for mutation, crossover, and other operations, so that the article attribute combinations gradually evolve toward multi-target optimality.
Mutation means that some attribute values in an individual are randomly changed to other values within the attribute's range, as shown in fig. 2A. Crossover refers to exchanging attribute values between different individuals, as shown in fig. 2B.
The evaluation values of the plurality of preset targets of an individual are computed with the article multi-target evaluation model; these evaluation values are then weighted and summed into a comprehensive evaluation value of the plurality of preset targets, which represents the fitness of the individual: the larger an individual's comprehensive evaluation value, the larger its fitness.
In step 340, it is determined whether the search stop condition is satisfied. If the search stop condition is not satisfied, return to step 320 and continue the iteration.
The search stop condition is, for example: the number of population iterations reaches a preset number, or no better article attribute combination has appeared within a certain number of iterations.
In step 350, if the search stop condition is satisfied, the k multi-objective-optimal (i.e., top-k) item attribute combinations are output as the design solutions of the mined new items, where k is a configurable number of new-item design solutions to output.
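The steps above can be sketched as a minimal genetic-algorithm loop. This is an illustration only: the `evaluate()` function below is a toy stand-in for the item multi-target evaluation model, and the attribute ranges, target weights, population size, and generation count are hypothetical values, not parameters from this disclosure.

```python
import random

# Value ranges for D = 3 hypothetical item attributes (illustrative only).
RANGES = [list(range(10)), list(range(5)), list(range(8))]

def evaluate(individual):
    # Stand-in for the item multi-target evaluation model: the fitness is the
    # weighted sum (composite evaluation value) of two preset targets.
    t1 = sum(individual)        # e.g. predicted sales volume
    t2 = individual[0] * 0.5    # e.g. predicted UV conversion rate
    return 0.7 * t1 + 0.3 * t2  # weights 0.7 / 0.3 are assumed

def mutate(ind):
    # Mutation: randomly change one attribute to another legal value.
    d = random.randrange(len(ind))
    ind = ind[:]
    ind[d] = random.choice(RANGES[d])
    return ind

def crossover(a, b):
    # Crossover: exchange attribute values between two individuals.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mine_new_items(pop_size=20, generations=30, k=3, seed=0):
    random.seed(seed)
    # Step 320: random initial population of attribute combinations.
    pop = [[random.choice(r) for r in RANGES] for _ in range(pop_size)]
    for _ in range(generations):
        # Step 330: selection by fitness, then crossover and mutation.
        pop.sort(key=evaluate, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    # Step 350: output the top-k attribute combinations.
    pop.sort(key=evaluate, reverse=True)
    return pop[:k]

top = mine_new_items()
```

Each returned combination stays inside the preset attribute value ranges, and the list is ordered from highest to lowest composite evaluation value.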
In this way, new articles are mined based on the genetic algorithm and the article multi-target evaluation model; novel article attribute combinations can be generated through the mutation and crossover operations of the genetic algorithm, providing design solutions for new articles.
Fig. 4 is a schematic diagram of an item multi-objective assessment model obtained based on a multi-gated hybrid expert multi-objective learning model according to some embodiments of the present disclosure.
As shown in fig. 4, in the case that the article multi-objective evaluation model is obtained based on a multi-gated hybrid expert multi-objective learning model, the article multi-objective evaluation model includes an input layer, a multi-gated hybrid expert layer, a tower layer, and an output layer, which are sequentially cascaded.
The Input Layer is used to instantiate the data input to this layer as a tensor whose dimensionality is the number of features d of the input data. The input layer may be implemented, for example, based on the Input() function of a deep learning library (e.g., Keras).
The Multi-gate Mixture-of-Experts Layer (MMoE Layer) includes a plurality of feed-forward neural networks, called Experts (expert feedforward networks), connected to the input layer, and a plurality of gating networks connected to the input layer and to the experts. Different experts produce different outputs for the same input features, and each gating network performs a weighted summation over the experts' outputs. The number of gating networks equals the number of preset targets, each gating network corresponding to one preset target; different gating networks may apply different weights when summing the experts' outputs. Each gating network comprises a Gate connected to the input layer and a weighted-summation calculator; the calculator performs the weighted summation of the experts' outputs according to the weight information provided by the gate of that gating network. Example parameter settings for this layer: the number of hidden units per expert h = 32, the number of experts n = 16, and the number of tasks k = 2. These parameters can be adjusted to actual conditions and are not limited to the illustrated values. Each task corresponds to one preset target; with 2 preset targets, the number of tasks is 2.
The Tower Layer comprises a plurality of sub-tower layers, the number of which equals the number of preset targets, each sub-tower layer corresponding to one preset target. Each sub-tower layer is a fully connected neural network connected to one gating network and takes the output of that gating network as input. The number of hidden units in this layer is, for example, 16, though not limited to this value. Tower Layer 1 and Tower Layer 2 in the figure are two sub-tower layers.
The Output Layer comprises a plurality of sub-output layers, the number of which equals the number of preset targets, each sub-output layer corresponding to one preset target. Each sub-output layer is a fully connected neural network connected to one sub-tower layer; it takes the output of the corresponding sub-tower layer as input and outputs the prediction result for the corresponding preset target. The number of hidden units in this layer is, for example, 1. Output Layer 1 and Output Layer 2 in the figure are two sub-output layers.
With this article multi-target evaluation model, the input layer and the expert feedforward networks are shared across the learning tasks, which reduces the number of model parameters to adjust and avoids repeated computation.
Before learning, an activation function of the multi-objective assessment model of the item may be set. For example, the activation function of each expert is set as the ReLU function, and the activation function of each gate is set as the Softmax function; setting the activation function of each sub-tower layer as a ReLU function; and setting the activation function of each sub-output layer as a Linear function.
Before learning, the item multi-objective assessment model may be initialized. For example, the weights and biases for each expert and each gate are initialized with a uniformly distributed initializer; initializing the weight of the fully-connected neural network of each sub-tower layer by using a normal distribution initializer; and initializing the weight of the fully-connected neural network of each sub-output layer by using a normal distribution initializer.
The operation of the multi-objective assessment model for the object is described below.
The input layer instantiates the input data as a tensor whose dimensionality is the number of features d of the input data.
The multi-gate mixture-of-experts layer takes the tensor output by the input layer as input x and, for each task, performs a weighted summation of the experts' outputs according to the weight information provided by the corresponding gate, yielding the output for the k-th task:

f_k(x) = Σ_i g_k(x)_i · f_i(x),

where f_i(x) is the output of the i-th expert feedforward network, g_k(x)_i is the weight that the k-th task's gate assigns to the i-th expert, and x is the input of the multi-gate mixture-of-experts layer, i.e., the output of the input layer. For example, k = 1, 2 and i = 1, 2, 3, …, 16. The activation function of each expert feedforward network is set to the ReLU function to speed up computation and convergence, so that

f_i(x) = max{0, w_i x + b},

where w_i is the weight matrix of the i-th expert feedforward network, determined by training, with dimensions (number of input features d) × (number of hidden units h) × (number of experts n), and b is the bias matrix of the expert feedforward networks, also determined by training, with dimensions (number of hidden units h) × (number of experts n). The activation function of each gate is set to the Softmax function, which converts the gate's raw outputs into non-negative numbers summing to 1:

g_k(x)_i = exp(x_i) / Σ_j exp(x_j),

where x_i is the gate's output for the i-th expert feedforward network before activation. The gate's pre-activation outputs can be written collectively as {x_i} = w_gk · x + b, where w_gk is the weight matrix of the gate for the k-th task, determined by training, with dimensions (number of input features d) × (number of experts n) × (number of tasks k), and b is the gate bias matrix, with dimensions (number of experts n) × (number of tasks k). In addition, before model training, the weights and biases of each expert feedforward network and each gate may be initialized with a uniformly distributed initializer, with no regularization terms or constraints set.
In the tower layer, the sub-tower layer of each task takes as input x the output f_k(x) of the corresponding task from the multi-gate mixture-of-experts layer. The fully connected neural network of this layer is weight-initialized with a normal distribution initializer and its activation function is set to the ReLU function, so the tower layer's output for the k-th task is

h_k = max{0, w · f_k(x) + b},

where w and b are the weight matrix and bias matrix of the k-th task's sub-tower layer, respectively.
In the output layer, the sub-output layer of each task takes as input the output of the corresponding task's sub-tower layer. The fully connected neural network of this layer is weight-initialized with a normal distribution initializer, and its activation function defaults to the Linear function, giving the final output y_k = linear(h_k); the single hidden unit of this layer outputs one value.
It should be noted that the inputs of the multi-gate mixture-of-experts layer and of the tower layer are each the output of the layer before them; although both are denoted x, the two have different meanings: the x of the mixture-of-experts layer is the output of the input layer, while the x of the tower layer is the output of the mixture-of-experts layer.
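The forward computation described above can be sketched in plain NumPy. This is a hedged illustration, not the disclosure's implementation: the sizes d = 8, h = 32, n = 16, k = 2 follow the example parameters in the text, but the weights here are random rather than trained, and no deep learning library is used.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n, k = 8, 32, 16, 2   # input features, hidden units, experts, tasks

# Expert feedforward networks: f_i(x) = ReLU(W_i x + b_i)
W_exp = rng.normal(size=(n, h, d)) * 0.1
b_exp = np.zeros((n, h))

# One gate per task: g_k(x) = softmax(W_gk x + b_gk), one weight per expert
W_gate = rng.normal(size=(k, n, d)) * 0.1
b_gate = np.zeros((k, n))

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    # Non-negative weights that sum to 1
    e = np.exp(z - z.max())
    return e / e.sum()

def mmoe_forward(x):
    # All experts are shared across tasks.
    experts = np.stack([relu(W_exp[i] @ x + b_exp[i]) for i in range(n)])  # (n, h)
    outputs = []
    for t in range(k):
        g = softmax(W_gate[t] @ x + b_gate[t])              # (n,) gate weights
        outputs.append((g[:, None] * experts).sum(axis=0))  # f_k(x) = sum_i g_k(x)_i f_i(x)
    return outputs                                          # one (h,) vector per task

x = rng.normal(size=d)
task_outputs = mmoe_forward(x)
```

Each task's output would then feed its own sub-tower layer and sub-output layer; those are simple fully connected layers and are omitted here.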
FIG. 5 illustrates a schematic diagram of a learning method for an item multi-objective assessment model, according to some embodiments of the present disclosure. As shown in fig. 5, the learning method of the multi-target evaluation model for an article of this embodiment includes the following steps.
At step 510, historical data, such as historical sales data, of existing items of a predetermined category is obtained.
Taking the refrigerator category as an example, the appearance attributes (such as length, width and height, panel material, color, door opening mode, liquid crystal screen and the like), the function attributes (such as fresh keeping, multi-cycle, dry-wet separate storage, intelligent application, door-in-door, ice making and the like), the mode attributes (such as refrigeration mode, temperature control mode, energy efficiency grade, defrosting mode, fixed frequency/variable frequency and the like), and the sales information (such as price on shelf, time on shelf and the like) of various existing refrigerators are obtained.
The historical data includes, for example, numerical discrete feature data such as depth, width, height, etc., numerical continuous feature data such as price on shelf, etc., and non-numerical feature data such as brand, color, panel material, etc.
At step 520, the historical data is preprocessed, including data processing and feature engineering, to convert the noisy raw data set into a data set that can be used to train the model.
In some embodiments, the pre-processing of the historical data includes, for example, but is not limited to, the following:
1) data that cannot be used to predict the pre-set target (e.g., sales, UV conversion, etc.) of the new item, such as order size, click through rate, etc., are discarded.
2) The time difference between the shelf date and the sale date of each commodity is calculated.
3) According to the time difference, the sales of each SKU (Stock Keeping Unit) in the days to be predicted in the future are summed.
4) The date of sale is removed.
5) Duplicate rows of data are removed.
6) New features are generated. For example, three new date features (e.g., year, month, and day) are derived, and the average sales volume and average price of each brand are computed from the brand's historical sales and prices, characterizing the brand's value, market share, and the like.
7) Outliers in the data are handled. For example, the median UV conversion rate for the article's brand is computed and used to replace that brand's outliers.
8) Missing values of a feature are filled with the value of that feature that occurs most frequently for the article.
9) Min-Max normalization is applied to the numerical discrete features, eliminating the influence of differing units and scales between features so that the search for the optimal solution becomes smooth and easy to converge. The Min-Max normalization is

x' = (x − min) / (max − min),

where min is the minimum value in the set of x and max is the maximum value in the set of x.
10) Z-Score normalization is applied to the numerical continuous features, likewise eliminating the influence of differing units and scales between features so that the search for the optimal solution becomes smooth and easy to converge. The Z-Score normalization is

x' = (x − μ) / σ,

where μ is the mean of the set of x and σ is the standard deviation of the set of x.
11) One-Hot encoding is applied to the non-numerical features, converting them into a form suited to model learning. For each feature with m possible values, One-Hot encoding produces m mutually exclusive binary features, exactly one of which is active at a time.
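The three transformations of steps 9) to 11) can be sketched as follows. The sample values (refrigerator depths, prices, colors) are hypothetical and only illustrate the shapes of the transforms.

```python
import numpy as np

def min_max(x):
    # Min-Max normalization: x' = (x - min) / (max - min)
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    # Z-Score normalization: x' = (x - mu) / sigma
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def one_hot(values):
    # One-Hot encoding: m possible values -> m mutually exclusive binary features
    categories = sorted(set(values))
    return np.array([[1 if v == c else 0 for c in categories] for v in values])

depth = min_max([55, 60, 65, 70])                       # numerical discrete feature
price = z_score([1999.0, 2999.0, 3999.0, 4999.0])       # numerical continuous feature
color = one_hot(["white", "black", "white", "silver"])  # non-numerical feature
```

After these transforms the numerical features share a comparable scale, and each categorical value becomes one active bit among m binary columns.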
In step 530, the preprocessed data set is partitioned into a training data set, a verification data set, and a test data set, for example by the following steps.
1) The order of the data set is shuffled to enhance the generalization capability of the model.
2) The shuffled data set is divided into a training set, a verification set, and a test set according to a certain ratio (e.g., 0.7 : 0.15 : 0.15).
3) The feature data and target data in the three sets are split apart, forming a training data set (training feature data and training target data), a verification data set (verification feature data and verification target data), and a test data set (test feature data and test target data).
Thus, a training data set, a verification data set, and a test data set are determined based on historical data of existing articles of the predetermined category.
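A minimal sketch of the shuffle-and-split procedure follows; the 0.7 : 0.15 : 0.15 ratio comes from the example above, while the array contents and the fixed seed are illustrative assumptions.

```python
import numpy as np

def split_dataset(features, targets, ratios=(0.7, 0.15, 0.15), seed=0):
    # Shuffle rows, then split into train / validation / test sets,
    # keeping feature data and target data aligned row by row.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    features, targets = features[idx], targets[idx]
    n_train = int(ratios[0] * len(features))
    n_val = int(ratios[1] * len(features))
    return ((features[:n_train], targets[:n_train]),
            (features[n_train:n_train + n_val], targets[n_train:n_train + n_val]),
            (features[n_train + n_val:], targets[n_train + n_val:]))

# Toy data: row i of X is [2i, 2i+1] and its target is i.
X = np.arange(200).reshape(100, 2).astype(float)
y = np.arange(100).astype(float)
train, val, test = split_dataset(X, y)
```

Shuffling before splitting is what removes ordering bias; shuffling features and targets with the same permutation is what keeps each record aligned with its true target values.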
At step 540, the multi-objective assessment model of the item is trained using the training data set.
In some embodiments, the training feature data in the training data set are input into the article multi-target evaluation model; the total loss over the plurality of preset targets is determined from the difference between the predicted value of each preset target output by the model and the true value of that target in the training data set, and the model is trained according to this total loss. Through training, the model parameters are continuously adjusted so that the total loss keeps decreasing.
For example, the total loss function over the plurality of preset targets is the Mean Absolute Error (MAE):

MAE = (1/M) Σ_{m=1..M} Σ_i |y_mi − ŷ_mi|,

where y_mi is the true value of the i-th target in the m-th item data record, ŷ_mi is the corresponding predicted value, 1 ≤ m ≤ M, and M is the number of item data records in the training data set. Compared with the Mean Squared Error (MSE), determining the loss with the MAE avoids the loss rising continually in the early stage of training, giving the model better learning performance.
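The total loss can be computed as a short NumPy function; the two records and two targets below (sales volume and UV conversion rate) are made-up numbers for illustration.

```python
import numpy as np

def total_mae(y_true, y_pred):
    # Total loss over all preset targets:
    # MAE = (1/M) * sum_m sum_i |y_mi - yhat_mi|
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.abs(y_true - y_pred).sum(axis=1).mean()

# M = 2 item records, 2 preset targets per record (e.g. sales, UV conversion rate)
loss = total_mae([[10.0, 0.2], [20.0, 0.4]],
                 [[12.0, 0.1], [19.0, 0.4]])
```

Here the per-record absolute errors are 2.1 and 1.0, so the total loss is their mean, 1.55.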
The optimizer of the model is, for example, the Adam optimizer, which is suitable for unstable objective functions and for scenarios with large-scale data and parameters.
The learning rate is set to an exponentially decaying learning rate:

lr = lr_0 · decay_rate^(global_step / decay_steps),

where lr is the learning rate, lr_0 its initial value, decay_rate the decay rate of the learning rate in the range (0, 1), global_step the number of rounds run so far, and decay_steps the number of rounds after which the learning rate decays once. This lets the model descend quickly in the early stage of training and gradually slow down as training proceeds, preventing the loss value from oscillating without converging. Other training parameters are also set; for example, the number of training rounds (epochs) is 200 and the batch size of each training batch is 256. After the parameters are set, model training can be started through the Hyperopt tool.
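The decay schedule is a one-line function; the initial rate 0.01, decay rate 0.9, and decay_steps = 100 below are illustrative values, not settings from this disclosure.

```python
def exp_decay_lr(lr0, decay_rate, global_step, decay_steps):
    # Exponentially decaying learning rate:
    # lr = lr0 * decay_rate ** (global_step / decay_steps)
    return lr0 * decay_rate ** (global_step / decay_steps)

start = exp_decay_lr(0.01, 0.9, 0, 100)    # before training: full rate
later = exp_decay_lr(0.01, 0.9, 200, 100)  # after two decay periods
```

With decay_rate in (0, 1) the rate shrinks monotonically, so gradient steps are large early in training and small later.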
At step 550, the multi-objective assessment model of the item is validated using the validation dataset.
In some embodiments, after each round of training, the article multi-target evaluation model is verified with the verification data set, and the training effect is evaluated with a preset evaluation function according to the verification result. If the evaluation result indicates that the model improved in the current round of training, the next round of training proceeds; if the model has not improved within a preset number of rounds, training stops.
The evaluation index can be set to the Weighted Mean Absolute Percentage Error (WMAPE) function:

WMAPE = Σ_n Σ_i |y_ni − ŷ_ni| / Σ_n Σ_i |y_ni|,

where y_ni is the true value of the i-th target in the n-th item data record of the verification data set, and ŷ_ni is the corresponding predicted value. Compared with evaluating with MAPE, evaluating with WMAPE reduces the influence of differences in the order of magnitude of the true values y and avoids the incomputable case of a zero denominator, making it more convincing for sales volume and UV conversion rate prediction.
Furthermore, an early-stopping criterion on the verification data set may be set: if no improvement occurs within a preset number of rounds (e.g., 50 rounds), training may be stopped early.
After training finishes, the model and parameters under the best verification result are saved, so that the best-verified model can be loaded directly at prediction time.
In step 560, the multi-objective item assessment model is tested using the test data set, and the multi-objective item assessment model that passes the test is used for new item mining.
During testing, the total loss function MAE over the plurality of preset targets can be used to evaluate the model's prediction loss; if the MAE is small enough to meet the business requirement, the article multi-target evaluation model passes the test and can be used for mining new articles.
In this way, the article multi-target evaluation model is obtained by learning the historical data of existing articles in the preset category for the plurality of preset targets.
Fig. 6 illustrates a schematic view of a new article mining apparatus according to some embodiments of the present disclosure. As shown in fig. 6, the new article mining apparatus 600 of this embodiment includes: a memory 610 and a processor 620 coupled to the memory 610, the processor 620 configured to perform the new item mining method of any of the foregoing embodiments based on instructions stored in the memory 610.
For example, a plurality of article attribute combinations of a preset article class are obtained; evaluating a plurality of preset targets of each article attribute combination by using an article multi-target evaluation model, wherein the article multi-target evaluation model is obtained by learning historical data of existing articles of preset categories according to the plurality of preset targets; determining a preferred article attribute combination according to the evaluation results of a plurality of preset targets of a plurality of article attribute combinations; and determining the new articles of the preset categories according to the preferred article attribute combination. And searching the article attributes of the preset categories by using a heuristic search method to obtain a plurality of article attribute combinations of the preset categories. The article multi-target evaluation model is obtained by learning historical data of existing articles of the preset category aiming at a plurality of preset targets based on a multi-gate control hybrid expert multi-target learning model, a progressive hierarchical extraction multi-target learning model or a subnet routing multi-target learning model.
Memory 610 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
The Processor 620 may be implemented as discrete hardware components such as a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), other Programmable logic devices, discrete gates, or transistors.
The apparatus 600 may also include an input-output interface 630, a network interface 640, a storage interface 650, and the like. These interfaces 630, 640, 650 and the connections between the memory 610 and the processor 620 may be, for example, via a bus 660. The input/output interface 630 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 640 provides a connection interface for various networking devices. The storage interface 650 provides a connection interface for external storage devices such as an SD card and a usb disk. The bus 660 may use any of a variety of bus architectures. For example, bus structures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, and a Peripheral Component Interconnect (PCI) bus.
Fig. 7 shows a schematic view of a new article mining apparatus according to further embodiments of the present disclosure. As shown in fig. 7, the new article mining apparatus 700 of this embodiment includes units 710 to 740, and may further include a unit 750.
The attribute combination obtaining unit 710 is configured to obtain a plurality of item attribute combinations of a preset category, for example by searching the item attributes of the preset category with a heuristic search method. The heuristic search method includes, for example: a genetic algorithm, particle swarm algorithm, ant colony algorithm, tabu search algorithm, or simulated annealing algorithm.
The evaluation unit 720 is configured to evaluate a plurality of preset targets of each item attribute combination by using an item multi-target evaluation model, wherein the item multi-target evaluation model is obtained by learning historical data of existing items of the preset category according to the plurality of preset targets.
A preferred attribute combination determination unit 730 configured to determine a preferred item attribute combination according to evaluation results of a plurality of preset targets of a plurality of item attribute combinations. For example, according to the evaluation value of each preset target of each article attribute combination, calculating a comprehensive evaluation value of a plurality of preset targets of each article attribute combination; and determining a preferred article attribute combination from the plurality of article attribute combinations according to the comprehensive evaluation value of each article attribute combination.
A new item determining unit 740 configured to determine a new item of the preset category according to the preferred item attribute combination.
The model learning unit 750 is configured to learn historical data of existing articles of the preset category aiming at the multiple preset targets to obtain the article multi-target evaluation model based on a multi-gating hybrid expert multi-target learning model, a progressive hierarchical extraction multi-target learning model or a subnet routing multi-target learning model.
Some embodiments of the present disclosure provide a non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of the new item mining method of the embodiments.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (17)

1. A method of excavating a new object, comprising:
acquiring a plurality of article attribute combinations of a preset category;
evaluating a plurality of preset targets of each article attribute combination by using an article multi-target evaluation model, wherein the article multi-target evaluation model is obtained by learning historical data of existing articles of preset categories according to the plurality of preset targets;
determining a preferred item attribute combination according to the evaluation results of a plurality of preset targets of a plurality of item attribute combinations;
and determining the new articles of the preset categories according to the preferred article attribute combination.
2. The method of claim 1, wherein obtaining a plurality of article attribute combinations of a predetermined category comprises:
and searching the article attributes of the preset category by using a heuristic search method to obtain a plurality of article attribute combinations of the preset category.
3. The method according to claim 1, wherein the object multi-target evaluation model is obtained by learning historical data of existing objects of the preset category aiming at the preset targets based on a multi-gating hybrid expert multi-target learning model, a progressive hierarchical extraction multi-target learning model or a subnet routing multi-target learning model.
4. The method according to claim 3, wherein in the case where the commodity multi-objective assessment model is obtained based on a multi-gated hybrid expert multi-objective learning model, the commodity multi-objective assessment model comprises an input layer, a multi-gated hybrid expert layer, a tower layer, and an output layer, which are cascaded in sequence, wherein:
the multi-gated hybrid expert layer comprises a plurality of feedforward neural networks which are called experts and connected with the input layer, and a plurality of gated networks which are connected with the input layer and the experts, wherein each gated network comprises a gate connected with the input layer and a weighted summation calculator, the weighted summation calculator in each gated network carries out weighted summation operation on the output result of each expert according to weight information provided by the gate in the gated network, and each gated network corresponds to a preset target;
the tower layer comprises a plurality of sub-tower layers, and each sub-tower layer is in full connection with a neural network and is connected with a gate control network;
the output layer comprises a plurality of sub-output layers, and each sub-output layer is a fully connected neural network and is connected with one sub-tower layer.
5. The method of claim 4, further comprising:
performing one or more of the following operations prior to learning:
initializing the weight and deviation of each expert and each gate control by using a uniform distribution initializer; initializing the weight of the fully-connected neural network of each sub-tower layer by using a normal distribution initializer; and initializing the weight of the fully-connected neural network of each sub-output layer by using a normal distribution initializer.
6. The method of claim 1, wherein learning historical data of existing items of the predetermined category for the plurality of predetermined objectives to obtain the item multi-objective assessment model comprises:
determining a training data set according to historical data of existing articles of the preset article type, inputting training characteristic data in the training data set into an article multi-target evaluation model, determining total loss of a plurality of preset targets according to difference information between a predicted value of each preset target output by the article multi-target evaluation model and a true value of each preset target in the training data set, and training the article multi-target evaluation model according to the total loss of the plurality of preset targets.
7. The method of claim 6, wherein learning historical data of existing items of the predetermined category for the plurality of predetermined goals to obtain the item multi-goal assessment model comprises:
determining a validation data set from the historical data of existing articles of the preset category; after each epoch of training the article multi-target evaluation model, validating the model with the validation data set; and evaluating the training effect from the validation result using a preset evaluation function.
8. The method of claim 7, wherein learning historical data of existing items of the predetermined category for the plurality of predetermined goals to obtain the item multi-goal evaluation model comprises:
determining a test data set from the historical data of existing articles of the preset category; testing the article multi-target evaluation model with the test data set; and using an article multi-target evaluation model that passes the test for mining new articles.
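Claims 6-8 together derive three data sets from one pool of historical records. A minimal sketch of such a split; the 80/10/10 ratio is an illustrative assumption, as the claims only require that all three sets come from the historical data of the preset category:

```python
def split_history(records, train_frac=0.8, val_frac=0.1):
    """Split historical sales records into train/validation/test sets.

    The 80/10/10 ratio is an assumption for the example; any shuffling or
    time-based ordering of `records` is left to the caller.
    """
    n = len(records)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (records[:n_train],
            records[n_train:n_train + n_val],
            records[n_train + n_val:])
```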
9. The method of claim 1, wherein determining a preferred item attribute combination based on the evaluation of the plurality of preset goals for the plurality of item attribute combinations comprises:
calculating a comprehensive evaluation value over the plurality of preset targets for each article attribute combination according to the evaluation value of each preset target of that combination; and
determining the preferred article attribute combination from the plurality of article attribute combinations according to the comprehensive evaluation value of each combination.
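Claim 9's aggregation can be sketched as a weighted sum of per-target evaluation values; the weighted-sum form and the default equal weights are assumptions, since the claim does not fix the aggregation formula:

```python
def composite_score(target_scores, weights=None):
    """Combine per-target evaluation values into one comprehensive value.

    target_scores: dict mapping target name -> evaluation value.
    weights: optional per-target weights; equal weights by default
    (an assumption made for the example).
    """
    if weights is None:
        weights = {t: 1.0 / len(target_scores) for t in target_scores}
    return sum(weights[t] * v for t, v in target_scores.items())

def pick_preferred(combo_scores):
    """combo_scores: dict combo_id -> {target: score}; returns the combo
    with the highest comprehensive evaluation value."""
    return max(combo_scores, key=lambda c: composite_score(combo_scores[c]))
```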
10. The method of claim 2, wherein the heuristic search method comprises: a genetic algorithm, a particle swarm algorithm, an ant colony algorithm, a tabu search algorithm, or a simulated annealing algorithm.
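As one of the heuristics listed above, a toy genetic algorithm over a hypothetical attribute space might look like the following. The attribute names and values and the stand-in fitness function are illustrative assumptions; in the patented method, the fitness of a combination would come from the article multi-target evaluation model's comprehensive evaluation value:

```python
import random

random.seed(7)

# Hypothetical attribute space for a preset category (illustrative only).
ATTRS = {"color": ["red", "blue", "black"],
         "size": ["S", "M", "L"],
         "material": ["cotton", "linen"]}

def fitness(combo):
    # Stand-in for the multi-target model's comprehensive evaluation value.
    return sum(ATTRS[k].index(v) for k, v in combo.items())

def random_combo():
    return {k: random.choice(v) for k, v in ATTRS.items()}

def mutate(combo):
    # Re-draw one randomly chosen attribute.
    k = random.choice(list(ATTRS))
    return {**combo, k: random.choice(ATTRS[k])}

def crossover(a, b):
    # Take each attribute value from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in ATTRS}

def genetic_search(pop_size=12, generations=20):
    pop = [random_combo() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
```

The other listed heuristics (particle swarm, ant colony, tabu search, simulated annealing) would plug into the same loop shape: propose candidate attribute combinations, score them with the evaluation model, and bias the next proposals toward high scorers.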
11. The method of claim 1, wherein the plurality of preset goals comprises: sales volume and unique visitor conversion rate.
12. The method of claim 1, wherein the historical data of the existing item comprises: historical sales data of the existing item, the historical sales data of the existing item comprising: intrinsic attributes and sales information of the existing item.
13. A new article mining apparatus, comprising:
an attribute combination acquisition unit configured to acquire a plurality of article attribute combinations of a preset category;
an evaluation unit configured to evaluate a plurality of preset targets of each article attribute combination using an article multi-target evaluation model, the article multi-target evaluation model being obtained by learning, for the plurality of preset targets, historical data of existing articles of the preset category;
a preferred attribute combination determination unit configured to determine a preferred article attribute combination according to evaluation results of the plurality of preset targets of the plurality of article attribute combinations; and
a new article determination unit configured to determine a new article of the preset category according to the preferred article attribute combination.
14. The apparatus of claim 13, further comprising:
a model learning unit configured to learn, for the plurality of preset targets, historical data of existing articles of the preset category to obtain the article multi-target evaluation model, based on a multi-gated hybrid expert multi-target learning model, a progressive hierarchical extraction multi-target learning model, or a subnet routing multi-target learning model.
15. The apparatus of claim 13, wherein the attribute combination acquisition unit is configured to search the article attributes of the preset category using a heuristic search method to obtain the plurality of article attribute combinations of the preset category.
16. A new article mining device, comprising:
a memory; and a processor coupled to the memory, the processor being configured to perform the new article mining method of any one of claims 1-12 based on instructions stored in the memory.
17. A non-transitory computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the steps of the new article mining method of any one of claims 1-12.
CN202210384684.4A 2022-04-13 2022-04-13 New article mining method and device and storage medium Pending CN115034803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210384684.4A CN115034803A (en) 2022-04-13 2022-04-13 New article mining method and device and storage medium


Publications (1)

Publication Number Publication Date
CN115034803A true CN115034803A (en) 2022-09-09

Family

ID=83119249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210384684.4A Pending CN115034803A (en) 2022-04-13 2022-04-13 New article mining method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115034803A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103782898A (en) * 2014-02-25 2014-05-14 Jiangsu Yanjiang Area Institute of Agricultural Sciences Method for culturing new variety of Zixiangruan rice
CN105488696A (en) * 2015-12-02 2016-04-13 Zhejiang A&F University Multi-target product material design method
US20160260110A1 (en) * 2015-03-04 2016-09-08 Wal-Mart Stores, Inc. System and method for predicting the sales behavior of a new item
US20160321716A1 (en) * 2015-04-30 2016-11-03 Wal-Mart Stores, Inc. System, method, and non-transitory computer-readable storage media for enhancing online product search through multiobjective optimization of product search ranking functions
CN112270570A (en) * 2020-11-03 2021-01-26 Chongqing University of Posts and Telecommunications Click conversion rate prediction method based on feature combination and representation learning
US20210182600A1 (en) * 2019-12-16 2021-06-17 NEC Laboratories Europe GmbH Measuring relatedness between prediction tasks in artificial intelligence and continual learning systems
CN113763024A (en) * 2021-03-19 2021-12-07 Beijing Wodong Tianjun Information Technology Co., Ltd. Article attribute mining method, apparatus and storage medium
WO2021254114A1 (en) * 2020-06-17 2021-12-23 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for constructing multitask learning model, electronic device and storage medium
CN114065015A (en) * 2020-07-31 2022-02-18 Alibaba Group Holding Ltd. Search recommendation method, device and equipment
CN114120045A (en) * 2022-01-25 2022-03-01 Beijing Maomao Gougou Technology Co., Ltd. Target detection method and device based on multi-gate control hybrid expert model


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ZHANG, Z. ET AL: "Gene feature selection algorithm based on discrete particle swarm optimization and neighborhood reduction", Computer Engineering, vol. 42, no. 3, 30 September 2016 (2016-09-30), pages 188-191 *
WU, JUNHUA: "Research and practice on digitalizing and making intelligent the new product supply plan", Supply Chain Management, vol. 2, no. 11, 8 November 2021 (2021-11-08), pages 112-128 *
ZHIHU: "Application of the multi-category MoE model in JD e-commerce search", Retrieved from the Internet <URL:http://zhuanlan.zhihu.com/p/406057536?utm_id=0> *
ZHIHU: "Recommender systems (16), learning from industry practice: multi-objective models", Retrieved from the Internet <URL:http://zhuanlan.zhihu.com/p/493164774> *
NETEASE: "JD Xiaomofang builds a full-chain 'new product excavator'", Retrieved from the Internet <URL:http://www.163.com/dy/article/GOTT5IB30512B07B.html> *
CHEN, GUODONG ET AL: "Multi-objective optimization of product form for composite imagery", China Mechanical Engineering, no. 20, 23 January 2015 (2015-01-23), pages 69-76 *

Similar Documents

Publication Publication Date Title
US11100387B2 (en) Systems and methods for learning and predicting transactions
Phaladisailoed et al. Machine learning models comparison for bitcoin price prediction
KR102693928B1 (en) Automated evaluation of project acceleration
CA3131688A1 (en) Process and system including an optimization engine with evolutionary surrogate-assisted prescriptions
WO2016183391A1 (en) System, method and computer-accessible medium for making a prediction from market data
US11087344B2 (en) Method and system for predicting and indexing real estate demand and pricing
Li et al. Heterogeneous ensemble learning with feature engineering for default prediction in peer-to-peer lending in China
Faritha Banu et al. Artificial intelligence based customer churn prediction model for business markets
Pandey et al. Gold and diamond price prediction using enhanced ensemble learning
Cortez et al. Multi-step time series prediction intervals using neuroevolution
Bhattacharya et al. Credit risk evaluation: a comprehensive study
Popchev et al. Algorithms for Machine Learning with Orange System.
Ramachandra et al. Machine learning application for black friday sales prediction framework
US11468352B2 (en) Method and system for predictive modeling of geographic income distribution
CN114463994B (en) Traffic flow prediction parallel method based on chaos and reinforcement learning
CN116703607A (en) Financial time sequence prediction method and system based on diffusion model
CN115034803A (en) New article mining method and device and storage medium
Srinivasarao et al. A Novel Hybrid Optimization Algorithm for Materialized View Selection from Data Warehouse Environments.
Jackson et al. Automl approach to classification of candidate solutions for simulation models of logistic systems
Ngo Stacking Ensemble for auto_ml
Mastelini et al. Online multi-target regression trees with stacked leaf models
Nalabala et al. Financial predictions based on fusion models-a systematic review
Allende Application of Shallow Neural Networks to Retail Intermittent Demand Time Series
Ali Meta-level learning for the effective reduction of model search space.
Qiao et al. Hierarchical accounting variables forecasting by deep learning methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination