CN117455366A - Inventory allocation method, device, medium and equipment based on neural network model - Google Patents


Info

Publication number
CN117455366A
Authority
CN
China
Prior art keywords
target
neural network
network model
order
supply
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311757785.2A
Other languages
Chinese (zh)
Inventor
王书为
郑玮
蒋能学
徐可
王成林
马雨浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Netease Cloud Music Technology Co Ltd
Original Assignee
Hangzhou Netease Cloud Music Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Netease Cloud Music Technology Co Ltd filed Critical Hangzhou Netease Cloud Music Technology Co Ltd
Priority to CN202311757785.2A priority Critical patent/CN117455366A/en
Publication of CN117455366A publication Critical patent/CN117455366A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087 Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Artificial Intelligence (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the disclosure relates to an inventory allocation method, device, medium and equipment based on a neural network model, and relates to the technical field of computers or data processing. The method comprises the following steps: in response to successful placement of a target order, determining a first number of target demand nodes corresponding to the target order according to the targeting conditions of the target order; the target order is a purchase credential formed by purchasing the target object; determining a second number of target supply nodes according to the supply channels of the target orders in a supply platform for providing the target objects for the target orders; inputting the first quantity and the second quantity into the trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node, and distributing the target object according to the distribution proportion value. Therefore, the probability of obtaining the allocation scheme can be increased and the applicable scene of the allocation scheme can be expanded by using the neural network model to fit the allocation proportion.

Description

Inventory allocation method, device, medium and equipment based on neural network model
Technical Field
Embodiments of the present disclosure relate to the field of computer technology or the field of data processing technology, and more particularly, to a neural network model-based inventory allocation method, a neural network model-based inventory allocation device, a computer-readable storage medium, and an electronic apparatus.
Background
For a stable and efficient supply platform, inventory allocation needs to satisfy the conditions of each party as far as possible: on one hand, the allocation proportion of each order on each channel needs to be determined; on the other hand, the allocation proportion of each order needs to meet numerous conditions such as maximizing the order completion rate and achieving optimal traffic allocation. However, current allocation methods do not help the supply platform meet the above conditions during inventory allocation.
Disclosure of Invention
However, the related art incorporates complex constraints into the allocation logic, which often makes it difficult to finally obtain an allocation scheme; secondly, heuristic search is often adopted, the essence of which is to search for an optimal solution through a greedy strategy, so the obtained allocation scheme easily falls into a local optimum and is only applicable to some scenarios.
For this reason, an inventory allocation method is highly needed that increases the cases in which a solution exists and alleviates the cases in which the solution is trapped in a local optimum.
In this context, embodiments of the present disclosure desirably provide a neural network model-based inventory allocation method, a neural network model-based inventory allocation apparatus, a computer-readable storage medium, and an electronic device.
According to a first aspect of the present disclosure, there is provided a neural network model-based inventory allocation method, the method comprising: in response to successful placement of a target order, determining a first number of target demand nodes corresponding to the target order according to the targeting conditions of the target order; the target order is a purchase credential formed by purchasing the target object; determining a second number of target supply nodes according to supply channels of the target orders in a supply platform for providing the target objects for the target orders; and inputting the first quantity and the second quantity into a trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node, and distributing the target object according to the distribution proportion value.
In one embodiment, the determining a first number of target demand nodes corresponding to the target order according to the targeting conditions of the target order includes: determining the number of the targeting conditions; and splitting the target order according to the number of the targeting conditions to obtain the first number of target demand nodes.
In one embodiment, the determining a second number of target supply nodes according to the supply channel of the target order includes: each supply channel of the target order is taken as one target supply node.
In one embodiment, the responding to successful placement of the target order comprises: determining that the order placement is successful in response to the total inventory of all the target supply nodes on the day the target order is placed being greater than or equal to the total demand of all the target demand nodes.
In one embodiment, the inputting the first quantity and the second quantity into a trained neural network model for processing to obtain an allocation proportion value of the target object allocated by each target supply node includes: at an input layer, performing one-hot encoding on the target demand node and the target supply node respectively to obtain a first sparse vector and a second sparse vector; at an embedding layer, performing dimension reduction on the first sparse vector and the second sparse vector to obtain a first dense vector and a second dense vector; at a network layer, extracting features of the first dense vector and the second dense vector to obtain a first feature vector and a second feature vector; and at an output layer, converting the first feature vector and the second feature vector into the allocation proportion value of the target supply node by adopting a normalization function.
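The one-hot → embedding → feature-extraction → normalization pipeline described in this embodiment can be sketched roughly as follows. All layer sizes, weights, and function names below are illustrative assumptions; the disclosure does not specify the model's dimensions or parameters.

```python
import numpy as np

# Illustrative layer sizes and randomly initialized weights (hypothetical).
rng = np.random.default_rng(0)
n_demand, n_supply, embed_dim, hidden_dim = 3, 4, 8, 16
W_embed_d = rng.normal(size=(n_demand, embed_dim))   # embedding layer, demand side
W_embed_s = rng.normal(size=(n_supply, embed_dim))   # embedding layer, supply side
W_hidden = rng.normal(size=(2 * embed_dim, hidden_dim))
W_out = rng.normal(size=(hidden_dim, n_supply))

def one_hot(index, size):
    """Input layer: one-hot encode a node index into a sparse vector."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

def softmax(x):
    """Output layer: normalization function yielding proportions that sum to 1."""
    e = np.exp(x - x.max())
    return e / e.sum()

def allocation_proportions(demand_idx, supply_idx):
    d = one_hot(demand_idx, n_demand) @ W_embed_d    # first dense vector
    s = one_hot(supply_idx, n_supply) @ W_embed_s    # second dense vector
    h = np.tanh(np.concatenate([d, s]) @ W_hidden)   # network layer: feature extraction
    return softmax(h @ W_out)                        # allocation proportion per supply node
```

With untrained random weights the output is arbitrary, but it is already a valid allocation: a non-negative vector over the supply nodes summing to 1.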
In one embodiment, the method further comprises: reading the trained neural network model used to process the previous target order; training that model with the first quantity and the second quantity of the current target order to obtain an updated trained neural network model; and inputting the first quantity and the second quantity into the updated trained neural network model for processing to obtain the allocation proportion value of the target object allocated by each target supply node.
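This per-order update reduces to a load → fine-tune → persist loop. The pickle-based storage, function names, and dictionary model format below are hypothetical placeholders, not the serialization the platform actually uses:

```python
import os
import pickle
import tempfile

def update_model(model_path, first_quantity, second_quantity, fine_tune):
    """Load the model trained on the previous target order, fine-tune it on the
    new order's node counts, and persist it for the next order (hypothetical API)."""
    with open(model_path, "rb") as f:
        model = pickle.load(f)        # model trained on the last target order
    model = fine_tune(model, first_quantity, second_quantity)
    with open(model_path, "wb") as f:
        pickle.dump(model, f)         # reused when the next order arrives
    return model

# Toy demonstration with a stand-in "model" and "fine-tune" step.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump({"w": 0.0}, f)
updated = update_model(path, 3, 4, lambda m, a, b: {"w": m["w"] + a + b})
```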
In one embodiment, training a neural network model to obtain the trained neural network model comprises: initializing parameters of the neural network model, the parameters including weights and biases; inputting training data into the neural network model and propagating forward to obtain a predicted allocation proportion value; calculating a loss value between the predicted allocation proportion and the real allocation proportion according to a preset loss function, the loss function including a regularization term; back-propagating the loss value and calculating the gradients of the parameters of the neural network model by a gradient descent method; and optimizing the neural network model along the gradients until a preset convergence condition is reached, ending training to obtain the trained neural network model.
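A minimal sketch of this training loop, assuming (purely for illustration) a linear model, a mean-squared-error loss with an L2 regularization term, and synthetic targets in place of real allocation proportions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))        # training inputs (illustrative features)
true_w = np.array([0.7, 0.3])
y = X @ true_w                       # stand-in "real allocation proportion" targets

w = np.zeros(2)                      # 1. initialize parameters (bias omitted for brevity)
lr, lam = 0.1, 1e-3                  # learning rate and L2 regularization strength
for step in range(500):
    pred = X @ w                     # 2. forward propagation -> predicted proportions
    loss = np.mean((pred - y) ** 2) + lam * np.sum(w ** 2)  # 3. loss + regularization term
    grad = 2 * X.T @ (pred - y) / len(X) + 2 * lam * w      # 4. back-propagated gradient
    w -= lr * grad                   # 5. gradient-descent parameter update
    if np.linalg.norm(grad) < 1e-8:  # 6. preset convergence condition
        break
```

The regularization term slightly shrinks the recovered weights toward zero, which is the usual effect of L2 regularization and one reason such a term is included in the preset loss function.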
In one embodiment, the method further comprises: calculating an evaluation index of the trained neural network model using a validation set; the evaluation index includes at least one of: mean square error, root mean square error, mean absolute error, and coefficient of determination.
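The four validation-set indices named above can be computed directly from predictions and targets; the following helper is a straightforward sketch of their standard definitions:

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """MSE, RMSE, MAE and coefficient of determination (R^2) on a validation set."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot        # coefficient of determination
    return {"mse": mse, "rmse": rmse, "mae": mae, "r2": r2}

m = evaluation_metrics(np.array([1.0, 2.0, 3.0, 4.0]),
                       np.array([2.0, 2.0, 3.0, 4.0]))
# mse=0.25, rmse=0.5, mae=0.25, r2=0.8
```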
According to a second aspect of the present disclosure, there is provided an inventory allocation device based on a neural network model, the device comprising: a target demand node determining module configured to determine a first number of target demand nodes corresponding to a target order according to an orientation condition of the target order in response to successful placement of the target order; the target order is a purchase credential formed by purchasing the target object; a target supply node determination module configured to determine a second number of target supply nodes according to a supply channel of the target order in a supply platform providing the target object for the target order; and the target object distribution module is configured to input the first quantity and the second quantity into a trained neural network model for processing, obtain a distribution proportion value of the target object distributed by each target supply node, and distribute the target object according to the distribution proportion value.
In one embodiment, the target demand node determination module is configured to: determining a number of the orientation conditions; splitting the target orders according to the number of the orientation conditions to obtain a first number of target demand nodes.
In one embodiment, the target supply node determination module is configured to: each supply channel of the target order is taken as one target supply node.
In one embodiment, the target demand node determination module is configured to: determine that the order placement is successful in response to the total inventory of all the target supply nodes on the day the target order is placed being greater than or equal to the total demand of all the target demand nodes.
In one embodiment, the target object allocation module is configured to: at an input layer, perform one-hot encoding on the target demand node and the target supply node respectively to obtain a first sparse vector and a second sparse vector; at an embedding layer, perform dimension reduction on the first sparse vector and the second sparse vector to obtain a first dense vector and a second dense vector; at a network layer, extract features of the first dense vector and the second dense vector to obtain a first feature vector and a second feature vector; and at an output layer, convert the first feature vector and the second feature vector into the allocation proportion value of the target supply node by adopting a normalization function.
In one embodiment, the apparatus further comprises a model update module configured to: read the trained neural network model used to process the previous target order; train that model with the first quantity and the second quantity of the current target order to obtain an updated trained neural network model; and input the first quantity and the second quantity into the updated trained neural network model for processing to obtain the allocation proportion value of the target object allocated by each target supply node.
In one embodiment, the apparatus further comprises a model training module configured to: initialize parameters of a neural network model, the parameters including weights and biases; input training data into the neural network model and propagate forward to obtain a predicted allocation proportion value; calculate a loss value between the predicted allocation proportion and the real allocation proportion according to a preset loss function, the loss function including a regularization term; back-propagate the loss value and calculate the gradients of the parameters of the neural network model by a gradient descent method; and optimize the neural network model along the gradients until a preset convergence condition is reached, ending training to obtain the trained neural network model.
In one embodiment, the apparatus further comprises a model evaluation module configured to: calculate an evaluation index of the trained neural network model using a validation set; the evaluation index includes at least one of: mean square error, root mean square error, mean absolute error, and coefficient of determination.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any of the methods described above via execution of the executable instructions.
According to the inventory allocation method based on the neural network model, the inventory allocation device based on the neural network model, the computer-readable storage medium and the electronic equipment, in response to successful order placement of a target order, a first number of target demand nodes corresponding to the target order are determined according to the orientation conditions of the target order; the target order is a purchase credential formed by purchasing the target object; determining a second number of target supply nodes according to supply channels of the target orders in a supply platform for providing the target objects for the target orders; and inputting the first quantity and the second quantity into a trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node, and distributing the target object according to the distribution proportion value. Therefore, the probability of obtaining the allocation scheme can be increased and the applicable scene of the allocation scheme can be expanded by using the neural network model to fit the allocation proportion.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
fig. 1 shows a schematic diagram of an inventory allocation flow architecture based on a neural network model in an embodiment of the disclosure.
Fig. 2 shows a flowchart of a neural network model-based inventory allocation method in an embodiment of the disclosure.
FIG. 3 illustrates a flow chart of target demand node determination in a neural network model-based inventory allocation method in an embodiment of the disclosure.
Fig. 4 is a schematic diagram of a target demand node and a target supply node in an inventory allocation method based on a neural network model according to an embodiment of the disclosure.
FIG. 5 illustrates a flowchart of a trained neural network model processing a first quantity and a second quantity in a neural network model-based inventory allocation method in an embodiment of the present disclosure.
Fig. 6 illustrates a schematic diagram of a trained neural network model in a neural network model-based inventory allocation method in an embodiment of the disclosure.
Fig. 7 illustrates a flowchart of updating a trained neural network model in a neural network model-based inventory allocation method in an embodiment of the present disclosure.
FIG. 8 illustrates a flowchart of training a neural network model to obtain a trained neural network model in a neural network model-based inventory allocation method in an embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of an inventory allocation device based on a neural network model in an embodiment of the disclosure.
Fig. 10 shows a schematic structural diagram of an electronic device in an embodiment of the disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present disclosure will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present disclosure and are not intended to limit the scope of the present disclosure in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Those skilled in the art will appreciate that embodiments of the present disclosure may be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the following forms, namely: complete hardware, complete software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the present disclosure, there are provided a neural network model-based inventory allocation method, a neural network model-based inventory allocation device, a computer-readable storage medium, and an electronic apparatus.
Any number of elements in the figures are for illustration and not limitation, and any naming is used for distinction only, and not for any limiting sense.
The principles and spirit of the present disclosure are described in detail below with reference to several representative embodiments thereof.
For a stable and efficient supply platform, inventory allocation needs to satisfy the conditions of each party as far as possible: on one hand, the allocation proportion of each order on each channel needs to be determined; on the other hand, the allocation proportion of each order needs to meet numerous conditions such as maximizing the order completion rate and achieving optimal traffic allocation. However, current allocation methods do not help the supply platform meet the above conditions during inventory allocation. Specifically, the related art has the following problems:
(1) Because the delivery conditions of the order placing party (such as an advertiser) must be guaranteed, i.e., the order can be delivered within a specified time and the traffic (such as advertisement slots) can be fully utilized, complex constraints are often mixed into the allocation logic, and these conditions often lead to there being no solution during allocation solving;
(2) Current inventory allocation methods of supply platforms often search in a heuristic way, the essence of which is to search for an optimal solution through a greedy strategy, which easily causes the solving result to fall into a local optimum;
(3) Lacking a real-time update mechanism, such methods cannot adapt in time to a changing order environment, which may result in poor inventory allocation.
In view of the foregoing, the present disclosure provides a neural network model-based inventory allocation method, a neural network model-based inventory allocation device, a computer-readable storage medium, and an electronic apparatus, which determine a first number of target demand nodes corresponding to a target order according to an orientation condition of the target order in response to successful placement of the target order; the target order is a purchase credential formed by purchasing the target object; determining a second number of target supply nodes according to the supply channels of the target orders in a supply platform for providing the target objects for the target orders; inputting the first quantity and the second quantity into the trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node, and distributing the target object according to the distribution proportion value. Therefore, the probability of obtaining the allocation scheme can be increased and the applicable scene of the allocation scheme can be expanded by using the neural network model to fit the allocation proportion.
Having described the basic principles of the present disclosure, various non-limiting embodiments of the present disclosure are specifically described below.
Application scene overview
It should be noted that the following application scenarios are only shown for facilitating understanding of the spirit and principles of the present disclosure, and embodiments of the present disclosure are not limited in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
The method and the system can be applied to any inventory allocation scene, a server responds to successful placement of a target order, and a first number of target demand nodes corresponding to the target order are determined according to the orientation conditions of the target order; the target order is a purchase credential formed by purchasing the target object; determining a second number of target supply nodes according to the supply channels of the target orders in a supply platform for providing the target objects for the target orders; inputting the first quantity and the second quantity into the trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node, and distributing the target object according to the distribution proportion value.
Exemplary method
The system architecture and application scenario of the operating environment of the present exemplary embodiment are described below in conjunction with fig. 1.
Fig. 1 shows a schematic diagram of a system architecture, which system architecture 100 may include a terminal 110 and a server 120. The terminal 110 may be a smart phone, a tablet computer, a personal computer, etc., and the terminal 110 may receive a target order and related information input by a user. The server 120 determines a first number of target demand nodes corresponding to a target order according to the targeting conditions of the target order in response to successful placement of the target order; the target order is a purchase credential formed by purchasing the target object; determining a second number of target supply nodes according to the supply channels of the target orders in a supply platform for providing the target objects for the target orders; inputting the first quantity and the second quantity into the trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node, and distributing the target object according to the distribution proportion value. Server 120 may generally refer to a backend system (e.g., inventory allocation system) that provides inventory allocation related services, and may be a server or a cluster of servers. The terminal 110 and the server 120 may form a connection through a wired or wireless communication link for data interaction.
Exemplary embodiments of the present disclosure first provide an inventory allocation method based on a neural network model, which may include:
responding to successful order placement of the target order, and splitting the target order into a first number of target demand nodes according to the targeting conditions of the target order; the target order is a purchase credential formed by purchasing the target object;
determining a second number of target supply nodes according to the supply channels of the target orders in a supply platform for providing the target objects for the target orders;
inputting the first quantity and the second quantity into the trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node, and distributing the target object according to the distribution proportion value.
Fig. 2 shows an exemplary flow of the inventory allocation method based on the neural network model, and each step in fig. 2 is specifically described below.
Referring to fig. 2, in step S210, in response to successful placement of a target order, a first number of target demand nodes corresponding to the target order is determined according to the targeting conditions of the target order.
The target order is a purchase credential formed by purchasing the target object.
An order is the purchase credential formed when a consumer purchases an object (such as a commodity), and generally comprises object information (such as price, color and size), purchase time, merchant information (such as a shop name), and the like; further, the target order may be understood as a new order, i.e., the purchase credential obtained when the demanding party in a supply-demand relationship purchases the target object; for example: in the e-commerce scenario, a consumer places an order for commodity A, obtaining target order 1; in the advertisement delivery scenario, an advertiser orders advertisement slots for delivering a first advertisement according to a certain delivery strategy, obtaining target order 2; in the song recommendation scenario, a consumer purchases a number of songs, obtaining target order 3.
A targeting condition is an additional condition attached by the consumer to achieve a certain purpose; for example: in order to deliver advertisements purposefully to a certain category of target user group, an advertiser additionally adds targeting conditions such as labels, episodes and albums to target the advertisements; specifically, for example: the targeting conditions of advertiser D's target order are the TV-drama channel and the costume-drama label, i.e., advertiser D's advertisement needs to be delivered into videos under the TV-drama channel that carry the costume-drama label.
A demand node may be understood as a targeted order carrying targeting labels/targeting conditions. The number of demand nodes can be determined according to targeting conditions of any granularity; for example: the number of demand nodes can be determined according to the number of targeting conditions, or according to the types of targeting conditions; specifically, referring to fig. 3, the step S210 may further include the following steps S310 and S320:
Step S310, determining the number of targeting conditions.
The targeting conditions are generally contained in the target order, and their number can be determined by inspecting the target order; illustratively, if the targeting conditions of the target order are the TV-drama channel and the costume-drama label, the number of targeting conditions is 2; if the targeting conditions of the target order are a music-style targeting label, a language targeting label and fan-group targeting, the number of targeting conditions is 3.
Step S320, splitting the target order according to the number of targeting conditions to obtain a first number of target demand nodes.
The number of target demand nodes equals the number of targeting conditions, i.e., the first number is the number of targeting conditions; for example, if the targeting conditions of the target order are the TV-drama channel and the costume-drama label, so that the number of targeting conditions is 2, the target order is split into two target demand nodes: the TV-drama channel as one target demand node, and videos with the costume-drama label as the other target demand node. Illustratively, as shown in fig. 4, the target order is split to obtain target demand node 1, target demand node 2, and target demand node 3.
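Step S320 reduces to producing one target demand node per targeting condition; a minimal sketch, in which the dictionary schema is a hypothetical stand-in for the platform's real order representation:

```python
def split_order_into_demand_nodes(order):
    """Split a target order into one target demand node per targeting condition
    (order/node schema invented for illustration)."""
    return [{"order_id": order["id"], "targeting_condition": condition}
            for condition in order["targeting_conditions"]]

order = {"id": "order-D",
         "targeting_conditions": ["TV-drama channel", "costume-drama label"]}
nodes = split_order_into_demand_nodes(order)
# first number = 2 target demand nodes, one per targeting condition
```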
For the supply platform, a supply node is a traffic-exposure node, i.e., an inventory node generated from factors such as advertisement slots and time in Internet contract advertising. In a song-push system, a supply node refers to a traffic channel and scenario, where traffic channels include external channels and internal channels and can provide traffic distribution for a generated song order; for example: if song E is distributed on album E, content-editing section E, and recommended playlist E, then the supply nodes of song E are album E, content-editing section E, and recommended playlist E.
Illustratively, if the supply amount of the target supply node is greater than or equal to the demand amount of the target demand node, the order is placed successfully; otherwise it fails. Further, to meet more complex consumer demands, the order succeeds only if the same-day supply amount of the target supply node is greater than or equal to the demand amount of the target demand node, and fails otherwise. Specifically, the "in response to a target order being placed successfully" in step S210 above may further include the following step:
and determining that the order is successful in response to the total inventory of all target supply nodes on the target order placing day being greater than or equal to the total demand of all target demand nodes.
Illustratively, if the total inventory of all target supply nodes is 3000, the same-day inventory of all target supply nodes on the day the target order is placed is 1000, and the total demand of all target demand nodes is 1000, then the same-day inventory of all target supply nodes equals the total demand of all target demand nodes and the order is placed successfully; if the same-day inventory of all target supply nodes is greater than the total demand of 1000, the order also succeeds; if the same-day inventory of all target supply nodes is less than the total demand of 1000, the order fails.
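The same-day feasibility check described above can be sketched as follows (a minimal illustration; the function and variable names are assumed):

```python
def order_placed_successfully(supply_today, demands):
    """Order succeeds iff the total same-day inventory of all target
    supply nodes covers the total demand of all target demand nodes."""
    return sum(supply_today) >= sum(demands)

# Mirrors the example in the text: 1000 same-day inventory vs 1000 demand.
assert order_placed_successfully([400, 600], [500, 500]) is True
assert order_placed_successfully([300, 600], [500, 500]) is False
```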
With continued reference to FIG. 2, in step S220, in the supply platform providing the target object for the target order, a second number of target supply nodes is determined according to the supply channel of the target order.
Wherein, the supply platform refers to a platform for selling commodities, such as: e-commerce platform (selling goods), listening to song App (selling songs), fitness App (selling fitness courses), etc.
Supply channels refer to the way goods are provided, such as: an external channel 1, an external channel 2, an internal channel 1, an internal channel 2, and the like. The number of supply nodes may be determined according to any granularity of supply channel; the number of target supply nodes is determined as above. Specifically, the "determining the second number of target supply nodes according to the supply channel of the target order" in the above step S220 may further include the steps of:
each supply channel of the target order is taken as a target supply node.
Illustratively, at a relatively coarse granularity the supply channels may be divided into external channels and internal channels, giving an external-channel supply node and an internal-channel supply node; at a relatively fine granularity the supply channels may be divided into external channel 1, external channel 2, internal channel 1, and internal channel 2, giving the corresponding four supply nodes. Illustratively, as shown in fig. 4, target supply node 1, target supply node 2, target supply node 3, and target supply node 4 are obtained based on four supply channels; further, in the above example, target supply node 1 and target supply node 3 provide target objects for target demand node 1, target supply node 2 and target supply node 4 provide target objects for target demand node 2, and target supply node 3 and target supply node 4 provide target objects for target demand node 3.
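The channel-to-node mapping of step S220 can be sketched at either granularity (the channel names below are illustrative):

```python
def supply_nodes_from_channels(channels):
    """One target supply node per supply channel, at whatever granularity
    the channel list is given (coarse or fine)."""
    return [{"channel": c} for c in channels]

coarse = supply_nodes_from_channels(["external", "internal"])
fine = supply_nodes_from_channels(
    ["external 1", "external 2", "internal 1", "internal 2"])
assert len(coarse) == 2 and len(fine) == 4  # the "second number"
```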
With continued reference to fig. 2, in step S230, the first number and the second number are input into the trained neural network model for processing, so as to obtain an allocation proportion of the target object allocated by each target supply node, and the target objects are allocated according to the allocation proportion value.
Illustratively, the neural network model includes an input layer, an embedded layer, a network layer, and an output layer; the input layer is used for acquiring the first quantity and the second quantity which are input and carrying out preset data processing; the embedded layer is used for reducing the dimension of the data output by the input layer; the network layer is used for extracting the characteristics of the data output by the embedded layer; the output layer is used for converting the output of the network layer. For example, the neural network model may be trained to obtain the trained neural network model described above, the trained neural network model performing the following operations for each target demand node and each target supply node; specifically, referring to fig. 5, the "inputting the first number and the second number into the trained neural network model for processing in step S230 to obtain the allocation proportion of the target object allocated for the target order by each target supply node" may further include the following steps S510 to S540:
And S510, performing independent thermal coding on the target demand node and the target supply node at an input layer to obtain a first sparse vector and a second sparse vector.
Wherein the input layer is the first layer of the trained neural network model, typically receives input data and passes it to the next layer, but does not perform any operations on the input data; thus, the input layer has no weight and bias values. As illustrated by way of example in fig. 6.
One-hot encoding, also known as one-bit-effective encoding, uses an N-bit state register to encode N states; each state has its own register bit, and at any time only one of the bits is valid, i.e., only one bit is 1 and the rest are 0.
Illustratively, if the target demand nodes are the drama channel and the costume-drama label, then the one-hot code of the drama-channel demand node is (0, 1) and the one-hot code of the costume-drama-label demand node is (1, 0). The target supply nodes are encoded similarly and are not described in detail here.
In this step, the first sparse vector may be obtained by splicing the one-hot codes generated by the target demand nodes either transversely or longitudinally. For example, transversely splicing the code (0, 1) of the drama-channel demand node with the code (1, 0) of the costume-drama-label demand node gives [0, 1, 1, 0]; longitudinally splicing them gives the matrix [[0, 1], [1, 0]]. The second sparse vector is obtained in the same way from the target supply nodes.
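The one-hot encoding and the two splicing modes can be illustrated with NumPy (a sketch, not the patent's implementation):

```python
import numpy as np

def one_hot(index, n):
    """N-bit register with exactly one valid bit."""
    v = np.zeros(n, dtype=int)
    v[index] = 1
    return v

# Two demand nodes with 2-bit codes, as in the text: (0, 1) and (1, 0).
channel = one_hot(1, 2)   # drama-channel demand node
label = one_hot(0, 2)     # costume-drama-label demand node

horizontal = np.concatenate([channel, label])  # transverse splicing
vertical = np.stack([channel, label])          # longitudinal splicing
assert horizontal.tolist() == [0, 1, 1, 0]
assert vertical.tolist() == [[0, 1], [1, 0]]
```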
And step S520, performing dimension reduction on the first sparse vector and the second sparse vector at the embedding layer to obtain a first dense vector and a second dense vector.
Wherein the embedding layer is a mapping that can take features from an original low-dimensional space to a high-dimensional space, or from an original high-dimensional space to a low-dimensional space. For example, the high-dimensional sparse matrix produced by one-hot encoding is multiplied by a mapping table (i.e., the embedding matrix or look-up table) and thereby reduced to a low-dimensional dense matrix; dimension raising is the opposite operation and is not described in detail here.
In actual operation, the first sparse vector and the second sparse vector can be mapped into a space of smaller dimension through the embedding layer; illustratively, as shown in fig. 6, they may be mapped into a k-dimensional space. Specifically, the vector of the input target demand node j (the first sparse vector) is multiplied by an N×k embedding matrix to obtain its embedded vector (the first dense vector) emb_j, where N denotes the number of target demand nodes; the vector of the input target supply node i (the second sparse vector) is multiplied by an M×k embedding matrix to obtain its embedded vector (the second dense vector) emb_i, where M denotes the number of target supply nodes. In fig. 6, the embedding layer includes a demand embedding layer that embeds the first sparse vector and a supply embedding layer that embeds the second sparse vector.
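The embedding lookup of step S520 can be sketched in NumPy; the matrix shapes (N×k and M×k) follow the description above, while the random values and the seed are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 3, 4, 8                # demand nodes, supply nodes, embedding dim

W_d = rng.normal(size=(N, k))    # demand embedding matrix (illustrative)
W_s = rng.normal(size=(M, k))    # supply embedding matrix (illustrative)

x_j = np.eye(N)[1]               # first sparse vector: one-hot of node j = 1
x_i = np.eye(M)[2]               # second sparse vector: one-hot of node i = 2

emb_j = x_j @ W_d                # first dense vector (k-dimensional)
emb_i = x_i @ W_s                # second dense vector (k-dimensional)
assert emb_j.shape == (k,) and np.allclose(emb_j, W_d[1])
```

A one-hot vector times the embedding matrix simply selects one row, which is why embedding layers are implemented as table lookups in practice.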
And step S530, at the network layer, extracting features of the first dense vector and the second dense vector to obtain a first feature vector and a second feature vector.
The network layer may include a plurality of fully-connected networks, and each fully-connected network may include a plurality of fully-connected layers, which is not limited herein; illustratively, each fully-connected network may include three fully-connected layers for outputting 256-dimensional, 128-dimensional, and 64-dimensional feature vectors, respectively.
Further, each fully connected layer may apply dropout with a certain probability, together with an activation function, to prevent overfitting; for example, dropout with probability 0.2 and a rectified linear unit (ReLU) activation may be used. Illustratively, as shown in fig. 6, the network layer may be a deep neural network (DNN) formed by sequentially connecting multiple multilayer perceptrons (MLP); that is, the network layer may be a deep neural network and each fully connected network a multilayer perceptron.
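A minimal NumPy sketch of such a fully connected network with ReLU activations and dropout probability 0.2 (the 256/128/64 layer widths follow the text; the inverted-dropout scaling and random weights are implementation assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dropout(x, p, rng, training=True):
    """Inverted dropout: zero activations with probability p in training."""
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
dims = [16, 256, 128, 64]        # three fully connected layers, as in the text
weights = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(dims, dims[1:])]

h = rng.normal(size=(16,))       # a dense input vector from the embedding layer
for W in weights:
    h = dropout(relu(h @ W), p=0.2, rng=rng, training=True)
assert h.shape == (64,)          # 64-dimensional feature vector
```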
In step S540, at the output layer, the first feature vector and the second feature vector are converted into the allocation proportion value of the target supply node by using a normalization function.
Wherein the output layer converts the output of the previous layer into the final allocation proportion. The output layer is usually a fully connected layer whose output dimension is 1; at the same time, the outputs of all vectors under the same target supply node are transformed with a normalized exponential function (Softmax) into values between 0 and 1. An example is illustrated in fig. 6.
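The normalized exponential function of step S540 can be sketched as follows (the raw scores are illustrative):

```python
import numpy as np

def softmax(z):
    """Normalized exponential function, written numerically stably."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Raw 1-dimensional outputs for all vectors under the same target supply node.
scores = np.array([2.0, 0.5, -1.0])
ratios = softmax(scores)
assert np.all((0 < ratios) & (ratios < 1))  # each ratio lies in (0, 1)
assert np.isclose(ratios.sum(), 1.0)        # ratios form a distribution
```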
By fitting the allocation proportion with the neural network model, and through the transformation of the embedding matrix and the randomness of the model, an optimal solution can be obtained quickly and the risk of falling into a local optimum is alleviated.
In one embodiment, in order to further improve the real-time performance, the trained neural network model for processing the previous target order may be further trained to obtain a trained neural network model for processing the current target order; specifically, referring to fig. 7, the inventory allocation method based on the neural network model may further include the following steps S710 to S730:
step S710, reading the trained neural network model for processing the last target order.
The trained neural network model for processing each target order can be stored, so that the trained neural network model for processing the last target order can be directly read when needed.
And step S720, training the trained neural network model of the previous target order by adopting the first quantity and the second quantity of the target order to obtain the trained neural network model.
Wherein the parameters of the embedded vector, the loss function, etc. can be kept unchanged.
How the neural network model is trained to obtain a trained neural network model is described in the following, and is not explained here.
And step 730, inputting the first number and the second number into the trained neural network model for processing, and obtaining the distribution proportion value of the target object distributed by each target supply node.
Illustratively, the neural network model includes an input layer, an embedded layer, a network layer, and an output layer; the input layer is used for acquiring the first quantity and the second quantity of input and performing simple data processing; the embedded layer is used for reducing the dimension of the data output by the input layer; the network layer is used for extracting the characteristics of the data output by the embedded layer; the output layer is used for converting the output of the network layer. For example, the neural network model may be trained to obtain the trained neural network model.
The model is updated in real time according to the current order, so that it can adapt promptly to a changing order environment and the allocation effect of the model is prevented from degrading.
In one embodiment, the neural network model may be trained to obtain a trained neural network model; specifically, referring to fig. 8, training the neural network model to obtain a trained neural network model may further include the following steps S810 to S850:
step 810, initializing parameters of the neural network model.
Wherein the parameters include weights and deviations.
The parameter may be initialized by pre-training, random, or fixed value, which is not limited herein.
Step S820, inputting training data into a neural network model, and transmitting forward to obtain a predicted allocation proportion value.
The training data can be divided into a plurality of batches, and the neural network model is trained batch by batch; for example, the training data may be divided into 10 batches of B samples each, and the neural network model trained over the 10 batches.
Step S830, calculating a loss value between the predicted distribution ratio and the real distribution ratio according to a preset loss function.
Wherein the loss function comprises a regularization term.
The goal of model training is typically loss function minimization. In practice, the following loss function may be employed:
where d_j denotes the demand of demand node j; S_j denotes the sum of the supply amounts of all supply nodes i to which demand node j can be directed; p_j denotes the unfinished-penalty coefficient corresponding to demand node j, obtained from the database; u_j denotes the unfinished amount corresponding to demand node j, obtained from the database; c_ij denotes the complete-play proportion of order j on supply channel i; λ denotes the coefficients of the loss-function terms involved in the neural network, obtained from the database system; n denotes the number of input samples; w_j denotes the importance of demand node j; and x_ij denotes the allocation proportion output by the model.
The added regularization term reg serves as a constraint, replacing the constraints used in the search process of heuristic algorithms in the related art, so that the solving result satisfies the various constraints while the probability of the allocation process having no solution is reduced.
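The patent's exact loss formula is not reproduced above, so the following is only a hypothetical sketch of a loss combining a per-demand-node shortfall penalty with a feasibility regularization term reg, using the symbols defined above; the specific functional form is an assumption:

```python
import numpy as np

def allocation_loss(x, s, d, w, p, u, lam):
    """Hypothetical loss sketch. x[i, j]: allocation proportion of supply
    node i to demand node j; s[i]: supply amount; d[j]: demand amount;
    w[j]: importance of demand node j; p[j]: unfinished-penalty
    coefficient; u[j]: unfinished amount; lam: loss-term coefficient."""
    allocated = s @ x                          # amount each demand node gets
    shortfall = np.maximum(d - allocated, 0.0)
    # reg: penalize any supply node allocating more than 100% of itself.
    reg = np.sum(np.maximum(x.sum(axis=1) - 1.0, 0.0) ** 2)
    return float(np.mean(w * (shortfall + p * u)) + lam * reg)

s = np.array([10.0, 10.0])                     # two supply nodes
x = np.array([[0.5, 0.5], [0.5, 0.5]])         # feasible allocation
d = np.array([5.0, 5.0])                       # fully covered demand
loss = allocation_loss(x, s, d, np.ones(2), np.zeros(2), np.zeros(2), 0.1)
assert loss == 0.0  # no shortfall, no infeasibility -> zero loss
```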
And step S840, calculating the gradient of the parameter of the neural network model by adopting a gradient descent method according to the back propagation of the loss value.
In the back-propagation process of deep learning, the optimizer guides each parameter of the loss function (objective function) to be updated by an appropriate amount in the correct direction, so that each updated parameter brings the loss value ever closer to the global minimum.
Optimization mainly considers two aspects:
first, the direction of optimization, reflected in the optimizer as gradient or momentum;
second, the step size is reflected in the optimizer as the learning rate.
Generally, the items that need to be configured include the loss function, the initial learning rate, and the number of iterations over one batch of training data.
Optimizers include stochastic gradient descent (Stochastic Gradient Descent, SGD), SGD with Momentum, SGD with Nesterov acceleration, adaptive learning-rate algorithms (AdaGrad), AdaDelta/RMSProp, Adam, and the like.
And step S850, optimizing the neural network model according to the descending gradient until a preset convergence condition is reached, and ending training to obtain the trained neural network model.
The preset convergence condition is an iteration number threshold value or a loss value threshold value which is set according to historical data or experience; and when the iteration times threshold value is reached or the loss value threshold value is reached, ending the iterative training, and obtaining the trained neural network model.
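The iterative training of steps S820–S850 with the two convergence conditions (an iteration-count threshold or a loss-value threshold) can be sketched as follows; the toy objective, learning rate, and thresholds are illustrative:

```python
import numpy as np

def train(grad_fn, theta, lr=0.1, max_iters=1000, tol=1e-6):
    """Gradient descent ending on either preset convergence condition:
    the iteration-count threshold max_iters or the loss-value tol."""
    for _ in range(max_iters):
        loss, grad = grad_fn(theta)
        if loss <= tol:              # loss-value threshold reached
            break
        theta = theta - lr * grad    # optimize along the descending gradient
    return theta, loss

# Toy objective: loss = theta^2, gradient = 2 * theta.
theta, loss = train(lambda t: (t * t, 2 * t), theta=np.float64(1.0))
assert loss <= 1e-6                  # training converged
```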
In actual operation, the neural network model trained by the training set can be further tested by adopting test data, and after the test is passed, the neural network model passed by the test is used as a trained neural network model.
In one embodiment, to ensure model effectiveness, the trained neural network model may be further evaluated; specifically, the inventory allocation method based on the neural network model may further include the following steps:
and calculating the evaluation index of the trained neural network model by using the verification set.
Wherein the evaluation index comprises at least one of the following: mean square error, root mean square error, mean absolute error, and coefficient of determination.
The mean square error (Mean Squared Error, MSE) is a commonly used measure of the difference between the model's predicted and actual observed values, used to evaluate the fit of the model to a given datum. The MSE is obtained by calculating the average of the squares of the differences between the predicted and actual observed values.
Root mean square error (Root Mean Squared Error, RMSE) is a commonly used measure of the difference between a model's predicted and actual observed values and is used to evaluate the fit of the model to a given datum. RMSE is obtained by calculating the mean of the square of the difference between the predicted value and the actual observed value and taking the square root thereof.
The mean absolute error (Mean Absolute Error, MAE) is a commonly used indicator of the difference between the predicted and actual observed values of a model, used to evaluate the fit of the model to a given datum. MAE is obtained by calculating the average of the absolute values of the differences between the predicted and actual observed values.
The coefficient of determination (Coefficient of Determination), commonly denoted R², is a statistical indicator used to evaluate the goodness of fit of a regression model. It represents the proportion of the variability of the dependent variable that can be explained by the model, i.e., the degree to which the model fits the data.
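The four evaluation indicators above can be computed as follows (standard definitions, not specific to the patent):

```python
import numpy as np

def mse(y, yhat):
    """Mean squared error: average of squared prediction differences."""
    return float(np.mean((y - yhat) ** 2))

def rmse(y, yhat):
    """Root mean squared error: square root of the MSE."""
    return float(np.sqrt(mse(y, yhat)))

def mae(y, yhat):
    """Mean absolute error: average of absolute prediction differences."""
    return float(np.mean(np.abs(y - yhat)))

def r2(y, yhat):
    """Coefficient of determination: explained fraction of variability."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y = np.array([1.0, 2.0, 3.0])
yhat = np.array([1.0, 2.0, 3.0])     # a perfect prediction for illustration
assert mse(y, yhat) == 0.0 and mae(y, yhat) == 0.0
assert r2(y, yhat) == 1.0
```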
Exemplary apparatus
Having described the neural network model-based inventory allocation method of the exemplary embodiments of the present disclosure, next, the neural network model-based inventory allocation device of the exemplary embodiments of the present disclosure will be described with reference to fig. 9.
Referring to fig. 9, an inventory allocation device based on a neural network model, the device includes: a target demand node determination module 910 configured to determine, in response to a target order being placed successfully, a first number of target demand nodes corresponding to the target order according to an orientation condition of the target order; the target order is a purchase credential formed by purchasing the target object; a target supply node determination module 920 configured to determine a second number of target supply nodes according to the supply channel of the target order in a supply platform providing the target object for the target order; the target object allocation module 930 is configured to input the first number and the second number into the trained neural network model for processing, obtain an allocation proportion value of the target object allocated by each target supply node, and allocate the target object according to the allocation proportion value.
In one embodiment, the target demand node determination module 910 is configured to: determine the number of targeting conditions; and split the target order according to the number of targeting conditions to obtain a first number of target demand nodes.
In one embodiment, the target supply node determination module 920 is configured to: each supply channel of the target order is taken as a target supply node.
In one embodiment, the target demand node determination module 910 is configured to: and determining that the order is successful in response to the total inventory of all target supply nodes on the target order placing day being greater than or equal to the total demand of all target demand nodes.
In one embodiment, the target object allocation module 930 is configured to: at an input layer, performing independent thermal coding on a target demand node and a target supply node respectively to obtain a first sparse vector and a second sparse vector; in the embedding layer, performing dimension reduction on the first sparse vector and the second sparse vector to obtain a first dense vector and a second dense vector; at a network layer, extracting features of the first dense vector and the second dense vector to obtain a first feature vector and a second feature vector; at the output layer, a normalization function is used to convert the first feature vector and the second feature vector into the allocation proportion value of the target supply node.
In one embodiment, the apparatus further comprises a model update module configured to: reading a trained neural network model for processing a last target order; training the trained neural network model of the previous target order by adopting the first quantity and the second quantity of the target orders to obtain a trained neural network model; and inputting the first quantity and the second quantity into the trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node.
In one embodiment, the apparatus further comprises a model training module configured to: initializing parameters of a neural network model; parameters include weights and deviations; inputting training data into a neural network model, and propagating forward to obtain a predicted allocation proportion value; calculating a loss value between a predicted distribution proportion and a real distribution proportion according to a preset loss function; the loss function includes a regularization term; adopting a gradient descent method to counter-propagate and calculate the descent gradient of the parameters of the neural network model according to the loss value; and optimizing the neural network model according to the descending gradient until a preset convergence condition is reached, and ending training to obtain the trained neural network model.
In one embodiment, the apparatus further comprises a model evaluation module configured to: calculate an evaluation index of the trained neural network model using the verification set; the evaluation index includes at least one of: mean square error, root mean square error, mean absolute error, and coefficient of determination.
Exemplary storage Medium
A storage medium according to an exemplary embodiment of the present disclosure is described below.
In the present exemplary embodiment, the above-described method may be implemented by a program product, such as a portable compact disc read only memory (CD-ROM) and including program code, and may be run on a device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Exemplary electronic device
An electronic device of an exemplary embodiment of the present disclosure is described with reference to fig. 10.
The electronic device 1000 shown in fig. 10 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general purpose computing device. Components of electronic device 1000 may include, but are not limited to: at least one processing unit 1010, at least one memory unit 1020, a bus 1030 connecting the various system components (including the memory unit 1020 and the processing unit 1010), and a display unit 1040.
Wherein the storage unit stores program code that is executable by the processing unit 1010 such that the processing unit 1010 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the present specification. For example, the processing unit 1010 may perform the method steps shown in fig. 1, etc.
The memory unit 1020 may include volatile memory units such as a random access memory unit (RAM) 1021 and/or a cache memory unit 1022, and may further include a read only memory unit (ROM) 1023.
Storage unit 1020 may also include a program/utility 1024 having a set (at least one) of program modules 1025, such program modules 1025 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1030 may include a data bus, an address bus, and a control bus.
The electronic device 1000 may also communicate with one or more external devices 2000 (e.g., keyboard, pointing device, bluetooth device, etc.) via an input/output (I/O) interface 1050. The electronic device 1000 also includes a display unit 1040 that is connected to an input/output (I/O) interface 1050 for displaying. Also, electronic device 1000 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 1060. As shown, the network adapter 1060 communicates with other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 1000, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
It should be noted that while several modules or sub-modules of the apparatus are mentioned in the detailed description above, such partitioning is merely exemplary and not mandatory. Indeed, the features and functionality of two or more units/modules described above may be embodied in one unit/module in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit/module described above may be further divided into ones that are embodied by a plurality of units/modules.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this is not required to or suggested that these operations must be performed in this particular order or that all of the illustrated operations must be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the disclosure is not limited to the particular embodiments disclosed, nor does the division into aspects imply that features in these aspects cannot be combined to advantage; such division is for convenience of description only. The disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (18)

1. A neural network model-based inventory allocation method, the method comprising:
in response to successful placement of a target order, determining a first number of target demand nodes corresponding to the target order according to the targeting conditions of the target order; the target order is a purchase credential formed by purchasing the target object;
Determining a second number of target supply nodes according to supply channels of the target orders in a supply platform for providing the target objects for the target orders;
and inputting the first quantity and the second quantity into a trained neural network model for processing to obtain the distribution proportion value of the target object distributed by each target supply node, and distributing the target object according to the distribution proportion value.
2. The method of claim 1, wherein the determining a first number of target demand nodes corresponding to the target order based on the targeting conditions of the target order comprises:
determining a number of the targeting conditions;
splitting the target order according to the number of the targeting conditions to obtain a first number of target demand nodes.
3. The method of claim 1, wherein the determining a second number of target supply nodes based on the supply channel of the target order comprises:
taking each supply channel of the target order as one target supply node.
4. The method of claim 1, wherein the determining that the target order is placed successfully comprises:
determining that the order is placed successfully in response to a total inventory of all the target supply nodes on the day the target order is placed being greater than or equal to a total demand of all the target demand nodes.
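A sketch of the placement-success check in claim 4, assuming per-node inventories and demands are plain integer quantities (the function and key names are illustrative):

```python
# Illustrative feasibility check: an order places successfully only if the
# total same-day inventory across all supply nodes covers the total demand
# across all demand nodes.
def order_placeable(supply_inventory: dict[str, int],
                    demand: dict[str, int]) -> bool:
    return sum(supply_inventory.values()) >= sum(demand.values())

ok = order_placeable({"channel_a": 60, "channel_b": 50},
                     {"cond_1": 40, "cond_2": 55})
```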
5. The method of claim 1, wherein the inputting the first number and the second number into a trained neural network model for processing to obtain an allocation proportion value of the target object allocated to each of the target supply nodes comprises:
at an input layer, performing one-hot encoding on the target demand nodes and the target supply nodes respectively to obtain a first sparse vector and a second sparse vector;
at an embedding layer, performing dimension reduction on the first sparse vector and the second sparse vector to obtain a first dense vector and a second dense vector;
at a network layer, extracting features from the first dense vector and the second dense vector to obtain a first feature vector and a second feature vector;
and at an output layer, converting the first feature vector and the second feature vector into the allocation proportion values of the target supply nodes by a normalization function.
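The four layers of claim 5 can be sketched as a toy forward pass. This is not the patented model: the layer dimensions, the mean-pooling of demand embeddings, and the tanh activation are assumptions made to produce a runnable example; only the one-hot input, embedding dimension reduction, feature extraction, and softmax normalization mirror the claim:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(indices, size):
    """Input layer: sparse one-hot vectors for node ids."""
    out = np.zeros((len(indices), size))
    out[np.arange(len(indices)), indices] = 1.0
    return out

# Hypothetical layer sizes; the patent does not fix any dimensions.
n_demand, n_supply, emb_dim, feat_dim = 8, 4, 3, 5
W_emb_d = rng.normal(size=(n_demand, emb_dim))     # embedding layer (demand)
W_emb_s = rng.normal(size=(n_supply, emb_dim))     # embedding layer (supply)
W_feat = rng.normal(size=(2 * emb_dim, feat_dim))  # network layer
w_out = rng.normal(size=(feat_dim, 1))             # output layer

def allocation_proportions(demand_ids, supply_ids):
    d = one_hot(demand_ids, n_demand) @ W_emb_d    # first dense vectors
    s = one_hot(supply_ids, n_supply) @ W_emb_s    # second dense vectors
    # pair every supply node with a pooled demand context (an assumption)
    d_ctx = d.mean(axis=0, keepdims=True).repeat(len(supply_ids), axis=0)
    h = np.tanh(np.concatenate([d_ctx, s], axis=1) @ W_feat)  # features
    logits = (h @ w_out).ravel()
    e = np.exp(logits - logits.max())              # softmax normalization
    return e / e.sum()

props = allocation_proportions([0, 2, 5], [0, 1, 2])
```

By construction the output is one proportion per target supply node, and the proportions sum to 1, which is what the softmax-style normalization function in the claim provides.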
6. The method according to claim 1, wherein the method further comprises:
reading a trained neural network model used for processing a previous target order;
training the trained neural network model of the previous target order with the first number and the second number of the current target order to obtain the trained neural network model;
and inputting the first number and the second number into the trained neural network model for processing to obtain the allocation proportion value of the target object allocated to each target supply node.
7. The method of claim 1, wherein training a neural network model to obtain the trained neural network model comprises:
initializing parameters of the neural network model, the parameters including weights and biases;
inputting training data into the neural network model and propagating forward to obtain a predicted allocation proportion value;
calculating a loss value between the predicted allocation proportion and a true allocation proportion according to a preset loss function, the loss function including a regularization term;
back-propagating the loss value and calculating descent gradients of the parameters of the neural network model by a gradient descent method;
and optimizing the neural network model according to the descent gradients until a preset convergence condition is reached, ending training to obtain the trained neural network model.
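The training loop of claim 7 can be sketched with a toy linear-softmax model: random initialization of weights and biases, forward propagation, a loss with an L2 regularization term, back-propagation of the gradient, and plain gradient-descent updates. The data, model size, and hyperparameters are all illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(16, 6))               # toy training features
Y = np.eye(3)[rng.integers(0, 3, size=16)] # "true" allocation proportions
W = rng.normal(scale=0.1, size=(6, 3))     # initialized weights
b = np.zeros(3)                            # initialized biases
lr, lam = 0.1, 1e-3                        # learning rate, L2 strength

def forward(X):
    z = X @ W + b
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # predicted proportions

losses = []
for step in range(200):
    P = forward(X)
    # loss = cross-entropy between predicted and true proportions
    #        + L2 regularization term on the weights
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1)) + lam * np.sum(W**2)
    losses.append(float(loss))
    g = (P - Y) / len(X)                   # back-propagated output gradient
    W -= lr * (X.T @ g + 2 * lam * W)      # descent step on weights
    b -= lr * g.sum(axis=0)                # descent step on biases
```

A fixed step count stands in for the claim's "preset convergence condition"; in practice one would stop when the loss change falls below a threshold.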
8. The method of claim 7, wherein the method further comprises:
calculating an evaluation index of the trained neural network model with a validation set, the evaluation index including at least one of: mean square error, root mean square error, mean absolute error, and coefficient of determination.
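The four evaluation indexes of claim 8 can be computed directly from validation-set predictions (a generic sketch; `regression_metrics` is an illustrative name):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE and coefficient of determination (R^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)                 # mean square error
    rmse = np.sqrt(mse)                     # root mean square error
    mae = np.mean(np.abs(err))              # mean absolute error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot              # coefficient of determination
    return {"mse": mse, "rmse": rmse, "mae": mae, "r2": r2}

m = regression_metrics([0.2, 0.3, 0.5], [0.25, 0.25, 0.5])
```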
9. An inventory allocation device based on a neural network model, the device comprising:
a target demand node determination module configured to determine, in response to successful placement of a target order, a first number of target demand nodes corresponding to the target order according to targeting conditions of the target order, the target order being a purchase credential formed by purchasing a target object;
a target supply node determination module configured to determine a second number of target supply nodes according to supply channels of the target order in a supply platform that provides the target object for the target order;
and a target object allocation module configured to input the first number and the second number into a trained neural network model for processing to obtain an allocation proportion value of the target object allocated to each target supply node, and to allocate the target object according to the allocation proportion values.
10. The apparatus of claim 9, wherein the target demand node determination module is configured to:
determine a number of the targeting conditions;
and split the target order according to the number of the targeting conditions to obtain the first number of target demand nodes.
11. The apparatus of claim 9, wherein the target supply node determination module is configured to:
take each supply channel of the target order as one target supply node.
12. The apparatus of claim 9, wherein the target demand node determination module is configured to:
determine that the order is placed successfully in response to a total inventory of all the target supply nodes on the day the target order is placed being greater than or equal to a total demand of all the target demand nodes.
13. The apparatus of claim 9, wherein the target object allocation module is configured to:
at an input layer, perform one-hot encoding on the target demand nodes and the target supply nodes respectively to obtain a first sparse vector and a second sparse vector;
at an embedding layer, perform dimension reduction on the first sparse vector and the second sparse vector to obtain a first dense vector and a second dense vector;
at a network layer, extract features from the first dense vector and the second dense vector to obtain a first feature vector and a second feature vector;
and at an output layer, convert the first feature vector and the second feature vector into the allocation proportion values of the target supply nodes by a normalization function.
14. The apparatus of claim 9, further comprising a model update module configured to:
read a trained neural network model used for processing a previous target order;
train the trained neural network model of the previous target order with the first number and the second number of the current target order to obtain the trained neural network model;
and input the first number and the second number into the trained neural network model for processing to obtain the allocation proportion value of the target object allocated to each target supply node.
15. The apparatus of claim 9, further comprising a model training module configured to:
initialize parameters of a neural network model, the parameters including weights and biases;
input training data into the neural network model and propagate forward to obtain a predicted allocation proportion value;
calculate a loss value between the predicted allocation proportion and a true allocation proportion according to a preset loss function, the loss function including a regularization term;
back-propagate the loss value and calculate descent gradients of the parameters of the neural network model by a gradient descent method;
and optimize the neural network model according to the descent gradients until a preset convergence condition is reached, ending training to obtain the trained neural network model.
16. The apparatus of claim 15, further comprising a model evaluation module configured to:
calculate an evaluation index of the trained neural network model with a validation set, the evaluation index including at least one of: mean square error, root mean square error, mean absolute error, and coefficient of determination.
17. A computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-8.
18. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1-8 via execution of the executable instructions.
CN202311757785.2A 2023-12-19 2023-12-19 Inventory allocation method, device, medium and equipment based on neural network model Pending CN117455366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311757785.2A CN117455366A (en) 2023-12-19 2023-12-19 Inventory allocation method, device, medium and equipment based on neural network model

Publications (1)

Publication Number Publication Date
CN117455366A true CN117455366A (en) 2024-01-26

Family

ID=89585802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311757785.2A Pending CN117455366A (en) 2023-12-19 2023-12-19 Inventory allocation method, device, medium and equipment based on neural network model

Country Status (1)

Country Link
CN (1) CN117455366A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097302A (en) * 2018-01-29 2019-08-06 北京京东尚科信息技术有限公司 The method and apparatus for distributing order
CN110197257A (en) * 2019-05-28 2019-09-03 浙江大学 A kind of neural network structure Sparse methods based on increment regularization
CN112001681A (en) * 2020-08-31 2020-11-27 杭州拼便宜网络科技有限公司 Warehouse management method, device, platform and computer readable storage medium
US11080727B1 (en) * 2018-12-11 2021-08-03 Stitch Fix, Inc. Global optimization of inventory allocation
CN113706211A (en) * 2021-08-31 2021-11-26 平安科技(深圳)有限公司 Advertisement click rate prediction method and system based on neural network
KR102452440B1 (en) * 2022-07-25 2022-10-11 주식회사 어스큐레이션 Inventory management and order processing methods, devices and systems for distribution of electronic equipment
CN116107279A (en) * 2023-02-20 2023-05-12 合肥城市云数据中心股份有限公司 Flow industrial energy consumption multi-objective optimization method based on attention depth neural network
CN116227577A (en) * 2023-03-07 2023-06-06 北京中电普华信息技术有限公司 Neural network model training method, device, equipment and readable storage medium
CN116257758A (en) * 2023-01-18 2023-06-13 杭州网易云音乐科技有限公司 Model training method, crowd expanding method, medium, device and computing equipment
CN116862580A (en) * 2023-07-11 2023-10-10 深圳市乐信信息服务有限公司 Short message reaching time prediction method and device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XU, J et al.: "Sportswear retailing forecast model based on the combination of multi-layer perceptron and convolutional neural network", Textile Research Journal, vol. 91, no. 23, 31 December 2021 (2021-12-31), pages 2980-2994 *
Lü, Fei et al.: "Optimization model and algorithm for the location-allocation problem under a single-period stochastic inventory control policy", Logistics Technology, vol. 28, no. 12, 15 December 2009 (2009-12-15), pages 93-97 *
Shi, Lei: "Forecasting mini washing machine sales with time series and neural networks", Journal of Anhui University of Science and Technology (Natural Science), no. 03, 15 September 2013 (2013-09-15), pages 73-77 *

Similar Documents

Publication Publication Date Title
CN109902849B (en) User behavior prediction method and device, and behavior prediction model training method and device
US10812870B2 (en) Yield optimization of cross-screen advertising placement
CA3007853C (en) End-to-end deep collaborative filtering
US20180342004A1 (en) Cumulative success-based recommendations for repeat users
US20220374888A1 (en) Digital asset management
CN110866628A (en) System and method for multi-bounded time series prediction using dynamic time context learning
CN110046965A (en) Information recommendation method, device, equipment and medium
CN109035028B (en) Intelligent consultation strategy generation method and device, electronic equipment and storage medium
KR20110082597A (en) Automated specification, estimation, discovery of causal drivers and market response elasticities or lift factors
CN110991464A (en) Commodity click rate prediction method based on deep multi-mode data fusion
CN114117216A (en) Recommendation probability prediction method and device, computer storage medium and electronic equipment
WO2017124041A1 (en) Yield optimization of cross-screen advertising placement
CN112348592A (en) Advertisement recommendation method and device, electronic equipment and medium
CN113706211A (en) Advertisement click rate prediction method and system based on neural network
Zhao et al. Tag‐Aware Recommender System Based on Deep Reinforcement Learning
US10678800B2 (en) Recommendation prediction based on preference elicitation
CN116910373B (en) House source recommendation method and device, electronic equipment and storage medium
CN112348590A (en) Method and device for determining value of article, electronic equipment and storage medium
US20210248576A1 (en) System to facilitate exchange of data segments between data aggregators and data consumers
CN116340635A (en) Article recommendation method, model training method, device and equipment
CN117455366A (en) Inventory allocation method, device, medium and equipment based on neural network model
CN113902481B (en) Rights and interests determining method, device, storage medium and apparatus
CN114331637A (en) Object recommendation method and device, storage medium and electronic equipment
CN115329183A (en) Data processing method, device, storage medium and equipment
CN114529309A (en) Information auditing method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination