CN115630585B - Method, apparatus, device and computer readable medium for predicting commodity circulation quantity - Google Patents


Info

Publication number
CN115630585B
Authority
CN
China
Prior art keywords
target
historical
data
sample
circulation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211670369.4A
Other languages
Chinese (zh)
Other versions
CN115630585A (en)
Inventor
黄智杰
庄晓天
吴盛楠
Current Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Original Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority to CN202211670369.4A
Publication of CN115630585A
Application granted
Publication of CN115630585B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

Embodiments of the present disclosure disclose methods, apparatuses, devices, and computer-readable media for predicting an item circulation quantity. One embodiment of the method comprises the following steps: acquiring a historical dynamic feature data time series and historical static feature data of a target item over a historical time period; inputting the historical dynamic feature data time series into an encoding model included in a pre-trained encoding and decoding model to generate a first feature vector list, wherein the encoding and decoding model is used to generate a predicted circulation quantity; performing word embedding on the historical static feature data to obtain a second feature vector; splicing the first feature vector list and the second feature vector to obtain a spliced vector; and inputting the spliced vector into a decoding model included in the encoding and decoding model to obtain the circulation quantity corresponding to the target time. The embodiment relates to artificial intelligence: with the trained encoding and decoding model, the circulation quantity corresponding to the target item can be predicted more accurately.

Description

Method, apparatus, device and computer readable medium for predicting commodity circulation quantity
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, an apparatus, a device, and a computer readable medium for predicting an item circulation quantity.
Background
Item circulation quantity prediction is an important step in item circulation planning: accurately predicting the future circulation quantity of an item plays an important role in controlling inventory value and inventory level. The future circulation quantity of an item is generally predicted as follows: a simple time series prediction method or a pre-trained machine learning model is used to predict the circulation quantity of the item at a future point in time.
However, the inventors have found that predicting the circulation quantity of an item in the above manner often suffers from the following technical problem:
whether the prediction is performed by a simple time series method or a machine learning method, the complete circulation quantity features cannot be captured, so the prediction of the item circulation quantity is not accurate enough.
The information disclosed in this Background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not form prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose item circulation quantity prediction methods, apparatuses, devices, and computer readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an item circulation quantity prediction method, the method comprising: acquiring a historical dynamic feature data time series and historical static feature data of a target item over a historical time period; inputting the historical dynamic feature data time series into an encoding model included in a pre-trained encoding and decoding model to generate a first feature vector list, wherein the encoding and decoding model is used to generate a predicted circulation quantity; performing word embedding on the historical static feature data to obtain a second feature vector; splicing the first feature vector list and the second feature vector to obtain a spliced vector; and inputting the spliced vector into a decoding model included in the encoding and decoding model to obtain the circulation quantity corresponding to the target time.
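The five claimed operations can be sketched in a few lines. Here `encode`, `embed`, and `decode` are hypothetical stand-ins for the trained encoding model, the word-embedding step, and the decoding model; this is an illustration of the data flow, not the claimed implementation:

```python
def predict_circulation(dynamic_series, static_data, encode, embed, decode):
    # the step-1 input is assumed already acquired; steps 2-5 follow in order
    first_vectors = encode(dynamic_series)   # first feature vector list
    second_vector = embed(static_data)       # second feature vector
    spliced = first_vectors + second_vector  # spliced vector (end-to-end splice)
    return decode(spliced)                   # circulation quantity at target time
```

With toy stand-ins (for example, `encode` summing the series and `decode` summing the spliced vector), the function simply threads the data through the four model steps.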
Optionally, the method further comprises: initializing the value of a first preset counter to a first preset count value; determining the obtained circulation quantity corresponding to the target time as a target circulation quantity; and, based on the target circulation quantity, performing the following circulation quantity generation step: determining the time point immediately after the time point corresponding to the target circulation quantity as a second target time; adding the target circulation quantity to the tail of the historical dynamic feature data time series and deleting the historical dynamic feature data at the first position from the historical dynamic feature data time series to obtain a target historical dynamic feature data time series; inputting the target historical dynamic feature data time series into the encoding model to generate a first target feature vector list; splicing the first target feature vector list and the second feature vector to obtain a target spliced vector; inputting the target spliced vector into the decoding model to obtain the circulation quantity corresponding to the second target time, and determining the sum of the value of the first preset counter and a first preset step value as a first target count value; and, in response to determining that the first target count value meets a preset prediction frequency condition, ordering the obtained target circulation quantities to obtain a target circulation quantity sequence.
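The counter-driven generation step above amounts to an autoregressive rolling window: each prediction is appended to the tail of the series while the datum at the first position is deleted. A minimal sketch, where the `encode`/`decode` callables are illustrative stand-ins for the trained models:

```python
def rolling_forecast(history, static_vector, encode, decode, n_steps):
    # one loop iteration per counter increment, up to the preset prediction count
    window = list(history)
    forecasts = []
    for _ in range(n_steps):
        first_vectors = encode(window)           # first target feature vector list
        spliced = first_vectors + static_vector  # target spliced vector
        quantity = decode(spliced)               # quantity at the second target time
        forecasts.append(quantity)
        window = window[1:] + [quantity]         # drop first position, append tail
    return forecasts
```

The returned list corresponds to the target circulation quantity sequence obtained once the count condition is met.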
Optionally, the method further comprises: in response to determining that the first target count value does not meet the preset prediction frequency condition, taking the circulation quantity corresponding to the second target time as the target circulation quantity and the target historical dynamic feature data time series as the historical dynamic feature data time series, and executing the circulation quantity generation step again.
Optionally, the sample set of the encoding and decoding model is generated by the following steps: acquiring a historical circulation data time series of the target item, wherein each historical circulation datum in the historical circulation data time series comprises first dynamic feature data and first static feature data; determining the first dynamic feature data included in each historical circulation datum in the historical circulation data time series as first feature data to obtain a first feature data time series; determining the first static feature data included in any historical circulation datum in the historical circulation data time series as second feature data; and generating a sample set based on the first feature data time series, wherein each sample in the sample set comprises a sample circulation data sequence and a sample target circulation quantity.
Optionally, each first feature datum in the first feature data time series includes a historical circulation quantity; and generating the sample set based on the first feature data time series comprises: for each first feature datum in the first feature data time series, performing the following steps: selecting, based on the first feature datum, a preset number of first feature data meeting a preset continuity condition from the first feature data time series, and determining each selected first feature datum as target first feature data to obtain a target first feature data sequence; selecting target first feature data meeting a preset position condition from the target first feature data sequence as sample circulation data to obtain a sample circulation data sequence; determining the historical circulation quantity included in the target first feature data at the last position in the target first feature data sequence as the sample target circulation quantity; and determining the sample circulation data sequence and the sample target circulation quantity as a sample.
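Under the assumption that the continuity condition means consecutive positions and the position condition keeps every element except the last, the sample construction can be sketched as follows (the window size stands in for the preset number and is illustrative):

```python
def make_samples(first_feature_series, window=3):
    # slide a window of `window` consecutive first feature data over the series;
    # the inputs are the window minus its last element, and the target is the
    # historical circulation quantity at the last position
    samples = []
    for i in range(len(first_feature_series) - window + 1):
        run = first_feature_series[i:i + window]
        samples.append((run[:-1], run[-1]))
    return samples
```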
Optionally, the encoding and decoding model is trained by the following steps: based on the sample set, performing the following sample training step: inputting the second feature data and the sample circulation data sequence of each sample in the sample set into an initial encoding and decoding model to obtain a predicted circulation quantity corresponding to each sample in the sample set; determining the absolute value of the difference between the predicted circulation quantity corresponding to each sample in the sample set and the corresponding sample target circulation quantity as a sample error value to obtain a sample error value set; generating a target loss value for the sample error value set by using a preset target loss function; in response to determining that the target loss value is less than or equal to a preset threshold, determining the initial encoding and decoding model as the trained encoding and decoding model; and, in response to determining that the target loss value is greater than the preset threshold, adjusting parameters of the initial encoding and decoding model, taking the adjusted model as the initial encoding and decoding model, and executing the sample training step again.
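The per-sample absolute error and the threshold test above can be sketched as follows. Averaging the error set is an assumed choice of target loss function, and `model.predict`/`model.adjust` are hypothetical stand-ins for the initial encoding and decoding model and its parameter update:

```python
def mae_loss(predictions, targets):
    # sample error value set: |predicted - target| per sample, averaged into
    # a single target loss value
    errors = [abs(p - t) for p, t in zip(predictions, targets)]
    return sum(errors) / len(errors)

def train(model, samples, threshold, max_rounds=1000):
    # repeat the sample training step until the target loss value is at or
    # below the preset threshold, adjusting parameters otherwise
    for _ in range(max_rounds):
        predictions = [model.predict(sequence) for sequence, _ in samples]
        loss = mae_loss(predictions, [target for _, target in samples])
        if loss <= threshold:
            break                 # trained encoding and decoding model
        model.adjust(loss)        # parameter adjustment, then train again
    return model
```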
In a second aspect, some embodiments of the present disclosure provide an item circulation quantity prediction apparatus, the apparatus comprising: an acquisition unit configured to acquire a historical dynamic feature data time series and historical static feature data of a target item over a historical time period; a first input unit configured to input the historical dynamic feature data time series into an encoding model included in a pre-trained encoding and decoding model to generate a first feature vector list, wherein the encoding and decoding model is used to generate a predicted circulation quantity; a word embedding processing unit configured to perform word embedding on the historical static feature data to obtain a second feature vector; a splicing unit configured to splice the first feature vector list and the second feature vector to obtain a spliced vector; and a second input unit configured to input the spliced vector into a decoding model included in the encoding and decoding model to obtain the circulation quantity corresponding to the target time.
Optionally, the second input unit may be further configured to: initialize the value of a first preset counter to a first preset count value; determine the obtained circulation quantity corresponding to the target time as a target circulation quantity; and, based on the target circulation quantity, perform the following circulation quantity generation step: determining the time point immediately after the time point corresponding to the target circulation quantity as a second target time; adding the target circulation quantity to the tail of the historical dynamic feature data time series and deleting the historical dynamic feature data at the first position from the historical dynamic feature data time series to obtain a target historical dynamic feature data time series; inputting the target historical dynamic feature data time series into the encoding model to generate a first target feature vector list; splicing the first target feature vector list and the second feature vector to obtain a target spliced vector; inputting the target spliced vector into the decoding model to obtain the circulation quantity corresponding to the second target time, and determining the sum of the value of the first preset counter and a first preset step value as a first target count value; and, in response to determining that the first target count value meets a preset prediction frequency condition, ordering the obtained target circulation quantities to obtain a target circulation quantity sequence.
Optionally, the second input unit may be further configured to: in response to determining that the first target count value does not meet the preset prediction frequency condition, take the circulation quantity corresponding to the second target time as the target circulation quantity and the target historical dynamic feature data time series as the historical dynamic feature data time series, and execute the circulation quantity generation step again.
Optionally, the sample set of the encoding and decoding model is generated by the following steps: acquiring a historical circulation data time series of the target item, wherein each historical circulation datum in the historical circulation data time series comprises first dynamic feature data and first static feature data; determining the first dynamic feature data included in each historical circulation datum in the historical circulation data time series as first feature data to obtain a first feature data time series; determining the first static feature data included in any historical circulation datum in the historical circulation data time series as second feature data; and generating a sample set based on the first feature data time series, wherein each sample in the sample set comprises a sample circulation data sequence and a sample target circulation quantity.
Optionally, each first feature datum in the first feature data time series includes a historical circulation quantity; and the generation unit may be configured to: for each first feature datum in the first feature data time series, perform the following steps: selecting, based on the first feature datum, a preset number of first feature data meeting a preset continuity condition from the first feature data time series, and determining each selected first feature datum as target first feature data to obtain a target first feature data sequence; selecting target first feature data meeting a preset position condition from the target first feature data sequence as sample circulation data to obtain a sample circulation data sequence; determining the historical circulation quantity included in the target first feature data at the last position in the target first feature data sequence as the sample target circulation quantity; and determining the sample circulation data sequence and the sample target circulation quantity as a sample.
Optionally, the encoding and decoding model is trained by the following steps: based on the sample set, performing the following sample training step: inputting the second feature data and the sample circulation data sequence of each sample in the sample set into an initial encoding and decoding model to obtain a predicted circulation quantity corresponding to each sample in the sample set; determining the absolute value of the difference between the predicted circulation quantity corresponding to each sample in the sample set and the corresponding sample target circulation quantity as a sample error value to obtain a sample error value set; generating a target loss value for the sample error value set by using a preset target loss function; in response to determining that the target loss value is less than or equal to a preset threshold, determining the initial encoding and decoding model as the trained encoding and decoding model; and, in response to determining that the target loss value is greater than the preset threshold, adjusting parameters of the initial encoding and decoding model, taking the adjusted model as the initial encoding and decoding model, and executing the sample training step again.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantageous effects: with the item circulation quantity prediction method of some embodiments of the present disclosure, the circulation quantity corresponding to a target item can be predicted accurately by using the trained encoding and decoding model. Specifically, the reason the prediction of the item circulation quantity is inaccurate is that, whether the prediction is performed by a simple time series method or a machine learning method, the complete circulation quantity features cannot be captured. Based on this, the item circulation quantity prediction method of some embodiments of the present disclosure first acquires a historical dynamic feature data time series and historical static feature data of a target item over a historical time period, which makes it convenient to capture the complete circulation quantity features of the target item; these features comprise both dynamic and static features. The historical dynamic feature data time series is then input into the encoding model included in the pre-trained encoding and decoding model, yielding a first feature vector list that represents the dynamic features of the target item's circulation quantity. Next, word embedding is performed on the historical static feature data to obtain a second feature vector that represents the static features of the target item's circulation quantity. Then, the first feature vector list and the second feature vector are spliced to obtain a spliced vector representing the complete circulation quantity features of the target item.
Finally, the spliced vector representing the complete circulation quantity features of the target item is input into the decoding model included in the encoding and decoding model, so that the circulation quantity of the target item at the target time can be generated accurately. The encoding and decoding model thus addresses the problems that the complete circulation quantity features cannot be captured and that the prediction of the item circulation quantity is not accurate enough.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of one application scenario of an item circulation quantity prediction method according to some embodiments of the present disclosure;
FIG. 2 is a flow chart of some embodiments of an item circulation quantity prediction method according to the present disclosure;
FIG. 3 is a flow chart of further embodiments of an item circulation quantity prediction method according to the present disclosure;
FIG. 4 is a schematic structural diagram of some embodiments of an item circulation quantity prediction apparatus according to the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure;
FIG. 6 is a schematic diagram of one application scenario of the sample generation step of an item circulation quantity prediction method according to the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifiers "a", "an", and "one" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of an item circulation quantity prediction method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the electronic device 101 may acquire a historical dynamic feature data time series 103 and historical static feature data 104 of the target item 102 over a historical time period. In this application scenario, the target item 102 may be a mobile phone. The historical dynamic feature data time series 103 may include historical dynamic feature data 1031 and historical dynamic feature data 1032. The electronic device 101 may then input the historical dynamic feature data time series 103 into an encoding model 1051 included in the pre-trained encoding and decoding model 105 to generate a first feature vector list 106, where the encoding and decoding model 105 may be used to generate a predicted circulation quantity. In this application scenario, the first feature vector list 106 may include a first feature vector 1061 and a first feature vector 1062, and the encoding and decoding model 105 may include the encoding model 1051 and a decoding model 1052. Next, the electronic device 101 may perform word embedding on the historical static feature data 104 to obtain a second feature vector 107. Then, the electronic device 101 may splice the first feature vector list 106 and the second feature vector 107 to obtain a spliced vector 108. Finally, the electronic device 101 may input the spliced vector 108 into the decoding model 1052 included in the encoding and decoding model 105 to obtain the circulation quantity 109 corresponding to the target time. In this application scenario, the circulation quantity 109 may be any positive integer.
The electronic device 101 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or as a single server or a single terminal device. When it is software, it may be installed in the hardware devices listed above and implemented as a plurality of pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of electronic devices in fig. 1 is merely illustrative. There may be any number of electronic devices as desired for an implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of an item circulation quantity prediction method according to the present disclosure is shown. The item circulation quantity prediction method comprises the following steps:
Step 201, acquiring a historical dynamic feature data time series and historical static feature data of a target item over a historical time period.
In some embodiments, the execution body of the item circulation quantity prediction method (for example, the electronic device 101 shown in fig. 1) may acquire the historical dynamic feature data time series and the historical static feature data of the target item over the historical time period through a wired or wireless connection. The historical time period may be a period of time before the time point to be predicted. The target item may be an item whose circulation quantity is to be predicted. The historical dynamic feature data in the historical dynamic feature data time series may be specific feature data, at one point in time, of features that frequently vary with time. For example, the features corresponding to the historical dynamic feature data may include, but are not limited to, at least one of the following: circulation quantity, value information, and net residual value. The historical static feature data may be specific feature data corresponding to features that generally do not vary with time. For example, the features corresponding to the historical static feature data may include, but are not limited to, at least one of the following: color, size, and model.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, Wi-Fi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra wideband) connections, and other wireless connection means now known or developed in the future.
Step 202, inputting the historical dynamic feature data time series into a coding model included in a pre-trained coding and decoding model to generate a first feature vector list.
In some embodiments, the execution body may input the historical dynamic feature data time series into an encoding model included in a pre-trained encoding and decoding model to generate a first feature vector list. The encoding and decoding model is used to generate a predicted circulation quantity, i.e., the circulation quantity of the target item at a future point in time as predicted by the model. The encoding model included in the encoding and decoding model may be an attention-based encoding model that takes the historical dynamic feature data time series as input and outputs the first feature vector list. For example, the encoding model may be, but is not limited to, one of the following: the encoder part of a Transformer model, a convolutional neural network model based on an attention mechanism, or a recurrent neural network model based on an attention mechanism.
As an example, the execution body may input the historical dynamic feature data time series into the encoding model included in the pre-trained encoding and decoding model to generate the first feature vector list, where the encoding model includes 6 encoders. Each encoder included in the encoding model may include a multi-head attention model and a feedforward neural network model. The multi-head attention model may be a 6-head attention model.
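The multi-head attention blocks mentioned above are built on scaled dot-product attention. The following is a minimal single-head sketch over plain Python lists, with no learned projection matrices; it illustrates the core operation only, not the patented model:

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of attention scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(q[i] * k[i] for i in range(d)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

A multi-head layer would run several such attentions in parallel over learned projections of the input and concatenate the results.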
And 203, performing word embedding processing on the historical static feature data to obtain a second feature vector.
In some embodiments, the execution body may perform word embedding processing on the historical static feature data to obtain a second feature vector. Wherein the second feature vector may be a vector characterizing static feature data.
As an example, the execution body may perform word embedding processing on the historical static feature data through a preset word embedding method, so as to obtain a second feature vector. The word embedding method may be a method of converting words in text into numeric vectors. The above-mentioned preset word embedding method may be, but is not limited to, one of the following: a one-hot encoding method and a word2vec (word to vector) model.
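As a hedged sketch, the one-hot variant of this word embedding step might look like the following; the vocabularies and feature values (color, size) are assumptions taken from the examples in this disclosure, not a fixed API.

```python
def one_hot_embed(value, vocabulary):
    # One-hot encode a categorical static feature against a fixed vocabulary.
    vec = [0.0] * len(vocabulary)
    vec[vocabulary.index(value)] = 1.0
    return vec

# Hypothetical vocabularies for the static features of the target item.
colors = ["red", "green", "blue"]
sizes = ["5.1 inch", "6.1 inch"]

# The second feature vector concatenates the one-hot codes of each static feature.
second_feature_vector = one_hot_embed("red", colors) + one_hot_embed("5.1 inch", sizes)
```

The resulting vector has one dimension per vocabulary entry, with exactly one 1.0 per encoded feature.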
And 204, splicing the first characteristic vector list and the second characteristic vector to obtain a spliced vector.
In some embodiments, the execution body may splice the first feature vector list and the second feature vector to obtain a spliced vector. The splice vector may be a vector representing each of dynamic feature data and static feature data of the target object.
As an example, first, the execution body may splice each first feature vector in the first feature vector list in a preset splicing manner to obtain a dynamic feature splice vector. The preset splicing manner may be, but is not limited to, one of the following: a transverse tiling and splicing manner and a longitudinal tiling and splicing manner. Then, the execution body may splice the dynamic feature splice vector and the second feature vector in the preset splicing manner to obtain a splice vector. For the transverse tiling and splicing manner, if there are two first feature vectors in the first feature vector list, each first feature vector in the first feature vector list is an n-dimensional vector, and the second feature vector is an m-dimensional vector, then the splice vector is an (n + n + m)-dimensional vector. For the longitudinal tiling and splicing manner, if n is greater than m, the second feature vector is padded with 0 from the tail of the vector until it reaches n dimensions, and the splice vector is a 3 × n-dimensional vector; if n is smaller than m, each first feature vector is padded with 0 from the tail of the vector until it reaches m dimensions, and the splice vector is a 3 × m-dimensional vector; if n is equal to m, the splice vector is a 3 × n-dimensional vector.
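The two splicing manners described above can be sketched as follows; this is a minimal pure-Python illustration assuming two first feature vectors with n = 3 and a second feature vector with m = 2, and the function names are hypothetical.

```python
def splice_transverse(vectors):
    # Transverse tiling: lay the vectors end to end, so dimensions add up (n + n + m).
    out = []
    for v in vectors:
        out.extend(v)
    return out

def splice_longitudinal(vectors):
    # Longitudinal tiling: zero-pad every vector at the tail to the longest
    # length, then stack them as rows.
    width = max(len(v) for v in vectors)
    return [v + [0.0] * (width - len(v)) for v in vectors]

first_list = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # two n-dimensional vectors, n = 3
second = [7.0, 8.0]                              # m-dimensional vector, m = 2

flat = splice_transverse(first_list + [second])      # (n + n + m)-dimensional
stacked = splice_longitudinal(first_list + [second]) # 3 rows of n dims (n > m)
```

In the transverse case the result has 3 + 3 + 2 = 8 dimensions; in the longitudinal case the shorter second feature vector is padded with 0 to 3 dimensions.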
Step 205, inputting the spliced vector into a decoding model included in the encoding and decoding model to obtain the circulation quantity corresponding to the target time.
In some embodiments, the execution body may input the splicing vector into a decoding model included in the encoding and decoding model to obtain the stream quantity corresponding to the target time. The decoding model included in the encoding and decoding model may be a neural network model that takes a spliced vector as an input and takes a streaming quantity corresponding to a target time as an output. The target time may be a time point of the amount of flow to be predicted.
As an example, the execution body may input the splice vector to a decoding model included in the encoding and decoding model to obtain the circulation quantity corresponding to the target time. Wherein the decoding model included in the encoding and decoding model may be a fully connected neural network model. The decoding model included in the above-described encoding and decoding model may include an activation function layer and a linear output layer. The activation function included in the activation function layer may be a ReLU (Rectified Linear Unit) function. The linear output layer may be an output layer that performs a linear transformation on the output of the activation function layer to obtain a preset number of expected values. The preset number may be 1. First, the execution body may perform nonlinear mapping on the values in each dimension of the input splice vector by using the activation function included in the activation function layer, so as to obtain a splice value vector. Wherein the splice value of each dimension in the splice value vector may be a real number greater than or equal to 0. Then, the execution body may perform linear conversion on the splice values of each dimension in the splice value vector through the linear output layer to obtain the circulation quantity corresponding to the target time.
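A minimal sketch of such a decoding model's forward pass, assuming a single ReLU activation layer followed by a linear output layer producing one expected value; the weights and bias below are hypothetical stand-ins, not trained parameters.

```python
def relu(vec):
    # Nonlinear mapping: every dimension becomes a real number >= 0.
    return [max(0.0, x) for x in vec]

def linear_output(vec, weights, bias):
    # Linear transformation to a single expected value (the circulation quantity).
    return sum(w * x for w, x in zip(weights, vec)) + bias

splice_vector = [0.5, -1.2, 2.0]
hidden = relu(splice_vector)            # splice value vector: [0.5, 0.0, 2.0]
weights, bias = [2.0, 1.0, 3.0], 0.5    # hypothetical learned parameters
predicted_flow = linear_output(hidden, weights, bias)
```

With these toy values the output is 2.0 × 0.5 + 1.0 × 0.0 + 3.0 × 2.0 + 0.5 = 7.5.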
In some optional implementations of some embodiments, the sample set of the encoding and decoding models is generated by:
in the first step, the execution body may acquire a time series of historical circulation data of the target object. Wherein each of the historical circulation data in the above-mentioned time series of historical circulation data may include first dynamic feature data and first static feature data. The historical circulation data in the above-described time series of historical circulation data may be data that relates to the circulation amount of the target article at a past time point. The first dynamic characteristic data may be characteristic data of the target article that varies with time. The first static feature data may be feature data that does not change over time.
As an example, the executing entity may obtain a time series of historical circulation data of the target item. The historical circulation data in the historical circulation data time sequence can comprise circulation quantity data, value data, color data and size data. The flow volume data and the value data may be first dynamic feature data. The color data and the size data may be first static feature data. For the target item "× mobile phone", one circulation date corresponding to the target item may be "2022-01-01". The amount of the target item to be transferred on the corresponding transfer date may be 20. The value data for the target item at the corresponding date of the circulation may be 666 yuan. The color data of the target item may be red. The dimensional data for the target article may be 5.1 inches. The first dynamic feature data corresponding to the date of the stream may be {20, 666}. The first static feature data corresponding to the date of the circulation may be { "red", "5.1 inch" }. The historical transfer data corresponding to the transfer date may be {20, 666, "red", "5.1 inches" }.
And a second step, the execution body may determine first dynamic feature data included in each of the historical circulation data in the historical circulation data time series as first feature data, so as to obtain a first feature data time series. The first feature data time sequence may be an ordered set of first dynamic feature data corresponding to each time point.
In practice, the execution subject may determine, for each of the historical circulation data in the time series of historical circulation data, first dynamic feature data included in the historical circulation data as first feature data.
In the third step, the execution body may determine, as the second feature data, first static feature data included in any one of the history stream data in the history stream data time series.
As an example, the execution body may determine the first static feature data included in the first historical circulation data in the historical circulation data time series as the second feature data.
Fourth, the execution body may generate a sample set based on the first feature data time series. The samples in the sample set may include a sample flow data sequence and a sample target flow amount. The sample stream data sequence may be a set of sample stream data corresponding to a plurality of consecutive time points. The sample target flow rate may be a desired value of the flow rate.
As an example, the execution body may split the first feature data time series according to a fixed length value to obtain a sample set. The fixed length value may be 5. For a first feature data time series including 15 first feature data, the execution body may split it according to the fixed length value to obtain a sample set including 3 samples.
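The fixed-length split in this example can be sketched as follows; the function name is hypothetical and the 15-element series is a stand-in for real first feature data.

```python
def split_by_fixed_length(series, length):
    # Split a first feature data time series into non-overlapping
    # fixed-length samples; a trailing remainder shorter than
    # `length` is discarded.
    return [series[i:i + length]
            for i in range(0, len(series) - length + 1, length)]

series = list(range(15))  # 15 first feature data
samples = split_by_fixed_length(series, 5)
```

A 15-element series with a fixed length value of 5 yields exactly 3 samples.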
Alternatively, each of the first feature data in the above-described time series of first feature data may include a historical circulation amount. The execution body may generate a sample set based on the first feature data time series. The samples in the sample set may include a sample flow data sequence and a sample target flow amount. The following steps may be performed for each first characteristic data in the above-described time series of first characteristic data:
the first step, the execution subject may select a preset number of first feature data satisfying a preset continuity condition from the first feature data time series based on the first feature data, and determine each selected first feature data as target first feature data, to obtain a target first feature data sequence. The preset continuity condition may be that each of the selected first feature data is first feature data corresponding to a plurality of consecutive time points starting from the time point corresponding to the first feature data. The time points and the first feature data may be in one-to-one correspondence. The predetermined number may be a predetermined number of first feature data to be selected. The target first feature data sequence may be an ordered set of respective target first feature data corresponding to a plurality of consecutive time points.
And a second step, the execution body can select target first characteristic data meeting a preset position condition from the target first characteristic data sequence as sample circulation data to obtain a sample circulation data sequence. The preset position condition may be that the target first feature data in the target first feature data sequence is arranged before the end position in the target first feature data sequence.
As an example, first, the execution body may determine the number of target first feature data included in the target first feature data sequence as a data number value. Then, the execution body may use each target first feature data at the first (data number value minus 1) positions in the target first feature data sequence as sample circulation data to obtain a sample circulation data sequence.
And thirdly, the execution body can determine the historical circulation quantity included in the target first characteristic data at the last position in the target first characteristic data sequence as a sample target circulation quantity.
Fourth, the execution body may determine the sample flow data sequence and the sample target flow amount as samples.
As an example, fig. 6 shows one application scenario 600 of sample generation steps of an item flow prediction method according to the present disclosure. Wherein the first characteristic data time sequence 601 may include: first feature data 6011, first feature data 6012, first feature data 6013, first feature data 6014, and first feature data 6015. The predetermined number may be 4. First, the execution subject may select 4 first feature data satisfying a preset continuity condition, that is, first feature data 6011, first feature data 6012, first feature data 6013, and first feature data 6014, from the first feature data time series 601 based on the first feature data 6011. Then, the execution subject may determine the first feature data 6011, the first feature data 6012, the first feature data 6013, and the first feature data 6014 as target first feature data 6021, target first feature data 6022, target first feature data 6023, and target first feature data 6024, respectively, to obtain the target first feature data sequence 602. Then, the execution body may select, from the target first feature data sequence 602, target first feature data that satisfies a preset position condition as sample circulation data, to obtain a sample circulation data sequence 603. The sample stream data sequence 603 may include target first feature data 6021, target first feature data 6022, and target first feature data 6023, among other things. Next, the executing entity may determine the historical circulation amount corresponding to the target first feature data 6024 at the last position in the target first feature data sequence 602 as the sample target circulation amount 604. Finally, the execution body may determine the sample flow data sequence 603 and the sample target flow 604 as a sample 605.
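The sliding-window sample generation illustrated in scenario 600 can be sketched as follows; this is a minimal pure-Python illustration in which each first feature data is reduced to its historical circulation amount, and the window size of 4 mirrors the preset number in the figure.

```python
def generate_samples(series, window):
    # For each start offset with `window` consecutive entries, the first
    # window - 1 entries form the sample circulation data sequence and the
    # historical circulation amount of the last entry is the sample target.
    samples = []
    for start in range(len(series) - window + 1):
        target_seq = series[start:start + window]
        samples.append((target_seq[:-1], target_seq[-1]["circulation"]))
    return samples

# Five first feature data, each carrying a historical circulation amount.
series = [{"circulation": c} for c in (12, 20, 19, 22, 17)]
samples = generate_samples(series, 4)
```

With 5 entries and a window of 4, two samples are produced; in the first, entries 1-3 are the sample circulation data and entry 4's circulation amount (22) is the sample target circulation amount.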
Optionally, the encoding and decoding model is trained by the following steps:
first, based on the sample set, the following sample training steps are performed:
and a first sub-step of inputting the second characteristic data and the sample flow data sequence of each sample in the sample set to an initial coding and decoding model to obtain a predicted flow quantity corresponding to each sample in the sample set. The initial encoding and decoding model may be a model after model parameter initialization.
For example, for each sample in the sample set, the execution body may input the second feature data and a sample stream data sequence corresponding to the sample to the initial encoding and decoding model, so as to obtain a predicted stream output by the initial encoding and decoding model.
And a second sub-step of determining the absolute value of the difference between the predicted flow rate corresponding to each sample in the sample set and the corresponding sample target flow rate as a sample error value to obtain a sample error value set.
And a third sub-step of generating a target loss value for the sample error value set by using a preset target loss function. The predetermined objective loss function may be used to measure the degree of inconsistency between the predicted value (e.g., the predicted flow rate) and the actual value (e.g., the sample objective flow rate) of the model. The preset target loss function can be set according to actual requirements.
As an example, the execution body may determine a sum of the respective sample error values in the sample error value set as the target loss value using a preset target loss function.
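This example loss, the sum of the absolute sample error values, can be sketched as follows; the function name is hypothetical.

```python
def target_loss(predicted_flows, sample_targets):
    # Sum of absolute differences between each predicted circulation
    # quantity and the corresponding sample target circulation quantity.
    return sum(abs(p - t) for p, t in zip(predicted_flows, sample_targets))

loss = target_loss([10, 20], [12, 17])  # |10 - 12| + |20 - 17| = 5
```

Each term of the sum is one sample error value from the sample error value set.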
And a fourth sub-step of determining the initial encoding and decoding model as a trained encoding and decoding model in response to determining that the target loss value is equal to or less than a preset threshold. The preset threshold may be a predetermined loss value.
As an example, first, the execution body may determine that the target loss value is equal to or less than a preset threshold. The executing body may then determine the initial encoding and decoding model as a trained encoding and decoding model.
And a second step of adjusting parameters of the initial encoding and decoding model in response to determining that the target loss value is greater than the preset threshold, and re-executing the sample training step by using the adjusted initial encoding and decoding model as an initial encoding and decoding model.
As an example, first, the execution subject may determine that the target loss value is greater than the preset threshold. The execution entity may then adjust the parameters of the initial encoding and decoding model using a back propagation algorithm (Back Propagation Algorithm, BP algorithm) and a batch gradient descent method based on the target loss value. Finally, the execution body may execute the sample training step again using the adjusted initial encoding and decoding model as the initial encoding and decoding model.
It should be noted that the execution body may adjust the learning rate of the initial encoding and decoding model according to the target loss value. For example, in multi-pass training of the model, the execution body may reduce the learning rate of the initial encoding and decoding model to half of its original value when the target loss value decreases for several consecutive passes.
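One way this learning rate adjustment might look, as a hedged sketch: the function halves the learning rate once the target loss has fallen over a given number of consecutive passes, following the rule described above; the function name and the patience value are assumptions.

```python
def adjust_learning_rate(lr, loss_history, patience=3):
    # Halve the learning rate once the target loss value has decreased for
    # `patience` consecutive training passes; otherwise leave it unchanged.
    recent = loss_history[-(patience + 1):]
    if (len(recent) == patience + 1
            and all(a > b for a, b in zip(recent, recent[1:]))):
        return lr / 2.0
    return lr
```

For example, a strictly falling loss history such as [5, 4, 3, 2] halves a learning rate of 0.1 to 0.05, while a non-monotone history leaves it at 0.1.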
The above training process obtains the trained encoding and decoding model by means of batch input and overall adjustment of model parameters, but the present application is not limited to this method.
The above embodiments of the present disclosure have the following advantageous effects: the method for predicting the circulation quantity of an item can accurately predict the circulation quantity corresponding to the item by using the trained encoding and decoding model. Specifically, the reason why item circulation quantity prediction is inaccurate is the following: whether the prediction is performed by a simple time series method or a machine learning method, the model is too simple, so that the model cannot capture the complete circulation quantity features, and the prediction of the item circulation quantity is not accurate enough. Based on this, the item circulation quantity prediction method of some embodiments of the present disclosure first acquires a historical dynamic feature data time series and historical static feature data of a target item for a historical time period. In this way, the complete circulation quantity features of the target item, comprising both dynamic features and static features, can be captured conveniently. Then, the historical dynamic feature data time series is input to a coding model included in a pre-trained encoding and decoding model to generate a first feature vector list representing the dynamic features of the circulation quantity of the target item. Next, word embedding processing is performed on the historical static feature data to obtain a second feature vector representing the static features of the circulation quantity of the target item. Then, the first feature vector list and the second feature vector are spliced to obtain a splice vector representing the complete circulation quantity features of the target item.
And finally, inputting the spliced vector representing the complete circulation quantity characteristic of the target object into a decoding model included in the coding and decoding model, so that the circulation quantity of the target object at the target time can be accurately generated. Therefore, the encoding and decoding model solves the problems that the complete circulation quantity characteristics cannot be captured and the prediction of the commodity circulation quantity is not accurate enough.
With further reference to FIG. 3, a flow 300 of further embodiments of the item flow amount prediction method is shown. The flow 300 of the item flow quantity prediction method includes the following steps:
step 301, acquiring a historical dynamic characteristic data time sequence and historical static characteristic data of a target object in a historical time period.
Step 302, inputting the historical dynamic feature data time series into an encoding model included in a pre-trained encoding and decoding model to generate a first feature vector list.
And step 303, performing word embedding processing on the historical static feature data to obtain a second feature vector.
And step 304, splicing the first feature vector list and the second feature vector to obtain a spliced vector.
In step 305, the spliced vector is input to a decoding model included in the encoding and decoding model, so as to obtain the stream quantity corresponding to the target time.
In some embodiments, the specific implementation of steps 301 to 305 and the technical effects thereof may refer to steps 201 to 205 in the corresponding embodiment of fig. 2, which are not described herein.
Step 306, initializing the value of the first preset counter to a first preset count value.
In some embodiments, the executing entity (e.g., the electronic device 101 shown in fig. 1) may initialize the value of the first preset counter to a first preset count value. The first preset counter may be used to count the number of predictions. The number of predictions may be the number of time points of the amount of flow to be predicted. The time point may be a specific day or time. The first preset count value may be a preset integer number.
As an example, the first preset count value may be 1. The execution body may use the first preset count value as an initial count value to initialize the first preset counter. The first preset counter may start counting from 1.
Step 307, determining the obtained circulation quantity corresponding to the target time as a target circulation quantity.
In some embodiments, the executing body may determine the obtained circulation amount corresponding to the target time as the target circulation amount. The target flow rate may be a predicted flow rate of the target article.
Step 308, based on the target flow volume, performing the following flow volume generation steps:
step 3081, determining a later time point of the target flow amount corresponding time point as the second target time.
In some embodiments, the execution body may determine a time point subsequent to the time point corresponding to the target flow amount as the second target time. For example, the target flow amount may correspond to a time point of "2022-9-28", and the second target time may be "2022-9-29".
Step 3082, adding the target circulation quantity to the end of the historical dynamic feature data time sequence, and deleting the historical dynamic feature data corresponding to the first position from the historical dynamic feature data time sequence to obtain a target historical dynamic feature data time sequence.
In some embodiments, the executing entity may add the target flow amount to the end of the historical dynamic feature data time sequence, and delete the historical dynamic feature data corresponding to the first location from the historical dynamic feature data time sequence to obtain the target historical dynamic feature data time sequence. The target historical dynamic feature data time sequence may be an updated historical dynamic feature data time sequence.
As an example, the target flow amount described above may be 15. The historical dynamic feature data time series may be {12, 20, 19, 22, 17}. First, the execution body may add the target flow amount 15 to the end of the historical dynamic feature data time series to obtain an added historical dynamic feature data time series {12, 20, 19, 22, 17, 15}. Then, the execution subject may delete the history dynamic feature data 12 corresponding to the first location from the added history dynamic feature data time series {12, 20, 19, 22, 17, 15} to obtain the target history dynamic feature data time series {20, 19, 22, 17, 15}.
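The rolling update in this example can be sketched as follows; the function name is hypothetical.

```python
def roll_series(series, new_value):
    # Append the newly predicted circulation amount at the tail and delete
    # the entry at the first position, keeping the window length fixed.
    return series[1:] + [new_value]

history = [12, 20, 19, 22, 17]            # historical dynamic feature data time series
target_history = roll_series(history, 15)  # target circulation amount = 15
```

With a target circulation amount of 15, {12, 20, 19, 22, 17} becomes {20, 19, 22, 17, 15}, as in the example above.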
Step 3083, the target historical dynamic feature data time series is input to the coding model to generate a first target feature vector list.
In some embodiments, the execution body may input the target historical dynamic feature data time series to the encoding model to generate a first target feature vector list. Wherein, the first target feature vector in the first target feature vector list may be a vector characterizing dynamic feature data.
As an example, first, the execution subject may input the target historical dynamic feature data time series into the encoding model. Then, for each target historical dynamic feature data in the target historical dynamic feature data time sequence, the execution subject may perform feature extraction on the target historical dynamic feature data by using each encoder to obtain a first target feature vector. Wherein the output of the previous encoder may be the input of the next encoder.
Step 3084, stitching the first target feature vector list and the second feature vector to obtain a target stitching vector.
In some embodiments, the execution body may splice the first target feature vector list and the second feature vector to obtain a target spliced vector. The target stitching vector may be a vector representing each of dynamic feature data and static feature data of the target object.
As an example, first, the executing body may splice each first target feature vector in the first target feature vector list by using the preset splicing manner, so as to obtain a target dynamic feature splice vector. Then, the execution body may splice the target dynamic feature splice vector and the second feature vector in the preset splice manner, so as to obtain a target splice vector.
Step 3085, inputting the target splicing vector into the decoding model to obtain a stream quantity corresponding to the second target time, and determining the sum of the value of the first preset counter and the first preset step value as the first target count value.
In some embodiments, the execution body may input the target splicing vector to the decoding model, obtain a stream amount corresponding to the second target time, and determine a sum of the value of the first preset counter and a first preset step value as the first target count value. The first preset step value may be a value that increases when the value of the first preset counter changes each time. The first target count value may be a result of a change in the value of the first preset counter.
Step 3086, in response to determining that the first target count value meets a preset prediction count condition, sorting the obtained target circulation amounts to obtain a target circulation amount sequence.
In some embodiments, the executing body may sort the obtained target circulation amounts to obtain a target circulation amount sequence in response to determining that the first target count value meets the preset prediction count condition. The preset prediction count condition may be that the first target count value is greater than the number of predictions. The number of predictions may be the planned number of predictions.
As an example, first, the execution subject may determine that the first target count value is greater than the number of predictions. Then, the execution body may sort the obtained target circulation amounts in chronological order of their corresponding time points to obtain a target circulation amount sequence.
In step 309, in response to determining that the first target count value does not meet the preset prediction count condition, the circulation amount generation step is performed again with the circulation amount corresponding to the second target time as the target circulation amount and the above-mentioned target historical dynamic feature data time series as the historical dynamic feature data time series.
In some embodiments, the executing body may, in response to determining that the first target count value does not meet the preset prediction count condition, execute the circulation amount generation step again with the circulation amount corresponding to the second target time as the target circulation amount and the target historical dynamic feature data time series as the historical dynamic feature data time series.
As an example, first, the execution subject may determine that the first target count value is equal to or less than the predicted number of times. Then, the execution subject may execute the transfer amount generation step again with the transfer amount corresponding to the second target time as a target transfer amount and the target historical dynamic characteristic data time series as a historical dynamic characteristic data time series.
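Steps 306 to 309 together form an iterative multi-step prediction loop, which can be sketched as follows. This is a minimal illustration with hypothetical names; the stub model (a simple mean over the window) merely stands in for the encode-splice-decode pipeline of the trained encoding and decoding model.

```python
def predict_sequence(history, model, steps):
    # Roll the window forward `steps` times (the first preset counter),
    # feeding each predicted circulation amount back in as the newest
    # dynamic feature and dropping the oldest one.
    history = list(history)
    predictions = []
    for _ in range(steps):
        flow = model(history)  # stands in for: encode, splice, decode
        predictions.append(flow)
        history = history[1:] + [flow]
    return predictions

def mean_model(window):
    # Stub prediction model: mean of the window.
    return sum(window) / len(window)

sequence = predict_sequence([12, 20, 19, 22, 17], mean_model, 3)
```

Each iteration produces one target circulation amount; after the planned number of predictions is reached, the collected amounts form the target circulation amount sequence of step 3086.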
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of item circulation quantity prediction in some embodiments corresponding to fig. 3 highlights the specific steps of generating the target circulation quantity sequence from the historical dynamic feature data time series, the historical static feature data, and the pre-trained encoding and decoding model. Thus, the schemes described in these embodiments add the target circulation quantity predicted in the previous step to the end of the historical dynamic feature data time series to generate the target historical dynamic feature data time series, and use the target historical dynamic feature data time series and the historical static feature data as inputs of the encoding and decoding model, so that a more accurate target circulation quantity sequence can be generated.
With further reference to fig. 4, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an item flow amount prediction apparatus, which correspond to those method embodiments shown in fig. 2, and which are particularly applicable in various electronic devices.
As shown in fig. 4, an item flow amount prediction apparatus 400 includes: an acquisition unit 401, a first input unit 402, a word embedding processing unit 403, a concatenation unit 404, and a second input unit 405. Wherein the obtaining unit 401 is configured to obtain a historical dynamic feature data time sequence and a historical static feature data of the target object in the historical time period; a first input unit 402 configured to input the historical dynamic feature data time series to an encoding model included in a pre-trained encoding and decoding model for generating a predicted streaming amount, to generate a first feature vector list; a word embedding processing unit 403 configured to perform word embedding processing on the historical static feature data to obtain a second feature vector; a stitching unit 404, configured to stitch the first feature vector list and the second feature vector to obtain a stitched vector; and a second input unit 405 configured to input the spliced vector into a decoding model included in the encoding and decoding model, so as to obtain a stream quantity corresponding to the target time.
In some optional implementations of some embodiments, the second input unit 405 may be further configured to: initializing the value of a first preset counter to be a first preset count value; determining the obtained circulation quantity corresponding to the target time as a target circulation quantity; based on the target flow amount, the following flow amount generation step is performed: determining a later time point of the time points corresponding to the target flow quantity as a second target time; adding the target flow quantity to the tail of the historical dynamic characteristic data time sequence, and deleting the historical dynamic characteristic data corresponding to the first position from the historical dynamic characteristic data time sequence to obtain a target historical dynamic characteristic data time sequence; inputting the target historical dynamic characteristic data time sequence into the coding model to generate a first target characteristic vector list; splicing the first target feature vector list and the second feature vector to obtain a target spliced vector; inputting the target splicing vector into the decoding model to obtain a circulation quantity corresponding to the second target time, and determining the sum of the value of the first preset counter and a first preset step value as a first target count value; and in response to determining that the first target count value meets a preset prediction frequency condition, sequencing the obtained target circulation quantity to obtain a target circulation quantity sequence.
In some optional implementations of some embodiments, the second input unit 405 may be further configured to: and in response to determining that the first target count value does not meet the preset number of times condition, taking the circulation quantity corresponding to the second target time as a target circulation quantity, and taking the target historical dynamic characteristic data time sequence as a historical dynamic characteristic data time sequence, executing the circulation quantity generation step again.
In some optional implementations of some embodiments, the sample set for the coding and decoding model is generated by: acquiring a historical circulation data time sequence of the target item, wherein each historical circulation data in the historical circulation data time sequence includes first dynamic characteristic data and first static characteristic data; determining the first dynamic characteristic data included in each historical circulation data in the historical circulation data time sequence as first characteristic data to obtain a first characteristic data time sequence; determining the first static characteristic data included in any historical circulation data in the historical circulation data time sequence as second characteristic data; and generating a sample set based on the first characteristic data time sequence, wherein each sample in the sample set includes a sample circulation data sequence and a sample target circulation quantity.
In some optional implementations of some embodiments, each first characteristic data in the first characteristic data time sequence includes a historical circulation quantity; and the generation unit in the item circulation quantity prediction apparatus 400 may be configured to: for each first characteristic data in the first characteristic data time sequence, perform the following steps: selecting, based on the first characteristic data, a preset number of first characteristic data satisfying a preset continuity condition from the first characteristic data time sequence, and determining each selected first characteristic data as target first characteristic data to obtain a target first characteristic data sequence; selecting target first characteristic data satisfying a preset position condition from the target first characteristic data sequence as sample circulation data to obtain a sample circulation data sequence; determining the historical circulation quantity included in the target first characteristic data at the last position in the target first characteristic data sequence as a sample target circulation quantity; and determining the sample circulation data sequence and the sample target circulation quantity as a sample.
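The sample construction above is effectively a sliding window over the first characteristic data time sequence. A sketch under stated assumptions: that the preset continuity condition means consecutive entries, that the preset position condition means all entries except the last, and that a sample is a (sequence, target) tuple; the patent fixes none of these details.

```python
def build_samples(first_feature_series, window_len):
    """For each valid start, take window_len consecutive entries (assumed preset
    continuity condition); all but the last form the sample circulation data
    sequence, and the last entry's circulation quantity is the sample target."""
    samples = []
    for start in range(len(first_feature_series) - window_len + 1):
        window = first_feature_series[start:start + window_len]
        samples.append((window[:-1], window[-1]))
    return samples

# Each element here plays the role of one first characteristic data entry,
# reduced to just its historical circulation quantity for illustration.
samples = build_samples([10, 12, 9, 14, 11], window_len=3)
```

Each pass over the series yields one sample per valid window start, so a series of length n produces n - window_len + 1 samples.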
In some optional implementations of some embodiments, the coding and decoding model is trained by: based on the sample set, performing the following sample training step: inputting the second characteristic data and the sample circulation data sequence of each sample in the sample set into an initial coding and decoding model to obtain a predicted circulation quantity corresponding to each sample in the sample set; determining the absolute value of the difference between the predicted circulation quantity corresponding to each sample in the sample set and the corresponding sample target circulation quantity as a sample error value, to obtain a sample error value set; generating a target loss value for the sample error value set by using a preset target loss function; in response to determining that the target loss value is less than or equal to a preset threshold, determining the initial coding and decoding model as the trained coding and decoding model; and, in response to determining that the target loss value is greater than the preset threshold, adjusting parameters of the initial coding and decoding model, taking the adjusted model as the initial coding and decoding model, and performing the sample training step again.
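The training procedure above can be sketched as follows, assuming mean absolute error over the sample error value set as the preset target loss function and substituting a toy one-parameter model for the initial coding and decoding model; both are illustrative assumptions, not the patent's actual model or loss.

```python
class ToyModel:
    """One-parameter stand-in for the initial coding and decoding model."""
    def __init__(self):
        self.scale = 0.0

    def predict(self, static_vec, seq):
        return self.scale * seq[-1] + static_vec

    def adjust(self, samples, static_vec, lr):
        # crude sign-gradient step on each sample error (parameter adjustment)
        for seq, target in samples:
            err = self.predict(static_vec, seq) - target
            self.scale -= lr * (1.0 if err > 0 else -1.0) * seq[-1]

def train(model, samples, static_vec, loss_threshold, lr=0.01, max_rounds=1000):
    for _ in range(max_rounds):
        # sample error values: |predicted - target| for every sample
        errors = [abs(model.predict(static_vec, seq) - target)
                  for seq, target in samples]
        target_loss = sum(errors) / len(errors)    # MAE as the target loss value
        if target_loss <= loss_threshold:
            break                                  # trained model: loss at threshold
        model.adjust(samples, static_vec, lr)      # adjust parameters, train again
    return model

# Targets here are twice the last window entry, so scale should approach 2.
model = train(ToyModel(), [([1, 2], 4.0), ([2, 3], 6.0)], 0.0, loss_threshold=0.2)
```

The loop mirrors the two branches in the text: return the model once the target loss value reaches the preset threshold, otherwise adjust parameters and repeat the sample training step.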
It will be appreciated that the units described in the item circulation quantity prediction apparatus 400 correspond to the respective steps of the method described with reference to fig. 2. Thus, the operations, features, and resulting benefits described above with respect to the method apply equally to the apparatus 400 and the units contained therein, and are not described again here.
Referring now to fig. 5, a schematic diagram of an electronic device 500 (e.g., electronic device 101 of fig. 1) suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 5 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communications device 509, or from the storage device 508, or from the ROM 502. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that, in some embodiments of the present disclosure, the computer readable medium may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device, or may exist separately without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a historical dynamic feature data time series and historical static feature data of a target item for a historical time period; input the historical dynamic feature data time series into a coding model included in a pre-trained coding and decoding model to generate a first feature vector list, wherein the coding and decoding model is used to generate a predicted circulation quantity; perform word embedding processing on the historical static feature data to obtain a second feature vector; splice the first feature vector list and the second feature vector to obtain a spliced vector; and input the spliced vector into a decoding model included in the coding and decoding model to obtain a circulation quantity corresponding to a target time.
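The steps restated above form a single forward pass: encode the dynamic series, embed the static features, splice the two, and decode. A compact sketch with `encoder`, `embed`, and `decoder` as hypothetical callables standing in for the coding model, the word-embedding step, and the decoding model:

```python
def predict_circulation(dynamic_series, static_features, encoder, embed, decoder):
    feature_vec_list = encoder(dynamic_series)   # first feature vector list
    second_vec = embed(static_features)          # second feature vector (word embedding)
    spliced = feature_vec_list + [second_vec]    # spliced vector
    return decoder(spliced)                      # circulation quantity at the target time

# Toy stand-ins to make the data flow concrete; real models would replace them.
qty = predict_circulation(
    [1.0, 2.0, 3.0], ["category-A", "warehouse-7"],
    encoder=lambda s: [sum(s)],
    embed=lambda f: float(len(f)),
    decoder=lambda vs: sum(vs),
)
```

The point of the sketch is the wiring, not the models: static features enter once as a single embedded vector, while the dynamic series supplies the list it is spliced onto.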
Computer program code for carrying out operations of some embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software or by means of hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit, a first input unit, a word embedding processing unit, a splicing unit, and a second input unit. In some cases, the names of these units do not constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires a historical dynamic feature data time series and historical static feature data of a target item for a historical time period".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description covers only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (9)

1. A method for predicting item circulation quantity, comprising:
acquiring a historical dynamic characteristic data time sequence and historical static characteristic data of a target object in a historical time period;
inputting the historical dynamic characteristic data time sequence into a coding model included in a pre-trained coding and decoding model to generate a first characteristic vector list, wherein the coding and decoding model is used for generating a predicted circulation quantity;
performing word embedding processing on the historical static characteristic data to obtain a second characteristic vector;
splicing the first characteristic vector list and the second characteristic vector to obtain a spliced vector;
and inputting the spliced vector into a decoding model included in the coding and decoding model to obtain the circulation quantity corresponding to the target time.
2. The method of claim 1, wherein the method further comprises:
initializing the value of a first preset counter to be a first preset count value;
determining the obtained circulation quantity corresponding to the target time as a target circulation quantity;
based on the target circulation quantity, performing the following circulation quantity generation step:
determining the time point immediately after the time point corresponding to the target circulation quantity as a second target time;
adding the target circulation quantity to the tail of the historical dynamic characteristic data time sequence, and deleting the historical dynamic characteristic data at the first position from the historical dynamic characteristic data time sequence, to obtain a target historical dynamic characteristic data time sequence;
inputting the target historical dynamic characteristic data time sequence into the coding model to generate a first target characteristic vector list;
splicing the first target characteristic vector list and the second characteristic vector to obtain a target spliced vector;
inputting the target spliced vector into the decoding model to obtain a circulation quantity corresponding to the second target time, and determining the sum of the value of the first preset counter and a first preset step value as a first target count value;
and in response to determining that the first target count value satisfies a preset prediction count condition, sorting all obtained target circulation quantities to obtain a target circulation quantity sequence.
3. The method of claim 2, wherein the method further comprises:
and in response to determining that the first target count value does not satisfy the preset prediction count condition, taking the circulation quantity corresponding to the second target time as the target circulation quantity and the target historical dynamic characteristic data time sequence as the historical dynamic characteristic data time sequence, and performing the circulation quantity generation step again.
4. The method of claim 1, wherein a sample set for the coding and decoding model is generated by:
acquiring a historical circulation data time sequence of the target object, wherein each historical circulation data in the historical circulation data time sequence comprises first dynamic characteristic data and first static characteristic data;
determining first dynamic characteristic data included in each historical circulation data in the historical circulation data time sequence as first characteristic data to obtain a first characteristic data time sequence;
determining first static characteristic data included in any one of the historical circulation data in the historical circulation data time sequence as second characteristic data;
and generating a sample set based on the first characteristic data time sequence, wherein samples in the sample set comprise a sample circulation data sequence and a sample target circulation quantity.
5. The method of claim 4, wherein each first characteristic data in the first characteristic data time sequence comprises a historical circulation quantity; and
the generating a sample set based on the first characteristic data time sequence, wherein samples in the sample set comprise a sample circulation data sequence and a sample target circulation quantity, and the generating comprises the following steps:
For each first characteristic data in the time series of first characteristic data, performing the steps of:
selecting a preset number of first characteristic data meeting preset continuity conditions from the first characteristic data time sequence based on the first characteristic data, and determining each selected first characteristic data as target first characteristic data to obtain a target first characteristic data sequence;
selecting target first characteristic data meeting preset position conditions from the target first characteristic data sequence as sample circulation data to obtain a sample circulation data sequence;
determining a historical circulation quantity included in target first characteristic data at the last position in the target first characteristic data sequence as a sample target circulation quantity;
and determining the sample circulation data sequence and the sample target circulation quantity as a sample.
6. The method of claim 4, wherein the coding and decoding model is trained by:
based on the sample set, the following sample training steps are performed:
inputting the second characteristic data and the sample circulation data sequence of each sample in the sample set into an initial coding and decoding model to obtain a predicted circulation quantity corresponding to each sample in the sample set;
determining the absolute value of the difference between the predicted circulation quantity corresponding to each sample in the sample set and the corresponding sample target circulation quantity as a sample error value, to obtain a sample error value set;
generating a target loss value for the sample error value set by using a preset target loss function;
in response to determining that the target loss value is less than or equal to a preset threshold, determining the initial coding and decoding model as a trained coding and decoding model;
and in response to determining that the target loss value is greater than the preset threshold, adjusting parameters of the initial coding and decoding model, taking the adjusted initial coding and decoding model as the initial coding and decoding model, and performing the sample training step again.
7. An item circulation quantity prediction apparatus, comprising:
an acquisition unit configured to acquire a historical dynamic feature data time series and historical static feature data of a target article of a historical period;
a first input unit configured to input the historical dynamic feature data time series into a coding model included in a pre-trained coding and decoding model to generate a first feature vector list, wherein the coding and decoding model is used to generate a predicted circulation quantity;
The word embedding processing unit is configured to perform word embedding processing on the historical static feature data to obtain a second feature vector;
the splicing unit is configured to splice the first characteristic vector list and the second characteristic vector to obtain a spliced vector;
and the second input unit is configured to input the spliced vector into a decoding model included in the coding and decoding model to obtain the circulation quantity corresponding to the target time.
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-6.
CN202211670369.4A 2022-12-26 2022-12-26 Method, apparatus, device and computer readable medium for predicting commodity circulation quantity Active CN115630585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211670369.4A CN115630585B (en) 2022-12-26 2022-12-26 Method, apparatus, device and computer readable medium for predicting commodity circulation quantity


Publications (2)

Publication Number Publication Date
CN115630585A CN115630585A (en) 2023-01-20
CN115630585B true CN115630585B (en) 2023-05-02

Family

ID=84909746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211670369.4A Active CN115630585B (en) 2022-12-26 2022-12-26 Method, apparatus, device and computer readable medium for predicting commodity circulation quantity

Country Status (1)

Country Link
CN (1) CN115630585B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8442821B1 (en) * 2012-07-27 2013-05-14 Google Inc. Multi-frame prediction for hybrid neural network/hidden Markov models
CN110633853A (en) * 2019-09-12 2019-12-31 北京彩云环太平洋科技有限公司 Training method and device of space-time data prediction model and electronic equipment
CN113408797A (en) * 2021-06-07 2021-09-17 北京京东振世信息技术有限公司 Method for generating flow-traffic prediction multi-time-sequence model, information sending method and device
CN114202130A (en) * 2022-02-11 2022-03-18 北京京东振世信息技术有限公司 Flow transfer amount prediction multitask model generation method, scheduling method, device and equipment
CN114429365A (en) * 2022-01-12 2022-05-03 北京京东振世信息技术有限公司 Article sales information generation method and device, electronic equipment and computer medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190268283A1 (en) * 2018-02-23 2019-08-29 International Business Machines Corporation Resource Demand Prediction for Distributed Service Network


Also Published As

Publication number Publication date
CN115630585A (en) 2023-01-20

Similar Documents

Publication Publication Date Title
CN110852421B (en) Model generation method and device
CN113408797B (en) Method for generating multi-time sequence model of flow quantity prediction, method and device for sending information
CN113436620B (en) Training method of voice recognition model, voice recognition method, device, medium and equipment
WO2019141902A1 (en) An apparatus, a method and a computer program for running a neural network
CN115085196B (en) Power load predicted value determination method, device, equipment and computer readable medium
CN113128419B (en) Obstacle recognition method and device, electronic equipment and storage medium
CN113327599A (en) Voice recognition method, device, medium and electronic equipment
CN116562600B (en) Water supply control method, device, electronic equipment and computer readable medium
CN115630585B (en) Method, apparatus, device and computer readable medium for predicting commodity circulation quantity
CN117035842A (en) Model training method, traffic prediction method, device, equipment and medium
CN117241092A (en) Video processing method and device, storage medium and electronic equipment
CN111653261A (en) Speech synthesis method, speech synthesis device, readable storage medium and electronic equipment
CN114639072A (en) People flow information generation method and device, electronic equipment and computer readable medium
CN114511152A (en) Training method and device of prediction model
CN115222036A (en) Model training method, characterization information acquisition method and route planning method
CN113361701A (en) Quantification method and device of neural network model
CN116107666B (en) Program service flow information generation method, device, electronic equipment and computer medium
CN111949938B (en) Determination method and device of transaction information, electronic equipment and computer readable medium
CN116882591B (en) Information generation method, apparatus, electronic device and computer readable medium
CN118052580A (en) Model generation method, order quantity generation device, equipment and medium
CN115221427A (en) Time series prediction method, apparatus, device, medium, and program product
CN112417151A (en) Method for generating classification model and method and device for classifying text relation
CN111582482A (en) Method, apparatus, device and medium for generating network model information
CN116757752A (en) Method and device for determining delivery result, readable medium and electronic equipment
CN117541403A (en) Risk database construction method and device, electronic equipment and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant