CN115630585A - Article traffic prediction method, device, equipment and computer readable medium - Google Patents


Info

Publication number
CN115630585A
CN115630585A
Authority
CN
China
Prior art keywords: target, historical, data, sample, characteristic data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211670369.4A
Other languages
Chinese (zh)
Other versions
CN115630585B (en)
Inventor
黄智杰
庄晓天
吴盛楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Zhenshi Information Technology Co Ltd
Original Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Zhenshi Information Technology Co Ltd filed Critical Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority to CN202211670369.4A
Publication of CN115630585A
Application granted
Publication of CN115630585B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/953 - Querying, e.g. by the use of web search engines
    • G06F16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Abstract

Embodiments of the present disclosure disclose an article circulation volume prediction method, apparatus, device and computer readable medium. One embodiment of the method comprises: acquiring a historical dynamic feature data time series and historical static feature data of a target article over a historical time period; inputting the historical dynamic feature data time series into a coding model included in a pre-trained coding and decoding model to generate a first feature vector list, where the coding and decoding model is used to generate a predicted circulation volume; performing word embedding on the historical static feature data to obtain a second feature vector; splicing the first feature vector list and the second feature vector to obtain a spliced vector; and inputting the spliced vector into a decoding model included in the coding and decoding model to obtain the circulation volume corresponding to the target time. This embodiment relates to artificial intelligence: by using the trained coding and decoding model, the circulation volume corresponding to the target article can be predicted more accurately.

Description

Article traffic prediction method, device, equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the technical field of computers, and in particular to an article circulation volume prediction method, apparatus, device and computer readable medium.
Background
Predicting the circulation volume of articles is an important step in circulation planning: accurately forecasting an article's future circulation volume plays an important role in controlling inventory value and inventory levels. To predict an article's future circulation volume, the following approach is generally used: the circulation volume of the article at a certain future time point is predicted using a simple time-series method or a pre-trained machine learning model.
However, the inventors have found that the above approach often suffers from the following technical problem:
whether prediction is performed with a simple time series or a machine learning method, the complete features of the circulation volume cannot be captured, so the prediction of the article's circulation volume is not accurate enough.
The information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and may therefore contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose article circulation volume prediction methods, apparatuses, devices and computer readable media to address one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an article circulation volume prediction method, the method including: acquiring a historical dynamic feature data time series and historical static feature data of a target article over a historical time period; inputting the historical dynamic feature data time series into a coding model included in a pre-trained coding and decoding model to generate a first feature vector list, where the coding and decoding model is used to generate a predicted circulation volume; performing word embedding on the historical static feature data to obtain a second feature vector; splicing the first feature vector list and the second feature vector to obtain a spliced vector; and inputting the spliced vector into a decoding model included in the coding and decoding model to obtain the circulation volume corresponding to the target time.
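The data flow of the first aspect can be illustrated with a small, runnable sketch in which every model is replaced by a trivial arithmetic stand-in (the function names and stubs below are illustrative only, not the claimed models):

```python
def encode_dynamic(series):
    # coding-model stand-in: one feature vector per time step
    return [[float(x)] for x in series]

def embed_static(attributes):
    # word-embedding stand-in: one number per static attribute
    return [float(len(a)) for a in attributes]

def splice(vector_list, second_vector):
    # lay the first feature vectors end to end, then append the second vector
    flat = []
    for v in vector_list:
        flat += v
    return flat + second_vector

def decode(spliced):
    # decoding-model stand-in: collapse the spliced vector to one volume
    return sum(spliced)

history = [4.0, 6.0, 5.0]   # historical dynamic feature data time series
static = ["red", "xl"]      # historical static feature data
volume = decode(splice(encode_dynamic(history), embed_static(static)))
```

The sketch only shows how the four intermediate artifacts (first feature vector list, second feature vector, spliced vector, circulation volume) connect; any real encoder and decoder would replace the stubs.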
Optionally, the method further includes: initializing the value of a first preset counter to a first preset count value; determining the obtained circulation volume corresponding to the target time as a target circulation volume; and, based on the target circulation volume, executing the following circulation volume generation step: determining the time point following the time point corresponding to the target circulation volume as a second target time; adding the target circulation volume to the end of the historical dynamic feature data time series and deleting the historical dynamic feature data at the first position from the series to obtain a target historical dynamic feature data time series; inputting the target historical dynamic feature data time series into the coding model to generate a first target feature vector list; splicing the first target feature vector list and the second feature vector to obtain a target spliced vector; inputting the target spliced vector into the decoding model to obtain the circulation volume corresponding to the second target time, and determining the sum of the value of the first preset counter and a first preset step value as a first target count value; and, in response to determining that the first target count value satisfies a preset prediction-count condition, sorting the obtained target circulation volumes to obtain a target circulation volume sequence.
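The iterative, window-sliding prediction described above can be sketched as follows; `encode` and `decode` are illustrative stubs standing in for the trained coding and decoding models so the loop itself is runnable:

```python
def encode(series):
    # stand-in for the coding model: identity features
    return [float(x) for x in series]

def decode(vector):
    # stand-in for the decoding model: mean of the spliced vector
    return sum(vector) / len(vector)

def rolling_predict(history, static_vector, num_steps):
    """Predict num_steps volumes, sliding the history window by one each time."""
    counter = 0                  # first preset counter at its preset count value
    series = list(history)
    predictions = []
    volume = decode(encode(series) + static_vector)  # volume for the target time
    while True:
        predictions.append(volume)
        # append the new prediction at the end, drop the first position
        series = series[1:] + [volume]
        counter += 1             # add the first preset step value
        if counter >= num_steps:  # preset prediction-count condition
            return predictions
        volume = decode(encode(series) + static_vector)

preds = rolling_predict([10.0, 12.0, 11.0], [1.0], num_steps=3)
```

Each iteration mirrors the claim: the newest prediction replaces the oldest history entry, and the counter decides when the sequence of target circulation volumes is complete.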
Optionally, the method further includes: in response to determining that the first target count value does not satisfy the preset prediction-count condition, executing the circulation volume generation step again, with the circulation volume corresponding to the second target time as the target circulation volume and the target historical dynamic feature data time series as the historical dynamic feature data time series.
Optionally, the sample set of the coding and decoding model is generated by: acquiring a historical circulation data time series of the target article, where each item of historical circulation data in the series includes first dynamic feature data and first static feature data; determining the first dynamic feature data included in each item of historical circulation data as first feature data to obtain a first feature data time series; determining the first static feature data included in any item of historical circulation data as second feature data; and generating a sample set based on the first feature data time series, where each sample in the sample set includes a sample circulation data sequence and a sample target circulation volume.
Optionally, each item of first feature data in the first feature data time series includes a historical circulation volume; and generating the sample set based on the first feature data time series includes, for each item of first feature data in the series, performing the following steps: selecting, based on that first feature data, a preset number of items of first feature data satisfying a preset continuity condition from the first feature data time series, and determining each selected item as target first feature data to obtain a target first feature data sequence; selecting target first feature data satisfying a preset position condition from the target first feature data sequence as sample circulation data to obtain a sample circulation data sequence; determining the historical circulation volume included in the target first feature data at the last position of the target first feature data sequence as the sample target circulation volume; and determining the sample circulation data sequence and the sample target circulation volume as a sample.
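The sample-generation steps above can be sketched as a sliding window, under the assumption that the preset continuity condition selects a contiguous window and the preset position condition keeps every element except the last (both readings, and all names below, are illustrative):

```python
def make_samples(first_feature_series, window_size):
    """Build (sample circulation data sequence, sample target volume) pairs."""
    samples = []
    for start in range(len(first_feature_series) - window_size + 1):
        # preset number of consecutive items of first feature data
        window = first_feature_series[start:start + window_size]
        sample_series = window[:-1]   # items satisfying the position condition
        sample_target = window[-1]    # historical volume at the last position
        samples.append((sample_series, sample_target))
    return samples

samples = make_samples([3, 5, 4, 6, 7], window_size=3)
```

With a series of five volumes and a window of three, this yields three samples, each pairing two observed volumes with the volume that immediately follows them.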
Optionally, the coding and decoding model is trained as follows. Based on the sample set, the following sample training step is executed: inputting the second feature data and the sample circulation data sequence of each sample in the sample set into an initial coding and decoding model to obtain a predicted circulation volume for each sample; determining the absolute value of the difference between the predicted circulation volume for each sample and the corresponding sample target circulation volume as a sample error value, to obtain a set of sample error values; generating a target loss value for the set of sample error values using a preset target loss function; in response to determining that the target loss value is less than or equal to a preset threshold, determining the initial coding and decoding model as the trained coding and decoding model; and, in response to determining that the target loss value is greater than the preset threshold, adjusting the parameters of the initial coding and decoding model, taking the adjusted model as the initial coding and decoding model, and executing the sample training step again.
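The training procedure above can be sketched as follows, taking the mean of the sample error values as one plausible choice of preset target loss function; `predict`, `adjust` and the toy data are illustrative stand-ins, not the patented model or its optimizer:

```python
def train(samples, predict, adjust, threshold, max_rounds=100):
    """Repeat the sample training step until the target loss is small enough."""
    loss = None
    for _ in range(max_rounds):
        # absolute difference between predicted and sample target volumes
        errors = [abs(predict(x) - y) for x, y in samples]
        loss = sum(errors) / len(errors)   # preset target loss function (mean)
        if loss <= threshold:              # model counts as trained
            return loss
        adjust(errors)                     # otherwise adjust parameters, retrain
    return loss

weights = {"w": 0.0}

def predict(x):
    return weights["w"] * x

def adjust(errors):
    weights["w"] += 0.1                    # toy parameter update

samples = [(1.0, 0.5), (2.0, 1.0)]        # consistent with w = 0.5
final_loss = train(samples, predict, adjust, threshold=0.01)
```

The structure, not the toy update rule, is the point: per-sample absolute errors, an aggregate loss, a threshold test, and a retry with adjusted parameters.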
In a second aspect, some embodiments of the present disclosure provide an article circulation volume prediction apparatus, comprising: an acquisition unit configured to acquire a historical dynamic feature data time series and historical static feature data of a target article over a historical time period; a first input unit configured to input the historical dynamic feature data time series into a coding model included in a pre-trained coding and decoding model to generate a first feature vector list, where the coding and decoding model is used to generate a predicted circulation volume; a word embedding processing unit configured to perform word embedding on the historical static feature data to obtain a second feature vector; a splicing unit configured to splice the first feature vector list and the second feature vector to obtain a spliced vector; and a second input unit configured to input the spliced vector into a decoding model included in the coding and decoding model to obtain the circulation volume corresponding to the target time.
Optionally, the second input unit may be configured to: initialize the value of a first preset counter to a first preset count value; determine the obtained circulation volume corresponding to the target time as a target circulation volume; and, based on the target circulation volume, execute the following circulation volume generation step: determining the time point following the time point corresponding to the target circulation volume as a second target time; adding the target circulation volume to the end of the historical dynamic feature data time series and deleting the historical dynamic feature data at the first position from the series to obtain a target historical dynamic feature data time series; inputting the target historical dynamic feature data time series into the coding model to generate a first target feature vector list; splicing the first target feature vector list and the second feature vector to obtain a target spliced vector; inputting the target spliced vector into the decoding model to obtain the circulation volume corresponding to the second target time, and determining the sum of the value of the first preset counter and a first preset step value as a first target count value; and, in response to determining that the first target count value satisfies a preset prediction-count condition, sorting the obtained target circulation volumes to obtain a target circulation volume sequence.
Optionally, the second input unit may be further configured to: in response to determining that the first target count value does not satisfy the preset prediction-count condition, execute the circulation volume generation step again, with the circulation volume corresponding to the second target time as the target circulation volume and the target historical dynamic feature data time series as the historical dynamic feature data time series.
Optionally, the sample set of the coding and decoding model is generated by: acquiring a historical circulation data time series of the target article, where each item of historical circulation data in the series includes first dynamic feature data and first static feature data; determining the first dynamic feature data included in each item of historical circulation data as first feature data to obtain a first feature data time series; determining the first static feature data included in any item of historical circulation data as second feature data; and generating a sample set based on the first feature data time series, where each sample in the sample set includes a sample circulation data sequence and a sample target circulation volume.
Optionally, each item of first feature data in the first feature data time series includes a historical circulation volume; and the generating unit may be configured to perform, for each item of first feature data in the series, the following steps: selecting, based on that first feature data, a preset number of items of first feature data satisfying a preset continuity condition from the first feature data time series, and determining each selected item as target first feature data to obtain a target first feature data sequence; selecting target first feature data satisfying a preset position condition from the target first feature data sequence as sample circulation data to obtain a sample circulation data sequence; determining the historical circulation volume included in the target first feature data at the last position of the target first feature data sequence as the sample target circulation volume; and determining the sample circulation data sequence and the sample target circulation volume as a sample.
Optionally, the coding and decoding model is trained as follows. Based on the sample set, the following sample training step is executed: inputting the second feature data and the sample circulation data sequence of each sample in the sample set into an initial coding and decoding model to obtain a predicted circulation volume for each sample; determining the absolute value of the difference between the predicted circulation volume for each sample and the corresponding sample target circulation volume as a sample error value, to obtain a set of sample error values; generating a target loss value for the set of sample error values using a preset target loss function; in response to determining that the target loss value is less than or equal to a preset threshold, determining the initial coding and decoding model as the trained coding and decoding model; and, in response to determining that the target loss value is greater than the preset threshold, adjusting the parameters of the initial coding and decoding model, taking the adjusted model as the initial coding and decoding model, and executing the sample training step again.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method described in any implementation manner of the first aspect.
The above embodiments of the present disclosure have the following advantages: the article circulation volume prediction method of some embodiments of the present disclosure can accurately predict the circulation volume corresponding to a target article by using a trained coding and decoding model. Specifically, the reason article circulation volume prediction has been inaccurate is the following: whether prediction is performed with a simple time series or a machine learning method, the complete features of the circulation volume cannot be captured, so the prediction is not accurate enough. On this basis, the article circulation volume prediction method of some embodiments of the present disclosure first acquires the historical dynamic feature data time series and the historical static feature data of the target article over a historical time period. This makes it convenient to capture the complete circulation volume features of the target article, which comprise both dynamic and static features. Then, the historical dynamic feature data time series is input into a coding model included in a pre-trained coding and decoding model, yielding a first feature vector list that characterizes the dynamic features of the target article's circulation volume. Next, word embedding is performed on the historical static feature data to obtain a second feature vector characterizing the static features of the target article's circulation volume. The first feature vector list and the second feature vector are then spliced to obtain a spliced vector characterizing the complete circulation volume features of the target article.
Finally, this spliced vector is input into the decoding model included in the coding and decoding model, so that the circulation volume of the target article at the target time can be generated accurately. The coding and decoding model thus solves the problems that the complete circulation volume features could not be captured and that article circulation volume prediction was not accurate enough.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of an application scenario of an article circulation volume prediction method according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of an article circulation volume prediction method according to the present disclosure;
FIG. 3 is a flow diagram of further embodiments of an article circulation volume prediction method according to the present disclosure;
FIG. 4 is a schematic block diagram of some embodiments of an article circulation volume prediction apparatus according to the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure;
FIG. 6 is a schematic diagram of an application scenario of the sample generation step of an article circulation volume prediction method according to the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are illustrative rather than limiting; those skilled in the art will appreciate that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of an article circulation volume prediction method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, the electronic device 101 may first obtain a historical dynamic feature data time series 103 and historical static feature data 104 of the target article 102 over a historical time period. In this scenario, the target article 102 may be a mobile phone, and the historical dynamic feature data time series 103 may include historical dynamic feature data 1031 and historical dynamic feature data 1032. The electronic device 101 may then input the historical dynamic feature data time series 103 into the coding model 1051 included in the pre-trained coding and decoding model 105 to generate the first feature vector list 106; the coding and decoding model 105 may be used to generate a predicted circulation volume. In this scenario, the first feature vector list 106 may include a first feature vector 1061 and a first feature vector 1062, and the coding and decoding model 105 may include an encoding model 1051 and a decoding model 1052. Next, the electronic device 101 may perform word embedding on the historical static feature data 104 to obtain a second feature vector 107. The electronic device 101 may then splice the first feature vector list 106 and the second feature vector 107 to obtain a spliced vector 108. Finally, the electronic device 101 may input the spliced vector 108 into the decoding model 1052 included in the coding and decoding model 105 to obtain the circulation volume 109 corresponding to the target time. In this scenario, the circulation volume 109 may be any positive integer.
The electronic device 101 may be hardware or software. As hardware, it may be implemented as a distributed cluster of multiple servers or terminal devices, or as a single server or terminal device. As software, it may be installed in the hardware devices listed above and implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No particular limitation is imposed here.
It should be understood that the number of electronic devices in fig. 1 is merely illustrative. There may be any number of electronic devices, as desired for implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of an article circulation volume prediction method according to the present disclosure is shown. The method comprises the following steps:
step 201, obtaining a historical dynamic characteristic data time sequence and historical static characteristic data of a target object in a historical time period.
In some embodiments, the executing body of the article circulation volume prediction method (e.g., the electronic device 101 shown in fig. 1) may acquire the historical dynamic feature data time series and the historical static feature data of the target article over the historical time period via a wired or wireless connection. The historical time period may be a period before the time point to be predicted. The target article may be an article whose circulation volume is to be predicted. Each item of historical dynamic feature data in the time series may be the specific value, at a time point, of a feature that changes frequently over time. For example, the features corresponding to the historical dynamic feature data may include, but are not limited to, at least one of: circulation volume, value information, remaining net worth. The historical static feature data may be specific values of features that generally do not change over time. For example, the features corresponding to the historical static feature data may include, but are not limited to, at least one of: color, size, model.
It is noted that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, ZigBee, UWB (ultra-wideband), and other wireless connections now known or developed in the future.
Step 202: input the historical dynamic feature data time series into a coding model included in a pre-trained coding and decoding model to generate a first feature vector list.
In some embodiments, the executing body may input the historical dynamic feature data time series into the coding model included in the pre-trained coding and decoding model to generate the first feature vector list. The coding and decoding model is used to generate a predicted circulation volume, i.e., the circulation volume of the target article at a future time point as predicted by the model. The coding model included in the coding and decoding model may be an attention-based encoding model that takes the historical dynamic feature data time series as input and outputs the first feature vector list. For example, the coding model may be, but is not limited to, one of the following: the encoder part of a Transformer model, an attention-based convolutional neural network model, or an attention-based recurrent neural network model.
As an example, the coding model included in the coding and decoding model may comprise 6 encoders, each consisting of a multi-head attention model and a feed-forward neural network model, where the multi-head attention model may be a 6-head attention model.
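For illustration only, the scaled dot-product self-attention at the heart of such an attention-based coding model can be sketched in pure Python for a single head without learned projections; a real implementation (such as the 6-encoder, 6-head example above) stacks many such blocks with projection matrices and feed-forward layers:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(seq):
    """seq: list of feature vectors; queries = keys = values = seq here."""
    d = len(seq[0])
    out = []
    for q in seq:
        # scaled dot-product scores of this time step against every time step
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)
        # attention-weighted combination of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, seq))
                    for i in range(d)])
    return out

vectors = self_attention([[1.0, 0.0], [0.0, 1.0]])  # one vector per time step
```

Each output vector is a convex combination of the inputs, weighted by similarity, which is how dependencies across the dynamic-feature time series are captured.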
And step 203, performing word embedding processing on the historical static feature data to obtain a second feature vector.
In some embodiments, the execution subject may perform word embedding processing on the historical static feature data to obtain a second feature vector. The second feature vector may be a vector characterizing static feature data.
As an example, the execution subject may perform word embedding processing on the historical static feature data by using a preset word embedding method to obtain a second feature vector. A word embedding method converts words in a text into numeric vectors. The preset word embedding method may be, but is not limited to, one of the following: one-hot encoding, or a word2vec (word-to-vector) model.
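The one-hot variant of the word embedding step can be sketched as follows. This is a minimal pure-Python illustration; the feature vocabularies (`colors`, `sizes`) and the specific feature values are assumptions for demonstration only, not part of the disclosed method.

```python
def one_hot_embed(value, vocabulary):
    """Map a categorical static feature value to a one-hot vector over a fixed vocabulary."""
    vec = [0.0] * len(vocabulary)
    vec[vocabulary.index(value)] = 1.0
    return vec

# Hypothetical vocabularies for two static features.
colors = ["red", "green", "blue"]
sizes = ["5.1 inches", "6.1 inches"]

# The second feature vector concatenates the one-hot codes of each static feature.
second_feature_vector = one_hot_embed("red", colors) + one_hot_embed("5.1 inches", sizes)
print(second_feature_vector)  # [1.0, 0.0, 0.0, 1.0, 0.0]
```

A word2vec model would instead produce dense learned vectors, but the role in the pipeline — turning static feature values into a fixed-length second feature vector — is the same.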
And 204, splicing the first feature vector list and the second feature vector to obtain a spliced vector.
In some embodiments, the execution body may splice the first feature vector list and the second feature vector to obtain a spliced vector. The spliced vector may be a vector representing both the dynamic feature data and the static feature data of the target item.
As an example, first, the execution main body may splice each first feature vector in the first feature vector list in a preset splicing manner to obtain a dynamic feature spliced vector. The preset splicing manner may be, but is not limited to, one of the following: a horizontal tiling manner or a vertical tiling manner. Then, the execution body may splice the dynamic feature spliced vector and the second feature vector in the same preset splicing manner to obtain the spliced vector. For the horizontal tiling manner, if there are two first feature vectors in the first feature vector list, each first feature vector is an n-dimensional vector, and the second feature vector is an m-dimensional vector, then the spliced vector is an (n + n + m)-dimensional vector. For the vertical tiling manner, if n is larger than m, the second feature vector is zero-padded at its tail until it reaches n dimensions, and the spliced vector is a 3*n-dimensional vector; if n is smaller than m, each first feature vector is zero-padded at its tail until it reaches m dimensions, and the spliced vector is a 3*m-dimensional vector; if n equals m, the spliced vector is a 3*n-dimensional vector.
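The two splicing manners described above can be sketched as follows; a minimal pure-Python illustration with assumed example dimensions (n = 3, m = 2), not a definitive implementation.

```python
def splice_horizontal(vectors):
    """Horizontal tiling: place the vectors end to end into one long vector."""
    out = []
    for v in vectors:
        out.extend(v)
    return out

def splice_vertical(vectors):
    """Vertical tiling: zero-pad every vector at its tail to the longest length, then tile."""
    width = max(len(v) for v in vectors)
    out = []
    for v in vectors:
        out.extend(v + [0.0] * (width - len(v)))
    return out

first_list = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # two n-dimensional first feature vectors (n = 3)
second = [7.0, 8.0]                              # m-dimensional second feature vector (m = 2)

h = splice_horizontal(first_list + [second])  # (n + n + m) = 8 dimensions
v = splice_vertical(first_list + [second])    # 3 * n = 9 dimensions, since n > m
```

Here `h` has 8 dimensions and `v` has 9, with the second feature vector padded to `[7.0, 8.0, 0.0]`, matching the dimension counts worked out in the text.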
Step 205, the concatenation vector is input to a decoding model included in the coding and decoding model, and a traffic corresponding to the target time is obtained.
In some embodiments, the execution body may input the spliced vector to a decoding model included in the coding and decoding model, so as to obtain a traffic volume corresponding to the target time. The decoding model included in the coding and decoding model may be a neural network model that takes the spliced vector as input and the traffic volume corresponding to the target time as output. The target time may be the time point whose traffic volume is to be predicted.
As an example, the execution body may input the spliced vector to the decoding model included in the coding and decoding model to obtain the traffic volume corresponding to the target time. The decoding model may be a fully connected neural network model comprising an activation function layer and a linear output layer. The activation function included in the activation function layer may be a ReLU (Rectified Linear Unit) function. The linear output layer may be an output layer that linearly transforms the output of the activation function layer into a preset number of expected values. The preset number may be 1. First, the execution body may perform a nonlinear mapping on the input value in each dimension of the spliced vector by using the activation function included in the activation function layer, so as to obtain a splicing value vector. The splicing value in each dimension of the splicing value vector may be a real number greater than or equal to 0. Then, the execution body may perform a linear transformation on the splicing values in the splicing value vector through the linear output layer to obtain the traffic volume corresponding to the target time.
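The ReLU-plus-linear-output decoding step can be sketched as follows. This is a minimal pure-Python illustration; the weight and bias values are arbitrary assumptions, whereas a trained decoding model would learn them.

```python
def relu(x):
    """Rectified Linear Unit: nonlinear mapping to a value >= 0."""
    return x if x > 0.0 else 0.0

def decode(spliced_vector, weights, bias):
    """Activation function layer (ReLU) followed by a linear output layer
    that produces a single predicted traffic volume."""
    splicing_values = [relu(x) for x in spliced_vector]  # every entry >= 0
    return sum(w * a for w, a in zip(weights, splicing_values)) + bias

spliced_vector = [1.0, -2.0, 3.0]
weights = [0.5, 0.25, 1.0]  # illustrative only
prediction = decode(spliced_vector, weights, bias=0.5)
print(prediction)  # 0.5*1.0 + 0.25*0.0 + 1.0*3.0 + 0.5 = 4.0
```

The negative input dimension is zeroed by the ReLU before the linear layer combines the splicing values into one output, matching the two-stage description above.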
In some optional implementations of some embodiments, the sample set of the above coding and decoding model is generated by:
in the first step, the execution subject may obtain a time series of historical circulation data of the target object. Each historical circulation data in the historical circulation data time series can comprise first dynamic characteristic data and first static characteristic data. The historical circulation data in the historical circulation data time series can be data related to the circulation amount of the target object at the past time point. The first dynamic characteristic data may be characteristic data of the target item that changes with time. The first static feature data may be time invariant feature data.
As an example, the execution subject may obtain a time series of historical circulation data of the target item. The historical circulation data in the historical circulation data time series can comprise traffic volume data, value data, color data and size data. The traffic volume data and the value data may be first dynamic feature data. The color data and the size data may be first static feature data. For a target item such as a certain mobile phone, a circulation date corresponding to the target item may be "2022-01-01". The traffic volume of the target item on the corresponding circulation date may be 20. The value data of the target item at the corresponding circulation date may be 666 dollars. The color data of the target item may be red. The size data of the target item may be 5.1 inches. The first dynamic feature data corresponding to the circulation date may be {20, 666}. The first static feature data corresponding to the circulation date may be {"red", "5.1 inches"}. The historical circulation data corresponding to the circulation date may be {20, 666, "red", "5.1 inches"}.
In the second step, the execution main body may determine, as the first feature data, the first dynamic feature data included in each historical streaming data in the historical streaming data time series, to obtain a first feature data time series. The first feature data time series may be an ordered set of first dynamic feature data corresponding to each time point.
In practice, the execution subject may determine, as the first feature data, first dynamic feature data included in the historical circulation data for each historical circulation data in the historical circulation data time series.
Third, the execution main body may determine, as the second feature data, first static feature data included in any one of the historical flow data in the historical flow data time series.
As an example, the execution subject may determine, as the second feature data, the first static feature data included in the first historical circulation data in the historical circulation data time series.
In the fourth step, the execution subject may generate a sample set based on the first feature data time series. The samples in the sample set may include a sample circulation data sequence and a sample target traffic volume. The sample circulation data sequence may be a set of sample circulation data corresponding to a plurality of consecutive time points. The sample target traffic volume may be the expected (ground-truth) value of the traffic volume.
As an example, the execution subject may split the first feature data time series according to a fixed length value, so as to obtain a sample set. The fixed length value may be 5. For a first feature data time series including 15 first feature data, splitting according to the fixed length value yields a sample set including 3 samples.
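The fixed-length splitting in this example can be sketched as follows; a minimal pure-Python illustration using placeholder integer entries in place of real first feature data.

```python
def split_into_samples(series, length):
    """Split a feature data time series into non-overlapping fixed-length samples."""
    return [series[i:i + length] for i in range(0, len(series) - length + 1, length)]

series = list(range(15))            # stands in for 15 first feature data
samples = split_into_samples(series, 5)
print(len(samples))                 # 3 samples of length 5, as in the example above
```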
Optionally, each of the first feature data in the first feature data time series may include a historical amount of traffic. The execution entity may generate a sample set based on the first feature data time series. The samples in the sample set may include a sample stream data sequence and a sample target stream amount. For each first characteristic data in the above-mentioned time series of first characteristic data, the following steps may be performed:
in the first step, the execution subject may select a preset number of first feature data satisfying a preset continuity condition from the first feature data time series based on the first feature data, and determine each of the selected first feature data as target first feature data to obtain a target first feature data series. The preset continuity condition may be that each of the selected first feature data is first feature data corresponding to a plurality of consecutive time points from a time point corresponding to the first feature data. The time points may correspond to the first characteristic data one to one. The predetermined number may be a predetermined number of the first feature data to be selected. The target first feature data sequence may be an ordered set of respective target first feature data corresponding to a plurality of consecutive time points.
And secondly, selecting target first characteristic data meeting a preset position condition from the target first characteristic data sequence by the execution main body to serve as sample circulation data, and obtaining a sample circulation data sequence. The preset position condition may be that the target first feature data in the target first feature data sequence is arranged before the end position in the target first feature data sequence.
As an example, first, the execution subject may determine the number of target first feature data included in the target first feature data sequence as a data number value. Then, the execution subject may use each target first feature data at a previous (data number minus 1) position in the target first feature data sequence as sample flow data to obtain a sample flow data sequence.
Third, the execution subject may determine a historical traffic included in the target first feature data at the last position in the target first feature data sequence as a sample target traffic.
Fourth, the execution subject may determine the sample circulation data sequence and the sample target traffic volume as a sample.
As an example, fig. 6 illustrates one application scenario 600 of a sample generation step of an item traffic prediction method according to the present disclosure. The first characteristic data time series 601 may include: first characteristic data 6011, first characteristic data 6012, first characteristic data 6013, first characteristic data 6014, and first characteristic data 6015. The predetermined number may be 4. First, the execution main body may select 4 pieces of first feature data satisfying a preset continuity condition, that is, the first feature data 6011, the first feature data 6012, the first feature data 6013, and the first feature data 6014, from the first feature data time series 601 based on the first feature data 6011. Then, the executing body may determine the first feature data 6011, the first feature data 6012, the first feature data 6013, and the first feature data 6014 as the target first feature data 6021, the target first feature data 6022, the target first feature data 6023, and the target first feature data 6024, respectively, to obtain the target first feature data sequence 602. Then, the execution subject may select target first feature data meeting a preset position condition from the target first feature data sequence 602 as sample flow data, so as to obtain a sample flow data sequence 603. The sample flow data sequence 603 may include, among other things, target first feature data 6021, target first feature data 6022, and target first feature data 6023. Next, the execution subject may determine the historical amount of flow corresponding to the target first feature data 6024 at the last position in the target first feature data sequence 602 as the sample target amount of flow 604. Finally, the execution body may determine the sample stream data sequence 603 and the sample target stream amount 604 as a sample 605.
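The sliding-window sample generation illustrated in fig. 6 can be sketched as follows. This is a minimal pure-Python illustration; the dictionary representation of first feature data (a `"traffic"` key holding the historical traffic volume) is an assumption for demonstration.

```python
def build_samples(series, window):
    """For each start point with `window` consecutive first feature data available,
    take the first window-1 entries as the sample circulation data sequence and the
    historical traffic volume of the last entry as the sample target traffic volume."""
    samples = []
    for i in range(len(series) - window + 1):
        target_seq = series[i:i + window]          # target first feature data sequence
        samples.append((target_seq[:-1], target_seq[-1]["traffic"]))
    return samples

# Five first feature data, each carrying a historical traffic volume.
series = [{"traffic": t} for t in (12, 20, 19, 22, 17)]
samples = build_samples(series, window=4)
# Two samples: entries 1-3 predicting entry 4's traffic (22),
# and entries 2-4 predicting entry 5's traffic (17).
```

With a preset number of 4, as in fig. 6, each sample's circulation data sequence holds the first three windowed entries and its target is the last entry's traffic volume.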
Optionally, the coding and decoding model is obtained by training through the following steps:
the first step, based on the sample set, executing the following sample training steps:
a first substep of inputting the second feature data and the sample circulation data sequence of each sample in the sample set to an initial coding and decoding model to obtain a predicted traffic volume corresponding to each sample in the sample set. The initial coding and decoding model may be a model with initialized model parameters.
For example, for each sample in the sample set, the execution body may input the second feature data and the sample circulation data sequence corresponding to the sample into the initial coding and decoding model, and obtain the predicted traffic volume output by the initial coding and decoding model.
And a second substep of determining the absolute value of the difference between the predicted traffic volume corresponding to each sample in the sample set and the corresponding sample target traffic volume as a sample error value, to obtain a sample error value set.
And a third substep of generating a target loss value for the sample error value set using a preset target loss function. The preset target loss function may be used to measure the degree of inconsistency between a predicted value (e.g., the predicted traffic volume) and a true value (e.g., the sample target traffic volume) of the model. The preset target loss function may be set according to actual requirements.
As an example, the execution subject may determine a sum of each sample error value in the sample error value set as a target loss value using a preset target loss function.
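The sum-of-error-values loss in this example reduces to the sum of absolute errors, which can be sketched as follows; a minimal pure-Python illustration with assumed example values.

```python
def target_loss(predicted, targets):
    """Sum of absolute differences between predicted traffic volumes
    and sample target traffic volumes."""
    return sum(abs(p - t) for p, t in zip(predicted, targets))

loss = target_loss([14.0, 21.0], [15.0, 19.0])
print(loss)  # |14-15| + |21-19| = 3.0
```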
And a fourth substep of determining the initial coding and decoding model as the trained coding and decoding model in response to determining that the target loss value is less than or equal to a preset threshold value. The preset threshold value may be a preset loss value.
As an example, first, the execution subject may determine that the target loss value is equal to or less than a preset threshold value. Then, the execution body may determine the initial encoding and decoding model as a trained encoding and decoding model.
And secondly, adjusting parameters of the initial coding and decoding model in response to the fact that the target loss value is larger than the preset threshold value, taking the adjusted initial coding and decoding model as the initial coding and decoding model, and executing the sample training step again.
As an example, first, the execution body may determine that the target loss value is greater than the preset threshold value. Then, the execution body may adjust parameters of the initial encoding and decoding model according to the target loss value by using a Back propagation Algorithm (BP Algorithm) and a batch gradient descent method. Finally, the executing entity may use the adjusted initial coding and decoding model as an initial coding and decoding model to execute the sample training step again.
The execution body may further adjust the learning rate of the initial coding and decoding model according to the target loss value. For example, during multiple rounds of training, when the target loss value fails to decrease over several consecutive rounds, the execution body may reduce the learning rate of the initial coding and decoding model to half of its original value.
The above training process realizes obtaining the trained coding and decoding model by batch input and overall model parameter adjustment, but the present application is not limited thereto.
The above embodiments of the present disclosure have the following advantages: the article traffic prediction method of some embodiments of the present disclosure can accurately predict the traffic volume corresponding to the target item by using the trained coding and decoding model. Specifically, the reason why existing article traffic prediction is not accurate enough is that, whether the prediction is performed by a simple time series method or a machine learning method, the model is too simple to capture the complete traffic characteristics, so that the prediction of the article traffic volume is not accurate enough. Based on this, the article traffic prediction method of some embodiments of the present disclosure first obtains the historical dynamic feature data time series and the historical static feature data of the target item in the historical time period. In this way, the complete traffic characteristics of the target item, which comprise both dynamic and static characteristics, can be captured. Then, the historical dynamic feature data time series is input into the coding model included in the pre-trained coding and decoding model, so as to generate a first feature vector list capable of representing the dynamic traffic characteristics of the target item. Next, word embedding processing is performed on the historical static feature data to obtain a second feature vector representing the static traffic characteristics of the target item. Then, the first feature vector list and the second feature vector are spliced to obtain a spliced vector representing the complete traffic characteristics of the target item.
Finally, the spliced vector representing the complete traffic characteristics of the target item is input into the decoding model included in the coding and decoding model, so that the traffic volume of the target item at the target time can be accurately generated. Therefore, the coding and decoding model solves the problems that the complete traffic characteristics cannot be captured and the prediction of the article traffic volume is not accurate enough.
With further reference to fig. 3, a flow 300 of further embodiments of an article traffic prediction method is illustrated. The flow 300 of the article traffic prediction method includes the following steps:
step 301, acquiring historical dynamic characteristic data time series and historical static characteristic data of the target object in the historical time period.
Step 302, inputting the historical dynamic feature data time sequence into a coding model included in a pre-trained coding and decoding model to generate a first feature vector list.
Step 303, performing word embedding processing on the historical static feature data to obtain a second feature vector.
And 304, splicing the first feature vector list and the second feature vector to obtain a spliced vector.
Step 305, inputting the spliced vector into a decoding model included in the coding and decoding model to obtain the traffic volume corresponding to the target time.
In some embodiments, the specific implementation of steps 301 to 305 and the technical effect thereof may refer to steps 201 to 205 in the embodiment corresponding to fig. 2, and are not described herein again.
Step 306, initialize the value of the first preset counter to a first preset count value.
In some embodiments, the execution subject (e.g., the electronic device 101 shown in fig. 1) may initialize the value of the first preset counter to the first preset count value. The first preset counter may be configured to count the number of predictions. The prediction number may be the number of time points of the amount of the traffic to be predicted. The time point may be a specific day or a specific time. The first predetermined count value may be a predetermined integer value.
As an example, the first preset count value may be 1. The execution body may use the first preset count value as an initial count value to initialize the first preset counter. The first preset counter may start counting from 1.
And 307, determining the obtained flow amount corresponding to the target time as a target flow amount.
In some embodiments, the execution body may determine the obtained amount of the streaming corresponding to the target time as a target amount of the streaming. Wherein the target amount of runoff may be a predicted amount of runoff of the target item.
Step 308, based on the target amount of traffic, executing the following traffic generation steps:
step 3081, determining a time point subsequent to the time point corresponding to the target traffic volume as a second target time.
In some embodiments, the execution subject may determine a time point subsequent to the time point corresponding to the target amount of the streaming as the second target time. For example, the target time point corresponding to the target traffic amount may be "2022-9-28", and the second target time may be "2022-9-29".
3082, adding the target traffic volume to the end of the time sequence of the historical dynamic characteristic data, and deleting the historical dynamic characteristic data corresponding to the first position from the time sequence of the historical dynamic characteristic data to obtain the time sequence of the target historical dynamic characteristic data.
In some embodiments, the execution body may add the target traffic volume to the end of the historical dynamic feature data time series, and delete the historical dynamic feature data corresponding to the first position from the historical dynamic feature data time series to obtain the target historical dynamic feature data time series. The target historical dynamic characteristic data time series may be an updated historical dynamic characteristic data time series.
As an example, the target traffic volume may be 15. The historical dynamic feature data time series may be {12, 20, 19, 22, 17}. First, the execution agent may add the target traffic volume 15 to the end of the historical dynamic feature data time series to obtain an extended series {12, 20, 19, 22, 17, 15}. Then, the execution subject may delete the historical dynamic feature data 12 corresponding to the first position from the extended series {12, 20, 19, 22, 17, 15} to obtain the target historical dynamic feature data time series {20, 19, 22, 17, 15}.
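The append-and-drop window update in this example can be sketched as follows; a minimal pure-Python illustration using the same numbers as the example above.

```python
from collections import deque

def roll_window(history, predicted_traffic):
    """Append the newly predicted traffic volume and drop the oldest entry,
    keeping the window length fixed for the next prediction round."""
    window = deque(history)
    window.append(predicted_traffic)   # add to the end of the series
    window.popleft()                   # delete the entry at the first position
    return list(window)

history = [12, 20, 19, 22, 17]
print(roll_window(history, 15))  # [20, 19, 22, 17, 15]
```

A `deque` with `maxlen=len(history)` would perform the drop implicitly on append; the explicit `popleft` is used here to mirror the two-step description in the text.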
Step 3083, the target historical dynamic feature data time series is input to a coding model to generate a first target feature vector list.
In some embodiments, the execution subject may input the target historical dynamic feature data time series to the coding model to generate a first target feature vector list. The first target feature vector in the first target feature vector list may be a vector characterizing dynamic feature data.
As an example, first, the execution body may input the target historical dynamic characteristic data time series to the coding model. Then, for each target historical dynamic feature data in the target historical dynamic feature data time series, the execution main body may perform feature extraction on the target historical dynamic feature data by using each encoder to obtain a first target feature vector. Wherein the output of the previous encoder may be the input of the subsequent encoder.
Step 3084, the first target feature vector list and the second feature vector are spliced to obtain a target splicing vector.
In some embodiments, the execution body may splice the first target feature vector list and the second feature vector to obtain a target spliced vector. The target spliced vector may be a vector representing both the dynamic feature data and the static feature data of the target item.
As an example, first, the execution main body may splice each first target feature vector in the first target feature vector list in the preset splicing manner to obtain a target dynamic feature splicing vector. Then, the execution main body can splice the target dynamic feature splicing vector and the second feature vector in the preset splicing mode to obtain a target splicing vector.
Step 3085, the target spliced vector is input to the decoding model to obtain a traffic volume corresponding to the second target time, and the sum of the value of the first preset counter and a first preset step value is determined as a first target count value.
In some embodiments, the execution body may input the target spliced vector to the decoding model, obtain the traffic volume corresponding to the second target time, and determine the sum of the value of the first preset counter and the first preset step value as the first target count value. The first preset step value may be the value by which the first preset counter is incremented each time its value changes. The first target count value may be the result of the change in the value of the first preset counter.
Step 3086, in response to determining that the first target count value meets the preset prediction number condition, sorting the obtained target traffic volumes to obtain a target traffic volume sequence.
In some embodiments, the execution main body may, in response to determining that the first target count value satisfies the preset prediction number condition, sort the obtained target traffic volumes to obtain a target traffic volume sequence. The preset prediction number condition may be that the first target count value is greater than the prediction number. The prediction number may be the planned number of predictions.
As an example, first, the execution main body may determine that the first target count value is greater than the prediction number. Then, the execution main body may sort the obtained target traffic volumes in chronological order according to their corresponding time points, so as to obtain the target traffic volume sequence.
Step 309, in response to determining that the first target count value does not satisfy the preset prediction number condition, taking the traffic volume corresponding to the second target time as the target traffic volume, taking the target historical dynamic feature data time series as the historical dynamic feature data time series, and executing the traffic volume generation step again.
In some embodiments, the execution main body may execute the traffic volume generation step again by using the traffic volume corresponding to the second target time as the target traffic volume and the target historical dynamic feature data time series as the historical dynamic feature data time series, in response to determining that the first target count value does not satisfy the preset prediction number condition.
As an example, first, the execution main body may determine that the first target count value is less than or equal to the prediction number. Then, the execution body may execute the traffic volume generation step again by using the traffic volume corresponding to the second target time as the target traffic volume and the target historical dynamic feature data time series as the historical dynamic feature data time series.
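The counter-controlled multi-step prediction loop of steps 306 through 309 can be sketched as follows. This is a minimal pure-Python illustration: `predict_one` is a hypothetical stand-in for the full encode/splice/decode pipeline (here a simple mean of the window), and is not part of the disclosed model.

```python
def predict_sequence(history, predict_one, num_predictions):
    """Autoregressive multi-step prediction: each predicted traffic volume is
    fed back into the rolling window until the prediction number is reached."""
    window = list(history)
    results = []
    counter = 1                          # first preset count value
    while counter <= num_predictions:    # preset prediction number condition not yet met
        traffic = predict_one(window)
        results.append(traffic)
        window = window[1:] + [traffic]  # target historical dynamic feature data series
        counter += 1                     # add the first preset step value (1)
    return results

mean = lambda w: sum(w) / len(w)         # hypothetical one-step predictor
seq = predict_sequence([10.0, 12.0, 14.0], mean, num_predictions=2)
```

Each round mirrors steps 3081-3085: one traffic volume is generated, the window is rolled forward, and the counter is incremented until the prediction number condition terminates the loop.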
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of article traffic prediction in some embodiments corresponding to fig. 3 highlights the specific steps of generating the target traffic volume sequence from the historical dynamic feature data time series, the historical static feature data, and the pre-trained coding and decoding model. By adding the target traffic volume predicted in the previous step to the end of the historical dynamic feature data time series to generate the target historical dynamic feature data time series, and using the target historical dynamic feature data time series together with the historical static feature data as inputs to the coding and decoding model, these embodiments can generate a more accurate target traffic volume sequence.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an article traffic prediction apparatus, which correspond to those shown in fig. 2, and which may be applied in various electronic devices.
As shown in fig. 4, an article traffic prediction apparatus 400 includes: an acquisition unit 401, a first input unit 402, a word embedding processing unit 403, a splicing unit 404, and a second input unit 405. The acquisition unit 401 is configured to acquire a historical dynamic feature data time series and historical static feature data of a target item in a historical time period; the first input unit 402 is configured to input the historical dynamic feature data time series to a coding model included in a pre-trained coding and decoding model to generate a first feature vector list, wherein the coding and decoding model is used for generating a predicted traffic volume; the word embedding processing unit 403 is configured to perform word embedding processing on the historical static feature data to obtain a second feature vector; the splicing unit 404 is configured to splice the first feature vector list and the second feature vector to obtain a spliced vector; the second input unit 405 is configured to input the spliced vector to a decoding model included in the coding and decoding model, so as to obtain a traffic volume corresponding to the target time.
In some optional implementations of some embodiments, the second input unit 405 may be further configured to: initializing a value of a first preset counter to a first preset count value; determining the obtained flow amount corresponding to the target time as a target flow amount; based on the target traffic volume, executing the following traffic volume generation step: determining a later time point of the time point corresponding to the target traffic flow as a second target time; adding the target traffic volume to the end of the historical dynamic characteristic data time sequence, and deleting the historical dynamic characteristic data corresponding to the first position from the historical dynamic characteristic data time sequence to obtain a target historical dynamic characteristic data time sequence; inputting the target historical dynamic characteristic data time sequence into the coding model to generate a first target characteristic vector list; splicing the first target characteristic vector list and the second characteristic vector to obtain a target splicing vector; inputting the target splicing vector to the decoding model to obtain a flow amount corresponding to the second target time, and determining the sum of the value of the first preset counter and a first preset step value as a first target count value; and in response to the fact that the first target counting value meets the preset prediction frequency condition, sequencing the obtained target traffic flows to obtain a target traffic flow sequence.
In some optional implementations of some embodiments, the second input unit 405 may be further configured to: in response to determining that the first target count value does not satisfy the preset prediction number condition, execute the traffic volume generation step again, with the traffic volume corresponding to the second target time as the target traffic volume, and with the target historical dynamic feature data time series as the historical dynamic feature data time series.
In some optional implementations of some embodiments, the sample set of the above coding and decoding model is generated by: acquiring a historical circulation data time series of the target item, wherein each historical circulation data in the historical circulation data time series comprises first dynamic feature data and first static feature data; determining the first dynamic feature data included in each historical circulation data in the historical circulation data time series as first feature data to obtain a first feature data time series; determining the first static feature data included in any historical circulation data in the historical circulation data time series as second feature data; and generating a sample set based on the first feature data time series, wherein the samples in the sample set comprise a sample circulation data sequence and a sample target traffic volume.
In some optional implementations of some embodiments, each piece of first characteristic data in the first characteristic data time series includes a historical circulation amount; and the generation unit in the article circulation amount prediction apparatus 400 may be configured to: for each piece of first characteristic data in the first characteristic data time series, perform the following steps: selecting, based on the first characteristic data, a preset number of pieces of first characteristic data satisfying a preset continuity condition from the first characteristic data time series, and determining each selected piece of first characteristic data as target first characteristic data to obtain a target first characteristic data sequence; selecting target first characteristic data satisfying a preset position condition from the target first characteristic data sequence as sample circulation data to obtain a sample circulation data sequence; determining the historical circulation amount included in the target first characteristic data at the last position in the target first characteristic data sequence as a sample target circulation amount; and determining the sample circulation data sequence and the sample target circulation amount as a sample.
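The sliding-window sample construction described above can be sketched as follows. The fixed window length standing in for the continuity condition, the choice of "all but the last element" as the position condition, and the plain-list representation are illustrative assumptions; the patent leaves these parameters preset but unspecified.

```python
from typing import List, Tuple

def build_samples(
    series: List[float],   # first characteristic data time series (circulation amounts)
    window_len: int,       # preset number of consecutive pieces of data
) -> List[Tuple[List[float], float]]:
    """Each sample pairs a circulation data sequence with its target amount."""
    samples: List[Tuple[List[float], float]] = []
    for start in range(len(series) - window_len + 1):
        window = series[start:start + window_len]  # consecutive first characteristic data
        sample_inputs = window[:-1]                # data satisfying the position condition
        sample_target = window[-1]                 # historical amount at the last position
        samples.append((sample_inputs, sample_target))
    return samples
```

For example, a series of four amounts with a window of three yields two samples, each predicting the window's final amount from the amounts before it.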
In some optional implementations of some embodiments, the coding and decoding model is trained by the following steps: based on the sample set, executing the following sample training step: inputting the second characteristic data and the sample circulation data sequence of each sample in the sample set into an initial coding and decoding model to obtain a predicted circulation amount corresponding to each sample in the sample set; determining the absolute value of the difference between the predicted circulation amount corresponding to each sample in the sample set and the corresponding sample target circulation amount as a sample error value to obtain a sample error value set; generating a target loss value for the sample error value set by using a preset target loss function; in response to determining that the target loss value is less than or equal to a preset threshold, determining the initial coding and decoding model as the trained coding and decoding model; and, in response to determining that the target loss value is greater than the preset threshold, adjusting the parameters of the initial coding and decoding model, taking the adjusted initial coding and decoding model as the initial coding and decoding model, and executing the sample training step again.
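A minimal sketch of the training loop described above: compute per-sample absolute error values, aggregate them into a target loss value, stop when the loss is at or below the preset threshold, and otherwise adjust the parameters and run the training step again. The one-parameter model and the gradient-style update rule here are toy assumptions standing in for the unspecified coding and decoding model and its optimizer.

```python
from typing import List, Tuple

def train(
    samples: List[Tuple[List[float], float]],  # (sample circulation data, target amount)
    weight: float = 0.0,    # sole parameter of a toy "coding and decoding" model
    lr: float = 0.1,        # step size for the assumed parameter adjustment
    threshold: float = 1e-3,  # preset threshold for the target loss value
    max_rounds: int = 1000,
) -> float:
    """Repeat the sample training step until the target loss value is small enough."""
    for _ in range(max_rounds):
        errors: List[float] = []
        grad = 0.0
        for inputs, target in samples:
            predicted = weight * inputs[-1]          # predicted circulation amount
            errors.append(abs(predicted - target))   # sample error value
            grad += 2 * (predicted - target) * inputs[-1]
        loss = sum(errors) / len(errors)             # target loss value (mean abs error)
        if loss <= threshold:                        # loss condition met: training done
            return weight
        weight -= lr * grad / len(samples)           # adjust parameters, train again
    return weight
```

On toy samples where the target is twice the last input, the loop converges to a weight near 2 well before the round limit.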
It is to be understood that the units described in the article circulation amount prediction apparatus 400 correspond to the respective steps of the method described with reference to fig. 2. Thus, the operations, features, and resulting advantages described above with respect to the method also apply to the apparatus 400 and the units included therein, and are not described here again.
Referring now to FIG. 5, a block diagram of an electronic device (e.g., the electronic device 101 of FIG. 1) 500 suitable for implementing some embodiments of the present disclosure is shown. The electronic device shown in FIG. 5 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a historical dynamic characteristic data time series and historical static characteristic data of a target article in a historical time period; input the historical dynamic characteristic data time series into a coding model included in a pre-trained coding and decoding model to generate a first characteristic vector list, wherein the coding and decoding model is used for generating a predicted circulation amount; perform word embedding processing on the historical static characteristic data to obtain a second characteristic vector; splice the first characteristic vector list and the second characteristic vector to obtain a spliced vector; and input the spliced vector into a decoding model included in the coding and decoding model to obtain a circulation amount corresponding to a target time.
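The single-step prediction these programs perform (encode the dynamic series, word-embed the static data, splice the results, decode) can be illustrated with toy stand-ins. The embedding lookup table and both model bodies below are assumptions chosen only to make the data flow concrete; the patent does not fix a concrete architecture.

```python
from typing import Dict, List

def embed_static(static: Dict[str, str], vocab: Dict[str, List[float]]) -> List[float]:
    """Word-embed the historical static characteristic data (second characteristic vector)."""
    vec: List[float] = []
    for value in static.values():
        vec.extend(vocab.get(value, [0.0]))  # assumed lookup-table embedding
    return vec

def predict_once(
    dynamic_series: List[float],   # historical dynamic characteristic data time series
    static: Dict[str, str],        # historical static characteristic data
    vocab: Dict[str, List[float]], # assumed embedding table
) -> float:
    first_vectors = [x / 10.0 for x in dynamic_series]  # stand-in coding model
    second_vector = embed_static(static, vocab)         # second characteristic vector
    concatenated = first_vectors + second_vector        # spliced vector
    return sum(concatenated)                            # stand-in decoding model
```

A two-point series and a one-entry embedding table are enough to trace one value through all four stages.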
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor comprising an acquisition unit, a first input unit, a word embedding processing unit, a splicing unit, and a second input unit. The names of these units do not in some cases limit the units themselves; for example, the acquisition unit may also be described as a "unit that acquires a historical dynamic characteristic data time series and historical static characteristic data of a target article in a historical time period".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (9)

1. A method for predicting an article circulation amount, the method comprising:
acquiring a historical dynamic characteristic data time series and historical static characteristic data of a target article in a historical time period;
inputting the historical dynamic characteristic data time series into a coding model included in a pre-trained coding and decoding model to generate a first characteristic vector list, wherein the coding and decoding model is used for generating a predicted circulation amount;
performing word embedding processing on the historical static characteristic data to obtain a second characteristic vector;
splicing the first characteristic vector list and the second characteristic vector to obtain a spliced vector; and
inputting the spliced vector into a decoding model included in the coding and decoding model to obtain a circulation amount corresponding to a target time.
2. The method of claim 1, wherein the method further comprises:
initializing a value of a first preset counter to a first preset count value;
determining the obtained circulation amount corresponding to the target time as a target circulation amount;
based on the target circulation amount, performing the following circulation amount generation step:
determining a time point subsequent to the time point corresponding to the target circulation amount as a second target time;
adding the target circulation amount to the end of the historical dynamic characteristic data time series, and deleting the historical dynamic characteristic data at the first position from the historical dynamic characteristic data time series to obtain a target historical dynamic characteristic data time series;
inputting the target historical dynamic characteristic data time series into the coding model to generate a first target characteristic vector list;
splicing the first target characteristic vector list and the second characteristic vector to obtain a target spliced vector;
inputting the target spliced vector into the decoding model to obtain a circulation amount corresponding to the second target time, and determining the sum of the value of the first preset counter and a first preset step value as a first target count value; and
in response to determining that the first target count value satisfies a preset prediction-count condition, sorting the obtained target circulation amounts to obtain a target circulation amount sequence.
3. The method of claim 2, wherein the method further comprises:
in response to determining that the first target count value does not satisfy the preset prediction-count condition, taking the circulation amount corresponding to the second target time as the target circulation amount, taking the target historical dynamic characteristic data time series as the historical dynamic characteristic data time series, and performing the circulation amount generation step again.
4. The method of claim 1, wherein a sample set for the coding and decoding model is generated by the following steps:
acquiring a historical circulation data time series of the target article, wherein each piece of historical circulation data in the historical circulation data time series comprises first dynamic characteristic data and first static characteristic data;
determining the first dynamic characteristic data included in each piece of historical circulation data in the historical circulation data time series as first characteristic data to obtain a first characteristic data time series;
determining the first static characteristic data included in any piece of historical circulation data in the historical circulation data time series as second characteristic data; and
generating a sample set based on the first characteristic data time series, wherein each sample in the sample set comprises a sample circulation data sequence and a sample target circulation amount.
5. The method of claim 4, wherein each piece of first characteristic data in the first characteristic data time series comprises a historical circulation amount; and
the generating a sample set based on the first characteristic data time series, wherein each sample in the sample set comprises a sample circulation data sequence and a sample target circulation amount, comprises:
for each piece of first characteristic data in the first characteristic data time series, performing the following steps:
selecting, based on the first characteristic data, a preset number of pieces of first characteristic data satisfying a preset continuity condition from the first characteristic data time series, and determining each selected piece of first characteristic data as target first characteristic data to obtain a target first characteristic data sequence;
selecting target first characteristic data satisfying a preset position condition from the target first characteristic data sequence as sample circulation data to obtain a sample circulation data sequence;
determining the historical circulation amount included in the target first characteristic data at the last position in the target first characteristic data sequence as a sample target circulation amount; and
determining the sample circulation data sequence and the sample target circulation amount as a sample.
6. The method of claim 4, wherein the coding and decoding model is trained by the following steps:
based on the sample set, performing the following sample training step:
inputting the second characteristic data and the sample circulation data sequence of each sample in the sample set into an initial coding and decoding model to obtain a predicted circulation amount corresponding to each sample in the sample set;
determining the absolute value of the difference between the predicted circulation amount corresponding to each sample in the sample set and the corresponding sample target circulation amount as a sample error value to obtain a sample error value set;
generating a target loss value for the sample error value set by using a preset target loss function;
in response to determining that the target loss value is less than or equal to a preset threshold, determining the initial coding and decoding model as a trained coding and decoding model;
and adjusting parameters of the initial coding and decoding model in response to determining that the target loss value is greater than the preset threshold value, and performing the sample training step again by using the adjusted initial coding and decoding model as the initial coding and decoding model.
7. An article circulation amount prediction device, comprising:
an acquisition unit configured to acquire a historical dynamic characteristic data time series and historical static characteristic data of a target article in a historical time period;
a first input unit configured to input the historical dynamic characteristic data time series into a coding model included in a pre-trained coding and decoding model to generate a first characteristic vector list, wherein the coding and decoding model is used for generating a predicted circulation amount;
a word embedding processing unit configured to perform word embedding processing on the historical static characteristic data to obtain a second characteristic vector;
a splicing unit configured to splice the first characteristic vector list and the second characteristic vector to obtain a spliced vector; and
a second input unit configured to input the spliced vector into a decoding model included in the coding and decoding model to obtain a circulation amount corresponding to a target time.
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer-readable medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202211670369.4A 2022-12-26 2022-12-26 Method, apparatus, device and computer readable medium for predicting commodity circulation quantity Active CN115630585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211670369.4A CN115630585B (en) 2022-12-26 2022-12-26 Method, apparatus, device and computer readable medium for predicting commodity circulation quantity

Publications (2)

Publication Number Publication Date
CN115630585A true CN115630585A (en) 2023-01-20
CN115630585B CN115630585B (en) 2023-05-02

Family

ID=84909746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211670369.4A Active CN115630585B (en) 2022-12-26 2022-12-26 Method, apparatus, device and computer readable medium for predicting commodity circulation quantity

Country Status (1)

Country Link
CN (1) CN115630585B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8442821B1 (en) * 2012-07-27 2013-05-14 Google Inc. Multi-frame prediction for hybrid neural network/hidden Markov models
US20190268283A1 (en) * 2018-02-23 2019-08-29 International Business Machines Corporation Resource Demand Prediction for Distributed Service Network
CN110633853A (en) * 2019-09-12 2019-12-31 北京彩云环太平洋科技有限公司 Training method and device of space-time data prediction model and electronic equipment
CN113408797A (en) * 2021-06-07 2021-09-17 北京京东振世信息技术有限公司 Method for generating flow-traffic prediction multi-time-sequence model, information sending method and device
CN114202130A (en) * 2022-02-11 2022-03-18 北京京东振世信息技术有限公司 Flow transfer amount prediction multitask model generation method, scheduling method, device and equipment
CN114429365A (en) * 2022-01-12 2022-05-03 北京京东振世信息技术有限公司 Article sales information generation method and device, electronic equipment and computer medium

Also Published As

Publication number Publication date
CN115630585B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN113408797B (en) Method for generating multi-time sequence model of flow quantity prediction, method and device for sending information
CN113436620B (en) Training method of voice recognition model, voice recognition method, device, medium and equipment
WO2019141902A1 (en) An apparatus, a method and a computer program for running a neural network
CN115085196A (en) Power load predicted value determination method, device, equipment and computer readable medium
CN113362811A (en) Model training method, speech recognition method, device, medium and equipment
CN113327599A (en) Voice recognition method, device, medium and electronic equipment
CN108268936B (en) Method and apparatus for storing convolutional neural networks
CN114049072B (en) Index determination method and device, electronic equipment and computer readable medium
CN116562600B (en) Water supply control method, device, electronic equipment and computer readable medium
CN113779316A (en) Information generation method and device, electronic equipment and computer readable medium
CN117035842A (en) Model training method, traffic prediction method, device, equipment and medium
CN115630585B (en) Method, apparatus, device and computer readable medium for predicting commodity circulation quantity
CN116977885A (en) Video text task processing method and device, electronic equipment and readable storage medium
CN115757933A (en) Recommendation information generation method, device, equipment, medium and program product
CN114639072A (en) People flow information generation method and device, electronic equipment and computer readable medium
CN115222036A (en) Model training method, characterization information acquisition method and route planning method
CN113361701A (en) Quantification method and device of neural network model
CN111582456A (en) Method, apparatus, device and medium for generating network model information
CN116107666B (en) Program service flow information generation method, device, electronic equipment and computer medium
CN111949938B (en) Determination method and device of transaction information, electronic equipment and computer readable medium
CN111582482A (en) Method, apparatus, device and medium for generating network model information
CN117112869A (en) Article classification result generation method, apparatus, device, medium and program product
CN117196199A (en) Article scheduling method, apparatus, electronic device and computer readable medium
CN117913779A (en) Method, apparatus, electronic device and readable medium for predicting electric load information
CN115221427A (en) Time series prediction method, apparatus, device, medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant