CN117035842A - Model training method, traffic prediction method, device, equipment and medium

Model training method, traffic prediction method, device, equipment and medium

Info

Publication number
CN117035842A
CN117035842A (application CN202310943649.6A)
Authority
CN
China
Prior art keywords
sequence
traffic
characteristic data
time
prediction model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310943649.6A
Other languages
Chinese (zh)
Inventor
郑大念
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202310943649.6A
Publication of CN117035842A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • G06Q30/0202Market predictions or forecasting for commercial activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The embodiments of the disclosure disclose a model training method, a traffic prediction method, a device, equipment and a medium. One embodiment of the method comprises the following steps: acquiring a first actual traffic sequence and a first characteristic data sequence for a first historical time sequence; inputting the first actual traffic sequence and the first characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a first predicted traffic sequence; acquiring a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence, and acquiring a third characteristic data sequence; and training an initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence and the first predicted traffic sequence to obtain a traffic fitting prediction model. This embodiment relates to artificial intelligence; the traffic fitting prediction model can generate more accurate traffic predictions.

Description

Model training method, traffic prediction method, device, equipment and medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a model training method, a traffic prediction method, a device, equipment, and a medium.
Background
Currently, traffic prediction generally aims to predict the traffic at predetermined future time points. For this, the following methods are typically adopted: traffic at future time points is predicted either by a multi-model prediction method or by a single-model recursive prediction method.
However, the inventors have found that when the traffic is predicted in the above manner, there are often the following technical problems:
First, in multi-model prediction methods the individual models are trained independently, so the correlation between the predicted data across multiple time steps is not captured, which may lead to inaccurate traffic predictions.
Second, single-model recursive prediction suffers from error diffusion, which can likewise lead to inaccurate traffic predictions.
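The error diffusion problem can be illustrated with a toy example that is not part of the patent: a one-step model with a small multiplicative bias, applied recursively, compounds its error with the forecast horizon, whereas a direct multi-step predictor would keep a constant per-step error. All numbers below are illustrative assumptions.

```python
# Toy illustration of error diffusion in single-model recursive prediction.
true_growth = 1.10       # the actual series grows 10% per step
biased_growth = 1.08     # the one-step model slightly underestimates growth

def recursive_forecast(last_value, horizon):
    """Feed each prediction back in as the next input (recursive scheme)."""
    preds = []
    x = last_value
    for _ in range(horizon):
        x = x * biased_growth   # the model's one-step prediction
        preds.append(x)
    return preds

last = 100.0
truth = [last * true_growth ** (h + 1) for h in range(4)]
preds = recursive_forecast(last, 4)
errors = [abs(t - p) / t for t, p in zip(truth, preds)]

# The relative error grows monotonically with the horizon: error diffusion.
assert all(errors[i] < errors[i + 1] for i in range(3))
```

Because each prediction is fed back as an input, the small one-step bias is raised to the power of the horizon, which is exactly the phenomenon the direct multi-step prediction in this disclosure avoids.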
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not form the prior art that is already known to those of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a model training method, a traffic prediction method, a device, an apparatus, and a medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a model training method, comprising: acquiring a first actual traffic sequence and a first characteristic data sequence for a first historical time sequence; correspondingly inputting the first actual traffic sequence and the first characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a first predicted traffic sequence aiming at a first future time sequence; acquiring a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence, and acquiring a third characteristic data sequence for the first future time sequence; and training an initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence and the first predicted traffic sequence to obtain a traffic fitting prediction model.
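The four steps of the first aspect can be sketched as follows. This is a minimal, hypothetical illustration, not the patent's implementation: `TimingModel` stands in for the pre-trained traffic time sequence prediction model, `FittingModel` for the initial traffic fitting prediction model, and all sequence values are made up; real implementations would be neural networks.

```python
class TimingModel:
    """Stand-in for the pre-trained timing model: predicts future traffic."""
    def predict(self, actual_traffic, future_features):
        # naive placeholder: repeat the last observed value per future step
        return [actual_traffic[-1]] * len(future_features)

class FittingModel:
    """Stand-in for the initial fitting model, trained on (feature, traffic) pairs."""
    def __init__(self):
        self.samples = []
    def fit(self, feature_seqs, traffic_seqs):
        for f, t in zip(feature_seqs, traffic_seqs):
            self.samples.extend(zip(f, t))

# Steps 1-2: predict the first future time sequence with the timing model.
first_actual = [120, 130, 125, 140]
first_features = [[0, 1], [1, 0], [0, 0], [1, 1]]
first_predicted = TimingModel().predict(first_actual, first_features)

# Step 3: historical (second) and future (third) feature/traffic sequences.
second_actual = [110, 115, 128, 133]
second_features = [[0, 0], [0, 1], [1, 0], [1, 1]]
third_features = [[0, 1], [1, 1], [0, 0], [1, 0]]

# Step 4: train the fitting model on the historical pairs plus the
# (future features, predicted traffic) pairs, so it learns the timing
# model's multi-step structure without recursing on its own outputs.
fitting_model = FittingModel()
fitting_model.fit([second_features, third_features],
                  [second_actual, first_predicted])
assert len(fitting_model.samples) == 8
```

The key design point is that the fitting model is supervised both by real traffic and by the timing model's direct multi-step predictions, which is how the two error sources identified in the background are addressed jointly.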
Optionally, the traffic timing prediction model is trained by: acquiring a third actual traffic sequence corresponding to a third historical time sequence and a traffic label corresponding to time information to be predicted; determining a fourth characteristic data sequence for the third historical time sequence; and taking the third actual traffic sequence and the fourth characteristic data sequence as model inputs, taking the traffic label as prediction output, and performing time sequence model training on the initial traffic time sequence prediction model to generate a traffic time sequence prediction model.
Optionally, the second characteristic data in the second characteristic data sequence includes at least one of the following: holiday encoded data, traffic trend encoded data, and impact event encoded data.
Optionally, the traffic trend encoded data is encoded by performing the following encoding step for each second characteristic data in the second characteristic data sequence: screening, from the second historical time sequence according to the second actual traffic sequence, the second historical times whose corresponding second actual traffic meets a preset traffic inflection point condition, to obtain a second historical time subsequence; determining the second historical time corresponding to the second characteristic data as a target historical time; determining, in the second historical time subsequence, a group of second historical times temporally related to the target historical time; and determining the traffic trend encoded data for the second characteristic data according to the second historical time group.
In a second aspect, some embodiments of the present disclosure provide a model training apparatus comprising: a first acquisition unit configured to acquire a first actual traffic volume sequence and a first characteristic data sequence for a first historical time sequence; a first input unit configured to input the first actual traffic sequence and the first feature data sequence to a pre-trained traffic sequence prediction model, to obtain a first predicted traffic sequence for a first future time sequence; a second acquisition unit configured to acquire a second actual traffic volume sequence and a second characteristic data sequence for a second historical time sequence, and to acquire a third characteristic data sequence for the first future time sequence; and the training unit is configured to train the initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence and the first predicted traffic sequence to obtain a traffic fitting prediction model.
In a third aspect, some embodiments of the present disclosure provide a traffic prediction method, including: acquiring a fourth actual traffic sequence and a fifth characteristic data sequence corresponding to a fourth historical time sequence; generating a second predicted traffic sequence for a second future time sequence using a pre-trained traffic fitting prediction model based on the fourth actual traffic sequence and the fifth characteristic data sequence, wherein the traffic fitting prediction model is generated based on the model training method of the present disclosure; acquiring a fifth actual traffic sequence and a sixth characteristic data sequence, wherein the fifth actual traffic sequence is a traffic sequence for the current time information; and inputting the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the second future time sequence is a time information sequence whose corresponding time period is after the current time information and before the third future time sequence.
Optionally, the method further comprises: acquiring a fifth actual traffic sequence and a sixth characteristic data sequence, wherein the fifth actual traffic sequence is a traffic sequence for the current time information; and inputting the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the second future time sequence is a time information sequence whose corresponding time period is after the current time information and before the third future time sequence.
Optionally, generating a second predicted traffic sequence for a second future time sequence using a pre-trained traffic fitting prediction model according to the fourth actual traffic sequence and the fifth characteristic data sequence includes: determining at least one fifth historical time sequence having a contemporaneous time relationship with the fourth historical time sequence; inputting at least one sixth actual traffic sequence and at least one seventh characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the actual traffic sequences corresponding to the at least one fifth historical time sequence are the at least one sixth actual traffic sequence, and the characteristic data sequences corresponding to the at least one fifth historical time sequence are the at least one seventh characteristic data sequence.
In a fourth aspect, some embodiments of the present disclosure provide a traffic prediction apparatus, including: a third acquisition unit configured to acquire a fourth actual traffic sequence and a fifth characteristic data sequence corresponding to a fourth historical time sequence; a generation unit configured to generate a second predicted traffic sequence for a second future time sequence using a pre-trained traffic fitting prediction model based on the fourth actual traffic sequence and the fifth characteristic data sequence, wherein the traffic fitting prediction model is generated based on the model training method of the present disclosure; a fourth acquisition unit configured to acquire a fifth actual traffic sequence and a sixth characteristic data sequence, wherein the fifth actual traffic sequence is a traffic sequence for the current time information; and a second input unit configured to input the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the second future time sequence is a time information sequence whose corresponding time period is after the current time information and before the third future time sequence.
Optionally, the generating unit may be further configured to: determining at least one fifth historical time series having a contemporaneous time relationship with the fourth historical time series; inputting at least one sixth actual traffic sequence and at least one seventh characteristic data sequence into a pre-trained traffic sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the actual traffic sequence corresponding to the at least one fifth historical time sequence is at least one sixth actual traffic sequence, and the characteristic data sequence corresponding to the at least one fifth historical time sequence is at least one seventh characteristic data sequence.
In a fifth aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first and third aspects.
In a sixth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements a method as described in any of the implementations of the first and third aspects.
In a seventh aspect, some embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the method described in any one of the implementations of the first and third aspects above.
The above embodiments of the present disclosure have the following advantageous effects: with the model training method of some embodiments of the present disclosure, more accurate traffic predictions can be generated by the traffic fitting prediction model. Specifically, the reasons that related traffic predictions are not accurate enough are as follows. First, in multi-model prediction methods the individual models are trained independently, so the correlation between the predicted data across multiple time steps is not captured, which may lead to inaccurate traffic predictions. Second, single-model recursive prediction suffers from error diffusion, which can likewise lead to inaccurate traffic predictions. Based on this, the model training method of some embodiments of the present disclosure first obtains a first actual traffic sequence and a first characteristic data sequence for a first historical time sequence, for the subsequent generation of a first predicted traffic sequence for a first future time sequence. Then, the first actual traffic sequence and the first characteristic data sequence are input into a pre-trained traffic time sequence prediction model to obtain the first predicted traffic sequence for the first future time sequence. Here, the first predicted traffic sequence corresponding to the first future time sequence is predicted directly by the traffic time sequence prediction model, so the phenomenon of error diffusion is avoided and the accuracy of the first predicted traffic sequence is improved. Further, a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence are acquired, together with a third characteristic data sequence for the first future time sequence. The acquired second actual traffic sequence, second characteristic data sequence and third characteristic data sequence are used for the subsequent training of the initial traffic fitting prediction model.
Finally, an initial traffic fitting prediction model is trained according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence and the first predicted traffic sequence, to obtain a traffic fitting prediction model that fits traffic more accurately. The traffic fitting prediction model is trained by fitting traffic to the corresponding characteristic data, so the resulting model takes the dependency relationship among the data of different time steps into account, and by learning the fitting relationship between the third characteristic data sequence and the first predicted traffic sequence it avoids the error diffusion problem of recursive prediction. In summary, predicting the traffic in future time periods with the traffic time sequence prediction model and the traffic fitting prediction model not only accounts for the dependency among data at multiple time steps, but also avoids the error diffusion problem caused by recursive prediction. Thus, more accurate traffic predictions can be generated.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of a model training method according to some embodiments of the present disclosure;
FIG. 2 is a flow chart of some embodiments of a model training method according to the present disclosure;
FIG. 3 is a flow chart of some embodiments of a traffic prediction method according to the present disclosure;
FIG. 4 is a flow chart of further embodiments of a traffic prediction method according to the present disclosure;
FIG. 5 is a schematic structural view of some embodiments of a model training apparatus according to the present disclosure;
FIG. 6 is a schematic diagram of the architecture of some embodiments of a traffic prediction device according to the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than restrictive, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of one application scenario of a model training method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the electronic device 101 may acquire a first actual traffic sequence 103 and a first feature data sequence 104 for a first historical time sequence 102. The electronic device 101 may then input the first actual traffic sequence 103 and the first characteristic data sequence 104 into a pre-trained traffic timing prediction model 105, resulting in a first predicted traffic sequence 106 for a first future time sequence 107. Next, the electronic device 101 may obtain a second actual traffic sequence 109 and a second characteristic data sequence 110 for the second historical time sequence 108, and a third characteristic data sequence 111 for the first future time sequence 107 described above. Finally, the electronic device 101 may train the initial traffic fitting prediction model 112 according to the second characteristic data sequence 110, the third characteristic data sequence 111, the second actual traffic sequence 109, and the first predicted traffic sequence 106, to obtain a traffic fitting prediction model 113.
The electronic device 101 may be hardware or software. When the electronic device is hardware, the electronic device may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or may be implemented as a single server or a single terminal device. When the electronic device is embodied as software, it may be installed in the above-listed hardware device. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
It should be understood that the number of electronic devices in fig. 1 is merely illustrative. There may be any number of electronic devices as desired for an implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a model training method according to the present disclosure is shown. The model training method comprises the following steps:
step 201, a first actual traffic sequence and a first characteristic data sequence for a first historical time sequence are acquired.
In some embodiments, the execution subject of the model training method (e.g., the electronic device 101 shown in fig. 1) may obtain the first actual traffic sequence and the first characteristic data sequence for the first historical time sequence through a wired or wireless connection. The first historical times in the first historical time sequence may be time information before the current time, separated by a predetermined interval length (i.e., the corresponding time step). For example, the predetermined interval length may be 1 week. The first actual traffic in the first actual traffic sequence corresponds one-to-one to the first historical times in the first historical time sequence. The first actual traffic may be the actual traffic determined from the traffic demand of an actual business scenario; in practice, it may be order sales volume, customer service call volume, or the number of user consultations. The first characteristic data in the first characteristic data sequence likewise corresponds one-to-one to the first historical times, and characterizes the actual business scenario at the corresponding first historical time. For example, when the actual business scenario is a sales scenario, the corresponding first characteristic data may include: duration of the sales activity, sales volume, sales type, and sales audience.
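The parallel-sequence structure described above can be made concrete with a purely illustrative example; the dates, traffic values, and feature field names below are assumptions, not data from the patent.

```python
# Three weekly historical times, with one actual traffic value and one
# feature-data record per time, in one-to-one correspondence.
first_historical_times = ["2023-01-07", "2023-01-14", "2023-01-21"]
first_actual_traffic = [420, 515, 480]   # e.g. weekly order sales
first_feature_data = [
    {"activity_hours": 24, "sales_type": "flash"},
    {"activity_hours": 48, "sales_type": "regular"},
    {"activity_hours": 12, "sales_type": "flash"},
]

# The one-to-one correspondence means all three sequences share a length.
assert len(first_historical_times) == len(first_actual_traffic) \
    == len(first_feature_data)
```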
Optionally, the first feature data in the first feature data sequence includes at least one of the following: holiday encoded data, traffic trend encoded data, and impact event encoded data.
The holiday encoded data may be encoded data obtained by encoding whether the historical time corresponding to the characteristic data is a holiday. The traffic trend encoded data may be encoded data obtained by encoding the traffic trend. The impact event encoded data may be encoded data obtained by encoding impact events occurring at the corresponding first historical time. In practice, holiday encoded data may be characterized by 3 digits, where the first digit characterizes yesterday's holiday status, the second digit characterizes today's holiday status, and the third digit characterizes tomorrow's holiday status. Specifically, a holiday may be represented by "1" and a non-holiday by "0". The holiday encoded data may include: 000, 001, 010, 011, 100, 110, 111.
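The 3-digit holiday encoding above can be sketched as follows. The helper name, the example holiday set, and the date handling are illustrative assumptions; only the yesterday/today/tomorrow bit layout comes from the text.

```python
from datetime import date, timedelta

# Hypothetical holiday calendar for the example.
HOLIDAYS = {date(2023, 1, 1), date(2023, 1, 2)}

def holiday_code(day: date) -> str:
    """One bit each for yesterday, today, tomorrow: '1' = holiday."""
    bits = []
    for offset in (-1, 0, 1):
        d = day + timedelta(days=offset)
        bits.append("1" if d in HOLIDAYS else "0")
    return "".join(bits)

# Jan 1, 2023: yesterday (Dec 31) is not a holiday, today and tomorrow are.
assert holiday_code(date(2023, 1, 1)) == "011"
# Jan 3, 2023: only yesterday (Jan 2) was a holiday.
assert holiday_code(date(2023, 1, 3)) == "100"
```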
Step 202, inputting the first actual traffic sequence and the first characteristic data sequence to a pre-trained traffic time sequence prediction model correspondingly, so as to obtain a first predicted traffic sequence aiming at a first future time sequence.
In some embodiments, the executing entity may input the first actual traffic sequence and the first characteristic data sequence into the pre-trained traffic time sequence prediction model to obtain a first predicted traffic sequence for a first future time sequence. The traffic time sequence prediction model may be a time sequence neural network model for predicting the traffic corresponding to future time points. In practice, the traffic time sequence prediction model may be a Long Short-Term Memory (LSTM) network. The first future times in the first future time sequence may be time information after the current time information. For example, if the current time information is "January 1", the first future time sequence may include: "January 7", "January 14", "January 21", "January 28". The first predicted traffic in the first predicted traffic sequence may be the predicted traffic corresponding to the first future time; that is, the first predicted traffic in the first predicted traffic sequence corresponds one-to-one to the first future times in the first future time sequence.
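The interface of such a direct multi-step timing model can be sketched as below. This is an assumption-laden illustration, not the patent's LSTM: the model maps a whole history window plus per-step features to all future steps in one pass, rather than recursing one step at a time, and the stand-in "model" simply averages the history.

```python
def predict_future_sequence(model, actual_history, feature_history, horizon):
    """Return `horizon` predictions in one forward pass (direct multi-step)."""
    # Pair each historical traffic value with its feature vector.
    inputs = [list(f) + [a] for a, f in zip(actual_history, feature_history)]
    return model(inputs, horizon)

def mean_model(inputs, horizon):
    """Stand-in model: repeat the mean of the observed traffic per step."""
    traffic = [row[-1] for row in inputs]
    mean = sum(traffic) / len(traffic)
    return [mean] * horizon

preds = predict_future_sequence(mean_model, [100, 110, 120],
                                [[0], [1], [0]], horizon=4)
# One prediction per first future time, produced without recursion.
assert len(preds) == 4 and preds[0] == 110.0
```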
In some alternative implementations of some embodiments, the traffic timing prediction model described above is trained by:
The first step, a third actual traffic sequence corresponding to a third historical time sequence and a traffic label corresponding to time information to be predicted are obtained.
The predetermined time interval between the third historical times in the third historical time sequence may be the same as the predetermined time interval corresponding to the first historical time sequence, while the time period covered by the third historical time sequence may differ from that of the first historical time sequence. The third actual traffic sequence is not described in detail; reference may be made to the explanation of the first actual traffic sequence. The traffic label corresponding to the time information to be predicted may be the actual traffic at the time point to be predicted.
For example, the third historical time sequence includes: "February 1", "February 2", "February 3", "February 4"; that is, the predetermined time interval corresponding to the third historical time sequence is 1 day. The third actual traffic sequence comprises the sales corresponding to "February 1", "February 2", "February 3" and "February 4". The time information to be predicted may be "February 5", and the traffic label may be the sales corresponding to "February 5".
And a second step of determining a fourth characteristic data sequence for the third historical time sequence. The fourth feature data sequence is not described in detail, and reference may be made to the explanation of the first feature data sequence.
Optionally, the fourth characteristic data in the fourth characteristic data sequence includes at least one of the following: holiday encoded data, traffic trend encoded data, and impact event encoded data.
And thirdly, taking the third actual traffic sequence and the fourth characteristic data sequence as the model input and the traffic label as the prediction output, performing time sequence model training on an initial traffic time sequence prediction model to generate the traffic time sequence prediction model.
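The three training steps above can be sketched as assembling one supervised sample from the February example; the numeric values and the helper name are assumptions, and the training routine itself is a placeholder rather than the patent's procedure.

```python
# Step 1: actual traffic for Feb 1-4 and the label for Feb 5 (made-up numbers).
third_actual = [50, 62, 58, 70]
traffic_label = 75

# Step 2: one feature vector per third historical time (made-up encodings).
fourth_features = [[0, 0], [0, 0], [0, 1], [1, 0]]

# Step 3: combine traffic and features into the model input, paired with
# the traffic label as the prediction target.
def build_training_sample(actual_seq, feature_seq, label):
    x = [[a] + list(f) for a, f in zip(actual_seq, feature_seq)]
    return x, label

x, y = build_training_sample(third_actual, fourth_features, traffic_label)
assert len(x) == 4 and x[0] == [50, 0, 0] and y == 75
```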
Step 203, obtaining a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence, and obtaining a third characteristic data sequence for the first future time sequence.
In some embodiments, the executing entity may obtain a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence, and obtain a third characteristic data sequence for the first future time sequence. The second historical time sequence, the second actual traffic sequence, and the second characteristic data sequence are not explained again here; reference may be made to the above explanations of the first historical time sequence, the first actual traffic sequence, and the first characteristic data sequence. Likewise, for the third characteristic data sequence, reference may be made to the explanation of the first characteristic data sequence.
The second historical time sequence is different from the first historical time sequence in time steps. The second historical time in the second historical time series has a one-to-one correspondence with the second actual traffic in the second actual traffic series. The second historical time in the second historical time sequence has a one-to-one correspondence with the second feature data in the second feature data sequence. The first future time in the first future time sequence has a one-to-one correspondence with the third feature data in the third feature data sequence.
In some optional implementations of some embodiments, the second feature data in the second feature data sequence includes at least one of: holiday encoded data, traffic trend encoded data, and impact event encoded data.
Optionally, the traffic trend encoded data is encoded by:
for each second characteristic data item in the above second characteristic data sequence, the following encoding steps are performed:
and step 1, screening a second historical time corresponding to the second actual traffic meeting the inflection point condition of the preset traffic from the second historical time sequence according to the second actual traffic sequence to obtain a second historical time subsequence.
The preset traffic inflection point condition may be that the second actual traffic lies at an inflection point position on the traffic curve formed by the second actual traffic sequence. The traffic curve may characterize the variation in magnitude of each second actual traffic in the second actual traffic sequence.
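A minimal sketch of the inflection point screening, treating an "inflection point" as a local extremum on the traffic curve (one plausible reading of the condition; the patent does not pin down the exact test, and the data are invented):

```python
def inflection_subsequence(times, traffic):
    # Keep the times whose traffic value is a local extremum (local peak or
    # local valley) on the curve formed by the traffic sequence.
    keep = []
    for i in range(1, len(traffic) - 1):
        prev, cur, nxt = traffic[i - 1], traffic[i], traffic[i + 1]
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            keep.append(times[i])
    return keep

times = ["1-1", "1-2", "1-3", "1-4", "1-5"]
traffic = [3, 7, 2, 5, 4]
print(inflection_subsequence(times, traffic))  # ['1-2', '1-3', '1-4']
```

The returned list plays the role of the second historical time subsequence used by the later sub-steps.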
Sub-step 2: determine the second historical time corresponding to the second characteristic data as a target historical time.
Sub-step 3: determine, in the second historical time subsequence, a second historical time group that has a time-sequential association with the target historical time. A time-sequential association may mean that the times bracket the target historical time in chronological order. For example, suppose the second historical time subsequence includes: "January 1", "January 8", "January 11", and "January 18", and the target historical time falls between the second and third of these. Then the second historical time group associated with the target historical time includes "January 8" and "January 11".
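The bracketing lookup in this sub-step can be sketched as follows, using day-of-month integers as stand-ins for the times and a hypothetical target of day 10 (the target value is an assumption; the text only gives the bracketing pair):

```python
def bounding_group(subsequence, target):
    # Scan adjacent pairs of the chronologically sorted inflection times and
    # return the pair that brackets the target time, or None if none does.
    for earlier, later in zip(subsequence, subsequence[1:]):
        if earlier <= target <= later:
            return earlier, later
    return None

sub = [1, 8, 11, 18]  # stand-ins for "January 1/8/11/18"
print(bounding_group(sub, 10))  # (8, 11)
```

With real timestamps the same comparison works as long as the subsequence is kept sorted.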
Sub-step 4: determine traffic trend coded data for the second characteristic data according to the second historical time group.
As an example, first, the execution subject may subtract the earliest second historical time in the second historical time group from the target historical time to obtain a first subtraction value. Then, the executing entity may subtract the earliest second historical time in the group from the latest second historical time in the group to obtain a second subtraction value. The first subtraction value is then divided by the second subtraction value to obtain a division value. Further, the number of time steps between the two second historical times in the group is determined, and the vector dimension of the traffic trend coded data is set according to this number. Finally, the traffic trend coded data are generated from the number of time steps between the target historical time and the earliest second historical time, the division value, and the number of time steps between the two second historical times. For example, if the number of time steps between the two second historical times is 6, the number of time steps between the target historical time and the earliest second historical time is 2, and the division value is 3, the traffic trend coded data may be (0, 0, 3, 0, 0, 0).
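Under the stated example (6 time steps between the bounding times, offset 2, division value 3), the vector construction could be sketched as follows; the placement rule is an assumption inferred from the example, not spelled out by the patent:

```python
def trend_encoding(steps_to_target, steps_between, division_value):
    # Hypothetical sketch: a zero vector whose dimension equals the number of
    # time steps between the two bounding inflection times, with the division
    # value placed at the target time's offset from the earliest time.
    vector = [0.0] * steps_between
    vector[steps_to_target] = float(division_value)
    return tuple(vector)

print(trend_encoding(2, 6, 3))  # (0.0, 0.0, 3.0, 0.0, 0.0, 0.0)
```

This reproduces the (0, 0, 3, 0, 0, 0) vector of the example; other dimensions follow the same pattern.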
Step 204: train an initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence, and the first predicted traffic sequence to obtain a traffic fitting prediction model.
In some embodiments, the executing entity may train the initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence, and the first predicted traffic sequence to obtain a traffic fitting prediction model. The initial traffic fitting prediction model may be a traffic fitting prediction model that has not yet been trained, and the traffic fitting prediction model may be a network model that implements fitting regression. In practice, the traffic fitting prediction model may be a convolutional neural network (Convolutional Neural Networks, CNN) model.
As an example, the execution body may take the second characteristic data sequence and the third characteristic data sequence as model inputs, take the second actual traffic sequence and the first predicted traffic sequence as fitting targets, and train the initial traffic fitting prediction model to obtain the traffic fitting prediction model.
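The patent names a CNN as the fitting model; as a minimal stand-in that illustrates fitting regression from feature encodings to traffic targets, here is a linear least-squares sketch (the feature rows and traffic values are invented for illustration and do not come from the patent):

```python
import numpy as np

# Rows: feature encodings for historical and future times (second + third
# characteristic data sequences, stacked). Targets: second actual traffic
# followed by the first predicted traffic.
features = np.array([[1, 0], [0, 1], [1, 1], [2, 1]], dtype=float)
targets = np.array([3.0, 5.0, 8.0, 11.0])

# Fit a regressor mapping features to traffic, then read back its fit.
weights, *_ = np.linalg.lstsq(features, targets, rcond=None)
fitted = features @ weights
print(np.round(fitted, 2))  # [ 3.  5.  8. 11.]
```

A trained CNN would replace the least-squares step, but the input/target arrangement — features in, actual plus predicted traffic as fitting targets — is the same.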
The above embodiments of the present disclosure have the following advantageous effects: by using the model training method of some embodiments of the present disclosure, more accurate predicted traffic can be generated through the traffic fitting prediction model. Specifically, the reasons that related traffic prediction is not accurate enough are as follows: first, multi-model prediction methods train each individual model independently, and the lack of correlation dependence among predicted data over multiple time steps may lead to inaccurate traffic prediction; second, single-model recursive prediction suffers from an error diffusion phenomenon, which can also lead to inaccurate traffic prediction. Based on this, the model training method of some embodiments of the present disclosure first obtains a first actual traffic sequence and a first characteristic data sequence for a first historical time sequence, for subsequent generation of a first predicted traffic sequence for a first future time sequence. Then, the first actual traffic sequence and the first characteristic data sequence are correspondingly input into a pre-trained traffic time sequence prediction model to obtain the first predicted traffic sequence for the first future time sequence. Here, the first predicted traffic sequence corresponding to the first future time sequence is predicted directly by the traffic time sequence prediction model, so that the error diffusion phenomenon is avoided and the accuracy of the first predicted traffic sequence is improved. Further, a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence are acquired, as well as a third characteristic data sequence for the first future time sequence. The acquired second actual traffic sequence, second characteristic data sequence, and third characteristic data sequence are used for the subsequent training of the initial traffic fitting prediction model.
Finally, an initial traffic fitting prediction model is trained according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence, and the first predicted traffic sequence to obtain a traffic fitting prediction model that fits traffic more accurately. Because the traffic fitting prediction model is trained by fitting traffic to its corresponding characteristic data, the resulting model takes the dependency relationships among time step data into account, and by learning the fitting relationship between the third characteristic data sequence and the first predicted traffic sequence, it avoids the error diffusion problem of recursive prediction. In summary, predicting traffic in a future time period with the traffic time sequence prediction model together with the traffic fitting prediction model accounts both for the dependency relationships among multiple time steps of data and for the error diffusion problem caused by recursive prediction. More accurate traffic predictions can thus be generated.
With further reference to fig. 3, a flow 300 of some embodiments of a traffic prediction method according to the present disclosure is shown. The traffic prediction method comprises the following steps:
step 301, a fourth actual traffic sequence and a fifth characteristic data sequence corresponding to the fourth historical time sequence are obtained.
In some embodiments, the executing body (e.g., the electronic device) may obtain a fourth actual traffic sequence and a fifth characteristic data sequence corresponding to the fourth historical time sequence. Wherein the fourth historical time series may be a time information series different from the first historical time series. The specific meaning may be seen in the interpretation of the first historical time series. The fourth historical time in the fourth historical time series has a one-to-one correspondence with the fourth actual traffic in the fourth actual traffic series. The fourth historical time in the fourth historical time series has a one-to-one correspondence with the fifth characteristic data in the fifth characteristic data series.
Optionally, the fifth characteristic data in the fifth characteristic data sequence includes at least one of the following: holiday encoded data, traffic trend encoded data, and impact event encoded data.
Step 302, generating a second predicted traffic sequence for a second future time sequence using a pre-trained traffic fitting prediction model according to the fourth actual traffic sequence and the fifth characteristic data sequence.
In some embodiments, the executing entity may generate a second predicted traffic sequence for a second future time sequence using a pre-trained traffic fitting prediction model according to the fourth actual traffic sequence and the fifth characteristic data sequence. The traffic fitting prediction model is generated based on the model training method of the present disclosure.
As an example, the above-described execution body may input a fourth actual traffic sequence and a fifth feature data sequence to a pre-trained traffic fitting prediction model to generate a second predicted traffic sequence for a second future time sequence.
Step 303, a fifth actual traffic sequence and a sixth characteristic data sequence are acquired.
In some embodiments, the executing entity may obtain a fifth actual traffic sequence and a sixth characteristic data sequence. The fifth actual traffic sequence is not described in detail, and reference may be made to the explanation of the first actual traffic sequence. Likewise, the sixth characteristic data sequence is not described in detail, and reference may be made to the explanation of the first characteristic data sequence.
Optionally, the sixth characteristic data in the sixth characteristic data sequence includes at least one of the following: holiday encoded data, traffic trend encoded data, and impact event encoded data.
Step 304, inputting the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence.
In some embodiments, the executing entity may input the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence. The second future time sequence is a time information sequence whose corresponding time period is after the current time information and before that of the third future time sequence. For example, if the current time information is "January 1", the second future time sequence may include: "January 2", "January 3", "January 4", and "January 5", and the third future time sequence may include: "January 6", "January 7", "January 8", and "January 9". Each third future time in the third future time sequence has a one-to-one correspondence with a third predicted traffic in the third predicted traffic sequence.
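The layout of the current time, the second future time sequence, and the third future time sequence in this example can be sketched as follows (the helper name and 4-day horizon are illustrative):

```python
from datetime import date, timedelta

def future_windows(current, horizon):
    # The second future sequence starts immediately after the current time;
    # the third future sequence starts immediately after the second ends.
    second = [current + timedelta(days=i) for i in range(1, horizon + 1)]
    third = [second[-1] + timedelta(days=i) for i in range(1, horizon + 1)]
    return second, third

second, third = future_windows(date(2023, 1, 1), 4)
print(second[0], second[-1], third[0], third[-1])
# 2023-01-02 2023-01-05 2023-01-06 2023-01-09
```

This reproduces the January 2-5 and January 6-9 windows of the example above.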
The above embodiments of the present disclosure have the following advantageous effects: by the traffic prediction method of some embodiments of the present disclosure, traffic within a predetermined time period in the future can be accurately predicted using a traffic timing prediction model and a traffic fitting prediction model.
With further reference to fig. 4, a flow 400 of further embodiments of traffic prediction methods according to the present disclosure is shown. The traffic prediction method comprises the following steps:
step 401, a fourth actual traffic sequence and a fifth characteristic data sequence corresponding to the fourth historical time sequence are obtained.
Step 402, determining at least one fifth historical time series having a contemporaneous time relationship with the fourth historical time series.
In some embodiments, an executing subject (e.g., an electronic device) may determine at least one fifth historical time series that has a contemporaneous time relationship with the fourth historical time series described above.
Step 403, inputting the at least one sixth actual traffic sequence and the at least one seventh characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence.
In some embodiments, the executing entity may input the at least one sixth actual traffic sequence and the at least one seventh characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence. The actual traffic sequences corresponding to the at least one fifth historical time sequence are the at least one sixth actual traffic sequence, and the characteristic data sequences corresponding to the at least one fifth historical time sequence are the at least one seventh characteristic data sequence.
Optionally, the seventh feature data in the seventh feature data sequence includes at least one of: holiday encoded data, traffic trend encoded data, and impact event encoded data.
Step 404, a fifth actual traffic sequence and a sixth characteristic data sequence are acquired.
Step 405, inputting the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence.
The above embodiments of the present disclosure have the following advantageous effects: according to the traffic prediction method of some embodiments of the present disclosure, by adding at least one historical time sequence with a contemporaneous time relationship, the traffic time sequence prediction model can take more characteristic information into account and thus predict traffic within a future predetermined time period more accurately.
With further reference to fig. 5, as an implementation of the method illustrated in the above figures, the present disclosure provides some embodiments of a model training apparatus, which correspond to those method embodiments illustrated in fig. 2, which may find particular application in a variety of electronic devices.
As shown in fig. 5, a model training apparatus 500 includes: a first acquisition unit 501, a first input unit 502, a second acquisition unit 503, and a training unit 504. The first acquisition unit 501 is configured to acquire a first actual traffic sequence and a first characteristic data sequence for a first historical time sequence; the first input unit 502 is configured to input the first actual traffic sequence and the first characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a first predicted traffic sequence for a first future time sequence; the second acquisition unit 503 is configured to acquire a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence, and to acquire a third characteristic data sequence for the above first future time sequence; the training unit 504 is configured to train an initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence, and the first predicted traffic sequence to obtain a traffic fitting prediction model.
In some alternative implementations of some embodiments, the traffic timing prediction model described above is trained by: acquiring a third actual traffic sequence corresponding to a third historical time sequence and a traffic label corresponding to time information to be predicted; determining a fourth characteristic data sequence for the third historical time sequence; and taking the third actual traffic sequence and the fourth characteristic data sequence as model inputs, taking the traffic label as prediction output, and performing time sequence model training on the initial traffic time sequence prediction model to generate a traffic time sequence prediction model.
In some optional implementations of some embodiments, the second feature data in the second feature data sequence includes at least one of: holiday encoded data, traffic trend encoded data, and impact event encoded data.
In some alternative implementations of some embodiments, the traffic trend encoded data is encoded by: for each second characteristic data in the above-mentioned second characteristic data sequence, the following encoding step is performed: screening a second historical time corresponding to the second actual traffic meeting the inflection point condition of the preset traffic from the second historical time sequence according to the second actual traffic sequence to obtain a second historical time subsequence; determining a second historical time corresponding to the second characteristic data as a target historical time; determining a second historical time group which is in the second historical time sub-sequence and is related to the existence time of the target historical time in sequence; and determining traffic trend coded data aiming at the second characteristic data according to the second historical time group.
It will be appreciated that the elements described in the model training apparatus 500 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features, and benefits described above with respect to the method are equally applicable to the model training apparatus 500 and the units contained therein, and are not described in detail herein.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of a traffic prediction apparatus, which correspond to those method embodiments shown in fig. 3, and which are particularly applicable in various electronic devices.
As shown in fig. 6, a traffic prediction apparatus 600 includes: a third acquisition unit 601, a generation unit 602, a fourth acquisition unit 603, and a second input unit 604. The third acquisition unit 601 is configured to acquire a fourth actual traffic sequence and a fifth characteristic data sequence corresponding to a fourth historical time sequence; the generation unit 602 is configured to generate a second predicted traffic sequence for a second future time sequence using a pre-trained traffic fitting prediction model according to the fourth actual traffic sequence and the fifth characteristic data sequence, wherein the traffic fitting prediction model is generated based on the model training method of the present disclosure; the fourth acquisition unit 603 is configured to acquire a fifth actual traffic sequence and a sixth characteristic data sequence, wherein the fifth actual traffic sequence is a traffic sequence for the current time information; the second input unit 604 is configured to input the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the second future time sequence is a time information sequence whose corresponding time period is after the current time information and before that of the third future time sequence.
In some optional implementations of some embodiments, the generation unit 602 may be further configured to: determine at least one fifth historical time sequence having a contemporaneous time relationship with the fourth historical time sequence; and input at least one sixth actual traffic sequence and at least one seventh characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the actual traffic sequences corresponding to the at least one fifth historical time sequence are the at least one sixth actual traffic sequence, and the characteristic data sequences corresponding to the at least one fifth historical time sequence are the at least one seventh characteristic data sequence.
It will be appreciated that the elements described in the traffic prediction device 600 correspond to the various steps in the method described with reference to fig. 3. Thus, the operations, features and advantages described above with respect to the method are equally applicable to the traffic prediction device 600 and the units contained therein, and are not described herein.
Referring now to fig. 7, a schematic diagram of an electronic device 700 (e.g., electronic device 101 of fig. 1) suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is only one example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory 702 or a program loaded from a storage means 708 into a random access memory 703. In the random access memory 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing means 701, the read only memory 702 and the random access memory 703 are connected to each other by a bus 704. An input/output interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the input/output interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 7 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 709, or from storage 708, or from read only memory 702. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 701.
It should be noted that, in some embodiments of the present disclosure, the computer readable medium may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a first actual traffic sequence and a first characteristic data sequence for a first historical time sequence; correspondingly inputting the first actual traffic sequence and the first characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a first predicted traffic sequence aiming at a first future time sequence; acquiring a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence, and acquiring a third characteristic data sequence for the first future time sequence; and training an initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence and the first predicted traffic sequence to obtain a traffic fitting prediction model. 
Alternatively, the one or more programs, when executed by the electronic device, cause the electronic device to: acquire a fourth actual traffic sequence and a fifth characteristic data sequence corresponding to a fourth historical time sequence; generate a second predicted traffic sequence for a second future time sequence using a pre-trained traffic fitting prediction model according to the fourth actual traffic sequence and the fifth characteristic data sequence, wherein the traffic fitting prediction model is generated based on the model training method; acquire a fifth actual traffic sequence and a sixth characteristic data sequence, wherein the fifth actual traffic sequence is a traffic sequence for the current time information; and input the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the second future time sequence is a time information sequence whose corresponding time period is after the current time information and before that of the third future time sequence.
Computer program code for carrying out operations of some embodiments of the present disclosure may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software or by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a first acquisition unit, a first input unit, a second acquisition unit, and a training unit. The names of these units do not in any way limit the units themselves; for example, the first acquisition unit may also be described as "a unit that acquires the first actual traffic sequence and the first characteristic data sequence for the first historical time sequence".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
Some embodiments of the present disclosure also provide a computer program product comprising a computer program which, when executed by a processor, implements any of the model training methods and traffic prediction methods described above.
The foregoing description covers only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions in which the above features are interchanged with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (11)

1. A model training method, comprising:
acquiring a first actual traffic sequence and a first characteristic data sequence for a first historical time sequence;
inputting the first actual traffic sequence and the first characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a first predicted traffic sequence for a first future time sequence;
acquiring a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence, and acquiring a third characteristic data sequence for the first future time sequence;
and training an initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence and the first predicted traffic sequence to obtain a traffic fitting prediction model.
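The two-stage training of claim 1 can be sketched as follows. This is an illustrative reconstruction, not the patent's actual implementation: the closed-form ridge regression used as the "fitting prediction model", the flat feature matrices, and the pairing of future-window features with the timing model's predictions are all assumptions.

```python
import numpy as np

def train_traffic_fitting_model(second_feats, second_actual,
                                third_feats, first_predicted, lam=1e-3):
    """Train a minimal 'traffic fitting prediction model' on two kinds of
    supervised pairs: historical features -> actual traffic (second sequences),
    and future features -> traffic predicted by the pre-trained timing model
    (third feature sequence -> first predicted sequence)."""
    X = np.vstack([second_feats, third_feats])
    y = np.concatenate([second_actual, first_predicted])
    # closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return w

def predict_traffic(w, feats):
    """Apply the trained fitting model to a characteristic data sequence."""
    return feats @ w
```

Under this reading, the timing model's forecasts serve as pseudo-labels that extend the fitting model's training set beyond the observed history.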
2. The method of claim 1, wherein the traffic timing prediction model is trained by:
acquiring a third actual traffic sequence corresponding to a third historical time sequence and a traffic label corresponding to time information to be predicted;
determining a fourth sequence of characteristic data for the third historical time series;
and taking the third actual traffic sequence and the fourth characteristic data sequence as model inputs, taking the traffic label as prediction output, and performing time sequence model training on an initial traffic time sequence prediction model to generate a traffic time sequence prediction model.
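One common way to realize the time sequence model training of claim 2 is to slice the historical traffic into supervised windows, where each model input is a window of actual traffic plus its characteristic data and each label is the traffic at the time information to be predicted. A hedged sketch; the window and horizon sizes and the flattening of the characteristic data are illustrative choices, not taken from the patent.

```python
import numpy as np

def make_timing_pairs(actual, feats, window=4, horizon=1):
    """Build (model input, traffic label) pairs from a historical time series.

    actual:  1-D array of actual traffic values (third actual traffic sequence)
    feats:   2-D array of per-time characteristic data (fourth sequence)
    """
    X, y = [], []
    for t in range(window, len(actual) - horizon + 1):
        # model input: traffic window concatenated with its characteristic data
        x = np.concatenate([actual[t - window:t],
                            feats[t - window:t].ravel()])
        X.append(x)
        # prediction output: the traffic label at the time to be predicted
        y.append(actual[t + horizon - 1])
    return np.array(X), np.array(y)
```

Any sequence model (e.g. a recurrent network or gradient-boosted trees) could then be fitted on these pairs to serve as the traffic time sequence prediction model.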
3. The method of claim 1, wherein second characteristic data in the second characteristic data sequence comprises at least one of: holiday encoded data, traffic trend encoded data, and impact event encoded data.
4. A method according to claim 3, wherein the traffic trend encoded data is encoded by:
For each second characteristic data in the second characteristic data sequence, performing the following encoding steps:
screening a second historical time corresponding to the second actual traffic meeting the inflection point condition of the preset traffic from the second historical time sequence according to the second actual traffic sequence to obtain a second historical time subsequence;
determining a second historical time corresponding to the second characteristic data as a target historical time;
determining a second historical time group in the second historical time subsequence, wherein the second historical time group temporally brackets the target historical time;
and determining traffic trend coded data for the second characteristic data according to the second historical time group.
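The trend encoding of claim 4 can be illustrated with a simple concrete choice of both unspecified pieces: here the "preset traffic inflection point condition" is taken to be a local extremum of the actual traffic, and the trend code for a target time is the slope of the segment between the two inflection points that bracket it. Both choices are assumptions for illustration only.

```python
def inflection_times(actual):
    """Indices satisfying a simple inflection condition: local extrema of the
    actual traffic sequence (endpoints included as segment boundaries)."""
    pts = [0]
    for t in range(1, len(actual) - 1):
        # sign change of the first difference marks a turning point
        if (actual[t] - actual[t - 1]) * (actual[t + 1] - actual[t]) < 0:
            pts.append(t)
    pts.append(len(actual) - 1)
    return pts

def trend_code(actual, t):
    """Encode the trend at target historical time t by the slope of the
    segment between the inflection points bracketing t (the 'time group')."""
    pts = inflection_times(actual)
    left = max(p for p in pts if p <= t)
    right = min(p for p in pts if p >= t)
    if right == left:
        # t is itself an inflection point: no enclosing segment
        return 0.0
    return (actual[right] - actual[left]) / (right - left)
```

For the series [1, 2, 3, 2, 1], index 2 is the only interior inflection point, so times before it encode as a rising trend and times after it as a falling trend.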
5. A traffic prediction method, comprising:
acquiring a fourth actual traffic sequence and a fifth characteristic data sequence corresponding to a fourth historical time sequence;
generating a second predicted traffic sequence for a second future time sequence based on the fourth actual traffic sequence and the fifth characteristic data sequence using a pre-trained traffic fit prediction model, wherein the traffic fit prediction model is generated based on the method of one of claims 1-4;
acquiring a fifth actual traffic sequence and a sixth characteristic data sequence, wherein the fifth actual traffic sequence is a traffic sequence for current time information;
and inputting the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic time sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the second future time sequence is a time information sequence whose corresponding time period is after the current time information and before the time period of the third future time sequence.
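Claim 5 stitches together a near-term forecast from the fitting prediction model and a longer-term forecast from the time sequence prediction model. A minimal sketch, assuming both trained models are available as callables; the callables and their signatures are hypothetical stand-ins for the models of claims 1-4.

```python
import numpy as np

def two_stage_forecast(fitting_model, timing_model,
                       near_future_feats, actual_seq, feat_seq):
    """Return one forecast covering both windows: the fitting model's
    near-term predictions (second predicted sequence) followed by the
    timing model's further-out predictions (third predicted sequence),
    matching the time ordering stated in claim 5."""
    near = fitting_model(near_future_feats)      # second predicted traffic
    far = timing_model(actual_seq, feat_seq)     # third predicted traffic
    return np.concatenate([near, far])
```

The design intent, on this reading, is that the fitting model covers the gap immediately after the current time, while the timing model extends the forecast further into the future.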
6. The method of claim 5, wherein said generating a second predicted traffic sequence for a second future time sequence from said fourth actual traffic sequence and said fifth characteristic data sequence using a pre-trained traffic fit prediction model comprises:
determining at least one fifth historical time series having a contemporaneous time relationship with the fourth historical time series;
inputting at least one sixth actual traffic sequence and at least one seventh characteristic data sequence into a pre-trained traffic sequence prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the actual traffic sequence corresponding to the at least one fifth historical time sequence is at least one sixth actual traffic sequence, and the characteristic data sequence corresponding to the at least one fifth historical time sequence is at least one seventh characteristic data sequence.
7. A model training apparatus comprising:
a first acquisition unit configured to acquire a first actual traffic volume sequence and a first characteristic data sequence for a first historical time sequence;
a first input unit configured to input the first actual traffic sequence and the first characteristic data sequence into a pre-trained traffic timing prediction model to obtain a first predicted traffic sequence for a first future time sequence;
a second acquisition unit configured to acquire a second actual traffic sequence and a second characteristic data sequence for a second historical time sequence, and to acquire a third characteristic data sequence for the first future time sequence;
the training unit is configured to train the initial traffic fitting prediction model according to the second characteristic data sequence, the third characteristic data sequence, the second actual traffic sequence and the first predicted traffic sequence to obtain a traffic fitting prediction model.
8. A traffic prediction apparatus comprising:
a third acquisition unit configured to acquire a fourth actual traffic volume sequence and a fifth characteristic data sequence corresponding to a fourth historical time sequence;
a generation unit configured to generate a second predicted traffic sequence for a second future time sequence from the fourth actual traffic sequence and the fifth characteristic data sequence using a pre-trained traffic fit prediction model, wherein the traffic fit prediction model is generated based on the method of one of claims 1-4;
a fourth acquisition unit configured to acquire a fifth actual traffic sequence and a sixth characteristic data sequence, wherein the fifth actual traffic sequence is a traffic sequence for current time information;
a second input unit configured to input the fifth actual traffic sequence and the sixth characteristic data sequence into a pre-trained traffic timing prediction model to obtain a third predicted traffic sequence for a third future time sequence, wherein the second future time sequence is a time information sequence whose corresponding time period is after the current time information and before the time period of the third future time sequence.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-6.
11. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-6.
CN202310943649.6A 2023-07-28 2023-07-28 Model training method, traffic prediction method, device, equipment and medium Pending CN117035842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310943649.6A CN117035842A (en) 2023-07-28 2023-07-28 Model training method, traffic prediction method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN117035842A true CN117035842A (en) 2023-11-10

Family

ID=88625476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310943649.6A Pending CN117035842A (en) 2023-07-28 2023-07-28 Model training method, traffic prediction method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117035842A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117236653A (en) * 2023-11-13 2023-12-15 北京国电通网络技术有限公司 Traffic prediction-based vehicle scheduling method and device and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination