WO2023236601A1 - Parameter prediction method, prediction server, prediction system and electronic device - Google Patents

Parameter prediction method, prediction server, prediction system and electronic device Download PDF

Info

Publication number
WO2023236601A1
WO2023236601A1 PCT/CN2023/079633 CN2023079633W
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
parameters
prediction model
model
models
Prior art date
Application number
PCT/CN2023/079633
Other languages
English (en)
French (fr)
Inventor
贾宇航
雷艺学
张云飞
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2023236601A1 publication Critical patent/WO2023236601A1/zh

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks, characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395 Quality analysis or management

Definitions

  • Embodiments of the present application relate to the remote control driving and highway scenarios of Internet of Vehicles technology, and more specifically, to parameter prediction methods, prediction servers, prediction systems and electronic devices.
  • Quality of service (QoS) parameters include parameters in dimensions such as throughput, bandwidth, and transmission rate, as well as parameters in dimensions such as delay.
  • The prediction requirements for QoS parameters differ across scenarios. For example, although both remote control scenarios and highway scenarios need delay prediction, the highway scenario requires higher delay prediction accuracy than the remote control scenario. Existing QoS prediction solutions, however, simply pursue the highest possible accuracy and offer no support for predictions at multiple accuracy levels. As a result, in scenarios with lower accuracy requirements, using a high-accuracy prediction algorithm may waste computing power and increase the prediction cost.
  • Embodiments of the present application provide a parameter prediction method, prediction server, prediction system and electronic equipment, which can not only realize simultaneous prediction of multiple parameters, but also realize diverse prediction of the multiple parameters.
  • embodiments of the present application provide a parameter prediction method, including:
  • the plurality of prediction models correspond to the plurality of parameters
  • each prediction model in the plurality of prediction models is used to predict the parameters corresponding to each prediction model
  • the single prediction model is used to predict the parameters corresponding to the prediction model.
  • the multiple parameters are predicted.
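The determination-and-prediction flow summarized above can be sketched in Python. The helper names (`select_target_model`, `predict`) and the accuracy-first selection policy are illustrative assumptions, not terms from the application:

```python
from typing import Callable, Dict, List

def select_target_model(first_error: float, second_error: float,
                        parallel_models: Dict[str, Callable],
                        single_model: Callable):
    """Determine the target prediction model from the two prediction errors.

    first_error  - error of the multiple prediction models running in parallel
    second_error - error of the single prediction model running independently
    """
    # When the parallel models are no more accurate, the cheaper single
    # model is preferred; otherwise the parallel models may be chosen to
    # prioritise accuracy (one possible policy among those described).
    if first_error >= second_error:
        return single_model
    return parallel_models

def predict(input_params: List[float], target) -> Dict[str, float]:
    """Predict the multiple parameters with the chosen target model."""
    if callable(target):
        # Single multi-output model: one pass yields all parameter values.
        return target(input_params)
    # Parallel dedicated models: one call per parameter.
    return {name: model(input_params) for name, model in target.items()}
```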
  • a prediction server including:
  • An acquisition unit used to acquire input parameters for prediction of multiple parameters
  • a determining unit configured to determine a target prediction model for predicting the multiple parameters based on the first prediction error corresponding to the multiple prediction models running in parallel and the second prediction error corresponding to the single prediction model running independently;
  • the plurality of prediction models correspond to the plurality of parameters
  • each prediction model in the plurality of prediction models is used to predict the parameters corresponding to each prediction model
  • the single prediction model is used to predict the parameters corresponding to the prediction model.
  • the prediction unit is used to predict the multiple parameters based on the input parameters and the target prediction model.
  • Embodiments of the present application provide a prediction system.
  • The prediction system includes: a terminal device, a network device, a core network device, a prediction server, and a remote control server.
  • The core network device processes the collected data sent by the network device or the terminal device into structured data, and sends the structured data to the prediction server.
  • The prediction server executes the method of the first aspect based on the structured data sent by the core network device, and sends the prediction result to the remote control server or the terminal device.
  • Embodiments of the present application provide an electronic device, including:
  • a processor, adapted to execute computer instructions; and
  • a computer-readable storage medium, which stores computer instructions suitable for the processor to load in order to execute the method of the first aspect.
  • embodiments of the present application provide a computer-readable storage medium.
  • the computer-readable storage medium stores computer instructions.
  • When the computer instructions are read and executed by a processor of a computer device, the computer device performs the method of the first aspect.
  • embodiments of the present application provide a computer program product or computer program.
  • the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method of the first aspect.
  • In the embodiments of the present application, each of the multiple prediction models is designed as a dedicated prediction model for its corresponding parameter, while the single prediction model is designed as a general prediction model for one-time prediction of the multiple parameters, so that both the multiple prediction models and the single prediction model can predict the multiple parameters simultaneously. Furthermore, the energy consumption cost of multiple prediction models running in parallel is greater than that of a single prediction model running alone; that is, the prediction cost of the multiple prediction models is greater than the prediction cost of the single prediction model.
  • Therefore, this application introduces the first prediction error corresponding to the multiple prediction models and the second prediction error corresponding to the single prediction model, and determines the target prediction model for the multiple parameters based on the first prediction error and the second prediction error.
  • This is equivalent to predicting the multiple parameters under different prediction costs and different prediction accuracies, thereby achieving diverse predictions of the multiple parameters.
  • In short, the solution provided by this application can not only achieve simultaneous prediction of multiple parameters, but also achieve diverse prediction of the multiple parameters.
  • Figure 1 is an example of the system framework provided by the embodiment of this application.
  • Figure 2 is a schematic flow chart of the parameter prediction method provided by the embodiment of the present application.
  • Figure 3 is a schematic flow chart of a parameter prediction method in a remote control driving scenario provided by an embodiment of the present application.
  • Figure 4 is a schematic flow chart of the training and testing process provided by the embodiment of the present application.
  • Figure 5 is a schematic block diagram of a prediction server provided by an embodiment of the present application.
  • Figure 6 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
  • Reference to "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application.
  • the appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
  • Artificial Intelligence (AI) is the theory, method, technology and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can respond in a manner similar to human intelligence.
  • Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology is a comprehensive subject that covers a wide range of fields, including both hardware-level technology and software-level technology.
  • Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • the parameter prediction method provided by the embodiment of the present application can be implemented through artificial intelligence.
  • Computer Vision (CV) technology: computer vision is a science that studies how to make machines "see". More specifically, it refers to machine vision that uses cameras and computers in place of human eyes to identify and measure targets, with further graphics processing so that the computer-processed image is more suitable for human observation or for transmission to instruments for detection.
  • Computer vision technology usually includes technologies such as image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, and also includes common biometric identification technologies such as face recognition and fingerprint recognition.
  • In the embodiments of the present application, CV technology can be used in implementing the parameter prediction method, together with related functions such as image processing, image recognition, image semantic understanding, image retrieval, OCR (optical character recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, and augmented reality.
  • Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how computers can simulate or implement human learning behavior to acquire new knowledge or skills, and reorganize existing knowledge structures to continuously improve their performance.
  • Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent. Its applications cover all fields of artificial intelligence.
  • Machine learning and deep learning usually include artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and other technologies.
  • the model involved in the embodiment of the present application can be trained through ML technology.
  • Figure 1 is an example of a system framework 100 provided by an embodiment of the present application.
  • the system framework 100 may include terminal equipment 111, terminal equipment 112, network equipment 121, network equipment 122, core network equipment 130, prediction server 140, remote control server 150 and remote control cockpit 160.
  • The terminal device 111 can communicate with the core network device 130 through the network device 121 and/or the network device 122. The prediction server 140 can predict various parameters of the terminal device 111 (or the terminal device 112) to obtain prediction results, and send the prediction results to the remote control server 150 or the remote control cockpit 160.
  • The remote control server 150 or the remote control cockpit 160 sends control instructions to the terminal device 111 (or the terminal device 112) based on the received prediction results, to control the terminal device 111 (or the terminal device 112) to perform corresponding operations.
  • The control instruction is used to control the terminal device 111 (or the terminal device 112) to perform at least one of the following: adjusting the uplink video bit rate, reducing the remote control speed, or safely pulling over and exiting the remote control operation, etc.
  • The remote control server 150 may be used to process information, for example, to handle service requests and responses, or to send control instructions to the terminal device 111 (or terminal device 112) based on the prediction results fed back by the prediction server 140.
  • the driver in the remote control cockpit 160 can send control instructions to the terminal device 111 (or terminal device 112) based on the video information displayed on the screen or the prediction results fed back by the prediction server 140.
  • The remote control server 150 can also analyze the traffic environment where the terminal device 111 (or the terminal device 112) is located based on the obtained prediction results, and send the analysis results to the remote control cockpit 160, so that the driver in the cockpit can remotely drive the vehicle based on the analysis results.
  • The prediction server 140 can predict predictive quality of service (Predictive QoS) parameters used to characterize the terminal device 111 (or the terminal device 112).
  • The QoS parameters include parameters used to characterize the network status of the terminal device 111 (or the terminal device 112), which may also be referred to as network status information for short, including but not limited to: throughput, Signal to Interference plus Noise Ratio (SINR), Received Signal Strength Indication (RSSI), Reference Signal Receiving Power (RSRP), Reference Signal Receiving Quality (RSRQ), and delay.
  • Other parameters of the terminal device 111 (or the terminal device 112) can also be predicted, including but not limited to: the location information of the terminal device, the azimuth angle of the terminal device, the longitude and latitude of the terminal device, video resolution, video fragment size, video fragment download time, etc.
  • the prediction server 140 may predict the current data of the QoS parameters based on the historical data of the QoS parameters. For example, the prediction server 140 can predict the delay of the terminal device 111 (or the terminal device 112) at the current time based on the historical delay data of the terminal device 111 (or the terminal device 112).
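As a toy illustration of predicting current QoS data from historical data, the sketch below uses a sliding-window average as a stand-in for a trained prediction model (the class and method names are hypothetical; a real deployment would use a trained model such as an LSTM):

```python
from collections import deque

class DelayPredictor:
    """Estimates the current delay of a terminal device from a sliding
    window of historical delay samples. The averaging rule is only a
    placeholder for the server's actual trained model."""

    def __init__(self, window: int = 5):
        # Keep only the most recent `window` historical samples.
        self.history = deque(maxlen=window)

    def observe(self, delay_ms: float) -> None:
        """Record one historical delay measurement."""
        self.history.append(delay_ms)

    def predict_current(self) -> float:
        """Predict the delay at the current time from the history."""
        if not self.history:
            raise ValueError("no historical data yet")
        return sum(self.history) / len(self.history)
```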
  • system framework 100 can be applied to any scenario where remote control is required or where various parameter indicators need to be predicted.
  • The system framework 100 may be suitable for remote driving or highway scenarios.
  • the terminal device 111 (or the terminal device 112) may be a vehicle, including but not limited to: manned vehicles, intelligent connected vehicles, and unmanned vehicles.
  • the vehicle can establish communication with the network device 121 or the network device 122 through the built-in radio communication subsystem of its on-board unit.
  • Network device 121 or network device 122 may act as the signal transmission source.
  • the network device 121 or the network device 122 may be an active communication device, for example, including but not limited to: 4G/5G base station, road side unit (Road Side Unit, RSU), WiFi, etc.
  • the terminal device 111 may be a device equipped with one or more of an application (Application, APP), a vehicle-mounted Internet Protocol (IP) camera, and other terminal software.
  • the core network equipment 130 includes but is not limited to 4G core network equipment or 5G core network (5G Core, 5GC) equipment.
  • the 5G core network equipment may be a 5G cloud core network.
  • the prediction server may be called a QoS prediction server, and the remote control cockpit 160 may be a cockpit for remote control driving of a vehicle.
  • The system architecture can also support various traffic flow services, including but not limited to: unmanned container trucks, vehicle-road collaboration, real-time twins, remote control and other scenarios. This application does not specifically limit this.
  • FIG. 1 exemplarily shows two terminal devices, two network devices, and one core network device, but the application is not limited thereto.
  • the system framework 100 may include other numbers of network devices and other numbers of terminal devices may be included within the coverage of each network device.
  • However, the prediction server 140 is usually only able to predict a single QoS parameter (such as throughput or delay), while actual application scenarios often require prediction of multiple QoS parameters. For example, in a remote control scenario, since both the throughput and the delay of the terminal device's uplink are sensitive variables, throughput and delay need to be predicted at the same time.
  • Moreover, the prediction requirements for QoS parameters differ across scenarios. For example, although both remote control scenarios and highway scenarios need delay prediction, the highway scenario requires higher delay prediction accuracy than the remote control scenario. Existing QoS prediction solutions simply pursue the highest possible accuracy and offer no support for predictions at multiple accuracy levels, so in scenarios with lower accuracy requirements, using a high-accuracy prediction algorithm may waste computing power and increase the prediction cost.
  • In view of this, the present application provides a parameter prediction method, which enables the prediction server 140 not only to achieve simultaneous prediction of multiple parameters, but also to achieve diverse prediction of the multiple parameters.
  • the prediction server 140 can predict multiple parameters such as throughput and delay at the same time.
  • multiple prediction models running in parallel or an enhanced single prediction model can be used to predict multiple parameters simultaneously, which not only enables simultaneous prediction of multiple parameters, but also enables diverse predictions of the multiple parameters.
  • FIG. 2 shows a schematic flowchart of a parameter prediction method 200 according to an embodiment of the present application.
  • The parameter prediction method 200 can be executed by any electronic device with data processing capabilities.
  • the electronic device may be implemented as a server.
  • The server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, and big data and artificial intelligence platforms.
  • Servers can be connected directly or indirectly through wired or wireless communication. This application does not impose specific restrictions here.
  • The parameter prediction method 200 may be performed by the prediction server 140 shown in FIG. 1.
  • The method 200 may include some or all of the following content:
  • S210: Obtain input parameters for prediction of multiple parameters.
  • S220: Determine a target prediction model for predicting the multiple parameters based on a first prediction error corresponding to multiple prediction models running in parallel and a second prediction error corresponding to a single prediction model running independently; wherein the multiple prediction models correspond to the multiple parameters, each prediction model in the multiple prediction models is used to predict the parameter corresponding to that prediction model, and the single prediction model is used to predict the multiple parameters.
  • S230: Predict the multiple parameters based on the input parameters and the target prediction model.
  • In some embodiments, the target prediction model is the single prediction model.
  • The single prediction model is used to make a one-time prediction for the multiple parameters, where "one-time prediction" means that one prediction with the single prediction model yields the predicted values of all of the multiple parameters; in other words, for the single prediction model, the input parameters only need to be input once to obtain the predicted values of the multiple parameters.
  • each prediction model in the plurality of prediction models is used to predict parameters corresponding to each prediction model.
  • For the multiple prediction models, it is necessary to traverse the models and perform multiple prediction processes to obtain the predicted values of the multiple parameters; in other words, the input parameters need to be input once into each prediction model, so that each prediction model outputs the predicted value of its corresponding parameter.
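The cost difference between the two approaches (one pass through the single model versus one call per dedicated model) can be made concrete with a small invocation counter; all names here are illustrative, not from the application:

```python
class CountingModel:
    """Wraps a prediction function and counts invocations, to make
    visible why running N dedicated models costs more than one pass
    through a single multi-output model."""

    def __init__(self, fn):
        self.fn = fn
        self.calls = 0

    def __call__(self, x):
        self.calls += 1
        return self.fn(x)

def predict_with_single(single, inputs):
    # One-time prediction: a single pass yields all parameter values.
    return single(inputs)

def predict_with_parallel(models, inputs):
    # Traversal: the same inputs are fed once into every dedicated model.
    return {name: m(inputs) for name, m in models.items()}
```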
  • In the embodiments of the present application, each prediction model in the multiple prediction models is designed as a dedicated prediction model for its corresponding parameter, and the single prediction model is designed as a general prediction model for one-time prediction of the multiple parameters, so that both the multiple prediction models and the single prediction model can achieve simultaneous prediction of multiple parameters. Furthermore, the energy consumption cost of multiple prediction models running in parallel is greater than the energy consumption cost of a single prediction model running alone; that is, the prediction cost of the multiple prediction models is greater than the prediction cost of the single prediction model.
  • Therefore, this application introduces the first prediction error corresponding to the multiple prediction models and the second prediction error corresponding to the single prediction model, and determines the target prediction model for the multiple parameters based on the first prediction error and the second prediction error.
  • This is equivalent to predicting the multiple parameters under different prediction costs and different prediction accuracies, thereby achieving diverse predictions of the multiple parameters.
  • In short, the solution provided by this application can not only achieve simultaneous prediction of multiple parameters, but also achieve diverse prediction of the multiple parameters.
  • changes in multiple parameters can be predicted, and then the vehicle can be remotely controlled based on the predicted values of the multiple parameters.
  • Corresponding measures can be taken in a timely manner based on the predicted values of these multiple parameters (such as adjusting the uplink video bit rate, reducing the remote control speed, or safely stopping and exiting remote control), thereby improving the safety of the remotely driven vehicle and enhancing its driving decision-making performance.
  • The first prediction error and the second prediction error involved in this application may be parameters or indicators used to characterize the prediction accuracy of a prediction model.
  • For example, the first prediction error and the second prediction error may be relative mean square errors (RMSE) or other parameters.
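The application names RMSE as one possible error metric. A standard root-mean-square error can be sketched as follows; whether the application intends the relative or root variant is not specified, so this formula is only an assumed example:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted values and observations:
    sqrt(mean((p_i - a_i)^2)). Lower values mean higher accuracy."""
    if len(predicted) != len(actual) or not predicted:
        raise ValueError("need equal-length, non-empty sequences")
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )
```

Errors computed this way for the parallel models and the single model would play the roles of the first and second prediction errors, respectively.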
  • The multiple prediction models and the single prediction model involved in this application can be any model or algorithm with a prediction function, including but not limited to: traditional learning models, ensemble learning models or deep learning models.
  • Traditional learning models include but are not limited to: tree models (regression trees) or logistic regression (LR) models; ensemble learning models include but are not limited to: extreme gradient boosting (XGBoost) models or random forest models; deep learning models include but are not limited to: long short-term memory (LSTM) networks or other neural networks.
  • a control instruction for controlling the terminal device may be generated based on the prediction results.
  • This control instruction is used to control the terminal device to perform at least one of the following: adjusting the uplink video bit rate, reducing the remote control speed, or safely pulling over and exiting the remote control operation, etc.
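A minimal sketch of mapping predicted QoS values to the control actions listed above. The threshold values, function name, and action strings are invented for illustration; the application does not specify a concrete rule:

```python
def control_instruction(pred_delay_ms: float, pred_throughput_mbps: float,
                        max_delay_ms: float = 100.0,
                        min_throughput_mbps: float = 5.0) -> str:
    """Choose one control action from predicted delay and throughput.
    Thresholds are illustrative assumptions only."""
    if pred_delay_ms > 2 * max_delay_ms:
        # Predicted conditions far too poor for safe remote control.
        return "safe_stop_and_exit_remote_control"
    if pred_delay_ms > max_delay_ms:
        return "reduce_remote_control_speed"
    if pred_throughput_mbps < min_throughput_mbps:
        return "reduce_uplink_video_bitrate"
    return "no_action"
```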
  • The prediction results of the multiple parameters can also be used to analyze the network environment of the terminal device, so that the remote control personnel can remotely control the terminal device based on the analyzed network environment.
  • For example, the terminal device is a remotely driven vehicle and the network environment is a traffic environment; that is to say, the prediction results of the multiple parameters can also be used to analyze the traffic environment (such as congestion) of the remotely driven vehicle, so that the remote controller can control the vehicle based on the analyzed traffic environment.
  • In some embodiments, S220 may include:
  • if the first prediction error is greater than or equal to the second prediction error, determining the single prediction model as the target prediction model; if the first prediction error is less than the second prediction error, determining the multiple prediction models as the target prediction model, or determining the single prediction model as the target prediction model.
  • If the first prediction error is greater than or equal to the second prediction error, the prediction accuracy of the multiple prediction models is less than or equal to that of the single prediction model.
  • Since the prediction cost of the multiple prediction models is greater than that of the single prediction model, determining the single prediction model as the target prediction model, compared with determining the multiple prediction models as the target prediction model, controls the prediction cost of the multiple parameters while ensuring prediction accuracy.
  • If the first prediction error is smaller than the second prediction error, the prediction accuracy of the multiple prediction models is greater than that of the single prediction model.
  • However, the prediction cost of the multiple prediction models is also greater than the prediction cost of the single prediction model. In this case, determining the single prediction model as the target prediction model is equivalent to determining the target prediction model by prioritizing control of the prediction cost of the multiple parameters, even though the multiple prediction models are more accurate.
  • Alternatively, when the first prediction error is smaller than the second prediction error, the prediction accuracy of the multiple prediction models is greater than that of the single prediction model, while the prediction cost of the multiple prediction models is greater than the prediction cost of the single prediction model. In this case, determining the multiple prediction models as the target prediction model is equivalent to determining the target prediction model by prioritizing the prediction accuracy of the multiple parameters.
  • this S220 may include:
  • Based on the second prediction error and a preset error, the multiple prediction models are determined as the target prediction model, or the single prediction model is determined as the target prediction model. For example, if the second prediction error is less than or equal to the preset error, the single prediction model is determined as the target prediction model; if the second prediction error is greater than the preset error, the multiple prediction models are determined as the target prediction model.
  • the preset error may represent the accuracy requirements of the multiple parameters.
  • If the second prediction error is less than or equal to the preset error, the prediction accuracy of the single prediction model can, to a certain extent, meet the accuracy requirements. In this case, determining the single prediction model as the target prediction model controls the prediction cost of the multiple parameters while ensuring their prediction accuracy.
  • If the second prediction error is greater than the preset error, the prediction accuracy of the single prediction model may not meet the accuracy requirements. In this case, determining the multiple prediction models as the target prediction model prioritizes the prediction accuracy of the multiple parameters at the expense of prediction cost.
  • Alternatively, if the second prediction error is within a preset error range, the single prediction model is determined as the target prediction model; if the second prediction error is outside the preset error range, the multiple prediction models are determined as the target prediction model.
  • the preset error range can represent the accuracy requirements of the multiple parameters.
  • If the second prediction error is within the preset error range, the prediction accuracy of the single prediction model can, to a certain extent, meet the accuracy requirements. Determining the single prediction model as the target prediction model then controls the prediction cost of the multiple parameters while ensuring their prediction accuracy.
  • If the second prediction error is outside the preset error range, the prediction accuracy of the single prediction model may not meet the accuracy requirements. Determining the multiple prediction models as the target prediction model then prioritizes the prediction accuracy of the multiple parameters at the expense of prediction cost.
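The selection rules above can be sketched as a small helper. The function and argument names (`select_target_model`, `armse_parallel`, `rmse_single`, `preset_error`) are illustrative and not from the patent:

```python
def select_target_model(armse_parallel: float, rmse_single: float,
                        preset_error: float) -> str:
    """Pick the target prediction model from the two test errors.

    armse_parallel: first prediction error (average error of the multiple
    parallel models); rmse_single: second prediction error (error of the
    single model); preset_error: the accuracy requirement of the parameters.
    """
    if armse_parallel >= rmse_single:
        # Parallel models are no more accurate but cost more: prefer single.
        return "single"
    # Parallel models are more accurate; still use the single model if it
    # already meets the accuracy requirement, otherwise pay the extra cost.
    return "single" if rmse_single <= preset_error else "multiple"

select_target_model(0.8, 1.0, 1.2)   # -> "single" (single model good enough)
select_target_model(0.8, 1.0, 0.9)   # -> "multiple" (accuracy prioritized)
```

The same rule is applied later in the remote-control embodiment, with the errors recorded as ARMSE1 and RMSE2 and the preset error as threshold 1.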
  • this S220 may include:
  • Based on the application scenarios of the multiple parameters, the multiple prediction models are determined as the target prediction model, or the single prediction model is determined as the target prediction model.
  • In other words, this embodiment determines the target prediction model by introducing the application scenarios of the multiple parameters, which ensures that the prediction accuracy of the target prediction model is consistent with the accuracy requirements of the multiple parameters, and avoids the prediction accuracy of the multiple parameters being too low, or their prediction cost being too high, due to an inconsistency between the prediction accuracy of the target prediction model and the accuracy requirements of the multiple parameters.
  • Optionally, the application scenarios of the multiple parameters are determined by the accuracy requirements of the multiple parameters. Designing the application scenario of the multiple parameters as a scenario determined by their accuracy requirements ensures that the prediction accuracy of the target prediction model is consistent with the accuracy requirements of the multiple parameters, and avoids the prediction accuracy of the multiple parameters being too low, or their prediction cost being too high, due to an inconsistency between the two.
  • For example, if the application scenario of the multiple parameters is a scenario where a terminal device applies the multiple parameters, the multiple prediction models are determined as the target prediction model; if the application scenario of the multiple parameters is a scenario where the prediction server applies the multiple parameters, the single prediction model is determined as the target prediction model.
  • In other words, the embodiment of the present application distinguishes the application scenarios of the multiple parameters by the devices that apply them. This not only reduces the number of application-scenario classes, and thereby the complexity of determining the target prediction model, but also ensures that the prediction accuracy of the target prediction model is consistent with the accuracy requirements of the multiple parameters, avoiding the prediction accuracy of the multiple parameters being too low, or their prediction cost being too high, due to an inconsistency between the prediction accuracy of the target prediction model and those accuracy requirements.
  • If the application scenario of the multiple parameters is a scenario where the terminal device applies the multiple parameters, the multiple parameters are parameters that can affect the user experience. In this case, the multiple prediction models can be used as the target prediction model to prioritize the prediction accuracy of the multiple parameters.
  • If the application scenario of the multiple parameters is a scenario where the prediction server applies the multiple parameters, the multiple parameters may not be parameters that affect the user experience. In this case, the single prediction model can be used as the target prediction model to prioritize the prediction cost of the multiple parameters.
  • In other embodiments, the multiple prediction models or the single prediction model may also be determined as the target prediction model based on both the second prediction error and the application scenarios of the multiple parameters. This application does not specifically limit this.
  • For example, if the first prediction error is less than the second prediction error and the second prediction error is less than or equal to the preset error, the multiple prediction models or the single prediction model is determined as the target prediction model based on the application scenarios of the multiple parameters. In other words, if the first prediction error is less than the second prediction error, the prediction accuracy of the multiple prediction models is greater than that of the single prediction model; if the second prediction error is less than or equal to the preset error, the prediction accuracy of the single prediction model can, to a certain extent, meet the accuracy requirement. In this case, determining the target prediction model based on the application scenarios of the multiple parameters makes the target prediction model a model that matches those scenarios, avoiding the prediction accuracy being too low or the prediction cost being too high due to a mismatch between the target prediction model and the application scenarios, and thereby improving the prediction effect.
  • Similarly, if the first prediction error is smaller than the second prediction error and the second prediction error is within the preset error range, the multiple prediction models or the single prediction model is determined as the target prediction model based on the application scenarios of the multiple parameters. In other words, if the first prediction error is smaller than the second prediction error, the prediction accuracy of the multiple prediction models is greater than that of the single prediction model; if the second prediction error is within the preset error range, the prediction accuracy of the single prediction model can, to a certain extent, meet the accuracy requirement. In this case, determining the target prediction model based on the application scenarios makes it a model that matches those scenarios, avoiding the prediction accuracy being too low or the prediction cost being too high due to a mismatch, and improving the prediction effect.
  • this S210 may include:
  • obtaining structured data, where the structured data includes m historical moments and a parameter set for each of the m historical moments; the parameter set includes the multiple parameters, and m is an integer greater than 0;
  • based on the m historical moments, taking the first historical moment as the starting moment, selecting n historical moments forward in chronological order, where n is an integer greater than 0 and less than or equal to m;
  • based on the structured data, obtaining the n parameter sets corresponding to the n historical moments, and determining the input parameters from the n parameter sets.
  • In this embodiment, the input parameters are determined from the n parameter sets. Since a parameter set is relatively easy to obtain, the acquisition cost of the input parameters can be controlled. In addition, when n is an integer greater than 1, the input parameters are determined from multiple parameter sets; the input parameters can then reflect not only the parameter set at the first historical moment but also the changing trend of the parameter sets before the first historical moment, enriching the information used to predict the multiple parameters and thereby improving their prediction accuracy.
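A minimal sketch of the window selection above, assuming the structured data is held as a mapping from historical moment to parameter set (the layout and all names here are illustrative, not from the patent):

```python
def build_input(structured, t_first, n):
    """Select the n parameter sets ending at the first historical moment.

    structured: dict mapping historical moment -> parameter set (a list of
    parameter values, e.g. [delay, throughput]).
    """
    moments = sorted(m for m in structured if m <= t_first)
    window = moments[-n:]                  # n moments ending at t_first
    return [structured[m] for m in window]

data = {0: [10, 1], 1: [12, 1], 2: [11, 2], 3: [13, 3]}
build_input(data, t_first=3, n=2)   # -> [[11, 2], [13, 3]]
```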
  • the parameter set at each historical moment includes the values of the multiple parameters at each historical moment.
  • the parameter set at each historical moment may also include the value of the auxiliary parameter at each historical moment.
  • the parameter set at time t0 includes the values of the multiple parameters at time t0, and may further include the values of the auxiliary parameters at time t0.
  • the multiple parameters include but are not limited to: network status information and throughput used to characterize the network status of the terminal device.
  • the network status information includes but is not limited to: Signal to Interference plus Noise Ratio (SINR), Received Signal Strength Indication (RSSI), Reference Signal Receiving Power (RSRP), Reference Signal Receiving Quality (RSRQ), and delay.
  • the auxiliary parameters include but are not limited to: the location information of the terminal device, such as the azimuth angle and the longitude and latitude information of the terminal device, as well as video resolution, video fragment size, video fragment download time, and so on.
  • the structured data may be two-dimensional structured data of historical moments and parameter sets.
  • Optionally, the structured data may be data obtained by processing time series data by a terminal device or a network device, or data obtained by processing time series data by a core network device or the prediction server.
  • the time series data may be a sequence formed by counting certain indicators or parameters at different times and arranging them in chronological order.
  • the time series data may be data collected by terminal equipment or network equipment.
  • methods for processing time series data into structured data include, but are not limited to: various data processing technologies that convert unstructured data into structured data.
  • this data processing technology mainly includes the following three types:
  • Decision trees: commonly used in various commercial application software and systems, such as product data storage and transaction logs. This method requires manual feature extraction, which is cumbersome and requires substantial manpower for data labeling.
  • Deep learning: can find the semantic features of structured data and make the data scalable, replacing manual data cleaning and preparation with automated methods that require little or no human intervention.
  • Structured data cleaning: erroneous data is cleaned out according to certain rules, i.e., data cleaning. The task of data cleaning is to filter out the data that does not meet the requirements and hand the filtered results to the responsible business department, which confirms whether the data is filtered out or corrected before extraction.
  • Optionally, if the target prediction model is the single prediction model, a second historical moment before the first historical moment is obtained; based on the m historical moments, taking the second historical moment as the starting moment, k historical moments are selected forward in chronological order, where k is an integer greater than 0 and less than or equal to m; based on the structured data, the k parameter sets corresponding to the k historical moments are obtained; and a differential operation is performed on the n parameter sets and the k parameter sets to obtain the input parameters.
  • the input parameters of the multiple prediction models are different from the input parameters of the single prediction model.
  • In other words, if the target prediction model is the single prediction model, the output of the target prediction model is the predicted values of the multiple parameters. Since the target prediction model predicts the multiple parameters based on historical data, the correlation between the predicted values of the multiple parameters and the n parameter sets is relatively large, which affects the prediction accuracy of the multiple parameters. Based on this, the present application uses differential operations to eliminate or reduce the correlation between the predicted values of the multiple parameters and the n parameter sets, so as to improve the prediction accuracy of the multiple parameters.
  • Optionally, if the target prediction model is the multiple prediction models, the n parameter sets are determined as the input parameters. In other words, after the n parameter sets are obtained, whether to further process them may be decided based on whether the target prediction model is the single prediction model; if the target prediction model is the multiple prediction models, the n parameter sets are directly determined as the input parameters of each prediction model in the multiple prediction models.
  • In other words, if the target prediction model is the multiple prediction models, the multiple prediction models are dedicated prediction models, each corresponding to one parameter, so their prediction accuracy is relatively high. In this case, the n parameter sets can be directly determined as the input parameters to reduce the prediction complexity of the multiple parameters.
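The differential operation for the single prediction model can be sketched as an element-wise subtraction of the two groups of parameter sets (assuming, for simplicity of the sketch, that n equals k; all names are illustrative):

```python
def difference(sets_n, sets_k):
    """Element-wise difference between the n parameter sets (ending at the
    first historical moment) and the k parameter sets (ending at the earlier
    second historical moment). Differencing removes or reduces the level
    correlation with past values described above."""
    return [[a - b for a, b in zip(sn, sk)]
            for sn, sk in zip(sets_n, sets_k)]

difference([[12, 2], [13, 3]], [[10, 1], [11, 1]])   # -> [[2, 1], [2, 2]]
```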
  • the method 200 may further include:
  • each of the multiple prediction models is tested based on the test data set to obtain multiple prediction errors, and the first prediction error is obtained based on the multiple prediction errors; the single prediction model is tested based on the test data set to obtain the second prediction error.
  • In other words, before predicting the multiple parameters, the multiple prediction models and the single prediction model are tested on the test set, so that the subsequent prediction process can determine the target prediction model based on the first prediction error and the second prediction error obtained from the test.
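Under the assumption that the errors are root-mean-square errors (consistent with the ARMSE/RMSE notation used later for Parallel-LSTM and Improved-LSTM), the two test errors might be computed as follows; the function names are illustrative:

```python
import math

def rmse(pred, true):
    """Root-mean-square error of one model's predictions on the test set."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

def first_prediction_error(preds_per_model, trues_per_model):
    """Average the per-model errors of the multiple parallel models (one
    model per predicted parameter) into a single first prediction error."""
    errors = [rmse(p, t) for p, t in zip(preds_per_model, trues_per_model)]
    return sum(errors) / len(errors)

rmse([1.0, 2.0], [1.0, 2.0])                            # -> 0.0
first_prediction_error([[1.0], [2.0]], [[1.0], [4.0]])  # -> 1.0
```

The second prediction error is simply `rmse` applied once to the single model's multi-parameter output.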
  • Figure 3 is a schematic flow chart of a parameter prediction method 300 in a remote control scenario provided by an embodiment of the present application.
  • The parameter prediction method 300 can be executed interactively by terminal equipment, network equipment, core network equipment, the prediction server, and the remote control server, where the prediction server can be implemented by any electronic device with data processing capabilities.
  • the electronic device may be implemented as a server.
  • The server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, and big data and artificial intelligence platforms.
  • the servers can be connected directly or indirectly through wired or wireless communication methods. This application does not impose specific restrictions here.
  • the parameter prediction 300 may be performed by the system architecture 100 shown in FIG. 1 .
  • the method 300 may include:
  • S320 The terminal device obtains time series data and converts the time series data into structured data.
  • the terminal device may be a vehicle, including but not limited to: manned vehicles, intelligent connected vehicles, and unmanned vehicles.
  • the time series data may be a sequence formed by counting certain indicators or parameters at different times and arranging them in chronological order.
  • the certain indicator or parameter includes network status information and throughput used to characterize the network status of the terminal device.
  • the network status information includes but is not limited to: Signal to Interference plus Noise Ratio (SINR), Received Signal Strength Indication (RSSI), Reference Signal Receiving Power (RSRP), Reference Signal Receiving Quality (RSRQ), and delay.
  • the certain indicators or parameters may also include but are not limited to: location information of the terminal device, azimuth angle of the terminal device, longitude and latitude information of the terminal device, video resolution, video fragment size, video fragment download time, etc.
  • The method by which the terminal device processes time series data into structured data includes, but is not limited to: various data processing technologies that convert unstructured data into structured data.
  • this data processing technology mainly includes the following three types:
  • Decision trees: commonly used in various commercial application software and systems, such as product data storage and transaction logs. This method requires manual feature extraction, which is cumbersome and requires substantial manpower for data labeling.
  • Deep learning: can find the semantic features of structured data and make the data scalable, replacing manual data cleaning and preparation with automated methods that require little or no human intervention.
  • Structured data cleaning: erroneous data is cleaned out according to certain rules, i.e., data cleaning. The task of data cleaning is to filter out the data that does not meet the requirements and hand the filtered results to the responsible business department, which confirms whether the data is filtered out or corrected before extraction.
  • S330 The terminal device sends the structured data to the network device through the uplink.
  • For example, the terminal device can upload the acquired structured data to the network device through the 5G uplink.
  • Network equipment may include one or more of 4G/5G base stations, RSU, WiFi, etc.
  • It should be noted that this embodiment uses a terminal device to obtain the collected data and sends the collected data to the core network device through the network device. In other embodiments, the collected data can also be obtained through network equipment (such as base stations) and sent to the core network equipment in real time; this requires the operator to participate and provide relevant data support, but it can reduce the software and hardware requirements on the terminal equipment.
  • S340 The network device forwards the structured data to the prediction server through the core network device.
  • In other words, the network device uses relevant interfaces to forward the structured data through the core network device to the prediction server, which performs the training of the multiple prediction models and of the single prediction model.
  • Core network equipment includes but is not limited to 4G core network equipment or 5G core network equipment.
  • 5G core network equipment can be a 5G cloud core network.
  • the prediction server preprocesses the structured data to obtain a training set and a test set.
  • the prediction server processes the structured data into a sample set and divides the sample set to obtain a training set and a test set.
  • The process of preprocessing the structured data by the prediction server can be understood as follows: the prediction server processes the structured data into inputs for the multiple prediction models or the single prediction model and configures a corresponding label for each input, where each input and its corresponding label form a sample.
  • the prediction server trains multiple prediction models running in parallel and a single prediction model running independently based on the training set.
  • For example, assume the multiple prediction models are two LSTM networks running in parallel (Parallel-LSTM) and the single prediction model is an enhanced LSTM network (Improved-LSTM). The prediction server receives the structured data, preprocesses it, and then trains Parallel-LSTM and Improved-LSTM based on the training set.
  • the algorithm module can be started first and the training parameters can be configured.
  • When the prediction server configures the training parameters, it sets at least one of, but not limited to, the following: input features, output features, prediction features, model parameters, algorithm averaging times, prediction time, model training and testing ratio, number of iterations, model loss function, model optimization function, number of machine learning neurons, and so on.
  • the configuration method may be a default configuration method or configured by a user, which is not specifically limited in this application.
  • the prediction model involved in the embodiments of this application is not specifically limited.
  • the prediction model includes but is not limited to: a traditional learning model, an ensemble learning model, or a deep learning model.
  • For example, traditional learning models include but are not limited to: tree models (regression trees) or logistic regression (LR) models; ensemble learning models include but are not limited to: improved gradient boosting models (XGBoost) or random forest models; deep learning models include but are not limited to: Long Short-Term Memory (LSTM) networks or other neural networks.
  • LSTM-based systems can learn tasks such as language translation, robot control, image analysis, document summarization, speech recognition, image recognition, handwriting recognition, chatbot control, disease prediction, click-through-rate and stock prediction, and music synthesis.
  • the prediction server tests multiple prediction models and a single prediction model based on the test set, and obtains the first prediction error and the second prediction error.
  • For example, assume the multiple prediction models are two LSTMs running in parallel (Parallel-LSTM) and the single prediction model is an enhanced LSTM (Improved-LSTM). The prediction server receives the structured data, preprocesses it, and then tests Parallel-LSTM and Improved-LSTM based on the test set.
  • The first prediction error is the average prediction error of Parallel-LSTM, which can be recorded as ARMSE1, and the second prediction error is the prediction error of Improved-LSTM, which can be recorded as RMSE2.
  • Based on this, the multiple parameters can be predicted based on the first prediction error corresponding to the multiple prediction models and the second prediction error corresponding to the single prediction model.
  • the prediction process is explained below.
  • the prediction process may include:
  • The prediction server can further determine whether the second prediction error is less than the preset error, so as to determine the target prediction model for predicting the multiple parameters.
  • the preset error may represent the accuracy requirements of the multiple parameters.
  • If the second prediction error is less than or equal to the preset error, the prediction server determines the single prediction model as the target prediction model. In other words, when the second prediction error is less than or equal to the preset error, the prediction accuracy of the single prediction model can, to a certain extent, meet the accuracy requirements; determining the single prediction model as the target prediction model then controls the prediction cost of the multiple parameters while ensuring their prediction accuracy.
  • If the second prediction error is greater than the preset error, the prediction server determines the multiple prediction models as the target prediction model. In other words, when the second prediction error is greater than the preset error, the prediction accuracy of the single prediction model may not meet the accuracy requirements; determining the multiple prediction models as the target prediction model then prioritizes the prediction accuracy of the multiple parameters at the expense of prediction cost.
  • Alternatively, if the second prediction error is within a preset error range, the single prediction model is determined as the target prediction model; if the second prediction error is outside the preset error range, the multiple prediction models are determined as the target prediction model.
  • the preset error range can represent the accuracy requirements of the multiple parameters.
  • If the second prediction error is within the preset error range, the prediction accuracy of the single prediction model can, to a certain extent, meet the accuracy requirements. Determining the single prediction model as the target prediction model then controls the prediction cost of the multiple parameters while ensuring their prediction accuracy.
  • If the second prediction error is outside the preset error range, the prediction accuracy of the single prediction model may not meet the accuracy requirements. Determining the multiple prediction models as the target prediction model then prioritizes the prediction accuracy of the multiple parameters at the expense of prediction cost.
  • If the first prediction error is greater than or equal to the second prediction error, the prediction accuracy of the multiple prediction models is less than or equal to that of the single prediction model, while their prediction cost is greater than that of the single prediction model. Therefore, compared with determining the multiple prediction models as the target prediction model, determining the single prediction model as the target prediction model controls the prediction cost of the multiple parameters without sacrificing prediction accuracy.
  • In this embodiment, the first prediction error is recorded as ARMSE1, the second prediction error is recorded as RMSE2, and the preset error is recorded as threshold 1. If ARMSE1 is greater than or equal to RMSE2, this embodiment uses the single prediction model as the prediction model, and the result of the single prediction model is used as the subsequent prediction result to notify the remote control server and/or the terminal, and the corresponding remote control operation is taken. If ARMSE1 is less than RMSE2 and RMSE2 is greater than threshold 1, this embodiment uses the multiple prediction models as the prediction model, and the results of the multiple prediction models are used as the subsequent prediction results to notify the remote control server and/or the terminal, and the corresponding remote control operations are taken.
  • the prediction server sends the prediction result to the remote control server or remote cockpit.
  • the remote control server and the remote cockpit send control instructions to the terminal device based on the prediction results.
  • In other words, after receiving the prediction result, the remote control server or the remote cockpit generates a control instruction based on the prediction result and sends the control instruction to the terminal device. For example, in a remote control scenario, the remote control server or remote cockpit can send control instructions to the terminal device (such as a remotely driven vehicle) through a downlink (such as a 5G-V2X link).
  • Correspondingly, the terminal device can perform the corresponding operations based on the control instruction. For example, based on the received control instruction, the terminal device can perform at least one of the following: adjusting the uplink video bit rate, reducing the remote control speed, docking safely, exiting the remote control operation, and so on.
  • Figure 4 is a schematic flow chart of the test training process provided by the embodiment of the present application.
  • this S350 may include:
  • the prediction server converts the time series prediction problem into a supervised learning problem and obtains a data set.
  • the time series problem can be transformed into a supervised learning problem by setting the prediction time and lag time.
  • Supervised learning refers to the process of using a set of samples of known categories to adjust the parameters of a classifier to achieve the required performance.
  • Supervised learning can also be called supervised training or teacher learning.
  • Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consists of a set of training examples, each of which consists of an input object (usually a vector) and a desired output value (also called a supervision signal). A supervised learning algorithm analyzes the training data and produces an inferred function that can be used to map new instances; an optimal solution allows the algorithm to correctly determine the class labels of unseen instances.
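The lag/prediction-time conversion described above can be sketched as a sliding-window transform. This is a minimal univariate version; the names and the `horizon` default are illustrative, not from the patent:

```python
def to_supervised(series, lag, horizon=1):
    """Turn a time series into (input, label) pairs: each input is 'lag'
    consecutive past values, each label is the value 'horizon' steps after
    the window, i.e. the prediction time."""
    samples = []
    for i in range(len(series) - lag - horizon + 1):
        x = series[i:i + lag]              # lag past values as input
        y = series[i + lag + horizon - 1]  # value 'horizon' steps ahead
        samples.append((x, y))
    return samples

to_supervised([1, 2, 3, 4, 5], lag=2)
# -> [([1, 2], 3), ([2, 3], 4), ([3, 4], 5)]
```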
  • the prediction server can divide the obtained data set to obtain a training set and a test set.
  • In other words, the prediction server can preprocess the data set (i.e., the sample set), including data normalization and splitting the data set into the training set and the test set. After obtaining the training set and the test set, the prediction server can set the training parameters and train the multiple prediction models and the single prediction model based on the training set and the test set.
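A minimal sketch of the preprocessing step, assuming min-max normalization and a chronological split; the patent does not specify which normalization or split ratio is used, so both choices and all names are illustrative:

```python
def min_max_normalize(values):
    """Scale values to [0, 1]; a common normalization choice."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def chronological_split(samples, train_ratio=0.8):
    """Split samples in time order (no shuffling, since the samples are
    derived from a time series)."""
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

min_max_normalize([0, 5, 10])          # -> [0.0, 0.5, 1.0]
chronological_split([1, 2, 3, 4, 5])   # -> ([1, 2, 3, 4], [5])
```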
  • this S360 may include:
  • The prediction server uses the first prediction model among the multiple prediction models to predict based on the delay and throughput, and obtains the first predicted value of the delay.
  • the prediction server determines the prediction error of the first prediction model based on the first predicted value of the delay and the real value of the delay.
  • The prediction server uses the second prediction model among the multiple prediction models to predict based on the delay and throughput, and obtains the first predicted value of the throughput.
  • the prediction server determines the prediction error of the second prediction model based on the first predicted value of throughput and the real value of throughput.
  • the prediction server trains the first prediction model and the second prediction model based on the prediction error of the first prediction model and the error of the second prediction model.
  • The multiple prediction models are two parallel LSTM (Parallel-LSTM) networks.
  • To run the two parallel LSTM networks, two central processing units (CPUs) or one graphics processing unit (GPU) may be used to run the two LSTMs in parallel; the inputs are two copies of the same structured data (i.e., the training set).
  • For the first LSTM network of the two parallel LSTM networks, the prediction feature is the delay, and its output is the first predicted value of the delay; for the second LSTM network, the prediction feature is the (uplink) throughput, and its output is the first predicted value of the throughput.
  • Based on this, the prediction server determines the prediction error of the first LSTM network from the first predicted value of the delay and the true value of the delay, and trains the first LSTM network based on that prediction error, for example, until the prediction error converges to a preset threshold.
  • Similarly, the prediction server determines the prediction error of the second LSTM network from the first predicted value of the throughput and the true value of the throughput, and trains the second LSTM network based on that prediction error, for example, until the prediction error of the second LSTM network converges to a preset threshold.
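The parallel scheme above — one dedicated model per parameter, each fitted and evaluated against its own error — can be sketched with trivial stand-in models. This is only a structural illustration: simple mean predictors stand in for the patent's Parallel-LSTM networks, and the training values and error metric are hypothetical assumptions.

```python
class MeanPredictor:
    """Toy stand-in for one dedicated LSTM: predicts the mean of its training data."""
    def fit(self, values):
        self.mean = sum(values) / len(values)
        return self
    def predict(self):
        return self.mean

delay_train = [20.0, 22.0, 21.0, 25.0]   # hypothetical delay samples
thrpt_train = [5.0, 6.0, 5.5, 6.5]       # hypothetical (uplink) throughput samples

# One model per parameter, so the two can be fitted/run in parallel
# (e.g., on two CPUs or one GPU, as the passage notes).
model_delay = MeanPredictor().fit(delay_train)
model_thrpt = MeanPredictor().fit(thrpt_train)

# Per-model prediction error against the true next value; each network is
# trained against its own error until it converges to a preset threshold.
err_delay = abs(model_delay.predict() - 24.0)
err_thrpt = abs(model_thrpt.predict() - 6.0)
```

The key structural point is the independence of the two error signals: each model is updated only from the error of its own parameter.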
  • S366 The prediction server performs a differential operation on the training set to obtain a differential training set.
  • the prediction server uses a single prediction model to predict delay and throughput, and obtains a second predicted value of delay and a second predicted value of throughput.
  • the prediction server determines the prediction error of a single prediction model based on the second predicted value of the delay, the second predicted value of the throughput, the real value of the delay, and the real value of the throughput.
  • The prediction server trains the single prediction model based on the prediction error of the single prediction model.
  • The single prediction model is an enhanced LSTM network (Improved-LSTM). Its input is structured data, and its prediction features are the (uplink) throughput and the delay; that is to say, the output of the enhanced LSTM network includes the second predicted value of the delay and the second predicted value of the throughput.
  • The prediction server can determine the prediction error of the enhanced LSTM network based on the second predicted value of the delay, the second predicted value of the throughput, the true value of the delay, and the true value of the throughput, and train the enhanced LSTM network with that prediction error, for example, until the prediction error of the enhanced LSTM network converges to a preset threshold.
  • The differencing operation can be used to eliminate the correlation between the predicted features, avoiding a serious time lag in the prediction results.
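The differencing operation of S366 can be sketched as a first-order difference, which removes the strong correlation between consecutive values that makes naive predictions lag behind the truth. A minimal sketch, assuming first-order (interval 1) differencing and hypothetical throughput values:

```python
def difference(series, interval=1):
    """First-order differencing: d[t] = x[t] - x[t - interval]."""
    return [series[i] - series[i - interval]
            for i in range(interval, len(series))]

def undifference(first_value, diffs):
    """Invert differencing to recover predictions on the original scale."""
    out = [first_value]
    for d in diffs:
        out.append(out[-1] + d)
    return out

throughput = [5.0, 6.0, 5.5, 6.5, 7.0]   # hypothetical samples
diff_throughput = difference(throughput)
```

The model is trained on `diff_throughput`; its outputs are then passed through `undifference` to map predictions back onto the original scale.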
  • this S370 may include:
  • The prediction server uses the first prediction model to test the delay, and obtains a first test value of the delay.
  • The prediction server determines the prediction error of the first prediction model based on the first test value of the delay and the real value of the delay.
  • The prediction server uses the second prediction model to test the throughput, and obtains a first test value of the throughput.
  • The prediction server determines the prediction error of the second prediction model based on the first test value of the throughput and the real value of the throughput.
  • The prediction server determines the first prediction error based on the prediction error of the first prediction model and the prediction error of the second prediction model.
  • S376 The prediction server performs a differential operation on the test set to obtain a differential test set.
  • the prediction server uses a single prediction model to predict delay and throughput, and obtains a second test value of delay and a second test value of throughput.
  • the prediction server determines the second prediction error based on the second test value of the delay, the second test value of the throughput, the real value of the delay, and the real value of the throughput.
  • The second prediction error is RMSE2.
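The second prediction error can be computed from the single model's test values and the true values of delay and throughput, for example as a root-mean-square error over both features. A hedged sketch — the numbers and the choice to stack the two features into one vector are illustrative assumptions:

```python
def rmse(predicted, actual):
    """Root of the mean squared error over all test points."""
    n = len(predicted)
    return (sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n) ** 0.5

# Stack both features so one scalar error covers delay and throughput.
pred = [24.0, 26.0, 6.0, 6.4]    # [delay tests..., throughput tests...]
true = [25.0, 25.0, 6.2, 6.2]    # corresponding real values
rmse2 = rmse(pred, true)
```

The first prediction error (from the parallel models) can be computed the same way, making the two errors directly comparable in the selection step.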
  • The sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of this application.
  • Figure 5 is a schematic block diagram of the prediction server 400 provided by the embodiment of the present application.
  • the prediction server 400 may include:
  • Obtaining unit 410 is used to obtain input parameters for predicting multiple parameters
  • Determining unit 420 configured to determine a target prediction model for predicting the multiple parameters based on the first prediction error corresponding to the multiple prediction models running in parallel and the second prediction error corresponding to the single prediction model running independently;
  • The plurality of prediction models correspond one-to-one to the plurality of parameters.
  • Each prediction model in the plurality of prediction models is used to predict the parameter corresponding to that prediction model.
  • The single prediction model is used to predict the plurality of parameters in one pass.
  • the prediction unit 430 is configured to predict the multiple parameters based on the input parameters and the target prediction model.
  • The determining unit 420 is specifically configured to: if the first prediction error is greater than or equal to the second prediction error, determine the single prediction model as the target prediction model; if the first prediction error is less than the second prediction error, determine the multiple prediction models as the target prediction model or determine the single prediction model as the target prediction model.
  • The determining unit 420 is specifically configured to: if the first prediction error is less than the second prediction error, determine, based on the second prediction error, the multiple prediction models as the target prediction model or the single prediction model as the target prediction model.
  • The determining unit 420 is specifically configured to: if the second prediction error is less than or equal to a preset error, determine the single prediction model as the target prediction model; if the second prediction error is greater than the preset error, determine the multiple prediction models as the target prediction model.
  • The determining unit 420 is specifically configured to: if the first prediction error is less than the second prediction error, determine, based on the application scenario of the multiple parameters, the multiple prediction models as the target prediction model or the single prediction model as the target prediction model.
  • The determining unit 420 is specifically configured to: if the application scenario of the multiple parameters is a scenario in which a terminal device applies the multiple parameters, determine the multiple prediction models as the target prediction model; if the application scenario of the multiple parameters is a scenario in which the prediction server applies the multiple parameters, determine the single prediction model as the target prediction model.
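The determining unit's selection rules can be summarized as one decision function. This is a hedged sketch: the preset error value and the scenario labels (`"terminal"` for the terminal-device scenario, `"server"` for the prediction-server scenario) are illustrative assumptions.

```python
def choose_target_model(err_parallel, err_single,
                        preset_error=1.0, scenario="terminal"):
    """Return which model family to use as the target prediction model."""
    # The parallel models cost more to run; if they are no more accurate,
    # prefer the cheaper single model.
    if err_parallel >= err_single:
        return "single"
    # Parallel models are more accurate. Use the single model only if it
    # still meets the accuracy requirement (the preset error).
    if err_single > preset_error:
        return "parallel"
    # Both are acceptable: let the application scenario decide.
    # Terminal-device scenarios affect user experience -> favor accuracy;
    # prediction-server scenarios -> favor cost.
    return "parallel" if scenario == "terminal" else "single"
```

The order of the checks mirrors the bullets above: error comparison first, then the preset error, then the application scenario as a tie-breaker.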
  • The acquisition unit 410 is specifically configured to:
  • obtain structured data of the terminal device, where the structured data includes m historical moments and a parameter set for each of the m historical moments; the parameter set includes the multiple parameters, and m is an integer greater than 0;
  • obtain a first historical moment that is before the moment corresponding to the multiple parameters and whose interval from that moment is greater than or equal to a preset duration;
  • based on the m historical moments, taking the first historical moment as the starting moment, select n historical moments forward in chronological order, where n is an integer greater than 0 and less than or equal to m;
  • based on the structured data, obtain n parameter sets corresponding to the n historical moments; and
  • determine the input parameters based on the target prediction model and the n parameter sets.
  • The acquisition unit 410 is specifically configured to: if the target prediction model is the single prediction model, obtain a second historical moment before the first historical moment; based on the m historical moments, taking the second historical moment as the starting moment, select k historical moments forward in chronological order, where k is an integer greater than 0 and less than or equal to m; based on the structured data, obtain k parameter sets corresponding to the k historical moments; and perform a differencing operation on the n parameter sets and the k parameter sets to obtain the input parameters.
  • The acquisition unit 410 is specifically configured to: if the target prediction model is the multiple prediction models, determine the n parameter sets as the input parameters.
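The acquisition unit's input construction can be sketched as window selection over the structured data, with an element-wise difference of two windows when the single model is the target. A minimal sketch under stated assumptions: `n = k = 3`, the window indices, and the `[delay, throughput]` parameter sets are hypothetical.

```python
def window(history, end_idx, size):
    """Select `size` parameter sets ending at index `end_idx` (inclusive)."""
    return history[end_idx - size + 1:end_idx + 1]

# Structured data: one [delay, throughput] parameter set per historical moment.
history = [[20, 5.0], [22, 6.0], [21, 5.5], [25, 6.5], [24, 7.0]]

n_sets = window(history, end_idx=4, size=3)   # window ending at the 1st historical moment
k_sets = window(history, end_idx=3, size=3)   # window ending one moment earlier (2nd moment)

# Multiple prediction models: n_sets is used as the input directly.
# Single prediction model: difference the two windows element-wise.
diff_input = [[a - b for a, b in zip(sa, sb)]
              for sa, sb in zip(n_sets, k_sets)]
```

With the windows offset by one moment, the element-wise difference is exactly a first-order difference of each parameter set.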
  • Before acquiring the input parameters for predicting the multiple parameters, the acquisition unit 410 is also configured to:
  • obtain a training data set and a test data set;
  • test each of the multiple prediction models based on the test data set to obtain multiple prediction errors;
  • obtain the first prediction error based on the multiple prediction errors; and
  • test the single prediction model based on the test data set to obtain the second prediction error.
  • the device embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. To avoid repetition, they will not be repeated here.
  • The prediction server 400 may correspond to the corresponding subject executing the methods 200 to 300 in the embodiments of this application, and each unit in the prediction server 400 implements the corresponding processes in the methods 200 to 300. For brevity, they are not repeated here.
  • Each unit in the prediction server 400 involved in the embodiments of this application can be separately or entirely combined into one or several other units, or some of the units can be further divided into multiple smaller functional units, which can achieve the same operation without affecting the technical effects of the embodiments of this application.
  • the above units are divided based on logical functions. In practical applications, the function of one unit can also be realized by multiple units, or the functions of multiple units can be realized by one unit. In other embodiments of the present application, the prediction server 400 may also include other units. In actual applications, these functions may also be implemented with the assistance of other units, and may be implemented by multiple units in cooperation.
  • The method can be implemented by running a computer program (including program code) capable of executing the steps of the corresponding method on a general-purpose computing device, for example, a computer that includes processing elements and storage elements such as a central processing unit (CPU), a graphics processing unit (GPU), a random access memory (RAM), and a read-only memory (ROM), so as to construct the prediction server 400 involved in the embodiments of this application and to implement the parameter prediction method in the embodiments of this application.
  • the computer program can be recorded on, for example, a computer-readable storage medium, loaded into an electronic device through the computer-readable storage medium, and run therein to implement the corresponding methods of the embodiments of the present application.
  • the units mentioned above can be implemented in the form of hardware, can also be implemented in the form of instructions in the form of software, or can be implemented in the form of a combination of software and hardware.
  • each step of the method embodiments in the embodiments of the present application can be completed by integrated logic circuits of hardware in the processor and/or instructions in the form of software.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application can be directly embodied in hardware.
  • the execution of the decoding processor is completed, or the execution is completed using a combination of hardware and software in the decoding processor.
  • the software can be located in a mature storage medium in this field such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, register, etc.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps in the above method embodiment in combination with its hardware.
  • FIG. 6 is a schematic structural diagram of an electronic device 500 provided by an embodiment of the present application.
  • the electronic device 500 at least includes a processor 510 and a computer-readable storage medium 520 .
  • the processor 510 and the computer-readable storage medium 520 may be connected through a bus or other means.
  • the computer-readable storage medium 520 is used to store a computer program 521.
  • the computer program 521 includes computer instructions.
  • the processor 510 is used to execute the computer instructions stored in the computer-readable storage medium 520.
  • The processor 510 is the computing core and control core of the electronic device 500; it is adapted to implement one or more computer instructions, and is specifically adapted to load and execute one or more computer instructions to implement the corresponding method flow or corresponding function.
  • the processor 510 may also be called a central processing unit (CPU) or a graphics processing unit (GPU).
  • the processor 510 may include, but is not limited to: a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) Or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • The computer-readable storage medium 520 can be a high-speed RAM memory, or a non-volatile memory (Non-Volatile Memory), such as at least one disk memory; optionally, it can also be at least one computer-readable storage medium located far away from the aforementioned processor 510.
  • the computer-readable storage medium 520 includes, but is not limited to, volatile memory and/or non-volatile memory.
  • Non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
  • Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
  • By way of example but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (SLDRAM), and direct Rambus random access memory (Direct Rambus RAM).
  • the electronic device 500 may further include a transceiver 530 .
  • the processor 510 can control the transceiver 530 to communicate with other devices. Specifically, it can send information or data to other devices, or receive information or data sent by other devices.
  • Transceiver 530 may include a transmitter and a receiver.
  • the transceiver 530 may further include an antenna, and the number of antennas may be one or more.
  • The components of the electronic device 500 are connected through a bus system, where, in addition to the data bus, the bus system also includes a power bus, a control bus, and a status signal bus.
  • The electronic device 500 can be any electronic device with data processing capabilities; first computer instructions are stored in the computer-readable storage medium 520, and the processor 510 loads and executes the first computer instructions stored in the computer-readable storage medium 520 to implement the corresponding steps in the method embodiment shown in Figure 1; to avoid repetition, the details are not described here again.
  • embodiments of the present application also provide a computer-readable storage medium (Memory).
  • the computer-readable storage medium is a memory device in the electronic device 500 and is used to store programs and data.
  • computer-readable storage medium 520 may include a built-in storage medium in the electronic device 500 , and of course may also include an extended storage medium supported by the electronic device 500 .
  • the computer-readable storage medium provides storage space that stores the operating system of the electronic device 500 .
  • one or more computer instructions suitable for being loaded and executed by the processor 510 are also stored in the storage space. These computer instructions may be one or more computer programs 521 (including program codes).
  • embodiments of the present application further provide a computer program product or computer program.
  • the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • For example, with the computer program 521, the data processing device 500 may be a computer: the processor 510 reads the computer instructions from the computer-readable storage medium 520 and executes them, so that the computer performs the parameter prediction method provided in the above optional implementations.
  • The computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center via wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.


Abstract

This application provides a parameter prediction method, a prediction server, a prediction system, and an electronic device. Embodiments of this application relate to the fields of remote-controlled driving and highway scenarios in Internet of Vehicles technology. The method includes: obtaining input parameters for predicting multiple parameters; determining, based on a first prediction error corresponding to multiple prediction models running in parallel and a second prediction error corresponding to a single prediction model running independently, a target prediction model for predicting the multiple parameters, where the multiple prediction models correspond one-to-one to the multiple parameters, each of the multiple prediction models is used to predict the parameter corresponding to that prediction model, and the single prediction model is used to predict the multiple parameters in one pass; and predicting the multiple parameters based on the input parameters and the target prediction model. The solution provided by this application can not only predict multiple parameters simultaneously, but also provide diversified prediction of the multiple parameters.

Description

Parameter prediction method, prediction server, prediction system, and electronic device
This application claims priority to the Chinese patent application No. 202210647581.2, filed with the China National Intellectual Property Administration on June 8, 2022 and entitled "Parameter prediction method, prediction server, prediction system and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of this application relate to the fields of remote-controlled driving and highway scenarios in Internet of Vehicles technology, and more specifically, to a parameter prediction method, a prediction server, a prediction system, and an electronic device.
Background
Quality of service (QoS) parameters include parameters in dimensions such as throughput, bandwidth, and transmission rate, as well as parameters in dimensions such as delay. For example, in a remote-control scenario, both the uplink throughput and the delay of a terminal device are sensitive variables; therefore, throughput and delay need to be predicted simultaneously to improve the remote-control effect.
However, related technologies can only predict a single QoS parameter, such as throughput or delay.
In addition, different scenarios have different requirements for QoS parameter prediction. For example, when both the remote-control scenario and the highway scenario require delay prediction, the highway scenario demands higher prediction accuracy than the remote-control scenario, whereas existing prediction schemes simply pursue the highest possible accuracy for QoS parameters and offer no schemes for prediction at multiple accuracy levels. Consequently, for scenarios with lower requirements on prediction accuracy, using a high-accuracy prediction algorithm may result in surplus computing power and thus increase the prediction cost.
Summary
Embodiments of this application provide a parameter prediction method, a prediction server, a prediction system, and an electronic device, which can not only predict multiple parameters simultaneously but also provide diversified prediction of the multiple parameters.
In a first aspect, an embodiment of this application provides a parameter prediction method, including:
obtaining input parameters for predicting multiple parameters;
determining, based on a first prediction error corresponding to multiple prediction models running in parallel and a second prediction error corresponding to a single prediction model running independently, a target prediction model for predicting the multiple parameters;
where the multiple prediction models correspond one-to-one to the multiple parameters, each of the multiple prediction models is used to predict the parameter corresponding to that prediction model, and the single prediction model is used to predict the multiple parameters in one pass; and
predicting the multiple parameters based on the input parameters and the target prediction model.
In a second aspect, an embodiment of this application provides a prediction server, including:
an acquisition unit, configured to obtain input parameters for predicting multiple parameters;
a determining unit, configured to determine, based on a first prediction error corresponding to multiple prediction models running in parallel and a second prediction error corresponding to a single prediction model running independently, a target prediction model for predicting the multiple parameters;
where the multiple prediction models correspond one-to-one to the multiple parameters, each of the multiple prediction models is used to predict the parameter corresponding to that prediction model, and the single prediction model is used to predict the multiple parameters in one pass; and
a prediction unit, configured to predict the multiple parameters based on the input parameters and the target prediction model.
In a third aspect, an embodiment of this application provides a prediction system, including a terminal device, a network device, a core network device, a prediction server, and a remote-control server, where the core network device processes collected data sent by the network device or the terminal device into structured data and sends the structured data to the prediction server, and the prediction server executes the method of the first aspect based on the structured data sent by the core network device and sends the prediction result to the remote-control server or the terminal device.
In a fourth aspect, an embodiment of this application provides an electronic device, including:
a processor, adapted to implement computer instructions; and
a computer-readable storage medium storing computer instructions, the computer instructions being adapted to be loaded by the processor to execute the method of the first aspect.
In a fifth aspect, an embodiment of this application provides a computer-readable storage medium storing computer instructions that, when read and executed by a processor of a computer device, cause the computer device to execute the method of the first aspect.
In a sixth aspect, an embodiment of this application provides a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to execute the method of the first aspect.
Based on the above technical solutions, by introducing multiple prediction models running in parallel and a single prediction model running independently, designing each of the multiple prediction models as a dedicated prediction model corresponding to one parameter, and designing the single prediction model as a general-purpose prediction model that predicts the multiple parameters in one pass, both the multiple prediction models and the single prediction model can predict multiple parameters simultaneously. Further, considering that the energy cost of running multiple prediction models in parallel is greater than that of running a single prediction model independently — that is, the prediction cost of the multiple prediction models is greater than that of the single prediction model — this application introduces the first prediction error corresponding to the multiple prediction models and the second prediction error corresponding to the single prediction model, and determines, based on the two errors, the target prediction model for predicting the multiple parameters. In effect, the multiple parameters can be predicted at different prediction costs and different prediction accuracies, thereby achieving diversified prediction of the multiple parameters.
In short, the solution provided by this application can not only predict multiple parameters simultaneously, but also provide diversified prediction of the multiple parameters.
Brief Description of the Drawings
Figure 1 is an example of the system framework provided by an embodiment of this application.
Figure 2 is a schematic flowchart of the parameter prediction method provided by an embodiment of this application.
Figure 3 is a schematic flowchart of the parameter prediction method in the remote-control scenario provided by an embodiment of this application.
Figure 4 is a schematic flowchart of the testing and training process provided by an embodiment of this application.
Figure 5 is a schematic block diagram of the prediction server provided by an embodiment of this application.
Figure 6 is a schematic block diagram of the electronic device provided by an embodiment of this application.
Detailed Description
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings.
The terms "first", "second", "third", "fourth", etc. in the specification, claims, and drawings of this application are used to distinguish different objects, not to describe a specific order. Moreover, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion: a process, method, system, product, or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally further includes unlisted steps or units, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The technical fields involved in the embodiments of this application are introduced below.
Artificial intelligence (AI) is the theory, method, technology, and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
The parameter prediction method provided by the embodiments of this application can be implemented by means of artificial intelligence.
Computer vision (CV) is the science of studying how to make machines "see"; that is, using cameras and computers instead of human eyes to identify and measure targets, and further performing graphics processing so that the computer produces images more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can obtain information from images or multidimensional data. Computer vision technology usually includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, 3D object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
In the parameter prediction method provided by the embodiments of this application, CV technology can implement the functions related to image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, 3D object reconstruction, 3D technology, virtual reality, augmented reality, and the like.
Machine learning (ML) is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, and algorithm complexity theory. It studies how computers simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; its applications cover all fields of artificial intelligence. Machine learning and deep learning usually include technologies such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
In the parameter prediction method provided by the embodiments of this application, the models involved can be trained by means of ML technology.
Figure 1 is an example of the system framework 100 provided by an embodiment of this application.
As shown in Figure 1, the system framework 100 may include a terminal device 111, a terminal device 112, a network device 121, a network device 122, a core network device 130, a prediction server 140, a remote-control server 150, and a remote-control cockpit 160.
The terminal device 111 can communicate with the core network device 130 through the network device 121 and/or the network device 122. The prediction server 140 can predict various parameters of the terminal device 111 (or the terminal device 112) to obtain prediction results and send the obtained prediction results to the remote-control server 150 or the remote-control cockpit 160; based on the received prediction results, the remote-control server 150 or the remote-control cockpit 160 sends control instructions to the terminal device 111 (or the terminal device 112) to control it to perform corresponding operations. For example, the control instruction is used to control the terminal device 111 (or the terminal device 112) to perform at least one of the following: adjusting the uplink video bitrate, reducing the remote-control speed, parking safely, or exiting remote-control operation. The remote-control server 150 can be responsible for information processing, for example, handling service requests and responses, or sending control instructions to the terminal device 111 (or the terminal device 112) based on the prediction results fed back by the prediction server 140; the driver in the remote-control cockpit 160 can send control instructions to the terminal device 111 (or the terminal device 112) according to the video information displayed on the screen or the prediction results fed back by the prediction server 140.
Of course, in other alternative embodiments, the remote-control server 150 can also analyze the traffic environment of the terminal device 111 (or the terminal device 112) based on the obtained prediction results and send the analysis results to the remote cockpit 160, so that the driver in the remote cockpit 160 can drive the vehicle remotely based on the analysis results.
For example, the prediction server 140 can predict quality of service (QoS) parameters characterizing the terminal device 111 (or the terminal device 112). QoS parameters include information characterizing the network state of the terminal device 111 (or the terminal device 112), which may also be referred to simply as network state information, including but not limited to: throughput, signal to interference plus noise ratio (SINR), received signal strength indication (RSSI), reference signal receiving power (RSRP), reference signal receiving quality (RSRQ), and delay. Of course, other parameters of the terminal device 111 (or the terminal device 112) can also be predicted, including but not limited to: the position information of the terminal device, the azimuth of the terminal device and the longitude and latitude of the terminal device, video resolution, video segment size, video segment download time, and so on.
For example, the prediction server 140 can predict the current data of a QoS parameter based on historical data of that QoS parameter. For instance, the prediction server 140 can predict the delay of the terminal device 111 (or the terminal device 112) at the current moment based on its historical delay data.
For example, the system framework 100 is applicable to any scenario requiring remote control or requiring prediction of various parameter indicators.
For example, the system framework 100 is applicable to remote driving or highway scenarios. In this case, the terminal device 111 (or the terminal device 112) may be a vehicle, including but not limited to a manned vehicle, an intelligent connected vehicle, or a driverless vehicle; the vehicle can establish communication with the network device 121 or the network device 122 through the radio communication subsystem built into its on-board unit. The network device 121 or the network device 122 can act as a transmission source and may be an active communication device, including but not limited to a 4G/5G base station, a road side unit (RSU), WiFi, and so on. The terminal device 111 (or the terminal device 112) may be a device provided with one or more of an application (APP), an in-vehicle Internet Protocol (IP) camera, and other terminal software. The core network device 130 includes but is not limited to a 4G core network device or a 5G core network (5GC) device, and the 5G core network device may be a cloud-based 5G core network. The prediction server may be called a QoS prediction server, and the remote-control cockpit 160 may be a cockpit for remotely driving a vehicle.
Of course, the system architecture can also support various traffic-flow services, including but not limited to driverless container trucks, vehicle-road cooperation, real-time digital twins, and remote control. This application does not specifically limit this.
In addition, Figure 1 exemplarily shows two terminal devices, two network devices, and one core network device, but this application is not limited thereto. For example, in other alternative embodiments, the system framework 100 may include other numbers of network devices, and the coverage of each network device may include other numbers of terminal devices.
It is worth noting that the prediction server 140 can usually predict only a single QoS parameter (such as throughput or delay), while practical application scenarios often require prediction of multiple QoS parameters. For example, in the remote-control scenario, both the uplink throughput and the delay of the terminal device are sensitive variables, so throughput and delay need to be predicted simultaneously.
In addition, different scenarios have different requirements for QoS parameter prediction. For example, when both the remote-control scenario and the highway scenario require delay prediction, the highway scenario demands higher prediction accuracy than the remote-control scenario, whereas existing prediction schemes simply pursue the highest possible accuracy and offer no schemes for prediction at multiple accuracy levels. Consequently, for scenarios with lower requirements on prediction accuracy, using a high-accuracy prediction algorithm may result in surplus computing power and thus increase the prediction cost.
In view of this, this application provides a parameter prediction method that enables the prediction server 140 not only to predict multiple parameters simultaneously, but also to provide diversified prediction of the multiple parameters. In particular, the prediction server 140 can predict multiple parameters, such as throughput and delay, at the same time. Specifically, multiple prediction models running in parallel or an enhanced single prediction model can be used to predict the multiple parameters simultaneously.
Figure 2 shows a schematic flowchart of the parameter prediction method 200 according to an embodiment of this application. The parameter prediction method 200 can be executed by any electronic device with data processing capabilities. For example, the electronic device can be implemented as a server. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, and big data and artificial intelligence platforms; servers can be connected directly or indirectly by wired or wireless communication, which this application does not specifically limit. For example, the parameter prediction method 200 can be executed by the prediction server 140 shown in Figure 1.
As shown in Figure 2, the method 200 may include some or all of the following:
S210: obtain input parameters for predicting multiple parameters;
S220: determine, based on a first prediction error corresponding to multiple prediction models running in parallel and a second prediction error corresponding to a single prediction model running independently, a target prediction model for predicting the multiple parameters;
where the multiple prediction models correspond one-to-one to the multiple parameters, each of the multiple prediction models is used to predict the parameter corresponding to that prediction model, and the single prediction model is used to predict the multiple parameters in one pass;
S230: predict the multiple parameters based on the input parameters and the target prediction model.
For example, when the target prediction model is the single prediction model, the single prediction model predicts the multiple parameters in one pass. The term "one-pass prediction" means that the predicted values of the multiple parameters can be obtained by executing the prediction process once with the single prediction model; in other words, the input parameters need to be input only once into the single prediction model. Correspondingly, when the target prediction model is the multiple prediction models, each of the multiple prediction models predicts its corresponding parameter. That is, the prediction process must be executed multiple times, traversing the multiple prediction models, to obtain the predicted values of the multiple parameters; in other words, the input parameters must be input once into each of the multiple prediction models, so that each prediction model outputs the predicted value of its corresponding parameter.
In this embodiment, by introducing multiple prediction models running in parallel and a single prediction model running independently, designing each of the multiple prediction models as a dedicated prediction model corresponding to one parameter, and designing the single prediction model as a general-purpose prediction model that predicts the multiple parameters in one pass, both the multiple prediction models and the single prediction model can predict multiple parameters simultaneously. Further, considering that the energy cost of running multiple prediction models in parallel is greater than that of running a single prediction model independently — that is, the prediction cost of the multiple prediction models is greater than that of the single prediction model — this application introduces the first prediction error corresponding to the multiple prediction models and the second prediction error corresponding to the single prediction model, and determines the target prediction model based on the two errors. In effect, the multiple parameters can be predicted at different prediction costs and different prediction accuracies, thereby achieving diversified prediction of the multiple parameters.
In short, the solution provided by this application can not only predict multiple parameters simultaneously, but also provide diversified prediction of the multiple parameters.
Taking the remote-control scenario as an example, the solution provided by this application can predict changes in multiple parameters (such as throughput and delay), and the vehicle can then be remotely controlled based on the predicted values of the multiple parameters. For example, before a network change occurs, corresponding measures can be taken in time based on the predicted values (such as adjusting the uplink video bitrate, reducing the remote-control speed, or parking safely and exiting remote control), thereby improving the safety of remotely driven vehicles and enhancing their driving decision performance.
It should be understood that the first prediction error and the second prediction error involved in this application may be parameters or indicators characterizing the prediction accuracy of a prediction model. For example, the first prediction error and the second prediction error may be the relative mean square error (RMSE) or other parameters.
In addition, the multiple prediction models and the single prediction model involved in this application may be any model or algorithm with a prediction function, including but not limited to traditional learning models, ensemble learning models, or deep learning models. Optionally, traditional learning models include but are not limited to tree models (regression trees) or logistic regression (LR) models; ensemble learning models include but are not limited to improved gradient boosting models (XGBoost) or random forest models; deep learning models include but are not limited to long short-term memory (LSTM) networks or neural networks. Of course, other machine learning models can also be used in other embodiments of this application, which is not specifically limited.
It should also be understood that this application does not specifically limit the role or use of the prediction results of the multiple parameters. For example, in some implementations, after the prediction results of the multiple parameters are obtained, a control instruction for controlling the terminal device can be generated based on the prediction results; the control instruction is used to control the terminal device to perform at least one of the following: adjusting the uplink video bitrate, reducing the remote-control speed, parking safely, or exiting remote-control operation. In other implementations, the prediction results of the multiple parameters can also be used to generate the network environment of the terminal device, so that a remote operator can remotely control the terminal device based on the analyzed network environment. For example, in a remote driving scenario, the terminal device is a remotely driven vehicle and the network environment is the traffic environment; that is, the prediction results of the multiple parameters can also be used to generate the traffic environment (e.g., congestion) of the remotely driven vehicle, so that the remote operator can control the remotely driven vehicle based on the analyzed traffic environment (e.g., congestion).
In some embodiments, S220 may include:
if the first prediction error is greater than or equal to the second prediction error, determining the single prediction model as the target prediction model; if the first prediction error is less than the second prediction error, determining the multiple prediction models as the target prediction model or determining the single prediction model as the target prediction model.
For example, if the first prediction error is greater than or equal to the second prediction error, the prediction accuracy of the multiple prediction models is less than or equal to that of the single prediction model; moreover, since the prediction cost of the multiple prediction models is greater than that of the single prediction model, determining the single prediction model as the target prediction model — compared with determining the multiple prediction models as the target prediction model — controls the prediction cost of the multiple parameters while ensuring the prediction accuracy.
For example, if the first prediction error is less than the second prediction error, the prediction accuracy of the multiple prediction models is greater than that of the single prediction model, while their prediction cost is also greater. In this case, determining the single prediction model as the target prediction model amounts to determining the target prediction model by preferentially controlling the prediction cost of the multiple parameters.
For example, if the first prediction error is less than the second prediction error, determining the multiple prediction models as the target prediction model amounts to determining the target prediction model by preferentially guaranteeing the prediction accuracy of the multiple parameters.
In some embodiments, S220 may include:
if the first prediction error is less than the second prediction error, determining, based on the second prediction error, the multiple prediction models as the target prediction model or the single prediction model as the target prediction model.
For example, if the first prediction error is less than the second prediction error, the single prediction model is determined as the target prediction model when the second prediction error meets the accuracy requirement of the multiple parameters, and the multiple prediction models are determined as the target prediction model when the second prediction error does not meet the accuracy requirement of the multiple parameters.
In some embodiments, if the second prediction error is less than or equal to a preset error, the single prediction model is determined as the target prediction model; if the second prediction error is greater than the preset error, the multiple prediction models are determined as the target prediction model.
For example, the preset error may represent the accuracy requirement of the multiple parameters.
For example, if the second prediction error is less than or equal to the preset error, the prediction accuracy of the single prediction model can, to a certain extent, meet the accuracy requirement; in this case, determining the single prediction model as the target prediction model controls the prediction cost of the multiple parameters while guaranteeing their prediction accuracy.
For example, if the second prediction error is greater than the preset error, the prediction accuracy of the single prediction model may not meet the accuracy requirement; in this case, determining the multiple prediction models as the target prediction model preferentially guarantees the prediction accuracy of the multiple parameters at the expense of prediction cost.
Of course, the accuracy requirement of the multiple parameters can also take other forms, which this application does not specifically limit.
For example, in other alternative embodiments, if the second prediction error is within a preset error range, the single prediction model is determined as the target prediction model; if the second prediction error is outside the preset error range, the multiple prediction models are determined as the target prediction model. In other words, the preset error range can represent the accuracy requirement of the multiple parameters. Specifically, if the second prediction error is within the preset error range, the prediction accuracy of the single prediction model can meet the accuracy requirement, and determining the single prediction model as the target prediction model controls the prediction cost while guaranteeing the prediction accuracy; if the second prediction error is outside the preset error range, the prediction accuracy of the single prediction model may not meet the accuracy requirement, and determining the multiple prediction models as the target prediction model preferentially guarantees the prediction accuracy at the expense of prediction cost.
In some embodiments, S220 may include:
if the first prediction error is less than the second prediction error, determining, based on the application scenario of the multiple parameters, the multiple prediction models as the target prediction model or the single prediction model as the target prediction model.
In this embodiment, if the first prediction error is less than the second prediction error, the multiple prediction models have a high prediction cost and high prediction accuracy, while the single prediction model has a low prediction cost and low prediction accuracy. In this case, since the application scenario of the multiple parameters can, to a certain extent, reflect their accuracy requirement, determining the target prediction model based on the application scenario ensures that the prediction accuracy of the target prediction model is consistent with the accuracy requirement of the multiple parameters, avoiding the problems of excessively low prediction accuracy or excessively high prediction cost that would arise from a mismatch between the two.
For example, the application scenario of the multiple parameters is a scenario determined by the accuracy requirement of the multiple parameters.
In this embodiment, designing the application scenario of the multiple parameters as a scenario determined by their accuracy requirement ensures that the prediction accuracy of the target prediction model is consistent with the accuracy requirement of the multiple parameters, avoiding the mismatch problems described above.
In some embodiments, if the application scenario of the multiple parameters is a scenario in which a terminal device applies the multiple parameters, the multiple prediction models are determined as the target prediction model; if the application scenario of the multiple parameters is a scenario in which the prediction server applies the multiple parameters, the single prediction model is determined as the target prediction model.
In this embodiment, considering that different devices usually have different requirements on the prediction accuracy of the multiple parameters, distinguishing application scenarios by the device that applies the multiple parameters not only reduces the number of scenario categories — and thus the complexity of determining the target prediction model — but also ensures that the prediction accuracy of the target prediction model is consistent with the accuracy requirement of the multiple parameters, avoiding the mismatch problems described above.
For example, if the application scenario is one in which the terminal device applies the multiple parameters, the parameters can affect user experience; in this case, when both the multiple prediction models and the single prediction model are available, the multiple prediction models can be taken as the target prediction model to preferentially guarantee the prediction accuracy of the multiple parameters.
For example, if the application scenario is one in which the prediction server applies the multiple parameters, the parameters may not affect user experience; in this case, when both are available, the single prediction model can be taken as the target prediction model to preferentially control the prediction cost of the multiple parameters.
Of course, in other alternative embodiments, if the first prediction error is less than the second prediction error, the multiple prediction models or the single prediction model can also be determined as the target prediction model based on both the second prediction error and the application scenario of the multiple parameters. This application does not specifically limit this.
For example, if the first prediction error is less than the second prediction error and the second prediction error is less than or equal to the preset error, the multiple prediction models or the single prediction model is determined as the target prediction model based on the application scenario of the multiple parameters. In other words, if the first prediction error is less than the second prediction error, the prediction accuracy of the multiple prediction models is greater than that of the single prediction model; if the second prediction error is less than or equal to the preset error, the prediction accuracy of the single prediction model can meet the accuracy requirement. In this case, choosing the target prediction model based on the application scenario makes the target prediction model match the application scenario of the multiple parameters, avoiding excessively low prediction accuracy or excessively high prediction cost caused by a mismatch, and thus improving the prediction effect.
For example, if the first prediction error is less than the second prediction error and the second prediction error is within the preset error range, the multiple prediction models or the single prediction model is likewise determined as the target prediction model based on the application scenario of the multiple parameters, with the same benefits as above.
在一些实施例中,该S210可包括:
获取终端设备的结构化数据;
其中,该结构化数据包括m个历史时刻和该m个历史时刻中的每一个历史时刻的参数集合;该参数集合包括所述多个参数,m为大于0的整数;
获取位于该多个参数对应的时刻之前的且间隔时长大于或等于预设时长的第一历史时刻;
基于该m个历史时刻,以该第一历史时刻为起始时刻,按照时间顺序向前选择n个历史时刻;n为大于0且小于或等于m的整数;
基于该结构化数据,获取该n个历史时刻对应的n个参数集合;
基于该目标预测模型和该n个参数集合,确定该输入参数。
本实施例中，通过n个参数集合确定输入参数，由于参数集合的获取难度较小，因此，能够控制该输入参数的获取成本；此外，n的取值为大于1的整数时，相当于通过多个参数集合确定输入参数，此时，该输入参数不仅能够反映过去第一历史时刻的参数集合，还能够体现该第一历史时刻之前的参数集合的变化趋势，丰富了用于对该多个参数进行预测的信息，能够提升该多个参数的预测准确度。
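示例性地，上述"获取第一历史时刻并向前选择n个历史时刻的参数集合"的步骤可草拟如下（结构化数据以"(时刻, 参数集合)"的升序列表表示，字段名delay/throughput等均为示意假设）：

```python
def select_history_sets(structured, predict_time, preset_gap, n):
    """从结构化数据中选取用于预测的输入窗口（示意实现）。

    structured: [(时刻, 参数集合), ...] 按时间升序排列;
    predict_time: 该多个参数对应的时刻;
    preset_gap: 预设时长, 第一历史时刻与预测时刻的间隔须不小于该值;
    n: 以第一历史时刻为起始时刻、按时间顺序向前选择的历史时刻个数。
    """
    # 第一历史时刻: 位于预测时刻之前、且间隔时长 >= 预设时长的最近时刻
    idx = max(i for i, (t, _) in enumerate(structured)
              if predict_time - t >= preset_gap)
    # 以第一历史时刻为起点, 向前(更早的方向)共取 n 个时刻的参数集合
    start = max(0, idx - n + 1)
    return [s for _, s in structured[start:idx + 1]]


# 示意数据: t=0..9, 每个时刻的参数集合包含时延与吞吐量
data = [(t, {"delay": 10 + t, "throughput": 100 - t}) for t in range(10)]
window = select_history_sets(data, predict_time=9, preset_gap=2, n=3)
print(len(window))           # 3
print(window[-1]["delay"])   # 17, 即第一历史时刻 t=7 的时延取值
```

该草图中窗口末元素对应第一历史时刻，其余元素为其之前的历史参数集合。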
示例性地，该每一个历史时刻的参数集合包括该多个参数在该每一个历史时刻的取值。进一步的，该每一个历史时刻的参数集合还可以包括辅助参数在该每一个历史时刻的取值。例如，t0时刻的参数集合包括该多个参数在t0时刻的取值，进一步的，还可以包括辅助参数在该t0时刻的取值。可选的，该多个参数包括但不限于：用于表征终端设备的网络状态的网络状态信息和吞吐量。该网络状态信息包括但不限于：信号与干扰加噪声比(Signal to Interference plus Noise Ratio，SINR)、接收的信号强度指示(Received Signal Strength Indication，RSSI)、参考信号接收功率(Reference Signal Receiving Power，RSRP)、参考信号接收质量(Reference Signal Receiving Quality，RSRQ)、时延。该辅助参数包括但不限于：终端设备的位置信息、终端设备的方位角和终端设备的经纬度信息、视频分辨率、视频分片大小、视频分片下载时间等。
示例性地,该结构化数据可以是历史时刻和参数集合的二维结构数据。
示例性地，该结构化数据可以是终端设备或网络设备通过对时间序列数据进行处理得到的数据，当然，也可以是核心网设备或预测服务器对时间序列数据进行处理得到的数据。该时间序列数据可以是在不同时间上统计某种指标或参数，并按时间先后顺序排列而形成的序列。
示例性地,该时间序列数据可以是终端设备采集或网络设备采集得到的数据。
示例性地，将时间序列数据处理为结构化数据的方法包括但不限于：将非结构化数据转化成结构化数据的各种数据处理技术。例如，该数据处理技术主要包括以下三种：
1、决策树:
决策树普遍应用于各类商业应用软件和系统中,例如产品数据存储,交易日志等等,这样的方法需要人工进行特征提取,操作繁琐且需要耗费大量人力进行数据标签。
2、深度学习:
深度学习可以找到这些结构化数据的语义特征,并通过解决人工数据清洗和准备的问题,找到极少或者没有人为干预的自动化方法,使得数据可拓展。
3、结构化数据清洗:
结构化数据清洗是指按照一定的规则把一些错误数据清洗掉，这就是数据清洗。数据清洗的任务是过滤那些不符合要求的数据，将过滤的结果交给业务主管部门，确认是直接过滤掉，还是由业务单位修正之后再进行抽取。
在一些实施例中,若该目标预测模型为该单个预测模型,则获取位于该第一历史时刻之前的第二历史时刻;基于该m个历史时刻,以该第二历史时刻为起始时刻,按照时间顺序向前选择k个历史时刻;k为大于0且小于或等于m的整数;基于该结构化数据,获取该k个历史时刻对应的k个参数集合;对该n个参数集合和该k个参数集合进行差分运算,得到该输入参数。
示例性地,该多个预测模型的输入参数和该单个预测模型的输入参数不同。
示例性地，获取该n个参数集合之后，可以基于该目标预测模型是否为该单个预测模型，确定是否对该n个参数集合进行处理。若该目标预测模型为该单个预测模型，则该目标预测模型的输出为该多个参数的预测值，由于该目标预测模型基于历史数据对该多个参数进行预测时，该多个参数的预测值与该n个参数集合之间的相关性较大，影响了该多个参数的预测准确度。有鉴于此，本申请通过差分运算的方式消除或降低该多个参数的预测值和该n个参数集合的相关性，能够提升该多个参数的预测准确度。
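示例性地，上述差分运算可用如下草图示意（假设两组参数集合已按时刻对齐、逐项相减；键名delay/throughput为示意，实际差分的对齐方式以具体实施为准）：

```python
def difference(sets_n, sets_k):
    """对 n 个参数集合与 k 个参数集合做逐项差分（示意实现）。

    两组集合按时间对齐后, 对每个参数的取值相减, 差分结果作为
    单个预测模型的输入参数, 以降低序列之间的相关性。
    """
    assert len(sets_n) == len(sets_k), "该示意实现要求 n == k"
    return [{key: a[key] - b[key] for key in a}
            for a, b in zip(sets_n, sets_k)]


n_sets = [{"delay": 12.0, "throughput": 95.0},
          {"delay": 13.0, "throughput": 93.0}]
k_sets = [{"delay": 10.0, "throughput": 100.0},
          {"delay": 11.0, "throughput": 98.0}]
diff = difference(n_sets, k_sets)
print(diff[0])  # {'delay': 2.0, 'throughput': -5.0}
```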
在一些实施例中,若该目标预测模型为该多个预测模型,则将该n个参数集合,确定为该输入参数。
例如,在获取该n个参数集合之后,可以基于该目标预测模型是否为该单个预测模型,确定是否对该n个参数集合进行处理,若该目标预测模型为该多个预测模型,则直接将该n个参数集合确定为该多个预测模型中的每一个预测模型的输入参数。
本实施例中,考虑到该目标预测模型为该多个预测模型时,该多个预测模型为与参数对应的专用预测模型,即该多个预测模型的预测准确度较高,此时,可以将该n个参数集合直接确定为该输入参数,以降低该多个参数的预测复杂度。
在一些实施例中,该S210之前,该方法200还可包括:
获取训练数据集和测试数据集;
基于该训练数据集训练该多个预测模型和该单个预测模型;
基于该测试数据集对该多个预测模型中的每一个预测模型进行测试,得到多个预测误差;
基于该多个预测误差,得到该第一预测误差;
基于该测试数据集对该单个预测模型进行测试,得到该第二预测误差。
本实施例中，基于训练集对该多个预测模型和该单个预测模型进行训练后，还可以基于测试集对该多个预测模型和该单个预测模型进行测试，以便后续的预测过程基于测试得到的第一预测误差和第二预测误差，确定用于进行预测的目标预测模型。
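示例性地，由测试集得到第一预测误差与第二预测误差的流程可草拟如下（RMSE为通用定义；已训练的模型以可调用对象示意，并非本申请限定的LSTM实现，示例数据亦为假设）：

```python
import math

def rmse(pred, true):
    """均方根误差: RMSE = sqrt(mean((pred - true)^2))。"""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

def evaluate(models, single_model, test_x, test_y):
    """返回第一预测误差(各专用模型RMSE的平均)与第二预测误差(示意)。"""
    per_model = [rmse([m(x) for x in test_x], [y[i] for y in test_y])
                 for i, m in enumerate(models)]
    first_error = sum(per_model) / len(per_model)   # 对应 ARMSE1
    joint = [single_model(x) for x in test_x]       # 单个模型一次性预测多个参数
    flat_pred = [v for p in joint for v in p]
    flat_true = [v for y in test_y for v in y]
    second_error = rmse(flat_pred, flat_true)       # 对应 RMSE2
    return first_error, second_error

# 示意: 两个专用模型分别预测时延/吞吐量, 单个模型联合预测二者
models = [lambda x: x + 1.0, lambda x: 2.0 * x]
single = lambda x: (x + 1.5, 2.0 * x - 1.0)
xs = [1.0, 2.0, 3.0]
ys = [(2.0, 2.0), (3.0, 4.0), (4.0, 6.0)]
armse1, rmse2 = evaluate(models, single, xs, ys)
print(armse1)  # 0.0, 该示意数据下两个专用模型无误差
```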
图3是本申请实施例提供的在远控场景下的参数预测方法300的示意性流程图。该参数预测方法300可以由终端设备、网络设备、核心网设备、预测服务器和远控服务器交互执行，其中预测服务器可以是任何具有数据处理能力的电子设备。例如，该电子设备可实施为服务器。服务器可以是独立的物理服务器，也可以是多个物理服务器构成的服务器集群或者分布式系统，还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、以及大数据和人工智能平台等基础云计算服务的云服务器，服务器可以通过有线或无线通信方式进行直接或间接地连接，本申请在此不做具体限制。例如，该参数预测方法300可以由图1所示的系统架构100执行。
如图3所示,该方法300可包括:
S310,开始。
S320,终端设备获取时间序列数据,并将时间序列数据转换成结构化数据。
示例性地,终端设备可以是车辆,其包括但不限于:有人驾驶车辆、智能网联车辆以及无人驾驶车辆。
示例性地,该时间序列数据可以是在不同时间上统计某种指标或参数,并按时间先后顺序排列而形成的序列。
示例性地,该某种指标或参数包括用于表征终端设备的网络状态的网络状态信息和吞吐量。该网络状态信息包括但不限于:信号与干扰加噪声比(Signal to Interference plus Noise Ratio,SINR)、接收的信号强度指示(Received Signal Strength Indication,RSSI)、参考信号接收功率(Reference Signal Receiving Power,RSRP)、参考信号接收质量(Reference Signal Receiving Quality,RSRQ)、时延。进一步的,该某种指标或参数还可包括但不限于:终端设备的位置信息、终端设备的方位角和终端设备的经纬度信息、视频分辨率、视频分片大小、视频分片下载时间等。
示例性地，终端设备将时间序列数据处理为结构化数据的方法包括但不限于：将非结构化数据转化成结构化数据的各种数据处理技术。例如，该数据处理技术主要包括以下三种：
1、决策树:
决策树普遍应用于各类商业应用软件和系统中,例如产品数据存储,交易日志等等,这样的方法需要人工进行特征提取,操作繁琐且需要耗费大量人力进行数据标签。
2、深度学习:
深度学习可以找到这些结构化数据的语义特征,并通过解决人工数据清洗和准备的问题,找到极少或者没有人为干预的自动化方法,使得数据可拓展。
3、结构化数据清洗:
结构化数据清洗是指按照一定的规则把一些错误数据清洗掉，这就是数据清洗。数据清洗的任务是过滤那些不符合要求的数据，将过滤的结果交给业务主管部门，确认是直接过滤掉，还是由业务单位修正之后再进行抽取。
S330,终端设备通过上行链路将结构化数据发送至网络设备。
示例性地,终端设备可将获取的采集数据通过5G上行链路将结构化数据上传给网络设备。网络设备可以包括4G/5G基站、RSU、WiFi等中的一种或多种。
值得注意的是，本实施例拟采用终端设备获取该采集数据，并将采集到的数据通过网络设备发送至核心网设备。在其他可替代实施例中，也可以通过网络设备（如基站）获取该采集数据，并将获取到的该采集数据实时发送给核心网设备，这种情况需要运营商参与并提供相关数据支持，能够降低对终端设备的软硬件需求。
S340,网络设备通过核心网设备将结构化数据转发至预测服务器。
示例性地，网络设备利用相关接口通过核心网设备将结构化数据转发给预测服务器进行多个预测模型的训练和单个预测模型的训练。核心网设备包括但不限于4G核心网设备或5G核心网设备，5G核心网设备可以是5G云化核心网。
S350,预测服务器对结构化数据进行预处理,得到训练集和测试集。
示例性地,预测服务器将结构化数据处理为样本集,并对样本集进行划分,得到训练集和测试集。
示例性地,预测服务器对结构化数据进行预处理的过程可以理解为:预测服务器将结构化数据处理为多个预测模型或单个预测模型的输入并针对每一个输入配置对应的标签,其中,每一个输入和对应的标签可以用于形成一个样本。
S360,预测服务器基于训练集对并行运行的多个预测模型和单独运行的单个预测模型进行训练。
示例性地,该多个预测模型为两个并行运行的LSTM(Parallel-LSTM)网络,该单个预测模型为增强的LSTM(Improved-LSTM)网络,预测服务器接收结构化数据并对该结构化数据做预处理,然后基于该训练集对Parallel-LSTM和Improved-LSTM进行训练。
示例性地,预测服务器基于该训练集对Parallel-LSTM和Improved-LSTM进行训练时,可先启动算法模块并对训练参数进行配置。
示例性地,预测服务器对训练参数进行配置时,包括但不限于设置以下参数的至少一项:输入特征、输出特征、预测特征、模型的参数、算法平均次数、预测时间、模型训练和测试的比例、迭代次数、模型损失函数、模型优化函数、机器学习神经元的数量等等。其中,配置方式可以是默认的配置方式,或通过用户进行配置,本申请对此不作具体限定。
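示例性地，上述训练参数的一种配置形式可草拟为如下字典（键名与取值均为示意假设，实际配置项以具体实现为准）：

```python
# 训练参数配置草图: 各键名均为示意, 并非本申请限定的字段
train_config = {
    "input_features": ["SINR", "RSSI", "RSRP", "RSRQ", "delay", "throughput"],
    "output_features": ["delay", "throughput"],  # 预测特征
    "horizon_s": 1,                              # 预测时间
    "train_test_ratio": 0.8,                     # 模型训练和测试的比例
    "epochs": 100,                               # 迭代次数
    "loss": "mse",                               # 模型损失函数
    "optimizer": "adam",                         # 模型优化函数
    "hidden_units": 64,                          # 机器学习神经元的数量
    "average_runs": 5,                           # 算法平均次数
}
print(train_config["train_test_ratio"])  # 0.8
```

配置方式既可以取上述默认值，也可以由用户按需修改。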
应当理解，本申请实施例涉及的预测模型不作具体限定。作为示例，该预测模型包括但不限于：传统学习模型、集成学习模型或深度学习模型。可选的，传统学习模型包括但不限于：树模型(回归树)或逻辑回归(logistic regression，LR)模型；集成学习模型包括但不限于：梯度提升算法的改进模型(XGBoost)或随机森林模型；深度学习模型包括但不限于：长短期记忆网络(Long Short-Term Memory，LSTM)或神经网络。当然，在本申请的其他实施例中，也可以使用其他机器学习类的模型，本申请对此不作具体限定。LSTM是一种时间递归神经网络(RNN)，与普通的RNN相比，其可以用于解决长序列训练过程中的梯度消失和梯度爆炸问题，进而能够在更长的序列中有更好的表现。基于LSTM的系统可以学习翻译语言、控制机器人、图像分析、文档摘要、语音识别、图像识别、手写识别、控制聊天机器人、预测疾病、点击率和股票以及合成音乐等等任务。
S370,预测服务器基于测试集对多个预测模型和单个预测模型进行测试,得到第一预测误差和第二预测误差。
示例性地,该多个预测模型为两个并行运行的LSTM(Parallel-LSTM),该单个预测模型为增强的LSTM(Improved-LSTM),预测服务器接收结构化数据并对该结构化数据做预处理,然后基于该测试集对Parallel-LSTM和Improved-LSTM进行测试。其中,该第一预测误差为Parallel-LSTM的平均预测误差,其可记为ARMSE1,该第二预测误差为Improved-LSTM的预测误差,其可记为RMSE2。
对该多个预测模型和该单个预测模型执行完训练过程和测试过程后,在实际预测过程中,可以基于该多个预测模型对应的第一预测误差和该单个预测模型对应的第二预测误差,对多个参数进行预测。下面对预测过程进行说明。
如图3所示,该预测过程可包括:
S381，对多个参数进行预测时，预测服务器确定第一预测误差是否小于第二预测误差？
S382，若该第一预测误差小于该第二预测误差，则预测服务器确定第二预测误差是否小于预设误差？
示例性地，若该第一预测误差小于该第二预测误差，则说明该多个预测模型的预测准确度大于该单个预测模型的预测准确度，此外，该多个预测模型的预测成本大于该单个预测模型的预测成本；此时，预测服务器可进一步通过判断第二预测误差是否小于预设误差，确定用于对该多个参数进行预测的目标预测模型。
示例性地,该预设误差可以代表该多个参数的准确度需求。
S383，若该第一预测误差小于该第二预测误差，且该第二预测误差小于或等于该预设误差，则预测服务器将单个预测模型确定为目标预测模型。
示例性地,若该第二预测误差小于或等于预设误差,则说明该单个预测模型的预测准确度在一定程度上能够满足准确度需求,此时,将该单个预测模型确定为该目标预测模型,能够在保证该多个参数的预测准确度的基础上,控制该多个参数的预测成本。
S384，若该第一预测误差小于该第二预测误差，且该第二预测误差大于该预设误差，则预测服务器将该多个预测模型确定为目标预测模型。
示例性地,若该第二预测误差大于该预设误差,则说明该单个预测模型的预测准确度在一定程度上未必能够满足准确度需求,此时,将该多个预测模型确定为该目标预测模型,可以通过牺牲预测成本的方式,优先保证该多个参数的预测准确度。
当然,该多个参数的准确度需求也可以表现为其他形式的参数,本申请对此不作具体限定。
例如,在其他可替代实施例中,若该第二预测误差位于预设误差范围内,则将该单个预测模型确定为该目标预测模型;若该第二预测误差位于该预设误差范围之外,则将该多个预测模型确定为该目标预测模型。换言之,可以通过该预设误差范围代表该多个参数的准确度需求。具体地,若该第二预测误差位于预设误差范围内,则说明该单个预测模型的预测准确度在一定程度上能够满足准确度需求,此时,将该单个预测模型确定为该目标预测模型,能够在保证该多个参数的预测准确度的基础上,控制该多个参数的预测成本。若该第二预测误差位于该预设误差范围之外,则说明该单个预测模型的预测准确度在一定程度上未必能够满足准确度需求,此时,将该多个预测模型确定为该目标预测模型,可以通过牺牲预测成本的方式,优先保证该多个参数的预测准确度。
S385,若该第一预测误差大于或等于该第二预测误差,则预测服务器将单个预测模型确定为目标预测模型。
示例性地,若该第一预测误差大于或等于该第二预测误差,则说明该多个预测模型的预测准确度小于或等于该单个预测模型的预测准确度,此外,由于该多个预测模型的预测成本大于该单个预测模型的预测成本,因此,将该单个预测模型确定为该目标预测模型,与将该多个预测模型确定为该目标预测模型相比,能够在保证该预测准确度的基础上,控制该多个参数的预测成本。
值得注意的是，在S382中，本实施例引入了预设误差，作为第二预测误差的门限值。
假设将本实施例中的第一预测误差记为ARMSE1，将第二预测误差记为RMSE2，并将该预设误差记为阈值1。如果ARMSE1大于或等于RMSE2，则说明单个预测模型比多个预测模型性能好，即预测更准确，且单个预测模型相比于多个预测模型占用更小的内存，此时将单个预测模型作为目标预测模型，并将单个预测模型的结果作为后续的预测结果通知远控服务器和/或终端，采取相应的远控操作；如果ARMSE1小于RMSE2，此时需进一步判断RMSE2的值：如果RMSE2的值小于或等于阈值1，即多个预测模型和单个预测模型的预测误差均在可承受的门限范围内，出于节省内存和能耗的考虑，本实施例将单个预测模型作为目标预测模型，并将单个预测模型的结果作为后续的预测结果通知远控服务器和/或终端，采取相应的远控操作；如果RMSE2的值大于阈值1，本实施例将多个预测模型作为目标预测模型，并将多个预测模型的结果作为后续的预测结果通知远控服务器和/或终端，采取相应的远控操作。
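示例性地，S381~S385的判决流程可整理为如下草图（阈值1即上文的预设误差；函数名与返回值均为示意假设）：

```python
def choose_target_model(armse1: float, rmse2: float, threshold: float) -> str:
    """基于第一/第二预测误差与预设误差选择目标预测模型（示意实现）。

    armse1 >= rmse2: 单个预测模型更准且占用更小的内存, 直接选单个模型;
    armse1 <  rmse2 且 rmse2 <= threshold: 单模型误差在可承受门限内,
        出于节省内存和能耗的考虑仍选单个模型;
    否则: 通过牺牲预测成本, 选多个并行预测模型以优先保证准确度。
    """
    if armse1 >= rmse2:
        return "single_model"
    if rmse2 <= threshold:
        return "single_model"
    return "multiple_models"


print(choose_target_model(0.8, 0.5, 0.6))  # single_model, ARMSE1 >= RMSE2
print(choose_target_model(0.3, 0.5, 0.6))  # single_model, RMSE2 在门限内
print(choose_target_model(0.3, 0.9, 0.6))  # multiple_models
```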
S391,预测服务器将预测结果发送至远控服务器或远程驾驶舱。
S392,远控服务器和远程驾驶舱基于预测结果向终端设备发送控制指令。
示例性地,远控服务器或远程驾驶舱收到该预测结果后,基于该预测结果生成控制指令,并向终端设备发送该控制指令。例如,在远程控制场景中,远控服务器或远程驾驶舱可以通过下行链路(如5G-V2X链路)向终端设备(如远程驾驶车辆)发送控制指令。
示例性地,终端设备接收到远控服务器和远程驾驶舱发送的控制指令后,可以基于该控制指令控制终端设备执行相应的操作。例如,终端设备可以基于收到的控制指令,控制该终端设备执行以下中的至少一项:调整上行视频码率、降低远控速度、安全停靠以及退出远控操作等等。
S393，结束。
图4是本申请实施例提供的测试训练过程的示意性流程图。
如图4所示,该S350可包括:
S351,预测服务器将时间序列预测问题转换为监督学习问题,得到数据集。
示例性地,可以通过设置预测时间和滞后时间,将时间序列问题转化为监督学习的问题。
值得注意的是,时间序列是指在不同时间上统计某种指标或参数,并按时间先后顺序排列而形成的序列。监督学习是指:利用一组已知类别的样本调整分类器的参数,使其达到所要求性能的过程,监督学习也可称为监督训练或有教师学习。监督学习是从标记的训练数据来推断一个功能的机器学习任务。训练数据包括一套训练示例。在监督学习中,每个实例都是由一个输入对象(通常为矢量)和一个期望的输出值(也称为监督信号)组成。监督学习算法是分析该训练数据,并产生一个推断的功能,其可以用于映射出新的实例。一个最佳的方案将允许该算法来正确地决定那些看不见的实例的类标签。
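示例性地，将时间序列预测问题转换为监督学习问题，常见做法是以滞后窗口内的历史取值为输入、以未来取值为标签构造样本。以下为一个最小草图（lag/horizon分别对应上文的滞后时间与预测时间，均为示意参数）：

```python
def series_to_supervised(series, lag, horizon=1):
    """把一维时间序列转换为 (输入, 标签) 样本对（示意实现）。

    每个样本的输入为连续 lag 个历史取值,
    标签为该窗口之后第 horizon 步的取值。
    """
    samples = []
    for i in range(len(series) - lag - horizon + 1):
        x = series[i:i + lag]               # 滞后窗口
        y = series[i + lag + horizon - 1]   # 预测目标
        samples.append((x, y))
    return samples


data = [1, 2, 3, 4, 5, 6]
pairs = series_to_supervised(data, lag=3, horizon=1)
print(pairs[0])   # ([1, 2, 3], 4)
print(len(pairs)) # 3
```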
S352,预测服务器可将得到的数据集进行分割,得到训练集和测试集。
示例性地,预测服务器可对数据集(即样本集)进行预处理,包括对数据集的数据归一化操作以及分割(拆分训练集和测试集)操作等。预测服务器获取训练集和测试集后,可设置训练参数,并基于训练集和测试集对该多个预测模型和该单个预测模型进行训练。
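示例性地，上述预处理中的归一化与训练/测试集分割可草拟如下（最小-最大归一化与按比例顺序切分均为通用做法，比例0.8为示意取值）：

```python
def minmax_normalize(values):
    """最小-最大归一化到 [0, 1]（示意）; 取值区间退化时返回全 0。"""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def split(samples, train_ratio=0.8):
    """按比例切分训练集与测试集（示意, 保持时间顺序、不打乱）。"""
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]


norm = minmax_normalize([10, 20, 30, 40, 50])
train, test = split(list(range(10)), train_ratio=0.8)
print(norm[0], norm[-1])      # 0.0 1.0
print(len(train), len(test))  # 8 2
```

时间序列数据通常按时间顺序切分而不打乱，以免测试集信息泄露进训练集。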
下面结合图4对该多个预测模型和该单个预测模型的训练过程(即图3所示的S360)进行说明。
如图4所示,该S360可包括:
S361,预测服务器利用多个预测模型中的第一预测模型对时延和吞吐量进行预测,得到时延的第一预测值。
S362,预测服务器基于时延的第一预测值和时延的真实值,确定第一预测模型的预测误差。
S363,预测服务器利用多个预测模型中的第二预测模型对时延和吞吐量进行预测,得到吞吐量的第一预测值。
S364,预测服务器基于吞吐量的第一预测值和吞吐量的真实值,确定第二预测模型的预测误差。
S365，预测服务器基于第一预测模型的预测误差和第二预测模型的预测误差，训练第一预测模型和第二预测模型。
示例性地，针对S361~S364，假设该多个预测模型为两个并行的LSTM（Parallel-LSTM）网络，针对该两个并行的LSTM网络，利用两台中央处理器（Central Processing Unit，CPU）或图形处理器（graphics processing unit，GPU）并行运行该两个并行的LSTM，其输入为两组相同的结构化数据（即训练集），对于该两个并行的LSTM网络中的第一LSTM网络，其预测特征为时延，其输出为时延的第一预测值。对于该两个并行的LSTM网络中的第二LSTM网络，其预测特征为（上行）吞吐量，其输出为吞吐量的第一预测值；基于此，预测服务器基于时延的第一预测值和时延的真实值，确定第一LSTM网络的预测误差，并基于第一LSTM网络的预测误差训练该第一LSTM网络，例如直至该第一LSTM网络的预测误差收敛至预设阈值。类似的，预测服务器基于吞吐量的第一预测值和吞吐量的真实值，确定第二LSTM网络的预测误差，并基于第二LSTM网络的预测误差训练该第二LSTM网络，例如直至该第二LSTM网络的预测误差收敛至预设阈值。
S366,预测服务器对训练集进行差分运算,得到差分训练集。
S367,预测服务器利用单个预测模型对时延和吞吐量进行预测,得到时延的第二预测值和吞吐量的第二预测值。
S368,预测服务器基于时延的第二预测值、吞吐量的第二预测值、时延的真实值以及吞吐量的真实值,确定单个预测模型的预测误差。
S369，预测服务器基于单个预测模型的预测误差，训练该单个预测模型。
示例性地,针对S366~S369,假设该单个预测模型为增强的LSTM网络(Improved-LSTM),其输入结构化数据,其预测特征为(上行)吞吐量和时延,也即是说,该增强的LSTM网络的输出包括时延的第二预测值和吞吐量的第二预测值。基于此,预测服务器可基于时延的第二预测值、吞吐量的第二预测值、时延的真实值以及吞吐量的真实值,确定增强的LSTM网络的预测误差,并基于增强的LSTM网络的预测误差训练该增强的LSTM网络,例如直至该增强的LSTM网络的预测误差收敛至预设阈值。
值得注意的是，本实施例中，在训练该增强的LSTM网络过程中，可以先利用差分运算消除预测特征之间的相关性，避免预测结果出现严重的时间滞后。
下面结合图4对该多个预测模型和单个预测模型的测试过程(即图3所示的S370)进行说明。
如图4所示,该S370可包括:
S371,预测服务器利用第一预测模型对时延和吞吐量进行测试,得到时延的第一测试值。
S372,预测服务器基于时延的第一测试值和时延的真实值,确定第一预测模型的预测误差。
S373,预测服务器利用第二预测模型对时延和吞吐量进行测试,得到吞吐量的第一测试值。
S374,预测服务器基于吞吐量的第一测试值和吞吐量的真实值,确定第二预测模型的预测误差。
S375，预测服务器基于第一预测模型的预测误差和第二预测模型的预测误差，确定第一预测误差。
S376,预测服务器对测试集进行差分运算,得到差分测试集。
S377,预测服务器利用单个预测模型对时延和吞吐量进行预测,得到时延的第二测试值和吞吐量的第二测试值。
S378,预测服务器基于时延的第二测试值、吞吐量的第二测试值、时延的真实值以及吞吐量的真实值,确定第二预测误差。
示例性地，在S371~S378中，假设将该第一预测模型的预测误差记为ARMSE_1，将第二预测模型的预测误差记为ARMSE_2，则可以确定该第一预测误差为ARMSE(=(ARMSE_1+ARMSE_2)/2)。
示例性地，该第二预测误差为RMSE2。
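示例性地，第一预测误差由两个专用预测模型在测试集上的误差取算术平均得到，可草拟如下（其中的误差数值为示意假设）：

```python
def average_rmse(rmse_values):
    """ARMSE = 各专用预测模型 RMSE 的算术平均（示意实现）。"""
    return sum(rmse_values) / len(rmse_values)


# 第一/第二预测模型在测试集上的误差, 数值仅为示意
armse_1, armse_2 = 0.4, 0.6
armse = average_rmse([armse_1, armse_2])
print(armse)  # 0.5
```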
以上结合附图详细描述了本申请的优选实施方式，但是，本申请并不限于上述实施方式中的具体细节，在本申请的技术构思范围内，可以对本申请的技术方案进行多种简单变型，这些简单变型均属于本申请的保护范围。例如，在上述具体实施方式中所描述的各个具体技术特征，在不矛盾的情况下，可以通过任何合适的方式进行组合，为了避免不必要的重复，本申请对各种可能的组合方式不再另行说明。又例如，本申请的各种不同的实施方式之间也可以进行任意组合，只要其不违背本申请的思想，其同样应当视为本申请所公开的内容。
还应理解,在本申请的各种方法实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
上文对本申请实施例提供的方法进行了说明,下面对本申请实施例提供的装置进行说明。
图5是本申请实施例提供的预测服务器400的示意性框图。
如图5所示,该预测服务器400可包括:
获取单元410,用于获取用于对多个参数进行预测的输入参数;
确定单元420,用于基于并行运行的多个预测模型对应的第一预测误差和单独运行的单个预测模型对应的第二预测误差,确定用于对该多个参数进行预测的目标预测模型;
其中,该多个预测模型和该多个参数一一对应,该多个预测模型中的每一个预测模型用于对该每一个预测模型对应的参数进行预测,该单个预测模型用于对该多个参数进行一次性预测;
预测单元430,用于基于该输入参数和该目标预测模型,对该多个参数进行预测。
在一些实施例中,该确定单元420具体用于:
若该第一预测误差大于或等于该第二预测误差,则将该单个预测模型确定为该目标预测模型;
若该第一预测误差小于该第二预测误差,则将该多个预测模型确定为该目标预测模型或将该单个预测模型确定为该目标预测模型。
在一些实施例中,该确定单元420具体用于:
若该第一预测误差小于该第二预测误差,则基于该第二预测误差,将该多个预测模型确定为该目标预测模型或将该单个预测模型确定为该目标预测模型。
在一些实施例中,该确定单元420具体用于:
若该第二预测误差小于或等于预设误差,则将该单个预测模型确定为该目标预测模型;若该第二预测误差大于该预设误差,则将该多个预测模型确定为该目标预测模型。
在一些实施例中,该确定单元420具体用于:
若该第一预测误差小于该第二预测误差,则基于该多个参数的应用场景,将该多个预测模型确定为该目标预测模型或将该单个预测模型确定为该目标预测模型。
在一些实施例中,该确定单元420具体用于:
若该多个参数的应用场景为终端设备应用该多个参数的场景,则将该多个预测模型确定为该目标预测模型;若该多个参数的应用场景为预测服务器应用该多个参数的场景,则将该单个预测模型确定为该目标预测模型。
在一些实施例中,该获取单元410具体用于:
获取终端设备的结构化数据;
其中,该结构化数据包括m个历史时刻和该m个历史时刻中的每一个历史时刻的参数集合;该参数集合包括所述多个参数,m为大于0的整数;
获取位于该多个参数对应的时刻之前的且间隔时长大于或等于预设时长的第一历史时刻;
基于该m个历史时刻,以该第一历史时刻为起始时刻,按照时间顺序向前选择n个历史时刻;n为大于0且小于或等于m的整数;
基于该结构化数据,获取该n个历史时刻对应的n个参数集合;
基于该目标预测模型和该n个参数集合,确定该输入参数。
在一些实施例中,该获取单元410具体用于:
若该目标预测模型为该单个预测模型,则获取位于该第一历史时刻之前的第二历史时刻;基于该m个历史时刻,以该第二历史时刻为起始时刻,按照时间顺序向前选择k个历史时刻;k为大于0且小于或等于m的整数;基于该结构化数据,获取该k个历史时刻对应的k个参数集合;对该n个参数集合和该k个参数集合进行差分运算,得到该输入参数。
在一些实施例中,该获取单元410具体用于:
若该目标预测模型为该多个预测模型,则将该n个参数集合,确定为该输入参数。
在一些实施例中，该获取单元410获取用于对多个参数进行预测的输入参数之前，还用于：
获取训练数据集和测试数据集;
基于该训练数据集训练该多个预测模型和该单个预测模型;
基于该测试数据集对该多个预测模型中的每一个预测模型进行测试,得到多个预测误差;
基于该多个预测误差,得到该第一预测误差;
基于该测试数据集对该单个预测模型进行测试,得到该第二预测误差。
应理解,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。具体地,预测服务器400可以对应于执行本申请实施例的方法200~300中的相应主体,并且预测服务器400中的各个单元分别为了实现方法200~300中的相应流程,为了简洁,在此不再赘述。
还应当理解,本申请实施例涉及的预测服务器400中的各个单元可以分别或全部合并为一个或若干个另外的单元来构成,或者其中的某个(些)单元还可以再拆分为功能上更小的多个单元来构成,这可以实现同样的操作,而不影响本申请的实施例的技术效果的实现。上述单元是基于逻辑功能划分的,在实际应用中,一个单元的功能也可以由多个单元来实现,或者多个单元的功能由一个单元实现。在本申请的其它实施例中,该预测服务器400也可以包括其它单元,在实际应用中,这些功能也可以由其它单元协助实现,并且可以由多个单元协作实现。根据本申请的另一个实施例,可以通过在包括例如中央处理单元(Central Processing Unit,CPU)、图形处理器(graphics processing unit,GPU)、随机存取存储介质(RAM)、只读存储介质(ROM)等处理元件和存储元件的通用计算机的通用计算设备上运行能够执行相应方法所涉及的各步骤的计算机程序(包括程序代码),来构造本申请实施例涉及的预测服务器400,以及来实现本申请实施例的参数预测方法。计算机程序可以记载于例如计算机可读存储介质上,并通过计算机可读存储介质装载于电子设备中,并在其中运行,来实现本申请实施例的相应方法。
换言之,上文涉及的单元可以通过硬件形式实现,也可以通过软件形式的指令实现,还可以通过软硬件结合的形式实现。具体地,本申请实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路和/或软件形式的指令完成,结合本申请实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件组合执行完成。可选地,软件可以位于随机存储器,闪存、只读存储器、可编程只读存储器、电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法实施例中的步骤。
图6是本申请实施例提供的电子设备500的示意结构图。
如图6所示，该电子设备500至少包括处理器510以及计算机可读存储介质520。其中，处理器510以及计算机可读存储介质520可通过总线或者其它方式连接。计算机可读存储介质520用于存储计算机程序521，计算机程序521包括计算机指令，处理器510用于执行计算机可读存储介质520存储的计算机指令。处理器510是电子设备500的计算核心以及控制核心，其适于实现一条或多条计算机指令，具体适于加载并执行一条或多条计算机指令从而实现相应方法流程或相应功能。
作为示例,处理器510也可称为中央处理器(Central Processing Unit,CPU)或图形处理器(graphics processing unit,GPU)。处理器510可以包括但不限于:通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等等。
作为示例，计算机可读存储介质520可以是高速RAM存储器，也可以是非易失性存储器（Non-Volatile Memory），例如至少一个磁盘存储器；可选的，还可以是至少一个位于远离前述处理器510的计算机可读存储介质。具体而言，计算机可读存储介质520包括但不限于：易失性存储器和/或非易失性存储器。其中，非易失性存储器可以是只读存储器（Read-Only Memory，ROM）、可编程只读存储器（Programmable ROM，PROM）、可擦除可编程只读存储器（Erasable PROM，EPROM）、电可擦除可编程只读存储器（Electrically EPROM，EEPROM）或闪存。易失性存储器可以是随机存取存储器（Random Access Memory，RAM），其用作外部高速缓存。通过示例性但不是限制性说明，许多形式的RAM可用，例如静态随机存取存储器（Static RAM，SRAM）、动态随机存取存储器（Dynamic RAM，DRAM）、同步动态随机存取存储器（Synchronous DRAM，SDRAM）、双倍数据速率同步动态随机存取存储器（Double Data Rate SDRAM，DDR SDRAM）、增强型同步动态随机存取存储器（Enhanced SDRAM，ESDRAM）、同步连接动态随机存取存储器（synch link DRAM，SLDRAM）和直接内存总线随机存取存储器（Direct Rambus RAM，DR RAM）。
如图6所示,该电子设备500还可以包括收发器530。
其中,处理器510可以控制该收发器530与其他设备进行通信,具体地,可以向其他设备发送信息或数据,或接收其他设备发送的信息或数据。收发器530可以包括发射机和接收机。收发器530还可以进一步包括天线,天线的数量可以为一个或多个。
应当理解，该电子设备500中的各个组件通过总线系统相连，其中，总线系统除包括数据总线之外，还包括电源总线、控制总线和状态信号总线。
在一种实现方式中,该电子设备500可以是任一具有数据处理能力的电子设备;该计算机可读存储介质520中存储有第一计算机指令;由处理器510加载并执行计算机可读存储介质520中存放的第一计算机指令,以实现图1所示方法实施例中的相应步骤;具体实现中,计算机可读存储介质520中的第一计算机指令由处理器510加载并执行相应步骤,为避免重复,此处不再赘述。
根据本申请的另一方面,本申请实施例还提供了一种计算机可读存储介质(Memory),计算机可读存储介质是电子设备500中的记忆设备,用于存放程序和数据。例如,计算机可读存储介质520。可以理解的是,此处的计算机可读存储介质520既可以包括电子设备500中的内置存储介质,当然也可以包括电子设备500所支持的扩展存储介质。计算机可读存储介质提供存储空间,该存储空间存储了电子设备500的操作系统。并且,在该存储空间中还存放了适于被处理器510加载并执行的一条或多条的计算机指令,这些计算机指令可以是一个或多个的计算机程序521(包括程序代码)。
根据本申请的另一方面,本申请实施例还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。例如,计算机程序521。此时,数据处理设备500可以是计算机,处理器510从计算机可读存储介质520读取该计算机指令,处理器510执行该计算机指令,使得该计算机执行上述各种可选方式中提供的参数预测方法。
换言之，当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时，全部或部分地运行本申请实施例的流程或实现本申请实施例的功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质进行传输，例如，该计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线（digital subscriber line，DSL））或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心进行传输。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元以及流程步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
最后需要说明的是,以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (15)

  1. 一种参数预测方法,其特征在于,包括:
    获取用于对多个参数进行预测的输入参数;
    基于并行运行的多个预测模型对应的第一预测误差和单独运行的单个预测模型对应的第二预测误差,确定用于对所述多个参数进行预测的目标预测模型;
    其中,所述多个预测模型和所述多个参数一一对应,所述多个预测模型中的每一个预测模型用于对所述每一个预测模型对应的参数进行预测,所述单个预测模型用于对所述多个参数进行一次性预测;
    基于所述输入参数和所述目标预测模型,对所述多个参数进行预测。
  2. 根据权利要求1所述的方法,其特征在于,所述基于并行运行的多个预测模型对应的第一预测误差和单独运行的单个预测模型对应的第二预测误差,确定用于对所述多个参数进行预测的目标预测模型,包括:
    若所述第一预测误差大于或等于所述第二预测误差,则将所述单个预测模型确定为所述目标预测模型;
    若所述第一预测误差小于所述第二预测误差,则将所述多个预测模型确定为所述目标预测模型或将所述单个预测模型确定为所述目标预测模型。
  3. 根据权利要求1所述的方法,其特征在于,所述基于并行运行的多个预测模型对应的第一预测误差和单独运行的单个预测模型对应的第二预测误差,确定用于对所述多个参数进行预测的目标预测模型,包括:
    若所述第一预测误差小于所述第二预测误差,则基于所述第二预测误差,将所述多个预测模型确定为所述目标预测模型或将所述单个预测模型确定为所述目标预测模型。
  4. 根据权利要求3所述的方法,其特征在于,所述基于所述第二预测误差,将所述多个预测模型确定为所述目标预测模型或将所述单个预测模型确定为所述目标预测模型,包括:
    若所述第二预测误差小于或等于预设误差,则将所述单个预测模型确定为所述目标预测模型;若所述第二预测误差大于所述预设误差,则将所述多个预测模型确定为所述目标预测模型。
  5. 根据权利要求1所述的方法,其特征在于,所述基于并行运行的多个预测模型对应的第一预测误差和单独运行的单个预测模型对应的第二预测误差,确定用于对所述多个参数进行预测的目标预测模型,包括:
    若所述第一预测误差小于所述第二预测误差,则基于所述多个参数的应用场景,将所述多个预测模型确定为所述目标预测模型或将所述单个预测模型确定为所述目标预测模型。
  6. 根据权利要求5所述的方法,其特征在于,所述基于所述多个参数的应用场景,将所述多个预测模型确定为所述目标预测模型或将所述单个预测模型确定为所述目标预测模型,包括:
    若所述多个参数的应用场景为终端设备应用所述多个参数的场景,则将所述多个预测模型确定为所述目标预测模型;若所述多个参数的应用场景为预测服务器应用所述多个参数的场景,则将所述单个预测模型确定为所述目标预测模型。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述获取用于对多个参数进行预测的输入参数,包括:
    获取终端设备的结构化数据;
    其中,所述结构化数据包括m个历史时刻和所述m个历史时刻中的每一个历史时刻的参数集合;所述参数集合包括所述多个参数,m为大于0的整数;
    获取位于所述多个参数对应的时刻之前的且间隔时长大于或等于预设时长的第一历史时刻；
    基于所述m个历史时刻,以所述第一历史时刻为起始时刻,按照时间顺序向前选择n个历史时刻;n为大于0且小于或等于m的整数;
    基于所述结构化数据,获取所述n个历史时刻对应的n个参数集合;
    基于所述目标预测模型和所述n个参数集合,确定所述输入参数。
  8. 根据权利要求7所述的方法,其特征在于,所述基于所述目标预测模型和所述n个参数集合,确定所述输入参数,包括:
    若所述目标预测模型为所述单个预测模型,则获取位于所述第一历史时刻之前的第二历史时刻;
    基于所述m个历史时刻,以所述第二历史时刻为起始时刻,按照时间顺序向前选择k个历史时刻;k为大于0且小于或等于m的整数;
    基于所述结构化数据,获取所述k个历史时刻对应的k个参数集合;
    对所述n个参数集合和所述k个参数集合进行差分运算,得到所述输入参数。
  9. 根据权利要求7所述的方法,其特征在于,所述基于所述目标预测模型和所述n个参数集合,确定所述输入参数,包括:
    若所述目标预测模型为所述多个预测模型,则将所述n个参数集合,确定为所述输入参数。
  10. 根据权利要求1至9中任一项所述的方法,其特征在于,所述获取用于对多个参数进行预测的输入参数之前,所述方法还包括:
    获取训练数据集和测试数据集;
    基于所述训练数据集训练所述多个预测模型和所述单个预测模型;
    基于所述测试数据集对所述多个预测模型中的每一个预测模型进行测试,得到多个预测误差;
    基于所述多个预测误差,得到所述第一预测误差;
    基于所述测试数据集对所述单个预测模型进行测试,得到所述第二预测误差。
  11. 一种预测服务器,其特征在于,包括:
    获取单元,用于获取用于对多个参数进行预测的输入参数;
    确定单元,用于基于并行运行的多个预测模型对应的第一预测误差和单独运行的单个预测模型对应的第二预测误差,确定用于对所述多个参数进行预测的目标预测模型;
    其中,所述多个预测模型和所述多个参数一一对应,所述多个预测模型中的每一个预测模型用于对所述每一个预测模型对应的参数进行预测,所述单个预测模型用于对所述多个参数进行一次性预测;
    预测单元,用于基于所述输入参数和所述目标预测模型,对所述多个参数进行预测。
  12. 一种预测系统,其特征在于,所述预测系统包括:终端设备、网络设备、核心网设备、预测服务器、远控服务器;其中,所述核心网设备将所述网络设备或所述终端设备发送的采集数据处理为结构化数据,并将结构化数据发送给所述预测服务器,所述预测服务器基于所述核心网设备发送的结构化数据执行如权利要求1至10中任一项所述的参数预测方法,并将预测结果发送至所述远控服务器或所述终端设备。
  13. 一种电子设备,其特征在于,包括:
    处理器,适于执行计算机程序;
    计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,所述计算机程序被所述处理器执行时,实现如权利要求1至10中任一项所述的参数预测方法。
  14. 一种计算机可读存储介质,其特征在于,包括指令,当所述指令在计算机上运行时,使得所述计算机执行如权利要求1至10中任一项所述的参数预测方法。
  15. 一种计算机程序产品,包括指令,其特征在于,当所述指令在计算机上运行时, 使得所述计算机执行如权利要求1至10中任一项所述的参数预测方法。
PCT/CN2023/079633 2022-06-08 2023-03-03 参数预测方法、预测服务器、预测系统及电子设备 WO2023236601A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210647581.2 2022-06-08
CN202210647581.2A CN117273182A (zh) 2022-06-08 2022-06-08 参数预测方法、预测服务器、预测系统及电子设备

Publications (1)

Publication Number Publication Date
WO2023236601A1 true WO2023236601A1 (zh) 2023-12-14

Family

ID=89117506

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/079633 WO2023236601A1 (zh) 2022-06-08 2023-03-03 参数预测方法、预测服务器、预测系统及电子设备

Country Status (2)

Country Link
CN (1) CN117273182A (zh)
WO (1) WO2023236601A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117494908A (zh) * 2023-12-29 2024-02-02 宁波港信息通信有限公司 基于大数据的港口货物吞吐量预测方法及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170364614A1 (en) * 2016-06-16 2017-12-21 International Business Machines Corporation Adaptive forecasting of time-series
CN111027776A (zh) * 2019-12-13 2020-04-17 北京华展汇元信息技术有限公司 一种基于改进型长短期记忆lstm神经网络的污水处理水质预测方法
CN111198808A (zh) * 2019-12-25 2020-05-26 东软集团股份有限公司 预测性能指标的方法、装置、存储介质及电子设备
CN111639798A (zh) * 2020-05-26 2020-09-08 华青融天(北京)软件股份有限公司 智能的预测模型选择方法及装置
CN113392359A (zh) * 2021-08-18 2021-09-14 腾讯科技(深圳)有限公司 多目标预测方法、装置、设备及存储介质
CN113569460A (zh) * 2021-06-08 2021-10-29 北京科技大学 实车燃料电池系统状态多参数预测方法及装置
CN113869521A (zh) * 2020-06-30 2021-12-31 华为技术有限公司 构建预测模型的方法、装置、计算设备和存储介质


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117494908A (zh) * 2023-12-29 2024-02-02 宁波港信息通信有限公司 基于大数据的港口货物吞吐量预测方法及系统
CN117494908B (zh) * 2023-12-29 2024-03-22 宁波港信息通信有限公司 基于大数据的港口货物吞吐量预测方法及系统

Also Published As

Publication number Publication date
CN117273182A (zh) 2023-12-22

