CN116668210A - Method, system and equipment for adjusting running state of network equipment - Google Patents

Info

Publication number
CN116668210A
CN116668210A (application CN202210159693.3A)
Authority
CN
China
Prior art keywords
network device
prediction model
throughput
transmission performance
energy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210159693.3A
Other languages
Chinese (zh)
Inventor
乔羽
吴俊�
张亮
徐晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210159693.3A
Publication of CN116668210A
Legal status: Pending

Classifications

    • H04L 12/12 — Data switching networks; arrangements for remote connection or disconnection of substations or of equipment thereof
    • H04L 41/0893 — Configuration management of networks or network elements; assignment of logical groups to network elements
    • H04L 43/0829 — Monitoring or testing based on specific metrics; packet loss
    • H04L 43/0852 — Monitoring or testing based on specific metrics; delays
    • H04L 43/087 — Monitoring or testing based on specific metrics; jitter
    • H04L 43/16 — Threshold monitoring
    • Y02D 30/70 — Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Small-Scale Networks (AREA)

Abstract

The application discloses a method for adjusting the running state of a network device, and relates to the field of communications technologies. A first network device receives a performance threshold sent by a second network device. The performance threshold indicates performance conditions that the first network device needs to satisfy at runtime and includes a throughput threshold and a transmission performance threshold. The first network device determines the energy consumption, throughput, and transmission performance information corresponding to each of a plurality of energy-saving strategies. Based on the performance threshold and on the energy consumption, throughput, and transmission performance information corresponding to each strategy, the first network device determines a target energy-saving strategy, namely the strategy that satisfies the performance threshold with the minimum energy consumption. The first network device then adjusts its running state according to the target energy-saving strategy.

Description

Method, system and equipment for adjusting running state of network equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method, a system, and an apparatus for adjusting an operation state of a network device.
Background
With the continuous development of network technology, demands for network services are rapidly increasing, so that the number and size of network devices providing network services are rapidly increasing. The expansion of the network size results in an increase in network energy consumption, which not only increases the cost of network operation, but also generates a large amount of carbon emissions.
Disclosure of Invention
The application provides a method, a system, and a device for adjusting the running state of a network device, which adjust the running parameters of the network device so as to reduce its energy consumption while guaranteeing its performance.
In a first aspect, the present application provides a method for adjusting the running state of a network device. A first network device receives a performance threshold sent by a second network device. The performance threshold includes a throughput threshold and a transmission performance threshold, where the transmission performance threshold includes one or more of a delay threshold, a jitter threshold, and a packet loss threshold. The first network device determines the energy consumption, throughput, and transmission performance information corresponding to each of a plurality of energy-saving strategies. The transmission performance information includes one or more of delay, jitter, and packet loss. Each energy-saving strategy includes device configuration parameters corresponding to the devices in the first network device. The first network device determines a target energy-saving strategy according to the performance threshold and the energy consumption, throughput, and transmission performance information corresponding to each energy-saving strategy. The target energy-saving strategy is the strategy that satisfies a preset condition and has the minimum energy consumption. The preset condition includes one or more of: the delay indicated by the transmission performance information is less than or equal to the delay threshold; the jitter indicated by the transmission performance information is less than or equal to the jitter threshold; the packet loss indicated by the transmission performance information is less than or equal to the packet loss threshold; and the throughput is greater than or equal to the throughput threshold. The first network device adjusts its running state according to the target energy-saving strategy.
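The selection rule of the first aspect — among the strategies whose predicted performance satisfies every threshold, pick the one with minimum energy consumption — can be sketched as follows. This is an illustrative sketch, not code from the patent; all function and field names (`select_target_strategy`, `energy`, `throughput`, and so on) are hypothetical.

```python
def select_target_strategy(strategies, thresholds):
    """strategies: list of dicts with predicted 'energy', 'throughput',
    'delay', 'jitter', 'packet_loss' for each energy-saving strategy;
    thresholds: dict with the corresponding limit for each metric."""
    # keep only the strategies whose predicted performance meets every threshold
    feasible = [
        s for s in strategies
        if s["throughput"] >= thresholds["throughput"]
        and s["delay"] <= thresholds["delay"]
        and s["jitter"] <= thresholds["jitter"]
        and s["packet_loss"] <= thresholds["packet_loss"]
    ]
    if not feasible:
        return None  # no strategy satisfies the performance threshold
    # target strategy: minimum energy consumption among feasible strategies
    return min(feasible, key=lambda s: s["energy"])
```

A strategy that misses even one threshold is excluded before the energy comparison, which matches the "satisfies the preset condition and has the minimum energy consumption" formulation above.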
In this scheme, the first network device optimizes over the candidate energy-saving strategies according to the performance threshold issued by the second network device and the performance and energy consumption corresponding to each strategy, so that the selected strategy both satisfies the performance threshold and consumes the least energy. The first network device then adjusts its running state according to that strategy, meeting the performance requirement while reducing energy consumption.
In one possible implementation, the first network device sends first traffic information to the second network device, the first traffic information indicating a value of traffic handled by the first network device during a first time period, the first traffic information being used to determine a throughput threshold.
In one possible implementation, for each energy saving policy, the first network device inputs the energy saving policy into the energy consumption prediction model to obtain the energy consumption corresponding to the energy saving policy output by the energy consumption prediction model. The energy consumption prediction model corresponds to a device type of the first network device. The energy consumption prediction model is generated according to the energy consumption training sample. Each energy consumption training sample comprises an energy consumption value and a device configuration parameter corresponding to the energy consumption value.
In one possible implementation, for each energy-saving policy, the first network device inputs the energy-saving policy into a throughput prediction model to obtain the throughput corresponding to the energy-saving policy output by the throughput prediction model. The throughput prediction model corresponds to the device type of the first network device and is generated from throughput training samples. Each throughput training sample includes a throughput and the device configuration parameters corresponding to that throughput.
In one possible implementation, for each energy saving policy, the first network device inputs the energy saving policy into a transmission performance prediction model to obtain transmission performance information corresponding to the energy saving policy output by the transmission performance prediction model. The transmission performance prediction model corresponds to a device type of the first network device. The transmission performance prediction model is generated from transmission performance training samples. Each transmission performance training sample comprises transmission performance information, a device configuration parameter corresponding to the transmission performance information and throughput corresponding to the transmission performance information. The transmission performance prediction model comprises one or more of a delay prediction model, a jitter prediction model and a packet loss prediction model.
In one possible implementation, when the first network device has a local constraint, the target energy-saving policy satisfies the local constraint in addition to the preset condition. The local constraint indicates that one or more devices in the first network device operate according to preset parameters and/or that the switching states of a plurality of devices of the first network device remain consistent. In this implementation, when determining the target energy-saving policy, the first network device additionally considers the local constraint, so that the determined policy satisfies both the preset condition and the local constraint.
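The two kinds of local constraint described in this implementation — devices pinned to preset parameters, and groups of devices whose switching states must stay consistent — could be checked as in the following sketch. The per-device representation (dicts with an `on` flag) is a hypothetical simplification, not taken from the patent.

```python
def satisfies_local_constraints(strategy, pinned_params, linked_groups):
    """strategy: {device: {"on": bool, ...}} for one energy-saving policy;
    pinned_params: devices that must operate with exactly these preset
    parameters; linked_groups: lists of devices whose on/off switching
    states must remain consistent."""
    # constraint 1: certain devices must run with preset parameters
    for device, preset in pinned_params.items():
        if strategy.get(device) != preset:
            return False
    # constraint 2: devices in a linked group must share one switching state
    for group in linked_groups:
        states = {strategy[device]["on"] for device in group}
        if len(states) > 1:
            return False
    return True
```

In this sketch a candidate strategy would be discarded before the energy comparison if either check fails, which is one way to make the target policy satisfy both the preset condition and the local constraint.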
In a second aspect, the present application provides a method for adjusting an operation state of a network device. The second network device receives the first traffic information sent by the first network device. The first traffic information indicates a value of traffic processed by the first network device during a first time period. The second network device predicts second flow information corresponding to the first network device in a second time period according to the first flow information. The second period of time is later than the first period of time. The second network device obtains a transmission performance threshold and determines a throughput threshold based on the second traffic information. The transmission performance threshold includes one or more of a delay threshold, a jitter threshold, and a packet loss threshold. The second network device sends the throughput threshold and the transmission performance threshold to the first network device, so that the first network device adjusts the running state of the first network device according to the throughput threshold and the transmission performance threshold.
In one possible implementation, the second network device inputs the first traffic information into a traffic prediction model to obtain second traffic information output by the traffic prediction model. The traffic prediction model is generated based on historical traffic information training of the first network device.
In one possible implementation, the second network device receives the traffic prediction model described above. In this implementation manner, the second network device may send the historical traffic information of the first network device to the cloud device, and the cloud device trains and generates a traffic prediction model by using the historical traffic information, and sends the traffic prediction model to the second network device.
In one possible implementation manner, a second network device receives the throughput prediction model, the energy consumption prediction model, and the transmission performance prediction model sent by the cloud device, and sends the throughput prediction model, the energy consumption prediction model, and the transmission performance prediction model to the first network device.
In one possible implementation, the throughput prediction model, the energy consumption prediction model or the transmission performance prediction model corresponds to a device type of the first network device. The energy consumption prediction model is generated according to the energy consumption training sample. Each energy consumption training sample comprises an energy consumption value and a device configuration parameter corresponding to the energy consumption value. The throughput prediction model is generated from throughput training samples. Each throughput training sample includes a throughput and a device configuration parameter corresponding to the throughput. The transmission performance prediction model is generated from the transmission performance training samples. Each transmission performance training sample comprises transmission performance information, a device configuration parameter corresponding to the transmission performance information and throughput corresponding to the transmission performance information.
In a third aspect, the present application provides a network system. The system includes a first network device and a second network device. The first network device is configured to perform the method according to the first aspect or any implementation manner of the first aspect. The second network device is configured to perform the method according to the second aspect or any implementation of the second aspect.
In one possible implementation, the system further includes a cloud device. The cloud device is used for receiving statistical information sent by the second network device. The statistical information includes historical traffic information of network devices managed by the second network device. The cloud device is further used for training and generating a flow prediction model by utilizing the historical flow information, and sending the flow prediction model to the second network device.
In one possible implementation, the cloud device is further configured to send the throughput prediction model, the energy consumption prediction model, or the transmission performance prediction model to the second network device.
In a fourth aspect, the present application provides a network device. The network device includes a processor and a memory. The memory is used to store instructions or computer programs. The processor is configured to execute instructions or a computer program in the memory to cause the network device to perform the method according to the first aspect or any implementation of the first aspect or to perform the method according to the second aspect or any implementation of the second aspect.
In a fifth aspect, the present application provides a computer-readable storage medium. The computer-readable storage medium includes instructions. The instructions, when executed on a computer, cause the computer to perform the method as described above for any one of the first aspects or implementations of the first aspect or to perform the method as described for any one of the second aspects or implementations of the second aspect.
In a sixth aspect, the present application provides a computer program product. The computer program product, when run on a computer, causes the computer to perform the method according to the first aspect or any implementation of the first aspect or to perform the method according to the second aspect or any implementation of the second aspect.
Drawings
Fig. 1 is a flowchart of a method for adjusting an operation state of a network device according to an embodiment of the present application;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an adjusting device for an operation state of a network device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another device for adjusting an operation state of a network device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another network device according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solution of the present application is described below clearly and completely with reference to the accompanying drawings of the embodiments. The described embodiments are only some, not all, of the embodiments of the present application.
With the continuous expansion of network construction, the rise in operating costs caused by growing network energy consumption has become a pressing problem for operators. The power consumption of a network device is related to the configuration parameters of the devices within it. However, these configuration parameters are typically kept at high-performance settings, so the network device continuously generates high power consumption.
In view of the above, the present application provides a method for adjusting the running state of a network device, so that a first network device can determine an energy-saving strategy that meets the performance requirement and adjust its running state accordingly, thereby reducing the energy consumption of the first network device while guaranteeing its processing performance.
In order to facilitate understanding of the technical solution provided by the embodiments of the present application, the following description will be given with reference to the accompanying drawings.
Referring to fig. 1, the flowchart of a method for adjusting an operation state of a network device according to an embodiment of the present application is shown in fig. 1, where the method includes:
s101: the first network device transmits first traffic information to the second network device, the first traffic information indicating a value of traffic handled by the first network device during a first time period.
In this embodiment, the first network device may collect the traffic information processed in the first time period, that is, the first traffic information, and send it to the second network device. Specifically, the first traffic information may include the transmission rate and/or reception rate of the first network device in the first time period, or the amount of data received and/or transmitted by the first network device in the first time period. The transmission rate may be a rate statistic such as an average or maximum transmission rate. The data amount may be a number of bits, bytes, messages, and so on.
In a specific implementation, the first traffic information sent by the first network device may be traffic information at the device level, the board level, or the interface level. When the first traffic information includes the traffic information of a specific device, the first network device may make a targeted parameter adjustment for that device based on its traffic information, making the adjustment more accurate.
S102: The second network device receives the first traffic information and, according to it, predicts second traffic information corresponding to the first network device in a second time period, where the second time period is later than the first time period.
After receiving the first traffic information sent by the first network device, the second network device performs traffic prediction to predict the second traffic information corresponding to the first network device in the second time period. The second time period is later than the first time period; that is, the second network device predicts the trend of future traffic from the traffic information of a historical period. Specifically, the second network device may input the first traffic information into a traffic prediction model and obtain the second traffic information output by the model. The traffic prediction model may be a preset model; for example, the second network device may receive a traffic prediction model trained by another network device or configured by an administrator. The traffic prediction model may also be pre-trained by the second network device based on the historical traffic information of the first network device, that is, the traffic values processed by the first network device during different periods within a historical time range. The traffic prediction models correspond one-to-one with first network devices: when there are multiple first network devices in the network, the traffic prediction model corresponding to each may differ.
The traffic prediction model may be a regression model, a neural network model, or the like. When it is a neural network model, the second network device takes the acquired historical traffic information as training samples; each training sample includes a time and a traffic value, with the traffic value serving as the label. In each round of iterative training, a set of training samples is input into the neural network model, and the model outputs an inference result for those samples. The loss value between the inference results and the actual results (the labels) is then calculated via the corresponding loss function, and from the loss value the change gradient of the parameters in each network layer of the model is computed. Based on the preset hyperparameters in the optimizer and the change gradients of the parameters, an adjustment value (also called a parameter update amount) is calculated for each parameter in that round of iterative training; the adjustment value may be, for example, the product of the change gradient and a hyperparameter such as the learning rate. The parameter values are then updated based on the calculated adjustment values. After multiple rounds of training, training stops when the loss value falls below a preset threshold, yielding the traffic prediction model.
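The iterative training procedure described above (forward pass, loss computation, gradients, parameter update equal to gradient times learning rate, stopping once the loss falls below a preset threshold) can be illustrated with a deliberately simplified stand-in: plain gradient descent on a linear model instead of a full neural network. All names are hypothetical, and a real traffic predictor would use a proper model and optimizer.

```python
def train_traffic_model(samples, lr=0.01, loss_threshold=1e-4, max_iters=10000):
    """samples: list of (time, traffic_value) pairs; the traffic value is
    the label. Fits traffic ~ w * time + b by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(max_iters):
        # forward pass: inference result for each training sample
        preds = [w * t + b for t, _ in samples]
        # mean-squared-error loss between inference results and labels
        errs = [p - y for p, (_, y) in zip(preds, samples)]
        loss = sum(e * e for e in errs) / len(samples)
        if loss < loss_threshold:
            break  # stop once the loss falls below the preset threshold
        # change gradients of the loss with respect to each parameter
        gw = 2 * sum(e * t for e, (t, _) in zip(errs, samples)) / len(samples)
        gb = 2 * sum(errs) / len(samples)
        # adjustment value = change gradient x learning rate (hyperparameter)
        w -= lr * gw
        b -= lr * gb
    return w, b
```

On synthetic data generated from a known line, the fitted parameters recover the line, mirroring how repeated rounds of update drive the loss under the threshold.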
The traffic prediction model may be generated by the second network device through training on the statistical information of the first network device, or may be generated by the cloud device from that statistical information. When the model is trained and generated by the cloud device, the second network device sends the statistical information, which includes the historical traffic information of the first network device, to the cloud device; the cloud device trains and generates the traffic prediction model from the historical traffic information and sends it to the second network device.
S103: the second network device obtains a transmission performance threshold and determines a throughput threshold based on the second traffic information.
After predicting the second traffic information corresponding to the first network device in the second time period, the second network device determines a throughput threshold according to the second traffic information. The second network device also obtains a transmission performance threshold corresponding to the first network device, which includes one or more of a delay threshold, a jitter threshold, and a packet loss threshold. Specifically, the second network device may determine the maximum traffic value to be processed by the first network device in the second time period according to the second traffic information, and determine the throughput threshold by adding a preset increment to that maximum. For example, if the maximum traffic value is x, the throughput threshold may be x × 130%. The throughput threshold may be expressed in Mbps, for example. The transmission performance threshold may be preconfigured by an administrator according to the transmission requirements.
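The threshold computation in this step (maximum predicted traffic plus a preset increment, 30% in the x × 130% example above) is simple enough to state directly. The following sketch uses hypothetical names.

```python
def throughput_threshold(predicted_traffic_values, margin=0.30):
    """Throughput threshold = maximum predicted traffic value plus a
    preset increment (30% by default, matching the x * 130% example)."""
    return max(predicted_traffic_values) * (1 + margin)
```

For instance, predicted values of 100, 250, and 180 Mbps in the second time period give a threshold of 250 × 1.3 = 325 Mbps.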
S104: the second network device sends a performance threshold to the first network device, the performance threshold comprising a throughput threshold and a transmission performance threshold.
S105: the first network device determines energy consumption, throughput and transmission performance information corresponding to each energy saving strategy in the plurality of energy saving strategies.
The first network device determines the energy consumption, throughput, and transmission performance information corresponding to each of the plurality of energy-saving strategies. The transmission performance information includes one or more of delay, jitter, and packet loss, and each energy-saving strategy includes the device configuration parameters corresponding to the devices of the first network device. The device configuration parameters may include discrete parameters, such as the switching state of a device, and non-discrete parameters, such as the operating frequency of a device. The energy-saving strategies are preconfigured, and the device configuration parameters included in each strategy differ in part or in whole from those of the others.
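A hypothetical representation of one energy-saving strategy, combining the discrete and non-discrete device configuration parameters mentioned above; the device names and values are illustrative, not taken from the patent.

```python
# One preconfigured energy-saving strategy: per-device configuration with
# a discrete parameter (the on/off switching state) and a non-discrete
# parameter (the operating frequency). All names are hypothetical.
energy_saving_strategy = {
    "forwarding_chip": {"on": True, "frequency_mhz": 800},
    "uplink_ports": {"on": True, "frequency_mhz": 1000},
    "spare_ports": {"on": False, "frequency_mhz": 0},
}
```

A set of such dicts, each with partly or wholly different parameter values, would form the plurality of preconfigured strategies that the first network device evaluates.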
The first network device may determine the energy consumption corresponding to each energy-saving strategy as follows: for each strategy, the first network device inputs the strategy into the energy consumption prediction model and obtains the corresponding energy consumption output by the model. The energy consumption prediction model corresponds to the device type of the first network device and is generated from energy consumption training samples, each of which includes an energy consumption value and the device configuration parameters corresponding to that value. That is, the first network device may determine the energy consumption corresponding to each energy-saving policy using a pre-trained energy consumption prediction model. The energy consumption training samples may be energy consumption values of the first network device itself, with their corresponding device configuration parameters, or those of other network devices whose device type is the same as that of the first network device. The same device type means that two network devices have the same or similar constituent devices; for example, the two network devices may be of the same model.
The energy consumption prediction model can be generated by training the second network device by using an energy consumption training sample and sent to the first network device, or can be generated by training the cloud device by using the energy consumption training sample and sent to the first network device through the second network device.
The first network device may determine the throughput corresponding to each energy-saving strategy as follows: for each strategy, the first network device inputs the strategy into the throughput prediction model and obtains the corresponding throughput output by the model. The throughput prediction model corresponds to the device type of the first network device and is generated from throughput training samples, each of which includes a throughput and the device configuration parameters corresponding to that throughput. That is, the first network device may determine the throughput corresponding to each energy-saving policy using a pre-trained throughput prediction model. The throughput training samples may be throughputs of the first network device itself, with their corresponding device configuration parameters, or those of other network devices whose device type is the same as that of the first network device. The same device type means that two network devices have the same or similar constituent devices; for example, the two network devices may be of the same model.
The throughput prediction model may be generated by the second network device through training with the throughput training sample and issued to the first network device, or may be generated by the cloud device through training with the throughput training sample and sent to the first network device through the second network device.
The first network device may determine the transmission performance information corresponding to each energy saving policy as follows: for each energy saving policy, the first network device inputs the policy and the throughput threshold into the transmission performance prediction model and obtains the transmission performance information for that policy output by the model. The transmission performance prediction model corresponds to the device type of the first network device and is generated from transmission performance training samples. Each transmission performance training sample includes transmission performance information, the device configuration parameters corresponding to that information, and the throughput corresponding to that information. That is, each transmission performance training sample represents the transmission performance obtained when a network device handles the corresponding throughput with the corresponding device configuration parameters. The transmission performance prediction model includes one or more of a delay prediction model, a jitter prediction model, and a packet loss prediction model. In other words, the first network device may determine the transmission performance information corresponding to each energy saving policy using a pre-trained transmission performance prediction model.
The transmission performance training samples may be transmission performance information of the first network device together with the corresponding device configuration parameters and throughput, or transmission performance information of other network devices together with their corresponding device configuration parameters and throughput, where the device type of the other network devices is the same as that of the first network device. The same device type means that two network devices have the same or similar constituent components; for example, two network devices of the same model have the same device type.
The transmission performance prediction model may be generated by the second network device through training with transmission performance training samples and delivered to the first network device, or generated by the cloud device through training with transmission performance training samples and sent to the first network device via the second network device.
It should be noted that each of the above prediction models (the energy consumption, throughput, and transmission performance prediction models) may be a linear model or a neural network model. When the prediction model is a linear model, it is determined by linear fitting during training; when it is a neural network model, it is determined by neural network training. To facilitate understanding of the neural network training process, the generation of the energy consumption prediction model through training is described below as an example.
The network device (the second network device or the cloud device) acquires energy consumption training samples, where each sample includes an energy consumption value and the device configuration parameters corresponding to that value, with the energy consumption value serving as the label. In each round of iterative training, the network device inputs a set of energy consumption training samples into the neural network model, and the neural network model outputs an inference result for those samples. The network device then computes, through the corresponding loss function, a loss value between the inference result output by the neural network model and the actual results (labels) of that set of training samples. Based on the computed loss value, the network device computes the gradient of each parameter in each network layer of the neural network model. The network device then computes, based on the preset hyper-parameters of the optimizer and the gradients of the parameters in each layer, an adjustment value (also called a parameter update amount) for each parameter in this round of training; the adjustment value may be, for example, the product of the gradient and a hyper-parameter (such as the learning rate), and the network device updates each parameter value accordingly. After multiple rounds of training, training stops when the loss value falls below a preset threshold, yielding the energy consumption prediction model.
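The training loop described above (forward pass, loss computation, gradient computation, optimizer update, stop when the loss falls below a preset threshold) can be sketched as follows. This is a minimal illustration with a tiny hand-written network; the data, network size, and hyper-parameters are invented and not part of the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative energy consumption training samples: device switch states as
# features, measured energy consumption as the label (all values invented).
X = rng.integers(0, 2, size=(64, 4)).astype(float)
true_w = np.array([1.0, 0.75, 0.3, 0.15])
y = X @ true_w + 0.2 + rng.normal(0.0, 0.01, 64)

# Tiny one-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.02                                   # hyper-parameter (learning rate)

loss = np.inf
for step in range(20000):
    h = np.maximum(X @ W1 + b1, 0.0)        # forward pass (ReLU)
    pred = (h @ W2 + b2).ravel()            # inference result
    loss = float(np.mean((pred - y) ** 2))  # loss vs. labels
    if loss < 0.01:                         # preset loss threshold
        break                               # stop training
    # Backward pass: gradients of the loss w.r.t. each layer's parameters.
    g = 2.0 * (pred - y)[:, None] / len(y)
    gW2 = h.T @ g; gb2 = g.sum(axis=0)
    gh = (g @ W2.T) * (h > 0)
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    # Optimizer step: adjustment value = learning rate x gradient.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final loss: {loss:.4f}")
```

The same loop structure applies to the throughput and transmission performance models; only the features and labels change.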
When the training process is executed by the cloud device, the first network device sends statistical information to the cloud device through the second network device, where the statistical information includes the throughput at different moments and the transmission performance information, energy consumption, and device configuration parameters corresponding to each moment. The cloud device extracts energy consumption, throughput, and transmission performance training samples from the statistical information, and uses them to generate the energy consumption prediction model, the throughput prediction model, and the transmission performance prediction model, respectively. The cloud device then sends these prediction models to the first network device through the second network device.
S106: the first network device determines a target energy saving policy according to the performance threshold and the energy consumption, throughput, and transmission performance information corresponding to each energy saving policy, where the target energy saving policy is the policy with the minimum energy consumption among the energy saving policies meeting the preset conditions.

After determining the energy consumption, throughput, and transmission performance information corresponding to each energy saving policy, the first network device determines a target energy saving policy from the plurality of energy saving policies according to the performance threshold and the energy consumption, throughput, and transmission performance information corresponding to each policy; the target energy saving policy is the policy with the minimum energy consumption among those meeting the preset conditions. The preset conditions include one or more of the following: the delay indicated by the transmission performance information is less than or equal to the delay threshold, the jitter indicated by the transmission performance information is less than or equal to the jitter threshold, the packet loss indicated by the transmission performance information is less than or equal to the packet loss threshold, and the throughput is greater than or equal to the throughput threshold.
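The selection rule of S106 — filter the candidate policies by the thresholds, then take the minimum-energy survivor — can be sketched as follows. The policy records, metric names, and threshold values are illustrative, not from the patent.

```python
# Each candidate policy carries its predicted metrics (as produced by the
# prediction models); all names and numbers here are invented.
policies = [
    {"name": "p1", "energy": 120.0, "throughput": 95.0, "delay": 4.0, "jitter": 1.0, "loss": 0.02},
    {"name": "p2", "energy": 90.0,  "throughput": 80.0, "delay": 6.0, "jitter": 1.5, "loss": 0.05},
    {"name": "p3", "energy": 100.0, "throughput": 92.0, "delay": 5.0, "jitter": 1.2, "loss": 0.03},
]
# Performance threshold issued by the second network device (invented values).
throughput_min, delay_max, jitter_max, loss_max = 90.0, 5.0, 1.3, 0.04

def meets_preset_conditions(p):
    """Preset conditions of S106: delay/jitter/packet loss at or below their
    thresholds, throughput at or above the throughput threshold."""
    return (p["throughput"] >= throughput_min and p["delay"] <= delay_max
            and p["jitter"] <= jitter_max and p["loss"] <= loss_max)

feasible = [p for p in policies if meets_preset_conditions(p)]
target = min(feasible, key=lambda p: p["energy"])  # minimum-energy feasible policy
print(target["name"])
```

Here p2 fails the throughput threshold, so the target is the cheaper of p1 and p3, namely p3.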
When the first network device has a local constraint, the target energy saving policy satisfies the local constraint in addition to the preset conditions. The local constraint indicates one or more of the following: one or more components in the first network device operate according to preset parameters, the switch states of several components of the first network device remain consistent, or one or more components of the first network device are in an on state or an off state.
It should be noted that the order of S105 and the receiving of the performance threshold is not limited: the first network device may execute S105 first and then receive the performance threshold sent by the second network device, may receive the performance threshold first and then execute S105, or may execute S105 while receiving the performance threshold sent by the second network device.
S107: the first network device adjusts the operation state of the first network device according to the target energy-saving strategy.
After determining the target energy saving policy, the first network device adjusts the configuration parameters of the corresponding components according to the target energy saving policy, thereby adjusting its operating state.
While operating with the device configuration parameters corresponding to the target energy saving policy, the first network device may also collect its own statistical information and send it to the second network device or the cloud device, so that the second network device or the cloud device can update the parameters of each prediction model and improve its prediction accuracy.
In this way, the first network device optimizes over the energy saving policies according to the performance threshold issued by the second network device and the performance values corresponding to each policy, so that the selected policy has the minimum energy consumption among those whose performance values meet the performance threshold; configuring each component according to this policy saves energy.
Fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application. As shown in fig. 2, the application scenario includes a cloud device, an analyzer, a controller, and network devices. The network devices may include forwarding devices and terminal devices in the network. The controller may obtain statistical information from a network device and send it to the analyzer, so that the analyzer predicts the traffic handled by the network device according to the statistical information and thereby determines the throughput threshold corresponding to the network device. The analyzer sends the performance threshold to the network device through the controller; the network device determines a target energy saving policy according to the performance threshold and the throughput, energy consumption, and transmission performance information corresponding to each energy saving policy, and adjusts its operating state according to the device configuration parameters corresponding to the target energy saving policy.
In the application scenario shown in fig. 2, the controller sends the collected statistical information to the cloud device through the analyzer, and the cloud device trains the traffic prediction model, the energy consumption prediction model, the throughput prediction model, and the transmission performance prediction model using the statistical information. After training, the cloud device sends the traffic prediction model to the analyzer, so that the analyzer uses it to predict the traffic of the network device. Meanwhile, the cloud device sends the energy consumption prediction model, the throughput prediction model, and the transmission performance prediction model to the network device through the analyzer.
In an actual application process, the controller collects first flow information corresponding to the network equipment in a first time period and sends the first flow information to the analyzer, and the analyzer obtains second flow information corresponding to the network equipment in a second time period by using the first flow information and the flow prediction model. The analyzer obtains the transmission performance threshold, determines a throughput threshold in the performance threshold according to the second traffic information, and sends the throughput threshold and the transmission performance threshold to the network device. The network equipment determines the throughput, transmission performance information (one or more of time delay, jitter and packet loss) and energy consumption corresponding to each energy-saving strategy, determines a target energy-saving strategy according to the performance threshold and the throughput, transmission performance information and energy consumption corresponding to each energy-saving strategy, and adjusts the running state of the network equipment according to the target energy-saving strategy so as to save energy consumption.
As noted above, each prediction model may be a linear model or a neural network model. To facilitate understanding, the generation of each model is described below.
(I) Linear model
1. Energy consumption prediction model

Let s_0, s_1, …, s_n represent the switch states of the relevant components in the network device, such as a service board, a fabric card, and a serializer/deserializer (SerDes). The following formula is used to fit the relationship between the switch states and the energy consumption:

E = e_0·s_0 + e_1·s_1 + … + e_n·s_n

where E represents the energy consumption and {e_0, e_1, …, e_n} is a set of parameters describing the relationship between energy consumption and device switch states. An optimization algorithm such as least squares or gradient descent can be used to fit the parameters {e_0, e_1, …, e_n} over the energy consumption training samples (each sample containing the device switch states and the energy consumption).
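The least-squares fit of the linear energy model can be sketched as follows; the switch-state samples and the "true" parameters used to generate them are invented for illustration, and the data is kept noiseless for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative energy consumption training samples: rows of switch states
# s_0..s_4 and the corresponding (noiseless, for clarity) energy values.
S = rng.integers(0, 2, size=(50, 5)).astype(float)
e_true = np.array([12.0, 8.0, 5.0, 3.0, 1.0])   # assumed "true" parameters
E = S @ e_true

# Least-squares fit of E = e_0*s_0 + ... + e_n*s_n.
e_fit, *_ = np.linalg.lstsq(S, E, rcond=None)
print(np.round(e_fit, 3))   # recovers e_true up to numerical precision
```

The throughput model is fitted in exactly the same way, with throughput values as labels.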
2. Throughput prediction model
Let s_0, s_1, …, s_n represent the switch states of the relevant components in the network device. The following formula is used to fit the relationship between the switch states and the throughput:

Throughput = t_0·s_0 + t_1·s_1 + … + t_n·s_n

where Throughput represents the throughput and {t_0, t_1, …, t_n} are the model parameters. An optimization algorithm such as least squares or gradient descent can be used to fit {t_0, t_1, …, t_n} over the throughput training samples (each sample containing the device switch states and the throughput).
3. Time delay prediction model
Likewise, a linear model is used to fit the relationship between the device switch states, the traffic size, and the delay:

Delay = d_0·s_0 + d_1·s_1 + … + d_n·s_n + d_{n+1}·x

where x represents the traffic size handled by the network device under the set of device switch states {s_0, s_1, …, s_n}, and Delay represents the delay when the network device handles traffic x under those switch states. That is, a training sample includes a set of switch states {s_0, s_1, …, s_n}, the traffic size x handled under those states, and the delay of the network device when handling that traffic. {d_0, d_1, …, d_n, d_{n+1}} are the parameters of the delay prediction model, which can be trained on multiple training samples using an optimization algorithm such as least squares or gradient descent. The inputs required to predict the delay are the device switch states and the minimum throughput.
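Fitting the delay model differs from the energy model only in that the design matrix gains an extra column for the traffic size x. A sketch (sample data and parameters invented, noiseless for clarity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative delay training samples: switch states s_0..s_3 plus the
# traffic size x handled under those states (all values invented).
S = rng.integers(0, 2, size=(60, 4)).astype(float)
x = rng.uniform(10.0, 100.0, size=60)
A = np.column_stack([S, x])                     # design matrix [s_0..s_n, x]
d_true = np.array([2.0, 1.5, 1.0, 0.5, 0.03])   # assumed model parameters
delay = A @ d_true

# Least-squares fit of Delay = d_0*s_0 + ... + d_n*s_n + d_{n+1}*x.
d_fit, *_ = np.linalg.lstsq(A, delay, rcond=None)
print(np.round(d_fit, 4))
```

The jitter and packet loss models below have the same form and are fitted identically.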
4. Jitter prediction model
Fitting the relationship between the device switch states, the traffic size, and the jitter using a linear model:

Jitter = j_0·s_0 + j_1·s_1 + … + j_n·s_n + j_{n+1}·x

where Jitter represents the jitter and {j_0, j_1, …, j_n, j_{n+1}} are the parameters of the jitter prediction model, which can be trained using an optimization algorithm such as least squares or gradient descent. The inputs required to predict the jitter are the device switch states and the minimum throughput.
5. Packet loss prediction model
Fitting the relationship between the device switch states, the traffic size, and the packet loss using a linear model:

Loss = l_0·s_0 + l_1·s_1 + … + l_n·s_n + l_{n+1}·x

where Loss represents the packet loss and {l_0, l_1, …, l_n, l_{n+1}} are the parameters of the packet loss prediction model, which can be trained using an optimization algorithm such as least squares or gradient descent. The inputs required to predict the packet loss are the device switch states and the minimum throughput.
In application, after receiving the issued minimum throughput X, delay threshold D, jitter threshold J, and packet loss threshold L, the network device builds a calculation model in combination with its local switch constraints. The calculation model minimizes the energy consumption while guaranteeing the minimum throughput X, the delay threshold D, the jitter threshold J, the packet loss threshold L, and the local switch constraints. The variables of the calculation model are the switch states s_0, s_1, …, s_n of the components of the network device: a switch variable of 1 means the component is on, and 0 means it is off. The calculation model takes the following form:

Min e_0·s_0 + e_1·s_1 + … + e_n·s_n

s.t. X < t_0·s_0 + t_1·s_1 + … + t_n·s_n

D > d_0·s_0 + d_1·s_1 + … + d_n·s_n + d_{n+1}·X

J > j_0·s_0 + j_1·s_1 + … + j_n·s_n + j_{n+1}·X

L > l_0·s_0 + l_1·s_1 + … + l_n·s_n + l_{n+1}·X

Local constraint: for example, s_1 = s_0
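For a small number of switches the calculation model can be solved exactly by enumerating all 2^(n+1) switch assignments. A sketch on a toy instance (all coefficients and thresholds invented; the jitter and packet loss constraints are omitted for brevity):

```python
from itertools import product

# Toy instance of the calculation model; all numbers are illustrative.
e = [30.0, 20.0, 15.0, 10.0]        # energy parameters e_0..e_3
t = [40.0, 35.0, 30.0, 25.0]        # throughput parameters t_0..t_3
d = [0.5, 0.4, 0.3, 0.2, 0.1]       # delay parameters d_0..d_3 and d_{n+1}
X, D = 60.0, 7.0                    # minimum throughput, delay threshold

best, best_energy = None, float("inf")
for s in product([0, 1], repeat=4):           # enumerate all switch states
    if s[1] != s[0]:                          # local constraint: s_1 = s_0
        continue
    throughput = sum(ti * si for ti, si in zip(t, s))
    delay = sum(di * si for di, si in zip(d, s)) + d[4] * X
    energy = sum(ei * si for ei, si in zip(e, s))
    if throughput > X and delay < D and energy < best_energy:
        best, best_energy = s, energy

print(best, best_energy)   # minimum-energy feasible switch assignment
```

Exhaustive search is only practical for small n; for larger devices an iterative heuristic such as the ant colony algorithm mentioned later is used instead.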
(II) Neural network model
Let s_0, s_1, …, s_n represent the switch states of the relevant components in the network device, and use a neural network to fit the relationship between the switch states and the energy consumption:

Energy = E(s_0, s_1, …, s_n)

where E(·) is a function describing the relationship between energy consumption and switch states. For example, the back propagation (BP) algorithm or a genetic algorithm (GA) may be used to fit it over the energy consumption training samples (each sample containing the switch states and the energy consumption).
For the throughput, delay, jitter, and packet loss prediction models, similar forms are obtained:

Throughput = T(s_0, s_1, …, s_n)

where T(·) is a function describing the relationship between throughput and switch states. The BP algorithm or a GA may be used to fit it over throughput training samples (each sample containing the switch states and the throughput).

Delay = D(s_0, s_1, …, s_n, x)

where D(·) is a function describing the relationship among delay, device switch states, and traffic size. The BP algorithm or a GA may be used to fit it over delay training samples (each sample containing the switch states, the traffic size, and the delay).

Jitter = J(s_0, s_1, …, s_n, x)

where J(·) is a function describing the relationship among jitter, device switch states, and traffic size. The BP algorithm or a GA may be used to fit it over jitter training samples (each sample containing the switch states, the traffic size, and the jitter).

Loss = L(s_0, s_1, …, s_n, x)

where L(·) is a function describing the relationship among packet loss, device switch states, and traffic size. The BP algorithm or a GA may be used to fit it over packet loss training samples (each sample containing the device switch states, the traffic size, and the packet loss).
After receiving the issued throughput threshold and transmission performance thresholds (e.g., the delay threshold, the jitter threshold, and the packet loss threshold), the network device builds a calculation model in combination with its local switch constraints. The calculation model minimizes the energy consumption while guaranteeing the throughput threshold, the transmission performance thresholds, and the local switch constraints. The variables of the calculation model are the switch states s_0, s_1, …, s_n of the network device: a selected switch is on, otherwise it is off. The calculation model takes the following form:

Min E(S)

s.t. X < T(S)

D > D(S, X)

J > J(S, X)

L > L(S, X)

Local constraint: for example, s_1 = s_0
The calculation model can be solved iteratively, for example with an ant colony algorithm, to obtain the specific values of s_0 to s_n; each switch is then configured to the obtained state to achieve energy saving.
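A heavily simplified ant-colony-style search over binary switch states can be sketched as follows. This is a toy illustration, not a production solver: the pheromone scheme is reduced to its bare essentials, the linear toy instance reuses the style of the earlier formulation, and all coefficients, thresholds, and tuning values are invented.

```python
import random

random.seed(0)

# Toy linear models and thresholds; all numbers are illustrative.
e = [30.0, 20.0, 15.0, 10.0]
t = [40.0, 35.0, 30.0, 25.0]
d = [0.5, 0.4, 0.3, 0.2, 0.1]
X, D = 60.0, 7.0
n = 4

def feasible(s):
    thr = sum(ti * si for ti, si in zip(t, s))
    dly = sum(di * si for di, si in zip(d, s)) + d[n] * X
    return s[1] == s[0] and thr > X and dly < D   # incl. local constraint

def energy(s):
    return sum(ei * si for ei, si in zip(e, s))

# Pheromone for setting each switch off (index 0) or on (index 1).
tau = [[1.0, 1.0] for _ in range(n)]
best, best_e = None, float("inf")

for _ in range(200):                      # iterations
    for _ant in range(10):                # ants per iteration
        s = tuple(
            1 if random.random() < tau[i][1] / (tau[i][0] + tau[i][1]) else 0
            for i in range(n)
        )
        if feasible(s) and energy(s) < best_e:
            best, best_e = s, energy(s)
    if best is not None:                  # evaporate, then reinforce best
        for i in range(n):
            tau[i][0] *= 0.9
            tau[i][1] *= 0.9
            tau[i][best[i]] += 1.0

print(best, best_e)
```

Over iterations the pheromone concentrates on the best feasible assignment found so far, biasing later ants toward low-energy configurations.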
It should be noted that the above embodiment considers only the two discrete states, on and off, of the relevant components in the network device (for example, the fabric card, the service board, and the serializer/deserializer). In practical applications, non-discrete configuration parameters of components such as the central processing unit (CPU), the network processor (NP), and the heat sink of the network device may also be considered. Therefore, when constructing each prediction model, the relationships between the non-discrete configuration parameters and the energy consumption, throughput, delay, jitter, and packet loss can also be established. For example, let s_0, s_1, …, s_n represent the discrete-state configuration parameters of some components and c_0, c_1, …, c_m represent the continuous-state configuration parameters of other components. Taking the delay model as an example:

Delay = d_0·s_0 + d_1·s_1 + … + d_n·s_n + d_{n+1}·c_0 + … + d_{n+m+1}·c_m + d_{n+m+2}·X

For example, the discrete state s_i represents the switch state of a service board and takes the values {0, 1}; c_j represents the power of the cooling fan, configurable over the continuous range 0%–100%, for example 53.2%. It can be understood that non-discrete configuration parameters can also be converted into discrete ones by sampling. For example, when the power of the cooling fan is sampled at 1% granularity, 101 power values are obtained: [0, 1%, 2%, 3%, …, 100%]; correspondingly, c_j in the model takes values in [0, 1%, 2%, 3%, …, 100%]. Other non-discrete configuration parameters can be discretized similarly. When all configuration parameters are discretized, the model becomes a relationship between the discretized configuration parameters and the delay.
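The fan-power sampling described above can be sketched in a few lines; the mixed delay helper and its coefficients are purely illustrative assumptions, not the patent's model.

```python
# Discretizing a continuous configuration parameter by sampling, as in the
# fan-power example: 1% granularity gives 101 candidate values 0%..100%.
fan_power_levels = [i / 100.0 for i in range(101)]
print(len(fan_power_levels))       # 101 values

# A mixed delay model over discrete switch states and the discretized fan
# power; the coefficients and this helper are invented for illustration.
def delay(s, c_fan, X, d_s=(0.5, 0.4), d_c=2.0, d_x=0.1):
    discrete = sum(di * si for di, si in zip(d_s, s))
    return discrete + d_c * c_fan + d_x * X

print(delay((1, 0), fan_power_levels[53], 10.0))   # fan at 53% power
```

Once every continuous parameter is discretized this way, the enumeration or heuristic search used for switch states applies unchanged, just over a larger candidate grid.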
Based on the above method embodiments, the embodiments of the present application provide an apparatus for adjusting an operation state of a network device, which will be described below with reference to the accompanying drawings.
Referring to fig. 3, which is a block diagram of an apparatus for adjusting an operation state of a network device according to an embodiment of the present application, as shown in fig. 3, the apparatus 300 includes a receiving unit 301, a first determining unit 302, a second determining unit 303, and an adjusting unit 304. The apparatus 300 is applicable to a first network device.
A receiving unit 301, configured to receive a performance threshold sent by the second network device. The performance threshold includes a throughput threshold and a transmission performance threshold. The transmission performance threshold includes one or more of a delay threshold, a jitter threshold, and a packet loss threshold.
The first determining unit 302 is configured to determine energy consumption, throughput, and transmission performance information corresponding to each of the plurality of energy saving policies. The transmission performance information includes one or more of delay, jitter, and packet loss. Each energy saving strategy comprises a device configuration parameter corresponding to a device in the first network equipment.
The second determining unit 303 is configured to determine a target energy saving policy according to the performance threshold, the energy consumption, the throughput and the transmission performance information corresponding to each energy saving policy. The target energy-saving strategy is the energy-saving strategy with the minimum energy consumption corresponding to the energy-saving strategy meeting the preset conditions. The preset conditions comprise one or more of time delay indicated by the transmission performance information being smaller than or equal to the time delay threshold, jitter indicated by the transmission performance information being smaller than or equal to the jitter threshold, packet loss indicated by the transmission performance information being smaller than or equal to the packet loss threshold, and throughput being larger than or equal to the throughput threshold;
An adjusting unit 304, configured to adjust an operation state of the first network device according to the target energy saving policy.
Optionally, the apparatus 300 further comprises a transmitting unit.
The sending unit is used for sending the first flow information to the second network equipment. The first traffic information indicates a value of traffic handled by the first network device during a first time period. The first flow information is used to determine the throughput threshold. In this implementation, the first network device sends traffic information processed by itself, i.e. the first traffic information, to the second network device, so that the second network device determines a throughput threshold, i.e. the minimum throughput that the first network device should be able to process in the second time period, from the first traffic information.
Optionally, the first determining unit 302 is further configured to, for each energy saving strategy, input the energy saving strategy into the energy consumption prediction model to obtain the energy consumption corresponding to the energy saving strategy output by the energy consumption prediction model. The energy consumption prediction model corresponds to a device type of the first network device. The energy consumption prediction model is generated according to the energy consumption training sample. Each energy consumption training sample comprises an energy consumption value and a device configuration parameter corresponding to the energy consumption value.
Optionally, the first determining unit 302 is further configured to, for each power saving policy, input the power saving policy into the throughput prediction model to obtain a throughput corresponding to the power saving policy output by the throughput prediction model. The throughput prediction model corresponds to a device type of the first network device. The throughput prediction model is generated from throughput training samples. Each throughput training sample includes a throughput and a device configuration parameter corresponding to the throughput.
Optionally, the first determining unit 302 is further configured to input, for each energy saving policy, the energy saving policy into the transmission performance prediction model to obtain transmission performance information corresponding to the energy saving policy output by the transmission performance prediction model. The transmission performance prediction model corresponds to a device type of the first network device. The transmission performance prediction model is generated from transmission performance training samples. Each transmission performance training sample comprises transmission performance information, a device configuration parameter corresponding to the transmission performance information and throughput corresponding to the transmission performance information. The transmission performance prediction model comprises one or more of a delay prediction model, a jitter prediction model and a packet loss prediction model.
Optionally, when the first network device has a local constraint, the target energy saving policy satisfies the local constraint in addition to the preset condition. The local constraint indicates that one or more devices in the first network device are operating in accordance with the preset parameters and/or that the switching states of the plurality of devices of the first network device remain consistent. In this implementation manner, when determining the target energy-saving policy, the first network device may further consider a local constraint, so that the determined target energy-saving policy meets both a preset condition and the local constraint.
It should be noted that, specific implementations of each unit in this embodiment may refer to the related descriptions in the foregoing method embodiments, and this embodiment is not repeated herein.
Referring to fig. 4, which is a block diagram of another device for adjusting an operation state of a network device according to an embodiment of the present application, as shown in fig. 4, the device 400 includes a receiving unit 401, a predicting unit 402, an obtaining unit 403, and a sending unit 404. The apparatus 400 may be applied to a second network device.
A receiving unit 401, configured to receive first traffic information sent by the first network device. The first traffic information indicates a value of traffic processed by the first network device during a first time period.
A prediction unit 402, configured to predict second traffic information corresponding to the first network device in the second time period according to the first traffic information. The second period of time is later than the first period of time.
An obtaining unit 403, configured to obtain the transmission performance threshold and determine a throughput threshold according to the second traffic information. The transmission performance threshold includes one or more of a delay threshold, a jitter threshold, and a packet loss threshold.
And the sending unit 404 is configured to send the throughput threshold and the transmission performance threshold to the first network device, so that the first network device adjusts the operation state of the first network device according to the throughput threshold and the transmission performance threshold.
Optionally, the prediction unit 402 is further configured to input the first flow information into the flow prediction model, so as to obtain second flow information output by the flow prediction model. The traffic prediction model is generated based on historical traffic information training of the first network device.
Optionally, the receiving unit 401 is further configured to receive the flow prediction model.
Optionally, the receiving unit 401 is further configured to receive a throughput prediction model, an energy consumption prediction model, and a transmission performance prediction model sent by the cloud device. The sending unit 404 is further configured to send the throughput prediction model, the energy consumption prediction model or the transmission performance prediction model to the first network device.
Optionally, the throughput prediction model, the energy consumption prediction model or the transmission performance prediction model corresponds to a device type of the first network device. The energy consumption prediction model is generated according to the energy consumption training sample. Each energy consumption training sample comprises an energy consumption value and a device configuration parameter corresponding to the energy consumption value. The throughput prediction model is generated from throughput training samples. Each throughput training sample includes a throughput and a device configuration parameter corresponding to the throughput. The transmission performance prediction model is generated from transmission performance training samples. Each transmission performance training sample comprises transmission performance information, a device configuration parameter corresponding to the transmission performance information and throughput corresponding to the transmission performance information.
It should be noted that, for the implementation of each unit in this embodiment, reference may be made to the related description in the foregoing method embodiments; details are not repeated here.
Fig. 5 is a schematic structural diagram of a network device according to an embodiment of the present application. The network device may be, for example, the first network device or the second network device in the foregoing method embodiments, and may implement the device 300 in the embodiment shown in Fig. 3 or the device 400 in the embodiment shown in Fig. 4.
The network device 500 includes a processor 510, a communication interface 520, and a memory 530. There may be one or more processors 510 in the network device 500; one processor is taken as an example in Fig. 5. In this embodiment of the present application, the processor 510, the communication interface 520, and the memory 530 may be connected by a bus system or in another manner; connection by a bus system 540 is taken as an example in Fig. 5.
The processor 510 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor 510 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The memory 530 may include a volatile memory, such as a random-access memory (RAM); the memory 530 may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 530 may also include a combination of the above types of memory.
Optionally, the memory 530 stores an operating system and programs, executable modules or data structures, or a subset thereof, or an extended set thereof, wherein the programs may include various operational instructions for performing various operations. The operating system may include various system programs for implementing various underlying services and handling hardware-based tasks. The processor 510 may read the program in the memory 530 to implement the method provided by the embodiment of the present application.
The memory 530 may be a storage device in the network device 500 or may be a storage device independent of the network device 500.
The bus system 540 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus system 540 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in Fig. 5, but this does not mean that there is only one bus or only one type of bus.
Fig. 6 is a schematic structural diagram of a network device 600 according to an embodiment of the present application. The network device may be, for example, the first network device or the second network device in the foregoing method embodiments, and may implement the device 300 in the embodiment shown in Fig. 3 or the device 400 in the embodiment shown in Fig. 4.
The network device 600 includes a main control board 610 and an interface board 630.
The main control board 610 is also called a main processing unit (MPU) or a route processor card. The main control board 610 controls and manages components in the network device 600, including route computation, device management, device maintenance, and protocol processing. The main control board 610 includes a central processor 611 and a memory 612.
The interface board 630 is also referred to as a line processing unit (LPU), line card, or service board. The interface board 630 provides various service interfaces and implements packet forwarding. The service interfaces include, but are not limited to, Ethernet interfaces such as flexible Ethernet service interfaces (FlexE Clients), packet over SONET/SDH (POS) interfaces, and the like. The interface board 630 includes a central processor 631, a network processor 632, a forwarding table entry memory 634, and a physical interface card (PIC) 633.
The central processor 631 on the interface board 630 is used for controlling and managing the interface board 630 and communicating with the central processor 611 on the main control board 610.
The network processor 632 is configured to implement packet forwarding. The network processor 632 may take the form of a forwarding chip. Specifically, processing of an uplink packet includes inbound-interface processing and forwarding table lookup; processing of a downlink packet includes forwarding table lookup and the like.
The physical interface card 633 implements the interworking function of the physical layer: original traffic enters the interface board 630 through the physical interface card 633, and processed packets are sent out from the physical interface card 633. The physical interface card 633 includes at least one physical interface, also known as a physical port. The physical interface card 633, also called a daughter card, may be mounted on the interface board 630; it converts optical and electrical signals into packets, checks packet validity, and forwards the packets to the network processor 632 for processing. In some embodiments, the central processor 631 of the interface board 630 may also perform the functions of the network processor 632, for example implementing software forwarding based on a general-purpose CPU, so that the network processor 632 is not required on the interface board 630.
Optionally, the network device 600 includes a plurality of interface boards. For example, the network device 600 further includes an interface board 640, which includes a central processor 641, a network processor 642, a forwarding table entry memory 644, and a physical interface card 643.
Optionally, the network device 600 further includes a switching network board 620. The switching network board 620 may also be referred to as a switch fabric unit (SFU). When the network device has a plurality of interface boards, the switching network board 620 implements data exchange between the interface boards. For example, the interface board 630 and the interface board 640 may communicate through the switching network board 620.
The main control board 610 and the interface board 630 are coupled. For example, the main control board 610, the interface board 630, and the interface board 640 are connected to the system backplane through a system bus to implement interworking. In one possible implementation, an inter-process communication (IPC) channel is established between the main control board 610 and the interface board 630, and the main control board 610 and the interface board 630 communicate through the IPC channel.
Logically, the network device 600 includes a control plane and a forwarding plane. The control plane includes the main control board 610 and the central processor 631; the forwarding plane includes the components that perform forwarding, such as the forwarding table entry memory 634, the physical interface card 633, and the network processor 632. The control plane performs functions such as routing, generating forwarding tables, processing signaling and protocol packets, and configuring and maintaining device status, and delivers the generated forwarding tables to the forwarding plane. On the forwarding plane, the network processor 632 forwards packets received by the physical interface card 633 based on the forwarding tables delivered by the control plane. The forwarding tables delivered by the control plane may be stored in the forwarding table entry memory 634. In some embodiments, the control plane and the forwarding plane may be completely separate and not on the same device.
It should be understood that operations on the interface board 640 are consistent with those on the interface board 630 in this embodiment of the present application; for brevity, details are not repeated. It should also be understood that the network device 600 of this embodiment may correspond to the network device in the foregoing method embodiments, and the main control board 610, the interface board 630, and/or the interface board 640 in the network device 600 may implement the steps in the foregoing method embodiments, which are not described here for brevity.
It should be understood that there may be one or more main control boards; when there are more than one, they may include an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the network device, the more interface boards are provided. There may also be one or more physical interface cards on an interface board. There may be no switching network board, or one or more switching network boards; when there are multiple switching network boards, they can jointly implement load sharing and redundancy backup. In a centralized forwarding architecture, the network device may not need a switching network board, and an interface board undertakes the processing of the service data of the entire system. In a distributed forwarding architecture, the network device has at least one switching network board, through which data exchange between multiple interface boards is implemented, providing high-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device with the distributed architecture is greater than that of a device with the centralized architecture. Optionally, the network device may take the form of a single board, that is, there is no switching network board, and the functions of the interface board and the main control board are integrated on the single board; in this case, the central processor on the interface board and the central processor on the main control board may be combined into one central processor on the single board to perform the functions of both. A device in this form (for example, a low-end switch or router) has relatively low data exchange and processing capability. Which architecture is used depends on the specific networking deployment scenario.
In some possible embodiments, the network device may be implemented as a virtualized device. For example, the virtualized device may be a virtual machine (VM) running a program that provides packet sending functions, the virtual machine being deployed on a hardware device (for example, a physical server). A virtual machine is a complete software-emulated computer system that has complete hardware system functionality and runs in a completely isolated environment. The virtual machine may be configured as a network device. For example, the network device may be implemented based on a general-purpose physical server in combination with network functions virtualization (NFV) technology, as a virtual host, a virtual router, or a virtual switch. By reading this application, those skilled in the art can virtualize a network device having the above functions on a general-purpose physical server in combination with NFV technology; details are not repeated here.
It should be understood that the network devices in the above product forms have any function of the network device in the foregoing method embodiments, which is not described here again.
An embodiment of the present application further provides a chip, which includes a processor and an interface circuit. The interface circuit is configured to receive instructions and transmit them to the processor. The processor, for example as a specific implementation of the adjustment device 300 shown in Fig. 3, may be configured to perform the above method for adjusting an operating state. The processor is coupled to a memory, and the memory is configured to store programs or instructions which, when executed by the processor, cause the system-on-chip to implement the method in any of the above method embodiments.
Optionally, there may be one or more processors in the system-on-chip. The processor may be implemented in hardware or in software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general-purpose processor, implemented by reading software code stored in a memory.
Optionally, there may be one or more memories in the system-on-chip. The memory may be integrated with the processor or separate from the processor, which is not limited in this application. The memory may be a non-transitory memory, such as a read-only memory (ROM); it may be integrated on the same chip as the processor or provided separately on different chips. The type of memory and the manner in which the memory and the processor are arranged are not particularly limited in this application.
The system-on-chip may be, for example, an FPGA, an ASIC, a system on chip (SoC), a CPU, an NP, a digital signal processor (DSP), a microcontroller unit (MCU), a PLD, or another integrated chip.
The embodiment of the application also provides a network system. The system includes a first network device and a second network device.
The first network device is configured to perform the steps performed by the first network device in the method for adjusting an operating state of a network device shown in Fig. 3.

The second network device is configured to perform the steps performed by the second network device in the method for adjusting an operating state of a network device shown in Fig. 3.
Optionally, the system further comprises a cloud device. The cloud device is used for receiving statistical information sent by the second network device. The statistical information includes historical traffic information of network devices managed by the second network device. The cloud device is further used for training and generating a flow prediction model by utilizing the historical flow information, and sending the flow prediction model to the second network device.
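As an illustration of this pipeline, a traffic prediction model could be as simple as a moving average over historical per-period traffic, with the throughput threshold then derived from the predicted traffic by adding headroom. Both the window size and the headroom factor below are assumptions made for the sketch; the embodiments do not fix a particular predictor or threshold rule.

```python
# Sketch: forecast next-period (second time period) traffic from historical
# per-period traffic values with a moving average, then derive a throughput
# threshold by adding headroom above the predicted load. Window size and
# headroom factor are illustrative assumptions, not from the embodiments.
def predict_next_traffic(history, window=4):
    # Simple stand-in for the trained traffic prediction model.
    recent = history[-window:]
    return sum(recent) / len(recent)

def throughput_threshold(predicted_traffic, headroom=1.2):
    # The threshold leaves room above the predicted load so the chosen
    # energy-saving policy does not throttle expected traffic.
    return predicted_traffic * headroom

history_gbps = [8.0, 9.0, 10.0, 9.0, 10.0, 12.0]  # first-time-period samples
predicted = predict_next_traffic(history_gbps)     # mean of last 4 = 10.25
print(round(throughput_threshold(predicted), 2))   # 12.3
```

In the system described above, the cloud device would train the real predictor on the statistical information, and the second network device would apply it and send the resulting threshold to the first network device.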
Optionally, the cloud device is further configured to send the throughput prediction model, the energy consumption prediction model, or the transmission performance prediction model to the second network device.
The embodiment of the application also provides a computer readable storage medium comprising instructions or a computer program, which when run on a computer, cause the computer to execute the method for adjusting the running state of the network device provided by the above embodiment.
The embodiment of the application also provides a computer program product containing instructions or a computer program, which when run on a computer, cause the computer to execute the method for adjusting the running state of the network device provided by the embodiment.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division of units is merely a logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The objects, technical solutions and advantageous effects of the present application have been described in further detail in the above embodiments, and it should be understood that the above are only embodiments of the present application.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (17)

1. A method for adjusting an operating state of a network device, the method comprising:
a first network device receives a performance threshold sent by a second network device, wherein the performance threshold comprises a throughput threshold and a transmission performance threshold, and the transmission performance threshold comprises one or more of a time delay threshold, a jitter threshold, and a packet loss threshold;
the first network device determines energy consumption, throughput, and transmission performance information corresponding to each of a plurality of energy-saving policies, wherein the transmission performance information comprises one or more of time delay, jitter, and packet loss, and each energy-saving policy comprises device configuration parameters corresponding to devices in the first network device;
the first network device determines a target energy-saving policy according to the performance threshold and the energy consumption, throughput, and transmission performance information corresponding to each energy-saving policy, wherein the target energy-saving policy is, among the energy-saving policies meeting a preset condition, the energy-saving policy with the minimum corresponding energy consumption, and the preset condition comprises one or more of the following: the time delay indicated by the transmission performance information is less than or equal to the time delay threshold, the jitter indicated by the transmission performance information is less than or equal to the jitter threshold, the packet loss indicated by the transmission performance information is less than or equal to the packet loss threshold, and the throughput is greater than or equal to the throughput threshold;
the first network device adjusts the running state of the first network device according to the target energy-saving policy.
2. The method according to claim 1, wherein the method further comprises:
the first network device sends first traffic information to the second network device, the first traffic information indicating a value of traffic processed by the first network device during a first time period, the first traffic information being used to determine the throughput threshold.
3. The method according to claim 1 or 2, wherein the determining, by the first network device, the energy consumption corresponding to each of the plurality of energy-saving policies comprises:
for each energy-saving policy, the first network device inputs the energy-saving policy into an energy consumption prediction model to obtain the energy consumption, output by the energy consumption prediction model, corresponding to the energy-saving policy, wherein the energy consumption prediction model corresponds to the device type of the first network device, the energy consumption prediction model is generated according to energy consumption training samples, and each energy consumption training sample comprises an energy consumption value and a device configuration parameter corresponding to the energy consumption value.
4. The method according to any one of claims 1-3, wherein the determining, by the first network device, the throughput corresponding to each of the plurality of energy-saving policies comprises:
for each energy-saving policy, the first network device inputs the energy-saving policy into a throughput prediction model to obtain the throughput, output by the throughput prediction model, corresponding to the energy-saving policy, wherein the throughput prediction model corresponds to the device type of the first network device, the throughput prediction model is generated according to throughput training samples, and each throughput training sample comprises a throughput and device configuration parameters corresponding to the throughput.
5. The method according to any one of claims 1-4, wherein the determining, by the first network device, the transmission performance information corresponding to each of the plurality of energy-saving policies comprises:
for each energy-saving policy, the first network device inputs the energy-saving policy into a transmission performance prediction model to obtain the transmission performance information, output by the transmission performance prediction model, corresponding to the energy-saving policy, wherein the transmission performance prediction model corresponds to the device type of the first network device, the transmission performance prediction model is generated according to transmission performance training samples, each transmission performance training sample comprises transmission performance information, device configuration parameters corresponding to the transmission performance information, and a throughput corresponding to the transmission performance information, and the transmission performance prediction model comprises one or more of a time delay prediction model, a jitter prediction model, and a packet loss prediction model.
6. The method according to any one of claims 1-5, wherein, when the first network device has a local constraint, the target energy-saving policy satisfies the local constraint in addition to the preset condition, the local constraint indicating that one or more devices in the first network device operate according to preset parameters, and/or that switching states of a plurality of devices of the first network device remain consistent.
7. A method for adjusting an operating state of a network device, the method comprising:
a second network device receives first traffic information sent by a first network device, wherein the first traffic information indicates a value of traffic processed by the first network device in a first time period;
the second network device predicts, according to the first traffic information, second traffic information corresponding to the first network device in a second time period, wherein the second time period is later than the first time period;
the second network device acquires a transmission performance threshold and determines a throughput threshold according to the second traffic information, wherein the transmission performance threshold comprises one or more of a time delay threshold, a jitter threshold, and a packet loss threshold;
the second network device sends the throughput threshold and the transmission performance threshold to the first network device, so that the first network device adjusts the running state of the first network device according to the throughput threshold and the transmission performance threshold.
8. The method according to claim 7, wherein the predicting, by the second network device according to the first traffic information, second traffic information corresponding to the first network device in a second time period comprises:
the second network device inputs the first traffic information into a traffic prediction model to obtain the second traffic information output by the traffic prediction model, wherein the traffic prediction model is trained on historical traffic information of the first network device.
9. The method of claim 8, wherein the method further comprises:
the second network device receives the traffic prediction model.
10. The method according to claim 9, wherein the method further comprises:
the second network device receives a throughput prediction model, an energy consumption prediction model, and a transmission performance prediction model sent by a cloud device, and sends the throughput prediction model, the energy consumption prediction model, or the transmission performance prediction model to the first network device.
11. The method of claim 10, wherein the throughput prediction model, the energy consumption prediction model, or the transmission performance prediction model corresponds to a device type of the first network device,
the energy consumption prediction model is generated according to energy consumption training samples, and each energy consumption training sample comprises an energy consumption value and a device configuration parameter corresponding to the energy consumption value;
the throughput prediction model is generated according to throughput training samples, and each throughput training sample comprises throughput and device configuration parameters corresponding to the throughput;
the transmission performance prediction model is generated according to transmission performance training samples, and each transmission performance training sample comprises transmission performance information, device configuration parameters corresponding to the transmission performance information and throughput corresponding to the transmission performance information.
12. A network system, the system comprising a first network device and a second network device;
the first network device for performing the method of any of claims 1-6;
the second network device being configured to perform the method of any of claims 7-11.
13. The system of claim 12, further comprising a cloud device;
the cloud device is configured to receive statistical information sent by the second network device, where the statistical information includes historical traffic information of network devices managed by the second network device;
the cloud device is further configured to train and generate a traffic prediction model according to the historical traffic information, and send the traffic prediction model to the second network device.
14. The system of claim 13, wherein the cloud device is further configured to send a throughput prediction model, an energy consumption prediction model, or a transmission performance prediction model to the second network device.
15. A network device, the network device comprising a processor and a memory;
the memory is used for storing instructions or computer programs;
The processor being configured to execute the instructions or computer program in the memory to cause the network device to perform the method of any of claims 1-6 or to perform the method of any of claims 7-11.
16. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-6 or perform the method of any one of claims 7-11.
17. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method of any one of claims 1-6 or to perform the method of any one of claims 7-11.
CN202210159693.3A 2022-02-21 2022-02-21 Method, system and equipment for adjusting running state of network equipment Pending CN116668210A (en)

Publication of CN116668210A: 2023-08-29 (legal status: pending).

Similar Documents

Publication Publication Date Title
Pei et al. Optimal VNF placement via deep reinforcement learning in SDN/NFV-enabled networks
US20230195499A1 (en) Technologies for deploying virtual machines in a virtual network function infrastructure
Assefa et al. A survey of energy efficiency in SDN: Software-based methods and optimization models
CN104081718B (en) For the network controller of remote system administration
Wu et al. Orchestrating bulk data transfers across geo-distributed datacenters
CN102055667B (en) Methods and apparatus for configuring virtual network switch
WO2020149786A1 (en) Dynamic deployment of network applications having performance and reliability guarantees in large computing networks
US11574241B2 (en) Adaptive threshold selection for SD-WAN tunnel failure prediction
CN108667777B (en) Service chain generation method and network function orchestrator NFVO
US20210042578A1 (en) Feature engineering orchestration method and apparatus
CN114500218B (en) Method and device for controlling network equipment
WO2011137187A2 (en) Virtual topology adaptation for resource optimization in telecommunication networks
US10091063B2 (en) Technologies for directed power and performance management
CN113645146B (en) New stream density-based software defined network controller load balancing method and system
Tadesse et al. Energy-efficient traffic allocation in SDN-based backhaul networks: Theory and implementation
CN107079392B (en) System power management and optimization in a telecommunications system
US20140047260A1 (en) Network management system, network management computer and network management method
US20240022501A1 (en) Data Packet Sending Method and Device
CN116668210A (en) Method, system and equipment for adjusting running state of network equipment
WO2023155904A1 (en) Method and apparatus for adjusting operating state of network device, and related device
Tao et al. Adaptive VNF scaling approach with proactive traffic prediction in NFV-enabled clouds
CN114205254B (en) Flow monitoring method, device, integrated circuit, network equipment and network system
Chen et al. Poster: Chameleon: Automatic and Adaptive Tuning for DCQCN Parameters in RDMA Networks
Gandotra et al. A comprehensive survey of energy-efficiency approaches in wired networks
Wang et al. Machine Learning Empowered Intelligent Data Center Networking

Legal Events

Date Code Title Description
PB01 Publication