CN108718249A - SDN-based network acceleration method and apparatus, and computer-readable storage medium - Google Patents

SDN-based network acceleration method and apparatus, and computer-readable storage medium

Info

Publication number
CN108718249A
CN108718249A
Authority
CN
China
Prior art keywords
sdn
network
inference model
network controller
accelerator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810395796.3A
Other languages
Chinese (zh)
Inventor
熊常春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Vcmy Technology Co Ltd
Original Assignee
Guangzhou Vcmy Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Vcmy Technology Co Ltd filed Critical Guangzhou Vcmy Technology Co Ltd
Priority to CN201810395796.3A priority Critical patent/CN108718249A/en
Publication of CN108718249A publication Critical patent/CN108718249A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0823Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L41/083Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for increasing network speed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/145Network analysis or design involving simulating, designing, planning or modelling of a network

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides an SDN-based network acceleration method and apparatus, and a computer-readable storage medium. The method includes: feeding historical data flows collected in the SDN network into an AI accelerator to train a deep learning model, generating a first inference model; deploying the first inference model in an SDN cloud network controller; the SDN cloud network controller performing logical inference on data flows collected in real time according to the first inference model to obtain control inference results; and the SDN cloud network controller executing the control inference results in the SDN network. By combining artificial intelligence, the method achieves self-optimizing acceleration of the SDN network, increases its operating speed, and reduces its operating cost.

Description

SDN-based network acceleration method and apparatus, and computer-readable storage medium
Technical field
The present invention relates to the technical field of SDN networks, and in particular to an SDN-based network acceleration method and apparatus and a computer-readable storage medium.
Background technology
A software-defined network (Software Defined Network, SDN) is a novel network architecture and a way of implementing network virtualization. Its core technology, OpenFlow, separates the control plane of network devices from the data plane, enabling flexible control of network traffic and making the network, as a pipeline, more intelligent.
However, as the convergence of ICT industry-chain architectures deepens, as the cloud-based reconstruction of networks accelerates, and as new standards and technologies evolve, network operation faces growing pressure and challenges, particularly in reducing operating costs and keeping SDN networks running at high speed. Traditional SDN networks are adjusted manually and cannot meet future demands for efficient operation. How to achieve self-optimizing acceleration of SDN networks is therefore a technical problem that those skilled in the art urgently need to solve.
Invention content
The object of the present invention is to provide an SDN-based network acceleration method and apparatus and a computer-readable storage medium that, by combining artificial intelligence, achieve self-optimizing acceleration of the SDN network, increase its operating speed, and reduce its operating cost.
An embodiment of the present invention provides an SDN-based network acceleration method, including:
feeding historical data flows collected in the SDN network into an AI accelerator to train a deep learning model, generating a first inference model;
deploying the first inference model in an SDN cloud network controller;
the SDN cloud network controller performing logical inference on data flows collected in real time according to the first inference model to obtain control inference results;
the SDN cloud network controller executing the control inference results in the SDN network.
Preferably, the SDN-based network acceleration method further includes:
the SDN cloud network controller sending the data flows collected in a set period, its own configuration parameters, and its data-flow forwarding rules to the AI accelerator;
the AI accelerator retraining the first inference model according to the data flows collected in the set period, the configuration parameters of the SDN cloud network controller, and the data-flow forwarding rules, obtaining an iteratively optimized first inference model;
updating the iteratively optimized first inference model into the SDN cloud network controller.
Preferably, feeding the historical data flows collected in the SDN network into the AI accelerator to train the deep learning model specifically includes:
performing data cleansing and preprocessing on the historical data flows to generate standardized data samples, and storing them in an acceleration storage module;
the AI accelerator obtaining data samples from the acceleration storage module according to preset configuration parameters of the SDN cloud network controller and performing feature mining to obtain first feature information;
training the deep learning model with the first feature information to generate the first inference model.
Preferably, the preprocessing includes one or more of the following: missing-value handling, feature discretization, feature combination, and feature selection.
Preferably, the SDN-based network acceleration method further includes:
the AI accelerator obtaining data samples from the acceleration storage module according to preset forwarding parameters of the network elements and performing feature mining to obtain second feature information;
training the deep learning model with the second feature information to generate a second inference model;
deploying the second inference model in the multiple network elements connected to the SDN cloud network controller;
the multiple network elements performing logical inference on data flows received in real time according to the second inference model to obtain forwarding inference results;
the multiple network elements executing the forwarding inference results in the SDN network.
Preferably, the deep learning model is built with one or more of the following deep learning frameworks or algorithms: Spark ML, MLlib, Deeplearning4j, TensorFlow, Caffe, CNTK, Theano, and Torch.
Preferably, the historical data flows include network bandwidth parameters, link load parameters, and link delay parameters.
An embodiment of the present invention further provides an SDN-based network acceleration apparatus, including a data input module, an AI accelerator, a model deployment module, and an SDN cloud network controller;
the data input module is configured to feed historical data flows collected in the SDN network into the AI accelerator to train a deep learning model, generating a first inference model;
the model deployment module is configured to deploy the first inference model in the SDN cloud network controller;
the SDN cloud network controller is configured to perform logical inference on data flows collected in real time according to the first inference model to obtain control inference results;
the SDN cloud network controller is configured to execute the control inference results in the SDN network.
An embodiment of the present invention further provides an SDN-based network acceleration apparatus including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the above SDN-based network acceleration method when executing the computer program.
An embodiment of the present invention further provides a computer-readable storage medium including a stored computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium resides is controlled to execute the above SDN-based network acceleration method.
Compared with the prior art, the SDN-based network acceleration method provided by the embodiments of the present invention has the following advantageous effects. The method includes: feeding historical data flows collected in the SDN network into an AI accelerator to train a deep learning model, generating a first inference model; deploying the first inference model in an SDN cloud network controller; the SDN cloud network controller performing logical inference on data flows collected in real time according to the first inference model to obtain control inference results; and the SDN cloud network controller executing the control inference results in the SDN network. By combining artificial intelligence, the method achieves self-optimizing acceleration of the SDN network, increases its operating speed, and reduces its operating cost. The embodiments of the present invention further provide an SDN-based network acceleration apparatus and a computer-readable storage medium.
Description of the drawings
Fig. 1 is a flowchart of an SDN-based network acceleration method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of an SDN-based network acceleration apparatus provided by an embodiment of the present invention.
Specific implementation mode
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flowchart of an SDN-based network acceleration method provided by an embodiment of the present invention, the method includes:
S100: feeding historical data flows collected in the SDN network into an AI accelerator to train a deep learning model, generating a first inference model;
S200: deploying the first inference model in an SDN cloud network controller;
S300: the SDN cloud network controller performing logical inference on data flows collected in real time according to the first inference model to obtain control inference results;
S400: the SDN cloud network controller executing the control inference results in the SDN network.
In this embodiment, introducing the AI accelerator into the SDN cloud network controller quickly gives the infrastructure layer of the SDN network AI training and inference capabilities at different levels, enables cross-domain analysis, and meets the need for centralized training and inference of network-wide strategies and algorithm models, thereby achieving self-optimizing acceleration of the SDN network, increasing its operating speed, and reducing its operating cost. The AI accelerator may be a GPU/FPGA cluster deployed independently of the SDN cloud network controller, a CPU/GPU/FPGA deployment integrated with the CPU in the SDN cloud network controller in a set ratio, or ASIC hardware supporting AI deployed in the SDN cloud network controller.
The control inference results include updated configuration parameters for the SDN cloud network controller and data-flow forwarding rules. According to the control inference results, the SDN cloud network controller adjusts its current configuration parameters to the updated configuration parameters in real time, so that it issues the data flows collected in real time to the network elements according to the updated configuration parameters and the data-flow forwarding rules.
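The train-deploy-infer-execute flow of steps S100 to S400 can be sketched as follows. This is a toy illustration only: the "model" is a hand-rolled bandwidth-threshold closure, and all names (`SdnCloudController`, `train_inference_model`, the flow fields) are hypothetical stand-ins, not part of the patent.

```python
# Hypothetical sketch of steps S100-S400: train on historical flows in an
# AI accelerator, deploy the model to the SDN cloud network controller,
# run inference on live flows, and execute the resulting control actions.

def train_inference_model(historical_flows):
    """S100: toy 'training' -- learn a bandwidth threshold from history."""
    avg_bw = sum(f["bandwidth"] for f in historical_flows) / len(historical_flows)
    # The 'model' is a closure mapping a flow to a control decision.
    def model(flow):
        return "reroute" if flow["bandwidth"] > avg_bw else "keep"
    return model

class SdnCloudController:
    def __init__(self):
        self.model = None
        self.executed = []

    def deploy(self, model):            # S200
        self.model = model

    def infer(self, live_flow):         # S300
        return self.model(live_flow)

    def execute(self, result, flow):    # S400
        self.executed.append((flow["id"], result))

history = [{"bandwidth": 10}, {"bandwidth": 30}]
ctrl = SdnCloudController()
ctrl.deploy(train_inference_model(history))
flow = {"id": "f1", "bandwidth": 50}
ctrl.execute(ctrl.infer(flow), flow)
print(ctrl.executed)   # [('f1', 'reroute')]
```

A real deployment would replace the closure with a trained deep learning model and the `executed` list with configuration updates and flow-rule pushes to network elements.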
In an optional embodiment, the SDN-based network acceleration method further includes:
the SDN cloud network controller sending the data flows collected in a set period, its own configuration parameters, and its data-flow forwarding rules to the AI accelerator;
the AI accelerator retraining the first inference model according to the data flows collected in the set period, the configuration parameters of the SDN cloud network controller, and the data-flow forwarding rules, obtaining an iteratively optimized first inference model;
updating the iteratively optimized first inference model into the SDN cloud network controller.
In this embodiment, retraining the first inference model with the data flows collected in the set period, the configuration parameters of the SDN cloud network controller, and the data-flow forwarding rules achieves iterative optimization of the first inference model, thereby achieving automatically optimized control of the SDN cloud network controller.
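The closed retraining loop described above can be sketched minimally. The blending update, the `model_state` dictionary, and the function names are all invented for illustration; the patent specifies only that collected flows, configuration parameters, and forwarding rules feed a retraining pass whose result is pushed back to the controller.

```python
# Hypothetical sketch of the closed loop: the controller periodically sends
# flows collected over a set period plus its current configuration to the
# AI accelerator, which retrains the first inference model; the updated
# model is then deployed back into the controller.

def retrain(model_state, flows, config, rules):
    """Toy 'retraining': fold new flow statistics into the model state."""
    new_avg = sum(f["bandwidth"] for f in flows) / len(flows)
    # Blend old and new estimates -- one possible iterative update rule.
    model_state["threshold"] = 0.5 * model_state["threshold"] + 0.5 * new_avg
    model_state["version"] += 1
    return model_state

state = {"threshold": 20.0, "version": 1}
collected = [{"bandwidth": 40}, {"bandwidth": 60}]
state = retrain(state, collected, config={"qos": "high"}, rules=["fwd:any"])
print(state)   # {'threshold': 35.0, 'version': 2}
```

In a real system the retrained model would be a new set of neural-network weights, and the version bump would correspond to redeploying it into the SDN cloud network controller.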
In an optional embodiment, feeding the historical data flows collected in the SDN network into the AI accelerator to train the deep learning model specifically includes:
performing data cleansing and preprocessing on the historical data flows to generate standardized data samples, and storing them in an acceleration storage module;
the AI accelerator obtaining data samples from the acceleration storage module according to preset configuration parameters of the SDN cloud network controller and performing feature mining to obtain first feature information;
training the deep learning model with the first feature information to generate the first inference model.
In an optional embodiment, the preprocessing includes one or more of the following: missing-value handling, feature discretization, feature combination, and feature selection.
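The four preprocessing steps just listed can be illustrated on toy flow records. The field names (`bandwidth`, `link_load`, `link_delay`), the bucket boundaries, and the congestion feature are all made up for the sketch; the patent names only the step categories.

```python
# Hypothetical sketch of the listed preprocessing: missing-value handling,
# feature discretization, feature combination, and feature selection,
# applied to raw flow records before training.

def preprocess(records):
    # Missing-value handling: fill absent link_delay with the column mean.
    delays = [r["link_delay"] for r in records if r["link_delay"] is not None]
    mean_delay = sum(delays) / len(delays)
    for r in records:
        if r["link_delay"] is None:
            r["link_delay"] = mean_delay

    samples = []
    for r in records:
        # Feature discretization: bucket bandwidth into three levels.
        bw_bucket = min(r["bandwidth"] // 50, 2)
        # Feature combination: delay-load product as a congestion signal.
        congestion = r["link_delay"] * r["link_load"]
        # Feature selection: keep only the engineered features.
        samples.append({"bw_bucket": bw_bucket, "congestion": congestion})
    return samples

raw = [
    {"bandwidth": 120, "link_load": 0.5, "link_delay": 10.0},
    {"bandwidth": 30,  "link_load": 0.2, "link_delay": None},
]
print(preprocess(raw))
# [{'bw_bucket': 2, 'congestion': 5.0}, {'bw_bucket': 0, 'congestion': 2.0}]
```

The standardized samples produced this way would be written to the acceleration storage module for later feature mining.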
In an optional embodiment, the SDN-based network acceleration method further includes:
the AI accelerator obtaining data samples from the acceleration storage module according to preset forwarding parameters of the network elements and performing feature mining to obtain second feature information;
training the deep learning model with the second feature information to generate a second inference model;
deploying the second inference model in the multiple network elements connected to the SDN cloud network controller;
the multiple network elements performing logical inference on data flows received in real time according to the second inference model to obtain forwarding inference results;
the multiple network elements executing the forwarding inference results in the SDN network.
In this embodiment, the second inference model can also be deployed to the network elements of the network infrastructure layer, for example by embedding the AI accelerator in network devices such as wireless base stations, routers, and switches, further achieving automatically optimized acceleration and artificial-intelligence control of the SDN network.
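The per-network-element inference described above can be sketched as follows. The destination-prefix rule standing in for the second inference model, and the class and port names, are invented for illustration.

```python
# Hypothetical sketch of the second inference model pushed to network
# elements (NEs): each NE applies the model to flows it receives and
# executes the resulting forwarding decision locally.

class NetworkElement:
    def __init__(self, name, model):
        self.name, self.model, self.forwarded = name, model, []

    def on_flow(self, flow):
        port = self.model(flow)            # forwarding inference result
        self.forwarded.append((flow["dst"], port))

# Toy 'second inference model': pick an egress port by destination prefix.
def second_model(flow):
    return "port1" if flow["dst"].startswith("10.") else "port2"

nes = [NetworkElement(f"ne{i}", second_model) for i in range(2)]
for ne in nes:
    ne.on_flow({"dst": "10.0.0.5"})
print(nes[0].forwarded)   # [('10.0.0.5', 'port1')]
```

Running inference at the network element keeps forwarding decisions local instead of round-tripping every flow through the controller, which is the point of deploying the second model at the infrastructure layer.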
In an optional embodiment, the deep learning model is built with one or more of the following deep learning frameworks or algorithms: Spark ML, MLlib, Deeplearning4j, TensorFlow, Caffe, CNTK, Theano, and Torch.
In an optional embodiment, the historical data flows include network bandwidth parameters, link load parameters, and link delay parameters.
Specifically, the deep learning model can be understood as a neural network model in which each neuron is a logistic regression unit that takes x1, x2, …, xn as input and outputs y = f(Σᵢ wᵢxᵢ − θ), where f is the activation function, W is the neural network parameter vector, and θ is a comparison threshold. The output yᵢ of each neuron in one layer serves as an input to each neuron in the next layer. The deep learning model uses the sigmoid transfer function f(x) = 1/(1 + e^(−x)) and back-propagates the error function E = ½ Σᵢ (tᵢ − yᵢ)², continually adjusting the network parameters W and thresholds θ until the error function E reaches a minimum; the deep learning process then ends, the parameters W and thresholds θ of every neuron in the network are determined, and the trained deep learning model is obtained. Here tᵢ is the desired output and yᵢ is the neuron output. Taking the first feature information as the input of the first-layer neurons of the deep learning model and iterating the forward pass through the layers of neurons yields parameters W1 and thresholds θ1 for every neuron in the network; adjusting each neuron's parameters W and thresholds θ to W1 and θ1 gives the first inference model. Similarly, taking the second feature information as the input of the first-layer neurons and iterating the forward pass yields parameters W2 and thresholds θ2; adjusting each neuron's W and θ to W2 and θ2 gives the second inference model.
Referring to Fig. 2, which is a schematic diagram of an SDN-based network acceleration apparatus provided by an embodiment of the present invention, the apparatus includes a data input module 1, an AI accelerator 2, a model deployment module 3, and an SDN cloud network controller 4;
the data input module 1 is configured to feed historical data flows collected in the SDN network into the AI accelerator 2 to train a deep learning model, generating a first inference model;
the model deployment module 3 is configured to deploy the first inference model in the SDN cloud network controller 4;
the SDN cloud network controller 4 is configured to perform logical inference on data flows collected in real time according to the first inference model to obtain control inference results;
the SDN cloud network controller 4 is configured to execute the control inference results in the SDN network.
In this embodiment, introducing the AI accelerator into the SDN cloud network controller quickly gives the infrastructure layer of the SDN network AI training and inference capabilities at different levels, enables cross-domain analysis, and meets the need for centralized training and inference of network-wide strategies and algorithm models. The AI accelerator may be a GPU/FPGA cluster deployed independently of the SDN cloud network controller, a CPU/GPU/FPGA deployment integrated with the CPU in the SDN cloud network controller in a set ratio, or ASIC hardware supporting AI deployed in the SDN cloud network controller.
The control inference results include updated configuration parameters for the SDN cloud network controller and data-flow forwarding rules. According to the control inference results, the SDN cloud network controller adjusts its current configuration parameters to the updated configuration parameters in real time, so that it issues the data flows collected in real time to the network elements according to the updated configuration parameters and the data-flow forwarding rules.
In an optional embodiment, the SDN cloud network controller is further configured to send the data flows collected in a set period, its own configuration parameters, and its data-flow forwarding rules to the AI accelerator;
the AI accelerator is configured to retrain the first inference model according to the data flows collected in the set period, the configuration parameters of the SDN cloud network controller, and the data-flow forwarding rules, obtaining an iteratively optimized first inference model;
the model deployment module is further configured to update the iteratively optimized first inference model into the SDN cloud network controller.
In this embodiment, retraining the first inference model with the data flows collected in the set period, the configuration parameters of the SDN cloud network controller, and the data-flow forwarding rules achieves iterative optimization of the first inference model, thereby achieving automatically optimized control of the SDN cloud network controller.
In an optional embodiment, the data input module includes a data preprocessing unit;
the data preprocessing unit is configured to perform data cleansing and preprocessing on the historical data flows to generate standardized data samples and store them in an acceleration storage module;
the AI accelerator is configured to obtain data samples from the acceleration storage module according to preset configuration parameters of the SDN cloud network controller and perform feature mining to obtain first feature information;
the AI accelerator is configured to train the deep learning model with the first feature information to generate the first inference model.
In an optional embodiment, the preprocessing includes one or more of the following: missing-value handling, feature discretization, feature combination, and feature selection.
In an optional embodiment, the SDN-based network acceleration apparatus further includes network elements;
the AI accelerator is configured to obtain data samples from the acceleration storage module according to preset forwarding parameters of the network elements and perform feature mining to obtain second feature information;
the model generation unit is configured to train the deep learning model with the second feature information to generate the second inference model;
the model deployment module is configured to deploy the second inference model in the multiple network elements connected to the SDN cloud network controller;
the multiple network elements are configured to perform logical inference on data flows received in real time according to the second inference model to obtain forwarding inference results;
the multiple network elements execute the forwarding inference results in the SDN network.
In this embodiment, the second inference model can also be deployed to the network elements of the network infrastructure layer, for example by embedding the AI accelerator in network devices such as wireless base stations, routers, and switches, further achieving automatically optimized acceleration and artificial-intelligence control of the SDN network.
In an optional embodiment, the deep learning model is built with one or more of the following deep learning frameworks or algorithms: Spark ML, MLlib, Deeplearning4j, TensorFlow, Caffe, CNTK, Theano, and Torch.
In an optional embodiment, the historical data flows include network bandwidth parameters, link load parameters, and link delay parameters.
Specifically, the deep learning model can be understood as a neural network model in which each neuron is a logistic regression unit that takes x1, x2, …, xn as input and outputs y = f(Σᵢ wᵢxᵢ − θ), where f is the activation function, W is the neural network parameter vector, and θ is a comparison threshold. The output yᵢ of each neuron in one layer serves as an input to each neuron in the next layer. The deep learning model uses the sigmoid transfer function f(x) = 1/(1 + e^(−x)) and back-propagates the error function E = ½ Σᵢ (tᵢ − yᵢ)², continually adjusting the network parameters W and thresholds θ until the error function E reaches a minimum; the deep learning process then ends, the parameters W and thresholds θ of every neuron in the network are determined, and the deep learning model is obtained. Here tᵢ is the desired output and yᵢ is the neuron output. Taking the first feature information as the input of the first-layer neurons of the deep learning model and iterating the forward pass through the layers of neurons yields parameters W1 and thresholds θ1 for every neuron in the network; adjusting each neuron's parameters W and thresholds θ to W1 and θ1 gives the first inference model. Similarly, taking the second feature information as the input of the first-layer neurons and iterating the forward pass yields parameters W2 and thresholds θ2; adjusting each neuron's W and θ to W2 and θ2 gives the second inference model.
An embodiment of the present invention further provides an SDN-based network acceleration apparatus including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the above SDN-based network acceleration method when executing the computer program.
Illustratively, the computer program may be divided into one or more modules/units that are stored in the memory and executed by the processor to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution process of the computer program in the SDN-based network acceleration apparatus. For example, the computer program may be divided into a data input module, an AI accelerator, a model deployment module, and an SDN cloud network controller, with the specific functions of each module as follows: the data input module is configured to feed historical data flows collected in the SDN network into the AI accelerator to train a deep learning model, generating a first inference model; the model deployment module is configured to deploy the first inference model in the SDN cloud network controller; the SDN cloud network controller is configured to perform logical inference on data flows collected in real time according to the first inference model to obtain control inference results.
The SDN-based network acceleration apparatus may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The SDN-based network acceleration apparatus may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the schematic diagram is only an example of the SDN-based network acceleration apparatus and does not limit it; the apparatus may include more or fewer components than shown, may combine certain components, or may use different components; for example, the SDN-based network acceleration apparatus may also include input/output devices, network access devices, buses, and the like.
The processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the SDN-based network acceleration apparatus and connects the various parts of the apparatus using various interfaces and lines.
The memory may be used to store the computer program and/or modules. The processor implements the various functions of the SDN-based network acceleration device by running or executing the computer program and/or modules stored in the memory and by calling data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created according to use of the device (such as audio data, a phone book, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
If the integrated modules/units of the SDN-based network acceleration device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flows in the above embodiment methods through a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or certain intermediate forms. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, a computer-readable medium does not include electric carrier signals and telecommunication signals.
An embodiment of the present invention further provides a computer-readable storage medium. The computer-readable storage medium includes a stored computer program, wherein, when the computer program runs, a device where the computer-readable storage medium is located is controlled to execute the above SDN-based network acceleration method.
Compared with the prior art, the advantageous effects of the SDN-based network acceleration method provided by the embodiments of the present invention are as follows. The SDN-based network acceleration method includes: inputting historical data streams collected in the SDN network into an AI accelerator to perform model training on a deep learning model, generating a first inference model; deploying the first inference model in an SDN cloud network controller; the SDN cloud network controller performing logical inference on data streams collected in real time according to the first inference model, obtaining control inference results; and the SDN cloud network controller executing the control inference results in the SDN network. The above method combines artificial intelligence to achieve self-optimizing acceleration of the SDN network, improving its operating speed and reducing its operating cost. The embodiments of the present invention further provide an SDN-based network acceleration device and a computer-readable storage medium.
It should be noted that the device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the device embodiments provided by the present invention, the connection relationships between modules indicate communication connections between them, which may be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement this without creative effort.
The above is a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications are also regarded as falling within the protection scope of the present invention.

Claims (10)

1. A network acceleration method based on an SDN network, characterized by comprising:
inputting historical data streams collected in the SDN network into an AI accelerator to perform model training on a deep learning model, generating a first inference model;
deploying the first inference model in an SDN cloud network controller;
the SDN cloud network controller performing logical inference on data streams collected in real time according to the first inference model, obtaining control inference results; and
the SDN cloud network controller executing the control inference results in the SDN network.
2. The network acceleration method based on an SDN network according to claim 1, characterized in that the method further comprises:
the SDN cloud network controller sending data streams collected within a set period of time, configuration parameters of the SDN cloud network controller, and data stream forwarding rules to the AI accelerator;
the AI accelerator performing model training on the first inference model again according to the data streams collected within the set period of time, the configuration parameters of the SDN cloud network controller, and the data stream forwarding rules, obtaining an iteratively optimized first inference model; and
updating the iteratively optimized first inference model into the SDN cloud network controller.
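Illustratively, the closed retraining loop of claim 2 could be sketched as follows. The function names, the configuration key, and the stand-in "training" arithmetic are assumptions made for this illustration only; the patent specifies no code:

```python
# Hypothetical sketch of claim 2's iterative optimization loop.
# All names and the update rule are illustrative assumptions.

def retrain(model, period_streams, controller_config, forwarding_rules):
    """Re-train the first inference model on data the controller collected
    within a set period, plus its configuration and forwarding rules."""
    # Stand-in "training": shift the model's threshold toward the mean
    # load observed in the new period, weighted by a config factor.
    mean_load = sum(s["load"] for s in period_streams) / len(period_streams)
    lr = controller_config.get("learning_rate", 0.5)
    model = dict(model)  # do not mutate the currently deployed copy
    model["load_threshold"] += lr * (mean_load - model["load_threshold"])
    model["rules_seen"] = len(forwarding_rules)
    return model  # the iteratively optimized first inference model

def update_controller(controller, model):
    """Push the optimized model back into the SDN cloud network controller."""
    controller["model"] = model
```

Note the deliberate copy before mutation: the deployed model keeps serving real-time inference until the optimized version is explicitly pushed back, mirroring the update step of the claim.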
3. The network acceleration method based on an SDN network according to claim 1, characterized in that the inputting of historical data streams collected in the SDN network into the AI accelerator to perform model training on the deep learning model specifically comprises:
performing data cleansing and preprocessing on the historical data streams, generating standardized data samples and storing them in an acceleration storage module;
the AI accelerator obtaining data samples from the acceleration storage module according to preset configuration parameters of the SDN cloud network controller and performing feature mining on them, obtaining first feature information; and
performing model training on the deep learning model using the first feature information, generating the first inference model.
4. The network acceleration method based on an SDN network according to claim 3, characterized in that the preprocessing comprises one or more of the following procedures: missing-value processing, feature discretization processing, feature combination processing, and feature selection.
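The four preprocessing procedures named in claim 4 could be sketched as follows. These are illustrative pure-Python stand-ins with assumed field names, not the patent's implementation:

```python
# Hypothetical stand-ins for the preprocessing steps of claim 4.

def fill_missing(samples, default=0.0):
    """Missing-value processing: replace None fields with a default."""
    return [{k: (default if v is None else v) for k, v in s.items()}
            for s in samples]

def discretize(value, bins):
    """Feature discretization: map a continuous value to a bin index."""
    return sum(1 for b in bins if value >= b)

def combine_features(sample):
    """Feature combination: derive a new feature from existing ones
    (here, an assumed load x delay product)."""
    out = dict(sample)
    out["load_delay"] = sample["load"] * sample["delay"]
    return out

def select_features(sample, keep):
    """Feature selection: retain only the named features."""
    return {k: sample[k] for k in keep}
```

For instance, a link load of 5.0 against bin edges [1.0, 3.0, 10.0] falls into bin index 2, and combining load 2.0 with delay 3.0 yields the derived feature 6.0.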
5. The network acceleration method based on an SDN network according to claim 3, characterized in that the method further comprises:
the AI accelerator obtaining data samples from the acceleration storage module according to preset forwarding parameters of network elements and performing feature mining on them, obtaining second feature information;
performing model training on the deep learning model using the second feature information, generating a second inference model;
deploying the second inference model in a plurality of network elements connected to the SDN cloud network controller;
the plurality of network elements performing logical inference on data streams received in real time according to the second inference model, obtaining forwarding inference results; and
the plurality of network elements executing the forwarding inference results in the SDN network.
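A sketch of claim 5's element-side inference, where each network element applies the second inference model to traffic it receives in real time, might look like this (all names and the port-selection rule are hypothetical):

```python
# Hypothetical sketch of claim 5: the second inference model deployed
# to network elements, each inferring a forwarding decision per stream.

class NetworkElement:
    def __init__(self, name):
        self.name = name
        self.model = None

    def infer_forwarding(self, stream):
        """Forwarding inference result: choose an output port from the
        model's (assumed) delay-based rule."""
        fast = stream["delay"] < self.model["delay_limit"]
        port = self.model["fast_port"] if fast else self.model["slow_port"]
        return {"element": self.name, "out_port": port}

def deploy_second_model(model, elements):
    """Deploy the second inference model in all elements connected
    to the SDN cloud network controller."""
    for element in elements:
        element.model = model
```

In this sketch the controller trains once and deploys everywhere, while the per-stream forwarding decision is made locally at each element, which is what lets the claimed scheme offload inference from the controller.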
6. The network acceleration method based on an SDN network according to claim 1, characterized in that the deep learning model comprises one or more of the following deep learning frameworks or algorithms: Spark ML, MLlib, Deeplearning4j, TensorFlow, Caffe, CNTK, Theano, and Torch.
7. The network acceleration method based on an SDN network according to claim 1, characterized in that the historical data streams comprise bandwidth parameters, link load parameters, and link delay parameters of the network.
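Claim 7 lists the fields a historical data stream carries. A sample record might look like the following; the schema, field names, and the derived check are illustrative assumptions, not defined by the patent:

```python
# Illustrative record for one sampled flow, carrying the three
# parameters claim 7 names: bandwidth, link load, and link delay.
from dataclasses import dataclass

@dataclass
class FlowSample:
    bandwidth_mbps: float   # network bandwidth parameter
    link_load: float        # link load parameter (e.g. utilization 0..1)
    link_delay_ms: float    # link delay parameter

    def overloaded(self, load_limit=0.8):
        # Simple derived label a training pipeline might compute.
        return self.link_load > load_limit
```

Such records, collected over time, would form the historical data set fed to the AI accelerator for model training.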
8. A network acceleration device based on an SDN network, characterized by comprising: a data input module, an AI accelerator, a model deployment module, and an SDN cloud network controller; wherein
the data input module is configured to input historical data streams collected in the SDN network into the AI accelerator to perform model training on a deep learning model, generating a first inference model;
the model deployment module is configured to deploy the first inference model in the SDN cloud network controller;
the SDN cloud network controller is configured to perform logical inference on data streams collected in real time according to the first inference model, obtaining control inference results; and
the SDN cloud network controller is further configured to execute the control inference results in the SDN network.
9. A network acceleration device based on an SDN network, characterized by comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the network acceleration method based on an SDN network according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program, wherein, when the computer program runs, a device where the computer-readable storage medium is located is controlled to execute the network acceleration method based on an SDN network according to any one of claims 1 to 7.
CN201810395796.3A 2018-04-27 2018-04-27 Network accelerating method, device based on SDN network and computer readable storage medium Pending CN108718249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810395796.3A CN108718249A (en) 2018-04-27 2018-04-27 Network accelerating method, device based on SDN network and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN108718249A true CN108718249A (en) 2018-10-30

Family

ID=63899368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810395796.3A Pending CN108718249A (en) 2018-04-27 2018-04-27 Network accelerating method, device based on SDN network and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108718249A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106470168A (en) * 2015-08-22 2017-03-01 华为技术有限公司 A kind of data transmission method, the switch using the method and network control system
CN107547379A (en) * 2016-06-23 2018-01-05 华为技术有限公司 The method and relevant device of route test action are generated in software defined network
CN106330558A (en) * 2016-08-31 2017-01-11 哈尔滨工业大学(威海) Controller load prediction system and method applied to software defined network
CN107835201A (en) * 2017-12-14 2018-03-23 华中师范大学 Network attack detecting method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YU JUN et al.: "Overview of fine-grained image recognition methods based on user click data", Journal of Nanjing University of Information Science & Technology *
WANG XI, DIAO XINGLING: "Wei Leping points directly at the 'pain points' of NFV development: AI will push network reconstruction into a new stage", Communications World *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558952A (en) * 2018-11-27 2019-04-02 北京旷视科技有限公司 Data processing method, system, equipment and storage medium
CN109840501A (en) * 2019-01-31 2019-06-04 深圳市商汤科技有限公司 A kind of image processing method and device, electronic equipment, storage medium
WO2020172494A1 (en) * 2019-02-22 2020-08-27 Neureality Ltd. Directed and interconnected grid dataflow architecture
US11922304B2 (en) 2019-02-22 2024-03-05 Neureality Ltd. Remote artificial intelligence (AI) acceleration system
CN109905880A (en) * 2019-03-22 2019-06-18 苏州浪潮智能科技有限公司 A kind of network partitioning method, system and electronic equipment and storage medium
CN109905880B (en) * 2019-03-22 2020-05-29 苏州浪潮智能科技有限公司 Network partitioning method, system, electronic device and storage medium
WO2021068244A1 (en) * 2019-10-12 2021-04-15 深圳鲲云信息科技有限公司 Local data stream acceleration method, data stream acceleration system, and computer device
CN113272792A (en) * 2019-10-12 2021-08-17 深圳鲲云信息科技有限公司 Local data stream acceleration method, data stream acceleration system and computer equipment
CN113381865A (en) * 2020-02-25 2021-09-10 华为技术有限公司 Network operation and maintenance method, device and system
CN113259147A (en) * 2020-06-28 2021-08-13 中兴通讯股份有限公司 Network element management method, device, computer equipment and medium
WO2022133865A1 (en) * 2020-12-24 2022-06-30 Huawei Technologies Co., Ltd. Methods and systems for artificial intelligence based architecture in wireless network
WO2022237484A1 (en) * 2021-05-12 2022-11-17 华为云计算技术有限公司 Inference system and method, apparatus, and related device

Similar Documents

Publication Publication Date Title
CN108718249A (en) Network accelerating method, device based on SDN network and computer readable storage medium
CN109669768B (en) Resource allocation and task scheduling method for edge cloud combined architecture
CN108809694A (en) Arranging service method, system, device and computer readable storage medium
CN113032904B (en) Model construction method, task allocation method, device, equipment and medium
CN109993299A (en) Data training method and device, storage medium, electronic device
CN108111335B (en) A kind of method and system of scheduling and link virtual network function
CN108122027A (en) A kind of training method of neural network model, device and chip
CN109992404A (en) PC cluster resource regulating method, device, equipment and medium
US20200177495A1 (en) Route control method and route setting device
WO2004019556A8 (en) Method and system for configuration control in telecommunications networks
CN108718296A (en) Network management-control method, device and computer readable storage medium based on SDN network
Dalgkitsis et al. Dynamic resource aware VNF placement with deep reinforcement learning for 5G networks
CN113268341A (en) Distribution method, device, equipment and storage medium of power grid edge calculation task
CN114611634A (en) Behavior type determination method and device, storage medium and electronic device
CN108509615A (en) Common recognition method for building up, device and readable storage medium storing program for executing based on lottery mechanism
CN110084406A (en) Load forecasting method and device based on self-encoding encoder and meta learning strategy
CN116187429A (en) End Bian Yun collaborative synchronization federal learning training algorithm based on segmentation learning
Kusetogullari et al. A reduced uncertainty-based hybrid evolutionary algorithm for solving dynamic shortest-path routing problem
CN112884142B (en) Neural network training method, target detection method, device, equipment and storage medium
CN109981330A (en) Router robot control method and device and router robot
CN115633083A (en) Power communication network service arrangement method, device and storage medium
CN110290206A (en) A kind of distributed computing system and method for cafe environment
CN109151895B (en) Data transmission method, device, server and network center node
CN108737130B (en) Network flow prediction device and method based on neural network
Maksymyuk et al. Artificial intelligence based 5G coverage design and optimization using deep generative adversarial neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181030