WO2023116422A1 - Network resource processing method, storage medium and electronic device - Google Patents

Network resource processing method, storage medium and electronic device

Info

Publication number
WO2023116422A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
network topology
network
service
node
Prior art date
Application number
PCT/CN2022/136995
Other languages
English (en)
French (fr)
Inventor
杨玺坤
王大江
屠要峰
叶友道
韩炳涛
王永成
骆庆开
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2023116422A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W16/00Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/22Traffic simulation tools or models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition

Definitions

  • A method for processing network resources, including: collecting network topology data and service data of network resources; performing enhancement processing on the network topology data and the service data to generate sample data; extracting key features of the sample data; and training the neural network model according to the key features to obtain a trained neural network model.
  • An electronic device including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • FIG. 5 is a schematic diagram of a neural network for extracting network and service features according to an embodiment of the present disclosure
  • FIG. 6 is a flowchart of a data enhancement method for network topology and service change trends according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a network map according to an embodiment of the disclosure.
  • FIG. 8 is a first training flowchart of the performance improvement obtained by combining deep learning with a heuristic algorithm according to an embodiment of the present disclosure
  • the memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the network resource processing method in the embodiment of the present disclosure; the processor 102 runs the computer program stored in the memory 104, thereby executing various functional applications and service-chain address-pool slicing processing, that is, implementing the above method.
  • the memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include a memory that is remotely located relative to the processor 102, and these remote memories may be connected to the mobile terminal through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 106 is used to receive or transmit data via a network.
  • the specific example of the above network may include a wireless network provided by the communication provider of the mobile terminal.
  • the transmission device 106 includes a network interface controller (NIC for short), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, referred to as RF) module, which is used to communicate with the Internet in a wireless manner.
  • FIG. 2 is a flowchart of a network resource processing method according to an embodiment of the present disclosure. As shown in FIG. 2, the process includes the following steps:
  • Step S202, collecting network topology data and service data of network resources;
  • Step S204, performing enhancement processing on the network topology data and the service data to generate sample data;
  • Step S206, extracting key features of the sample data;
  • Step S208, training the neural network model according to the key features to obtain a trained neural network model.
  • Through the above steps, the problem in the related art that results determined by a heuristic search algorithm are unstable and unable to adapt to changes in network and service scenarios can be solved.
  • By enhancing the sample data, the trained neural network model is more stable and can adapt to changes in network and service scenarios.
  • The above step S204 may specifically include: performing enhancement processing on the network topology data and the service data, and combining the processed network topology data with the processed service data to generate the associated sample data. Further, a sampling ratio of the network topology corresponding to each change order is set according to the change order of the network topology, where the change order refers to the number of changed nodes or edges of the network topology; the sampling ratio is adjusted according to the node attributes of the changed nodes in the network topology, and network topology samples are generated according to the adjusted sampling ratio.
  • For each of the multiple services, the following steps are repeated until the creation of the multiple services is completed, where the service being created is called the current service: obtain the current optimization progress information, where the current optimization progress information includes established services and services to be built; determine, through the currently trained neural network model, the revenue values of the child nodes of the current node of the current service; select the child node with the largest revenue value as the next-hop node; and update the current optimization progress information to obtain the updated current optimization progress information.
  • The trained neural network model can be used to make predictions about network resources, specifically: obtaining to-be-tested network topology data and to-be-tested service data of a to-be-tested network resource; performing enhancement processing on the to-be-tested network topology data and the to-be-tested service data to generate target sample data; extracting target key features of the target sample data; and inputting the target key features into the trained neural network model to obtain a search result of the to-be-tested network resource. By performing enhancement processing on the to-be-tested network topology data and the to-be-tested service data and then feeding them into a stable neural network model, the prediction result (that is, the search result) is more accurate.
  • The combination of deep learning and a heuristic algorithm can achieve a balance between speed and stability, so that a good optimization solution is obtained quickly and stably.
  • The entire network resource optimization process can be regarded as a process of gradually deploying each node of each service into the network topology while finally ensuring that the overall optimization indicator is optimal; a heuristic algorithm is used to probe the next-hop node of the service currently being created, to evaluate which node gives a better optimization effect.
  • Monte Carlo tree search is used as the chosen heuristic algorithm, but it is not limited to this; any other heuristic algorithm that can traverse and explore next-hop nodes can be substituted. The evaluation method is to directly use the deep neural network to make multiple predictions for each candidate node and decide the selection according to the prediction results; the loop continues until all services are created.
  • During the training process, sample data are created for each generated set of network topology and service data, and all the data are accumulated to train the neural network; after training, the performance of the neural network has been improved, so subsequent evaluations will be closer to the optimal solution and the collected training data will be better; through multiple cycles, the performance of the model is continuously improved, so that the overall result also reaches a better level.
  • The prediction process is similar to the training process: following the above procedure with the trained network model, only one cycle is needed to obtain the final result.
  • Because the network has learned good network features, it can predict a solution close to the optimal solution every time, which is equivalent to finding a position close to the optimal solution within a huge search space; the heuristic search therefore only needs to do a small amount of exploration around this position to increase the probability of finding the optimal solution, so each optimization run can stably obtain a good solution.
  • The exploration is more purposeful and only searches the part close to the optimal solution, so the number of invalid searches is greatly reduced and the speed is significantly improved.
  • FIG. 4 is a system architecture diagram of network resource optimization according to an embodiment of the present disclosure. As shown in FIG. 4, it includes a data processing subsystem, a model training subsystem, and a model inference subsystem.
  • The data processing subsystem is used to perform data enhancement for network topology and service change trends, for use in the model training stage; the neural network that extracts network and service features is introduced as an important component into the deep learning model that predicts the next-hop information of a service, and is applied in both the model training subsystem and the model inference subsystem.
  • Similarly, the performance improvement obtained by combining deep learning with a heuristic algorithm, as a means of improving the performance of the model training subsystem and the model inference subsystem, is also applied in both subsystems. Each subsystem is described in detail below.
  • the data processing subsystem is aimed at data enhancement of network topology and business change trends.
  • The existing technology cannot adapt to changes in network and service scenarios for two fundamental reasons. Both manual methods and heuristic methods only analyze or search a certain "fixed topology, fixed services" scenario and cannot extract and learn the domain knowledge inside the network topology and service information; therefore, when the scenario changes, for example when a topology node or a new service is added, it can only be treated as a brand-new scenario, whereas in fact the original scenario and the changed scenario share a great deal of common or very similar information, which has not been utilized in the prior art.
  • the data of real scenarios is often limited. Even with the existing deep learning technology, the internal characteristics of network topology and business information cannot be well learned under limited data. Therefore, to solve the problem of adaptability to scene changes, we must first ensure that the data is sufficient to represent the change information of the real scene, and the amount of data is sufficient.
  • the data enhancement in the embodiments of the present disclosure can generate high-quality data that is more in line with real scenarios according to the changing trend of network topology and services. Specifically, including:
  • the number of changes in the nodes or edges of the network topology is described as the "order" of network topology changes.
  • adding or deleting a node is a "first-order" change of the current node.
  • second-order and third-order changes of the network topology can be defined.
  • the sampling ratio is set as a distribution whose probability gradually decreases as the order increases, and the sum of the probabilities is 1. Sample generation is guided by this ratio.
  • the network topology is generally composed of core nodes and edge nodes.
  • The core node carries more service routes; once it changes (for example, it is deleted from the topology due to a fault), it will cause a great change in the final service optimization solution. Therefore, after the above sampling ratio is formulated, for core nodes, the sample probability of the "first-order" change is enlarged, and the probabilities of subsequent orders decrease in turn.
  • The change of the network topology can be summarized as a two-dimensional space of "position + action".
  • After the ratio and position are determined, and based on an analysis of the changes of a large number of existing deployed networks, the possible change trends of the network can be summarized into the following directions: the addition of nodes at core positions of the network; the addition of nodes at edge positions of the network; the deletion of a node at a certain position; and the addition of links, a trend that mostly appears on nodes with fewer links.
  • The design of the neural network for extracting network and service features directly determines the performance of the prediction results, and the key to the design is whether a data structure suited to the network and service data can be devised so that the data features are extracted well.
  • Network topology data mainly include node and link information, and each piece of information can be regarded as a multi-dimensional vector.
  • For example, node information includes the node ID, the number of links, and so on.
  • Service data can often be described as a [start point, end point, service attributes] vector.
  • Existing deep learning methods generally split the above data into many small parts, extract them with separate neural network units such as fully connected networks and convolutional neural networks, and finally simply concatenate or add them together.
  • However, the service features contain information about the network topology, and existing methods cannot extract such mutually fused information well.
  • Fig. 5 is a schematic diagram of a neural network for extracting network and business features according to an embodiment of the present disclosure.
  • As shown in FIG. 5, the service data and the topology data are first input separately. For the network topology data, a graph convolutional neural network (GCN) structure that can extract node information is selected.
  • Before the service data are processed, a node-information fusion layer is designed to replace the start-point and end-point information in the service data with the topology node embedding information for a preliminary fusion.
  • The fused service data are then passed to a Transformer Encoder for further feature extraction; finally, the service features and the topology features pass through their respective encoders to unify the dimensions and are concatenated to obtain the final fused feature.
  • the feature extraction structure of the embodiment of the present disclosure is well adapted to the unique structure of the network topology and service data, and through two fusions, the service feature and the network topology node feature are fully fused, achieving a good feature extraction effect.
  • This structure can be used as part of a complex neural network to specifically handle scenarios involving network and service data; the GCN network and Transformer network used inside this structure can be replaced with other network structures according to the specific problem without affecting the feature extraction effect of the overall structure; likewise, the encoder structure can also be designed and replaced according to the specific needs of users.
  • the number of nodes and links in the network determines the scale of the topology map, and the scale of the topology map further determines the path that may be selected for a service from the starting point to the end point.
  • As the numbers of nodes and links increase, the number of optional paths grows rapidly; coupled with the increase in the number of services, the number of candidate solutions for the overall optimization becomes very large.
  • this problem can actually be abstracted as: how to select the optimal optimization plan among the huge alternatives, and how to always select the optimal solution stably every time.
  • FIG. 6 is a flowchart of a data enhancement method for network topology and service change trends according to an embodiment of the present disclosure; as shown in FIG. 6, it includes:
  • S602. Perform preprocessing on the data, such as removing abnormal data, cleaning, and the like.
  • S610 Combine and store the generated service data and topology samples, and proceed to S604 to determine whether to continue generating.
  • Fig. 7 is a schematic diagram of a network map according to an embodiment of the present disclosure.
  • node 7 is a provincial-level route carrying the service exchange channels between the cities; nodes 3, 4, 8, 13, 14, and 17 are city-level nodes; the remaining nodes, such as 1, 2, 5, and 6, can be regarded as switching nodes inside the cities.
  • step S602 describes the preprocessing operation on the data, which mainly includes:
  • a physical link between network nodes can be divided into multiple logical links according to characteristics such as frequency and wavelength.
  • This complex topology is not a standard graph structure recognizable by computers, and it is difficult for both deep learning models and computer algorithms to process.
  • the physical topology map is expanded into a logical structure map that strictly conforms to the standard graph structure in computer data structures;
  • basic network indicators such as bandwidth, delay, and overhead are recorded according to the different service scenarios and organized into a serializable dictionary structure, such as JSON, that is easy to transmit and is provided to the feature extraction module.
  • steps S604 to S610 describe a sample generation process.
  • Node 7 is responsible for connecting multiple city-level nodes and is therefore the most core node in this network; nodes 3, 4, 8, 13, 14, and 17 each carry the service exchange of their own cities and are therefore also relatively core nodes; the remaining nodes are the least important in this network. According to the degree of coreness, it is necessary to increase the first-order change samples involving core-node changes and reduce the change samples of the relatively less important edge nodes, finally adjusting the ratio to 50%:30%:15%:5%.
  • The topology is changed according to the change "actions" defined above, such as adding or deleting a node at different positions and adding or deleting a link, to obtain a set of first-order change topologies; performing the above "actions" again on a first-order change topology, or directly adding or deleting two nodes on the original topology, yields a new set of second-order change topologies, and so on, until all topology samples are generated in proportion.
  • Appropriate changes are made to the input service data, such as adding or removing one or more services and adjusting the bandwidth of one or more services, to obtain a set of service data.
  • These are then arranged and combined with the topology data in pairwise fashion to form the final set of data.
  • Key features are further extracted from the generated samples, such as the total number of current services, the total number of services to be built, the cumulative delay and overhead of current services, and the remaining resources of the current network, and are organized into a numpy array structure that is easy for deep learning to handle and then stored.
  • The input and output structure of the neural network for extracting network and service features, shown in FIG. 5, is described as follows:
  • Network topology data: the node information is organized into a 2-dimensional vector, each row being the data of one node; the connection relationships are organized in the form of a connection table.
  • Service data: the included features, such as the service start and end points and the bandwidth, are organized into a 2-dimensional vector, each row being the data of one service.
  • After the topology data are input into the GCN network, a 2-dimensional vector is output, each row being the encoding vector (embedding) of one node.
  • The node-information fusion layer replaces the start-point and end-point information in the service information with the corresponding row vectors of the embedding; the output is still a 2-dimensional vector, each row being the encoding vector of one service.
  • A Transformer Encoder network then encodes the service information further; the output is still 2-dimensional, the number of rows equals the number of services, and the length of the column vector depends on the parameter settings of the specific implementation.
  • Both the topology data branch and the service data branch then enter their respective encoders and finally output a 1-dimensional vector whose length depends on the parameter settings of the specific implementation.
  • Finally, after concatenation, the fused feature is output as a 1-dimensional vector.
  • After passing through the data processing subsystem and the data enhancement method, two variants of the real topology (topology variant 1 and topology variant 2) and one variant of the service data (service 1 variant, service 2) are generated, which are then combined to finally generate two pieces of enhanced data:
  • FIG. 8 is a first training flowchart of the performance improvement obtained by combining deep learning with a heuristic algorithm according to an embodiment of the present disclosure; as shown in FIG. 8, it includes:
  • S808: in the current state described in S807, a heuristic algorithm is used to probe the next-hop node of the service currently being created.
  • S814: the current optimization progress information, such as established services, services to be built, and the current resource allocation, is updated; that is, all the state information described in S806 is updated according to the result of S813.
  • S816 Determine whether the performance of the current model has reached the standard or whether the number of training times has reached the threshold. If not, enter the process of S802, replace the current model with the trained model and continue the sample collection and training process; if so, the training ends, and the final training model is saved.
  • S803-S806 describe the data flow of using 3 sample data to generate the training model.
  • the whole process is a cyclic process.
  • S803 is used to control the threshold of the number of times of generating training data.
  • the threshold is assumed to be 1;
  • S804 selects a sample from the sample data, assuming <real topology, service 1, service 2> is selected; S805-S813 are responsible for creating service 1 and service 2 into the real topology hop by hop and recording the state information of each hop, which can be described as:
  • S806 is responsible for storing the above data and using it as data for training the model.
  • S805-S813 describe the process of exploring hop by hop and selecting the next hop to create services until all services are created.
  • In S804, <real topology, service 1, service 2> has been selected, and the current state is the initial state;
  • since it is the initial state, the flow enters S808;
  • in S808, because this is the initial state: the current service to be built is service 1, the completion states of both services are 0 (that is, not created), both creation paths are empty, the overall overhead is 0, and none of the link resources of the topology are occupied;
  • in S808, the current service 1 is at its start point, namely node 3, so the optional next hops are 4 and 7, and S808 is used to probe the revenue produced, after selecting 4 or 7, by continuing until all services are created.
  • Since this is a search process (that is, after selecting 4 or 7, which nodes need to be selected until all services are created), a heuristic algorithm is needed.
  • This example uses the Monte Carlo tree search algorithm, but the actual application process is not limited to this algorithm; any other heuristic algorithm that can obtain the cumulative revenue of selecting 4 or 7 can be used;
  • S809 is a probe, so it is a simulation rather than a real service creation; this step describes the process of gradually "simulating" the creation of the next hop;
  • S810 uses a deep learning model to predict which node should be selected for each hop in S809, and what is the expected profit;
  • S811 counts the revenue information of 4 and 7 after the heuristic algorithm has probed 4 and 7 many times.
  • The state information is then updated as follows: the current service to be built is service 1, the completion states of both services are 0 (that is, not created), the creation path of service 1 is 0->1, service 2 is still empty, the overall overhead is accumulated with the overhead of link 3-4, and, for the link resources of the topology, the bandwidth of service 1 is subtracted from link 3-4.
  • The topology resource information can still be represented by a topology map; services 1 and 2 can still be represented by two-dimensional vectors, each row being one piece of service information. It is therefore exactly the same as the above input vector structure and can be used directly for feature extraction.
  • The flow then returns to S807. Since the services have not yet all been completed, the current state is used, that is, node 4 of service 1 is taken as the starting point, and the next-hop information is selected again; according to FIG. 7, the optional node is 8. After the S706-S713 flow, node 8 is finally selected, the state information is updated and stored as training data, and the flow returns to S807 again, until service 1 and service 2 are both created, at which point S806 is entered and the batch of training samples accumulated above is stored in the data pool together; assuming that creating all the services of <real topology, service 1, service 2> generates 20 pieces of training data in total, there are then 20 pieces of data in the current data pool.
  • S817 assumes that 100 training iterations are required. Since the performance of the model has been improved at this point, the model is updated to the better-performing model that has just been trained, and this model is used in the subsequent "probing" flow. The flow then enters S803 again; since the threshold is assumed to be 1, it proceeds to S804. At this point another sample is drawn from the generated data, assumed to be <topology variant 1, service 1 variant, service 2>. The subsequent flow is the same as described above and is not repeated here.
  • FIG. 9 is a second training flowchart of the performance improvement obtained by combining deep learning with a heuristic algorithm according to an embodiment of the present disclosure; as shown in FIG. 9, it includes:
  • S905: in the current state described in S904, a heuristic algorithm is used to probe the next-hop node of the service currently being created.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the present disclosure provide a network resource processing method, a storage medium, and an electronic device. The method includes: collecting network topology data and service data of network resources; performing enhancement processing on the network topology data and the service data to generate sample data; extracting key features of the sample data; and training a neural network model according to the key features to obtain a trained neural network model. This can solve the problem in the related art that results determined by a heuristic search algorithm are unstable and cannot adapt to changes in network and service scenarios; by enhancing the sample data, the trained neural network model is more stable and can adapt to changes in network and service scenarios.

Description

Network resource processing method, storage medium, and electronic device
Cross-reference to related applications
The present disclosure is based on, and claims priority to, Chinese patent application CN202111565037.5, filed on December 20, 2021 and entitled "Network resource processing method, storage medium and electronic device", the entire disclosure of which is incorporated herein by reference.
Technical field
Embodiments of the present disclosure relate to the field of communications, and in particular to a network resource processing method, a storage medium, and an electronic device.
Background
In recent years, the development of 5G networks has brought many innovations to network communication technology, including higher speeds, higher bandwidth, and lower latency, but it has also brought explosive growth in data traffic and carries more and more users and services. These changes pose greater challenges both to service planning for newly built networks and to service optimization of existing networks.
Therefore, an efficient network resource optimization method or system can not only bring users high-quality network services, but also save bandwidth resources and greatly reduce the costs of telecom operators.
Existing methods mainly include manual methods based on expert experience and heuristic search methods.
Manual methods are carried out by network optimization experts by hand using their experience. Their drawbacks are as follows. First, as the service volume and complexity increase, it is difficult for the human brain to consider all factors at the same time and obtain a high-performance optimization solution. Second, current network planning still focuses on the overall optimization of local services in relatively small areas; as the coverage area expands, the resource optimization capability of the existing approach gradually declines. In addition, the network topology and services keep changing with user demands, such as network expansion and the addition of new services; in such cases, the manual method has to re-optimize for the new service scenario, which greatly increases labor costs.
Heuristic search algorithms save labor costs to some extent, but their drawback is high computational complexity. As the network scale grows, the search space becomes increasingly large, so the time to complete one optimization multiplies and cannot meet usage requirements; at the same time, the probability of finding the optimal solution drops sharply, making this method time-consuming and unstable. Similarly, when the network topology and services change, the heuristic algorithm still needs to search again for the new scenario, further reducing efficiency.
In summary, the current main technical solutions all suffer from long running times, unstable results, and an inability to adapt to changes in network and service scenarios.
No solution has yet been proposed for the problem in the related art that results determined by a heuristic search algorithm are unstable and cannot adapt to changes in network and service scenarios.
Summary
Embodiments of the present disclosure provide a network resource processing method, a storage medium, and an electronic device, so as to at least solve the problem in the related art that results determined by a heuristic search algorithm are unstable and cannot adapt to changes in network and service scenarios.
According to an embodiment of the present disclosure, a network resource processing method is provided, including: collecting network topology data and service data of network resources; performing enhancement processing on the network topology data and the service data to generate sample data; extracting key features of the sample data; and training a neural network model according to the key features to obtain a trained neural network model.
According to another embodiment of the present disclosure, a computer-readable storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to perform, when run, the steps in any one of the above method embodiments.
According to yet another embodiment of the present disclosure, an electronic device is further provided, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
Brief description of the drawings
FIG. 1 is a hardware block diagram of a mobile terminal for a network resource processing method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a network resource processing method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a network resource processing method according to an optional embodiment of the present disclosure;
FIG. 4 is a system architecture diagram of network resource optimization according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a neural network for extracting network and service features according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of a data enhancement method for network topology and service change trends according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a network map according to an embodiment of the present disclosure;
FIG. 8 is a first training flowchart of the performance improvement obtained by combining deep learning with a heuristic algorithm according to an embodiment of the present disclosure;
FIG. 9 is a second training flowchart of the performance improvement obtained by combining deep learning with a heuristic algorithm according to an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings and in combination with the embodiments.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence.
The method embodiments provided in the embodiments of the present disclosure may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking running on a mobile terminal as an example, FIG. 1 is a hardware block diagram of a mobile terminal for a network resource processing method according to an embodiment of the present disclosure. As shown in FIG. 1, the mobile terminal may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and the mobile terminal may further include a transmission device 106 for communication functions and an input/output device 108. A person of ordinary skill in the art can understand that the structure shown in FIG. 1 is only illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a configuration different from that shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the network resource processing method in the embodiments of the present disclosure. The processor 102 runs the computer program stored in the memory 104, thereby executing various functional applications and service-chain address-pool slicing processing, that is, implementing the above method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, and such remote memories may be connected to the mobile terminal via a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. A specific example of the above network may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
In this embodiment, a network resource processing method running on the above mobile terminal or network architecture is provided. FIG. 2 is a flowchart of a network resource processing method according to an embodiment of the present disclosure. As shown in FIG. 2, the process includes the following steps:
Step S202, collecting network topology data and service data of network resources;
Step S204, performing enhancement processing on the network topology data and the service data to generate sample data;
Step S206, extracting key features of the sample data;
Step S208, training the neural network model according to the key features to obtain a trained neural network model.
In an embodiment of the present disclosure, the above step S208 may specifically include: repeating the following steps until a performance indicator of the trained neural network model meets a preset standard or the number of training iterations reaches a preset threshold, at which point training is completed and the trained neural network model is obtained: extracting the key features and the corresponding labels; and training the neural network model according to the key features and the corresponding labels to obtain the trained neural network model.
The embodiments of the present disclosure can be applied to the following scenarios: overall service planning at the early stage of network deployment in a certain region (such as a district, a city, or a province); re-optimization after the network and service scenario of a certain region changes; and service recovery after a network node in a certain region becomes abnormal (for example, due to a device power failure or a damaged network cable). In terms of applicable network environments, the method can be applied to a variety of communication network products, including but not limited to fixed-network products and wireless network products. All of the above products can apply the system proposed in the present disclosure and provide input data and optimization objectives according to their own network characteristics, thereby achieving good practical effects.
Through the above steps S202 to S208, the problem in the related art that results determined by a heuristic search algorithm are unstable and cannot adapt to changes in network and service scenarios can be solved; by enhancing the sample data, the trained neural network model is more stable and can adapt to changes in network and service scenarios.
In an embodiment of the present disclosure, the above step S204 may specifically include: performing enhancement processing on the network topology data and the service data, and combining the processed network topology data with the processed service data to generate the associated sample data. Further, a sampling ratio of the network topology corresponding to each change order is set according to the change order of the network topology, where the change order refers to the number of changed nodes or edges of the network topology; the sampling ratio is adjusted according to the node attributes of the changed nodes in the network topology, and network topology samples are generated according to the adjusted sampling ratio. Specifically, the above sampling ratio may be adjusted in the following way: determining whether the node attribute of a changed node is that of a core node; if so, increasing the sampling ratio of the network topology corresponding to the changed node and decreasing the sampling ratios of the network topologies corresponding to nodes other than the changed node. The number of services and the bandwidth of the service data are then adjusted to obtain adjusted service data, and the network topology samples are combined with the adjusted service data to generate the above sample data.
FIG. 3 is a flowchart of a network resource processing method according to a preferred embodiment of the present disclosure. As shown in FIG. 3, the above step S206 may specifically include:
S302, extracting a pair of associated network topology data and service data from the sample data;
S304, encoding the network topology data and the service data separately to obtain an encoded vector of the network topology data and an encoded vector of the service data;
S306, fusing the encoded vector of the network topology data with the encoded vector of the service data to obtain the key features.
Further, in the above step S206, the key features may specifically be obtained in the following way:
S1, determining training data and corresponding labels according to the pair of associated network topology data and service data, and storing the training data and the corresponding labels in a data pool. Further, the training data and the corresponding labels may be determined in the following way: deploying, by way of service creation, the multiple services carried in the service data into the topology structure corresponding to the associated network topology data; specifically, for each of the multiple services, the following steps are repeated until the creation of the multiple services is completed, where the service being created is called the current service: obtaining current optimization progress information, where the current optimization progress information includes established services and services to be built; determining, by means of the currently trained neural network model, the revenue values of the child nodes of the current node of the current service; selecting the child node with the largest revenue value as the next-hop node; and updating the current optimization progress information to obtain updated current optimization progress information. Then, the current resource allocation of the service creation process is updated into the network topology data, the current optimization progress information of the service creation process is updated into the service data, and the updated network topology data, the updated service data, and the corresponding labels are associated, where the labels include the revenue value of the next-hop node and the selected next-hop node.
S2, encoding the network topology data and the service data in the training data separately to obtain the encoded vector of the network topology data and the encoded vector of the service data.
In the above step S304, the encoded vector of the network topology data and the encoded vector of the service data may specifically be obtained in the following way: extracting node features of the network topology data by using a graph convolutional neural network (GCN); fusing the node features of the network topology data with the service data to obtain service feature data; and encoding the node features of the network topology data and the service feature data separately to obtain the encoded vector of the network topology data and the encoded vector of the service data.
In some embodiments, after the trained neural network model is obtained through training, it may be used to make predictions about network resources. Specifically: obtaining to-be-tested network topology data and to-be-tested service data of a to-be-tested network resource; performing enhancement processing on the to-be-tested network topology data and the to-be-tested service data to generate target sample data; extracting target key features of the target sample data; and inputting the target key features into the trained neural network model to obtain a search result of the to-be-tested network resource. By performing enhancement processing on the to-be-tested network topology data and the to-be-tested service data and then feeding them into a stable neural network model, the prediction result (that is, the search result) is more accurate.
In some other embodiments, before the above step S204, the network topology data and the service data may also be preprocessed, specifically including cleaning, exception handling, and the like, so that some invalid data are filtered out.
In the examples of the present disclosure, deep learning and a heuristic algorithm are combined, so that a balance between speed and stability can be achieved and a good optimization solution can be obtained quickly and stably. Specifically, the entire network resource optimization process can be regarded as a process of gradually deploying each node of each service into the network topology while finally ensuring that the overall optimization indicator is optimal; a heuristic algorithm is used to probe the next-hop node of the service currently being created and to evaluate which node gives a better optimization effect. Monte Carlo tree search is used as the chosen heuristic algorithm, but the selection is not limited to it; any other heuristic algorithm that can traverse and explore next-hop nodes can be substituted. The evaluation method is to directly use the deep neural network to make multiple predictions for each candidate node and decide the selection according to the prediction results; the loop continues until all services are created. During the training process, sample data are created for each generated set of network topology and service data, and all the data are accumulated to train the neural network; after training, the performance of the neural network has been improved, so subsequent evaluations will be closer to the optimal solution and the collected training data will be better; through multiple cycles, the performance of the model is continuously improved, so that the overall result also reaches a better level. The prediction process is similar to the training process: following the above procedure with the trained network model, only one cycle is needed to obtain the final result. In terms of stability, because the network has learned good network features, it can predict a solution close to the optimal solution every time, which is equivalent to finding a position close to the optimal solution within a huge search space; the heuristic search therefore only needs to do a small amount of exploration around this position to increase the probability of finding the optimal solution, so each optimization run can stably obtain a good solution. In terms of speed, the exploration is more purposeful and only searches the part close to the optimal solution, so the number of invalid searches is greatly reduced and the speed is significantly improved.
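For illustration only, the following Python sketch shows one way the probe-and-select loop described above could be organized; the StubRevenueModel, the helper names, the toy adjacency list, and the number of probes per candidate are assumptions of this sketch and are not the disclosed implementation (which uses Monte Carlo tree search over a trained deep model).

```python
import random

class StubRevenueModel:
    """Stand-in for the trained deep model; returns a random expected revenue."""
    def predict_revenue(self, state, node):
        return random.random()

def select_next_hop(model, state, candidates, n_probes=16):
    """Probe each candidate next hop several times and keep the best average revenue."""
    best_node, best_score = None, float("-inf")
    for node in candidates:
        avg = sum(model.predict_revenue(state, node) for _ in range(n_probes)) / n_probes
        if avg > best_score:
            best_node, best_score = node, avg
    return best_node

def deploy_all_services(model, adjacency, services):
    """Create every service hop by hop until all of them reach their end points."""
    paths = {s["id"]: [s["src"]] for s in services}
    for svc in services:
        current = svc["src"]
        while current != svc["dst"]:
            visited = set(paths[svc["id"]])
            candidates = [n for n in adjacency[current] if n not in visited] or [svc["dst"]]
            current = select_next_hop(model, {"paths": paths}, candidates)
            paths[svc["id"]].append(current)
    return paths

# Toy usage on a small fragment of a FIG. 7 style topology (node ids are arbitrary).
adjacency = {3: [4, 7], 4: [3, 7, 8], 7: [3, 4, 8], 8: [4, 7, 10], 10: [8]}
services = [{"id": "service-1", "src": 3, "dst": 10, "bandwidth": 10}]
print(deploy_all_services(StubRevenueModel(), adjacency, services))
```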
FIG. 4 is a system architecture diagram of network resource optimization according to an embodiment of the present disclosure. As shown in FIG. 4, it includes a data processing subsystem, a model training subsystem, and a model inference subsystem.
The data processing subsystem is used to perform data enhancement for network topology and service change trends, for use in the model training stage. The neural network for extracting network and service features is introduced as an important component into the deep learning model that predicts the next-hop information of a service, and is applied in both the model training subsystem and the model inference subsystem. Similarly, the performance improvement obtained by combining deep learning with a heuristic algorithm serves as a means of improving the performance of the model training subsystem and the model inference subsystem, and is also applied in both subsystems. Each subsystem is described in detail below.
The data processing subsystem targets data enhancement for network topology and service change trends. The existing technology cannot adapt to changes in network and service scenarios, and there are two fundamental reasons. First, both manual methods and heuristic methods only analyze or search a certain "fixed topology, fixed services" scenario and cannot extract and learn the domain knowledge inside the network topology and service information; therefore, when the scenario changes, for example when a topology node or a new service is added, it can only be treated as a brand-new scenario, whereas in fact the original scenario and the changed scenario share a great deal of common or very similar information, which is not exploited in the prior art. Second, data from real scenarios are often limited; even with existing deep learning technology, the internal characteristics of the network topology and service information cannot be learned well with limited data. Therefore, to solve the problem of adaptability to scenario changes, it must first be ensured that the data sufficiently represent the change information of real scenarios and that the amount of data is sufficient.
The data enhancement in the embodiments of the present disclosure can generate high-quality data that better matches real scenarios according to the change trends of the network topology and services. Specifically, this includes the following.
First, the number of changed nodes or edges of the network topology is described as the "order" of the network topology change. For example, adding or deleting one node under the current network topology is a "first-order" change of the current topology; similarly, second-order and third-order changes of the network topology, and so on, can be defined. Based on an analysis of actual network changes, the sampling ratio is set as a distribution whose probability gradually decreases as the order increases, with the probabilities summing to 1. Sample generation is guided by this ratio.
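As a minimal sketch only, one way such a decreasing, normalized distribution over change orders could be produced is shown below; the geometric decay factor and the maximum order are assumptions of this sketch, not values fixed by the disclosure.

```python
def order_sampling_ratios(max_order=4, decay=0.5):
    """Sampling probability per change order: decreasing with the order and summing to 1."""
    raw = [decay ** k for k in range(max_order)]       # e.g. 1, 0.5, 0.25, 0.125
    total = sum(raw)
    return {order + 1: weight / total for order, weight in enumerate(raw)}

print(order_sampling_ratios())   # {1: 0.533.., 2: 0.266.., 3: 0.133.., 4: 0.066..}
```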
Second, in actual scenarios, the network topology is generally composed of core nodes and edge nodes. A core node carries more service routes; once it changes (for example, it is deleted from the topology due to a fault), it will cause a great change in the final service optimization solution. Therefore, after the above sampling ratio is formulated, for core nodes, the sample probability of the "first-order" change is enlarged, and the probabilities of subsequent orders decrease in turn.
The change of the network topology can be summarized as a two-dimensional space of "position + action". After the ratio and position are determined, and based on an analysis of the changes of a large number of existing deployed networks, the possible change trends of the network can be summarized into the following directions: the addition of nodes at core positions of the network; the addition of nodes at edge positions of the network; the deletion of a node at a certain position; and the addition of links, a trend that mostly appears on nodes with fewer links. Finally, after the topology sample space has been generated, the number of services and their bandwidths are appropriately adjusted and combined with each topology sample, finally yielding a large amount of high-quality data that conforms to real change trends.
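The sketch below illustrates, under assumptions, how first-order "position + action" changes of the kinds listed above could be applied to an adjacency-list topology; the action names and the low-degree preference for new links are placeholders of this sketch.

```python
import copy
import random

def apply_change(adjacency, action, rng=random):
    """Apply one first-order 'position + action' change and return a new adjacency dict."""
    topo = copy.deepcopy(adjacency)
    if action == "add_node":                         # attach a new node to an existing one
        new_id = max(topo) + 1
        anchor = rng.choice(list(topo))
        topo[new_id] = [anchor]
        topo[anchor].append(new_id)
    elif action == "delete_node":
        victim = rng.choice(list(topo))
        topo.pop(victim)
        for neighbours in topo.values():
            if victim in neighbours:
                neighbours.remove(victim)
    elif action == "add_link":                       # prefer nodes that have few links
        a, b = sorted(topo, key=lambda n: len(topo[n]))[:2]
        if b not in topo[a]:
            topo[a].append(b)
            topo[b].append(a)
    return topo

variant = apply_change({3: [4, 7], 4: [3, 7], 7: [3, 4]}, "add_node")
```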
In the model training subsystem, the design of the neural network for extracting network and service features directly determines the performance of the prediction results, and the key to the design is whether a data structure suited to the network and service data can be devised so that the data features are extracted well.
Network topology data mainly contain node and link information, and each piece of information can be regarded as a multi-dimensional vector; for example, node information includes the node ID, the number of links, and so on, while service data can often be described as a [start point, end point, service attributes] vector.
Existing deep learning methods generally split the above data into many small parts, extract them with separate neural network units such as fully connected networks and convolutional neural networks, and finally simply concatenate or add them together. However, as can be seen from the data, the service features contain information about the network topology, and existing methods cannot extract such mutually fused information well.
FIG. 5 is a schematic diagram of a neural network for extracting network and service features according to an embodiment of the present disclosure. As shown in FIG. 5, the service data and the topology data are first input separately. For the network topology data, a graph convolutional neural network (GCN) structure that can extract node information is selected. Before the service data are processed, a node-information fusion layer is designed to replace the start-point and end-point information in the service data with the topology node embedding information for a preliminary fusion; the fused service data are then passed to a Transformer Encoder for further feature extraction; finally, the service features and the topology features pass through their respective encoders to unify the dimensions and are concatenated to obtain the final fused feature.
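Purely as an illustrative sketch of the structure in FIG. 5, the PyTorch code below wires together a hand-rolled graph-convolution layer, a node-information fusion step, a Transformer Encoder for the service branch, and per-branch encoders whose outputs are concatenated; all layer sizes, the column layout of the service rows, and the mean-pooling steps are assumptions of this sketch, and, as noted, the GCN and Transformer blocks could be swapped for other structures.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighbour features with the
    (self-loop, row-normalized) adjacency matrix, then project."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                       # x: [N, in_dim], adj: [N, N]
        a = adj + torch.eye(adj.size(0))
        a = a / a.sum(dim=1, keepdim=True)
        return torch.relu(self.lin(a @ x))

class TopoServiceFeatureNet(nn.Module):
    def __init__(self, node_dim, svc_dim, hid=64, out=128):
        super().__init__()
        self.gcn = SimpleGCNLayer(node_dim, hid)
        # Service row is assumed to be [src_index, dst_index, remaining attributes].
        self.svc_proj = nn.Linear(2 * hid + (svc_dim - 2), hid)
        enc_layer = nn.TransformerEncoderLayer(d_model=hid, nhead=4, batch_first=True)
        self.svc_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.topo_head = nn.Linear(hid, out)         # unify dimensions per branch
        self.svc_head = nn.Linear(hid, out)

    def forward(self, node_feats, adj, services):
        emb = self.gcn(node_feats, adj)              # [N, hid] node embeddings
        src = emb[services[:, 0].long()]             # fusion layer: replace start/end ids
        dst = emb[services[:, 1].long()]             # with their embedding rows
        fused = torch.cat([src, dst, services[:, 2:]], dim=-1)
        svc = self.svc_encoder(self.svc_proj(fused).unsqueeze(0)).squeeze(0)
        topo_vec = self.topo_head(emb.mean(dim=0))   # 1-dimensional vector per branch
        svc_vec = self.svc_head(svc.mean(dim=0))
        return torch.cat([topo_vec, svc_vec], dim=-1)   # final fused feature

# Toy usage: 5 nodes with 8 features each, 2 services described as [src, dst, bandwidth].
net = TopoServiceFeatureNet(node_dim=8, svc_dim=3)
feats = torch.randn(5, 8)
adj = torch.zeros(5, 5); adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = 1
services = torch.tensor([[0., 2., 10.], [1., 4., 10.]])
print(net(feats, adj, services).shape)               # torch.Size([256])
```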
The feature extraction structure of the embodiment of the present disclosure adapts well to the particular structure of network topology and service data and, through two fusions, fully fuses the service features with the network topology node features, achieving a good feature extraction effect.
It is worth noting that this structure can serve as part of a complex neural network to specifically handle scenarios involving network and service data; the GCN network and Transformer network used inside this structure can be replaced with other network structures according to the specific problem without affecting the feature extraction effect of the overall structure; likewise, the encoder structure can also be designed and replaced according to the specific needs of users.
In real scenarios, the numbers of nodes and links of the network determine the scale of the topology map, and the scale of the topology map further determines the paths that a service may take from its start point to its end point. As the numbers of nodes and links increase, the number of optional paths grows rapidly; coupled with the increase in the number of services, the number of candidate solutions for the overall optimization becomes very large.
Therefore, this problem can actually be abstracted as: how to select the optimal optimization solution from a huge set of alternatives, and how to stably select the optimal solution every time.
FIG. 6 is a flowchart of a data enhancement method for network topology and service change trends according to an embodiment of the present disclosure. As shown in FIG. 6, it includes:
S601, a real network topology and service data are input from the outside.
S602, the data are preprocessed, for example, abnormal data are removed, the data are cleaned, and so on.
S603, it is determined whether this is the training stage; if so, the main data enhancement flow of S604 is entered; otherwise, the flow of S613 is entered and feature extraction is performed directly.
S604, it is determined whether the number of samples to be generated has reached the threshold; if not, generation continues; if so, S612 is entered to extract the generated sample data in batches.
S605, the sampling ratio is calculated according to the change order.
S606, it is determined whether the currently changed node is a core node; if so, S607 is entered; otherwise, S608 is entered.
S607, if the currently changed node is a core node, the sampling ratio of first-order changes is increased and the sampling ratios of the other orders are reduced.
S608, topology samples are generated using the change actions mentioned above, according to the calculated sampling ratio.
S609, the number of services and the bandwidth are appropriately adjusted to generate a set of service data.
S610, the generated service data and topology samples are combined and stored, and the flow continues to S604 to determine whether generation needs to continue.
S611, feature extraction is performed on the generated data.
S612, after the flow S604 determines that the collected generated samples have reached the threshold, the generated sample data are selected in batches and input into the flow S611 for feature extraction.
S613, the flow ends.
FIG. 7 is a schematic diagram of a network map according to an embodiment of the present disclosure. As shown in FIG. 7, node 7 is a provincial-level route carrying the service exchange channels between the cities; nodes 3, 4, 8, 13, 14, and 17 are city-level nodes; and the remaining nodes, such as 1, 2, 5, and 6, can be regarded as switching nodes inside the cities.
The above step S602 describes the preprocessing operations on the data, which mainly include the following.
A physical link between network nodes can be divided into multiple logical links according to characteristics such as frequency and wavelength. Such a complex topology is not a standard graph structure recognizable by computers and is difficult to handle for both deep learning models and computer algorithms. By splitting the logical links, the physical topology map is expanded into a logical structure map that strictly conforms to the standard graph structure in computer data structures.
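A minimal sketch of this expansion, assuming each physical link record lists its wavelengths, might look as follows (the field names are placeholders of this sketch):

```python
def expand_to_logical_graph(physical_links):
    """Split each physical link into one logical edge per wavelength so that the
    result is a plain edge list conforming to a standard (multi)graph structure."""
    logical_edges = []
    for link in physical_links:
        for wavelength in link["wavelengths"]:
            logical_edges.append({
                "src": link["src"],
                "dst": link["dst"],
                "wavelength": wavelength,
                "bandwidth": link["bandwidth_per_wavelength"],
            })
    return logical_edges

physical = [{"src": 3, "dst": 4, "wavelengths": ["1550nm", "1551nm"],
             "bandwidth_per_wavelength": 100}]
print(expand_to_logical_graph(physical))
```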
Basic network indicators such as bandwidth, delay, and overhead are recorded according to the different service scenarios, organized into a serializable dictionary structure (such as JSON) that is easy to transmit, and provided to the feature extraction module.
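One assumed shape for such a serializable dictionary is sketched below; the key names and values are illustrative only.

```python
import json

topology_record = {
    "nodes": [{"id": 3, "degree": 2}, {"id": 4, "degree": 3}],
    "links": [{"src": 3, "dst": 4, "bandwidth": 100, "delay_ms": 2.5, "cost": 1.0}],
    "services": [{"src": 3, "dst": 10, "bandwidth": 10}],
}
print(json.dumps(topology_record, indent=2))   # easy to hand to the feature extraction module
```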
The above steps S604 to S610 describe one sample generation process. First, the change order is determined; for example, if node 1 fails in actual operation, this can be represented at the logical level as the removal of node 1, which is a first-order change; similarly, if a new node is added between node 4 and node 7, this can also be abstracted as a first-order change; second-order and third-order changes of the topology, and so on, can be inferred by analogy and are not described again here. Considering the degree of change of this area, the probability of a fifth-order change is extremely low, since this would amount to a huge change in the area network and the services might even need to be re-planned; the overall change order is therefore set to 4, and a specific ratio is assigned, such as 40%:30%:20%:10%.
Second, in this network, the importance of a node can be judged from the connectivity of the graph. Node 7 is responsible for connecting multiple city-level nodes and is therefore the most core node in this network; nodes 3, 4, 8, 13, 14, and 17 each carry the service exchange of their own cities and are therefore also relatively core nodes; the remaining nodes are the least important in this network. According to the degree of coreness, it is necessary to increase the first-order change samples involving core-node changes and reduce the change samples of the relatively less important edge nodes, finally adjusting the ratio to 50%:30%:15%:5%.
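The disclosure sets the adjusted ratio by hand in this example; the sketch below merely shows, under assumptions, one automated way to boost the first-order share for core-node changes and renormalize the rest, and it does not reproduce the 50%:30%:15%:5% figures exactly.

```python
def adjust_for_core_nodes(base_ratios, first_order_boost=1.25):
    """Enlarge the first-order share and rescale the remaining orders to keep the sum at 1."""
    adjusted = dict(base_ratios)
    adjusted[1] = min(1.0, adjusted[1] * first_order_boost)
    remaining = 1.0 - adjusted[1]
    rest_total = sum(v for order, v in base_ratios.items() if order != 1)
    for order in base_ratios:
        if order != 1:
            adjusted[order] = base_ratios[order] / rest_total * remaining
    return adjusted

print(adjust_for_core_nodes({1: 0.40, 2: 0.30, 3: 0.20, 4: 0.10}))
# {1: 0.5, 2: 0.25, 3: 0.1666..., 4: 0.0833...}
```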
Next, the topology is changed according to the change "actions" defined above, such as adding or deleting a node at different positions and adding or deleting a link, to obtain a set of first-order change topologies; performing the above "actions" again on a first-order change topology, or directly adding or deleting two nodes on the original topology, yields a new set of second-order change topologies, and so on, until all topology samples are generated in proportion.
Then, appropriate changes are made to the input service data, such as adding or removing one or more services and adjusting the bandwidth of one or more services, to obtain a set of service data. These are then arranged and combined with the topology data in pairwise fashion to form the final set of data.
In the feature extraction stage, key features are further extracted from the generated samples, such as the total number of current services, the total number of services to be built, the cumulative delay and overhead of current services, and the remaining resources of the current network, and are organized into a numpy array structure that is easy for deep learning to handle and then stored.
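As an assumed illustration of that storage step, the key features named above could be collected into a numpy row vector like this (the dictionary keys are placeholders of this sketch):

```python
import numpy as np

def build_key_features(sample):
    """Collect the scalar key features into a numpy vector that is easy to store and batch."""
    return np.array([
        sample["num_current_services"],    # total number of current services
        sample["num_pending_services"],    # total number of services to be built
        sample["cumulative_delay_ms"],     # accumulated delay of created services
        sample["cumulative_cost"],         # accumulated overhead
        sample["remaining_bandwidth"],     # remaining network resources
    ], dtype=np.float32)

features = build_key_features({
    "num_current_services": 1, "num_pending_services": 1,
    "cumulative_delay_ms": 2.5, "cumulative_cost": 1.0, "remaining_bandwidth": 90.0,
})
print(features.shape)   # (5,)
```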
The neural network for extracting network and service features is shown in FIG. 5, and its input and output structure is described as follows:
Network topology data: the node information is organized into a 2-dimensional vector, each row being the data of one node; the connection relationships are organized in the form of a connection table.
Service data: the included features, such as the service start and end points and the bandwidth, are organized into a 2-dimensional vector, each row being the data of one service.
After the topology data are input into the GCN network, a 2-dimensional vector is output, each row being the encoding vector (embedding) of one node.
The node-information fusion layer replaces the start-point and end-point information in the service information with the corresponding row vectors of the embedding; the output is still a 2-dimensional vector, each row being the encoding vector of one service.
By feeding this into a Transformer Encoder network, the service information is further encoded; the output is still 2-dimensional, the number of rows is the number of services, and the length of the column vector depends on the parameter settings of the specific implementation.
Both the topology data branch and the service data branch then enter their respective encoders and finally output a 1-dimensional vector whose length depends on the parameter settings of the specific implementation.
Finally, after concatenation, the fused feature is output as a 1-dimensional vector.
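To make these shapes concrete, a small numpy illustration follows; the column meanings, the embedding length, and the random embedding standing in for the GCN output are assumptions of this sketch.

```python
import numpy as np

# Topology: one row per node; columns assumed to be [node_id, degree].
node_matrix = np.array([[3, 2], [4, 3], [7, 3]], dtype=np.float32)
# Connection table: one row per link, [src_index, dst_index].
edge_index = np.array([[0, 1], [1, 2], [0, 2]])
# Services: one row per service, [src_index, dst_index, bandwidth].
service_matrix = np.array([[0, 2, 10.0]], dtype=np.float32)

# Stand-in for the GCN output: one embedding row per node (length 4 here).
node_embedding = np.random.rand(3, 4).astype(np.float32)

# Node-information fusion layer: swap the start/end indices for the embedding rows.
src = node_embedding[service_matrix[:, 0].astype(int)]
dst = node_embedding[service_matrix[:, 1].astype(int)]
fused_service = np.concatenate([src, dst, service_matrix[:, 2:]], axis=1)
print(fused_service.shape)   # (1, 9): still one row per service, now carrying fused node info
```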
Assume that the current real topology is as shown in FIG. 7 and there are two services to be built: service 1: <start point 3, end point 10, bandwidth 10M> and service 2: <start point 4, end point 15, bandwidth 10M>. The sample formed from these real data is then described as follows:
<real topology, service 1, service 2>
After passing through the data processing subsystem in FIG. 4 and the data enhancement method of inventive point 1, two variants of the real topology (topology variant 1 and topology variant 2) and one variant of the service data (service 1 variant, service 2) are generated, and after combination two pieces of enhanced data are finally generated:
<topology variant 1, service 1 variant, service 2>;
<topology variant 2, service 1 variant, service 2>;
There are thus three final sample data items in total, namely:
<real topology, service 1, service 2>;
<topology variant 1, service 1 variant, service 2>;
<topology variant 2, service 1 variant, service 2>.
FIG. 8 is a first training flowchart of the performance improvement obtained by combining deep learning with a heuristic algorithm according to an embodiment of the present disclosure. As shown in FIG. 8, it includes:
S801, the flow starts.
S802, in the initial stage of training, the model parameters are randomly initialized; during model training iterations, the currently best-performing model is loaded and training continues.
S803, it is determined whether the collected network resource optimization samples have reached the threshold; if so, the flow of S815 is entered for model training; otherwise, the flow of S804 is entered to obtain a new set of network topology and service data for creation.
S804, a new set of network topology and service data is obtained from the data processing subsystem and input into the subsequent flow to complete the overall resource optimization and collect data.
S805, it is determined whether all services have been created; if so, the flow of S806 is entered; otherwise, the flow of S807 is entered to continue creating services.
S806, for a set of network topology and service data, each next-hop computation operation and state information such as the current topology creation progress and the current network resource situation are stored in the data pool.
S807, the state information described in S806 is obtained.
S808, in the current state described in S807, a heuristic algorithm is used to probe the next-hop node of the service currently being created.
S809, it is determined whether the number of explorations has reached the threshold; if so, the flow of S813 is entered; if not, the flow of S810 is entered to continue the search.
S810, all child nodes in the current state are explored.
S811, at each child node, the deep learning model is invoked to calculate the expected maximum revenue under the condition of the next-hop information represented by the current child node.
S812, the revenue information of each node is counted and updated.
S813, the next-hop information represented by the child node with the best statistical revenue is selected; that is, once the number of searches has reached the threshold, the next-hop information represented by the child node with the best revenue is selected as the next-hop creation operation.
S814, the current optimization progress information, such as the established services, the services to be built, and the current resource allocation, is updated; that is, all the state information described in S806 is updated according to the result of S813.
S815, once the number of samples described in S804 has reached the threshold, data are obtained from the data pool in batches to train the deep learning model.
S816, it is determined whether the performance of the current model has reached the standard or whether the number of training iterations has reached the threshold; if not, the flow of S802 is entered, the current model is replaced with the trained model, and the sample collection and training flow continues; if so, training ends and the final trained model is saved.
S818, the flow ends.
In the above S802, at the start of training, since there is no trained model yet, an initial model with random parameters is loaded.
S803-S806 describe the data flow of generating the training model using the 3 sample data items. The whole flow is a loop, with S803 controlling the threshold of the number of times training data are generated; here the threshold is assumed to be 1.
In S804, one sample is selected from the sample data; assume that <real topology, service 1, service 2> is selected. S805-S813 are responsible for creating service 1 and service 2 into the real topology hop by hop and recording the state information of each hop, which can be described as:
Data: <topology resource occupancy state, service 1 creation path, service 1 completion state, service 2 creation path, service 2 completion state, overall overhead, ...>;
Label: <currently selected node, expected revenue>;
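One assumed way to materialize such a <data, label> pair as a training record is sketched below; every field name and value is an illustrative placeholder.

```python
training_record = {
    "data": {
        "topology_resource_state": {"link_3_4_free_bw": 90, "link_3_7_free_bw": 100},
        "service_1_path": [3, 4], "service_1_done": 0,
        "service_2_path": [],     "service_2_done": 0,
        "total_cost": 1.0,
    },
    "label": {"selected_node": 4, "expected_revenue": 0.83},
}
print(training_record["label"])
```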
S806 is responsible for storing the above data for use as data for training the model.
S805-S813 describe the flow of probing hop by hop and selecting the next hop to create services until all services are created. In S804, <real topology, service 1, service 2> has been selected, and the current state is the initial state;
since it is the initial state, the flow enters S808;
in S808, because this is the initial state: the current service to be built is service 1, the completion states of both services are 0 (that is, not created), both creation paths are empty, the overall overhead is 0, and none of the link resources of the topology are occupied;
in S808, the current service 1 is at its start point, namely node 3, so the optional next hops are 4 and 7, and S808 is used to probe the revenue produced, after selecting 4 or 7, by continuing until all services are created. Since this is a search process (that is, after selecting 4 or 7, which nodes need to be selected until all services are created), a heuristic algorithm is needed. This example uses the Monte Carlo tree search algorithm, but the actual application process is not limited to this algorithm; any other heuristic algorithm that can obtain the cumulative revenue of selecting 4 or 7 can be used;
S809 is a probe, so it is a simulation rather than a real service creation; this step describes the process of gradually "simulating" the creation of the next hop;
S810 uses the deep learning model to predict which node should be selected at each hop in S809 and what the expected revenue is;
S811 counts the revenue information of 4 and 7 after the heuristic algorithm has probed 4 and 7 many times;
S812 assumes that the revenue of 4 is greater than that of 7, so 4 is selected as the next hop and service 1 is created onto it;
in S813, the state information is updated as follows: the current service to be built is service 1, the completion states of both services are 0 (that is, not created), the creation path of service 1 is 0->1, service 2 is still empty, the overall overhead is accumulated with the overhead of link 3-4, and, for the link resources of the topology, the bandwidth of service 1 is subtracted from link 3-4. After the state is updated, the updated state together with the selected node 4 and its revenue are organized into the <data, label> format described above and stored as one piece of training data. The topology resource information can still be represented by a topology map; services 1 and 2 can still be represented by two-dimensional vectors, each row being one piece of service information. It is therefore exactly the same as the above input vector structure and can be used directly for feature extraction.
At this point the flow returns to S807. Since the services have not yet all been completed, the current state is used, that is, node 4 of service 1 is taken as the starting point, and the next-hop information is selected again; according to FIG. 7, the optional node is 8. After the S706-S713 flow, node 8 is finally selected, the state information is updated and stored as training data, and the flow returns to S807 again, until service 1 and service 2 are both created, at which point S806 is entered and the batch of training samples accumulated above is stored in the data pool together; assuming that creating all the services of <real topology, service 1, service 2> generates 20 pieces of training data in total, there are then 20 pieces of data in the current data pool.
S814: since the threshold was assumed to be 1, this flow is entered; data are obtained from the data pool in batches and the model is trained for several epochs. The model used here takes the structure in FIG. 5 as the feature extraction part for the topology data and the service data and builds a larger model on top of it; the structure of this model can be designed and its hyperparameters adjusted according to service requirements, and as long as the feature extraction structure is integrated, good feature extraction can be performed on the topology data and the service data.
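A minimal sketch of such a batched training pass over the data pool is shown below, assuming the pooled records have already been reduced to fixed-length feature vectors and revenue labels; the batch size, epoch count, optimizer, loss, and the tiny stand-in model are all assumptions of this sketch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_from_pool(model, features, labels, epochs=5, batch_size=8, lr=1e-3):
    """Train the revenue-prediction model on the accumulated data pool for a few epochs."""
    loader = DataLoader(TensorDataset(features, labels), batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model

# Toy pool: 20 records of pre-extracted key features and expected-revenue labels.
pool_x = torch.randn(20, 5)
pool_y = torch.randn(20, 1)
stand_in = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
trained = train_from_pool(stand_in, pool_x, pool_y)
```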
S817 assumes that 100 training iterations are required. Since the performance of the model has been improved at this point, the model is updated to the better-performing model that has just been trained, and this model is used in the subsequent "probing" flow. The flow then enters S803 again; since the threshold is assumed to be 1, it proceeds to S804. At this point another sample is drawn from the generated data, assumed to be <topology variant 1, service 1 variant, service 2>. The subsequent flow is the same as described above and is not repeated here.
FIG. 9 is a second training flowchart of the performance improvement obtained by combining deep learning with a heuristic algorithm according to an embodiment of the present disclosure. As shown in FIG. 9, it includes:
S901, the flow starts.
S902, the real network topology and service data are obtained from the data processing subsystem.
S903, it is determined whether all services have been created; if so, the flow of S912 is entered; otherwise, the flow of S904 is entered to continue creating services.
S904, data related to the current service optimization progress are obtained, such as state information on established services, services being created, and the current resource allocation.
S905, in the current state described in S904, a heuristic algorithm is used to probe the next-hop node of the service currently being created.
S906, it is determined whether the number of explorations has reached the threshold; if so, the flow of S910 is entered; if not, the flow of S907 is entered to continue the search.
S907, all child nodes in the current state are explored.
S908, at each child node, the deep learning model is invoked to calculate the expected maximum revenue under the condition of the next-hop information represented by the current child node.
S909, the revenue information of each node is counted and updated.
S910, once the search has reached the count threshold, the next-hop information represented by the child node with the best revenue is selected as the next-hop creation operation.
S911, according to the current optimization progress information, such as the established services, the services to be built, and the current resource allocation, all the state information described in S904 is updated according to the result of S910.
S912, all services have been created, and the complete service deployment plan and the resulting values of the optimization objectives are output.
S913, the flow ends.
According to another embodiment of the present disclosure, a network resource processing apparatus is further provided, including: a collection module configured to collect network topology data and service data of network resources; an enhancement processing module configured to perform enhancement processing on the network topology data and the service data to generate sample data; an extraction module configured to extract key features of the sample data; and a training module configured to train a neural network model according to the key features to obtain a trained neural network model.
An embodiment of the present disclosure further provides a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to perform, when run, the steps in any one of the above method embodiments.
In an exemplary embodiment, the above computer-readable storage medium may include, but is not limited to, various media that can store a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
An embodiment of the present disclosure further provides an electronic device, including a memory and a processor, wherein a computer program is stored in the memory and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
In an exemplary embodiment, the above electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the above processor, and the input/output device is connected to the above processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary implementations, and details are not repeated here.
Obviously, a person skilled in the art should understand that the above modules or steps of the present disclosure can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices; they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; and in some cases, the steps shown or described can be performed in an order different from that here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the present disclosure is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present disclosure and are not intended to limit the present disclosure. For those skilled in the art, the present disclosure may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the principles of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (13)

  1. A network resource processing method, comprising:
    collecting network topology data and service data of network resources;
    performing enhancement processing on the network topology data and the service data to generate sample data;
    extracting key features of the sample data; and
    training a neural network model according to the key features to obtain a trained neural network model.
  2. The method according to claim 1, wherein performing enhancement processing on the network topology data and the service data to generate sample data comprises:
    performing enhancement processing on the network topology data and the service data, and combining the processed network topology data with the processed service data to generate the associated sample data.
  3. The method according to claim 2, wherein performing enhancement processing on the network topology data and the service data and combining the processed network topology data with the processed service data to generate the associated sample data comprises:
    setting, according to a change order of the network topology, a sampling ratio of the network topology corresponding to the change order, wherein the change order refers to the number of changed nodes or edges of the network topology;
    adjusting the sampling ratio according to node attributes of changed nodes in the network topology, and generating network topology samples according to the adjusted sampling ratio;
    adjusting the number of services and the bandwidth of the service data to obtain adjusted service data; and
    combining the network topology samples with the adjusted service data to generate the sample data.
  4. The method according to claim 3, wherein adjusting the sampling ratio according to node attributes of changed nodes in the network topology and generating network topology samples according to the adjusted sampling ratio comprises:
    determining whether the node attribute of a changed node is that of a core node; and
    if so, increasing the sampling ratio of the network topology corresponding to the changed node and decreasing the sampling ratios of the network topologies corresponding to nodes other than the changed node.
  5. The method according to claim 1, wherein extracting key features of the sample data comprises:
    extracting a pair of associated network topology data and service data from the sample data;
    encoding the network topology data and the service data separately to obtain an encoded vector of the network topology data and an encoded vector of the service data; and
    fusing the encoded vector of the network topology data with the encoded vector of the service data to obtain the key features.
  6. The method according to claim 5, wherein fusing the encoded vector of the network topology data with the encoded vector of the service data to obtain the key features comprises:
    determining training data and corresponding labels according to the pair of associated network topology data and service data, and storing the training data and the corresponding labels in a data pool; and
    encoding the network topology data and the service data in the training data separately to obtain the encoded vector of the network topology data and the encoded vector of the service data.
  7. The method according to claim 6, wherein determining training data and corresponding labels according to the pair of associated network topology data and service data comprises:
    deploying, by way of service creation, a plurality of services carried in the service data into a topology structure corresponding to the associated network topology data; and
    updating the current resource allocation of the service creation process into the network topology data, updating the current optimization progress information of the service creation process into the service data, and associating the updated network topology data, the updated service data, and the corresponding labels, wherein the labels comprise a revenue value of a next-hop node and the selected next-hop node.
  8. The method according to claim 7, wherein deploying, by way of service creation, a plurality of services carried in the service data into a topology structure corresponding to the associated network topology data comprises:
    repeating, for each of the plurality of services, the following steps until the creation of the plurality of services is completed, wherein the service being created is called the current service:
    obtaining current optimization progress information, wherein the current optimization progress information comprises established services and services to be built;
    determining, by means of the currently trained neural network model, revenue values of child nodes of the current node of the current service;
    selecting the child node with the largest revenue value as the next-hop node; and
    updating the current optimization progress information to obtain updated current optimization progress information.
  9. The method according to claim 5, wherein encoding the network topology data and the service data separately to obtain an encoded vector of the network topology data and an encoded vector of the service data comprises:
    extracting node features of the network topology data by using a graph convolutional neural network (GCN);
    fusing the node features of the network topology data with the service data to obtain service feature data; and
    encoding the node features of the network topology data and the service feature data separately to obtain the encoded vector of the network topology data and the encoded vector of the service data.
  10. The method according to any one of claims 1 to 9, wherein training a neural network model according to the key features to obtain a trained neural network model comprises:
    repeating the following steps until a performance indicator of the trained neural network model meets a preset standard or the number of training iterations reaches a preset threshold, at which point training is completed and the trained neural network model is obtained:
    extracting the key features and corresponding labels; and
    training the neural network model according to the key features and the corresponding labels to obtain the trained neural network model.
  11. The method according to any one of claims 1 to 9, wherein, after training the neural network model according to the key features and the corresponding labels to obtain the trained neural network model, the method further comprises:
    obtaining to-be-tested network topology data and to-be-tested service data of a to-be-tested network resource;
    performing enhancement processing on the to-be-tested network topology data and the to-be-tested service data to generate target sample data;
    extracting target key features of the target sample data; and
    inputting the target key features into the trained neural network model to obtain a search result of the to-be-tested network resource.
  12. A computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to perform, when run, the method according to any one of claims 1 to 11.
  13. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 11.
PCT/CN2022/136995 2021-12-20 2022-12-06 Network resource processing method, storage medium and electronic device WO2023116422A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111565037.5 2021-12-20
CN202111565037.5A CN114466369A (zh) 2021-12-20 2021-12-20 Network resource processing method, storage medium and electronic device

Publications (1)

Publication Number Publication Date
WO2023116422A1 true WO2023116422A1 (zh) 2023-06-29

Family

ID=81406419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136995 WO2023116422A1 (zh) 2021-12-20 2022-12-06 Network resource processing method, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN114466369A (zh)
WO (1) WO2023116422A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466369A (zh) * 2021-12-20 2022-05-10 中兴通讯股份有限公司 网络资源处理方法、存储介质及电子装置
CN115051934A (zh) * 2022-06-29 2022-09-13 亚信科技(中国)有限公司 网络性能预测方法、装置、电子设备、存储介质及产品
CN114900859B (zh) * 2022-07-11 2022-09-20 深圳市华曦达科技股份有限公司 一种easymesh网络管理方法及装置
CN115695280A (zh) * 2022-09-06 2023-02-03 中国电信股份有限公司 基于边缘节点的路由方法及装置、电子设备、存储介质


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110995520A (zh) * 2020-02-28 2020-04-10 清华大学 Network traffic prediction method and apparatus, computer device, and readable storage medium
US20210392049A1 (en) * 2020-06-15 2021-12-16 Cisco Technology, Inc. Machine-learning infused network topology generation and deployment
CN113011483A (zh) * 2021-03-11 2021-06-22 北京三快在线科技有限公司 Model training and service processing method and apparatus
CN114466369A (zh) * 2021-12-20 2022-05-10 中兴通讯股份有限公司 Network resource processing method, storage medium and electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117097624A (zh) * 2023-10-18 2023-11-21 浪潮(北京)电子信息产业有限公司 Network topology structure enhancement method and apparatus, electronic device, and storage medium
CN117097624B (zh) * 2023-10-18 2024-02-09 浪潮(北京)电子信息产业有限公司 Network topology structure enhancement method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114466369A (zh) 2022-05-10

Similar Documents

Publication Publication Date Title
WO2023116422A1 (zh) Network resource processing method, storage medium and electronic device
Resende et al. Greedy randomized adaptive search procedures: advances and extensions
CN104699883B (zh) Circuit design evaluation using compact multi-waveform representation
CN112070216B (zh) Method and system for training a graph neural network model based on a graph computing system
Li et al. Demonstration of fault localization in optical networks based on knowledge graph and graph neural network
CN112330048A (zh) Scorecard model training method and apparatus, storage medium, and electronic device
CN108270608B (zh) Link prediction model establishment and link prediction method
CN113988464A (zh) Network link attribute relationship prediction method and device based on a graph neural network
CN115186936B (zh) Method for constructing an optimal oilfield well pattern based on a GNN model
CN111597276B (zh) Entity alignment method, apparatus, and device
CN104486222B (zh) Critical path selection method for small-delay defect testing based on an ant colony optimization algorithm
US9065743B2 (en) Determining connectivity in a failed network
CN117422031B (zh) Method and apparatus for ATPG system test vector generation and compaction
CN116993043A (zh) Power equipment fault tracing method and apparatus
US10489429B2 (en) Relationship graph evaluation system
Prignano et al. Exploring complex networks by means of adaptive walkers
Shuvro et al. Transformer based traffic flow forecasting in SDN-VANET
CN115297048B (zh) Routing path generation method and apparatus based on an optical fiber network
CN114900435B (zh) Connection relationship prediction method and related device
CN115906927A (zh) Artificial-intelligence-based data access analysis method, system, and cloud platform
CN114580130A (zh) Link prediction method and apparatus based on adjacency information entropy and random walk
Liu et al. LightTR: A Lightweight Framework for Federated Trajectory Recovery
CN113239272A (zh) Intent prediction method and intent prediction apparatus for a network management and control system
CN115828106A (zh) Method and device for identifying key nodes in a network
Jin et al. Community Selection for Multivariate KPI Predictions in a 2-Tier System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909737

Country of ref document: EP

Kind code of ref document: A1