CN112101394B - Provider domain deployment method, device, computing equipment and computer storage medium - Google Patents


Info

Publication number
CN112101394B
CN112101394B (application CN201910526989.2A)
Authority
CN
China
Prior art keywords
provider
behavior
data
training
type
Prior art date
Legal status
Active
Application number
CN201910526989.2A
Other languages
Chinese (zh)
Other versions
CN112101394A (en)
Inventor
邢彪
张卷卷
凌啼
章淑敏
何婷婷
叶晓燕
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Zhejiang Co Ltd
Priority to CN201910526989.2A
Publication of CN112101394A
Application granted
Publication of CN112101394B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/20 Administration of product repair or maintenance

Abstract

Embodiments of the invention relate to the field of communication technology and disclose a provider domain deployment method, apparatus, computing device and computer storage medium. The method comprises: acquiring behavior data of a provider; inputting the behavior data into a classification model to obtain the provider's behavior type, wherein the classification model is obtained by training on multiple sets of training data, each set comprising behavior data of a provider and identification information representing that provider's behavior type; and performing domain deployment of the provider according to its behavior type. In this way, the behavior type of each provider is identified automatically and the provider is deployed to a domain accordingly, which improves network service performance.

Description

Provider domain deployment method, device, computing equipment and computer storage medium
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a provider domain deployment method, a device, a computing device and a computer storage medium.
Background
Network function virtualization (NFV) is the direction of operators' network transformation. In the initial stage of NFV introduction, cloud-based and non-cloud network elements coexist: cloud-based network elements are deployed on the network element layer of a virtual platform, while non-cloud network elements are deployed on the network element layer of a traditional hardware platform. To facilitate capacity expansion and contraction, service providers with large service fluctuation should preferentially be deployed on the virtual platform, while service providers with stable service are deployed on the hardware platform.
In carrying out embodiments of the present invention, the inventors found that in the existing service provider domain deployment approach, a person skilled in the art selects, based on experience, the service providers with larger service fluctuation and deploys them on the virtual platform. Because the service providers are numerous and their number constantly grows and shrinks, this approach cannot meet the requirements of automated operation and maintenance.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention provide a vendor domain deployment method, apparatus, computing device, and computer storage medium, which overcome or at least partially solve the foregoing problems.
According to an aspect of an embodiment of the present invention, there is provided a vendor domain deployment method, the method including:
acquiring behavior data of a provider; inputting the behavior data of the provider into a classification model to obtain the behavior type of the provider, wherein the classification model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: behavior data of the provider and identification information for identifying a behavior type of the provider; and carrying out domain deployment on the provider according to the behavior type of the provider.
In an alternative manner, the behavior types include stationary type and non-stationary type, and the performing domain deployment on the provider according to the behavior type of the provider includes: and deploying the suppliers with the behavior types of stationary type on a hardware platform, and deploying the suppliers with the behavior types of non-stationary type on a virtual platform.
In an alternative way, after obtaining the behavior data of the provider, the method further comprises: normalizing the behavior data to obtain standard behavior data; the step of inputting the behavior data of the provider into a classification model to obtain the behavior type of the provider comprises the following steps: and inputting the standard behavior data into a classification model to obtain the behavior type of the provider.
In an alternative way, before obtaining the behavior data of the provider, the method further comprises: constructing a neural network model; and training the neural network model according to the multiple sets of training data to obtain a classification model.
In an alternative way, constructing the neural network model includes: constructing a deep neural network model comprising an input layer, an output layer and six hidden layers, wherein the six hidden layers comprise two convolution layers, a dropout layer, a pooling layer, a flattening layer and a fully connected layer.
In an alternative manner, training the neural network model according to multiple sets of training data to obtain a classification model includes: obtaining the weight of the neural network model according to the multiple groups of training data; calculating a loss function value according to the weight; repeatedly updating the weight according to an optimization algorithm until the loss function value is minimum; and obtaining a classification model according to the weight with the minimum loss function value.
In an alternative way, calculating the loss function value from the weights includes: and calculating a binary cross entropy loss function value according to the weight.
In an alternative manner, repeatedly updating the weights according to an optimization algorithm until the loss function value is minimum includes: and repeatedly updating the weights according to the Adam algorithm until the loss function value is minimum.
According to another aspect of the embodiment of the present invention, there is provided a provider domain deployment apparatus, including: an acquisition module, a determination module and a deployment module, wherein the acquisition module is used for acquiring behavior data of a provider; the determination module is used for inputting the behavior data of the provider into a classification model to obtain the behavior type of the provider, wherein the classification model is obtained by training multiple sets of training data, and each set of training data comprises: behavior data of the provider and identification information for identifying the behavior type of the provider; and the deployment module is used for carrying out domain deployment on the provider according to the behavior type of the provider.
In an alternative way, the behavior types include stationary and non-stationary, and the deployment module is further configured to: deploy providers whose behavior type is stationary on a hardware platform, and deploy providers whose behavior type is non-stationary on a virtual platform.
In an alternative, the apparatus further comprises: and the normalization module is used for carrying out normalization processing on the behavior data to obtain standard behavior data. The determining module is further configured to input the standard behavior data into a classification model to obtain a behavior type of the provider.
In an alternative, the apparatus further comprises: the system comprises a construction module and a training module, wherein the construction module is used for constructing a neural network model, and the training module is used for training the neural network model according to multiple sets of training data to obtain a classification model.
In an alternative way, the construction module is further configured to: construct a deep neural network model comprising an input layer, an output layer and six hidden layers, wherein the six hidden layers comprise two convolution layers, a dropout layer, a pooling layer, a flattening layer and a fully connected layer.
In an alternative, the training module is further configured to: obtain the weights of the neural network model according to the multiple sets of training data; calculate the loss function value according to the weights; repeatedly update the weights according to an optimization algorithm until the loss function value is minimal; and obtain the classification model from the weights that minimize the loss function value.
In an alternative way, calculating the loss function value from the weights includes: and calculating a binary cross entropy loss function value according to the weight.
In an alternative way, repeatedly updating the weights according to the optimization algorithm until the loss function value is minimum comprises: and repeatedly updating the weights according to the Adam algorithm until the loss function value is minimum.
According to another aspect of an embodiment of the present invention, there is provided a computing device including: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the provider domain deployment method.
According to still another aspect of the embodiments of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to one of the provider domain deployment methods described above.
The embodiment of the invention obtains the behavior type of the provider by inputting the obtained behavior data of the provider into a classification model, and carries out domain deployment on the provider according to the behavior type of the provider, wherein the classification model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: the vendor's behavior data and identification information to identify the vendor's behavior type. According to the embodiment of the invention, the behavior type of the provider can be automatically identified, and domain deployment is carried out according to the behavior type of the provider, so that the automatic operation and maintenance of the provider are realized. In addition, the classification model comprehensively considers a plurality of groups of data types, and the classification effect is better.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention, and may be implemented according to the content of the specification, so that the technical means of the embodiments of the present invention can be more clearly understood, and the following specific embodiments of the present invention are given for clarity and understanding.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a vendor domain deployment method according to a first embodiment of the present invention;
fig. 2 is a schematic diagram of a neural network according to a first embodiment of the present invention;
FIG. 3 is a functional block diagram of a vendor domain-partitioned deployment device according to a third embodiment of the present invention;
fig. 4 shows a schematic structural diagram of a computing device according to a fourth embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows a flowchart of a provider domain deployment method according to a first embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step 110: behavioral data of the provider is obtained.
The provider is a direct provider of mobile internet service content and application services. The provider calls service network elements deployed on the hardware platform or the virtual platform through an API to provide mobile internet and application services to clients; for example, a service provider may invoke service network elements such as a Monternet ("mobile dream network") gateway, an industry gateway, a short message center and a multimedia message center. The hardware platform is the platform on which physical network elements are based, for example servers, storage and switching devices; the functionality of such a hardware-based network element is referred to as a physical network function (PNF). The virtual platform, corresponding to the hardware platform, is the platform on which virtual network elements run: each physical network element is mapped to a virtual network function (VNF), a network element function implemented purely in software, whose underlying resources are encapsulated and hidden by the virtualization software. The behavior data of the provider is historical characteristic data of the provider's service calls to the gateways, such as the daily maximum concurrency, minimum concurrency, average concurrency, and uplink and downlink transmission volumes.
In a specific embodiment, the behavior data of the provider is collected per day and includes seven features: maximum concurrency, minimum concurrency, average concurrency, total uplink transmission data, total downlink transmission data, uplink maximum concurrency, and downlink maximum concurrency. To reduce the difference in magnitude between the behavior data, the behavior data is normalized to obtain standard behavior data lying within [0, 1].
In a specific embodiment, the standard behavioral data is obtained according to the following formula:
X_std = (X - X_min) / (X_max - X_min)
wherein X_std is the standard behavior data, X is the acquired behavior data, and X_max and X_min are, respectively, the maximum and minimum values of the acquired behavior data.
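The per-feature min-max normalization described above can be sketched as follows. This is an illustrative numpy-based sketch, not code from the patent; the function and variable names are our own.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each behavior feature (column) into [0, 1] by min-max normalization.

    X: array of shape (n_days, n_features), e.g. the seven daily
    behavior features described in the text.
    """
    X = np.asarray(X, dtype=float)
    X_min = X.min(axis=0)
    X_max = X.max(axis=0)
    # Guard against constant features to avoid division by zero.
    span = np.where(X_max > X_min, X_max - X_min, 1.0)
    return (X - X_min) / span
```

Each column of the result then lies in [0, 1], which removes the magnitude differences between features such as concurrency counts and transmission volumes.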
Step 120: the method comprises the steps of inputting behavior data of a provider into a classification model to obtain the behavior type of the provider, wherein the classification model is obtained through training of multiple sets of training data, and each set of training data in the multiple sets of training data comprises: the behavior data of the provider and identification information for representing the behavior type of the provider.
The normalized standard behavior data is input into the classification model to obtain the behavior type of the provider. During training, the multiple sets of training data are historical behavior data collected over a number of days; the specific number of days is set manually by a person skilled in the art when implementing the embodiment. To train the classification model accurately, the collected training data must reflect the providers' behavior, and the updating and changing of providers must be considered, so the number of days should be neither too large nor too small. If it is too large, the latest data cannot reflect provider changes; if it is too small, the training data is insufficient, and the trained classification model cannot reliably classify providers' behavior types. It will be appreciated that during training the training data is likewise normalized to obtain standard training data, and the model is trained using the standard training data.
The behavior types of providers include the stationary type and the non-stationary type. During training, corresponding identification information is set for each type. The specific identification values can be set manually and the embodiment of the invention does not limit them; for example, 0 may denote the stationary type and 1 the non-stationary type.
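The 0/1 labelling just described can be sketched as a tiny encoding step (the function and label strings below are our own illustrative names, not from the patent):

```python
# Map each provider's behavior-type label to the identification
# information used as the training target: 0 = stationary, 1 = non-stationary.
LABELS = {"stationary": 0, "non-stationary": 1}

def encode_labels(behavior_types):
    """Turn a list of behavior-type strings into 0/1 training targets."""
    return [LABELS[t] for t in behavior_types]
```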
The classification model is obtained by constructing a neural network model and training it on the multiple sets of training data. The neural network model may be any of various model types capable of binary classification, such as a convolutional neural network model.
In a specific embodiment, the neural network model is a convolutional neural network comprising an input layer, an output layer and six hidden layers, the six hidden layers being two convolution layers, a dropout layer, a pooling layer, a flattening layer and a fully connected layer; the specific structure is shown in fig. 2. The input layer contains seven neurons, one per behavior feature; the output layer contains two neurons representing the two behavior types of the provider. The convolution layers extract features from the input behavior data. One convolution layer has an output dimension of 128 and a spatial window (kernel) length of 5, i.e., five input values are read at a time, and its activation function is the "relu" function. The other convolution layer has the same output dimension, window length and activation function as the first, and the two convolution layers are connected together to learn the behavior characteristics represented by the input data. The dropout layer randomly switches off neurons with a given probability to prevent overfitting; for example, with a drop probability of 0.5, input neurons are randomly switched off with 50% probability each time the parameters are updated during training. The pooling layer reduces the size of the learned feature maps, for example to 1/4 of the original size, retaining only the important values.
In the embodiment of the invention, a max pooling layer is adopted, which retains the maximum value within each pooling window and discards the other values. The flattening layer reduces the dimensionality of the data output by the convolution layers. The fully connected layer contains 64 neurons with the "relu" activation function and provides a buffer between the inputs and the output layer.
During training, the weights of the neural network model are obtained from the multiple sets of training data, the loss function value is calculated from the weights, the weights are repeatedly updated according to an optimization algorithm until the loss function value is minimal, and the classification model is obtained from the weights that minimize the loss function. In a specific embodiment, the number of training epochs is set to 500, the batch size to 32, the loss function is the binary cross-entropy loss (binary_crossentropy), and the weights are repeatedly updated according to the Adam algorithm until the loss function value is minimal. The Adam algorithm improves on the learning rate of the traditional gradient descent method; the neural network learns the weight values autonomously, and the weights that minimize the loss function can be found by gradient descent. To validate the model, test data is used after each training epoch. The test data has the same content as the training data; when collecting data, a portion can be reserved as test data, for example 80% of the collected historical data as training data and 20% as test data.
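The architecture and training setup described above might look like the following Keras sketch. The layer choices follow the text (two convolution layers with 128 filters and window length 5, dropout 0.5, max pooling, flatten, a 64-neuron dense layer, and a two-neuron output); anything beyond the stated hyperparameters, including the "same" padding, the softmax output and the pool size, is our assumption, not a detail from the patent.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(n_features=7):
    """CNN sketch: input layer, two Conv1D layers, dropout, max pooling,
    flattening layer, 64-neuron buffer layer, two-neuron output layer."""
    model = keras.Sequential([
        keras.Input(shape=(n_features, 1)),          # seven daily behavior features
        layers.Conv1D(128, 5, activation="relu", padding="same"),
        layers.Conv1D(128, 5, activation="relu", padding="same"),
        layers.Dropout(0.5),                         # drop inputs with 50% probability
        layers.MaxPooling1D(pool_size=4),            # keep only the largest values
        layers.Flatten(),                            # flattening layer
        layers.Dense(64, activation="relu"),         # buffer to the output layer
        layers.Dense(2, activation="softmax"),       # stationary vs non-stationary
    ])
    # Loss and optimizer as stated in the text.
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then follow the stated schedule, e.g. `model.fit(X_train, y_train, epochs=500, batch_size=32, validation_data=(X_test, y_test))` with an 80/20 train/test split of the normalized historical data.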
Step 130: and carrying out domain deployment on the suppliers according to the behavior types of the suppliers.
Providers whose behavior type is stationary are deployed on the hardware platform, and providers whose behavior type is non-stationary are deployed on the virtual platform. It should be noted that a provider deployed on the hardware platform provides services by calling gateways of the hardware platform's gateway layer, and a provider deployed on the virtual platform provides services by calling gateways of the virtual platform's gateway layer.
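The deployment rule above reduces to a simple routing step; a minimal sketch (names and label strings are ours):

```python
def deploy_providers(provider_types):
    """Apply the domain deployment rule from the text.

    provider_types: mapping of provider name -> behavior type
    ('stationary' or 'non-stationary'). Returns the target platform
    for each provider.
    """
    deployment = {}
    for provider, behavior_type in provider_types.items():
        if behavior_type == "stationary":
            deployment[provider] = "hardware platform"  # stable load
        else:
            deployment[provider] = "virtual platform"   # easy scale in/out
    return deployment
```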
The embodiment of the invention obtains the behavior type of the provider by inputting the obtained behavior data of the provider into a classification model, and carries out domain deployment on the provider according to the behavior type of the provider, wherein the classification model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: the vendor's behavior data and identification information to identify the vendor's behavior type. According to the embodiment of the invention, the behavior type of the provider can be automatically identified, and domain deployment is carried out according to the behavior type of the provider, so that the automatic operation and maintenance of the provider are realized. In addition, the classification model comprehensively considers a plurality of groups of data types, and the classification effect is better.
Fig. 3 shows a functional block diagram of a provider domain deployment device according to a second embodiment of the present invention, including: an acquisition module 310, a determination module 320 and a deployment module 330, wherein the acquisition module 310 is used for acquiring behavior data of a provider; the determination module 320 is configured to input the behavior data of the provider into a classification model to obtain the behavior type of the provider, where the classification model is obtained by training multiple sets of training data, each set of which includes: behavior data of the provider and identification information for identifying the behavior type of the provider; the deployment module 330 is configured to perform domain deployment of the provider according to the behavior type of the provider.
In an alternative manner, the behavior types include stationary and non-stationary, the deployment module 330 is further configured to: suppliers with behavior types of stationary type are deployed on the hardware platform, and suppliers with behavior types of non-stationary type are deployed on the virtual platform.
In an alternative, the apparatus further comprises: a normalization module 340 configured to normalize the behavior data to obtain standard behavior data. The determination module 320 is further configured to input the standard behavior data into the classification model to obtain the behavior type of the provider.
In an alternative, the apparatus further comprises: the system comprises a construction module 360 and a training module 370, wherein the construction module 360 is used for constructing a neural network model, and the training module 370 is used for training the neural network model according to multiple sets of training data to obtain a classification model.
In an alternative, the construction module 360 is further configured to: construct a deep neural network model comprising an input layer, an output layer and six hidden layers, wherein the six hidden layers comprise two convolution layers, a dropout layer, a pooling layer, a flattening layer and a fully connected layer.
In an alternative approach, the training module 370 is further configured to: obtain the weights of the neural network model according to the multiple sets of training data; calculate the loss function value according to the weights; repeatedly update the weights according to an optimization algorithm until the loss function value is minimal; and obtain the classification model from the weights that minimize the loss function value.
In an alternative way, calculating the loss function value from the weights includes: and calculating a binary cross entropy loss function value according to the weight.
In an alternative way, repeatedly updating the weights according to the optimization algorithm until the loss function value is minimum comprises: and repeatedly updating the weights according to the Adam algorithm until the loss function value is minimum.
The embodiment of the invention inputs the acquired behavior data of the provider into a classification model through a determination module 320 to obtain the behavior type of the provider, and performs domain deployment on the provider according to the behavior type of the provider through a deployment module 330, wherein the classification model is obtained through training of multiple sets of training data, and each set of training data in the multiple sets of training data comprises: the vendor's behavior data and identification information to identify the vendor's behavior type. According to the embodiment of the invention, the behavior type of the provider can be automatically identified, and domain deployment is carried out according to the behavior type of the provider, so that the automatic operation and maintenance of the provider are realized. In addition, the classification model comprehensively considers a plurality of groups of data types, and the classification effect is better.
The embodiment of the invention provides a non-volatile computer storage medium, which stores at least one executable instruction, and the computer executable instruction can execute an operation corresponding to a provider domain deployment method in any of the above method embodiments.
An embodiment of the present invention provides a computer program product comprising a computer program stored on a computer storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to perform operations corresponding to the provider domain deployment method in any of the above method embodiments.
FIG. 4 is a schematic structural diagram of a computing device according to a fourth embodiment of the present invention, and the embodiment of the present invention is not limited to the specific implementation of the computing device.
As shown in fig. 4, the computing device may include: a processor 402, a communication interface (Communications Interface) 404, a memory 406, and a communication bus 408.
Wherein: processor 402, communication interface 404, and memory 406 communicate with each other via communication bus 408. A communication interface 404 for communicating with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically perform an operation corresponding to the vendor domain deployment method used in any of the foregoing method embodiments.
In particular, the program 410 may include program code, which includes computer operation instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is used for storing the program 410. The memory 406 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk storage device.
Program 410 may be specifically operable to cause processor 402 to:
acquiring behavior data of a provider; inputting the behavior data of the provider into a classification model to obtain the behavior type of the provider, wherein the classification model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: behavior data of the provider and identification information for identifying a behavior type of the provider; and carrying out domain deployment on the provider according to the behavior type of the provider.
In an alternative manner, the behavior types include stationary and non-stationary, and the program 410 may be specifically configured to cause the processor 402 to: deploy providers whose behavior type is stationary on a hardware platform, and deploy providers whose behavior type is non-stationary on a virtual platform.
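The dispatch rule described above can be sketched as a small lookup function. This is an illustrative sketch only: the platform identifier strings and function name are hypothetical placeholders, not names from the patent.

```python
# Hypothetical sketch of the domain-deployment rule: stationary
# providers go to the hardware (physical network element) platform,
# non-stationary providers go to the virtual (VNF) platform.
STATIONARY, NON_STATIONARY = "stationary", "non-stationary"

def assign_platform(behavior_type: str) -> str:
    """Map a classified behavior type to a deployment target."""
    if behavior_type == STATIONARY:
        return "hardware_platform"   # physical network elements
    if behavior_type == NON_STATIONARY:
        return "virtual_platform"    # virtual network elements (VNF)
    raise ValueError(f"unknown behavior type: {behavior_type}")
```

In practice the returned identifier would feed whatever orchestration layer performs the actual placement.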
In an alternative, the program 410 may be specifically operative to cause the processor 402 to perform the following operations: normalizing the behavior data to obtain standard behavior data; the inputting of the behavior data of the provider into a classification model to obtain the behavior type of the provider then comprises: inputting the standard behavior data into the classification model to obtain the behavior type of the provider.
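For illustration, one common way to realize the normalization step is min-max scaling of each behavior metric into [0, 1]. The sketch below is an assumption: the patent does not fix a particular normalization formula.

```python
def min_max_normalize(values):
    """Scale raw behavior metrics into [0, 1] (min-max normalization)."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant series: map to zeros
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

For example, `min_max_normalize([2, 4, 6])` yields `[0.0, 0.5, 1.0]`, putting every metric on a comparable scale before it enters the classifier.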
In an alternative, program 410 may be specifically operative to cause processor 402 to perform the following operations: constructing a neural network model; and training the neural network model according to the multiple sets of training data to obtain a classification model.
In an alternative, the program 410 may be specifically operative to cause the processor 402 to perform the following operations: constructing a deep neural network model comprising an input layer, an output layer, and six hidden layers, wherein the six hidden layers comprise two convolution layers, a dropout layer, a pooling layer, a flattening layer, and a fully connected layer.
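As a non-authoritative sketch of the layer order just described, the toy pure-Python forward pass below mirrors convolution, pooling, flattening, and a fully connected sigmoid output. The dropout layer is omitted because dropout is only active during training; all function names, kernel sizes, and weights here are hypothetical, not taken from the patent.

```python
import math

# Toy forward pass mirroring the hidden-layer order described above
# (conv -> conv -> dropout -> pool -> flatten -> dense).

def conv1d(xs, kernel):
    """Valid 1-D convolution (cross-correlation) over a list."""
    k = len(kernel)
    return [sum(xs[i + j] * kernel[j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def max_pool(xs, size=2):
    """Non-overlapping max pooling."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

def dense_sigmoid(xs, weights, bias):
    """Fully connected layer with a sigmoid output for binary typing."""
    z = sum(x * w for x, w in zip(xs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(xs, k1, k2, w, b):
    h = conv1d(conv1d(xs, k1), k2)   # two convolution layers
    h = max_pool(h)                  # pooling layer
    # a 1-D list is already flat, so the flatten step is a no-op here
    return dense_sigmoid(h, w, b)    # fully connected output in (0, 1)
```

The sigmoid output can be read as the probability that a provider's behavior is non-stationary, with 0.5 as the natural decision threshold for a binary classifier.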
In an alternative, the program 410 may be specifically operative to cause the processor 402 to perform the following operations: obtaining the weights of the neural network model from the multiple sets of training data; calculating a loss function value from the weights; repeatedly updating the weights according to an optimization algorithm until the loss function value is minimized; and obtaining the classification model from the weights at which the loss function value is minimal.
In an alternative, the program 410 may be specifically operative to cause the processor 402 to perform the following operation: calculating a binary cross-entropy loss function value from the weights.
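The binary cross-entropy loss named above can be sketched as follows; the batch averaging and probability clipping details are common-practice assumptions, not prescribed by the patent.

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over a batch of predictions."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)   # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```

Perfect predictions give a loss near zero, while a maximally uncertain prediction of 0.5 against a true label gives ln 2 ≈ 0.693, which is the behavior the optimizer drives down during training.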
In an alternative, the program 410 may be specifically operative to cause the processor 402 to perform the following operation: repeatedly updating the weights according to the Adam algorithm until the loss function value is minimized.
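A single Adam update for one scalar weight can be sketched as below, using the commonly cited default hyperparameters (lr = 0.001, β1 = 0.9, β2 = 0.999). The patent only names the algorithm, so these exact values are assumptions.

```python
import math

def adam_step(w, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar weight with per-parameter state."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad          # 1st moment
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad   # 2nd moment
    m_hat = state["m"] / (1 - b1 ** state["t"])  # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (math.sqrt(v_hat) + eps)
```

The moment estimates adapt the effective step size per weight, which is why Adam typically converges faster than plain gradient descent on this kind of classification training.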
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein, and the structure required for constructing such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language; the teachings of the invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (10)

1. A provider domain deployment method, the method comprising:
acquiring behavior data of a provider;
inputting the behavior data of the provider into a classification model to obtain the behavior type of the provider, wherein the classification model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: behavior data of the provider and identification information for identifying a behavior type of the provider;
performing domain deployment on the provider according to the behavior type of the provider; the behavior types include stationary and non-stationary, and the performing domain deployment on the provider according to the behavior type of the provider comprises: deploying providers whose behavior type is stationary on a hardware platform, and deploying providers whose behavior type is non-stationary on a virtual platform; wherein the hardware platform is the platform on which physical network elements reside, the virtual platform is the platform on which virtual network elements reside, and each physical network element of the hardware platform is mapped to a corresponding virtual network element (VNF).
2. The method of claim 1, wherein after obtaining the behavioral data of the provider, the method further comprises:
normalizing the behavior data to obtain standard behavior data;
the step of inputting the behavior data of the provider into a classification model to obtain the behavior type of the provider comprises the following steps:
and inputting the standard behavior data into a classification model to obtain the behavior type of the provider.
3. The method of claim 1, wherein prior to obtaining the behavioral data of the provider, the method further comprises:
constructing a neural network model;
and training the neural network model according to the multiple sets of training data to obtain a classification model.
4. A method according to claim 3, wherein said constructing a neural network model comprises:
the method comprises the steps of constructing a deep neural network model comprising an input layer, an output layer and six hidden layers, wherein the six hidden layers comprise two convolution layers, a dropout layer, a pooling layer, a flattening layer and a fully connected layer.
5. A method according to claim 3, wherein training the neural network model according to the plurality of sets of training data to obtain a classification model comprises:
obtaining the weight of the neural network model according to the multiple groups of training data;
calculating a loss function value according to the weight;
repeatedly updating the weight according to an optimization algorithm until the loss function value is minimum;
and obtaining a classification model according to the weight with the minimum loss function value.
6. The method of claim 5, wherein said calculating a loss function value from said weights comprises:
and calculating a binary cross entropy loss function value according to the weight.
7. The method of claim 5, wherein repeatedly updating the weights according to an optimization algorithm until the loss function value is minimum comprises:
and repeatedly updating the weights according to the Adam algorithm until the loss function value is minimum.
8. A provider domain deployment apparatus, the apparatus comprising:
the acquisition module is used for acquiring behavior data of the suppliers;
the determining module is used for inputting the behavior data of the provider into a classification model to obtain the behavior type of the provider, wherein the classification model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: behavior data of the provider and identification information for identifying a behavior type of the provider;
the deployment module is used for performing domain deployment on the provider according to the behavior type of the provider; the behavior types include stationary and non-stationary, and the performing domain deployment on the provider according to the behavior type of the provider comprises: deploying providers whose behavior type is stationary on a hardware platform, and deploying providers whose behavior type is non-stationary on a virtual platform; wherein the hardware platform is the platform on which physical network elements reside, the virtual platform is the platform on which virtual network elements reside, and each physical network element of the hardware platform is mapped to a corresponding virtual network element (VNF).
9. A computing device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the provider domain deployment method according to any one of claims 1-7.
10. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform the provider domain deployment method according to any one of claims 1-7.
CN201910526989.2A 2019-06-18 2019-06-18 Provider domain deployment method, device, computing equipment and computer storage medium Active CN112101394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910526989.2A CN112101394B (en) 2019-06-18 2019-06-18 Provider domain deployment method, device, computing equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN112101394A CN112101394A (en) 2020-12-18
CN112101394B true CN112101394B (en) 2024-03-22

Family

ID=73748828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910526989.2A Active CN112101394B (en) 2019-06-18 2019-06-18 Provider domain deployment method, device, computing equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112101394B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040046849A (en) * 2002-11-28 2004-06-05 엘지전자 주식회사 Method for Soft PVC Service in Exchange System
CN1825814A (en) * 2005-02-25 2006-08-30 华为技术有限公司 Method data communication network dividing area and route information diffusion
JP2011221634A (en) * 2010-04-06 2011-11-04 Hitachi Ltd Computer system, logic section management method and logic division processing program
CN102947790A (en) * 2010-06-22 2013-02-27 惠普发展公司,有限责任合伙企业 A method and system for determining a deployment of applications
CN103873569A (en) * 2014-03-05 2014-06-18 兰雨晴 Resource optimized deployment method based on IaaS (infrastructure as a service) cloud platform
CN104899551A (en) * 2015-04-30 2015-09-09 北京大学 Form image classification method
WO2016082143A1 (en) * 2014-11-27 2016-06-02 华为技术有限公司 Virtual network policy configuration method and system, as well as virtual network element and network management system thereof
WO2016101638A1 (en) * 2014-12-23 2016-06-30 国家电网公司 Operation management method for electric power system cloud simulation platform
CN106502889A (en) * 2016-10-13 2017-03-15 华为技术有限公司 The method and apparatus of prediction cloud software performance
CN107067173A (en) * 2017-04-13 2017-08-18 东北林业大学 The enterprise's supply chain reconfiguration method analyzed based on locality preserving projections
CN108604330A (en) * 2015-10-06 2018-09-28 内特弗利克斯股份有限公司 System and method for the safety of application and risk assessment and test
CA3059254A1 (en) * 2017-04-06 2018-10-11 Akili Interactive Labs, Inc. Distributed network for the secured collection, analysis, and sharing of data across platforms
CN108805259A (en) * 2018-05-23 2018-11-13 北京达佳互联信息技术有限公司 neural network model training method, device, storage medium and terminal device
CN108900551A (en) * 2018-08-16 2018-11-27 中国联合网络通信集团有限公司 SDN/NFV network safety protection method and device
CN109039679A (en) * 2017-06-08 2018-12-18 中国移动通信集团浙江有限公司 A kind of NFV network signal acquisition method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070070996A1 (en) * 2005-09-26 2007-03-29 Oran David R Port hopping scheme for peer-to-peer connections
CN100365978C (en) * 2006-02-23 2008-01-30 华为技术有限公司 Method and device for realizing classified service to business provider
US10680902B2 (en) * 2016-08-31 2020-06-09 At&T Intellectual Property I, L.P. Virtual agents for facilitation of network based storage reporting
US10977260B2 (en) * 2016-09-26 2021-04-13 Splunk Inc. Task distribution in an execution node of a distributed execution environment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Network function virtualization: Challenges and opportunities for innovations; Bo Han et al.; IEEE Communications Magazine; Vol. 53, No. 2; pp. 90-97 *
Research on trusted service composition and its key technologies in cloud computing environments; Meng Shunmei; China Doctoral Dissertations Full-text Database (Information Science and Technology), No. 10; pp. I139-10 *
Design and implementation of an infrastructure cloud service platform based on cloud computing; Yang Guang; China Master's Theses Full-text Database (Information Science and Technology), No. 03; pp. I139-248 *
Research and implementation of a bandwidth allocation strategy for cloud data centers based on time-series analysis; Zhu Xiangying; China Master's Theses Full-text Database (Information Science and Technology), No. 03; pp. I137-46 *
Research on next-generation Internet architecture and key technologies; Li Rui; China Doctoral Dissertations Full-text Database (Information Science and Technology), No. 12; pp. I136-57 *

Similar Documents

Publication Publication Date Title
CN111444009A (en) Resource allocation method and device based on deep reinforcement learning
JP6870508B2 (en) Learning programs, learning methods and learning devices
CN111104954A (en) Object classification method and device
CN112288572A (en) Service data processing method and computer equipment
CN109739985A (en) Automatic document classification method, equipment and storage medium
CN112734033A (en) Model training method, device, equipment and storage medium
CN112398674B (en) Method and device for generating VNFD configuration template for describing virtual network functions
CN113516239A (en) Model training method and device, storage medium and electronic equipment
CN112101394B (en) Provider domain deployment method, device, computing equipment and computer storage medium
CN113259145B (en) End-to-end networking method and device for network slicing and network slicing equipment
CN110532448B (en) Document classification method, device, equipment and storage medium based on neural network
US8312137B1 (en) Live experiment framework
CN115713669A (en) Image classification method and device based on inter-class relation, storage medium and terminal
CN115346084A (en) Sample processing method, sample processing apparatus, electronic device, storage medium, and program product
CN110929118B (en) Network data processing method, device, apparatus and medium
CN111143148B (en) Model parameter determining method, device and storage medium
CN112242959B (en) Micro-service current-limiting control method, device, equipment and computer storage medium
CN113825148A (en) Method and device for determining alarm level of network node and computing equipment
JP7073686B2 (en) Neural network coupling reduction
CN113139579B (en) Image classification method and system based on image feature self-adaptive convolution network
CN111178418B (en) Image classification method and device, storage medium and electronic equipment
CN112104467B (en) Cutover operation risk rating method and device and computing equipment
CN117576539A (en) Interface picture identification interface method, device and equipment
CN117689074A (en) User complaint prediction method, device, equipment and medium
CN117391914A (en) Telecommunication fraud recognition method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant