CN113301141A - Deployment method and system of artificial intelligence support framework - Google Patents

Deployment method and system of artificial intelligence support framework

Info

Publication number
CN113301141A
CN113301141A (application number CN202110553847.2A; granted as CN113301141B)
Authority
CN
China
Prior art keywords
service
artificial intelligence
module
service processing
processing module
Prior art date
Legal status
Granted
Application number
CN202110553847.2A
Other languages
Chinese (zh)
Other versions
CN113301141B (en)
Inventor
温向明 (Wen Xiangming)
蒋秋萍 (Jiang Qiuping)
章晨宇 (Zhang Chenyu)
王鲁晗 (Wang Luhan)
郑伟 (Zheng Wei)
路兆铭 (Lu Zhaoming)
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202110553847.2A
Publication of CN113301141A
Application granted
Publication of CN113301141B
Status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/51: Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Abstract

The present disclosure provides a method and system for deploying an artificial intelligence support framework. The method comprises the following steps: a service identification module receives a service request input into the artificial intelligence support framework, generates a service processing notification, and sends it to a service processing module; the service identification module acquires a data address table from a data sharing module and sends the table to the service processing module; the service processing module retrieves service data from the data sharing module according to the data address table. Each layer service group in the service processing module adopts distributed cloud deployment and independently receives the service processing notification and the data address table. A plurality of services are set in each layer service group, where each service independently executes an artificial intelligence algorithm under a preset abstract strategy, trains a local model of the algorithm, and processes service requests. A service extension module supporting the service request is deployed in the artificial intelligence support framework and communicates with the service identification module and the service processing module to process the service request.

Description

Deployment method and system of artificial intelligence support framework
Technical Field
The embodiment of the disclosure relates to the technical field of mobile communication, in particular to a method and a system for deploying an artificial intelligence support framework.
Background
In current artificial-intelligence-based mobile access network service support technology, artificial intelligence methods are applied only to specific services of the mobile access network; management of groups of artificial intelligence methods, and unified data opening and data management for the access network, are not addressed.
Based on this, a solution is needed that realizes service support through an artificial intelligence support framework in the mobile access network.
Disclosure of Invention
In view of the above, the present disclosure is directed to a method and a system for deploying an artificial intelligence support framework.
Based on the above purpose, the present disclosure provides a deployment method of an artificial intelligence support framework, which is applied to a deployment system of the artificial intelligence support framework; the deployment system of the artificial intelligence support framework comprises: the system comprises a service identification module, a service processing module and a data sharing module;
the method comprises the following steps:
the service identification module receives a service request input into the artificial intelligence support framework, generates a service processing notification according to the service request and sends the service processing notification to the service processing module;
the service identification module acquires a data address table from the data sharing module and sends the data address table to the service processing module;
the service processing module receives the service processing notification and the data address table independently through a plurality of layer service groups deployed in its distributed cloud, and obtains service data related to the service request from the data sharing module according to the service processing notification and the data address table; each layer service group is provided with at least one service, each service independently executes an artificial intelligence algorithm under a preset abstract strategy, and a local model of the artificial intelligence algorithm is trained on the service data for processing the service request;
further, the deployment system of the artificial intelligence support framework further includes: a master control console; the master control console is independently deployed and connected with at least one artificial intelligence support framework;
the deployment method of the artificial intelligence support framework further comprises the following steps:
and the master control console, in response to determining that the same service exists in all the artificial intelligence support frameworks and adopts the same artificial intelligence algorithm, performs preset horizontal federated learning on the same service to obtain a distributed training model for managing at least one artificial intelligence support framework.
Further, the deployment system of the artificial intelligence support framework further includes: a service extension module; the service extension module is connected with the service identification module and the service processing module;
the deployment method of the artificial intelligence support framework further comprises the following steps:
in response to determining that the service processing module does not support the service request, the service extension module communicates with the service identification module and with the service processing module to process the service request that the service processing module does not support.
Based on the same inventive concept, the present disclosure also provides a deployment system of an artificial intelligence support framework, comprising: the system comprises a service identification module, a data sharing module and a service processing module;
the service identification module is configured to receive a service request sent to the artificial intelligence support framework, generate a service processing notification according to the service request, and send the service processing notification to the service processing module;
the service identification module is further configured to acquire a data address table from the data sharing module and send the data address table to the service processing module;
the service processing module is configured to receive the service processing notification and the data address table independently through a plurality of layer service groups deployed by a distributed cloud, and obtain service data related to the service request according to the service processing notification and the data address table; each layer service group is provided with at least one service, the service independently executes an artificial intelligence algorithm by adopting a preset abstract strategy, and a local model of the artificial intelligence algorithm is trained for processing the service request based on the service data.
Further, the deployment system of the artificial intelligence support framework further includes an independently deployed master control console; the master control console is connected with at least one artificial intelligence support framework;
wherein the master control console is configured to: for all the artificial intelligence support frameworks, in response to determining that the same service exists and adopts the same artificial intelligence algorithm, perform preset horizontal federated learning on the same service to obtain a distributed training model for managing at least one artificial intelligence support framework.
Further, the deployment system of the artificial intelligence support framework further includes: a service extension module; the service extension module is connected with the service identification module and the service processing module;
the service extension module is configured to: in response to determining that the service processing module does not support the service request, communicate with the service identification module and with the service processing module to process the service request that the service processing module does not support.
As can be seen from the foregoing, the deployment method and system for an artificial intelligence support framework provided by the present disclosure, based on artificial-intelligence-enabled mobile communication access network technology, deploy the framework while comprehensively considering its support for specific services, its support for data opening and identification in the mobile access network, and its decision feedback to the mobile access network. This realizes management of groups of artificial intelligence methods, achieves endogenous intelligence of the access network, and improves data and communication efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure and the related art, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are merely embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for deploying an artificial intelligence support framework according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a deployment system for an artificial intelligence support framework in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a service identification module of an artificial intelligence support framework in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a service processing module of an artificial intelligence support framework in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a service module of an artificial intelligence support framework of an embodiment of the present disclosure;
FIG. 6 is a parallel policy flow diagram of an artificial intelligence support framework of an embodiment of the disclosure;
FIG. 7 is a schematic diagram of a federated learning module of an artificial intelligence support framework in an embodiment of the present disclosure.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
It is to be noted that technical terms or scientific terms used in the embodiments of the present disclosure should have a general meaning as understood by those having ordinary skill in the art to which the present disclosure belongs, unless otherwise defined. The use of "first," "second," and similar terms in the embodiments of the disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
As described in the background section, existing deployment methods for artificial intelligence support frameworks are difficult to adapt to the requirements of actual production.
In the process of implementing the present disclosure, the applicant found that existing deployment methods for artificial intelligence support frameworks have the following main problems. Artificial intelligence methods are adopted to optimize specific services of a radio access network, but management of groups of artificial intelligence methods and unified data opening and data management for the access network are not involved. As a result, artificial-intelligence-based access network techniques are fragmented and isolated; models cannot be jointly optimized so that each learns from the strengths of the others; data are repeatedly fetched in practical applications, which is inefficient; and endogenous intelligence of the access network cannot be realized. Moreover, existing artificial intelligence support framework deployments cannot realize decision feedback to the mobile access network.
It is to be appreciated that the method can be performed by any apparatus, device, platform, cluster of devices having computing and processing capabilities.
Hereinafter, the technical solutions of the embodiments of the present disclosure are described in detail through specific embodiments, with reference to the flowchart of a deployment method of an artificial intelligence support framework shown in fig. 1 and the schematic diagram of a deployment system of an artificial intelligence support framework shown in fig. 2.
Referring to fig. 1 and 2, a deployment method of an artificial intelligence support framework according to an embodiment of the present disclosure is applied to a deployment system of the artificial intelligence support framework, wherein the deployment system comprises: a service identification module, a service processing module and a data sharing module.
Specifically, the deployment method of the artificial intelligence support framework comprises the following steps:
step S101, the service identification module receives a service request input into the artificial intelligence support framework, generates a service processing notification according to the service request, and sends the service processing notification to the service processing module.
In some embodiments, as shown in fig. 2, in a usage scenario of the present framework, the artificial intelligence support framework may be deployed locally to the base station and perform interactive communication with the base station.
The base station includes a CU (central unit) and a DU (distributed unit) of the mobile access network, and in the present disclosure, the CU and the DU are regarded as one base station entity and communicate with the artificial intelligence support framework through a unified interface.
As shown in fig. 2, the service request, service data and decision feedback between the base station and the artificial intelligence framework are separated from each other;
firstly, the base station sends a service request to the artificial intelligence support framework through an SR interface (service request interface) connected to it; the service request sent is a serial service identification code, which specifically includes a layer identification code and an in-layer identification code; for example, the layer identification code may be 3 bits, identifying each layer of the protocol stack, and the in-layer identification code may be 5 bits, identifying each service within the layer.
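As a sketch, the serial service identification code described above (a 3-bit layer code plus a 5-bit in-layer code) could be packed into and parsed from a single byte as follows; the function names and exact field layout are illustrative assumptions, not taken from the patent.

```python
# Hypothetical encoding of the serial service identification code:
# a 3-bit layer identification code in the high bits and a 5-bit
# in-layer identification code in the low bits of one byte.

LAYER_BITS = 3
IN_LAYER_BITS = 5

def encode_service_id(layer: int, in_layer: int) -> int:
    """Pack a layer code (0-7) and in-layer code (0-31) into one byte."""
    if not 0 <= layer < (1 << LAYER_BITS):
        raise ValueError("layer identification code must fit in 3 bits")
    if not 0 <= in_layer < (1 << IN_LAYER_BITS):
        raise ValueError("in-layer identification code must fit in 5 bits")
    return (layer << IN_LAYER_BITS) | in_layer

def decode_service_id(code: int) -> tuple:
    """Split a one-byte serial service identification code back into fields."""
    return code >> IN_LAYER_BITS, code & ((1 << IN_LAYER_BITS) - 1)

# e.g. layer 2 of the protocol stack, service 5 within that layer
code = encode_service_id(2, 5)
assert decode_service_id(code) == (2, 5)
```

With this layout, 8 protocol stack layers and 32 services per layer can be addressed in one byte, which matches the 3-bit/5-bit example given in the text.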
Further, a service identification module in the artificial intelligence support framework receives a service request input by the base station;
specifically, as shown in fig. 3, the service identification module includes an identification code processing section and a service processing notification section; to enable the artificial intelligence support framework to respond quickly to the base station's service request, the identification code processing section identifies the received serial service identification code and passes it to the service processing notification section, which generates a service processing notification;
further, as shown in fig. 2 and 3, the service identification module sends the service processing notification to the connected service processing module through the SRN2 interface (second service processing notification interface).
Step S102, the service identification module acquires a data address table from the data sharing module and sends the data address table to the service processing module.
In some embodiments, unified data opening and data management at the base station are the basis for supporting a service request. In the present disclosure, after the artificial intelligence support framework is deployed locally at the base station, the base station opens its own data by way of shared memory or distributed shared memory.
Furthermore, a data sharing module is arranged in the artificial intelligence support frame, and network state information is shared between the base station and the artificial intelligence support frame through the data sharing module; the service data shared in the data sharing module comprises: user service request, user channel environment, base station control information, etc.
Further, the data sharing module is connected with the service identification module through an SRN1 interface (first service processing notification interface) to realize communication.
Further, as shown in fig. 3, after the service identification module receives the service request from the base station, it may obtain data management information from the data sharing module through the SRN1 interface; the service-data mapping section in the service identification module then uses this data management information, through its built-in mapping relation, to generate the data address table required by the service request;
further, the service processing notification section in the service identification module sends the obtained data address table to the service processing module through the SRN2 interface.
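The service-data mapping step above can be sketched as a lookup from a service identification code to the shared data items that service needs, resolved against the data management information reported by the data sharing module. The mapping keys, item names, and addresses below are invented for illustration; the patent does not define a concrete table format.

```python
# Illustrative service-data mapping: service id -> names of the shared
# data items the service requires. Keys and item names are hypothetical.
SERVICE_DATA_MAPPING = {
    0x45: ["user_service_request", "user_channel_environment"],
    0x2A: ["base_station_control_info"],
}

def build_data_address_table(service_id: int,
                             data_management_info: dict) -> dict:
    """Resolve the data items mapped to a service into concrete addresses
    taken from the data sharing module's data management information."""
    items = SERVICE_DATA_MAPPING.get(service_id, [])
    return {name: data_management_info[name] for name in items}

# data management information as reported by the data sharing module
info = {"user_service_request": 0x1000,
        "user_channel_environment": 0x2000,
        "base_station_control_info": 0x3000}

table = build_data_address_table(0x45, info)
assert table == {"user_service_request": 0x1000,
                 "user_channel_environment": 0x2000}
```

The resulting table is what the service processing notification section would forward over the SRN2 interface, letting the service processing module read only the addresses it needs.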
Step S103, the service processing module receives the service processing notification and the data address table independently through a plurality of layer service groups deployed by a distributed cloud end of the service processing module, and obtains service data related to the service request from the data sharing module according to the service processing notification and the data address table; each layer service group is provided with at least one service, the service independently executes an artificial intelligence algorithm by adopting a preset abstract strategy, and a local model of the artificial intelligence algorithm is trained for processing the service request based on the service data.
In some embodiments, as shown in fig. 2, the data sharing module is connected to the service processing module, and the service processing module retrieves the service data required for processing the service request from the data sharing module according to the obtained service processing notification and data address table, which increases the speed at which the service processing module reads data.
In some embodiments, the internal structure of the service processing module may be divided into at least one layer service group according to the protocol stack. As shown in fig. 4, each layer service group adopts distributed cloud deployment, so that each group can independently process service requests for its own layer of the protocol stack, relieving the working pressure on the service processing module; the service processing notification and the data address table are received independently by the layer service group corresponding to the service request.
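The per-layer routing described above can be sketched as a dispatch from the layer identification code to the matching layer service group; the class and field names below are illustrative assumptions.

```python
# Hypothetical dispatch: each layer service group independently handles
# the notifications for its own protocol stack layer.

class LayerServiceGroup:
    def __init__(self, layer: int):
        self.layer = layer
        self.received = []          # notifications handled by this group

    def receive(self, notification: dict) -> None:
        self.received.append(notification)

# one group per protocol stack layer (layer codes are illustrative)
groups = {layer: LayerServiceGroup(layer) for layer in range(3)}

def dispatch(notification: dict) -> None:
    """Deliver a notification only to the matching layer service group."""
    groups[notification["layer"]].receive(notification)

dispatch({"layer": 1, "service": 5, "data_address_table": {"x": 0x1000}})
assert len(groups[1].received) == 1 and not groups[0].received
```

Because each group receives only its own layer's traffic, the groups can run on separate cloud nodes without coordinating on every request.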
Further, as shown in fig. 4, each layer service group may include at least one service, where each service may execute a relatively independent artificial intelligence algorithm to support service requests.
Further, as shown in fig. 5, in order to implement unified management of the group of artificial intelligence algorithms, an environment and a policy are set within each service; the service request and all service data related to it are regarded as the environment, while the policy may be an artificial intelligence algorithm that supports the service request;
in the present disclosure, an abstract policy can be adopted to complete the encapsulation and dynamic switching of artificial intelligence algorithms. Specifically, as shown in fig. 5, a plurality of policies may be built into the abstract policy, where each policy encapsulates at least one artificial intelligence algorithm; when the service environment changes, different relevant policies may be adopted dynamically according to the specific environment.
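The abstract policy described above is essentially a strategy pattern: each concrete policy encapsulates one algorithm, and the active policy is swapped as the environment changes. A minimal sketch follows, with illustrative class names and a hypothetical load-based switching rule that is not part of the patent.

```python
from abc import ABC, abstractmethod

class Policy(ABC):
    """Abstract policy: every concrete policy encapsulates one algorithm."""
    @abstractmethod
    def decide(self, environment: dict) -> str:
        """Make a decision for the service request given the environment."""

class LowLoadPolicy(Policy):
    def decide(self, environment: dict) -> str:
        return "greedy-decision"    # stands in for a lightweight algorithm

class HighLoadPolicy(Policy):
    def decide(self, environment: dict) -> str:
        return "learned-decision"   # stands in for a trained model's output

class Service:
    """A service holds one active policy and can swap it at runtime."""
    def __init__(self, policy: Policy):
        self.policy = policy

    def handle(self, environment: dict) -> str:
        # dynamic switching: pick a policy according to the current environment
        if environment.get("load", 0) > 0.8:
            self.policy = HighLoadPolicy()
        return self.policy.decide(environment)

svc = Service(LowLoadPolicy())
assert svc.handle({"load": 0.2}) == "greedy-decision"
assert svc.handle({"load": 0.9}) == "learned-decision"
```

New algorithms can then be added by registering new `Policy` subclasses, without touching the services that use them.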
In some embodiments, the artificial intelligence algorithm must be trained in actual applications before it can properly support the service request; decisions on the service request are therefore delayed, and an optimal decision cannot be made immediately. To address this, a parallel policy can be adopted within the abstract policy, giving the artificial intelligence algorithm sufficient time to obtain the feedback function returned by the base station and train its model, while still supporting the service request in real time.
Specifically, as shown in fig. 6, a plurality of backups of the artificial intelligence algorithm that processes the service request may be saved in the parallel policy. In the present disclosure, saving two backups is taken as an example: the algorithm is saved as a first backup, A1, and a second backup, A2.
Further, as shown in fig. 6, the operation of the parallel policy may be divided along its time axis into a plurality of time slots in chronological order. At the beginning of the first time slot, A1 makes a decision on the service request in combination with the local model (the A1 decision); the result is sent to the base station through the DF interface (decision feedback interface), and the framework waits for the base station's execution information on the A1 decision, i.e., the A1 feedback function.
Further, at the beginning of the second time slot, A2 makes a decision on the service request in combination with the local model (the A2 decision); the result is sent to the base station through the DF interface, and the framework waits for the base station's execution information on the A2 decision, i.e., the A2 feedback function.
It should be noted that when the A1 decision is made in the first time slot and the A2 decision is made in the second time slot, the local model used may still be the original model, not yet optimized by training.
At any time during the second time slot, A1 may obtain the A1 feedback function, so that training of the local model is completed using the A1 feedback function before the third time slot arrives.
Further, at the beginning of the third time slot, A1 makes a further A1 decision on the service request in combination with the most recently trained local model; the result is sent to the base station through the DF interface, and the framework again waits for the base station's updated execution information, i.e., obtains the A1 feedback function again.
Further, at any time during the third time slot, A2 may obtain the A2 feedback function, so that retraining of the local model is completed using the A2 feedback function before the fourth time slot arrives.
Further, at the beginning of each subsequent time slot, the A1 and A2 decisions are made alternately, the A1 and A2 feedback functions are obtained alternately in chronological order, and the local model is continuously trained with the successively obtained feedback functions until the service request ends, at which point the decisions of the parallel policy and the model training are terminated.
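The alternating time-slot scheme above can be sketched as a small simulation: two backups, A1 and A2, take turns deciding, and each trains the local model with the feedback function that arrives during the slot after its own decision. The slot scheduling follows the text; the event-log format is illustrative.

```python
# Toy simulation of the parallel policy: A1 decides in odd slots, A2 in
# even slots; each backup's feedback arrives one slot later and is used
# to train the shared local model before that backup's next turn.

def run_parallel_policy(num_slots: int) -> list:
    backups = ["A1", "A2"]
    pending = {}          # backup -> slot in which it last decided
    log = []
    for slot in range(1, num_slots + 1):
        active = backups[(slot - 1) % 2]   # A1 on odd slots, A2 on even
        other = backups[slot % 2]
        # the other backup's feedback function arrives during this slot,
        # so the local model is trained before that backup decides again
        if other in pending:
            log.append(f"slot {slot}: train local model with {other} feedback")
            del pending[other]
        log.append(f"slot {slot}: {active} decision sent over DF interface")
        pending[active] = slot
    return log

for line in run_parallel_policy(3):
    print(line)
```

Running three slots shows the pattern from the text: A1 decides in slot 1, its feedback trains the model during slot 2 while A2 decides, and A1 decides again in slot 3 with the freshly trained model.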
In some embodiments, the deployment system of the artificial intelligence support framework further comprises: a service extension module; the service extension module is connected with both the service identification module and the service processing module;
further, the method for deploying the artificial intelligence support framework further comprises the following steps:
the service extension module communicates with the service identification module and communicates with the service processing module to process the service request that is not supported by the service processing module in response to determining that the service processing module does not support the service request.
In the embodiment of the present disclosure, as shown in fig. 2, the artificial intelligence support framework is extensible and can realize microservices for supporting the service requests of the base station;
specifically, when the service processing module in the artificial intelligence support framework does not support a service request from the base station, a service extension module may be deployed in the framework. As shown in fig. 2, the service extension module is connected to the service identification module through the SE1 interface (first service extension interface) and to the service processing module through the SE2 interface (second service extension interface); support for the service request can then be reconfigured so that the service requests sent by the base station are processed.
Based on the artificial intelligence support framework constructed above, the deployment system of the artificial intelligence support framework further comprises: a master control console; the master control console is independently deployed and connected with at least one artificial intelligence support framework;
further, the method for deploying the artificial intelligence support framework further comprises the following steps:
and the master control console, in response to determining that the same service exists in all the artificial intelligence support frameworks and adopts the same artificial intelligence algorithm, performs preset horizontal federated learning on the same service to obtain a distributed training model for managing at least one artificial intelligence support framework.
In some embodiments, as shown in fig. 7, for a plurality of deployed artificial intelligence support frameworks, intelligent enabling of the mobile access network can be achieved by performing preset shared learning, namely horizontal federated learning, on their local models.
Specifically, as shown in fig. 7, a plurality of entities of the artificial intelligence support framework are respectively connected to a main console deployed independently, in this embodiment, three entities are taken as an example;
further, in the artificial intelligence support frameworks of all entities, if the same service exists and the same artificial intelligence algorithm is adopted in the service, the characteristics of the service are overlapped more, but the objects of the service are not overlapped, that is, the business states are the same, but the reached customers are different.
Further, in order to enable each entity to learn the other entities' processing experience and local models for service requests while ensuring the data privacy of each entity and its corresponding base station, the following horizontal federated learning may be adopted to construct a distributed training model based on sample data:
as shown in fig. 7, firstly, each entity performs the above training on the local model by using local data, and encrypts and uploads the gradient parameters therein to the master console;
furthermore, the master control console can safely aggregate the gradient parameters of all entities and update the parameters of the distributed training model;
further, the master console returns the updated parameters to each entity;
furthermore, each entity updates the parameters of the respective local model according to the returned parameters, and processes the respective service request.
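The round described above (local training, gradient upload, aggregation at the master control console, parameter return) can be sketched with a toy scalar model. Encryption and secure aggregation are omitted for brevity, and simple gradient averaging is an assumption of this sketch; the patent does not specify the aggregation rule.

```python
# Toy horizontal federated learning round: three entities, a one-parameter
# model y = weight, and mean-squared-error gradients. All names and the
# averaging rule are illustrative.

def local_gradient(local_data: list, weight: float) -> float:
    # gradient of mean squared error for the trivial model y = weight
    return sum(2 * (weight - y) for y in local_data) / len(local_data)

def federated_round(weight: float, entities: list, lr: float = 0.1) -> float:
    # 1. each entity computes (and would encrypt) its gradient locally
    gradients = [local_gradient(data, weight) for data in entities]
    # 2. the master control console aggregates the gradients
    aggregated = sum(gradients) / len(gradients)
    # 3. updated parameters are returned to all entities
    return weight - lr * aggregated

entities = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]]   # per-entity local data
w = 0.0
for _ in range(50):
    w = federated_round(w, entities)
print(round(w, 3))   # prints 1.0: converges to the global mean of all data
```

Only gradients and parameters cross entity boundaries; the raw local data of each entity and its base station never leave the entity, which is the privacy property the text relies on.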
In summary, the deployment method and system for an artificial intelligence support framework provided herein are based on artificial-intelligence-enabled mobile communication access network technology. The framework is deployed with comprehensive consideration of its support for specific services, its support for data opening and identification in the mobile access network, and its decision feedback to the mobile access network, thereby realizing management of groups of artificial intelligence methods, achieving endogenous intelligence of the access network, and improving data and communication efficiency.
It should be noted that the method of the embodiments of the present disclosure may be executed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the devices may perform only one or more steps of the method of an embodiment of the disclosure, and the devices may interact with each other to complete the method.
It should be noted that the above describes some embodiments of the disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the same inventive concept, corresponding to the method of any embodiment, the embodiment of the disclosure further provides a deployment system of the artificial intelligence support framework.
Referring to fig. 2, the system for deploying an artificial intelligence support framework includes: a service identification module S201, a data sharing module S202 and a service processing module S203;
wherein the service identification module S201 is configured to: receiving a service request sent to the artificial intelligence support framework, generating a service processing notification according to the service request, and sending the service processing notification to the service processing module;
the service identification module S201 is further configured to: acquiring a data address table from the data sharing module, and sending the data address table to the service processing module;
the service processing module S203 is configured to: independently receive the service processing notification and the data address table through a plurality of layer service groups deployed on a distributed cloud, and obtain, for each group, the service data related to the service request according to the service processing notification and the data address table. Each layer service group is provided with at least one service; the service independently executes an artificial intelligence algorithm by adopting a preset abstract policy, and a local model of the artificial intelligence algorithm is trained on the service data for processing the service request.
In some embodiments, the base station sends a service request to the artificial intelligence support framework through an SR interface (service request interface) connecting the two; the service request is a serial service identification code, which specifically includes a layer identification code and an in-layer identification code. For example, the layer identification code may be 3 bits, identifying each layer of the protocol stack, and the in-layer identification code may be 5 bits, identifying each service within the layer.
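As a concrete illustration, the 3-bit/5-bit split described above can be sketched as simple bit packing; the helper names and exact field layout below are our own assumptions, not part of the disclosure:

```python
# Hypothetical sketch of the serial service identification code: a 3-bit
# layer identification code packed with a 5-bit in-layer identification
# code into a single byte. Field widths follow the example in the text.

def encode_service_id(layer: int, service: int) -> int:
    """Pack a 3-bit layer code and a 5-bit in-layer code into one byte."""
    if not 0 <= layer < 8:
        raise ValueError("layer identification code must fit in 3 bits")
    if not 0 <= service < 32:
        raise ValueError("in-layer identification code must fit in 5 bits")
    return (layer << 5) | service

def decode_service_id(code: int) -> tuple[int, int]:
    """Split a serial service identification code back into its fields."""
    return (code >> 5) & 0b111, code & 0b11111

code = encode_service_id(layer=3, service=17)   # e.g. layer 3, service 17
assert decode_service_id(code) == (3, 17)
```

With this layout the identification code processing part of the service identification module only needs one byte per request to dispatch it to the right layer service group.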
Further, a service identification module in the artificial intelligence support framework receives a service request input by the base station;
specifically, as shown in fig. 3, the service identification module includes an identification code processing part and a service processing notification part. To enable the artificial intelligence support framework to respond quickly to base station service requests, the identification code processing part parses the received serial service identification code and passes it to the service processing notification part, which generates a service processing notification;
further, as shown in fig. 2 and 3, the service identification module sends the service processing notification to the connected service processing module through the SRN2 interface (second service processing notification interface).
Further, unified data opening and data management at the base station is the basis for supporting service requests. In the present disclosure, after the artificial intelligence support framework is deployed locally at the base station, the base station opens its data using shared memory or distributed shared memory.
Furthermore, a data sharing module is provided in the artificial intelligence support framework, through which network state information is shared between the base station and the framework. The service data shared via the data sharing module includes user service requests, user channel environments, base station control information, and the like.
Further, the data sharing module is connected with the service identification module through an SRN1 interface (first service processing notification interface) to realize communication.
Further, as shown in fig. 3, after the service identification module receives the service request from the base station, it may obtain data management information from the data sharing module through the SRN1 interface, and then use the built-in mapping relation of its service-data mapping part, together with that data management information, to generate the data address table required by the service request;
further, the service processing notification section in the service identification module sends the obtained data address table to the service processing module through the SRN2 interface.
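The mapping step above can be sketched as a lookup from data management information to a per-request data address table; the table layout, field names, and addresses below are illustrative assumptions only:

```python
# Minimal sketch of the service-data mapping part: given data management
# information from the data sharing module, build the data address table
# for one service request. Names and addresses are hypothetical.

def build_data_address_table(service_id, data_management_info, service_data_map):
    """Return {data item: shared-memory address} for one service request."""
    required_items = service_data_map[service_id]      # built-in mapping
    return {item: data_management_info[item] for item in required_items}

# Toy inputs: which data each service needs, and where each item lives.
service_data_map = {0x71: ["user_service_request", "user_channel_environment"]}
data_management_info = {
    "user_service_request": 0x1000,       # shared-memory offsets (illustrative)
    "user_channel_environment": 0x2000,
    "base_station_control_info": 0x3000,
}
table = build_data_address_table(0x71, data_management_info, service_data_map)
assert table == {"user_service_request": 0x1000,
                 "user_channel_environment": 0x2000}
```

The service processing module can then read each required item directly from shared memory at the listed address, which is what makes its data reads fast.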
As shown in fig. 2, the data sharing module is connected to the service processing module; according to the received service processing notification and data address table, the service processing module calls from the data sharing module the service data required for processing the service request, which increases the speed at which the service processing module reads data.
In some embodiments, the internal structure of the service processing module S203 may be divided into at least one layer service group according to the protocol stack, as shown in fig. 4. Each layer service group adopts distributed cloud deployment, so that it can independently process the service requests of its own protocol stack layer, relieving the working pressure on the service processing module; the service processing notification and the data address table are received independently by the layer service group corresponding to the service request.
Further, as shown in fig. 4, each layer service group may include at least one service, wherein each service may execute a relatively independent artificial intelligence algorithm to support service requests.
Further, as shown in fig. 5, to implement unified management of the artificial intelligence algorithm group, an environment and a policy are set within each service; the service request and all service data related to it are regarded as the environment, and the policy may be an artificial intelligence algorithm that supports the service request;
in the present disclosure, an abstract policy may be adopted to encapsulate artificial intelligence algorithms and switch them dynamically. Specifically, as shown in fig. 5, a plurality of policies may be built into the abstract policy, each encapsulating at least one artificial intelligence algorithm; when the service environment changes, a different relevant policy may be adopted dynamically according to the specific environment.
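The encapsulation and dynamic switching described above resemble the classic strategy pattern; the sketch below, with hypothetical class names and a toy switching criterion, shows one way the abstract policy might host several algorithms:

```python
# Strategy-pattern sketch of the abstract policy: each concrete policy
# encapsulates one artificial intelligence algorithm, and the service
# switches policies when its environment changes. All names and the
# load-based switching rule are illustrative assumptions.

from abc import ABC, abstractmethod

class Policy(ABC):
    @abstractmethod
    def decide(self, environment: dict):
        """Make a decision for the current service environment."""

class QLearningPolicy(Policy):
    def decide(self, environment):
        return f"q-learning decision for load {environment['load']}"

class HeuristicPolicy(Policy):
    def decide(self, environment):
        return f"heuristic decision for load {environment['load']}"

class AbstractPolicy:
    """Holds several policies and switches dynamically with the environment."""
    def __init__(self):
        self.policies = {"low_load": HeuristicPolicy(),
                         "high_load": QLearningPolicy()}

    def decide(self, environment):
        key = "high_load" if environment["load"] > 0.8 else "low_load"
        return self.policies[key].decide(environment)

ap = AbstractPolicy()
assert "heuristic" in ap.decide({"load": 0.2})
assert "q-learning" in ap.decide({"load": 0.9})
```

Because every algorithm sits behind the same `decide` interface, swapping one in or out does not disturb the rest of the service.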
In some embodiments, because an artificial intelligence algorithm must train its model during actual operation, decisions on service requests would otherwise be delayed, and an optimal decision could not be made immediately. A parallel policy may therefore be adopted within the abstract policy, giving the algorithm sufficient time to obtain the feedback function returned by the base station and train its model, while still guaranteeing real-time support for the service request.
Specifically, as shown in fig. 6, a plurality of backup copies of the artificial intelligence algorithm for processing the service request may be saved in the parallel policy. In the present disclosure, taking two backups as an example, the algorithm is saved as a first backup A1 and a second backup A2;
further, as shown in fig. 6, the time axis of the parallel policy's operation may be divided into a plurality of time slots in chronological order. At the beginning of the first time slot, A1 makes a decision on the service request in combination with the local model (the A1 decision); the result of the A1 decision is sent to the base station through a DF interface (decision feedback interface), and the framework waits for the base station's execution information on the A1 decision (the A1 feedback function);
further, at the beginning of the second time slot, A2 makes a decision on the service request in combination with the local model (the A2 decision); the result of the A2 decision is sent to the base station through the DF interface, and the framework waits for the base station's updated execution information on the A2 decision (the A2 feedback function);
it should be noted that when the A1 decision is made in the first time slot and the A2 decision in the second time slot, the local model used may still be the original model, not yet optimized by training;
at any time during the second time slot, A1 may receive the A1 feedback function, so that training of the local model with the A1 feedback function is completed before the third time slot begins;
further, at the beginning of the third time slot, A1 makes another A1 decision on the service request in combination with the most recently trained local model; the result is sent to the base station through the DF interface, and the framework again waits for the A1 feedback function;
further, at any time during the third time slot, A2 may receive the A2 feedback function, so that retraining of the local model with the A2 feedback function is completed before the fourth time slot begins;
further, at the beginning of each subsequent time slot, the A1 and A2 decisions alternate, the A1 and A2 feedback functions are obtained alternately in chronological order, and the local model is trained continuously with the successively obtained feedback functions until the service request ends, whereupon both the parallel policy's decisions and the model training terminate.
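The alternating time-slot scheme above can be sketched as a small scheduling loop; the decision, training, and feedback functions below are stand-ins, and the slot bookkeeping is an illustrative assumption rather than the disclosed implementation:

```python
# Sketch of the parallel policy time line: two backups A1 and A2 of the
# same algorithm alternate decisions at the start of each time slot; the
# feedback for a slot's decision arrives during the following slot and is
# used to train the shared local model before that backup decides again.

def run_parallel_policy(num_slots, decide, train, get_feedback):
    backups = ["A1", "A2"]
    model_version = 0                   # shared local model, tracked by version
    pending = {}                        # backup -> slot whose feedback is awaited
    log = []
    for slot in range(num_slots):
        active = backups[slot % 2]      # A1 on even slots, A2 on odd slots
        # This backup's previous feedback arrived during the intervening
        # slot, so training on it has completed by now.
        if active in pending:
            model_version = train(model_version, get_feedback(pending[active]))
        log.append((slot, active, decide(active, model_version)))
        pending[active] = slot
    return log

log = run_parallel_policy(
    num_slots=4,
    decide=lambda b, v: f"{b} decision with model v{v}",
    train=lambda v, fb: v + 1,          # each feedback improves the model once
    get_feedback=lambda slot: f"feedback for slot {slot}",
)
# Slots 0 and 1 use the untrained model v0; later slots use trained models.
assert log[0] == (0, "A1", "A1 decision with model v0")
assert log[2] == (2, "A1", "A1 decision with model v1")
```

The loop makes the trade-off visible: a decision is emitted every slot (real-time support), while training always lags exactly one slot behind the feedback it consumes.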
In some embodiments, the artificial intelligence support framework deployment system further comprises: a service extension module S204; the service expansion module is connected with the service identification module and the service processing module;
the service extension module S204 is configured to: in response to determining that the service processing module does not support the service request, communicate with the service identification module and with the service processing module to process the service request that the service processing module does not support.
In some embodiments, the artificial intelligence support framework is extensible and can support base station service requests as microservices;
specifically, when the service processing module in the artificial intelligence support framework does not support a base station service request, a service extension module may be deployed in the framework. As shown in fig. 2, the service extension module is connected to the service identification module through an SE1 interface (first service extension interface) and to the service processing module through an SE2 interface (second service extension interface); support for the service request can then be reconfigured so that the service requests sent by the base station are processed.
In some embodiments, based on the artificial intelligence support framework constructed as described above, the system for deploying the artificial intelligence support framework further includes an independently deployed master control console; the master control console is connected with at least one artificial intelligence support framework;
wherein the master control console is configured to: for all the artificial intelligence support frameworks, in response to determining that the same service exists and adopts the same artificial intelligence algorithm, perform preset horizontal federated learning on that service to obtain a distributed training model for managing at least one artificial intelligence support framework.
In some embodiments, for a plurality of deployed artificial intelligence support frameworks, intelligent enabling of the mobile access network can be achieved by performing preset shared learning, namely horizontal federated learning, on their local models.
Specifically, as shown in fig. 7, a plurality of entities of the artificial intelligence support framework are each connected to an independently deployed master control console; this embodiment takes three entities as an example;
further, in the artificial intelligence support frameworks of all entities, if the same service exists and the same artificial intelligence algorithm is adopted in the service, the characteristics of the service are overlapped more, but the objects of the service are not overlapped, that is, the business states are the same, but the reached customers are different.
Further, in order to enable each entity to learn the other entities' processing experience and local models for service requests while ensuring the data privacy of each entity and its corresponding base station, the following horizontal federated learning may be adopted to construct a distributed training model based on sample data:
as shown in fig. 7, firstly, each entity performs the above training on the local model by using local data, and encrypts and uploads the gradient parameters therein to the master console;
furthermore, the master control console securely aggregates the gradient parameters of all entities and updates the parameters of the distributed training model;
further, the master control console returns the updated parameters to each entity;
furthermore, each entity updates the parameters of its local model according to the returned parameters and processes its own service requests.
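The four steps above amount to one round of FedAvg-style horizontal federated learning; the sketch below uses plain averaging in place of secure aggregation, and all function names and toy data are assumptions for illustration:

```python
# One round of horizontal federated learning as described above: each
# entity computes gradients on its local data and uploads them (encrypted,
# in a real system); the master control console aggregates them, updates
# the global parameters, and returns them to every entity.

def federated_round(global_params, entities, local_gradient, lr=0.1):
    # 1. Each entity computes gradients locally and uploads them.
    gradients = [local_gradient(global_params, e) for e in entities]
    # 2. The master control console aggregates the gradients (plain mean
    #    here, standing in for secure aggregation).
    avg = [sum(g[i] for g in gradients) / len(gradients)
           for i in range(len(global_params))]
    # 3. Updated parameters are returned to, and adopted by, every entity.
    return [p - lr * g for p, g in zip(global_params, avg)]

# Three entities (as in fig. 7) with toy local data: the gradient of a
# squared error pulls the single parameter toward each entity's target.
entities = [1.0, 2.0, 3.0]
local_gradient = lambda params, target: [2 * (params[0] - target)]
params = [0.0]
for _ in range(50):
    params = federated_round(params, entities, local_gradient)
# The shared model converges toward the mean of the entities' targets (2.0),
# even though no entity ever shared its raw data.
assert abs(params[0] - 2.0) < 1e-3
```

Only gradients cross the entity boundary, which is what preserves the data privacy of each entity and its base station.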
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, the functionality of the modules may be implemented in the same one or more software and/or hardware when implementing embodiments of the present disclosure.
The apparatus of the foregoing embodiment is used to implement the deployment method of the artificial intelligence support framework corresponding to any one of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples; within the idea of the present disclosure, technical features in the above embodiments or in different embodiments may also be combined, steps may be implemented in any order, and many other variations of the different aspects of the embodiments exist as described above, which are not provided in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures for simplicity of illustration and discussion, and so as not to obscure the embodiments of the disclosure. Furthermore, devices may be shown in block diagram form in order to avoid obscuring embodiments of the present disclosure, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which embodiments of the present disclosure are to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The embodiments of the present disclosure are intended to embrace all such alternatives, modifications and variances that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalents, improvements, and the like that may be made within the spirit and principles of the embodiments of the disclosure are intended to be included within the scope of the disclosure.

Claims (10)

1. A deployment method of an artificial intelligence support framework, applied to a deployment system of the artificial intelligence support framework, the deployment system comprising: a service identification module, a service processing module and a data sharing module;
the method comprises the following steps:
the service identification module receives a service request input into the artificial intelligence support framework, generates a service processing notification according to the service request and sends the service processing notification to the service processing module;
the service identification module acquires a data address table by using the data sharing module and sends the data address table to the service processing module;
the service processing module receives the service processing notification and the data address table independently through a plurality of layer service groups deployed on its distributed cloud, and obtains from the data sharing module, according to the service processing notification and the data address table, the service data related to the service request; each layer service group is provided with at least one service, the service independently executes an artificial intelligence algorithm by adopting a preset abstract policy, and a local model of the artificial intelligence algorithm is trained on the service data for processing the service request.
2. The method of claim 1, wherein the deployment system of the artificial intelligence support framework further comprises: a master control console; the master control console is independently deployed and connected with at least one artificial intelligence support framework;
the method further comprises the following steps:
and the master control console, in response to determining that the same service exists in all the artificial intelligence support frameworks and adopts the same artificial intelligence algorithm, performs preset horizontal federated learning on the same service to obtain a distributed training model for managing at least one artificial intelligence support framework.
3. The method of claim 1, wherein the artificial intelligence support framework deployment system further comprises: a service extension module; the service expansion module is connected with the service identification module and the service processing module;
the method further comprises the following steps:
the service extension module communicates with the service identification module and communicates with the service processing module to process the service request that is not supported by the service processing module in response to determining that the service processing module does not support the service request.
4. The method of claim 2, wherein performing preset horizontal federated learning on the same service comprises:
each artificial intelligence support framework performs the operations of: training the local model by using the service data local to the artificial intelligence support framework, and encrypting and uploading gradient parameters in the local model to the master control platform;
the master control console aggregates the gradient parameters of the artificial intelligence support frameworks and updates the parameters of the distributed training model;
the master console returns the updated parameters to each artificial intelligence support framework;
and each artificial intelligence support framework updates the local model thereof and processes the service request.
5. The method of claim 1, wherein independently executing the artificial intelligence algorithm using a preset abstraction policy comprises:
the service processing module sets a plurality of algorithm strategies in the service, and at least one artificial intelligence algorithm is packaged in each algorithm strategy;
the service processing module determines to adopt the corresponding algorithm strategy in response to the service request and the change of the service data.
6. The method of claim 1, wherein the training the local model of the artificial intelligence algorithm comprises:
the service processing module determines to store at least two backups of the artificial intelligence algorithm in response to the training of the local model and the processing of the service request both needing to be performed without delay;
the service processing module makes a decision on the service request by using different backups of the artificial intelligence algorithm at the beginning of every two adjacent time slots according to the local model and obtains a feedback function of each decision;
the service processing module trains the local model according to the feedback function and applies the local model to the decision of the next time slot starting;
the service processing module stops training of the local model and stops decision-making of the service request in response to determining that the service request is complete.
7. The method of claim 1, wherein the service identification module receives a service request entered into the artificial intelligence support framework, comprising:
the service identification module performs service identification on the serial service identification code contained in the service request;
the service identification module acquires a data address table from the data sharing module, and the data address table comprises:
and the service identification module obtains the data address table related to the service request by using a built-in mapping relation and according to the data management information obtained from the data sharing module.
8. A system for deploying an artificial intelligence support framework, comprising: the system comprises a service identification module, a data sharing module and a service processing module;
the service identification module is configured to receive a service request sent to the artificial intelligence support framework, generate a service processing notification according to the service request, and send the service processing notification to the service processing module;
the service identification module is further configured to acquire a data address table from the data sharing module and send the data address table to the service processing module;
the service processing module is configured to receive the service processing notification and the data address table independently through a plurality of layer service groups deployed on a distributed cloud, and to obtain the service data related to the service request according to the service processing notification and the data address table; each layer service group is provided with at least one service, the service independently executes an artificial intelligence algorithm by adopting a preset abstract policy, and a local model of the artificial intelligence algorithm is trained on the service data for processing the service request.
9. The system of claim 8, further comprising: an independently deployed master control console; the master control console is connected with at least one artificial intelligence support framework;
wherein the master control console is configured to: for all the artificial intelligence support frameworks, in response to determining that the same service exists and adopts the same artificial intelligence algorithm, perform preset horizontal federated learning on that service to obtain a distributed training model for managing at least one artificial intelligence support framework.
10. The system of claim 8, further comprising: a service extension module; the service expansion module is connected with the service identification module and the service processing module;
the service extension module is configured to: in response to determining that the service processing module does not support the service request, communicating with the service identification module and communicating with the service processing module to process the service request that is not supported by the service processing module.
CN202110553847.2A 2021-05-20 2021-05-20 Deployment method and system of artificial intelligence support framework Active CN113301141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110553847.2A CN113301141B (en) 2021-05-20 2021-05-20 Deployment method and system of artificial intelligence support framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110553847.2A CN113301141B (en) 2021-05-20 2021-05-20 Deployment method and system of artificial intelligence support framework

Publications (2)

Publication Number Publication Date
CN113301141A true CN113301141A (en) 2021-08-24
CN113301141B CN113301141B (en) 2022-06-17

Family

ID=77323292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110553847.2A Active CN113301141B (en) 2021-05-20 2021-05-20 Deployment method and system of artificial intelligence support framework

Country Status (1)

Country Link
CN (1) CN113301141B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023179801A1 (en) * 2022-03-24 2023-09-28 北京邮电大学 Data processing method and apparatus, communication system, electronic device, and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN111984364A (en) * 2019-05-21 2020-11-24 江苏艾蒂娜互联网科技有限公司 Artificial intelligence cloud platform for 5G era
WO2021004478A1 (en) * 2019-07-10 2021-01-14 华为技术有限公司 Distributed ai system
CN112491962A (en) * 2020-11-03 2021-03-12 深圳市中博科创信息技术有限公司 Model-driven intelligent distributed architecture method and platform
CN112534777A (en) * 2018-08-10 2021-03-19 华为技术有限公司 Hierarchical business perception engine based on artificial intelligence


Non-Patent Citations (2)

Title
CAO GANG等: "AIF: An Artificial Intelligence Framework for Smart Wireless Network Management", 《IEEE COMMUNICATIONS LETTERS》, 28 February 2018 (2018-02-28) *
LU ZHAOMING 等: "An Artificial Intelligence Enabled F-RAN Testbed", 《IEEE WIRELESS COMMUNICATIONS》, 30 April 2020 (2020-04-30) *


Also Published As

Publication number Publication date
CN113301141B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN107666692A (en) A kind of state transition method, user terminal and base station
JP6076480B2 (en) Service processing method and apparatus
WO2008022018A2 (en) Method and apparatus for maximizing resource utilization of base stations in a communication network
US11593905B2 (en) Electronic system
CN111797173B (en) Alliance chain sharing system, method and device, electronic equipment and storage medium
US20230132861A1 (en) Switching method and apparatus, device, and storage medium
CN104936258A (en) Network access method, terminal and system
CN103068076A (en) Single card multiple standby terminal, adapter module and subscriber identity module (SIM) card access method
CN107295610B (en) Network access method, related equipment and system
CN113301141B (en) Deployment method and system of artificial intelligence support framework
CN113727429A (en) Cross-network-group clock synchronization method and device, storage medium and terminal
CN105025103A (en) Cloud routing method and device for application service system based on TUXEDO middleware
CN114944971B (en) Method and device for deploying network by using Kubernetes, electronic equipment and storage medium
US11956702B2 (en) User equipment (UE) service over a network exposure function (NEF) in a wireless communication network
US11889593B2 (en) Wireless communication service over an edge data network (EDN) between a user equipment (UE) and an application server (AS)
US11303745B2 (en) Electronic system
CN104601346A (en) Method and apparatus for managing network connection of switch
CN115714785A (en) Method and equipment for determining computing power resource
US10860376B2 (en) Communication apparatus and base station
CN114356830B (en) Bus terminal control method, device, computer equipment and storage medium
CN110166506A (en) The connection method of hypertext transfer protocol Http and node device
US11785423B1 (en) Delivery of geographic location for user equipment (UE) in a wireless communication network
US11683672B2 (en) Distributed ledger control over wireless network slices
CN108874557A (en) A kind of front end interface processing method and system
CN115002215B (en) Cloud government enterprise oriented resource allocation model training method and resource allocation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant