Disclosure of Invention
An embodiment of the invention provides an artificial intelligence service system and a method for implementing an artificial intelligence service, which can implement the artificial intelligence service more securely.
In a first aspect, the present invention provides an artificial intelligence service system, including:
the system comprises a cloud computing platform, a fog computing cluster and at least one user terminal; wherein:
the fog computing cluster comprises at least two fog computing nodes;
each fog computing node is connected with the cloud computing platform;
each fog computing node is respectively connected with at least one user terminal;
each user terminal is respectively connected with at least one fog computing node;
the cloud computing platform is used for training an artificial intelligence model and issuing the trained artificial intelligence model to at least one target fog computing node;
the fog computing node is used for compressing the artificial intelligence model to form a target artificial intelligence model when receiving the artificial intelligence model issued by the cloud computing platform; and, when receiving a deployment request corresponding to the artificial intelligence model and sent by a target user terminal connected thereto, sending the target artificial intelligence model to the target user terminal;
the user terminal is used for sending a deployment request corresponding to the artificial intelligence model to the target fog computing node connected with the user terminal under the triggering of the user; receiving and deploying the target artificial intelligence model sent by the target fog computing node; and inputting a data set to be processed into the deployed target artificial intelligence model, and receiving a first processing result output by the deployed artificial intelligence model after the input data set to be processed is processed.
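The cloud/fog/terminal interaction described above can be sketched as follows. This is a minimal illustrative model only; all class and method names (`CloudPlatform`, `FogNode`, `UserTerminal`, and the toy "compression" that truncates a weight list) are assumptions for illustration and do not appear in the specification.

```python
# Minimal sketch of the claimed flow: cloud trains and issues the model,
# the fog node compresses it on receipt, and the terminal deploys the
# compressed target model after a deployment request.

class CloudPlatform:
    def train(self):
        # Stand-in for model training: the "model" is just a dict of weights.
        return {"weights": [0.5, 1.5, 2.5], "compressed": False}

    def issue(self, model, fog_nodes):
        # Issue the trained model to each target fog computing node.
        for node in fog_nodes:
            node.receive_model(model)

class FogNode:
    def __init__(self):
        self.model = None

    def receive_model(self, model):
        # Compress on receipt to form the (smaller) target model.
        compressed = dict(model)
        compressed["weights"] = model["weights"][:2]  # toy "compression"
        compressed["compressed"] = True
        self.model = compressed

    def handle_deploy_request(self):
        # Answer a terminal's deployment request with the target model.
        return self.model

class UserTerminal:
    def __init__(self, fog_node):
        self.fog_node = fog_node
        self.model = None

    def request_deployment(self):
        self.model = self.fog_node.handle_deploy_request()

    def process(self, data_set):
        # First processing result, computed locally on the terminal.
        return [x * w for x, w in zip(data_set, self.model["weights"])]

cloud = CloudPlatform()
fog = FogNode()
terminal = UserTerminal(fog)
cloud.issue(cloud.train(), [fog])
terminal.request_deployment()
result = terminal.process([2, 4])  # first processing result
```

Note that the data set never leaves the terminal on this path, which is the security property the embodiment relies on.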
Preferably,
the user terminal includes: a verification information acquisition unit, a proofreading unit and a feedback processing unit; wherein:
the verification information acquisition unit is used for acquiring a verification result corresponding to the data set to be processed;
the proofreading unit is used for checking whether the first processing result is the same as the verification result, and if the first processing result is different from the verification result, triggering the feedback processing unit;
the feedback processing unit is used for generating feedback information corresponding to the artificial intelligence model according to the data set to be processed and sending the feedback information to the target fog computing node which is correspondingly connected;
then, the fog computing node comprises: a storage processing unit; wherein:
the storage processing unit is used for receiving and storing feedback information corresponding to the artificial intelligence model;
the cloud computing platform comprises: a feedback information acquisition unit, a training processing unit and an issuing processing unit; wherein:
the feedback information acquisition unit is used for acquiring each piece of feedback information which is stored in each target fog computing node and corresponds to the artificial intelligence model;
the training processing unit is used for training the artificial intelligence model according to the acquired feedback information to form an optimized artificial intelligence model;
and the issuing processing unit is used for issuing the optimized artificial intelligence model to each target fog computing node.
Preferably,
the training processing unit is used for detecting the total feedback amount of each acquired feedback information, and training the artificial intelligence model according to each acquired feedback information to form an optimized artificial intelligence model when the total feedback amount reaches a preset amount.
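The threshold-triggered retraining described above can be sketched as follows. The class name, the `PRESET_AMOUNT` value, and the stubbed-out retraining are illustrative assumptions; only the trigger logic (retrain when the total feedback amount reaches a preset amount) comes from the text.

```python
# Hedged sketch of threshold-triggered retraining: the training processing
# unit retrains only once the accumulated feedback count reaches the
# preset amount, avoiding wasteful long-running training rounds.

PRESET_AMOUNT = 3  # illustrative preset amount

class TrainingProcessingUnit:
    def __init__(self):
        self.feedback = []
        self.retrain_count = 0

    def add_feedback(self, item):
        self.feedback.append(item)

    def maybe_retrain(self):
        # Detect the total feedback amount; train the model only when
        # the threshold is reached.
        if len(self.feedback) >= PRESET_AMOUNT:
            self.retrain_count += 1
            self.feedback.clear()  # feedback consumed by this round
            return True
        return False

unit = TrainingProcessingUnit()
for sample in ("a", "b"):
    unit.add_feedback(sample)
    unit.maybe_retrain()          # below threshold: no retraining yet
unit.add_feedback("c")
triggered = unit.maybe_retrain()  # third item reaches the preset amount
```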
Preferably,
the user terminal further comprises: a compression request unit; wherein:
the compression request unit is used for determining a compression rate under the trigger of a user and sending a compression request which carries the compression rate and corresponds to the artificial intelligence model to one correspondingly connected target fog computing node;
then, the fog computing node further comprises: a compression processing unit; wherein:
and the compression processing unit is used for compressing the received artificial intelligence model according to the compression rate carried in the received compression request to form the target artificial intelligence model when receiving the compression request which is sent by the target user terminal and corresponds to the artificial intelligence model.
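A compression step driven by a user-chosen compression rate might look like the sketch below. Real systems would use pruning or quantization; here the "model" is a plain weight list and the function name `compress_model` and the magnitude-pruning stand-in are illustrative assumptions, not part of the specification.

```python
# Illustrative sketch: compress a model according to a compression rate
# carried in the compression request. The toy scheme keeps the largest
# weights (a stand-in for magnitude pruning).

def compress_model(weights, compression_rate):
    """Keep roughly (1 - compression_rate) of the weights, largest first.

    compression_rate is a fraction in (0, 1); e.g. 0.75 discards 75% of
    the parameters.
    """
    if not 0 < compression_rate < 1:
        raise ValueError("compression rate must be between 0 and 1")
    keep = max(1, round(len(weights) * (1 - compression_rate)))
    # Rank indices by magnitude, keep the top entries, restore order.
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]),
                    reverse=True)[:keep]
    return [weights[i] for i in sorted(ranked)]

target = compress_model([0.1, -2.0, 0.05, 3.0], compression_rate=0.5)
```

The user picks the rate so that the resulting target model fits the terminal's limited computing power, which is the trade-off discussed later in the embodiment.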
Preferably,
the fog computing node comprises: a model deployment unit and a service response unit; wherein:
the model deployment unit is used for deploying the received artificial intelligence model;
the service response unit is used for inputting the data set to be processed into the deployed artificial intelligence model when receiving the data set to be processed sent by the target user terminal connected with the service response unit, receiving a second processing result output by the deployed artificial intelligence model after the input data set to be processed is processed, and sending the received second processing result to the target user terminal connected with the service response unit;
the user terminal includes: a service processing unit; wherein:
the service processing unit is used for sending the data set to be processed to the target fog computing node correspondingly connected with the service processing unit; and receiving and providing the second processing result sent by the one target fog computing node connected therewith.
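The online path above, where the terminal forwards the data set and receives the second processing result from the fog node, can be sketched as follows. The class names mirror the units in the text but are illustrative assumptions, as is the toy weight-multiplication "model".

```python
# Sketch of the online service path: the terminal's service processing
# unit sends the data set to a connected fog node, whose service response
# unit runs the (uncompressed) model and returns the second result.

class ServiceResponseUnit:
    def __init__(self, model_weights):
        self.model_weights = model_weights  # full model on the fog node

    def respond(self, data_set):
        # Run the deployed model on the received data set and return
        # the second processing result.
        return [x * w for x, w in zip(data_set, self.model_weights)]

class ServiceProcessingUnit:
    def __init__(self, fog_service):
        self.fog_service = fog_service

    def process_remotely(self, data_set):
        # Forward the data set to the connected fog node and surface
        # its second processing result to the user.
        return self.fog_service.respond(data_set)

fog_service = ServiceResponseUnit([1.0, 2.0, 3.0])
terminal_unit = ServiceProcessingUnit(fog_service)
second_result = terminal_unit.process_remotely([1, 1, 1])
```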
In a second aspect, an embodiment of the present invention provides a method for implementing an artificial intelligence service by using the artificial intelligence service system in any one of the first aspects, including:
training an artificial intelligence model by using a cloud computing platform, and issuing the trained artificial intelligence model to at least one target fog computing node;
compressing the artificial intelligence model by using the fog computing node to form a target artificial intelligence model when receiving the artificial intelligence model issued by the cloud computing platform;
sending a deployment request corresponding to the artificial intelligence model to the target fog computing node connected with the user terminal under the triggering of the user by utilizing the user terminal;
when receiving a deployment request corresponding to the artificial intelligence model and sent by a target user terminal connected with the fog computing node, sending the target artificial intelligence model to the target user terminal by using the fog computing node;
receiving and deploying the target artificial intelligence model sent by the target fog computing node by using the user terminal;
and inputting the data set to be processed into the deployed target artificial intelligence model by using the user terminal, and receiving a first processing result output by the deployed artificial intelligence model after the input data set to be processed is processed.
Preferably,
when the user terminal comprises a verification information acquisition unit, a proofreading unit and a feedback processing unit, the fog computing node comprises the storage processing unit, and the cloud computing platform comprises the feedback information acquisition unit, the training processing unit and the issuing processing unit,
further comprising:
acquiring a verification result corresponding to the data set to be processed by utilizing the verification information acquisition unit of the user terminal;
checking whether the first processing result is the same as the verification result by using the proofreading unit of the user terminal;
generating feedback information corresponding to the artificial intelligence model according to the data set to be processed by using the feedback processing unit of the user terminal when the first processing result is different from the verification result, and sending the feedback information to the target fog calculation node which is correspondingly connected;
receiving and storing feedback information corresponding to the artificial intelligence model by using the storage processing unit of each target fog computing node;
acquiring each piece of feedback information corresponding to the artificial intelligence model, which is stored in a storage processing unit of each target fog computing node, by using the feedback information acquisition unit of the cloud computing platform;
training the artificial intelligence model according to the acquired feedback information by utilizing the training processing unit of the cloud computing platform to form an optimized artificial intelligence model;
and issuing the optimized artificial intelligence model to each target fog computing node by using the issuing processing unit of the cloud computing platform.
Preferably,
the training the artificial intelligence model according to the obtained feedback information by using the training processing unit of the cloud computing platform to form an optimized artificial intelligence model, including:
and detecting the total feedback amount of each piece of acquired feedback information by using the training processing unit of the cloud computing platform, and training the artificial intelligence model according to each piece of acquired feedback information to form an optimized artificial intelligence model when the total feedback amount reaches a preset amount.
Preferably,
when the user terminal further includes the compression request unit and the fog computing node further includes a compression processing unit, before the fog computing node compresses the artificial intelligence model to form the target artificial intelligence model upon receiving the artificial intelligence model issued by the cloud computing platform, the method further includes:
determining a compression rate under the triggering of a user by utilizing the compression request unit of the user terminal, and sending a compression request carrying the compression rate and corresponding to the artificial intelligence model to a correspondingly connected target fog computing node;
then, said compressing said artificial intelligence model to form a target artificial intelligence model comprises:
and when the compression processing unit of the fog computing node receives a compression request which is sent by one target user terminal and corresponds to the artificial intelligence model, the compression processing unit compresses the received artificial intelligence model according to the compression rate carried in the received compression request to form the target artificial intelligence model.
Preferably,
when the fog computing node comprises the model deployment unit and the service response unit, and the user terminal comprises the service processing unit, the method further comprises the following steps:
deploying the received artificial intelligence model with the model deployment unit of the fog computing node;
sending a data set to be processed to one correspondingly connected target fog computing node by utilizing the service processing unit of the user terminal;
when the service response unit of the fog computing node receives the data set to be processed sent by one target user terminal connected with the service response unit, inputting the data set to be processed into the deployed artificial intelligence model, receiving a second processing result output after the deployed artificial intelligence model processes the input data set to be processed, and sending the received second processing result to the one target user terminal connected with the service response unit;
and receiving and providing, by utilizing the service processing unit of the user terminal, the second processing result sent by one correspondingly connected target fog computing node.
An embodiment of the invention provides an artificial intelligence service system and a method for implementing an artificial intelligence service. The system comprises a cloud computing platform, at least one user terminal, and a fog computing cluster comprising at least two fog computing nodes. The cloud computing platform can train an artificial intelligence model and issue the trained artificial intelligence model to one or more target fog computing nodes connected with the cloud computing platform. Each target fog computing node receiving the artificial intelligence model can compress the received artificial intelligence model to form a relatively small target artificial intelligence model. When the artificial intelligence service needs to be implemented, a user can send a deployment request corresponding to the artificial intelligence model to one target fog computing node connected with the user through the corresponding target user terminal, so that the corresponding target fog computing node provides the compressed target artificial intelligence model to the corresponding target user terminal. The target user terminal can then receive and deploy the target artificial intelligence model sent by the corresponding target fog computing node, and the target artificial intelligence model deployed at the target user terminal processes the data set to be processed so as to output a first processing result.
In summary, when the technical scheme provided by the invention is used for implementing the artificial intelligence service, the number of users accessing a user terminal is small relative to the cloud computing platform, so the probability that the artificial intelligence model deployed at the user terminal is maliciously altered by an intruder is relatively low; meanwhile, the data set to be processed does not need to be uploaded to the cloud computing platform, so leakage of the data set to be processed is unlikely to occur, and the artificial intelligence service can be implemented more securely.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are some, but not all, embodiments of the present invention; based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative efforts belong to the scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an artificial intelligence service system, including:
a cloud computing platform 101, a fog computing cluster 102, and at least one user terminal 103; wherein:
the fog computing cluster 102 includes at least two fog computing nodes 1021;
each fog computing node 1021 is connected with the cloud computing platform 101;
each fog computing node 1021 is respectively connected with at least one user terminal 103;
each user terminal 103 is respectively connected with at least one fog computing node 1021;
the cloud computing platform 101 is configured to train an artificial intelligence model, and issue the trained artificial intelligence model to at least one target fog computing node 1021;
the fog computing node 1021 is configured to, when receiving the artificial intelligence model delivered by the cloud computing platform 101, compress the artificial intelligence model to form a target artificial intelligence model; and, when receiving a deployment request corresponding to the artificial intelligence model and sent by a target user terminal 103 connected thereto, send the target artificial intelligence model to the target user terminal 103;
the user terminal 103 is configured to send a deployment request corresponding to the artificial intelligence model to the target fog computing node 1021 connected to the user terminal under the trigger of the user; receiving and deploying the target artificial intelligence model sent by the target fog computing node 1021; and inputting a data set to be processed into the deployed target artificial intelligence model, and receiving a first processing result output by the deployed artificial intelligence model after the input data set to be processed is processed.
As shown in fig. 1, the system includes a cloud computing platform, at least one user terminal, and a fog computing cluster including at least two fog computing nodes. The cloud computing platform may train an artificial intelligence model and issue the trained artificial intelligence model to one or more target fog computing nodes connected to the cloud computing platform. Each target fog computing node that receives the artificial intelligence model may compress the received artificial intelligence model to form a relatively smaller target artificial intelligence model. When the artificial intelligence service needs to be implemented, a user may send a deployment request corresponding to the artificial intelligence model to one target fog computing node connected to the user terminal through the corresponding target user terminal, so that the corresponding target fog computing node provides the compressed target artificial intelligence model to the corresponding target user terminal. The target user terminal may then receive and deploy the target artificial intelligence model sent by the corresponding target fog computing node, and the target artificial intelligence model deployed at the target user terminal processes the data set to be processed so as to output a first processing result. In summary, when the technical scheme provided by the invention is used for implementing the artificial intelligence service, the number of users accessing a user terminal is small relative to the cloud computing platform, so the probability that the artificial intelligence model deployed at the user terminal is maliciously altered by an intruder is relatively low; meanwhile, the data set to be processed does not need to be uploaded to the cloud computing platform, so leakage of the data set to be processed is unlikely to occur, and the artificial intelligence service can be implemented more securely.
Referring to fig. 2, fig. 3, and fig. 4, in an embodiment of the present invention, the user terminal 103 includes: a verification information acquisition unit 1031, a proofreading unit 1032 and a feedback processing unit 1033; wherein:
the verification information acquisition unit 1031 is configured to acquire a verification result corresponding to the to-be-processed data set;
the proofreading unit 1032 is configured to check whether the first processing result is the same as the verification result, and if the first processing result is different from the verification result, trigger the feedback processing unit 1033;
the feedback processing unit 1033 is configured to generate feedback information corresponding to the artificial intelligence model according to the data set to be processed, and send the feedback information to a correspondingly connected target fog computing node 1021;
then,
the fog computing node 1021 includes: a storage processing unit 10211; wherein:
the storage processing unit 10211 is configured to receive and store feedback information corresponding to the artificial intelligence model;
the cloud computing platform 101 includes: a feedback information acquisition unit 1011, a training processing unit 1012 and an issuing processing unit 1013; wherein:
the feedback information acquisition unit 1011 is configured to acquire each piece of feedback information corresponding to the artificial intelligence model stored in each target fog computing node 1021;
the training processing unit 1012 is configured to train the artificial intelligence model according to each acquired feedback information to form an optimized artificial intelligence model;
the issuing processing unit 1013 is configured to issue the optimized artificial intelligence model to each target fog computing node 1021.
On one hand, an artificial intelligence model trained by a cloud computing platform may not have a function of accurately processing a data set to be processed, and a formed target artificial intelligence model may not have a function of accurately processing the data set to be processed; on the other hand, after the artificial intelligence model is compressed into a relatively small target artificial intelligence model, the data processing capacity of the target artificial intelligence model is inevitably reduced, which may cause that the formed target artificial intelligence model cannot accurately process the data set to be processed.
Therefore, in the above embodiment, the verification information collecting unit of the user terminal collects the verification result corresponding to the to-be-processed data set (the verification result should be usable to accurately evaluate the accuracy of the first processing result output by the target artificial intelligence model, and may be determined manually or in other ways). The proofreading unit then checks whether the first processing result is the same as the collected verification result. If the verification result differs from the first processing result, the target artificial intelligence model has failed to accurately process the input to-be-processed data set; at this time, the feedback processing unit may generate feedback information corresponding to the artificial intelligence model (the feedback information may carry the to-be-processed data set and the verification result) and send it to one correspondingly connected target fog computing node, and the storage processing unit of each target fog computing node may store each piece of feedback information it receives. Subsequently, the cloud computing platform can obtain, through its feedback information acquisition unit, each piece of feedback information corresponding to the artificial intelligence model stored in each target fog computing node, and its training processing unit can further optimize and train the artificial intelligence model according to each piece of feedback information (a data set to be processed and the corresponding verification result) to form an optimized artificial intelligence model. The optimized artificial intelligence model can process the corresponding data set more accurately, and after the issuing processing unit issues the optimized artificial intelligence model to each target fog computing node, a more accurate artificial intelligence service can be provided to the user through the optimized artificial intelligence model.
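The terminal-side part of this feedback loop can be sketched as follows. The function name `check_and_feed_back`, the dict layout of the feedback record, and the list standing in for the fog node's storage are all illustrative assumptions; the logic (compare first result to verification result, store feedback on mismatch) follows the text.

```python
# Hedged sketch of the feedback mechanism: the terminal proofreads the
# first processing result against the collected verification result and,
# on mismatch, emits feedback (data set + verification result) that the
# fog node stores for the cloud platform's next optimization round.

def check_and_feed_back(first_result, verification_result, data_set,
                        fog_store):
    """Proofread the result; on mismatch, store feedback at the fog node."""
    if first_result == verification_result:
        return False
    # Feedback carries the data set and the correct (verification) result,
    # which the cloud platform later uses as extra training data.
    fog_store.append({"data_set": data_set,
                      "verification_result": verification_result})
    return True

fog_store = []  # stand-in for the fog node's storage processing unit
mismatch = check_and_feed_back(first_result="cat",
                               verification_result="dog",
                               data_set=[0.2, 0.7],
                               fog_store=fog_store)
```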
Specifically, in an embodiment of the present invention, the training processing unit 1012 is configured to detect the total feedback amount of the obtained feedback information, and to train the artificial intelligence model according to the obtained feedback information when the total feedback amount reaches a preset amount, so as to form an optimized artificial intelligence model. Training the artificial intelligence model on the cloud computing platform generally consumes considerable computing resources and training time; by further training the artificial intelligence model through the training processing unit only when the total feedback amount of the acquired feedback information reaches a certain amount, the resource waste caused by frequent long-running training on the cloud computing platform is avoided.
Referring to fig. 2 and fig. 3, in an embodiment of the present invention, the user terminal 103 further includes: a compression request unit 1034; wherein:
the compression request unit 1034 is configured to determine a compression rate under the trigger of a user, and send a compression request carrying the compression rate and corresponding to the artificial intelligence model to one of the correspondingly connected target fog computing nodes 1021;
then,
the fog computing node 1021 further comprises: a compression processing unit 10212; wherein:
the compression processing unit 10212 is configured to, when receiving a compression request corresponding to the artificial intelligence model sent by one of the target user terminals 103 connected to it, compress the received artificial intelligence model according to the compression rate carried in the received compression request to form a target artificial intelligence model.
In the above embodiment, the computing power of the user terminal is limited relative to the fog computing node. The compression rate is determined by the compression request unit of the user terminal under the trigger of the user, and the compression request carrying the compression rate is sent to the correspondingly connected target fog computing node, so that the target fog computing node can compress the artificial intelligence model it received according to the compression rate carried by the compression request to form a target artificial intelligence model of a suitable size. In this way, the data processing capability of the formed target artificial intelligence model and the computing power of the user terminal deploying it are both taken into account, so that the formed target artificial intelligence model can be deployed normally on the user terminal and can still process the data set to be processed accurately.
Referring to fig. 2 and 3, in an embodiment of the present invention, the fog computing node 1021 includes: a model deployment unit 10213 and a service response unit 10214; wherein:
the model deployment unit 10213 is configured to deploy the received artificial intelligence model;
the service response unit 10214 is configured to, when receiving the to-be-processed data set sent by one target user terminal 103 connected to it, input the to-be-processed data set into the deployed artificial intelligence model, receive a second processing result output by the deployed artificial intelligence model after processing the input to-be-processed data set, and send the received second processing result to the one target user terminal 103 connected to it;
the user terminal 103 includes: a service processing unit 1035; wherein:
the service processing unit 1035 is configured to send a data set to be processed to one of the target fog computing nodes 1021 connected to the service processing unit 1035; receiving and providing the second processing result sent by one of the target fog computing nodes 1021 connected thereto.
In this embodiment, the computing capacity of the fog computing node is relatively strong compared with that of the user terminal, while the data processing capability of the target artificial intelligence model deployed and running on the user terminal is relatively low compared with that of the full artificial intelligence model. The received artificial intelligence model is therefore deployed through the model deployment unit of the fog computing node. When the user requires high accuracy in the processing result corresponding to the data set to be processed, the data set to be processed can be sent through the service processing unit to a correspondingly connected target fog computing node, so that the artificial intelligence model deployed at the target fog computing node processes the data set to be processed and a more accurate second processing result is obtained.
By combining the above embodiments, when the user has a high requirement on the security of the artificial intelligence service, the offline artificial intelligence service can be adopted, that is, the relatively small target artificial intelligence model deployed in the user terminal processes the data set to be processed; when the user has a high requirement on the accuracy of the artificial intelligence service, the online artificial intelligence service can be adopted, that is, the data set to be processed is sent through the corresponding user terminal to a fog computing node in the fog computing cluster on which the corresponding artificial intelligence model is deployed, and the data set to be processed is processed by the artificial intelligence model with relatively high data processing capability deployed at the fog computing node.
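The offline/online choice summarized above can be sketched as a simple dispatch. The `serve` function, the `priority` flag, and the two weight-list "models" are illustrative assumptions; the specification does not prescribe how a request is classified as security-sensitive or accuracy-sensitive.

```python
# Sketch of the offline/online dispatch: security-sensitive requests stay
# on the terminal's compressed target model (data never leaves the
# terminal); accuracy-sensitive requests go to the fog node's full model.

def serve(data_set, priority, local_model, fog_model):
    if priority == "security":
        # Offline service: process locally on the user terminal.
        return ("terminal", [round(x * w, 6)
                             for x, w in zip(data_set, local_model)])
    # Online service: the fog node's larger model is more accurate.
    return ("fog", [round(x * w, 6)
                    for x, w in zip(data_set, fog_model)])

local_model = [1.0, 2.0]       # compressed target model on the terminal
fog_model = [1.05, 2.01, 0.4]  # fuller model deployed on the fog node

where, first_result = serve([3, 4], "security", local_model, fog_model)
where2, second_result = serve([3, 4], "accuracy", local_model, fog_model)
```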
As shown in fig. 5, an embodiment of the present invention provides a method for implementing artificial intelligence services by using an artificial intelligence service system provided in any embodiment of the present invention, including:
step 501, training an artificial intelligence model by using a cloud computing platform, and issuing the trained artificial intelligence model to at least one target fog computing node;
step 502, compressing the artificial intelligence model by using the fog computing node to form a target artificial intelligence model when receiving the artificial intelligence model issued by the cloud computing platform;
step 503, sending a deployment request corresponding to the artificial intelligence model to the target fog computing node connected with the user terminal under the trigger of the user by using the user terminal;
step 504, when receiving a deployment request corresponding to the artificial intelligence model sent by a target user terminal connected with the fog computing node, sending the target artificial intelligence model to the target user terminal by using the fog computing node;
step 505, receiving and deploying the target artificial intelligence model sent by the target fog computing node by using the user terminal;
step 506, inputting the data set to be processed into the deployed target artificial intelligence model by using the user terminal, and receiving a first processing result output by the deployed artificial intelligence model after processing the input data set to be processed.
In an embodiment of the present invention, when the user terminal includes a verification information acquisition unit, a checking unit, and a feedback processing unit, the fog computing node includes a storage processing unit, and the cloud computing platform includes a feedback information acquisition unit, a training processing unit, and an issuing processing unit, the method further includes:
acquiring a verification result corresponding to the data set to be processed by utilizing the verification information acquisition unit of the user terminal;
checking whether the first processing result is the same as the verification result by using the checking unit of the user terminal;
generating, by the feedback processing unit of the user terminal when the first processing result is different from the verification result, feedback information corresponding to the artificial intelligence model according to the data set to be processed, and sending the feedback information to the correspondingly connected target fog computing node;
receiving and storing feedback information corresponding to the artificial intelligence model by using the storage processing unit of each target fog computing node;
acquiring each piece of feedback information corresponding to the artificial intelligence model, which is stored in a storage processing unit of each target fog computing node, by using the feedback information acquisition unit of the cloud computing platform;
training the artificial intelligence model according to the acquired feedback information by utilizing the training processing unit of the cloud computing platform to form an optimized artificial intelligence model;
and issuing the optimized artificial intelligence model to each target fog computing node by using the issuing processing unit of the cloud computing platform.
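The verification-and-feedback loop above can be sketched as follows. The function names and the dictionary layout of a feedback record are hypothetical; in the claimed system the checking, storage, and retraining are performed by the named units on separate devices.

```python
# Storage processing unit of one target fog computing node (illustrative).
fog_node_store = []


def check(first_result, verification_result):
    # Checking unit: compare the model output with the verification result.
    return first_result == verification_result


def feed_back(data_set, first_result, verification_result):
    # Feedback processing unit: when the results differ, generate feedback
    # information and send it to the connected target fog computing node
    # (modeled here as appending to that node's store).
    if not check(first_result, verification_result):
        fog_node_store.append({"data_set": data_set,
                               "expected": verification_result,
                               "got": first_result})


# A mismatch produces feedback; a match does not.
feed_back([1, 2, 3], [2, 4, 7], [2, 4, 6])
feed_back([4, 5], [8, 10], [8, 10])


def collect_and_retrain(stores):
    # Cloud platform side: gather every piece of feedback from every target
    # fog node, then retrain to form the optimized model (placeholder).
    all_feedback = [fb for store in stores for fb in store]
    return {"status": "optimized", "trained_on": len(all_feedback)}
```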
In an embodiment of the present invention, the training the artificial intelligence model by using the training processing unit of the cloud computing platform according to the obtained feedback information to form an optimized artificial intelligence model, including: and detecting the total feedback amount of each piece of acquired feedback information by using the training processing unit of the cloud computing platform, and training the artificial intelligence model according to each piece of acquired feedback information to form an optimized artificial intelligence model when the total feedback amount reaches a preset amount.
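The preset-amount check can be expressed as a small guard function. The threshold value below is illustrative only; the embodiment leaves the preset amount unspecified.

```python
# Illustrative preset amount; the patent does not fix a concrete value.
PRESET_AMOUNT = 3


def maybe_retrain(feedback_items):
    # Training processing unit: detect the total feedback amount and train
    # only once it reaches the preset amount, deferring otherwise so the
    # cloud platform does not waste resources on frequent long retraining.
    if len(feedback_items) >= PRESET_AMOUNT:
        return {"status": "optimized", "samples": len(feedback_items)}
    return None
```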
In an embodiment of the present invention, when the user terminal further includes a compression request unit and the fog computing node further includes a compression processing unit, before the fog computing node, upon receiving the artificial intelligence model issued by the cloud computing platform, compresses the artificial intelligence model to form the target artificial intelligence model, the method further includes:
determining a compression ratio under the triggering of a user by utilizing the compression request unit of the user terminal, and sending a compression request carrying the compression ratio and corresponding to the artificial intelligence model to a correspondingly connected target fog computing node;
then, said compressing said artificial intelligence model to form a target artificial intelligence model comprises:
and when the compression processing unit of the fog computing node receives a compression request which is sent by one target user terminal and corresponds to the artificial intelligence model, the compression processing unit compresses the received artificial intelligence model according to the compression rate carried in the received compression request to form the target artificial intelligence model.
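A minimal sketch of the ratio-driven compression, with hypothetical function names and a size field standing in for the actual model parameters: the terminal chooses the compression ratio, and the fog node forms a target model of the corresponding size.

```python
def build_compression_request(compression_ratio):
    # Compression request unit: under the user's triggering, carry the
    # chosen compression ratio in the request sent to the fog node.
    return {"model": "ai_model", "ratio": compression_ratio}


def compress_model(model_size, request):
    # Compression processing unit: compress the received model according to
    # the ratio carried in the request, forming the target model's size.
    return int(model_size * request["ratio"])


# A terminal with weak computing capacity requests a quarter-size model.
request = build_compression_request(0.25)
target_size = compress_model(400, request)
```

The design point is that the ratio originates at the terminal, so the resulting target model is sized to what that terminal can actually deploy and run.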
In an embodiment of the present invention, when the fog computing node includes the model deployment unit and a service response unit, and the user terminal includes the service processing unit, the method further includes:
deploying the received artificial intelligence model with the model deployment unit of the fog computing node;
sending, by the service processing unit of the user terminal, the data set to be processed to one correspondingly connected target fog computing node;
when the service response unit of the fog computing node receives the data set to be processed sent by one target user terminal connected with the service response unit, inputting the received data set to be processed into the deployed artificial intelligence model, receiving a second processing result output after the deployed artificial intelligence model processes the input data set to be processed, and sending the received second processing result to the one target user terminal connected with the service response unit;
and receiving and providing the second processing result sent by one target fog computing node connected with the service processing unit of the user terminal by utilizing the service processing unit of the user terminal.
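The online path can be sketched in the same illustrative style, with hypothetical names: the terminal's service processing unit forwards the data set to a fog node, whose deployed (uncompressed) model produces the second processing result.

```python
class FogNodeService:
    def __init__(self):
        # Model deployment unit: the fog node deploys the full model locally.
        self.model_deployed = True

    def respond(self, data_set):
        # Service response unit: input the received data set into the deployed
        # model and return the second processing result (placeholder
        # inference: double each value).
        return [x * 2 for x in data_set]


def service_processing_unit(fog_node, data_set):
    # User terminal side: send the data set to the connected target fog node,
    # then receive and provide the second processing result to the user.
    return fog_node.respond(data_set)


second_result = service_processing_unit(FogNodeService(), [3, 4])
```

Unlike the offline path, the data set leaves the terminal here, trading some safety for the higher accuracy of the uncompressed model.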
In summary, the embodiments of the present invention have at least the following advantages:
1. In an embodiment of the invention, the system is composed of a cloud computing platform, at least one user terminal, and a fog computing cluster comprising at least two fog computing nodes. The cloud computing platform can train an artificial intelligence model and issue the trained artificial intelligence model to one or more target fog computing nodes connected with the cloud computing platform. Each target fog computing node receiving the artificial intelligence model can compress the received artificial intelligence model to form a relatively small target artificial intelligence model. When the artificial intelligence service needs to be realized, a user can send, through the corresponding target user terminal, a deployment request corresponding to the artificial intelligence model to one target fog computing node connected with that terminal, so that the target fog computing node provides the compressed target artificial intelligence model to the target user terminal. The target user terminal can then receive and deploy the target artificial intelligence model sent by the corresponding target fog computing node, process the data set to be processed through the target artificial intelligence model deployed at the target user terminal, and output a first processing result.
In summary, when the technical scheme provided by the embodiment of the invention is used to realize the artificial intelligence service, the number of users accessing the user terminal is relatively small compared with the cloud computing platform, so the probability that the artificial intelligence model deployed at the user terminal is maliciously altered by an intruder is relatively low. Meanwhile, the data set to be processed does not need to be uploaded to the cloud computing platform, so data leakage of the data set to be processed is less likely to occur, and the artificial intelligence service can be realized more safely.
2. In an embodiment of the invention, the verification information acquisition unit of the user terminal acquires a verification result corresponding to the data set to be processed, and the checking unit checks whether the first processing result is the same as the acquired verification result. If the verification result is different from the first processing result, the target artificial intelligence model has failed to accurately process the input data set to be processed; at this moment, the feedback processing unit can generate feedback information corresponding to the artificial intelligence model and send it to the correspondingly connected target fog computing node, and the storage processing unit of each target fog computing node can store each piece of feedback information it receives. In the subsequent process, the cloud computing platform can acquire, through its feedback information acquisition unit, each piece of feedback information corresponding to the artificial intelligence model stored in each target fog computing node, and the training processing unit of the cloud computing platform can then train the artificial intelligence model according to each piece of feedback information to form an optimized artificial intelligence model. After the issuing processing unit issues the optimized artificial intelligence model to each target fog computing node, more accurate artificial intelligence service can be provided for the user through the optimized artificial intelligence model.
3. In an embodiment of the invention, the cloud computing platform generally consumes more computing resources and a longer training time when training the artificial intelligence model. By further training the artificial intelligence model through the training processing unit only when the total feedback amount of the acquired feedback information reaches a preset amount, resource waste caused by frequently training the artificial intelligence model for a long time on the cloud computing platform is avoided.
4. In an embodiment of the invention, the computing capacity of the user terminal is poor relative to that of the fog computing node. The compression request unit of the user terminal determines a compression ratio under the triggering of the user and sends a compression request carrying the compression ratio to a correspondingly connected target fog computing node, so that the target fog computing node can compress the received artificial intelligence model according to the compression ratio carried in the request to form a target artificial intelligence model of a certain size. In this way, the data processing capacity of the formed target artificial intelligence model and the computing capacity of the user terminal deploying it are comprehensively considered, ensuring that the formed target artificial intelligence model can be normally deployed on the user terminal and can accurately process the data set to be processed.
5. In an embodiment of the invention, the computing power of the fog computing node is relatively strong compared with that of the user terminal, and the data processing capacity of the target artificial intelligence model deployed and operated on the user terminal is relatively low compared with that of the uncompressed artificial intelligence model. The received artificial intelligence model is deployed through the model deployment unit of the fog computing node; when the user's requirement on the accuracy of the processing result corresponding to the data set to be processed is high, the data set to be processed can be sent, through the service processing unit, to a correspondingly connected target fog computing node, and the artificial intelligence model deployed at the target fog computing node processes the data set to be processed to obtain a more accurate second processing result.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other similar elements in a process, method, article, or apparatus that comprises the element.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.