CN108667850B - Artificial intelligence service system and method for realizing artificial intelligence service - Google Patents


Info

Publication number
CN108667850B
CN108667850B (application number CN201810488394.8A)
Authority
CN
China
Prior art keywords
artificial intelligence
intelligence model
target
user terminal
computing node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810488394.8A
Other languages
Chinese (zh)
Other versions
CN108667850A (en)
Inventor
郝虹
段成德
姜凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Innovation and Entrepreneurship Technology Co Ltd
Original Assignee
Inspur Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Group Co Ltd
Priority to CN201810488394.8A
Publication of CN108667850A
Application granted
Publication of CN108667850B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/025 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F 18/2178 Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/12 Applying verification of the received information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1441 Countermeasures against malicious traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

The invention provides an artificial intelligence service system and a method for realizing artificial intelligence service. The system comprises a cloud computing platform, a user terminal and a fog computing cluster with fog computing nodes. The cloud computing platform trains an artificial intelligence model and sends the trained model to the fog computing nodes. On receiving the artificial intelligence model, a fog computing node compresses it to form a target artificial intelligence model, and, on receiving a deployment request sent by a user terminal connected with it, sends the target artificial intelligence model to that user terminal. The user terminal sends the deployment request to a fog computing node connected with it, receives and deploys the target artificial intelligence model, inputs a data set to be processed into the deployed model, and receives the first processing result the model outputs after processing the data set. By the technical scheme of the invention, the artificial intelligence service can be realized more safely.

Description

Artificial intelligence service system and method for realizing artificial intelligence service
Technical Field
The invention relates to the technical field of computers, in particular to an artificial intelligence service system and a method for realizing artificial intelligence service.
Background
With the development of cloud computing and big data, applications of providing artificial intelligence services by training artificial intelligence models are increasingly popularized in various industries.
At present, an artificial intelligence model is usually trained and deployed on a cloud computing platform to realize online artificial intelligence service, that is, a user can send a data set to be processed to the cloud computing platform through a user terminal, so that the artificial intelligence model deployed on the cloud computing platform can process the corresponding data set and feed back a processing result to the user terminal.
When the artificial intelligence service is realized in this way, the cloud computing platform is accessed by a large number of users, so the artificial intelligence model deployed on it may be maliciously changed by an intruder, the data sets uploaded to it can easily leak, and the security of the artificial intelligence service is therefore low.
Disclosure of Invention
The embodiment of the invention provides an artificial intelligence service system and a method for realizing artificial intelligence service, which can realize artificial intelligence service more safely.
In a first aspect, the present invention provides an artificial intelligence service system, including:
a cloud computing platform, a fog computing cluster and at least one user terminal; wherein,
the fog computing cluster comprises at least two fog computing nodes;
each fog computing node is connected with the cloud computing platform;
each fog computing node is respectively connected with at least one user terminal;
each user terminal is respectively connected with at least one fog computing node;
the cloud computing platform is used for training an artificial intelligence model and issuing the trained artificial intelligence model to at least one target fog computing node;
the fog computing node is used for compressing the artificial intelligence model to form a target artificial intelligence model when receiving the artificial intelligence model issued by the cloud computing platform; and, when receiving a deployment request corresponding to the artificial intelligence model sent by a target user terminal connected with the fog computing node, for sending the target artificial intelligence model to that target user terminal;
the user terminal is used for sending a deployment request corresponding to the artificial intelligence model to the target fog computing node connected with the user terminal under the triggering of the user; receiving and deploying the target artificial intelligence model sent by the target fog computing node; and inputting a data set to be processed into the deployed target artificial intelligence model and receiving a first processing result output by the deployed target artificial intelligence model after it processes the input data set.
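The cloud-to-fog-to-terminal flow above can be sketched in a few lines of Python. This is an illustrative toy only: the class names, the dictionary model format, and the magnitude-pruning "compression" are all assumptions, not the patent's implementation.

```python
# Toy sketch of cloud -> fog -> terminal deployment. All names and the
# pruning-style "compression" are illustrative assumptions.

def compress_model(model, rate=0.5):
    """Stand-in for compression: keep the largest-magnitude fraction
    `rate` of the weights."""
    kept = sorted(model["weights"], key=abs, reverse=True)
    kept = kept[: max(1, int(len(kept) * rate))]
    return {"name": model["name"], "weights": kept}

class FogNode:
    """Receives the trained model from the cloud, compresses it, and
    serves the compressed target model on a deployment request."""
    def __init__(self):
        self.target_model = None

    def receive_model(self, model):
        self.target_model = compress_model(model)

    def handle_deploy_request(self, model_name):
        if self.target_model is None or self.target_model["name"] != model_name:
            raise LookupError("no target model for " + model_name)
        return self.target_model

class UserTerminal:
    """Requests the target model, deploys it locally, and runs the
    data set through it without ever uploading the data."""
    def __init__(self, fog_node):
        self.fog_node = fog_node
        self.deployed = None

    def deploy(self, model_name):
        self.deployed = self.fog_node.handle_deploy_request(model_name)

    def process(self, data_set):
        # First processing result: a weighted sum stands in for inference.
        w = self.deployed["weights"]
        return [sum(x * wi for wi in w) for x in data_set]
```

For example, a fog node holding a model with weights [0.9, 0.1, -0.8, 0.05] would serve the pruned weights [0.9, -0.8] to a terminal that requests deployment; the data set itself never leaves the terminal.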
Preferably,
the user terminal includes: a verification information acquisition unit, a proofreading unit and a feedback processing unit; wherein,
the verification information acquisition unit is used for acquiring a verification result corresponding to the data set to be processed;
the proofreading unit is used for checking whether the first processing result is the same as the verification result, and, if they are different, triggering the feedback processing unit;
the feedback processing unit is used for generating feedback information corresponding to the artificial intelligence model according to the data set to be processed and sending the feedback information to the target fog computing node which is correspondingly connected;
then, the fog computing node comprises: a storage processing unit; wherein,
the storage processing unit is used for receiving and storing feedback information corresponding to the artificial intelligence model;
the cloud computing platform comprises: a feedback information acquisition unit, a training processing unit and an issuing processing unit; wherein,
the feedback information acquisition unit is used for acquiring each piece of feedback information corresponding to the artificial intelligence model stored in each target fog computing node;
the training processing unit is used for training the artificial intelligence model according to the acquired feedback information to form an optimized artificial intelligence model;
and the issuing processing unit is used for issuing the optimized artificial intelligence model to each target fog computing node.
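The feedback path just described (terminal-side proofreading, fog-side storage, cloud-side collection) can be sketched as follows. All function and class names here are illustrative assumptions, not the patent's own.

```python
# Illustrative sketch of the feedback path: terminal-side proofreading,
# fog-side storage, cloud-side collection. Names are assumptions.

def make_feedback(data_set, verification_result, first_result):
    """Terminal side: return feedback only when proofreading fails,
    i.e. the first processing result differs from the verification."""
    if first_result == verification_result:
        return None
    return {"data_set": data_set, "expected": verification_result}

class StorageUnit:
    """Fog-node side: keep every piece of feedback received."""
    def __init__(self):
        self.feedback = []

    def store(self, item):
        if item is not None:
            self.feedback.append(item)

def gather_feedback(fog_stores):
    """Cloud side: collect stored feedback from every target fog node
    so the model can be retrained on the failing cases."""
    return [item for store in fog_stores for item in store.feedback]
```

A matching result produces no feedback, so only genuinely failing data sets reach the cloud platform for retraining.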
Preferably,
the training processing unit is used for detecting the total feedback amount of each acquired feedback information, and training the artificial intelligence model according to each acquired feedback information to form an optimized artificial intelligence model when the total feedback amount reaches a preset amount.
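The threshold gate described above can be reduced to one small function; the `retrain` callback and the preset amount are illustrative stand-ins for the cloud platform's training job and configuration.

```python
# Sketch of the threshold gate: retrain only once the accumulated
# feedback reaches the preset amount. `retrain` is an illustrative
# callback standing in for the cloud platform's training job.

def maybe_retrain(feedback_items, preset_amount, retrain):
    if len(feedback_items) < preset_amount:
        return None  # keep accumulating; training is too costly to run now
    return retrain(feedback_items)
```

Gating on the total feedback amount is what prevents the platform from launching a long, expensive training run for every single piece of feedback.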
Preferably,
the user terminal further comprises: a compression request unit; wherein,
the compression request unit is used for determining a compression rate under the trigger of a user and sending a compression request which carries the compression rate and corresponds to the artificial intelligence model to one correspondingly connected target fog computing node;
then, the fog computing node further comprises: a compression processing unit; wherein,
and the compression processing unit is used for compressing the received artificial intelligence model according to the compression rate carried in the received compression request to form the target artificial intelligence model when receiving the compression request which is sent by the target user terminal and corresponds to the artificial intelligence model.
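The request/response pair above can be sketched as two functions. The request format and the magnitude-pruning scheme are assumptions; the patent does not prescribe a particular compression algorithm.

```python
# Illustrative sketch of the compression request path: the terminal
# chooses a rate and the fog node prunes to it. Formats are assumptions.

def build_compression_request(model_name, compression_rate):
    """Terminal side: the request carries the user-chosen rate."""
    if not 0.0 < compression_rate <= 1.0:
        raise ValueError("compression rate must be in (0, 1]")
    return {"model": model_name, "rate": compression_rate}

def handle_compression_request(weights, request):
    """Fog-node side: keep the largest-magnitude fraction of weights
    given by the rate carried in the request."""
    keep = max(1, int(len(weights) * request["rate"]))
    return sorted(weights, key=abs, reverse=True)[:keep]
```

Letting the terminal carry the rate in the request means each user can trade model size against accuracy for their own device.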
Preferably,
the fog computing node comprises: a model deployment unit and a service response unit; wherein,
the model deployment unit is used for deploying the received artificial intelligence model;
the service response unit is used for, when receiving a data set to be processed sent by a target user terminal connected with the fog computing node, inputting the received data set into the deployed artificial intelligence model, receiving a second processing result output by the deployed artificial intelligence model after it processes the input data set, and sending the received second processing result to that target user terminal;
the user terminal includes: a service processing unit; wherein,
the service processing unit is used for sending the data set to be processed to a target fog computing node correspondingly connected with the user terminal, and for receiving and providing the second processing result sent by that target fog computing node.
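This alternative serving path, where inference runs on the fog node rather than the terminal, can be sketched as follows. The weighted-sum "model" is a toy stand-in for real inference, and all names are assumptions.

```python
# Sketch of the fog-side serving path: the terminal forwards the data
# set, the fog node runs its deployed model and returns the second
# processing result. The weighted-sum model is a toy stand-in.

class FogService:
    def __init__(self):
        self.weights = None

    def deploy(self, weights):
        # Model deployment unit: install the (uncompressed) model.
        self.weights = weights

    def respond(self, data_set):
        # Service response unit: compute the second processing result.
        return [sum(x * w for w in self.weights) for x in data_set]

def service_request(service, data_set):
    """Terminal side: send the data set, receive and return the result."""
    return service.respond(data_set)
```

Compared with local deployment, this path keeps the full-size model on the fog node at the cost of sending the data set one hop away from the terminal.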
In a second aspect, an embodiment of the present invention provides a method for implementing an artificial intelligence service by using the artificial intelligence service system in any one of the first aspects, including:
training an artificial intelligence model by using a cloud computing platform, and issuing the trained artificial intelligence model to at least one target fog computing node;
compressing the artificial intelligence model by using the fog computing node to form a target artificial intelligence model when receiving the artificial intelligence model issued by the cloud computing platform;
sending a deployment request corresponding to the artificial intelligence model to the target fog computing node connected with the user terminal under the triggering of the user by utilizing the user terminal;
sending, by the fog computing node, the target artificial intelligence model to a target user terminal when receiving a deployment request corresponding to the artificial intelligence model sent by that target user terminal connected with the fog computing node;
receiving and deploying the target artificial intelligence model sent by the target fog computing node by using the user terminal;
and inputting the data set to be processed into the deployed target artificial intelligence model by using the user terminal, and receiving a first processing result output by the deployed target artificial intelligence model after it processes the input data set.
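The six method steps above can be strung together as one toy sequence. Every function here is an illustrative stand-in (the "training" and "inference" are deliberately trivial), not the patent's implementation.

```python
# Toy end-to-end run of the method steps: cloud trains, fog compresses,
# terminal deploys and processes locally. All stand-ins are assumptions.

def cloud_train(samples):
    # Step 1: "training" reduced to deriving weights from samples.
    return {"name": "model", "weights": sorted(samples, key=abs, reverse=True)}

def fog_compress(model, rate=0.5):
    # Steps 2-3: compress into the target model served on request.
    keep = max(1, int(len(model["weights"]) * rate))
    return {"name": model["name"], "weights": model["weights"][:keep]}

def terminal_run(target_model, data_set):
    # Steps 4-6: deploy locally and output the first processing result.
    w = target_model["weights"]
    return [sum(x * wi for wi in w) for x in data_set]

def run_service(samples, data_set):
    return terminal_run(fog_compress(cloud_train(samples)), data_set)
```

The point of the sequence is that only the compressed model moves toward the user, and only results move back; the data set stays on the terminal throughout.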
Preferably,
when the user terminal comprises the verification information acquisition unit, the proofreading unit and the feedback processing unit, the fog computing node comprises the storage processing unit, and the cloud computing platform comprises the feedback information acquisition unit, the training processing unit and the issuing processing unit,
further comprising:
acquiring a verification result corresponding to the data set to be processed by utilizing the verification information acquisition unit of the user terminal;
checking whether the first processing result is the same as the verification result by using the proofreading unit of the user terminal;
generating, by the feedback processing unit of the user terminal, feedback information corresponding to the artificial intelligence model according to the data set to be processed when the first processing result is different from the verification result, and sending the feedback information to the correspondingly connected target fog computing node;
receiving and storing feedback information corresponding to the artificial intelligence model by using the storage processing unit of each target fog computing node;
acquiring each piece of feedback information corresponding to the artificial intelligence model, which is stored in a storage processing unit of each target fog computing node, by using the feedback information acquisition unit of the cloud computing platform;
training the artificial intelligence model according to the acquired feedback information by utilizing the training processing unit of the cloud computing platform to form an optimized artificial intelligence model;
and issuing the optimized artificial intelligence model to each target fog computing node by using the issuing processing unit of the cloud computing platform.
Preferably,
the training the artificial intelligence model according to the obtained feedback information by using the training processing unit of the cloud computing platform to form an optimized artificial intelligence model, including:
and detecting the total feedback amount of each piece of acquired feedback information by using the training processing unit of the cloud computing platform, and training the artificial intelligence model according to each piece of acquired feedback information to form an optimized artificial intelligence model when the total feedback amount reaches a preset amount.
Preferably,
when the user terminal further includes the compression request unit and the fog computing node further includes the compression processing unit, before the fog computing node compresses the artificial intelligence model to form the target artificial intelligence model upon receiving it from the cloud computing platform, the method further includes:
determining a compression rate under the triggering of the user by using the compression request unit of the user terminal, and sending a compression request carrying the compression rate and corresponding to the artificial intelligence model to the correspondingly connected target fog computing node;
then, said compressing said artificial intelligence model to form a target artificial intelligence model comprises:
and when the compression processing unit of the fog computing node receives a compression request which is sent by one target user terminal and corresponds to the artificial intelligence model, the compression processing unit compresses the received artificial intelligence model according to the compression rate carried in the received compression request to form the target artificial intelligence model.
Preferably,
when the fog computing node comprises the model deployment unit and the service response unit, and the user terminal comprises the service processing unit, the method further comprises the following steps:
deploying the received artificial intelligence model with the model deployment unit of the fog computing node;
sending, by the service processing unit of the user terminal, the data set to be processed to the correspondingly connected target fog computing node;
when the service response unit of the fog computing node receives the data set to be processed sent by a target user terminal connected with it, inputting the received data set into the deployed artificial intelligence model, receiving a second processing result output by the deployed artificial intelligence model after it processes the input data set, and sending the received second processing result to that target user terminal;
and receiving and providing, by the service processing unit of the user terminal, the second processing result sent by the connected target fog computing node.
The embodiment of the invention provides an artificial intelligence service system and a method for realizing artificial intelligence service. The system comprises a cloud computing platform, at least one user terminal and a fog computing cluster comprising at least two fog computing nodes. The cloud computing platform trains an artificial intelligence model and issues the trained model to one or more target fog computing nodes connected with it. Each target fog computing node that receives the artificial intelligence model compresses it to form a relatively small target artificial intelligence model. When the artificial intelligence service is required, a user can send, through the corresponding target user terminal, a deployment request corresponding to the artificial intelligence model to a target fog computing node connected with that terminal, so that the target fog computing node provides the compressed target artificial intelligence model to the target user terminal. The target user terminal then receives and deploys the target artificial intelligence model sent by the target fog computing node, and uses the model deployed at the terminal to process the data set to be processed so as to output a first processing result.
In summary, when the technical scheme provided by the invention is used to realize the artificial intelligence service, the number of users accessing any single user terminal is small relative to the cloud computing platform, so the probability that the artificial intelligence model deployed at the user terminal is maliciously changed by an intruder is relatively low. Meanwhile, the data set to be processed does not need to be uploaded to the cloud computing platform, so it is far less likely to leak, and the artificial intelligence service can be realized more safely.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic structural diagram of an artificial intelligence service system according to an embodiment of the present invention;
FIG. 2 is a diagram of a user terminal in an artificial intelligence service system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a fog calculation node in an artificial intelligence service system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a cloud computing platform in an artificial intelligence service system according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for implementing artificial intelligence services using an artificial intelligence service system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention, and based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative efforts belong to the scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an artificial intelligence service system, including:
a cloud computing platform 101, a fog computing cluster 102, and at least one user terminal 103; wherein,
the fog computing cluster 102 includes at least two fog computing nodes 1021;
each fog computing node 1021 is connected with the cloud computing platform 101;
each fog computing node 1021 is respectively connected with at least one user terminal 103;
each user terminal 103 is respectively connected with at least one fog computing node 1021;
the cloud computing platform 101 is configured to train an artificial intelligence model, and issue the trained artificial intelligence model to at least one target fog computing node 1021;
the fog computing node 1021 is configured to, when receiving the artificial intelligence model delivered by the cloud computing platform 101, compress the artificial intelligence model to form a target artificial intelligence model; and, when receiving a deployment request corresponding to the artificial intelligence model sent by a target user terminal 103 connected with the fog computing node 1021, to send the target artificial intelligence model to that target user terminal 103;
the user terminal 103 is configured to send a deployment request corresponding to the artificial intelligence model to the target fog computing node 1021 connected with it under the trigger of the user; to receive and deploy the target artificial intelligence model sent by the target fog computing node 1021; and to input a data set to be processed into the deployed target artificial intelligence model and receive a first processing result output by the deployed target artificial intelligence model after it processes the input data set.
As shown in fig. 1, the system includes a cloud computing platform, at least one user terminal, and a fog computing cluster including at least two fog computing nodes. The cloud computing platform may train an artificial intelligence model and issue the trained model to one or more target fog computing nodes connected with it. Each target fog computing node that receives the artificial intelligence model may compress it to form a relatively smaller target artificial intelligence model. When the artificial intelligence service is required, a user may send, through the corresponding target user terminal, a deployment request corresponding to the artificial intelligence model to a target fog computing node connected with that terminal, so that the target fog computing node provides the compressed target artificial intelligence model to the target user terminal. The target user terminal may further receive and deploy the target artificial intelligence model sent by the corresponding target fog computing node, and process the data set to be processed through the model deployed at the terminal so as to output a first processing result. In summary, when the technical scheme provided by the invention is used to realize the artificial intelligence service, the number of users accessing any single user terminal is small relative to the cloud computing platform, so the probability that the artificial intelligence model deployed at the user terminal is maliciously changed by an intruder is relatively low. Meanwhile, the data set to be processed does not need to be uploaded to the cloud computing platform, so it is far less likely to leak, and the artificial intelligence service can be realized more safely.
Referring to fig. 2, fig. 3, and fig. 4, in an embodiment of the present invention, the user terminal 103 includes: a verification information acquisition unit 1031, a proofreading unit 1032 and a feedback processing unit 1033; wherein,
the verification information acquisition unit 1031 is configured to acquire a verification result corresponding to the to-be-processed data set;
the proofreading unit 1032 is configured to check whether the first processing result is the same as the verification result and, if they are different, to trigger the feedback processing unit 1033;
the feedback processing unit 1033 is configured to generate feedback information corresponding to the artificial intelligence model according to the data set to be processed, and send the feedback information to a correspondingly connected target fog computing node 1021;
then,
the fog computing node 1021 includes: a storage processing unit 10211; wherein,
the storage processing unit 10211 is configured to receive and store feedback information corresponding to the artificial intelligence model;
the cloud computing platform 101 includes: a feedback information acquisition unit 1011, a training processing unit 1012 and an issuing processing unit 1013; wherein,
the feedback information acquisition unit 1011 is configured to acquire each piece of feedback information corresponding to the artificial intelligence model stored in each target fog computing node 1021;
the training processing unit 1012 is configured to train the artificial intelligence model according to each acquired feedback information to form an optimized artificial intelligence model;
the issuing processing unit 1013 is configured to issue the optimized artificial intelligence model to each target fog computing node 1021.
On the one hand, an artificial intelligence model trained by the cloud computing platform may not yet be able to process the data set to be processed accurately, in which case the target artificial intelligence model formed from it will not be able to either. On the other hand, after the artificial intelligence model is compressed into a relatively small target artificial intelligence model, its data processing capability inevitably degrades, which may also prevent the target artificial intelligence model from processing the data set to be processed accurately.
Therefore, in the above embodiment, the verification information acquisition unit of the user terminal acquires the verification result corresponding to the data set to be processed (the verification result should allow the accuracy of the first processing result output by the target artificial intelligence model to be evaluated reliably, and may be determined manually or in other ways). The checking unit then checks whether the first processing result is the same as the acquired verification result. If the two differ, the target artificial intelligence model has failed to process the input data set accurately; the feedback processing unit then generates feedback information corresponding to the artificial intelligence model (the feedback information may carry the data set to be processed and the verification result) and sends it to one correspondingly connected target fog computing node, and the storage processing unit of each target fog computing node stores each piece of feedback information it receives. Subsequently, the cloud computing platform obtains, through its feedback information acquisition unit, each piece of feedback information corresponding to the artificial intelligence model stored in each target fog computing node, and its training processing unit further optimizes and trains the artificial intelligence model according to this feedback (the data sets to be processed and the corresponding verification results) to form an optimized artificial intelligence model that processes the corresponding data sets more accurately. After the issuing processing unit issues the optimized artificial intelligence model to each target fog computing node, a more accurate artificial intelligence service can be provided to the user through the optimized artificial intelligence model.
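The terminal-side check-and-feedback step described above can be sketched in Python as follows (an illustrative sketch only; the `Feedback` class, its field names, and the equality-based check are assumptions, not part of the patent):

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Feedback:
    """Feedback record for one inaccurately processed data set (hypothetical structure)."""
    dataset: Any        # the data set to be processed
    verification: Any   # the verification result (e.g., manually defined)

def check_and_feed_back(dataset: Any, first_result: Any,
                        verification: Any, outbox: List[Feedback]) -> bool:
    """Compare the model's first processing result with the verification result;
    on mismatch, queue feedback destined for the connected target fog computing node."""
    if first_result == verification:
        return True  # the target model handled this data set correctly
    # Carry the data set and the verification result so the cloud platform
    # can later use them for optimization training.
    outbox.append(Feedback(dataset, verification))
    return False
```

In a full system, `outbox` would be flushed over the network to the fog node's storage processing unit rather than held in memory.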
Specifically, in an embodiment of the present invention, the training processing unit 1012 is configured to detect the total amount of feedback information obtained, and to train the artificial intelligence model according to the obtained feedback information only when the total amount reaches a preset amount, so as to form an optimized artificial intelligence model. Training the artificial intelligence model on the cloud computing platform generally consumes considerable computing resources and training time; by further training the model through the training processing unit only when the total amount of acquired feedback information reaches a certain amount, the resource waste caused by training the artificial intelligence model on the cloud computing platform for long periods is avoided.
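The threshold-triggered retraining behavior of the training processing unit can be sketched as follows (a minimal sketch; the class name, the retrain callback, and the batch-and-flush policy are assumptions for illustration):

```python
from typing import Any, Callable, List

class TrainingProcessor:
    """Buffer feedback and retrain only once a preset amount has accumulated."""

    def __init__(self, preset_amount: int, retrain: Callable[[List[Any]], Any]):
        self.preset_amount = preset_amount
        self.retrain = retrain          # callback performing optimization training
        self.buffer: List[Any] = []

    def on_feedback(self, feedback: Any):
        """Store one piece of feedback; trigger training at the threshold."""
        self.buffer.append(feedback)
        if len(self.buffer) < self.preset_amount:
            return None                 # not enough feedback yet: save resources
        batch, self.buffer = self.buffer, []
        return self.retrain(batch)      # returns the optimized model
```

The point of the design is the guard clause: below the preset amount, no training run is started at all.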
Referring to fig. 2 and fig. 3, in an embodiment of the present invention, the user terminal 103 further includes: a compression request unit 1034; wherein,
the compression request unit 1034 is configured to determine a compression ratio under the trigger of a user, and send a compression request carrying the compression ratio and corresponding to the artificial intelligence model to one of the correspondingly connected target fog computing nodes 1021;
then,
the fog computing node 1021, further comprising: a compression processing unit 10212; wherein,
the compression processing unit 10212 is configured to, when receiving a compression request corresponding to the artificial intelligence model sent by one of the target user terminals 103 connected to it, compress the received artificial intelligence model according to the compression ratio carried in the received compression request to form the target artificial intelligence model.
In the above embodiment, the computing power of the user terminal is poor relative to that of the fog computing node. The compression ratio is determined by the compression request unit of the user terminal under the trigger of the user, and a compression request carrying the compression ratio is sent to a correspondingly connected target fog computing node, so that the target fog computing node can compress the artificial intelligence model it received according to that compression ratio to form a target artificial intelligence model of a suitable size. In this way both the data processing capacity of the formed target artificial intelligence model and the computing power of the user terminal deploying it are taken into account, ensuring that the formed target artificial intelligence model can be deployed normally on the user terminal and can process the data set to be processed accurately.
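The patent does not prescribe a particular compression algorithm; one common way such ratio-driven compression could work is magnitude pruning, sketched below over a plain list of weights (an illustration under that assumption, not the claimed method):

```python
def compress_model(weights, compression_ratio):
    """Zero out the smallest-magnitude weights so that roughly
    `compression_ratio` of them are removed (0.0 means no compression)."""
    if not 0.0 <= compression_ratio < 1.0:
        raise ValueError("compression ratio must be in [0, 1)")
    n_drop = int(len(weights) * compression_ratio)
    if n_drop == 0:
        return list(weights)
    # Threshold at the magnitude of the n_drop-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_drop - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

A larger ratio yields a smaller, weaker target model, which is exactly the trade-off the compression request unit lets the user control.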
Referring to fig. 2 and fig. 3, in an embodiment of the present invention, the fog computing node 1021 includes: a model deployment unit 10213 and a service response unit 10214; wherein,
the model deployment unit 10213 is configured to deploy the received artificial intelligence model;
the service response unit 10214 is configured to, when receiving the to-be-processed data set sent by one target user terminal 103 connected to it, input the to-be-processed data set into the deployed artificial intelligence model, receive a second processing result output by the deployed artificial intelligence model after processing the input data set, and send the received second processing result to that target user terminal 103;
the user terminal 103 includes: a service processing unit 1035; wherein,
the service processing unit 1035 is configured to send a data set to be processed to one of the target fog computing nodes 1021 connected to the service processing unit 1035; receiving and providing the second processing result sent by one of the target fog computing nodes 1021 connected thereto.
In this embodiment, the computing capacity of the fog computing node is relatively strong compared with that of the user terminal, and the data processing capacity of the target artificial intelligence model deployed on the user terminal is relatively low compared with that of the uncompressed artificial intelligence model. The received artificial intelligence model is deployed through the model deployment unit of the fog computing node; when the user requires high accuracy for the processing result corresponding to the data set to be processed, the service processing unit can send the data set to a correspondingly connected target fog computing node, where the deployed artificial intelligence model processes it and returns a more accurate second processing result.
By combining the above embodiments, when the user has a high requirement on the safety of the artificial intelligence service, the offline artificial intelligence service can be adopted, that is, the relatively small target artificial intelligence model deployed in the user terminal is used for processing the data set to be processed; when the accuracy requirement of the user on the artificial intelligence service is high, the online artificial intelligence service can be adopted, namely, the data set to be processed is sent to the fog computing nodes with the corresponding artificial intelligence models deployed in the fog computing cluster through the corresponding user terminals, and the data set to be processed is processed by the artificial intelligence models with relatively high data processing capacity deployed in the fog computing nodes.
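The offline/online choice described above amounts to a routing decision at the user terminal, which can be sketched as follows (a minimal sketch; the function name, the boolean flag, and representing the two models as callables are assumptions for illustration):

```python
def route_request(dataset, local_model, fog_node, require_high_accuracy: bool):
    """Choose offline (compressed target model on the terminal) or online
    (full artificial intelligence model on the fog node) inference.
    `local_model` and `fog_node` are callables in this sketch."""
    if require_high_accuracy:
        return fog_node(dataset)    # online: more accurate second processing result
    return local_model(dataset)     # offline: data set never leaves the terminal
```

Offline routing favors safety (the data set stays local); online routing favors accuracy (the uncompressed model processes it).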
As shown in fig. 5, an embodiment of the present invention provides a method for implementing artificial intelligence services by using an artificial intelligence service system provided in any embodiment of the present invention, including:
step 501, training an artificial intelligence model by using a cloud computing platform, and issuing the trained artificial intelligence model to at least one target fog computing node;
step 502, compressing the artificial intelligence model by using the fog computing node to form a target artificial intelligence model when receiving the artificial intelligence model issued by the cloud computing platform;
step 503, sending a deployment request corresponding to the artificial intelligence model to the target fog computing node connected with the user terminal under the trigger of the user by using the user terminal;
step 504, when receiving a deployment request corresponding to the artificial intelligence model sent by a target user terminal connected with the fog computing node, sending the target artificial intelligence model to the target user terminal by using the fog computing node;
step 505, receiving and deploying the target artificial intelligence model sent by the target fog computing node by using the user terminal;
step 506, inputting the data set to be processed into the deployed target artificial intelligence model by using the user terminal, and receiving a first processing result output by the deployed artificial intelligence model after processing the input data set to be processed.
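Steps 501 to 506 above can be outlined on the fog-node side roughly as follows (a minimal sketch; the class name `FogComputingNode` and its method names are hypothetical, not from the patent):

```python
class FogComputingNode:
    """Sketch of steps 502-505: receive the issued model, compress it,
    and hand the target model to a requesting target user terminal."""

    def __init__(self, compress):
        self.compress = compress       # compression routine (step 502)
        self.target_model = None

    def on_model_issued(self, model):
        # Step 502: compress the model issued by the cloud computing platform.
        self.target_model = self.compress(model)

    def on_deployment_request(self):
        # Steps 504-505: return the target model for deployment on the terminal.
        if self.target_model is None:
            raise RuntimeError("no artificial intelligence model issued yet")
        return self.target_model
```

The terminal then deploys the returned target model and runs step 506 locally on the data set to be processed.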
In an embodiment of the present invention, when the user terminal includes a verification information acquisition unit, a checking unit, and a feedback processing unit, the fog computing node includes the storage processing unit, and the cloud computing platform includes the feedback information acquisition unit, the training processing unit, and the issuing processing unit, the method further includes:
acquiring a verification result corresponding to the data set to be processed by utilizing the verification information acquisition unit of the user terminal;
checking whether the first processing result is the same as the verification result by using the checking unit of the user terminal;
generating feedback information corresponding to the artificial intelligence model according to the data set to be processed by using the feedback processing unit of the user terminal when the first processing result is different from the verification result, and sending the feedback information to the correspondingly connected target fog computing node;
receiving and storing feedback information corresponding to the artificial intelligence model by using the storage processing unit of each target fog computing node;
acquiring each piece of feedback information corresponding to the artificial intelligence model, which is stored in a storage processing unit of each target fog computing node, by using the feedback information acquisition unit of the cloud computing platform;
training the artificial intelligence model according to the acquired feedback information by utilizing the training processing unit of the cloud computing platform to form an optimized artificial intelligence model;
and issuing the optimized artificial intelligence model to each target fog computing node by using the issuing processing unit of the cloud computing platform.
In an embodiment of the present invention, the training the artificial intelligence model by using the training processing unit of the cloud computing platform according to the obtained feedback information to form an optimized artificial intelligence model, including: and detecting the total feedback amount of each piece of acquired feedback information by using the training processing unit of the cloud computing platform, and training the artificial intelligence model according to each piece of acquired feedback information to form an optimized artificial intelligence model when the total feedback amount reaches a preset amount.
In an embodiment of the present invention, when the user terminal further includes the compression request unit, and the fog computing node further includes a compression processing unit, before the utilizing the fog computing node, when receiving the artificial intelligence model delivered by the cloud computing platform, performs compression processing on the artificial intelligence model to form a target artificial intelligence model, the method further includes:
determining a compression ratio under the triggering of a user by utilizing the compression request unit of the user terminal, and sending a compression request carrying the compression ratio and corresponding to the artificial intelligence model to a correspondingly connected target fog computing node;
then, said compressing said artificial intelligence model to form a target artificial intelligence model comprises:
and when the compression processing unit of the fog computing node receives a compression request which is sent by one target user terminal and corresponds to the artificial intelligence model, the compression processing unit compresses the received artificial intelligence model according to the compression ratio carried in the received compression request to form the target artificial intelligence model.
In an embodiment of the present invention, when the fog computing node includes the model deployment unit and a service response unit, and the user terminal includes the service processing unit, the method further includes:
deploying the received artificial intelligence model with the model deployment unit of the fog computing node;
sending a data set to be processed to one target fog computing node correspondingly connected with the service processing unit of the user terminal by utilizing the service processing unit of the user terminal;
when the service response unit of the fog computing node receives the data set to be processed sent by one target user terminal connected with it, the to-be-processed data set is input into the deployed artificial intelligence model, a second processing result output after the deployed artificial intelligence model processes the input data set is received, and the received second processing result is sent to that target user terminal;
and receiving and providing the second processing result sent by one target fog computing node connected with the service processing unit of the user terminal by utilizing the service processing unit of the user terminal.
In summary, the embodiments of the present invention have at least the following advantages:
1. In an embodiment of the invention, the system is composed of a cloud computing platform, at least one user terminal, and a fog computing cluster comprising at least two fog computing nodes. The cloud computing platform trains an artificial intelligence model and issues the trained model to one or more target fog computing nodes connected with it; each target fog computing node that receives the artificial intelligence model compresses it to form a relatively small target artificial intelligence model. When the artificial intelligence service needs to be realized, a user can send a deployment request corresponding to the artificial intelligence model, through the corresponding target user terminal, to one target fog computing node connected with it, so that this target fog computing node provides the compressed target artificial intelligence model to the target user terminal. The target user terminal then receives and deploys the target artificial intelligence model sent by the corresponding target fog computing node, and processes the data set to be processed through the target artificial intelligence model deployed on it to output a first processing result.
In summary, when the technical scheme provided by the embodiment of the invention is used to realize the artificial intelligence service, the number of users accessing a user terminal is relatively small compared with a cloud computing platform, so the probability that the artificial intelligence model deployed on the user terminal is maliciously altered by an intruder is relatively low; meanwhile, the data set to be processed does not need to be uploaded to the cloud computing platform, so leakage of the data set is less likely, and the artificial intelligence service can be realized more safely.
2. In an embodiment of the invention, the verification information acquisition unit of the user terminal acquires the verification result corresponding to the data set to be processed, and the checking unit checks whether the first processing result is the same as the acquired verification result. If the two differ, the target artificial intelligence model has failed to process the input data set accurately; the feedback processing unit then generates feedback information corresponding to the artificial intelligence model and sends it to a correspondingly connected target fog computing node, and the storage processing unit of each target fog computing node stores each piece of feedback information it receives. Subsequently, the cloud computing platform acquires, through its feedback information acquisition unit, each piece of feedback information corresponding to the artificial intelligence model stored in each target fog computing node, and its training processing unit further optimizes and trains the artificial intelligence model according to this feedback to form an optimized artificial intelligence model. After the issuing processing unit issues the optimized artificial intelligence model to each target fog computing node, a more accurate artificial intelligence service can be provided to the user through the optimized artificial intelligence model.
3. In an embodiment of the invention, training the artificial intelligence model on the cloud computing platform generally consumes considerable computing resources and training time; the artificial intelligence model is further trained through the training processing unit only when the total amount of acquired feedback information reaches a certain amount, so that the resource waste caused by training the artificial intelligence model on the cloud computing platform for long periods is avoided.
4. In an embodiment of the invention, the computing power of the user terminal is poor relative to that of the fog computing node. The compression ratio is determined by the compression request unit of the user terminal under the trigger of the user, and a compression request carrying the compression ratio is sent to a correspondingly connected target fog computing node, so that the target fog computing node can compress the artificial intelligence model it received according to that compression ratio to form a target artificial intelligence model of a suitable size. Both the data processing capacity of the formed target artificial intelligence model and the computing power of the user terminal deploying it are thus taken into account, ensuring that the formed target artificial intelligence model can be deployed normally on the user terminal and can process the data set to be processed accurately.
5. In an embodiment of the invention, the computing power of the fog computing node is relatively strong compared with that of the user terminal, and the data processing capacity of the target artificial intelligence model deployed on the user terminal is relatively low compared with that of the uncompressed artificial intelligence model. The received artificial intelligence model is deployed through the model deployment unit of the fog computing node; when the user requires high accuracy for the processing result corresponding to the data set to be processed, the service processing unit can send the data set to a correspondingly connected target fog computing node, where the deployed artificial intelligence model processes it and returns a more accurate second processing result.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other similar elements in a process, method, article, or apparatus that comprises the element.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An artificial intelligence service system, comprising:
the system comprises a cloud computing platform, a fog computing cluster and at least one user terminal; wherein,
the fog computing cluster comprises at least two fog computing nodes;
each fog computing node is connected with the cloud computing platform;
each fog computing node is respectively connected with at least one user terminal;
each user terminal is respectively connected with at least one fog computing node;
the cloud computing platform is used for training an artificial intelligence model and issuing the trained artificial intelligence model to at least one target fog computing node;
the fog computing node is used for compressing the artificial intelligence model to form a target artificial intelligence model when receiving the artificial intelligence model issued by the cloud computing platform; when a deployment request corresponding to the artificial intelligence model and sent by a target user terminal connected with the target user terminal is received, sending the target artificial intelligence model to the target user terminal;
the user terminal is used for sending a deployment request corresponding to the artificial intelligence model to the target fog computing node connected with the user terminal under the triggering of the user; receiving and deploying the target artificial intelligence model sent by the target fog computing node; and inputting a data set to be processed into the deployed target artificial intelligence model, and receiving a first processing result output by the deployed artificial intelligence model after the input data set to be processed is processed.
2. The system of claim 1,
the user terminal includes: a verification information acquisition unit, a checking unit and a feedback processing unit; wherein,
the verification information acquisition unit is used for acquiring a verification result corresponding to the data set to be processed;
the checking unit is used for checking whether the first processing result is the same as the verification result or not, and if the first processing result is different from the verification result, the feedback processing unit is triggered;
the feedback processing unit is used for generating feedback information corresponding to the artificial intelligence model according to the data set to be processed and sending the feedback information to a target fog computing node which is correspondingly connected;
then,
the fog computing node, comprising: a storage processing unit; wherein,
the storage processing unit is used for receiving and storing feedback information corresponding to the artificial intelligence model;
the cloud computing platform comprises: a feedback information acquisition unit, a training processing unit and an issuing processing unit; wherein,
the feedback information acquisition unit is used for acquiring each piece of feedback information which is stored in each target fog calculation node and corresponds to the artificial intelligence model;
the training processing unit is used for training the artificial intelligence model according to the acquired feedback information to form an optimized artificial intelligence model;
and the issuing processing unit is used for issuing the optimized artificial intelligence model to each target fog computing node.
3. The system of claim 2,
the training processing unit is used for detecting the total feedback amount of each acquired feedback information, and training the artificial intelligence model according to each acquired feedback information to form an optimized artificial intelligence model when the total feedback amount reaches a preset amount.
4. The system of claim 2,
the user terminal further comprises: a compression request unit; wherein,
the compression request unit is used for determining a compression ratio under the trigger of a user and sending a compression request which carries the compression ratio and corresponds to the artificial intelligence model to one correspondingly connected target fog computing node;
then,
the fog computing node further comprises: a compression processing unit; wherein,
and the compression processing unit is used for, when receiving the compression request which is sent by the target user terminal and corresponds to the artificial intelligence model, compressing the received artificial intelligence model according to the compression ratio carried in the received compression request to form the target artificial intelligence model.
5. The system of claim 1,
the fog computing node, comprising: a model deployment unit and a service response unit; wherein,
the model deployment unit is used for deploying the received artificial intelligence model;
the service response unit is used for, when receiving the to-be-processed data set sent by the target user terminal connected with it, inputting the to-be-processed data set into the deployed artificial intelligence model, receiving a second processing result output by the deployed artificial intelligence model after processing the input data set, and sending the received second processing result to the target user terminal connected with it;
the user terminal includes: a service processing unit; wherein,
the service processing unit is used for sending the data set to be processed to the target fog computing node correspondingly connected with the service processing unit; and receiving and providing the second processing result sent by one target fog computing node connected with the target fog computing node.
6. A method for implementing artificial intelligence service using the artificial intelligence service system of any one of claims 1 to 5, comprising:
training an artificial intelligence model by using a cloud computing platform, and issuing the trained artificial intelligence model to at least one target fog computing node;
compressing the artificial intelligence model by using the fog computing node to form a target artificial intelligence model when receiving the artificial intelligence model issued by the cloud computing platform;
sending a deployment request corresponding to the artificial intelligence model to the target fog computing node connected with the user terminal under the triggering of the user by utilizing the user terminal;
when receiving a deployment request corresponding to the artificial intelligence model and sent by a target user terminal connected with the fog computing node, the fog computing node sends the target artificial intelligence model to the target user terminal;
receiving and deploying the target artificial intelligence model sent by the target fog computing node by using the user terminal;
and inputting the data set to be processed into the deployed target artificial intelligence model by using the user terminal, and receiving a first processing result output by the deployed artificial intelligence model after the input data set to be processed is processed.
7. The method of claim 6,
when the user terminal comprises a verification information acquisition unit, a checking unit and a feedback processing unit, the fog computing node comprises a storage processing unit, and the cloud computing platform comprises a feedback information acquisition unit, a training processing unit and an issuing processing unit,
further comprising:
acquiring a verification result corresponding to the data set to be processed by utilizing the verification information acquisition unit of the user terminal;
checking whether the first processing result is the same as the verification result by using the checking unit of the user terminal;
generating feedback information corresponding to the artificial intelligence model according to the data set to be processed by utilizing the feedback processing unit of the user terminal when the first processing result is different from the verification result, and sending the feedback information to a correspondingly connected target fog computing node;
receiving and storing feedback information corresponding to the artificial intelligence model by utilizing a storage processing unit of each target fog computing node;
acquiring each piece of feedback information corresponding to the artificial intelligence model, which is stored in a storage processing unit of each target fog computing node, by using the feedback information acquisition unit of the cloud computing platform;
training the artificial intelligence model according to the acquired feedback information by utilizing the training processing unit of the cloud computing platform to form an optimized artificial intelligence model;
and issuing the optimized artificial intelligence model to each target fog computing node by using the issuing processing unit of the cloud computing platform.
8. The method of claim 7,
the training the artificial intelligence model according to the obtained feedback information by using the training processing unit of the cloud computing platform to form an optimized artificial intelligence model, including:
and detecting the total feedback amount of each piece of acquired feedback information by using the training processing unit of the cloud computing platform, and training the artificial intelligence model according to each piece of acquired feedback information to form an optimized artificial intelligence model when the total feedback amount reaches a preset amount.
9. The method of claim 7,
when the user terminal further includes a compression request unit, and the fog computing node further includes a compression processing unit, before the artificial intelligence model is compressed by the fog computing node when the artificial intelligence model issued by the cloud computing platform is received, so as to form a target artificial intelligence model, the method further includes:
determining a compression ratio under the triggering of a user by utilizing the compression request unit of the user terminal, and sending a compression request carrying the compression ratio and corresponding to the artificial intelligence model to a correspondingly connected target fog computing node;
then, said compressing said artificial intelligence model to form a target artificial intelligence model comprises:
and when the compression processing unit of the fog computing node receives a compression request which is sent by one target user terminal and corresponds to the artificial intelligence model, the compression processing unit compresses the received artificial intelligence model according to the compression ratio carried in the received compression request to form the target artificial intelligence model.
10. The method of claim 6,
when the fog computing node comprises a model deployment unit and a service response unit, and the user terminal comprises a service processing unit, the method further comprises the following steps:
deploying the received artificial intelligence model with the model deployment unit of the fog computing node;
sending a data set to be processed to one target fog computing node correspondingly connected with the service processing unit of the user terminal by utilizing the service processing unit of the user terminal;
when the service response unit of the fog computing node receives the data set to be processed sent by one target user terminal connected to it, inputting the data set to be processed into the deployed artificial intelligence model, receiving a second processing result output by the deployed artificial intelligence model after processing the input data set, and sending the received second processing result to the one target user terminal connected to it;
and receiving and providing, by the service processing unit of the user terminal, the second processing result sent by the one target fog computing node connected to it.
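The deploy-and-serve flow of claim 10 can be sketched end to end. This is a hedged illustration only: the class names, the callable-as-model representation, and the in-process "connection" stand in for the patent's fog node, deployed model, and network link.

```python
class FogComputingNode:
    """Sketch of a fog node with a model deployment unit and a
    service response unit."""

    def __init__(self):
        self.model = None

    def deploy(self, model_fn):
        # Model deployment unit: install the received model for serving.
        self.model = model_fn

    def handle_request(self, data_set):
        # Service response unit: feed the data set to the deployed model
        # and return the second processing result to the caller.
        if self.model is None:
            raise RuntimeError("no artificial intelligence model deployed")
        return [self.model(x) for x in data_set]


class UserTerminal:
    """Sketch of a user terminal's service processing unit, connected
    to one target fog computing node."""

    def __init__(self, fog_node):
        self.fog_node = fog_node

    def process(self, data_set):
        # Send the data set to the connected fog node and receive the result.
        return self.fog_node.handle_request(data_set)
```

For instance, after deploying a doubling model, `UserTerminal(node).process([1, 2, 3])` returns `[2, 4, 6]`.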
CN201810488394.8A 2018-05-21 2018-05-21 Artificial intelligence service system and method for realizing artificial intelligence service Active CN108667850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810488394.8A CN108667850B (en) 2018-05-21 2018-05-21 Artificial intelligence service system and method for realizing artificial intelligence service

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810488394.8A CN108667850B (en) 2018-05-21 2018-05-21 Artificial intelligence service system and method for realizing artificial intelligence service

Publications (2)

Publication Number Publication Date
CN108667850A CN108667850A (en) 2018-10-16
CN108667850B true CN108667850B (en) 2020-10-27

Family

ID=63777179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810488394.8A Active CN108667850B (en) 2018-05-21 2018-05-21 Artificial intelligence service system and method for realizing artificial intelligence service

Country Status (1)

Country Link
CN (1) CN108667850B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109408500B (en) * 2018-11-06 2020-11-17 北京深度奇点科技有限公司 Artificial intelligence operation platform
CN109828831B (en) * 2019-02-12 2020-10-16 成都考拉悠然科技有限公司 Artificial intelligence cloud platform
CN112241648A (en) * 2019-07-16 2021-01-19 杭州海康威视数字技术股份有限公司 Image processing system and image device
CN110505446A (en) * 2019-07-29 2019-11-26 西安电子科技大学 The hotel's video security protection system calculated based on mist
CN112306925B (en) * 2019-08-02 2023-02-10 华为技术有限公司 Access request processing method, device, equipment and storage medium
CN110765077B (en) * 2019-11-07 2022-06-28 中电福富信息科技有限公司 Method and system for uniformly managing AI model based on distributed file system
US11582260B2 (en) * 2019-11-14 2023-02-14 Baidu Usa Llc Systems and methods for verifying a watermark of an AI model for a data processing accelerator
CN111835548B (en) * 2020-03-02 2021-03-23 北京物资学院 Artificial intelligence model processing method and device in O-RAN system
CN111581615A (en) * 2020-05-08 2020-08-25 南京大创师智能科技有限公司 Method and system for providing artificial intelligence platform for individuals
CN113747462A (en) * 2020-05-30 2021-12-03 华为技术有限公司 Information processing method and related equipment
CN112394950B (en) * 2021-01-19 2021-04-27 共达地创新技术(深圳)有限公司 AI model deployment method, device and storage medium
CN113691579A (en) * 2021-06-30 2021-11-23 山东新一代信息产业技术研究院有限公司 Robot AI service method and system based on cloud edge
WO2023082112A1 (en) * 2021-11-10 2023-05-19 Nokia Shanghai Bell Co., Ltd. Apparatus, methods, and computer programs

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107734558A (en) * 2017-10-26 2018-02-23 北京邮电大学 A kind of control of mobile edge calculations and resource regulating method based on multiserver
CN107766889A (en) * 2017-10-26 2018-03-06 济南浪潮高新科技投资发展有限公司 A kind of the deep learning computing system and method for the fusion of high in the clouds edge calculations
CN107808098A (en) * 2017-09-07 2018-03-16 阿里巴巴集团控股有限公司 A kind of model safety detection method, device and electronic equipment
CN107871164A (en) * 2017-11-17 2018-04-03 济南浪潮高新科技投资发展有限公司 A kind of mist computing environment personalization deep learning method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180067779A1 (en) * 2016-09-06 2018-03-08 Smartiply, Inc. AP-Based Intelligent Fog Agent


Also Published As

Publication number Publication date
CN108667850A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
CN108667850B (en) Artificial intelligence service system and method for realizing artificial intelligence service
US11157512B2 (en) Method and system for replicating data to heterogeneous database and detecting synchronization error of heterogeneous database through SQL packet analysis
US20190385612A1 (en) Low power integrated circuit to analyze a digitized audio stream
CN110490246B (en) Garbage category determination method and device, storage medium and electronic equipment
CN106940679A (en) Data processing method and device
CN108256718B (en) Policy service task allocation method and device, computer equipment and storage equipment
CN111345011A (en) APP pushing method and device, electronic equipment and computer readable storage medium
CN109669795B (en) Crash information processing method and device
CN107786628B (en) Service number distribution method and device, computer equipment and storage medium
CN107896170B (en) Insure the monitoring method and device of application system
CN109801191A (en) A kind of legal document is sent to method, collection methods and system
CN105227557A (en) A kind of account number processing method and device
CN110019285A (en) A kind of alert identification allocating method and electronic equipment
CN114443135A (en) Model deployment method and prediction method, device, electronic equipment and storage medium
CN113778864A (en) Test case generation method and device, electronic equipment and storage medium
CN107623620B (en) Processing method of random interaction data, network server and intelligent dialogue system
CN115633093A (en) Resource acquisition method and device, computer equipment and computer readable storage medium
CN110874379A (en) Data transfer method and device
CN111258882B (en) Test data acquisition method and device based on digital media system
CN113836428A (en) Business pushing method and device, computer equipment and storage medium
CN104407846B (en) Information processing method and device
CN110209553B (en) Data acquisition method and device
CN112101810A (en) Risk event control method, device and system
CN112035287A (en) Data cleaning result testing method and device, storage medium and equipment
CN107305610B (en) Access path processing method and device, and automaton identification method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200929

Address after: No. 1036 Langchao Road, High-tech Zone, Jinan, Shandong, 250100

Applicant after: INSPUR GROUP Co.,Ltd.

Address before: First floor of the building at No. 2877 Sun Village Branch Road, High-tech Zone, Jinan, 250100

Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230407

Address after: S02 Building, 1036 Langchao Road, Jinan Area, China (Shandong) Pilot Free Trade Zone, Jinan City, Shandong Province, 250000

Patentee after: Shandong Inspur innovation and entrepreneurship Technology Co.,Ltd.

Address before: No. 1036 Langchao Road, High-tech Zone, Jinan, Shandong

Patentee before: INSPUR GROUP Co.,Ltd.
