CN117592102A - Service execution method, device, equipment and storage medium - Google Patents


Info

Publication number: CN117592102A
Application number: CN202311562671.2A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: service, sample, branch network, branch, service data
Inventor: 郑开元 (Zheng Kaiyuan)
Current Assignee: Alipay Hangzhou Information Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Alipay Hangzhou Information Technology Co Ltd
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202311562671.2A
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a service execution method, apparatus, device and storage medium. After the service data are determined, a branch prediction unit in a service model predicts the target branch network that needs to be run to execute the service, and the service is then executed by running that target branch network. The service model comprises a plurality of branch networks; the service features required to run each branch network are not identical, and the computing resources required to run each branch network differ. An accurate service execution result can be obtained by running only one branch network in the service model, without iterating through the branch networks in turn, which preserves service execution efficiency and reduces the demand for computing resources. Moreover, when multiple services are executed, at least some of them require only a small subset of the service features to participate, which avoids the safety hazard of exposing all service features during execution and thus protects the information security of the service data.

Description

Service execution method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for executing a service.
Background
With the development of artificial intelligence technology and people's growing attention to their own privacy, executing services with models has become one of the more common application scenarios of artificial intelligence technology.
Currently, a service model generally includes a plurality of branch networks, as shown in fig. 1, which illustrates a service model comprising three branch networks. The features required to run each branch network are not exactly the same: in the figure, feature 1 is required to run branch network 1; feature 1 and feature 2 are required to run branch network 2; and feature 1, feature 2 and feature 3 are required to run branch network 3.
In general, a server for executing a service may perform feature extraction on the service data to determine an initial feature, and split the initial feature into feature 1, feature 2 and feature 3. The server may then input feature 1 into branch network 1 (the branch networks being ordered from least to most computing resources required at run time) and run branch network 1 to obtain execution result 1. If the confidence of execution result 1 is below a confidence threshold, feature 1 and feature 2 are input into branch network 2, which is run to obtain execution result 2, and the server again judges whether the confidence of execution result 2 exceeds the threshold. If so, execution result 2 is taken as the service execution result; if not, branch network 3 is executed in turn, and so on until a branch network outputs a service execution result whose confidence exceeds the preset confidence threshold.
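The iterative early-exit scheme described above can be sketched as follows; the branch objects, the scalar confidence values and the 0.9 threshold are illustrative assumptions, not details taken from the prior art itself:

```python
# Sketch of the prior-art iterative scheme: run branch networks from cheapest
# to most expensive until one result's confidence clears the threshold.
# The confidence threshold of 0.9 is an assumed placeholder.

def run_iterative(features, branch_networks, threshold=0.9):
    """features: list [f1, f2, f3]; branch k consumes the first k+1 groups,
    mirroring the description above. Each branch returns (result, confidence)."""
    result = None
    for k, branch in enumerate(branch_networks):
        result, confidence = branch(features[: k + 1])
        if confidence >= threshold:
            return result  # early exit: later, costlier branches are skipped
    return result  # no branch was confident: keep the last branch's output
```

Every low-confidence round here wastes the work already done by the cheaper branches, which is exactly the inefficiency the application targets.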
Thus, in the prior art, executing a service generally requires running multiple branch networks in sequence before an accurate service execution result is obtained, which increases the time and computing resources the execution process consumes and reduces service execution efficiency.
Based on the above, the present application provides a service execution method.
Disclosure of Invention
The present application provides a method, an apparatus, a device, and a storage medium for executing a service, so as to partially solve the foregoing problems in the prior art.
The application adopts the following technical scheme:
the present disclosure provides a service execution method, where the service execution method is applied to a server, where a pre-trained service model is deployed in the server, where the service model includes a plurality of branch networks, service features required for running each branch network are not completely the same, computing resources required for running each branch network are different, and when each branch network runs, the same service is executed, where the method includes:
responding to a service execution request, and determining service data required by executing a service;
inputting the service data into a feature extraction unit of the service model to obtain a plurality of service features corresponding to the service data;
inputting the service data into a branch prediction unit of the service model to obtain a target branch network output by the branch prediction unit;
determining target characteristics required by running the target branch network from the service characteristics, and inputting the target characteristics into the target branch network in an execution unit of the service model to obtain a service execution result output by the target branch network;
and returning the service execution result according to the service execution request.
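The claimed steps can be sketched end to end as follows; all function names and the per-branch feature-index table are hypothetical, introduced only to illustrate the flow, not the patent's implementation:

```python
# Sketch of the claimed flow: exactly one branch network is run, chosen by the
# branch prediction unit rather than by iterating over branches.

def execute_service(service_data, extract_features, predict_branch,
                    branches, features_needed):
    """features_needed[i] lists which extracted feature groups branch i uses."""
    features = extract_features(service_data)      # feature extraction unit
    target = predict_branch(service_data)          # branch prediction unit
    target_features = [features[j] for j in features_needed[target]]
    return branches[target](target_features)       # execution unit
```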
The present disclosure provides a service execution device, where the service execution device is applied to a server, where a pre-trained service model is deployed in the server, where the service model includes a plurality of branch networks, service features required for running each branch network are not completely the same, computing resources required for running each branch network are different, and when running each branch network, the device executes the same service, and includes:
the determining module is used for responding to the service execution request and determining service data required by executing the service;
the extraction module is used for inputting the service data into the feature extraction unit of the service model to obtain a plurality of service features corresponding to the service data;
the prediction module is used for inputting the service data into a branch prediction unit of the service model to obtain a target branch network output by the branch prediction unit;
the execution module is used for determining target characteristics required by running the target branch network from the service characteristics, inputting the target characteristics into the target branch network in an execution unit of the service model, and obtaining a service execution result output by the target branch network;
and the return module is used for returning the service execution result according to the service execution request.
The present application provides a computer readable storage medium storing a computer program which when executed by a processor implements the above-described service execution method.
The application provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the service execution method when executing the program.
At least one of the technical solutions adopted in the present application can achieve the following beneficial effects:
after the service data is determined, a target branch network for executing the service is predicted by a branch prediction unit in the service model, and then the service is executed by operating the target branch network. The service model comprises a plurality of branch networks, service characteristics required by running each branch network are not identical, and computing resources required by running each branch network are different.
The accurate service execution result can be obtained by executing only one branch network in the service model without iterating each branch network, thereby ensuring the service execution efficiency and reducing the demand on computing resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a schematic diagram of a business model;
fig. 2 is a schematic flow chart of a service execution method provided in the present application;
fig. 3 is a schematic flow chart of a service execution method provided in the present application;
fig. 4 is a schematic structural diagram of a service execution device provided in the present application;
fig. 5 is a schematic view of the electronic device corresponding to fig. 2 provided in the present application.
Detailed Description
For the purposes, technical solutions and advantages of the present application, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that, all actions for acquiring signals, information or data in the present application are performed under the condition of conforming to the corresponding data protection rule policy of the place where the actions are performed and obtaining the authorization given by the owner of the corresponding device.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 2 is a schematic flow chart of a service execution method provided in the present application.
S100: in response to the service execution request, service data required for executing the service is determined.
The embodiment of the application provides a service execution method, whose execution process may be carried out by an electronic device such as a server for executing the service or a terminal storing the service model. The electronic device may be a terminal held by a user, such as a mobile phone, a tablet computer or a smart device, or a device such as a display panel provided to the user by a service provider. For convenience of description, the server for executing the service is taken below as the execution subject of the service execution method.
When a service is executed through a service model by iterating over its branch networks, the computing resources and time required can be considerable. To avoid this, the present specification provides a new service execution method: a pre-trained service model is deployed in the server; after the service data is determined, a branch prediction unit in the service model predicts the target branch network that needs to be run to execute the service, and the service is then executed by running that target branch network. The service model comprises a plurality of branch networks; the service features required to run each branch network are not identical, and the computing resources required to run each branch network differ.
The accurate service execution result can be obtained by executing only one branch network in the service model without iterating each branch network, thereby ensuring the service execution efficiency and reducing the demand on computing resources.
It should be noted that the server may perform the same service by running each branch network. That is, the functions that each branch network can perform are consistent.
Based on the above brief description of the service execution method in the present specification, it is apparent that the server may first determine service data.
Specifically, the server may receive a service execution request. The service execution request may be sent to the server by a device such as a client when the user needs to execute a service, or may be automatically generated by the server when the user performs a specified operation. The specified operation may be the user clicking a specific control, the user favoriting a specific commodity, the user inputting specific content, and so on; how exactly to determine from the user's behavior that the user needs to execute the service can be set as needed.
It should be noted that the system may determine, in any manner, that the user needs to execute the service according to the user's operation; whatever manner is used, the server still receives and responds to the service execution request. That is, the way in which it is determined that the user needs to execute the service does not affect the execution of the method described in the present application.
Of course, the service execution request may also be sent to the client after the other device monitors the specified operation performed by the user. And in particular, how the service execution request is generated can be set according to needs, which is not limited in the application.
Generally, the service execution request carries service data. The server can then parse the service execution request to obtain service data carried in the service execution request. The service data may be at least one of a service type corresponding to the service, a time of initiating the service, and information required for executing the service. Of course, besides the service data, the service execution request may also carry user data, where the user data may be at least one of portrait data of the user and historical behavior data of the user.
S102: and inputting the service data into a feature extraction unit of the service model to obtain a plurality of service features corresponding to the service data.
In one or more embodiments provided herein, for branch networks with the same function in the same service model, the less data a branch network requires, the fewer computing resources it needs at run time and the faster it yields a service execution result. Accordingly, the service features required to run each branch network are not exactly the same, and the computing resources required differ. The service execution method in this specification determines in advance the target branch network to be run for executing the service, determines the target features required to run it, and runs the target branch network with those features to execute the service. Therefore, the server also needs to determine the service features corresponding to the service data.
Specifically, the feature extraction unit is preset with a plurality of feature extraction sublayers, and feature extraction modes corresponding to the feature extraction sublayers are different.
Then, the server may input the service data into each feature extraction sub-layer of the feature extraction unit of the service model, to obtain service features respectively output by each feature extraction sub-layer.
And finally, the server can take the determined business characteristics as the business characteristics corresponding to the business data.
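A minimal sketch of such a feature extraction unit, under the assumption that each sub-layer is simply a callable implementing one extraction mode:

```python
# Illustrative feature extraction unit: several sub-layers, each with its own
# extraction mode, producing one service feature group per sub-layer.

class FeatureExtractionUnit:
    def __init__(self, sublayers):
        self.sublayers = sublayers  # one callable per feature extraction mode

    def __call__(self, service_data):
        # Run every sub-layer on the same service data.
        return [layer(service_data) for layer in self.sublayers]
```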
S104: and inputting the service data into a branch prediction unit of the service model to obtain a target branch network output by the branch prediction unit.
In one or more embodiments provided herein, as described above, the service execution method in this specification determines in advance the target branch network to be run for executing the service, determines the target features required to run it, and runs the target branch network with those features. Therefore, the server needs to determine the target branch network.
Specifically, the server may input the service data as input into a branch prediction unit of the service model, to obtain a target branch network output by the branch prediction unit. The branch prediction unit is used for predicting a target branch network which needs to run for executing the service corresponding to the service execution request according to the service data.
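As a hedged illustration, the branch prediction unit can be viewed as a scorer that assigns each branch network a score and selects the argmax; the scoring functions below are invented placeholders, not the trained predictor of the application:

```python
# Hypothetical branch prediction unit: score every branch for the given
# service data and return the index of the highest-scoring branch.

def branch_prediction_unit(service_data, score_fns):
    scores = [fn(service_data) for fn in score_fns]
    return max(range(len(scores)), key=scores.__getitem__)
```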
S106: and determining target characteristics required by running the target branch network from the service characteristics, and inputting the target characteristics into the target branch network in an execution unit of the service model to obtain a service execution result output by the target branch network.
In one or more embodiments provided herein, after determining a target branch network to be operated for executing a service, the server may determine service characteristics required for operating the target branch network and operate the target branch network according to the service characteristics.
Specifically, for each branch network, the branch network may have its corresponding feature extraction mode. Taking the case that the branch network 1 corresponds to the feature extraction mode 1 and the branch network 2 corresponds to the feature extraction mode 2 as an example, the service feature obtained by performing feature extraction in the feature extraction mode 1 can be determined as the service feature required for operating the branch network 1. Similarly, the service features obtained by feature extraction in the feature extraction mode 2 are service features required for operating the branch network 2.
The server may determine, according to the target branch network, a feature extraction manner corresponding to the target branch network, and determine, as the target feature corresponding to the target branch network, the service feature extracted by the feature extraction manner corresponding to the target branch network. The number of feature extraction modes corresponding to each target branch network may be one or more.
The server may take the target feature as input to the target branch network in the execution unit of the business model and run the target branch network after determining the target feature. Thus, the service execution result output by the target branch network can be obtained.
The service corresponding to the service execution request may be an identification service, a segmentation service, a wind control service, etc. Taking the identification service as an example, the service data may be image data to be identified, text data to be identified, account data corresponding to an account to be identified, and the like. The service execution result may be a target object contained in the image data, a specific text or a specific character string contained in the text data, an account type corresponding to the account to be identified, or the like.
Taking the segmentation service as an example, the service data may be image data to be segmented, corpus data to be segmented, and the like. The service execution result may be a region type corresponding to each region in the image data, corpus data corresponding to each topic included in the corpus data, and the like.
Taking the wind control service as an example, the service data may be transaction data corresponding to a transaction to be wind controlled, account data corresponding to an account to be wind controlled, etc., and the service execution result may be whether the transaction has risk and/or risk level corresponding to the transaction, whether the account has risk and/or risk level corresponding to the account, etc.
The service type corresponding to the service execution request and the type of the service execution result can be set according to the needs, which is not limited in the present specification.
S108: and returning the service execution result according to the service execution request.
In one or more embodiments provided herein, when determining a service execution result, the service execution result needs to be returned to the sender of the service execution request.
Then, after determining the service execution result, the server may return the service execution result to the sender of the service execution request according to the service execution request.
The sender of the service execution request can receive the service execution result sent by the server, and after receiving the service execution result, the sender displays the service execution result to the user according to the service execution request.
In the service execution method shown in fig. 2, a pre-trained service model is deployed in the server; after the service data is determined, the target branch network to be run for executing the service is predicted by the branch prediction unit in the service model, and the service is then executed by running the target branch network. The service model comprises a plurality of branch networks; the service features required to run each branch network are not identical, and the computing resources required to run each branch network differ.
The accurate service execution result can be obtained by executing only one branch network in the service model without iterating each branch network, thereby ensuring the service execution efficiency and reducing the demand on computing resources.
In addition, to ensure the accuracy of the service execution result, the branch network requiring the most computing resources at run time, that is, the branch network requiring the most service features, could be used directly as the target branch network, with every determined service feature input into it to obtain its prediction result. In this case, although service execution efficiency is sacrificed, the accuracy of the service execution result is ensured.
However, as described above, the service executed by the service execution method may also be a wind control (i.e. risk control) service. In this case, the service data determined by the server may be sensitive data, for example different types of sensitive data such as a user's ID number, name or mobile phone number, and the service features determined from such data are likewise sensitive. If the branch network requiring the most computing resources were always used directly as the target branch network, all sensitive features would have to participate in the execution of every service, creating a risk of user privacy leakage.
In the service execution method of the present application, a target branch network is selected from the plurality of branch networks and the service is executed through that network. This avoids the situation in which every service executed by the server requires all service features to participate in the execution process. That is, for at least some of the services executed by the server, only the corresponding subset of service features needs to participate to obtain an accurate service execution result. While ensuring the accuracy of service execution results, this avoids the safety hazard of exposing all service features during execution and thus protects user privacy.
Based on the same thought, the application provides a flow diagram of a service execution method. As shown in fig. 3.
Fig. 3 is a schematic flow chart of a service execution method provided in this specification. As in fig. 1, the service model used to execute the method comprises three branch networks. The service features required to run each branch network are not identical: in the figure, feature 1 is required to run branch network 1; feature 1 and feature 2 are required to run branch network 2; and feature 1, feature 2 and feature 3 are required to run branch network 3. Straight lines represent connection relations, arrows indicate the direction of data transmission, and the arrowed curve represents the data flow in the step of determining the target branch network through the branch prediction unit.
It can be seen that the service model contains three parts: the feature extraction unit, the branch prediction unit and the execution unit, with the plurality of branch networks deployed in the execution unit.
The server may then input the service data into the feature extraction unit, and the feature extraction unit may perform feature extraction on the service data to obtain an initial feature, and split the initial feature to obtain three service features of feature 1, feature 2, and feature 3.
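The split step above can be sketched as follows; the equal group sizes are an assumption for illustration (cf. the 15-feature example in the training description later on):

```python
# Sketch of splitting one initial feature vector into three service feature
# groups (feature 1, feature 2, feature 3); the sizes (5, 5, 5) are assumed.

def split_initial_feature(initial, sizes=(5, 5, 5)):
    groups, start = [], 0
    for n in sizes:
        groups.append(initial[start:start + n])
        start += n
    return groups
```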
Meanwhile, the server may input the initial feature into the branch prediction unit to obtain a target branch network output by the branch prediction unit, where the branch network 3 is illustrated as a target branch network.
Finally, the server may determine the target feature corresponding to the branch network 3: feature 1, feature 2, and feature 3. And inputting the feature 1, the feature 2 and the feature 3 into the branch network 3 to obtain an execution result 3 output by the branch network 3 as a service execution result.
It should be noted that the steps of determining the initial feature, determining the service features from the initial feature, and inputting the initial feature into the branch prediction unit to obtain the target branch network are merely illustrative. The server may instead perform feature extraction on the service data directly with different feature extraction modes to obtain the service features, and may likewise input the service data directly into the branch prediction unit to obtain the target branch network. How the service features are determined and what is input into the branch prediction unit can be set as needed, and this specification does not limit them.
In addition, the service model in this specification can be trained in the following way:
specifically, the server may determine sample service data, and determine a service execution result of the sample service data as a first label of the sample service data.
Then, the server can take the sample service data as input, and input the sample service data into a feature extraction unit of the service model to be trained to obtain a plurality of sample features corresponding to the sample service data.
Then, for each branch network in the execution unit of the service model, determining sample characteristics required by running the branch network from the sample characteristics, and taking the sample characteristics required by running the branch network as input, inputting the sample characteristics into the branch network to obtain a sample execution result output by the branch network.
Then, the server may determine, according to the differences between the sample execution results respectively output by the branch networks, the branch network matched with the sample service data as a second label of the sample service data. The branch network matched with the sample service data is the branch network that needs to be run for the service corresponding to that sample service data.
The server may then input the sample service data into the branch prediction unit of the service model to obtain the sample branch network output by the branch prediction unit.
The server may then determine a loss according to the differences between the sample execution results respectively output by the branch networks and the first label of the sample service data, and the difference between the sample branch network and the second label of the sample service data, and train the service model with minimizing this loss as the training objective.
The server may be preset with a difference threshold. The server may sort the sample execution results in order of the computing resources the corresponding branch networks require at run time, and determine the difference between each pair of adjacent sample execution results. Taking a service model containing three branch networks as an example, the server may determine a first difference and a second difference, and then judge whether either exceeds the difference threshold. If so, the server can conclude that the accuracy of the execution results obtained by running the sample service data on different branch networks differs substantially, that is, the number of service features participating in a branch network's run brings a large improvement in the accuracy of the execution result. Taking the case in which the second difference exceeds the threshold as an example, the server may determine, of the two branch networks whose sample execution results produced the second difference, the one requiring more computing resources at run time as the branch network matched with the sample service data.
If no difference exceeds the threshold, the server can conclude that increasing the number of service features participating in a branch network's operation brings little or no improvement in the accuracy of the execution result. The branch network requiring the least computing resources at run time may then be taken as the branch network matched with the sample service data.
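The matching rule above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: it assumes per-branch accuracies as the comparison metric, and when several adjacent gaps exceed the threshold it picks the most expensive branch involved, which is one possible reading of the text; the function name is hypothetical.

```python
def match_branch(accuracies, diff_threshold):
    """Return the index of the branch network matched to a sample.

    accuracies: accuracy of the sample execution result per branch,
    sorted by the computing resources each branch needs at run time.
    """
    # Differences between adjacent sample execution results.
    diffs = [accuracies[i + 1] - accuracies[i]
             for i in range(len(accuracies) - 1)]
    # Walk from the largest gap index down: the last adjacent gap that
    # exceeds the threshold means the extra features still pay off, so
    # pick the more expensive branch of that pair.
    for i in range(len(diffs) - 1, -1, -1):
        if diffs[i] > diff_threshold:
            return i + 1
    # No gap exceeds the threshold: extra features bring little gain,
    # so match the branch with the least computing resources.
    return 0
```

For three branches with accuracies 0.70, 0.72, 0.90 and a threshold of 0.05, only the second gap (0.18) exceeds the threshold, so the most expensive branch is matched.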
It should be noted that the number of service features required to run each branch network may be determined from the number of branch networks and the total number of service features. With 3 branch networks and 15 determined service features, the branch network requiring the least computing resources may be preset to obtain an accurate execution result with only 15/3, that is, 5 service features participating; the branch network requiring medium computing resources may be preset to need only (15/3) × 2, that is, 10 service features; and the branch network requiring the most computing resources may need all 15 service features to obtain an accurate execution result. Each branch network can then be trained under its corresponding preset condition. Of course, the number of service features required to run each branch network may also be set as needed, which is not limited in this specification.
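The evenly spaced allocation in the example (5, 10, 15 features for 3 branches) can be sketched as follows; the function name is an assumption, and as the text notes, any other per-branch counts could be preset instead.

```python
def features_per_branch(num_branches, num_features):
    """Preset feature counts per branch, smallest branch first.

    Each branch uses an integer multiple of num_features // num_branches
    features, so the largest branch uses (up to) all of them.
    """
    base = num_features // num_branches
    return [base * (i + 1) for i in range(num_branches)]
```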
Further, the server may determine, for each branch network, the gap between the sample execution result output by the branch network and the first annotation as a first difference, and determine the first loss according to the first differences respectively corresponding to the branch networks. A second loss is determined based on the gap between the sample branch network and the second annotation. Finally, a total loss is determined according to the first loss and the second loss, and the model parameters of the service model are adjusted with minimizing the total loss as the optimization target.
Of course, determining the first loss in this way requires relatively large computing resources. To alleviate this, the server may further determine a prediction execution result corresponding to the sample service data according to the sample execution results respectively output by the branch networks. When determining the first loss, the server may then determine it based solely on the gap between the prediction execution result and the first annotation.
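The two-part loss above can be sketched as follows. This is a simplified illustration, not the patent's actual loss functions: it assumes a squared error for the first loss and a probability-based penalty standing in for a classification loss for the second; all names are hypothetical.

```python
def total_loss(predicted_result, first_annotation,
               branch_probs, second_annotation):
    """Combine the first and second losses into the total loss.

    predicted_result / first_annotation: scalar execution result and
    its label. branch_probs: the branch prediction unit's probability
    per branch; second_annotation: index of the labeled branch.
    """
    # First loss: squared gap between prediction and first annotation.
    first_loss = (predicted_result - first_annotation) ** 2
    # Second loss: one minus the probability assigned to the annotated
    # branch (a stand-in for a proper classification loss).
    second_loss = 1.0 - branch_probs[second_annotation]
    return first_loss + second_loss
```

Minimizing this total loss jointly trains the execution branches and the branch prediction unit, which is the stated optimization target.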
Further, with the computing resources of the service model held constant, a branch network that requires fewer computing resources to run generally takes less time to execute a service. To ensure the processing efficiency of the service model, the server should, wherever the accuracy of the service execution result allows, execute the service through the branch network requiring the least computing resources at run time. Therefore, when determining the second annotation of the sample service data, the server can also determine the second annotation corresponding to each sample service data according to a preset proportion.
Specifically, the server may sort the branch networks according to the number of service features each branch network requires, which is equivalent to sorting them by the amount of computing resources required to run each branch network.
Then, for each sample service data, the server may determine the gain of each branch network with respect to that sample service data according to the sample execution results of the sample service data respectively output by the branch networks and the ordering of the branch networks. Taking fig. 3 as an example, the difference between execution result 1 and execution result 2 may be taken as the gain of branch network 2 for the sample service data, and the difference between execution result 2 and execution result 3 as the gain of branch network 3.
Finally, visiting the branch networks in order of the computing resources they require, the server may determine, for each branch network, the sample service data matched with it according to the gain of that branch network for each sample service data and the preset proportion of the total traffic that the branch network is required to execute, and take the branch network as the second annotation of the sample service data matched with it.
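The quota-based assignment above can be sketched as follows. This is a speculative reading, since the text does not fix the visiting direction: here the most expensive branch claims its quota of highest-gain samples first, and whatever remains defaults to the cheapest branch; all names are hypothetical.

```python
def assign_second_annotations(gains, quotas):
    """Assign a second-annotation branch index to each sample.

    gains[k][s]: gain of branch k for sample s (branch 0, the cheapest,
    has gain 0 by construction). quotas[k]: preset share of the total
    traffic that branch k should execute.
    """
    num_branches = len(gains)
    num_samples = len(gains[0])
    labels = [0] * num_samples          # default: cheapest branch
    unassigned = set(range(num_samples))
    # Walk from the most expensive branch down to branch 1.
    for k in range(num_branches - 1, 0, -1):
        quota = int(round(quotas[k] * num_samples))
        # Rank unassigned samples by this branch's gain, highest first.
        ranked = sorted(unassigned, key=lambda s: gains[k][s],
                        reverse=True)
        for s in ranked[:quota]:
            labels[s] = k
            unassigned.discard(s)
    return labels
```

This way samples whose accuracy benefits most from extra features are labeled with expensive branches, while the preset proportions keep most traffic on cheap branches.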
In addition, the server can also preset the proportion of each branch network among the sample branch networks output by the branch prediction unit, to further ensure the efficiency of the service model.
Specifically, the server may determine weights corresponding to the branch networks according to the number of service features corresponding to the branch networks, where the weights are inversely related to the number.
Then, the server can input each sample service data into the branch prediction unit of the service model respectively to obtain a sample branch network corresponding to each sample service data output by the branch prediction unit.
For each branch network, the number of times it appears among the sample branch networks output by the branch prediction unit is positively correlated with its weight.
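The weighting rule above can be sketched as follows. The reciprocal form and the normalization are assumptions; the patent only requires that a branch's weight be inversely related to the number of service features it needs, so that cheaper branches are favored.

```python
def branch_weights(feature_counts):
    """Weights per branch, inversely related to feature counts.

    feature_counts: number of service features each branch network
    needs at run time. Returns weights that sum to 1.
    """
    inv = [1.0 / n for n in feature_counts]
    total = sum(inv)
    return [w / total for w in inv]
```

With feature counts 5, 10, 15 the cheapest branch gets the largest weight, so it is output as the sample branch network most often.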
In addition, the service features output by the feature extraction sublayers may differ in importance. Taking fig. 1 as an example, feature 1 is more important than feature 2, and feature 2 is more important than feature 3. The server can determine the service features corresponding to each branch network according to the preset number of service features each branch network requires to execute the service and the importance of each service feature.
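Selecting each branch's features by importance can be sketched as follows; the function name is an assumption, and the importance scores themselves would come from the feature extraction unit.

```python
def select_features(importances, count):
    """Indices of the `count` most important service features.

    importances: one importance score per service feature. The result
    is returned in index order for stable downstream use.
    """
    ranked = sorted(range(len(importances)),
                    key=lambda i: importances[i], reverse=True)
    return sorted(ranked[:count])
```

A cheap branch preset to use 2 of 4 features would thus receive the two highest-scoring features, and a larger branch's feature set is a superset of a smaller branch's.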
Based on the same idea, the present application provides a schematic structural diagram of a service execution device, as shown in fig. 4.
Fig. 4 is a schematic diagram of the service execution device provided in the present application. The device is applied to a server in which a pre-trained service model is deployed, the service model includes a plurality of branch networks, the service features required to run each branch network are not exactly the same, the computing resources required to run each branch network differ, and each branch network executes the same service when run. The device comprises:
a determining module 200, configured to determine service data required for executing the service in response to the service execution request.
The extraction module 202 is configured to input the service data into a feature extraction unit of the service model, so as to obtain a plurality of service features corresponding to the service data.
And the prediction module 204 is configured to input the service data into a branch prediction unit of the service model, and obtain a target branch network output by the branch prediction unit.
And the execution module 206 is configured to determine, from the service features, a target feature required for running the target branch network, and input the target feature into the target branch network in the execution unit of the service model, so as to obtain a service execution result output by the target branch network.
And the return module 208 is configured to return the service execution result according to the service execution request.
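The flow implemented by modules 200 through 208 can be sketched as a plain pipeline of callables. Every component here is a hypothetical stand-in: in the device, the extraction, prediction, and execution units are parts of the learned service model.

```python
def execute_service(request, get_data, extract, predict_branch,
                    branches, feature_sets):
    """Run one service request through the branch-selecting model.

    feature_sets[k] lists which extracted feature indices branch k
    needs, mirroring that branches need different feature subsets.
    """
    data = get_data(request)            # determining module (200)
    features = extract(data)            # extraction module (202)
    k = predict_branch(data)            # prediction module (204)
    # Execution module (206): feed only the target branch's features.
    target = [features[i] for i in feature_sets[k]]
    result = branches[k](target)
    return result                       # return module (208)
```

Because only the predicted branch runs, cheap requests never pay for the full feature set, which is the efficiency point of the design.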
The apparatus further comprises:
the training module 210 is configured to train to obtain the service model in the following manner: determining sample service data, and determining a service execution result of the sample service data as a first label of the sample service data; inputting the sample service data into a feature extraction unit of a service model to be trained to obtain a plurality of sample features corresponding to the sample service data; for each branch network in an execution unit of the service model, determining, from the sample features, the sample features required to run the branch network, and inputting them into the branch network to obtain a sample execution result output by the branch network; determining, according to the gaps between the sample execution results respectively output by the branch networks, a branch network matched with the sample service data as a second label of the sample service data; inputting the sample service data into a branch prediction unit of the service model to obtain a sample branch network output by the branch prediction unit; and training the service model according to the gap between the sample execution results respectively output by the branch networks and the first label of the sample service data and the gap between the sample branch network and the second label of the sample service data.
Optionally, the training module 210 is configured to determine a prediction execution result corresponding to the sample service data according to a sample execution result respectively output by each branch network, determine a first loss according to a gap between the prediction execution result and the first label, determine a second loss according to a gap between the sample branch network and the second label, determine a total loss according to the first loss and the second loss, and adjust model parameters of the service model with the total loss minimized as an optimization target.
Optionally, the training module 210 is configured to determine weights respectively corresponding to the branch networks according to the number of service features corresponding to each branch network, where the weights are inversely related to the numbers; and to input each sample service data into the branch prediction unit of the service model to obtain the sample branch network respectively corresponding to each sample service data output by the branch prediction unit; for each branch network, the number of times it appears among the sample branch networks output by the branch prediction unit is positively correlated with its weight; the number of sample service data is plural.
Optionally, the extracting module 202 is configured to input the service data into a feature extracting unit of a pre-trained service model, determine initial features output by the feature extracting unit, split the initial features, and determine a plurality of service features.
Optionally, the execution module 206 is configured to determine the importance degree corresponding to each service feature, determine each service feature corresponding to each branch network according to the preset number of service features required to operate each branch network respectively and the importance degree corresponding to each service feature respectively, determine the target branch network from each branch network, and determine each service feature corresponding to the target branch network as a target feature.
Optionally, the training module 210 is configured to: sort the branch networks according to the number of service features respectively corresponding to the branch networks; for each sample service data, determine the gain of each branch network corresponding to the sample service data according to the sample execution results corresponding to the sample service data respectively output by the branch networks and the sorting result of the branch networks; and determine, in sequence for each branch network, the sample service data matched with the branch network according to the gain of the branch network corresponding to each sample service data and the preset proportion of the total service volume required to be executed by the branch network, taking the branch network as a second label of the sample service data matched with it; the number of sample service data is plural.
The present application also provides a computer-readable storage medium storing a computer program operable to execute the service execution method shown in fig. 2 described above.
The present application also provides a schematic structural diagram of the electronic device shown in fig. 5. As shown in fig. 5, at the hardware level the electronic device includes a processor, an internal bus, a network interface, a memory, and a nonvolatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the nonvolatile storage into the memory and then runs it to implement the service execution method shown in fig. 2. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded by the present application; that is, the execution subject of the processing flows is not limited to the logic units and may also be hardware or logic devices.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). However, as technology has developed, many improvements to method flows can now be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples of such microcontrollers include, but are not limited to, the ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functionality in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Alternatively, the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
All embodiments in the application are described in a progressive manner, and identical and similar parts of all embodiments are mutually referred, so that each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A service execution method applied to a server, the server having deployed a pre-trained service model, the service model including a plurality of branch networks, service features required to operate each branch network being not exactly the same, computing resources required to operate each branch network being different, the branch networks executing the same service when operating, the method comprising:
Responding to a service execution request, and determining service data required by executing a service;
inputting the service data into a feature extraction unit of the service model to obtain a plurality of service features corresponding to the service data;
inputting the service data into a branch prediction unit of the service model to obtain a target branch network output by the branch prediction unit;
determining target characteristics required by running the target branch network from the service characteristics, and inputting the target characteristics into the target branch network in an execution unit of the service model to obtain a service execution result output by the target branch network;
and returning the service execution result according to the service execution request.
2. The method of claim 1, wherein the business model is trained by:
determining sample service data, and determining a service execution result of the sample service data as a first label of the sample service data;
inputting the sample service data into a feature extraction unit of a service model to be trained to obtain a plurality of sample features corresponding to the sample service data;
for each branch network in the execution unit of the service model, determining sample characteristics required by running the branch network from the sample characteristics, and inputting the sample characteristics required by running the branch network into the branch network to obtain a sample execution result output by the branch network;
Determining a branch network matched with the sample service data as a second label of the sample service data according to the difference between sample execution results respectively output by the branch networks;
inputting the sample service data into a branch prediction unit of the service model to obtain a sample branch network output by the branch prediction unit;
and training the service model according to the difference between the sample execution result respectively output by each branch network and the first annotation of the sample service data and the difference between the sample branch network and the second annotation of the sample service data.
3. The method of claim 2, training the service model according to a gap between a sample execution result respectively output by each branch network and a first annotation of the sample service data and a gap between the sample branch network and a second annotation of the sample service data, specifically comprising:
according to sample execution results respectively output by each branch network, determining a prediction execution result corresponding to the sample service data;
determining a first loss according to a gap between the prediction execution result and the first annotation;
Determining a second loss based on a gap between the sample branch network and the second annotation;
and determining total loss according to the first loss and the second loss, and adjusting model parameters of the service model by taking the total loss as an optimization target.
4. The method of claim 2, the number of sample traffic data being a plurality;
inputting the sample service data into a branch prediction unit of the service model to obtain a sample branch network output by the branch prediction unit, wherein the method specifically comprises the following steps:
determining weights corresponding to the branch networks respectively according to the number of the service features corresponding to the branch networks, wherein the weights are inversely related to the number;
respectively inputting each sample service data into a branch prediction unit of the service model to obtain a sample branch network respectively corresponding to each sample service data output by the branch prediction unit; wherein, for each branch network, the number of the branch networks and the weight of the branch network are positively correlated in each sample branch network output by the branch prediction unit.
5. The method of claim 1, wherein inputting the service data into the feature extraction unit of the pre-trained service model to obtain the service features corresponding to the service data specifically comprises:
Inputting the service data into a feature extraction unit of a pre-trained service model, and determining initial features output by the feature extraction unit;
splitting the initial characteristics to determine a plurality of service characteristics.
6. The method of claim 5, wherein determining the target feature corresponding to the target branch network from the service features specifically includes:
determining importance degrees corresponding to the service features respectively;
determining each service feature corresponding to each branch network according to the number of the service features required by each branch network to operate and the importance corresponding to each service feature;
and determining the target branch network from the branch networks, and determining the business characteristics corresponding to the target branch network as target characteristics.
7. The method of claim 2, wherein the number of sample traffic data is plural;
determining a second label of the sample service data according to the difference between sample execution results respectively output by the branch networks, wherein the second label specifically comprises the following steps:
sequencing the branch networks according to the number of the service features corresponding to the branch networks respectively;
for each sample service data, determining the gain of each branch network corresponding to the sample service data according to the sample execution result corresponding to the sample service data respectively output by each branch network and the sequencing result of each branch network;
And determining, in sequence for each branch network, the sample service data matched with the branch network according to the gain of the branch network corresponding to each sample service data and the preset proportion of the total service volume required to be executed by the branch network, and taking the branch network as a second label of the sample service data matched with the branch network.
8. A service execution device applied to a server in which a pre-trained service model is deployed, the service model including a plurality of branch networks, service characteristics required for operating each branch network being not exactly the same, and computing resources required for operating each branch network being different, the same service being executed when each branch network is operating, the device comprising:
a determining module, configured to determine, in response to a service execution request, the service data required to execute the service;
an extraction module, configured to input the service data into a feature extraction unit of the service model to obtain a plurality of service features corresponding to the service data;
a prediction module, configured to input the service data into a branch prediction unit of the service model to obtain a target branch network output by the branch prediction unit;
an execution module, configured to determine, from the service features, the target features required to run the target branch network, and to input the target features into the target branch network in an execution unit of the service model to obtain a service execution result output by the target branch network;
and a return module, configured to return the service execution result according to the service execution request.
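The inference flow of the device in claim 8 can be sketched as a minimal Python class. All names here (`ServiceModel`, `extractor`, `predictor`, `branch_feature_ids`) are hypothetical stand-ins for the claim's feature extraction unit, branch prediction unit, and execution unit; the real model would presumably be a trained neural network rather than plain callables.

```python
from typing import Any, Callable, List

class ServiceModel:
    """Sketch of the claim-8 architecture: a shared feature-extraction unit,
    a branch-prediction unit that routes each request, and several execution
    branches that consume different subsets of the extracted features."""

    def __init__(
        self,
        extractor: Callable[[Any], List[Any]],   # service data -> all service features
        predictor: Callable[[Any], int],         # service data -> target branch index
        branches: List[Callable[[List[Any]], Any]],
        branch_feature_ids: List[List[int]],     # feature indices each branch reads
    ):
        self.extractor = extractor
        self.predictor = predictor
        self.branches = branches
        self.branch_feature_ids = branch_feature_ids

    def execute(self, service_data: Any) -> Any:
        # Feature extraction unit: derive all service features once.
        features = self.extractor(service_data)
        # Branch prediction unit: choose the target branch for this request.
        target = self.predictor(service_data)
        # Execution unit: feed the target branch only the features it needs.
        target_features = [features[i] for i in self.branch_feature_ids[target]]
        return self.branches[target](target_features)
```

Routing before execution is what lets a cheap branch skip the cost of features it never reads, which is the efficiency the claim is after.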
9. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the program.
CN202311562671.2A 2023-11-21 2023-11-21 Service execution method, device, equipment and storage medium Pending CN117592102A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311562671.2A CN117592102A (en) 2023-11-21 2023-11-21 Service execution method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311562671.2A CN117592102A (en) 2023-11-21 2023-11-21 Service execution method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117592102A true CN117592102A (en) 2024-02-23

Family

ID=89912809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311562671.2A Pending CN117592102A (en) 2023-11-21 2023-11-21 Service execution method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117592102A (en)

Similar Documents

Publication Publication Date Title
CN110457578B (en) Customer service demand identification method and device
CN111324533A (en) A/B test method and device and electronic equipment
CN111783018A (en) Page processing method, device and equipment
CN116049761A (en) Data processing method, device and equipment
CN116974676A (en) Page content sending method, device and equipment
CN114710318B (en) Method, device, equipment and medium for limiting high-frequency access of crawler
CN115545720A (en) Model training method, business wind control method and business wind control device
CN111241395B (en) Recommendation method and device for authentication service
CN115563584A (en) Model training method and device, storage medium and electronic equipment
CN117592102A (en) Service execution method, device, equipment and storage medium
CN113516480B (en) Payment risk identification method, device and equipment
CN117348999B (en) Service execution system and service execution method
CN116340852B (en) Model training and business wind control method and device
CN110728516A (en) Method, device and equipment for updating wind control model
CN116109008B (en) Method and device for executing service, storage medium and electronic equipment
CN115688130B (en) Data processing method, device and equipment
CN115828171B (en) Method, device, medium and equipment for executing service cooperatively by end cloud
CN116501852B (en) Controllable dialogue model training method and device, storage medium and electronic equipment
CN117009729B (en) Data processing method and device based on softmax
CN115545938B (en) Method, device, storage medium and equipment for executing risk identification service
CN117591217A (en) Information display method, device, equipment and storage medium
CN115952271B (en) Method and device for generating dialogue information, storage medium and electronic equipment
CN115795342B (en) Method and device for classifying business scenes, storage medium and electronic equipment
CN117369783B (en) Training method and device for security code generation model
CN117591703A (en) Graph data optimization method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination