CN113448545A - Method, apparatus, storage medium, and program product for machine learning model servitization - Google Patents
- Publication number
- CN113448545A (application CN202110700125.5A)
- Authority
- CN
- China
- Prior art keywords
- machine learning
- data
- learning model
- processing
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/20—Software design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/31—Programming languages or programming paradigms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Stored Programmes (AREA)
Abstract
The present disclosure provides a method, an apparatus, a storage medium, and a program product for machine learning model servitization, which relate to the field of artificial intelligence, in particular to computer vision and deep learning technologies, and are particularly applicable in infrastructure scenarios. The specific implementation scheme is as follows: the service framework comprises a service module implemented in a dynamic programming language and a model processing module implemented in a static programming language. The service module implemented in the dynamic programming language provides upper-layer functions such as receiving data, preprocessing data, and returning results, while the model processing module implemented in the static programming language provides functions such as authentication related to the machine learning model. This facilitates the development of services, ensures the security of data related to the machine learning model, reduces the risk of data leakage, improves data security, shortens the development cycle, and improves development efficiency.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to computer vision and deep learning techniques usable in infrastructure scenarios, and specifically to a method, device, storage medium, and program product for machine learning model servitization.
Background
Model training in machine learning is usually implemented in a dynamic programming language such as Python. Serving the trained machine learning model, that is, packaging it as an online HTTP service, typically requires wrapping the model in a statically compiled language such as C++ for reasons of performance, security, and encryption. However, packaging and deploying the trained model as an HTTP service with a static-language framework such as C++ incurs high development cost, slow service update iteration, and low development efficiency.
Disclosure of Invention
The present disclosure provides a method, apparatus, storage medium, and program product for machine learning model servitization.
According to a first aspect of the present disclosure, there is provided a method of machine learning model servitization, comprising:
calling a service module realized by using a dynamic programming language in a service framework, and determining a machine learning model required to be used and input data contained in a service request according to the service request;
calling a model processing module realized by using a static programming language in the service framework, and after the service module passes authentication, performing data processing by using the machine learning model according to the input data to obtain a processing result;
and outputting the processing result through the service module.
According to a second aspect of the present disclosure, there is provided an apparatus for machine learning model servitization, comprising:
the data acquisition module is used for calling a service module realized by using a dynamic programming language in a service framework and determining a machine learning model required to be used and input data contained in a service request according to the service request;
the machine learning data processing module is used for calling a model processing module implemented in a static programming language in the service framework, and after the service module passes authentication, performing data processing by using the machine learning model according to the input data to obtain a processing result;
and the result returning module is used for outputting the processing result through the service module.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, execution of the computer program by the at least one processor causing the electronic device to perform the method of the first aspect.
According to the technology disclosed by the invention, the development efficiency of the machine learning model service is improved while the safety of the machine learning model and related data is ensured.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a machine learning model service framework provided by embodiments of the present disclosure;
FIG. 2 is a flowchart of a method of machine learning model servitization provided by a first embodiment of the disclosure;
FIG. 3 is a flow chart of a method of machine learning model servitization provided by a second embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an apparatus for machine learning model servitization provided by a third embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an apparatus for machine learning model servitization provided by a third embodiment of the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a method of machine learning model servitization of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The present disclosure provides a method, an apparatus, a storage medium, and a program product for machine learning model servitization, which relate to the field of artificial intelligence, and in particular to a computer vision and deep learning technology, and are particularly applicable to infrastructure scenarios to improve the security of a machine learning model while realizing machine learning model servitization.
Model training in machine learning is usually implemented in the Python language, and a Python service framework is likewise used when the trained model is packaged as a service and brought online to realize machine learning model servitization. However, Python is a dynamic programming language whose code is plaintext during use: the machine learning model can be leaked, and there is no access control over service deployment and use, so security is low.
Another way to implement machine learning model servitization, for reasons of performance, security, encryption, and the like, is to build it on a static-language framework such as C++, packaging and deploying the model as a Hypertext Transfer Protocol (HTTP) service using a statically compiled language such as C++. However, for developers who train models, developing in C++ has a certain difficulty: development cost is high, the development cycle is long, service update iteration is slow, and development efficiency is low.
The method for machine learning model servitization provided by the present disclosure can be applied to a service framework as shown in fig. 1. The service framework includes a service module implemented in a dynamic programming language, a model processing module implemented in a static programming language, and an interface corresponding to the model processing module. The service module provides the machine learning prediction service to the outside; when a user requests the service, it uses a dynamic programming language such as Python to realize upper-layer functions such as data preprocessing and post-processing on the input data, and calls the model processing module through the interface. When the interface is called, the model processing module authenticates the service; after authentication passes, the machine learning model processes the input data to obtain a processing result. The model processing module encapsulates the machine learning model in a static programming language such as C++ and realizes functions related to the machine learning model such as loading, encryption, authentication, and decryption. Having the dynamic-language service module call the static-language module facilitates service development, ensures service security, reduces the risk of data leakage, improves data security, shortens the development cycle, and improves development efficiency.
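The call flow just described can be sketched in Python, with a plain function standing in for the static-language model processing module behind its interface (in a real deployment that module would be a compiled C++ extension reached through a binding layer). All names and the mock "model" below are illustrative assumptions, not part of the disclosure:

```python
def model_process(token, inputs):
    """Stand-in for the static-language model processing module:
    authenticate the calling service, then run the (mock) model."""
    if token != "authorized-service":          # authentication step
        raise PermissionError("service not authorized for this model")
    return [x * 2.0 for x in inputs]           # mock inference


def serve_request(request):
    """Dynamic-language service module: parse the request, preprocess
    the raw data, call the model interface, and return the result."""
    raw = request["data"]
    pre = [float(x) for x in raw]              # data preprocessing
    out = model_process(request["token"], pre) # call through the interface
    return {"result": out}                     # result returned to the user


print(serve_request({"token": "authorized-service", "data": ["1", "2"]}))
```

The split mirrors the framework's division of labor: request handling and pre/post-processing stay in the dynamic layer, while authentication and inference sit behind the compiled interface.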
The machine learning model according to the present disclosure may be a model used for image processing, such as image prediction, classification, and recognition of a person or an object in an image, or may be a Natural Language Processing (NLP) model; this embodiment places no limitation on it. When the method is applied to image processing, data security is ensured, the development cycle of the model service can be significantly shortened, and development efficiency is improved.
Fig. 2 is a flowchart of a method for machine learning model servitization according to a first embodiment of the present disclosure. The method provided in this embodiment may be performed by a device for machine learning model servitization, which may be an electronic device, such as a server or a terminal device, that implements the machine learning prediction service.
As shown in fig. 2, the method comprises the following specific steps:
step S201, calling a service module implemented by using a dynamic programming language in the service framework, and determining a machine learning model to be used and input data included in the service request according to the service request.
The service module is used for providing an overall service framework and providing machine learning prediction service for the outside, and the service module can be HTTP service and the like. The service module can realize upper-layer functions of data receiving, data preprocessing, data post-processing, result returning and the like, so that the development cost is reduced, the development period is shortened, and the development efficiency is improved.
The service module is implemented in Python or another dynamic programming language; it can receive a user's service request and, according to that request, determine the machine learning model to be used and the input data contained in the request. Since model training for machine learning is usually implemented in the Python language, using Python for the service module eases the conversion from model training to service deployment.
The interface of the machine learning model is implemented in a static programming language such as C++.
Step S202, calling a model processing module realized by using a static programming language in the service framework, and after the service module is authenticated, performing data processing by using a machine learning model according to input data to obtain a processing result.
The model processing module is used for realizing the encapsulation of the machine learning model and can realize the processing of authentication, encryption, decryption and the like related to the machine learning model.
In this embodiment, when the machine learning model is serviced, the corresponding model processing module is obtained by encapsulating functions of the machine learning model, such as authentication, encryption, decryption, and the like.
The model processing module corresponding to the machine learning model is implemented in a static programming language such as C++, which can improve the security of the data related to the machine learning model.
When the interface corresponding to the model processing module is called, the model processing module authenticates the service module; after authentication passes, the machine learning model performs data processing according to the input data to obtain a processing result, which the model processing module returns to the service module.
And step S203, outputting the processing result through the service module.
After the processing result is obtained, it is output through the service module, thereby feeding the result back to the user.
In this embodiment, the dynamic programming language may also be PHP, ASP, Ruby, or the like, and the static programming language may also be C, Java, C#, or the like; this embodiment is not limited in this respect.
In this embodiment, the service framework of the machine learning model includes a service module implemented in a dynamic programming language and a model processing module implemented in a static programming language. The service module determines, according to a user's service request, the machine learning model to be used and the input data contained in the request; the model processing module is called to authenticate the service module, and after authentication passes, the machine learning model performs data processing on the input data to obtain a processing result, which is output through the service module. Upper-layer functions such as receiving data, preprocessing data, and returning results are realized in the dynamic programming language, while functions such as authentication related to the machine learning model are realized in the static programming language. This facilitates service development, ensures the security of data related to the machine learning model, reduces the risk of data leakage, improves data security, shortens the development cycle, and improves development efficiency.
Fig. 3 is a flowchart of a method for machine learning model servitization provided by a second embodiment of the disclosure. Based on the first embodiment, in this embodiment, a specific implementation of the machine learning model service is described in detail, and as shown in fig. 3, the specific steps of the method are as follows:
step S301, calling a service module realized by using a dynamic programming language in a service framework, and receiving a service request of a user.
In this embodiment, the service module is configured to provide an overall service framework and provide a machine learning prediction service to the outside, and the service module may be an HTTP service or the like. The service module can realize upper-layer functions of data receiving, data preprocessing, data post-processing, result returning and the like, so that the development cost is reduced, the development period is shortened, and the development efficiency is improved.
The machine learning prediction service may be an Artificial Intelligence (AI) service implemented based on a machine learning model, and is used to implement image processing functions such as image prediction, classification, and recognition of people/objects in an image, or may be used to implement natural language processing, and the specific functions of the machine learning model are not specifically limited in this embodiment.
In this step, the service module receives a service request of a user.
Alternatively, the service request may be an HTTP request.
The data information included in the service request is the input data, which may specifically include: the entry parameters of the model and the raw data to be processed. In addition, the service request may further include data such as preprocessing parameters and post-processing parameters. Exactly which input data the service request contains can be set and adjusted according to the needs of the actual application scenario and is not specifically limited here.
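As a sketch, assuming one possible JSON-style layout for such a request (the field names here are invented for illustration and not mandated by the disclosure), the service module's parsing step might look like:

```python
def parse_service_request(body):
    """Split a service request into the pieces named in the disclosure:
    model to use, entry parameters, pre/post-processing parameters,
    and the raw data to be processed. Optional fields default to empty."""
    model_name = body["model"]
    entry_params = body.get("entry_params", {})
    pre_params = body.get("preprocess", {})
    post_params = body.get("postprocess", {})
    raw_data = body["data"]
    return model_name, entry_params, pre_params, post_params, raw_data


req = {"model": "image-classifier", "data": [0.1, 0.2],
       "preprocess": {"resize": [224, 224]}, "postprocess": {"threshold": 0.5}}
name, entry, pre, post, data = parse_service_request(req)
```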
Step S302, according to the service request of the user, determining the machine learning model to be used and the input data contained in the service request.
In this step, the service request is parsed based on the dynamic programming language, and the machine learning model corresponding to the current service and the input data contained in the service request are determined.
Step S303, when the input data includes preprocessing parameters, performing data preprocessing on the raw data according to the preprocessing parameters to obtain preprocessed data.
In an optional implementation of this embodiment, the input data includes: the entry parameters of the machine learning model, the preprocessing parameters, and the raw data to be processed.
When the input data includes the preprocessing parameters, the original data needs to be preprocessed through a service module realized by using a dynamic programming language to obtain preprocessed data. The preprocessed data is used as input to a machine learning model.
For example, the raw data to be processed may be image data, and the data preprocessing performed on it may include: cropping the image data to a target size, image padding, and the like.
The data preprocessing to be performed on the original data can be set and adjusted according to the requirements of the machine learning model and the specific application scenario, which is not specifically limited herein.
In this step, the service module implemented in the dynamic programming language performs data preprocessing on the raw data, which can reduce development cost, shorten the development cycle, and improve development efficiency.
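A minimal sketch of one such preprocessing step, in the spirit of the "image padding" example above: padding a small 2D "image" (a list of rows) with zeros to a target height and width taken from the preprocessing parameters. The function and parameter names are assumptions for illustration:

```python
def pad_image(img, target_h, target_w, fill=0):
    """Pad a 2D list-of-rows 'image' on the right and bottom with
    `fill` values until it reaches target_h x target_w."""
    w = len(img[0])
    rows = [row + [fill] * (target_w - w) for row in img]     # pad each row
    rows += [[fill] * target_w for _ in range(target_h - len(img))]  # pad rows
    return rows


print(pad_image([[1, 2], [3, 4]], 3, 3))  # -> [[1, 2, 0], [3, 4, 0], [0, 0, 0]]
```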
Step S304, calling the interface corresponding to the model processing module implemented in a static programming language in the service framework, according to the entry parameters of the machine learning model and the preprocessed data.
In this embodiment, when the machine learning model is serviced, the corresponding model processing module is obtained by encapsulating functions of the machine learning model, such as authentication, encryption, decryption, and the like. The service framework provides an interface corresponding to the model processing module, and the calling of the model processing module can be realized by calling the interface corresponding to the model processing module.
In this step, according to the entry parameters of the machine learning model and the preprocessed data, the preprocessed data is taken as the data to be processed by the machine learning model, and the interface corresponding to the model processing module implemented in a static programming language in the service framework is called.
The model processing module corresponding to the machine learning model is implemented in a static programming language such as C++, which can improve the security of the data related to the machine learning model.
Step S305, calling the model processing module to authenticate the service module.
When the corresponding interface is called, the model processing module authenticates the service module (i.e., the corresponding service).
If the authentication fails, authentication failure information is returned to the service module through the model processing module.
If the authentication passes, steps S306-S307 are executed: according to the entry parameters of the machine learning model, the model processing module performs data processing on the preprocessed data using the machine learning model to obtain a processing result.
In this step, the model processing module implemented in the static programming language authenticates whether the service has obtained authorization for the machine learning model. Authentication of the machine learning model is thus realized based on the static programming language, which can improve the security of the machine learning model.
For example, authenticating the service module (i.e., the corresponding service) means verifying that the service has the authority to use the machine learning model corresponding to the called interface; it is necessary to verify, based on the authentication information and an authentication software development kit (SDK), whether the machine learning model corresponding to the called interface has been authorized for the current service. The authentication information may be an authentication token and may include identification information of the HTTP service (e.g., IP address, sequence ID, etc.). Authenticating the service module may use any existing authentication method for machine learning models, which is not described here again.
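One hedged sketch of such an authentication check, assuming a token computed as an HMAC over the service's identification information (IP address and sequence ID). The token layout and key handling here are illustrative assumptions, not the disclosure's actual scheme:

```python
import hashlib
import hmac

# Secret held by the model processing module (assumed, for illustration).
SECRET = b"model-authorization-key"


def issue_token(service_ip, seq_id):
    """Authorize a service: sign its identification info with the secret."""
    msg = f"{service_ip}:{seq_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def authenticate(service_ip, seq_id, token):
    """Check that the presented token matches the service's identity,
    using a constant-time comparison."""
    return hmac.compare_digest(issue_token(service_ip, seq_id), token)
```

A service presenting a token issued for a different IP or sequence ID would fail the check and receive the authentication-failure response described above.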
Step S306, where the machine learning model is stored in the form of ciphertext data, decrypting the ciphertext data of the machine learning model to obtain the plaintext machine learning model.
In this embodiment, the machine learning model is stored in the form of ciphertext data, which can further improve the security of the machine learning model.
Optionally, before data processing is performed with the machine learning model according to the input data to obtain a processing result, the machine learning model is loaded through the model processing module. Since loading is implemented based on the static programming language, the security of the machine learning model can be improved.
After the ciphertext data of the machine learning model is loaded, the model processing module implemented in the static programming language decrypts the ciphertext data to obtain the plaintext machine learning model. Model decryption is thus realized based on the static programming language, improving the security of the machine learning model.
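The encrypt-at-rest / decrypt-on-load flow can be illustrated with a toy symmetric cipher. The XOR used here is NOT secure and only demonstrates the round trip; a real model processing module would use a proper cipher such as AES, and the variable names are illustrative:

```python
def xor_bytes(data, key):
    """Toy symmetric 'cipher': XOR data against a repeating key.
    XOR is its own inverse, so the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


key = b"\x5a\xa5"                       # assumed key material
plaintext_model = b"layer1-weights"     # stand-in for serialized model data

ciphertext = xor_bytes(plaintext_model, key)   # what is stored on disk
assert ciphertext != plaintext_model

loaded = xor_bytes(ciphertext, key)            # decrypted inside the module
assert loaded == plaintext_model
```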
Step S307, performing data processing using the plaintext machine learning model according to the input data to obtain a processing result.
After the plaintext machine learning model is obtained through decryption, the model processing module implemented in the static programming language uses it to perform data processing on the preprocessed data, obtaining a processing result.
Optionally, the machine learning model may be trained using a dynamic programming language such as python, and packaged as a model call interface, and when the machine learning model is required to perform data processing, the model call interface corresponding to the machine learning model may be called by using a model processing module implemented using a static programming language.
In an optional implementation manner, through the model processing module implemented by using a static programming language, the recording and statistics of the service usage of the machine learning model can be further implemented.
Optionally, in response to the interface corresponding to the model processing module being called, the model processing module records the use information of the machine learning model, so that the use condition of the machine learning model can be recorded while the machine learning model is authenticated based on the static programming language, the safety of the relevant data of the machine learning model is improved, the development is facilitated, and the development efficiency is improved.
Illustratively, the number of times of use of the machine learning model is recorded by the model processing module in response to the interface corresponding to the model processing module being called.
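The usage recording described above can be sketched as a per-model call counter incremented each time the model interface is invoked; the names are illustrative:

```python
from collections import Counter

usage = Counter()  # per-model call statistics kept by the module


def record_model_call(model_name):
    """Record one invocation of the model interface for `model_name`."""
    usage[model_name] += 1


record_model_call("ocr-model")
record_model_call("ocr-model")
record_model_call("detector")
print(dict(usage))  # -> {'ocr-model': 2, 'detector': 1}
```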
Step S308, encrypting the processing result through the model processing module to obtain a ciphertext processing result.
After the processing result of the machine learning model is obtained, the processing result is encrypted through the model processing module, and the encrypted ciphertext processing result is returned to the service module. Encrypting the processing result of the machine learning model is thus realized based on the static programming language, improving the security of the processing result.
Step S309, performing data post-processing on the processing result according to the post-processing parameters through the service module to obtain a final result, and outputting the final result.
In this embodiment, the input data in the service request may further include a post-processing parameter.
In the step, the service module performs data post-processing on the processing result returned by the model processing module according to the post-processing parameters to obtain a final result, and can realize the function of data post-processing based on the dynamic programming language, thereby facilitating development and improving development efficiency.
For example, data post-processing of the processing results may include: filtering valid results based on a set threshold, image overlaying, image annotation, and the like.
The post-processing of the data may be set and adjusted as needed by the machine learning model and the specific application scenario, and is not specifically limited herein.
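The "filter valid results based on a set threshold" example can be sketched as follows, assuming detection results carry a confidence score (the field names are illustrative):

```python
def filter_results(detections, threshold):
    """Keep only detections whose confidence score meets the
    post-processing threshold parameter."""
    return [d for d in detections if d["score"] >= threshold]


dets = [{"label": "cat", "score": 0.9}, {"label": "dog", "score": 0.3}]
kept = filter_results(dets, 0.5)
print(kept)  # -> [{'label': 'cat', 'score': 0.9}]
```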
After the final result is obtained, the final result is output through the service module, the function of returning the result can be realized based on the dynamic programming language, the development is convenient, and the development efficiency is improved.
In some application scenarios, if a plaintext processing result of the machine learning model needs to be directly returned through the service module, or data post-processing needs to be performed based on the plaintext processing result, the ciphertext processing result can be decrypted through the service module to obtain the plaintext processing result; or, the model processing module may also directly return the plaintext processing result to the service module as required.
Taking service development for a given model as an example, implementing the service module in Python and the model processing module in C++ based on this scheme shortens the development time from about two weeks (for service functions developed entirely in C++) to 1-2 days, saving roughly 80% of the time cost.
In this embodiment, the service framework of the machine learning model includes a service module implemented using a dynamic programming language and a model processing module implemented using a static programming language. The model processing module implements low-level functions such as authentication, loading, decryption, result encryption, and data statistics for the machine learning model in the static programming language, which reduces the risk of data leakage and improves the security of the machine learning model and related data. The service module implements upper-layer functions such as data receiving, data preprocessing, data post-processing, and result return in the dynamic programming language, which facilitates service development, greatly shortens the development period, and improves development efficiency.
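The division of labor summarized above can be sketched end to end in Python. The `ModelProcessingModule` class below is only a stand-in for the compiled static-language module (in practice that module would be a C++ extension exposed to the service module through bindings); the authentication check, the placeholder model, and the toy byte-flip "encryption" are all illustrative assumptions:

```python
class ModelProcessingModule:
    """Stand-in for the static-language module: authentication,
    inference, result encryption, and usage statistics."""

    def __init__(self, licensed_callers):
        self._licensed = set(licensed_callers)
        self.call_count = 0  # usage statistics kept on the static side

    def infer(self, caller, data):
        if caller not in self._licensed:        # authenticate the service
            raise PermissionError("service not authorized")
        self.call_count += 1                    # record model usage
        result = [x * 2 for x in data]          # placeholder "model"
        return [~x & 0xFF for x in result]      # toy result "encryption"


def service_handle(request, model_module):
    """Dynamic-language service module: parse the request, call the model
    module through its interface, decrypt the ciphertext result, return."""
    cipher = model_module.infer(request["caller"], request["input"])
    plain = [~x & 0xFF for x in cipher]         # decrypt ciphertext result
    return {"result": plain}


mm = ModelProcessingModule(licensed_callers={"svc-1"})
resp = service_handle({"caller": "svc-1", "input": [1, 2, 3]}, mm)
# resp["result"] == [2, 4, 6]; an unlicensed caller raises PermissionError.
```

The point of the split is that the security-sensitive steps (authentication, encryption, statistics) live behind the compiled interface, while the easily iterated request handling stays in the dynamic language.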
Fig. 4 is a schematic diagram of a device for machine learning model servitization provided by a third embodiment of the disclosure. The device for machine learning model servitization provided by the embodiment of the disclosure can execute the processing flow provided by the method for machine learning model servitization. As shown in fig. 4, the apparatus 40 for machine learning model servitization includes: a data acquisition module 401, a machine learning data processing module 402 and a result return module 403.
Specifically, the data obtaining module 401 is configured to invoke a service module implemented in a service framework using a dynamic programming language, and determine, according to the service request, a machine learning model to be used and input data included in the service request.
The machine learning data processing module 402 is configured to call a model processing module implemented in the service framework using a static programming language and, after the service module passes authentication, perform data processing using the machine learning model according to the input data to obtain a processing result.
The result returning module 403 is configured to output the processing result through the service module.
The device provided in the embodiment of the present disclosure may be specifically configured to execute the method embodiment provided in the first embodiment, and specific functions are not described herein again.
In this embodiment, the service framework of the machine learning model includes a service module implemented using a dynamic programming language and a model processing module implemented using a static programming language. The service module determines, according to a user's service request, the machine learning model to be used and the input data contained in the service request; the model processing module is called to authenticate the service and, after the authentication passes, performs data processing using the machine learning model according to the input data to obtain a processing result; the processing result is then output through the service module. Implementing upper-layer functions such as receiving data, preprocessing data, and returning results in the dynamic programming language, and functions such as machine-learning-model-related authentication in the static programming language, facilitates service development, ensures the security of machine-learning-model-related data, reduces the risk of data leakage, improves data security, shortens the development period, and improves development efficiency.
Fig. 5 is a schematic diagram of a device for machine learning model servitization provided by a fourth embodiment of the disclosure. The device for machine learning model servitization provided by the embodiment of the disclosure can execute the processing flow provided by the method for machine learning model servitization. As shown in fig. 5, the apparatus 50 for machine learning model servitization includes: a data acquisition module 501, a machine learning data processing module 502 and a result return module 503.
Specifically, the data obtaining module 501 is configured to invoke a service module implemented in a service framework using a dynamic programming language, and determine, according to a service request, a machine learning model to be used and input data included in the service request.
The machine learning data processing module 502 is configured to call a model processing module implemented in the service framework using a static programming language and, after the service module passes authentication, perform data processing using the machine learning model according to the input data to obtain a processing result.
The result returning module 503 is configured to output the processing result through the service module.
In an optional embodiment, the machine learning data processing module is further configured to:
call the model processing module by calling the interface corresponding to the model processing module in the service framework and, after the service module passes authentication, perform data processing using the machine learning model according to the input data to obtain a processing result.
In an alternative embodiment, the input data comprises: the entry parameters of the machine learning model, the preprocessing parameters and the raw data to be processed. As shown in fig. 5, the apparatus 50 for machine learning model servization further includes:
a data pre-processing module 504 configured to:
perform data preprocessing on the raw data according to the preprocessing parameters to obtain preprocessed data.
In an optional embodiment, the machine learning data processing module is further configured to:
perform data processing on the preprocessed data using the machine learning model, according to the entry parameters of the machine learning model, to obtain a processing result.
In an alternative embodiment, as shown in fig. 5, the machine learning data processing module 502 includes:
the information recording sub-module 5021 is used for:
record usage information of the machine learning model through the model processing module in response to the interface corresponding to the model processing module being called.
In an optional embodiment, the information recording sub-module is further configured to:
record the use times of the machine learning model through the model processing module in response to the interface corresponding to the model processing module being called.
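Recording usage each time the model processing module's interface is invoked can be sketched as a counter incremented at the interface boundary; here a Python decorator stands in for the compiled entry point, and all names are hypothetical:

```python
import functools

def counted(method):
    """Increment a per-instance usage counter every time the wrapped
    interface method is called, before the real work runs."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        self.usage_count += 1
        return method(self, *args, **kwargs)
    return wrapper


class ModelInterface:
    def __init__(self):
        self.usage_count = 0  # use times of the machine learning model

    @counted
    def run(self, data):
        return sum(data)  # placeholder for real inference


iface = ModelInterface()
iface.run([1, 2])
iface.run([3])
# iface.usage_count == 2
```

Keeping the counter behind the static-language interface is what makes the statistic trustworthy: the service module cannot bypass or tamper with it.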
In an alternative embodiment, as shown in fig. 5, the machine learning data processing module 502 includes:
the model decryption submodule 5022 is used for decrypting ciphertext data of the machine learning model to obtain the machine learning model of a plaintext, wherein the machine learning model is stored in a mode of the ciphertext data.
The data processing sub-module 5023 is configured to perform data processing using a plaintext machine learning model according to the input data, and obtain a processing result.
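The decrypt-then-load flow of the model decryption submodule can be made concrete with a small sketch. The SHA-256 counter-mode keystream below is only a toy placeholder for a vetted cipher such as AES, and all names are illustrative:

```python
import hashlib

def keystream(key: bytes):
    """Toy keystream derived from the key with SHA-256 in counter mode.
    A real deployment would use a vetted cipher such as AES."""
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
        yield from block

def xor_with_keystream(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

# The machine learning model is stored on disk only as ciphertext data...
key = b"demo-key"
model_plain = b"fake-model-weights"
model_cipher = xor_with_keystream(model_plain, key)

# ...and is decrypted in memory by the model processing module before loading.
recovered = xor_with_keystream(model_cipher, key)
```

Because decryption happens inside the static-language module, the plaintext model never has to be written to disk or exposed to the dynamic-language layer.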
In an alternative embodiment, as shown in fig. 5, the machine learning data processing module 502 includes:
and the model loading submodule 5024 is used for loading the machine learning model through the model processing module.
In an optional embodiment, as shown in fig. 5, the apparatus 50 for machine learning model servitization further includes:
and the data post-processing module 505 is configured to perform data post-processing on the processing result according to the post-processing parameters through the service module, obtain a final result, and output the final result.
In an alternative embodiment, as shown in fig. 5, the machine learning data processing module 502 includes:
the result encryption submodule 5025 is used for encrypting the processing result through the model processing module to obtain the processing result of the ciphertext.
In an optional embodiment, the data obtaining module is further configured to:
and receiving the service request through the service module before determining the machine learning model required to be used and the input data contained in the service request according to the service request.
The device provided in the embodiment of the present disclosure may be specifically configured to execute the method embodiment provided in the second embodiment, and specific functions are not described herein again.
In this embodiment, the service framework of the machine learning model includes a service module implemented using a dynamic programming language and a model processing module implemented using a static programming language. The model processing module implements low-level functions such as authentication, loading, decryption, result encryption, and data statistics for the machine learning model in the static programming language, which reduces the risk of data leakage and improves the security of the machine learning model and related data. The service module implements upper-layer functions such as data receiving, data preprocessing, data post-processing, and result return in the dynamic programming language, which facilitates service development, greatly shortens the development period, and improves development efficiency.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order or good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program so that the electronic device performs the solution provided by any of the embodiments described above.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the method of machine learning model servitization. For example, in some embodiments, the method of machine learning model servitization may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by the computing unit 601, one or more steps of the method of machine learning model servitization described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the method of machine learning model servitization.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or cloud host), a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability in traditional physical host and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (25)
1. A method of machine learning model servitization, comprising:
calling a service module realized by using a dynamic programming language in a service framework, and determining a machine learning model required to be used and input data contained in a service request according to the service request;
calling a model processing module realized by using a static programming language in the service framework, and after the service module passes authentication, performing data processing by using the machine learning model according to the input data to obtain a processing result;
and outputting the processing result through the service module.
2. The method of claim 1, wherein the invoking a model processing module implemented in the service framework using a static programming language, and after the service module is authenticated, performing data processing using the machine learning model according to the input data to obtain a processing result comprises:
and calling the model processing module by calling an interface corresponding to the model processing module in the service framework, and after the service module passes authentication, performing data processing by using the machine learning model according to the input data to obtain a processing result.
3. The method of claim 2, wherein the input data comprises: the method comprises the steps of obtaining entry parameters, preprocessing parameters and original data to be processed of the machine learning model;
the method for calling the service module realized by the dynamic programming language in the service framework further comprises the following steps of, after determining the machine learning model to be used and the input data contained in the service request according to the service request:
and according to the preprocessing parameters, carrying out data preprocessing on the original data to obtain preprocessed data.
4. The method of claim 3, wherein the processing data using the machine learning model from the input data to obtain a processing result comprises:
and according to the entry parameters of the machine learning model, performing data processing on the preprocessed data by using the machine learning model to obtain a processing result.
5. The method of claim 2, further comprising:
and recording the use information of the machine learning model through the model processing module in response to the corresponding interface of the model processing module being called.
6. The method of claim 5, wherein the recording, by the model processing module, usage information of the machine learning model in response to the interface corresponding to the model processing module being called comprises:
and recording the use times of the machine learning model through the model processing module in response to the calling of the interface corresponding to the model processing module.
7. The method according to any one of claims 1-6, wherein the performing data processing using the machine learning model from the input data to obtain a processing result comprises:
decrypting ciphertext data of the machine learning model to obtain the machine learning model of a plaintext, wherein the machine learning model is stored in a ciphertext data mode;
and according to the input data, performing data processing by using the machine learning model in plaintext to obtain a processing result.
8. The method according to any one of claims 1-7, wherein the performing data processing using the machine learning model based on the input data further comprises, before obtaining a processing result:
loading, by the model processing module, the machine learning model.
9. The method of any of claims 1-7, wherein the input data includes post-processing parameters, the outputting, by the service module, the processing result comprising:
and performing data post-processing on the processing result according to the post-processing parameters through the service module to obtain a final result, and outputting the final result.
10. The method according to any one of claims 1-9, wherein the performing data processing using the machine learning model according to the input data further comprises, after obtaining a processing result:
and encrypting the processing result through the model processing module to obtain a processing result of the ciphertext.
11. The method according to any one of claims 1-10, wherein prior to determining from a service request a machine learning model to be used and input data contained in the service request, further comprising:
receiving, by the service module, the service request.
12. An apparatus for machine learning model servitization, comprising:
the data acquisition module is used for calling a service module realized by using a dynamic programming language in a service framework and determining a machine learning model required to be used and input data contained in a service request according to the service request;
the machine learning data processing module is used for calling a model processing module realized by using a static programming language in the service framework, and after the service module is authenticated, the machine learning data processing module is used for performing data processing according to the input data to obtain a processing result;
and the result returning module is used for outputting the processing result through the service module.
13. The device of claim 12, wherein the machine learning data processing module is further to:
and calling the model processing module by calling an interface corresponding to the model processing module in the service framework, and after the service module passes authentication, performing data processing by using the machine learning model according to the input data to obtain a processing result.
14. The device of claim 13, wherein the input data comprises: the entry parameters of the machine learning model, the preprocessing parameters, the raw data to be processed,
the apparatus further comprises:
a data pre-processing module to:
and according to the preprocessing parameters, carrying out data preprocessing on the original data to obtain preprocessed data.
15. The device of claim 14, wherein the machine learning data processing module is further to:
and according to the entrance parameters of the machine learning model, performing data processing on the preprocessed data by using the machine learning model to obtain a processing result.
16. The apparatus of claim 13, wherein the machine learning data processing module comprises:
an information recording sub-module for:
and recording the use information of the machine learning model through the model processing module in response to the corresponding interface of the model processing module being called.
17. The apparatus of claim 16, wherein the information recording sub-module is further configured to:
and in response to the fact that the interface corresponding to the model processing module is called, recording the use times of the machine learning model through the model processing module corresponding to the interface.
18. The apparatus of any of claims 12-17, wherein the machine learning data processing module comprises:
the model decryption submodule is used for decrypting ciphertext data of the machine learning model to obtain the machine learning model of a plaintext, wherein the machine learning model is stored in a ciphertext data mode;
and the data processing submodule is used for processing data by using the machine learning model of the plaintext according to the input data to obtain a processing result.
19. The apparatus of any of claims 12-18, wherein the machine learning data processing module comprises:
and the model loading submodule is used for loading the machine learning model through the model processing module.
20. The apparatus of any of claims 12-19, further comprising:
and the data post-processing module is used for performing data post-processing on the processing result according to the post-processing parameters through the service module to obtain a final result and outputting the final result.
21. The apparatus of any of claims 12-20, wherein the machine learning data processing module comprises:
and the result encryption submodule is used for encrypting the processing result through the model processing module to obtain the processing result of the ciphertext.
22. The device of any of claims 12-21, wherein the data acquisition module is further to:
receiving a service request through the service module before determining a machine learning model required to be used and input data contained in the service request according to the service request.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110700125.5A CN113448545B (en) | 2021-06-23 | 2021-06-23 | Method, apparatus, storage medium and program product for machine learning model servitization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110700125.5A CN113448545B (en) | 2021-06-23 | 2021-06-23 | Method, apparatus, storage medium and program product for machine learning model servitization |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113448545A true CN113448545A (en) | 2021-09-28 |
CN113448545B CN113448545B (en) | 2023-08-08 |
Family
ID=77812404
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110700125.5A Active CN113448545B (en) | 2021-06-23 | 2021-06-23 | Method, apparatus, storage medium and program product for machine learning model servitization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113448545B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114138507A (en) * | 2021-11-09 | 2022-03-04 | 中国联合网络通信集团有限公司 | Python program service method, device and computer readable storage medium |
CN114691148A (en) * | 2022-04-11 | 2022-07-01 | 北京百度网讯科技有限公司 | Model reasoning acceleration method and device, electronic equipment and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101840334A (en) * | 2010-04-16 | 2010-09-22 | 中国电子科技集团公司第二十八研究所 | Software component service packaging method |
US10453165B1 (en) * | 2017-02-27 | 2019-10-22 | Amazon Technologies, Inc. | Computer vision machine learning model execution service |
CN111209013A (en) * | 2020-01-15 | 2020-05-29 | 深圳市守行智能科技有限公司 | Efficient deep learning rear-end model deployment framework |
CN111290778A (en) * | 2020-02-06 | 2020-06-16 | 网易(杭州)网络有限公司 | AI model packaging method, platform and electronic equipment |
CN111340232A (en) * | 2020-02-17 | 2020-06-26 | 支付宝(杭州)信息技术有限公司 | Online prediction service deployment method and device, electronic equipment and storage medium |
CN111611065A (en) * | 2020-05-29 | 2020-09-01 | 远光软件股份有限公司 | Calling method and device of machine learning algorithm, storage medium and electronic equipment |
CN111651149A (en) * | 2020-07-03 | 2020-09-11 | 大连东软教育科技集团有限公司 | Machine learning model system convenient to deploy and calling method thereof |
CN111930419A (en) * | 2020-07-30 | 2020-11-13 | 深圳市威富视界有限公司 | Code packet generation method and system based on deep learning model |
US20200394566A1 (en) * | 2019-06-14 | 2020-12-17 | Open Text Sa Ulc | Systems and methods for lightweight cloud-based machine learning model service |
CN112508200A (en) * | 2020-12-18 | 2021-03-16 | 北京百度网讯科技有限公司 | Method, apparatus, device, medium, and program for processing machine learning model file |
US20210125329A1 (en) * | 2019-10-25 | 2021-04-29 | International Business Machines Corporation | Loading deep learning network models for processing medical images |
Application Events

- 2021-06-23: Application CN202110700125.5A filed in China (CN); granted as CN113448545B, status Active
Non-Patent Citations (1)
Title |
---|
Li Jing, Sun Yingbo, Liu Zhishen, Zhang Daoyi: "Design and Implementation of a Model Base Management System", Journal of Software, no. 08 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114138507A (en) * | 2021-11-09 | 2022-03-04 | 中国联合网络通信集团有限公司 | Python program service method, device and computer readable storage medium |
CN114138507B (en) * | 2021-11-09 | 2024-05-17 | 中国联合网络通信集团有限公司 | Python program service method, device and computer readable storage medium |
CN114691148A (en) * | 2022-04-11 | 2022-07-01 | 北京百度网讯科技有限公司 | Model inference acceleration method and apparatus, electronic device and storage medium |
CN114691148B (en) * | 2022-04-11 | 2024-07-19 | 北京百度网讯科技有限公司 | Model inference acceleration method and apparatus, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113448545B (en) | 2023-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105471823B (en) | Sensitive information processing method, apparatus, server and security decision system | |
CN107249004B (en) | Identity authentication method, device and client | |
WO2015096411A1 (en) | Systems and methods for password reset | |
US20170249492A1 (en) | Two-dimensional code scanning interaction methods and apparatuses | |
CN106850503B (en) | Login-free identity authentication method and device | |
CN113448545B (en) | Method, apparatus, storage medium and program product for machine learning model servitization | |
CN107634947A (en) | Method and apparatus for restricting malicious login or registration |
CN113722683A (en) | Model protection method, device, equipment, system and storage medium | |
CN111079152A (en) | Model deployment method, device and equipment | |
CN115150063A (en) | Model encryption method and device and electronic equipment | |
CN111242462A (en) | Data processing method and device, computer storage medium and electronic equipment | |
CN114338191A (en) | Risk verification method, device, equipment and storage medium | |
CN114186206A (en) | Login method and device based on small program, electronic equipment and storage medium | |
CN116916310B (en) | Verification code generation and verification method and device and electronic equipment | |
US8910260B2 (en) | System and method for real time secure image based key generation using partial polygons assembled into a master composite image | |
CN113158196A (en) | Login verification method, device, equipment and medium | |
CN117314433A (en) | Transaction risk control method, device and equipment for banking outlets and storage medium | |
CN115333851A (en) | Automatic driving data transmission method and device and electronic equipment | |
CN114036364B (en) | Method, apparatus, device, medium, and system for identifying crawlers | |
CN115396206A (en) | Message encryption method, message decryption method, device and program product | |
CN115391805A (en) | Encrypted data migration method, device, equipment and storage medium | |
US8777100B2 (en) | Method for inputting a password and a device therefor | |
CN114040404A (en) | Data distribution method, system, device and storage medium | |
CN111835815A (en) | Synchronous storage method and device for internet self-media data on block chain | |
CN111125250A (en) | Method and device for storing internet evaluation data on block chain |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||