CN112631730A - Model processing method and device, equipment and computer readable storage medium - Google Patents

Model processing method and device, equipment and computer readable storage medium

Info

Publication number
CN112631730A
Authority
CN
China
Prior art keywords
model
information
docker
user
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011604381.6A
Other languages
Chinese (zh)
Inventor
钱戴明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp filed Critical China Construction Bank Corp
Priority to CN202011604381.6A priority Critical patent/CN112631730A/en
Publication of CN112631730A publication Critical patent/CN112631730A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F 16/283 Multi-dimensional databases or data warehouses, e.g. MOLAP or ROLAP
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment
    • G06F 8/61 Installation
    • G06F 8/63 Image based installation; Cloning; Build to order
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Stored Programmes (AREA)

Abstract

The application provides a model processing method, a model processing device, equipment, and a computer-readable storage medium. Data are obtained and a model is trained using a Docker image. Because the software environment and hardware resources used by a running Docker image are isolated from the outside, the software and hardware environments of different machine learning model training jobs can be isolated from one another, and this isolation remains effective in an offline environment.

Description

Model processing method and device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of electronic information, and in particular, to a model processing method, apparatus, device, and computer-readable storage medium.
Background
With the rise of big data and machine learning techniques, enterprises have accumulated large amounts of data, typically stored in data warehouses. Machine learning techniques require these data as inputs to achieve their intended results. Enterprises therefore have a strong incentive to use the data in their data warehouses for modeling activities rather than only for reporting.
Machine learning is one of the important ways to utilize data: a machine learning model can be trained on data to obtain various trained models for use in production and daily life.
At present, with the rapid development of machine learning, open source software supports many kinds of machine learning models, and different departments of an enterprise require models with different functions, so machine learning in an enterprise is characterized by multiple users, multiple servers, and multiple software environments.
Disclosure of Invention
In the course of research, the applicant found that the prior art generally satisfies the various software environments required by machine learning by building multi-version software environments directly on the host. However, with multiple users and multiple servers, this approach is highly likely to cause resource contention among some users and resource waste on some servers, so hardware resources cannot be used fully and effectively. Because some financial enterprises have high data-security requirements, data are accessed in an offline manner, and the existing approach of using cluster software to isolate hardware resources is constrained by diversified software-environment requirements and cannot effectively isolate hardware resources in an offline environment.
The application provides a model processing method, a model processing device, and a computer-readable storage medium, aiming to solve the problem of effectively isolating hardware resources in an offline environment given the characteristics of multiple users, multiple servers, and multiple software environments.
In order to achieve the above object, the present application provides the following technical solutions:
a method of model processing, comprising:
acquiring first modeling information, wherein the first modeling information comprises training data of a first model and information of the first model;
configuring a first container (Docker) image according to the first modeling information, wherein a data source of the first Docker image is configured as the training data, and a function of the first Docker image is configured to train the first model using the training data;
and starting the first Docker image with a first hardware resource usage amount, wherein the first hardware resource usage amount is determined according to the training data and the information of the first model.
Optionally, before the starting of the first Docker image, the method further includes:
packaging authority information of the first model into the first Docker image, wherein the authority information of the first model comprises: information of the users and/or data having the right to make predictions using the first model.
Optionally, the authority information of the first model further includes:
authority information of a first user, where the authority information of the first user comprises data to which the first user has access authority.
Optionally, the obtaining of the first modeling information includes:
after the first user logs in and passes verification, displaying an information selection interface according to the authority information of the first user;
and acquiring the first modeling information in response to an operation of the first user selecting the first modeling information in the information selection interface.
Optionally, the authority information of the first model and the authority information of the first user are pre-stored by the UAAP platform.
Optionally, after the first Docker image is started, the method further includes:
recording occupation parameters of hardware resources during the running of the first Docker image;
and adjusting the usage amount of the independent hardware resources configured for the first Docker image according to the occupation parameters.
Optionally, the method further includes:
after the first Docker image finishes running, packaging the trained first model and the software environment in which the first model runs into a second Docker image;
and establishing a calling interface of the second Docker image.
An apparatus for processing a model, comprising:
an acquisition module, configured to acquire first modeling information, wherein the first modeling information comprises training data of a first model and information of the first model;
a configuration module, configured to configure a first container (Docker) image according to the first modeling information, wherein a data source of the first Docker image is configured as the training data, and a function of the first Docker image is configured to train the first model using the training data;
and a starting module, configured to start the first Docker image with a first hardware resource usage amount, wherein the first hardware resource usage amount is determined according to the training data and the information of the first model.
A model processing apparatus comprising:
a memory and a processor;
the memory is used for storing programs;
the processor is used for running the program to realize the processing method of the model.
A computer-readable storage medium having stored thereon a program which, when executed by a computing device, implements the model processing method described above.
According to the model processing method and device provided herein, data are obtained and a model is trained using a Docker image. Because the software environment and hardware resources used by a running Docker image are isolated from the outside, the software and hardware environments of different machine learning model training jobs can be isolated from one another, and this isolation remains effective in an offline environment.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a method for processing a model disclosed in an embodiment of the present application;
FIG. 2 is a flow chart of yet another model processing method disclosed in an embodiment of the present application;
FIG. 3 is a flow chart of yet another model processing method disclosed in an embodiment of the present application;
FIG. 4 is a flow chart of yet another model processing method disclosed in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a model processing device disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a processing method of a model disclosed in an embodiment of the present application, including the following steps:
s101, obtaining first modeling information.
The first modeling information comprises training data of the first model and information of the first model. The first model may be any single machine learning model or a plurality of machine learning models; it is not limited here. When there are a plurality of machine learning models, the training data of the first model is the training data of each of those models. The information of the first model may include, but is not limited to, an identification of the model, the structure of the model, and the like.
S102, configuring a first container (Docker) image according to the first modeling information.
The data source of the first Docker image is configured as the training data, and the function of the first Docker image is configured to train the first model using the training data.
It is to be understood that, when the first model refers to a plurality of machine learning models, a first Docker image is configured for each first model, and the function of the first Docker image of any one first model is to train that model using its training data.
S103, starting the first Docker image with the first hardware resource usage amount.
The first hardware resource usage amount is determined according to the training data and the information of the first model.
Specifically, the larger the amount of training data, the more hardware resources are configured; and the more resources the first model needs to run, the more hardware resources are configured. The hardware resources required by the first Docker image for the current amount of training data can be determined from the data volume and the running conditions of existing Docker images. The resource allocation rules in the prior art are all applicable here and are not limited.
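The resource determination in S103 is not pinned down by the patent; as a minimal sketch, assuming a simple linear heuristic (resources grow with data volume and model complexity), it could look like the following. All scaling factors, thresholds, and the `model_complexity` parameter are illustrative assumptions, not taken from the patent.

```python
def estimate_resources(num_rows, model_complexity,
                       base_cpus=1.0, base_mem_gb=2.0):
    """Heuristic sketch of S103: scale CPU and memory with training-data
    volume and model complexity. Factors are illustrative only."""
    # Assume one extra CPU per million training rows plus one per complexity unit
    cpus = base_cpus + num_rows / 1_000_000 + model_complexity
    # Assume memory grows roughly 1 GB per 500k rows of training data
    mem_gb = base_mem_gb + num_rows / 500_000
    return {"cpus": round(cpus, 2), "mem_gb": round(mem_gb, 2)}

print(estimate_resources(2_000_000, model_complexity=1))
```

In a real deployment, the coefficients would instead be fitted from the running conditions of existing Docker images, as the text suggests.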
It can be understood that, after the first Docker image is started, the Docker instance in the first Docker image obtains the training data according to the configuration in S102 and trains the first model with it.
In the model processing method shown in fig. 1, a Docker image is used to acquire data and train the model. Because the software environment and hardware resources used by the Docker image are isolated from the outside, the software and hardware environments of different model training jobs are isolated from one another, and this isolation remains effective in an offline environment.
In addition, the amount of independent hardware resources configured for the first Docker image is determined according to the training data and the information of the first model, so the first Docker image can be flexibly deployed on servers as needed in a multi-server scenario.
Furthermore, compared with existing cluster software, container (Docker) technology is easy to implement, so the technical threshold is low, deployment is simple, and the demands on technicians are modest. Compared with the existing way of isolating hardware resources with virtual machines, Docker responds faster and uses resources more efficiently.
Fig. 2 shows another model processing method disclosed in an embodiment of the present application. Compared with the above embodiment, a step of authority control is added. The flow shown in fig. 2 includes the following steps:
s201, after the first user logs in and passes the verification, displaying an information selection interface according to the authority information of the first user.
After the first user logs in, the password entered by the first user can be verified. The authority information of the first user includes the data to which the first user has access authority and, optionally, the models and software versions the first user has the right to use.
Optionally, the user group to which the first user belongs may be determined first, and the authority information of that user group then used as the authority information of the first user.
It is understood that the correspondence between data and user authority can be configured in advance, as can the correspondence between models, software versions, and user authority.
Specifically, the information selection interface may include various information items that the first user has permission, where the information items include, but are not limited to: a data selection item, a model algorithm selection item, and a software version selection item.
The data selection items may include various structured and unstructured data. Different data may come from different data warehouses. The user may filter data selections according to the various topics of the data warehouse and information related to the data dictionary. The model algorithm selection items include various machine learning algorithms, such as regression and classification algorithms. A given algorithm may be offered in multiple software versions.
Specifically, in this embodiment, verification of the login password and acquisition of the authority information may be implemented with a Unified Authentication, Authorization and fraud Prevention management platform (UAAP). That is, the correspondence between users and passwords and the authority information of users are configured in the UAAP in advance; the UAAP then checks whether a user's password is correct and, if so, feeds back the user's authority information.
S202, responding to the operation that the first user selects the first modeling information, and acquiring the first modeling information.
Specifically, each information item in the selection interface may be selected by checking it; that is, the user checks the desired information items on the selection interface to complete the selection.
After receiving the first user's operation instruction checking an information item, the system detects whether the first user has the right to use that item; if so, the checked item becomes part of the acquired first modeling information.
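The permission check in S201/S202 can be sketched as a filter over the user's authority set. Function and item names here are illustrative assumptions, not from the patent:

```python
def select_modeling_info(user_permissions, selected_items):
    """Sketch of S201/S202: accept the user's checked items only if every
    item is covered by the user's authority information."""
    unauthorized = [item for item in selected_items
                    if item not in user_permissions]
    if unauthorized:
        # The user checked an item without the corresponding use permission
        raise PermissionError(f"no right to use: {unauthorized}")
    return list(selected_items)

# Hypothetical authority set fed back by UAAP after password verification
perms = {"sales_table", "regression", "sklearn-1.0"}
print(select_modeling_info(perms, ["sales_table", "regression"]))
```

A production system would obtain `user_permissions` from the UAAP platform rather than a hard-coded set.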
S203, configuring a first container (Docker) image according to the first modeling information.
It should be noted that the first Docker image has already been created before it is configured. The first Docker image can be created after the creating user's password is verified, or on other occasions, as long as creation is completed before configuration.
S204, packaging the authority information of the first model into the first Docker image.
Specifically, the authority information of the first model may include: information of the users and/or data having the right to make predictions using the first model. Optionally, the authority information of the first Docker image may further include the authority information of the first user.
It is understood that the authority information of the first Docker image may be pre-configured on the UAAP platform.
S205, starting the first container (Docker) image with the first hardware resource usage amount.
Because containerization technology can effectively divide the hardware resources of a single operating system into isolated groups, and thereby balance conflicting hardware resource demands across those groups, training models in container images as in this embodiment helps isolate the software and hardware resources of different machine learning training jobs.
Moreover, in existing system architectures the user authentication system of the data warehouse and the user system of the machine learning platform are usually independent, so an administrator must match them manually one by one, which is cumbersome, error-prone, and slow to respond to personnel changes. In this embodiment, the UAAP platform controls authority; based on authority control information and authority interaction that can cover each system (and subsystem), unified authority control is achieved and the possibility of password leakage is reduced.
More importantly, the user training the model does not need to handle authority control during subsequent prediction: authority control is implemented uniformly by the UAAP platform, reducing time cost and the technical threshold, and facilitating rapid deployment of model training.
Fig. 3 shows another model processing method disclosed in an embodiment of the present application. Compared with the above embodiments, it adds steps for adjusting hardware resources and refines how the steps are executed. The flow shown in fig. 3 includes the following steps:
s301, a user logs in a Portal page and applies for an open source machine learning environment through UAAP.
Specifically, after the user logs in to the Portal page, the UAAP verifies the password and, once it passes, feeds back the user's authority information. An information selection interface is then displayed according to that information.
In the displayed information selection interface, the user needs to check the following:
1. The data to use. The authority control subsystem provides the structured and unstructured data accessible according to project requirements; the user may filter data items according to the various topics of the data warehouse and information associated with the data dictionary.
2. The algorithm to use. The system provides a number of algorithm functions, including common regression, classification, clustering, and deep-learning-related algorithms.
3. The software version. Depending on the algorithm chosen in 2, the system suggests different software environment installations, and the user can pick the software version he or she is most familiar with.
The system then provides the user with a machine learning environment automatically constructed from these selections.
S302, automatically generating a Dockerfile according to the data, algorithm, and software version selected by the user, and using the Dockerfile to configure an independent container-based machine learning environment, i.e., a Docker image.
Optionally, the specific data access rights can be packaged into the environment, so the user does not need to configure anything related to data access rights.
When the Dockerfile is generated, a Docker image with the corresponding software versions is automatically configured through a private pip repository according to the package versions selected by the user.
In this embodiment, the Docker image is mainly built on Python, so the base image is a Python image. The base images required to build the Docker container are placed in a private image library.
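The Dockerfile generation in S302 can be sketched as simple string assembly from the user's selections. The registry hostname, pip index URL, and `train.py` entry point below are illustrative assumptions; the patent only specifies that a Python base image from a private library and a private pip repository are used:

```python
def build_dockerfile(python_version, packages, pip_index_url):
    """Sketch of S302: generate a Dockerfile from the user-selected
    software versions, installing packages from a private pip repository."""
    lines = [
        # Base image pulled from the private image library (hostname assumed)
        f"FROM private-registry.example.com/python:{python_version}",
        # Install the exact package versions the user selected
        "RUN pip install --index-url {} {}".format(
            pip_index_url,
            " ".join(f"{p}=={v}" for p, v in sorted(packages.items()))),
        # Assumed training entry point inside the container
        'CMD ["python", "train.py"]',
    ]
    return "\n".join(lines)

print(build_dockerfile("3.9", {"scikit-learn": "1.0.2"},
                       "https://pip.internal.example.com/simple"))
```

The generated text would then be fed to `docker build` to produce the independent machine learning environment.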
S303, starting the Docker image with the first hardware resource usage amount.
Specifically, according to the modeling information selected by the user, the computation amount and algorithm complexity are preliminarily estimated, the hardware resources required by the project are evaluated, and an independent machine learning environment is allocated using the isolation property of Docker. After allocation, users share hardware resources through independent allocations and cannot interfere with one another's projects. Finally, a single Docker instance provides a way to log in to the machine learning environment, similar to a Jupyter notebook.
A user can directly access the data warehouse through a data-fetching module built into the Docker container to obtain the data for model training.
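Docker's standard cgroup limits are one way to realize the isolated allocation described in S303. The sketch below only builds the command line; the image name, port, and limit values are illustrative, and `--cpus`/`--memory` are Docker's standard resource flags:

```python
def docker_run_command(image, cpus, mem_gb, port=8888):
    """Sketch of S303: start the configured image with an isolated share
    of hardware resources and expose a notebook-style login port."""
    return ["docker", "run", "-d",
            f"--cpus={cpus}",        # CPU quota for this training job
            f"--memory={mem_gb}g",   # memory cap for this training job
            "-p", f"{port}:8888",    # e.g. a Jupyter-style login entry
            image]

print(" ".join(docker_run_command("ml-env:latest", 2.0, 4)))
```

Running this command (e.g. via `subprocess.run`) gives each project its own resource envelope, so jobs cannot interfere with one another.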
S304, recording the occupation parameters of the hardware resources while the Docker image is running.
Specifically, the occupation parameter may be the proportion of the total hardware resources occupied by the Docker image.
Hardware resource usage during model training is collected and fed back into the resource allocation process, and the resource allocation algorithm is updated and iterated accordingly, providing a basis for subsequent allocation and more effective use of limited hardware resources.
S305, adjusting the usage amount of the independent hardware resources configured for the Docker image according to the occupation parameters.
It is understood that, besides adjusting the hardware resource usage of a Docker image that is already running, this also provides a basis for allocating hardware resources to Docker images yet to be started.
The method described in this embodiment can dynamically adjust the allocation of hardware resources to Docker images, making the use of hardware resources more flexible and reasonable.
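A minimal sketch of the S304/S305 feedback loop: grow the allocation when the recorded occupation ratio is high, shrink it when low. The thresholds, step size, and floor are illustrative assumptions, not values from the patent:

```python
def adjust_allocation(allocated_cpus, utilization,
                      low=0.3, high=0.9, step=0.5, min_cpus=0.5):
    """Sketch of S304/S305: adjust a Docker image's CPU allocation from
    its recorded occupation ratio (occupied / allocated)."""
    if utilization > high:
        return allocated_cpus + step                    # under-provisioned: grow
    if utilization < low:
        return max(min_cpus, allocated_cpus - step)     # over-provisioned: reclaim
    return allocated_cpus                               # within the target band

print(adjust_allocation(2.0, 0.95))
```

The same recorded utilization history can seed the initial estimate for images not yet started, as the text notes.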
The method described in the embodiment has the following beneficial effects:
Machine learning is combined with UAAP: after a single verification, a user automatically obtains the data access authorization corresponding to his or her department or subsidiary. User data access authority is deeply bound to modeling, model release, model verification, and iteration, which greatly reduces the administrator's configuration work, reduces the possibility of password exposure, and protects data assets.
The user can select any familiar algorithm and the software versions it depends on, and the system automatically resolves the software dependencies, so the tooling works out of the box.
When hardware resources are allocated, resources on different servers are dynamically assigned to users according to the monitored usage of existing server resources and an estimate of the computing resources required by the user's algorithm and data volume, making full use of hardware resources while ensuring each project has enough.
Fig. 4 is a flowchart of another model processing method disclosed in an embodiment of the present application, which adds steps for using the model after its training is completed. The flow shown in fig. 4 includes the following steps:
s401, obtaining first modeling information.
S402, configuring a first container (Docker) image according to the first modeling information.
S403, packaging the authority information of the first model into the first Docker image.
S404, starting the first Docker image with the first hardware resource usage amount.
S405, after the first Docker image finishes running, packaging the trained first model and the software environment in which it runs into a second Docker image.
It should be noted that the first and second Docker images are different. The first Docker image is responsible for training and contains the control information for user authority access. It addresses the fact that algorithms with different data volumes place different demands on hardware resources during training: initial resources are allocated automatically and dynamically according to collected historical resource usage, and are adjusted if they prove insufficient during training.
The second Docker image is used for model prediction: from the model solidified after training in the first Docker image, an external REST-style API is automatically generated for single-record or batch data prediction. The user can load the second Docker image to publish an external service or to process batch data, which greatly reduces deployment difficulty.
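The prediction endpoint inside the second Docker image can be sketched as a handler that accepts single or batch records as JSON. The request/response field names and the trivial placeholder model are illustrative assumptions; the patent only states that a REST-style API is generated:

```python
import json

def load_model():
    """Stand-in for the trained model solidified into the second image."""
    return lambda features: sum(features)  # trivial placeholder model

MODEL = load_model()

def handle_predict(request_body):
    """Sketch of the REST-style prediction endpoint: parse a JSON batch of
    feature vectors and return one prediction per record."""
    payload = json.loads(request_body)
    records = payload["records"]            # single record or a batch
    predictions = [MODEL(r) for r in records]
    return json.dumps({"predictions": predictions})

print(handle_predict('{"records": [[1, 2], [3, 4]]}'))
```

In the second image, such a handler would sit behind an HTTP server and the real serialized model, exposed through the calling interface established in S408.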
S406, establishing a workflow for verifying the second Docker image.
S407, executing the workflow to verify the validity of the second Docker image.
Since verifying the validity of the second Docker image amounts to verifying the validity of the first model, prior-art verification methods can be used.
S408, establishing a calling interface of the second Docker image when the validity of the second Docker image meets a preset condition.
S409, recreating the first Docker image to retrain the first model when the validity of the second Docker image does not meet the preset condition.
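The S406-S409 branch can be sketched as a validation workflow that either releases the model or triggers retraining. The accuracy metric and threshold stand in for the unspecified "preset condition" and are assumptions:

```python
def validate_and_release(predict_fn, validation_set, threshold=0.8):
    """Sketch of S406-S409: run the verification workflow against the
    second image's prediction function; release on success, else retrain."""
    correct = sum(1 for features, label in validation_set
                  if predict_fn(features) == label)
    accuracy = correct / len(validation_set)
    if accuracy >= threshold:
        return "establish_call_interface"   # S408: validity meets the condition
    return "recreate_first_image"           # S409: retrain the first model

cases = [([1], 1), ([2], 2), ([3], 0)]
print(validate_and_release(lambda f: f[0], cases, threshold=0.6))
```

In practice the two return values would trigger the interface-publishing step and the image-recreation step respectively.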
In this embodiment, after the model training is completed, the trained model may be packaged, and a call interface is established, thereby completing the release of the model. More importantly, the model is issued by using a container technology, so that the isolation of software and hardware resources can be realized in the test process of the model. And the rapid verification and rapid release of the model can be realized by utilizing the rapid loading property of the containerization technology.
Furthermore, the model can be retrained according to its validity, achieving effective management of the model.
Fig. 5 shows a model processing apparatus according to an embodiment of the present application, including an acquisition module, a configuration module, and a starting module.
The acquisition module is used for acquiring first modeling information, and the first modeling information comprises training data of a first model and information of the first model.
The configuration module is used for configuring a first container (Docker) image according to the first modeling information, wherein a data source of the first Docker image is configured as the training data, and a function of the first Docker image is configured to train the first model using the training data.
The starting module is used for starting the first Docker image with a first hardware resource usage amount, the first hardware resource usage amount being determined according to the training data and the information of the first model.
Optionally, the configuration module is further configured to encapsulate, before the starting module starts the first Docker image, the authority information of the first model into the first Docker image, where the authority information of the first model includes: information of users and/or data having the right to make predictions using the first model.
Optionally, the authority information of the first model further includes authority information of a first user, the authority information of the first user including the data to which the first user has access authority.
Optionally, the acquisition module is specifically configured to: after the first user logs in and passes verification, display an information selection interface according to the authority information of the first user; and in response to an operation of the first user selecting the first modeling information in the information selection interface, acquire the first modeling information.
Optionally, the authority information of the first model and the authority information of the first user are pre-stored by the UAAP platform.
Optionally, the apparatus may further include an adjusting module, configured to record the occupation parameters of hardware resources during the running of the first Docker image, and to adjust the amount of independent hardware resources allocated to the first Docker image according to the occupation parameters.
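The adjusting module's behavior can be sketched as a simple feedback rule: grow the allocation when observed utilization is near the ceiling, shrink it when the container is mostly idle. The thresholds and the doubling/halving policy are illustrative assumptions, not taken from the patent.

```python
def adjust_allocation(alloc, samples, high=0.9, low=0.3):
    """Adjust a container's resource allocation (e.g. memory in GB) from
    recorded occupation samples. Thresholds are illustrative."""
    util = max(samples) / alloc       # peak occupation parameter
    if util > high:
        return alloc * 2              # training is starved; scale up
    if util < low:
        return max(1, alloc // 2)     # over-provisioned; give resources back
    return alloc                      # within the comfortable band

print(adjust_allocation(16, [15.2, 14.8, 15.5]))  # near the ceiling
print(adjust_allocation(16, [2.0, 3.1, 1.5]))     # mostly idle
```

In practice the new value would be applied by updating the container's resource limits (for example, Docker's memory and CPU constraints) before or between training runs.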
Optionally, the apparatus may further include an application module, configured to package, after the first Docker image has run, the trained first model and the software environment in which it runs into a second Docker image, and to establish a call interface of the second Docker image.
The model processing apparatus described in this embodiment can achieve isolation of the hardware resources of multiple users in an offline state.
The embodiment of the application also discloses a model processing device, including a memory and a processor. The memory is used for storing a program, and the processor is used for running the program to implement the model processing method of the foregoing embodiments.
The embodiment of the application also discloses a computer-readable storage medium on which a program is stored; when the program is run by a computing device, the model processing method of the foregoing embodiments is implemented.
The functions described in the methods of the embodiments of the present application, if implemented in the form of software functional units and sold or used as independent products, may be stored in a storage medium readable by a computing device. Based on such understanding, the part of the embodiments of the present application that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of model processing, comprising:
acquiring first modeling information, wherein the first modeling information comprises training data of a first model and information of the first model;
configuring a first container Docker image according to the first modeling information, wherein a data source of the first Docker image is configured as the training data, and a function of the first Docker image is configured to train the first model using the training data;
and starting the first Docker image with a first hardware resource usage amount, wherein the first hardware resource usage amount is determined according to the training data and the information of the first model.
2. The method of claim 1, further comprising, prior to the initiating the first Docker image:
packaging the authority information of the first model into the first Docker image, wherein the authority information of the first model comprises: information of users and/or data having the right to make predictions using the first model.
3. The method of claim 2, wherein the privilege information of the first model further comprises:
authority information of a first user, the authority information of the first user comprising the data to which the first user has access authority.
4. The method of claim 3, wherein obtaining the first modeling information comprises:
after the first user logs in and passes the verification, displaying an information selection interface according to the authority information of the first user;
and responding to the operation that a first user selects the first modeling information in the information selection interface, and acquiring the first modeling information.
5. The method of claim 4, wherein the permission information of the first model and the permission information of the first user are pre-stored by a UAAP platform.
6. The method of claim 1, further comprising, after starting the first Docker image:
recording occupation parameters of hardware resources during the running of the first Docker image;
and adjusting the amount of independent hardware resources allocated to the first Docker image according to the occupation parameters.
7. The method of claim 1, further comprising:
after the first Docker image has run, packaging the trained first model and the software environment in which the first model runs into a second Docker image;
and establishing a call interface of the second Docker image.
8. An apparatus for processing a model, comprising:
an acquisition module, configured to acquire first modeling information, the first modeling information comprising training data of a first model and information of the first model;
a configuration module, configured to configure a first container Docker image according to the first modeling information, wherein a data source of the first Docker image is configured as the training data, and a function of the first Docker image is configured to train the first model using the training data;
and a starting module, configured to start the first Docker image with a first hardware resource usage amount, the first hardware resource usage amount being determined according to the training data and the information of the first model.
9. An apparatus for processing a model, comprising:
a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the processing method of the model according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a program is stored, the program implementing the processing method of the model of any one of claims 1 to 7 when executed by a computing device.
CN202011604381.6A 2020-12-30 2020-12-30 Model processing method and device, equipment and computer readable storage medium Pending CN112631730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011604381.6A CN112631730A (en) 2020-12-30 2020-12-30 Model processing method and device, equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN112631730A true CN112631730A (en) 2021-04-09

Family

ID=75286733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011604381.6A Pending CN112631730A (en) 2020-12-30 2020-12-30 Model processing method and device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112631730A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958927A (en) * 2018-05-31 2018-12-07 康键信息技术(深圳)有限公司 Dispositions method, device, computer equipment and the storage medium of container application
US20190095254A1 (en) * 2017-09-22 2019-03-28 Open Text Corporation Stateless content management system
CN109857518A (en) * 2019-01-08 2019-06-07 平安科技(深圳)有限公司 A kind of distribution method and equipment of Internet resources
CN110780987A (en) * 2019-10-30 2020-02-11 上海交通大学 Deep learning classroom analysis system and method based on container technology
CN111857942A (en) * 2019-04-30 2020-10-30 北京金山云网络技术有限公司 Deep learning environment building method and device and server


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117369950A (en) * 2023-12-04 2024-01-09 上海凯翔信息科技有限公司 Configuration system of docker container
CN117369950B (en) * 2023-12-04 2024-02-20 上海凯翔信息科技有限公司 Configuration system of docker container

Similar Documents

Publication Publication Date Title
CN102624677B (en) Method and server for monitoring network user behavior
US10628228B1 (en) Tiered usage limits across compute resource partitions
EP3120281B1 (en) Dynamic identity checking
CN109831419A (en) The determination method and device of shell program authority
CN106656932A (en) Business processing method and device
US20180033075A1 (en) Automatic recharge system and method, and server
US11663337B2 (en) Methods and systems for system call reduction
CN110162407A (en) A kind of method for managing resource and device
CN112631730A (en) Model processing method and device, equipment and computer readable storage medium
CN108037984A (en) Method for managing resource, system and the readable storage medium storing program for executing of data analysis
CN108268605B (en) Shared space resource management method and system
CN113722725A (en) Resource data acquisition method and system
CN114969834B (en) Page authority control method, device, storage medium and equipment
CN109040491B (en) Hanging-up behavior processing method and device, computer equipment and storage medium
CN110086826A (en) Information processing method
CN112149139A (en) Authority management method and device
CN112631577B (en) Model scheduling method, model scheduler and model safety test platform
CN110516922B (en) Method and device for distributing data processing objects
CN104572036B (en) Event processing method and device
CN113687891A (en) Data management method, device and equipment
RU2818490C1 (en) Method and system for distributing system resources for processing user requests
CN113111328B (en) User identity authentication method, system, terminal and computer readable storage medium
CN117319212B (en) Multi-tenant isolated password resource automatic scheduling system and method in cloud environment
CN116611081A (en) Method, terminal and storage medium for realizing distributed current limiter
CN117827365A (en) Port allocation method, device, equipment, medium and product of application container

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination