CN111861020A - Model deployment method, device, equipment and storage medium - Google Patents

Model deployment method, device, equipment and storage medium

Info

Publication number
CN111861020A
Authority
CN
China
Prior art keywords
model
target
target model
deployment
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010734650.4A
Other languages
Chinese (zh)
Inventor
庞俊涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202010734650.4A priority Critical patent/CN111861020A/en
Publication of CN111861020A publication Critical patent/CN111861020A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5041 Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Resources & Organizations (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Tourism & Hospitality (AREA)
  • Molecular Biology (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Medical Informatics (AREA)
  • Stored Programmes (AREA)

Abstract

The application relates to data display and particularly discloses a model deployment method, a device, equipment and a storage medium, wherein the method comprises the following steps: obtaining a model training operation instruction to be executed; loading background service resources on a preset model training page according to the model training operation instruction, and constructing a topological relation of a target model on the preset model training page according to the background service resources; modeling the target model according to the topological relation, and evaluating the modeled target model; and when the target model meets a preset evaluation condition, deploying the target model online. A unified processing interface is defined during model training, and the data conversion logic of the training process is saved, so that deployment is more convenient and prediction accuracy is ensured.

Description

Model deployment method, device, equipment and storage medium
Technical Field
The present application relates to the field of data display technologies, and in particular, to a model deployment method and apparatus, a computer device, and a storage medium.
Background
Machine learning is one of the most popular technologies of recent years and has been integrated into many enterprise businesses: a data scientist trains a model on a large amount of existing data, new data is fed into the model to obtain a prediction result, and decision makers use the result to make business decisions for the enterprise.
However, the knowledge and skills involved are highly specialized, the various development languages and frameworks require coding expertise, the entry barrier is high, and a locally developed model is difficult to maintain and deploy.
Most existing machine learning work is built on Python. During model training, Python engineers typically pull data to a local computer, open it in Python, and then write and debug the data processing code step by step before training the model. Because each Python engineer's coding habits differ, the trained models cannot be deployed uniformly in a production environment; moreover, the data processing steps used during training cannot be extracted for deployment, so the training process and the model prediction process become inconsistent, subsequent model upgrades and maintenance are relatively difficult, and deployment is hard for developers.
Disclosure of Invention
The application provides a model deployment method, a model deployment apparatus, computer equipment and a storage medium for data display, wherein a unified processing interface is defined during model training and the data conversion logic of the training process is saved, so that deployment is more convenient and prediction accuracy is ensured.
In a first aspect, the present application provides a model deployment method, including:
obtaining a model training operation instruction to be executed;
loading background service resources on a preset model training page according to the model training operation instruction, and constructing a topological relation of a target model on the preset model training page according to the background service resources;
modeling the target model according to the topological relation, and evaluating the modeled target model;
and when the target model meets the preset evaluation condition, carrying out online deployment on the target model.
In a second aspect, the present application further provides a model deployment apparatus, the apparatus comprising:
the acquisition module is used for acquiring a model training operation instruction to be executed;
the loading module is used for loading background service resources on a preset model training page according to the model training operation instruction and constructing a topological relation of a target model on the preset model training page according to the background service resources;
the modeling module is used for modeling the target model according to the topological relation and evaluating the modeled target model;
and the deployment module is used for deploying the target model on line when the target model meets the preset evaluation condition.
In a third aspect, the present application further provides a computer device comprising a memory and a processor; the memory is used for storing a computer program; the processor is configured to execute the computer program and to implement the model deployment method as described above when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium storing a computer program, which when executed by a processor causes the processor to implement the model deployment method as described above.
The application discloses a model deployment method, an apparatus, computer equipment and a storage medium. When a user builds a model, the user performs selection operations on a model training page and inputs a model training operation instruction. After obtaining the model training operation instruction, the apparatus loads background service resources on a preset model training page, constructs the topological relation of a target model on the preset model training page according to the background service resources, models the target model according to the topological relation, and evaluates the modeled target model; when the target model meets the preset evaluation condition, the target model is deployed online. Because the background service resources provide a unified interface for the various algorithms, it is easy to extend the development and use of each component on the model training page and maintenance is simplified; the background service resources are managed uniformly when the target model is obtained, so the model can be deployed directly, the data conversion processes of model training and model prediction are the same, and prediction accuracy is ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a block diagram schematic illustration of a model training platform provided by an embodiment of the present application;
FIG. 2 is a diagram of the logical architecture of a model training platform provided by an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of a model deployment method provided by an embodiment of the present application;
FIG. 4 is a diagram of a create project interface provided by an embodiment of the present application;
FIG. 5 is a schematic flow chart diagram of automated modeling provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a manual modeling canvas provided by an embodiment of the present application;
FIG. 7 is a schematic block diagram of another model deployment apparatus provided by an embodiment of the present application;
fig. 8 is a schematic block diagram of a structure of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The embodiment of the application provides a model deployment method, a model deployment apparatus, computer equipment and a storage medium. According to the model deployment method, a unified processing interface is defined during model training and the data conversion logic of the training process is saved, so that deployment is more convenient and prediction accuracy is guaranteed.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to FIG. 1, FIG. 1 shows a model training platform provided by an embodiment of the present application. The model training platform includes a management background and Spark cluster algorithm components, both of which are deployed on a terminal or a server and are presented through a web platform on the terminal.
Optionally, the model training platform is an AI (artificial intelligence) model training platform that provides algorithm engineers and data scientists with a full-workflow service for machine learning and deep learning, offering one-stop support from data development, model training, model evaluation and service deployment to prediction; through the platform, a modeler can combine various components, algorithms, models and evaluation modules, and can also develop custom algorithm components.
The AI (artificial intelligence) model training platform comprises a management background and Spark cluster algorithm components. The management background provides a drag-and-drop visual operation environment that makes building a data mining workflow as simple as stacking building blocks, shortens the distance between the user and the data, and makes the data truly accessible. The various algorithms are packaged inside the Spark cluster behind a unified external interface; structured data is stored in Hive, and files such as model files are stored in HDFS directories. Using the platform, a user can complete modeling in one place, from data uploading, data preprocessing, feature engineering, model training and model evaluation to finally releasing the model to an offline or online environment, which effectively improves development efficiency.
Referring to FIG. 2, FIG. 2 is a logical architecture diagram of a model training platform provided by an embodiment of the present application. An access terminal interacts with a background service, and through the background service with resource management. On the web platform of the access terminal, a user can drag components, edit parameters and connect components in the application service. A Spark big data cluster serves as the computation engine for model training, enabling computation and storage over large data volumes, with Hive storing structured data and HDFS storing file data. The model training process is decomposed into processing nodes, a unified processing interface is defined, and each processing node implements that interface, thereby forming a training workflow.
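As an illustration of the unified processing interface described above, the following Python sketch shows one way such a node contract could look; the class name, method names and table conventions are assumptions for illustration, not the platform's actual code.

```python
from abc import ABC, abstractmethod

from pyspark.sql import DataFrame, SparkSession


class ProcessingNode(ABC):
    """Sketch of a unified node interface: every node reads its predecessor's
    Hive output table, applies node-specific logic, and writes its own output
    table for the next node in the workflow to consume."""

    def __init__(self, spark: SparkSession, input_table: str,
                 output_table: str, params: dict):
        self.spark = spark
        self.input_table = input_table
        self.output_table = output_table
        self.params = params          # field and parameter settings from the canvas

    @abstractmethod
    def transform(self, df: DataFrame) -> DataFrame:
        """Node-specific logic: missing-value handling, type casting, training, ..."""

    def run(self) -> None:
        df = self.spark.table(self.input_table)              # output of the previous node
        result = self.transform(df)                          # apply this node's logic
        result.write.mode("overwrite").saveAsTable(self.output_table)  # expose to next node
```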
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a model deployment method according to an embodiment of the present application. The model deployment method can be applied to the model training platform of FIG. 1; it defines a unified processing interface during model training and saves the data conversion logic of the training process, so that deployment is more convenient and prediction accuracy is ensured.
As shown in fig. 3, the model deployment method specifically includes steps S101 to S104.
And S101, obtaining a model training operation instruction to be executed.
Specifically, after a user logs into the model training platform and enters the project page of the training platform, the user can input a model training operation instruction, which may include a project creation instruction, an instruction to input model-related information, and the like. Upon receiving a model training operation instruction, the model training platform displays the model training page so that the user can operate on it, and the model training page loads background service resources. For example, when the user clicks the create-project button, the user can enter information such as the project name and description, and the training platform saves this related information. As shown in FIG. 4, FIG. 4 is the create-project interface; the user can click the create-project button on the model training page and enter the project name, description and other information.
In some embodiments, the model training page includes a data source module, a component module, an experiment module and a model module, and loading background service resources on the model training page includes: the data source module of the model training page loads the table names of the background public data sources, and the component module loads the component list menu.
S102, loading background service resources on a preset model training page according to the model training operation instruction, and constructing a topological relation of a target model according to the background service resources on the preset model training page.
S103, modeling the target model according to the topological relation, and evaluating the modeled target model.
A user can input different model training operation instructions on the model training page. Background service resources are loaded on the preset model training page according to the model training operation instruction, the topological relation of the target model is constructed on the preset model training page according to the background service resources, and the target model is modeled according to the topological relation; this supports two modes of operation, manual modeling and automatic modeling. The modeling process is carried out by triggering different selection operation instructions and configuring the corresponding algorithm parameters for the background service resources, producing a target model and a model evaluation report corresponding to the target model.
In some embodiments, there are two modes of automatic modeling, described in detail below.
As shown in FIG. 5, FIG. 5 is a flow chart of the first automatic modeling mode, which may include:
S103a, when a first automatic modeling instruction is received, selecting an algorithm component supporting the first automatic modeling instruction to obtain a first model parameter setting.
The user clicks an AutoML (automatic modeling) button on the model training page, so that the model training platform receives a first automatic modeling instruction and automatically selects an algorithm component supporting AutoML in the experiment canvas, such as binary logistic regression. When the user clicks Next, the parameters corresponding to the algorithm, i.e. the first model parameters, are shown; the user sets each parameter, separating multiple candidate values with commas, and then clicks Next again.
S103b, outputting corresponding model parameters according to the first model parameter setting.
After the first model parameter setting is obtained, the corresponding model parameter configurations are generated for the selected algorithm, e.g. binary logistic regression; for example, 8 groups of model parameters may be generated from the combinations of the candidate values.
S103c, when a model evaluation index instruction is received, setting a model evaluation criterion.
After the 8 groups of model parameters are obtained, the user selects the model evaluation index, so that the model training page receives the corresponding model evaluation index instruction; for example, for a binary classification algorithm the evaluation criterion is AUC, and the model evaluation criterion is set accordingly.
S103d, when the automatic parameter adjusting instruction is received, model training is carried out according to the model parameters to obtain a first candidate model.
After the model evaluation criterion is set, the user clicks Next, and the binary logistic regression algorithm on the page is marked with an identifier indicating that the component has automatic parameter tuning enabled.
S103e, determining a target model among the candidate models according to the model evaluation criterion.
The user then clicks to execute the node; the background service resources train the 8 groups of models according to the settings, and finally the candidates are ranked by the evaluation index and the one with the best evaluation result is selected as the final model, as illustrated in the sketch below.
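A minimal Python sketch of this first automatic modeling flow, using scikit-learn on toy data in place of the platform's Spark backend (the parameter names and candidate values are assumptions): the comma-separated values are expanded into every combination, one candidate is trained per group, and the candidates are ranked by AUC.

```python
from itertools import product

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy data standing in for the training table selected on the canvas.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Hypothetical form input: comma-separated candidate values per parameter (2 x 2 x 2 = 8 groups).
raw_params = {"C": "0.1,1.0", "max_iter": "100,200", "penalty": "l1,l2"}
names = list(raw_params)
param_groups = [dict(zip(names, combo))
                for combo in product(*(v.split(",") for v in raw_params.values()))]

def fit_one(params):
    """Train one candidate and score it with the chosen evaluation index (AUC)."""
    model = LogisticRegression(C=float(params["C"]), max_iter=int(params["max_iter"]),
                               penalty=params["penalty"], solver="liblinear")
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return auc, model

# Rank the eight candidates by the evaluation index and keep the best as the target model.
best_auc, target_model = max((fit_one(p) for p in param_groups), key=lambda r: r[0])
```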
The second automatic modeling mode may include:
when an automatic modeling component of the model training page is triggered to execute a second automatic modeling instruction, setting parameters of a second model;
performing model training based on the data algorithm and the model type triggered by the background service resource in the second model parameter setting to obtain a second candidate model;
and obtaining a target model and a model evaluation report corresponding to the target model according to the model evaluation index instruction and the second candidate model.
Specifically, under the automatic modeling (AutoML) menu of the left component module, the model training page has automatic modeling buttons for binary classification, multi-class classification and regression, each corresponding to an automated component packaged on the basis of spark + transmogrif; the automated component provides feature inference, automatic feature engineering, automatic feature validation, automatic model selection, hyperparameter optimization and the like. As shown in FIG. 8, after selecting the multi-class automatic modeling component, the user selects training data on the canvas and then drags the automatic modeling component in; when the multi-class component is clicked, a parameter-setting panel appears on the right, showing algorithms such as logistic regression, random forest, decision tree and Bayes that can be multi-selected. In this example, two algorithms are selected and model training is then performed.
In the training results, logistic regression yields 8 candidate models and the decision tree yields 18, and one of them is finally selected. During training, only the data and the model types and parameters are chosen; intermediate steps such as data preprocessing and feature engineering need not be performed manually, which saves training time and improves training efficiency. A rough sketch of this kind of multi-algorithm selection is given below.
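The spark + transmogrif component itself is not reproduced here; as a rough analogue under stated assumptions (a hypothetical Hive table named training_data with columns f1, f2, f3 and label, and Spark ML in Python rather than the platform's actual packaging), the sketch below cross-validates parameter grids for two algorithm families and keeps the best-scoring model.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier, LogisticRegression
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("automl_component_sketch").getOrCreate()
train_df = spark.table("training_data")                    # hypothetical Hive table

assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
evaluator = MulticlassClassificationEvaluator(labelCol="label", metricName="f1")

lr = LogisticRegression(labelCol="label", featuresCol="features")
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")
candidates = [
    (lr, ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1])
                           .addGrid(lr.maxIter, [50, 100]).build()),
    (dt, ParamGridBuilder().addGrid(dt.maxDepth, [5, 10, 15])
                           .addGrid(dt.minInstancesPerNode, [1, 5]).build()),
]

best_score, best_model = float("-inf"), None
for clf, grid in candidates:
    cv = CrossValidator(estimator=Pipeline(stages=[assembler, clf]),
                        estimatorParamMaps=grid, evaluator=evaluator, numFolds=3)
    fitted = cv.fit(train_df)
    score = max(fitted.avgMetrics)                 # best average metric over this grid
    if score > best_score:
        best_score, best_model = score, fitted.bestModel
```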
The manual modeling approach may include:
instantiating the component node based on a triggered component selection instruction of the component node of the model training page so as to configure a corresponding request interface through a background service resource;
and directionally connecting the component nodes according to a received dragging instruction of the component nodes and a connection instruction of the component nodes, and requesting the background service resources to configure corresponding algorithm parameters through the request interface so as to perform a modeling process and obtain a target model and a model evaluation report corresponding to the target model.
Specifically, after the user clicks the experiment module on the model training page, an experiment name can be created; after the user clicks the experiment name and saves it, the user enters the blank canvas on the right side of the model training page.
The user's operations on the blank canvas may include:
and S31, dragging a certain table of the left data source module to a blank canvas by a user, specifically, when the user right clicks the node of the table, outputting a selection operation menu by a training platform page, and clicking to view data by the user so as to know the data condition of the table.
S32, the user drags the missing-value processing node from the left component module onto the canvas. Specifically, the user left-clicks the missing-value processing node on the canvas, and the field settings of the node are shown on the right of the canvas, including field selection and the replacement mode, e.g. replacing a null value with the mean or a fixed value; these settings are then saved. When the user selects the component and executes the node, the background service builds a request message from the table of the previous node and the parameter settings of this node and sends it to the corresponding Spark algorithm interface for computation; Spark stores the result in the output table defined for this node for the next node to use (a sketch of such a node is given after this walkthrough).
S33, the user drags the type conversion node from the left component module onto the canvas. Specifically, the user left-clicks the type conversion node on the canvas, and the field-setting page of the node appears on the right of the canvas, where the user can select the fields whose types should be converted, for example converting the field age to an int type. After the user clicks save and then right-clicks to execute the node, the background service obtains the output table name of the previous node and the field settings of this node and sends an interface request to the corresponding Spark algorithm; the algorithm function computes according to the request information and saves the result to the output table of this node.
S34, the user drags a feature discretization node from the left component module onto the canvas. Specifically, the user left-clicks the component node on the canvas, and the field and parameter setting page of the node appears on the right of the canvas, where the field and parameter tabs set the fields and the algorithm hyperparameters. After saving and then executing the node, the background service obtains the output table name of the previous node, the field and parameter settings of this node, and the output table name of this node, and sends an interface request to the corresponding Spark algorithm; the algorithm function computes according to the request information and stores the result in the output table of this node.
S35, the user drags a split node from the left component module onto the canvas. Specifically, the user left-clicks the component node on the canvas, and the parameter setting page of the node appears on the right of the canvas, where the parameter tab sets the split mode, for example splitting by proportion, which is then saved. After executing the node, the background service obtains the output table name of the previous node, the parameter settings of this node and the two table names output by this node, and sends an interface request to the corresponding Spark algorithm; the algorithm function computes according to the request information and stores the results in the two output tables.
S36, the user drags a binary classification node from the left component module onto the canvas. Specifically, the user left-clicks the component node on the canvas, and the field and parameter setting page of the node appears on the right of the canvas: the field tab sets the feature fields and the label field, and the parameter tab sets the algorithm hyperparameters. After saving and then executing the node, the background service obtains the output table name of the previous node, the field and parameter settings of this node, the output table name of this node and the model path, and sends an interface request to the corresponding Spark algorithm; the algorithm function computes according to the request information, stores the result in the output table of this node and saves the model file under the defined model path. Right-clicking the node shows the model description and the model file.
S37, the user drags a prediction node from the left component module onto the canvas. Specifically, the user left-clicks the component node, and the field page of the node appears on the right of the canvas, where the field tab sets the label field and the result field. After saving and then executing the node, the background service obtains the output table name of the test set produced by the previous split node together with the field settings and output table name of this node, and sends an interface request to the corresponding Spark algorithm; the algorithm function computes according to the request information and stores the result in the output table of this node.
S38, the user drags the binary classification evaluation node from the left component module onto the canvas. Specifically, the user left-clicks the component node on the canvas, and the field page of the node appears on the right of the canvas, where the field tab sets the target field and the result column. After saving and then executing the node, the background service obtains the output table name of the previous node, the field settings of this node and the output table name of this node, and sends an interface request to the corresponding Spark algorithm; the algorithm function computes according to the request information and stores the result in the output table of this node, and the user can right-click to view the evaluation report.
The model training canvas assembled through S31-S38 is shown in FIG. 6. Each interface is defined as a template that declares the field parameters the interface requires; the front-end page renders the component from the template and its parameters. When the user drags the component onto the canvas, a node is instantiated from the component; the user selects and enters the parameters on the node and saves them, which completes the configuration of the interface parameters for the data processing request. Right-clicking to run makes the node send a request to the interface developed in Spark to perform the data processing, and the processed data is stored in Hive and HDFS files. An illustrative template is sketched below.
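For illustration only, such a component template might declare its field parameters as follows; every name and field here is hypothetical rather than the platform's actual schema.

```python
# Hypothetical template for a "missing value processing" component: the front end
# renders the node's form from these field definitions, and the saved values become
# the parameters of the Spark request built by the background service.
MISSING_VALUE_TEMPLATE = {
    "component": "missing_value_processing",
    "fields": [
        {"name": "field", "label": "Field to clean", "type": "column"},
        {"name": "strategy", "label": "Replacement mode", "type": "enum",
         "options": ["mean", "fixed_value"]},
        {"name": "fixed_value", "label": "Fixed value", "type": "number",
         "required_if": {"strategy": "fixed_value"}},
    ],
    "output": {"table": "auto_generated"},   # output table name assigned at instantiation
}
```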
By dragging the left component modules onto the canvas and connecting the component nodes directionally, the background service requests the Spark algorithm corresponding to each component node and the connection direction, and model training is thereby performed.
The node components are connected by directed lines, so that the next node looks up, along the connection, the data processed by the previous node and uses it. Implementing the model training process by dragging components and drawing connections means that each node component can run independently and be reused multiple times, which simplifies the model training process and lowers the barrier to entry: even an analyst who cannot write Python code can train a model.
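Building on the unified-interface sketch given after FIG. 2, the missing-value node of step S32 might look roughly like this; the field name, table names and mean-replacement strategy are assumptions for illustration.

```python
from pyspark.sql import functions as F


class MissingValueNode(ProcessingNode):
    """Illustrative node for step S32: replace nulls in the configured field
    with the column mean (ProcessingNode is the sketch shown earlier)."""

    def transform(self, df):
        field = self.params["field"]                      # e.g. "age", chosen on the canvas
        mean_value = df.agg(F.avg(field)).first()[0]      # replacement value
        return df.fillna({field: mean_value})


# node = MissingValueNode(spark, input_table="raw_customers",
#                         output_table="customers_filled",
#                         params={"field": "age"})
# node.run()   # reads the Hive table, fills nulls, writes the node's output table
```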
It is emphasized that the algorithm parameters may also be stored in the nodes of the blockchain in order to further ensure the privacy and security of the algorithm parameters.
And S104, when the target model meets the preset evaluation condition, carrying out online deployment on the target model.
Specifically, determining whether the target model meets the preset evaluation condition may include:
determining whether the model evaluation report satisfies the preset evaluation condition; when the model evaluation report does not meet the preset evaluation condition, adjusting training parameters according to the received parameter adjusting instruction; and retraining the target model according to the adjusted training parameters until the target model reaches the preset evaluation condition.
In some embodiments, it is determined whether the model evaluation report satisfies the preset evaluation condition. The preset evaluation condition may be that the model ranks at the top of the evaluation results, for example that the model with the best evaluation result is taken as the final model. If the model does not satisfy the preset evaluation condition, the user may readjust the parameters, so that the model training platform retrains with the parameters from the received parameter adjustment instruction to obtain a new model and evaluates it again, until the resulting target model reaches the preset evaluation condition, as in the loop sketched below.
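A compact sketch of this adjust-and-retrain loop, reusing fit_one and the data split from the parameter-grid sketch above; the AUC threshold and the queue of parameter adjustment instructions are assumptions for illustration.

```python
# Hypothetical preset evaluation condition and user-supplied adjustment instructions.
AUC_THRESHOLD = 0.80
adjustments = [
    {"C": "1.0", "max_iter": "200", "penalty": "l2"},
    {"C": "0.5", "max_iter": "200", "penalty": "l2"},
    {"C": "0.1", "max_iter": "300", "penalty": "l1"},
]

best_auc, target_model = 0.0, None
while best_auc < AUC_THRESHOLD and adjustments:
    params = adjustments.pop(0)          # next parameter adjustment instruction
    auc, model = fit_one(params)         # retrain with the adjusted parameters
    if auc > best_auc:
        best_auc, target_model = auc, model
# target_model holds the best model found; the loop stops once the condition is met.
```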
Specifically, the online deployment of the target model includes:
acquiring a model file corresponding to the target model and a processing node corresponding to the model file, and searching corresponding processing logic information according to the processing node;
and copying the model file and the processing logic information to a deployment environment, storing and analyzing the model file and the processing logic information through the deployment environment to obtain a deployment model and a prediction interface used for external system calling to perform model prediction, and completing model deployment.
In some embodiments, the trained model needs to be deployed for use. The user clicks the online deployment button at the upper left of the training page and selects the trained model to deploy. In this process, the model file stored in HDFS is located from the node information and copied to the deployment environment; at the same time, the processing logic information of each node is collected by walking the model nodes in reverse order and is also copied to the deployment environment for storage and parsing. A model prediction interface is generated on the basis of the model, which completes the deployment of the model, and other systems can perform model prediction by calling this interface.
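A hedged sketch of the deployed side, assuming the training-time preprocessing and model were saved together as a Spark PipelineModel under an illustrative HDFS path (the per-node logic copy described above is simplified here); the point is that prediction reuses exactly the data conversion logic saved from training.

```python
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("deployed_model").getOrCreate()

# Illustrative HDFS path; in the described flow the model file is located from the
# node information and copied into the deployment environment.
deployed = PipelineModel.load("hdfs:///models/experiment_42/binary_lr")

def predict(input_table: str, output_table: str) -> None:
    """Body of a prediction interface: score a Hive table with the deployed pipeline,
    applying the same transformations that were fitted during training."""
    scored = deployed.transform(spark.table(input_table))
    scored.select("prediction", "probability") \
          .write.mode("overwrite").saveAsTable(output_table)
```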
Specifically, the prediction interface is also a test interface: it is defined by the input variables of each model and is used to feed input data through the interface, obtain the model result, and determine whether the deployed model can be called normally.
Specifically, when deployment is performed, the user clicks the deployment button, the model training platform filters the models in the current experiment canvas that can be deployed, and the model that needs to be deployed is selected.
Then the user fills in the model name to complete the deployment, and the user can see the deployed model and the interface it exposes to external callers. By filling in model data and calling the interface, the user can then see the model execution result; the model is thus deployed online and is convenient to use. An example call is sketched below.
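An external system might then call the deployed prediction interface roughly as follows; the URL and payload shape are assumptions for illustration.

```python
import requests

resp = requests.post(
    "http://model-serving.example.com/api/models/binary_lr/predict",   # hypothetical endpoint
    json={"records": [{"age": 35, "income": 52000.0}]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())   # e.g. a {"predictions": [...]} payload returned by the deployed model
```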
This embodiment provides a model deployment method. During modeling, the user performs selection operations on the model training page and inputs a model training operation instruction. After obtaining the model training operation instruction, the apparatus loads background service resources on a preset model training page, constructs the topological relation of a target model on the preset model training page according to the background service resources, models the target model according to the topological relation, and evaluates the modeled target model; when the target model meets the preset evaluation condition, the target model is deployed online. Because the background service resources provide a unified interface for the various algorithms, it is easy to extend the development and use of each component on the model training page and maintenance is simplified; the background service resources are managed uniformly when the target model is obtained, so the model can be deployed directly, the data conversion processes of model training and model prediction are the same, and prediction accuracy is ensured.
Illustratively, after model deployment is completed, if the user needs to edit or modify a historical model, the model training platform modifies the historical model based on an edit-training operation instruction when it receives such an instruction triggered on a historical model of the model training page.
In some embodiments, after deployment is completed, the user can also check the deployment information of the model in the model module and, according to the user's own requirements, click the created experiment in the experiment module and continue editing, modifying, training and other operations.
Referring to fig. 7, fig. 7 is a schematic block diagram of a model deployment apparatus according to an embodiment of the present application, where the model deployment apparatus is configured to perform the model deployment method described above. Wherein, the model deploying device can be configured on a terminal or a server.
As shown in fig. 7, the model deployment apparatus 400 includes: an acquisition module 401, a loading module 402, a modeling module 403, and a deployment module 404.
An obtaining module 401, configured to obtain a model training operation instruction to be executed;
a loading module 402, configured to load a background service resource on a preset model training page according to the model training operation instruction, and construct a topology relationship of a target model on the preset model training page according to the background service resource;
the modeling module 403 is configured to model the target model according to the topological relation, and evaluate the modeled target model;
a deployment module 404, configured to perform online deployment on the target model when the target model meets a preset evaluation condition.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus and the modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The apparatus described above may be implemented in the form of a computer program which is executable on a computer device as shown in fig. 8.
Referring to fig. 8, fig. 8 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device may be a server.
Referring to fig. 8, the computer device includes a processor, a memory, and a network interface connected through a system bus, wherein the memory may include a nonvolatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause a processor to perform any of the model deployment methods.
The processor is used for providing calculation and control capability and supporting the operation of the whole computer equipment.
The internal memory provides an environment for the execution of a computer program on a non-volatile storage medium, which when executed by the processor, causes the processor to perform any of the model deployment methods.
The network interface is used for network communication, such as sending assigned tasks and the like. Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
It should be understood that the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Wherein, in one embodiment, the processor is configured to execute a computer program stored in the memory to implement the steps of:
obtaining a model training operation instruction to be executed;
loading background service resources on a preset model training page according to the model training operation instruction, and constructing a topological relation of a target model on the preset model training page according to the background service resources;
modeling the target model according to the topological relation, and evaluating the modeled target model;
and when the target model meets the preset evaluation condition, carrying out online deployment on the target model.
In some embodiments, the processor implements the online deployment of the target model when the target model meets a preset evaluation condition, including:
acquiring a model file corresponding to the target model and a processing node corresponding to the model file, and searching corresponding processing logic information according to the processing node;
and copying the model file and the processing logic information to a deployment environment, storing and analyzing the model file and the processing logic information through the deployment environment to obtain a deployment model and a prediction interface used for external system calling to perform model prediction, and completing model deployment.
In some embodiments, the modeling the target model according to the topological relation and evaluating the modeled target model by the processor include:
when a first automatic modeling instruction is received, selecting an algorithm component supporting the first automatic modeling instruction to obtain first model parameter setting;
outputting corresponding model parameters according to the first model parameter setting;
setting a model evaluation standard when a model evaluation index instruction is received;
when an automatic parameter adjusting instruction is received, performing model training according to the model parameters to obtain a first candidate model;
determining a target model among the candidate models according to the model evaluation criteria.
In some embodiments, the modeling the target model according to the topological relation and evaluating the modeled target model by the processor include:
when an automatic modeling component of the model training page is triggered to execute a second automatic modeling instruction, setting parameters of a second model;
performing model training based on the data algorithm and the model type triggered by the background service resource in the second model parameter setting to obtain a second candidate model;
and obtaining a target model and a model evaluation report corresponding to the target model according to the model evaluation index instruction and the second candidate model.
In some embodiments, the modeling the target model according to the topological relation and evaluating the modeled target model by the processor include:
instantiating the component node based on a triggered component selection instruction of the component node of the model training page so as to configure a corresponding request interface through a background service resource;
and directionally connecting the component nodes according to a received dragging instruction of the component nodes and a connection instruction of the component nodes, and requesting the background service resources to configure corresponding algorithm parameters through the request interface so as to perform a modeling process and obtain a target model and a model evaluation report corresponding to the target model.
In some embodiments, after modeling the target model according to the topological relation and evaluating the modeled target model, the processor further implements the following steps:
when the model evaluation report does not meet the preset evaluation condition, adjusting training parameters according to the received parameter adjusting instruction;
and retraining the target model according to the adjusted training parameters until the target model reaches the preset evaluation condition.
In some embodiments, the processor further implements: when an edit-training operation instruction triggered on a historical model of the model training page is received, modifying the historical model based on the edit-training operation instruction.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, where the computer program includes program instructions, and the processor executes the program instructions to implement any one of the model deployment methods provided in the embodiments of the present application.
The computer-readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of model deployment, the method comprising:
obtaining a model training operation instruction to be executed;
loading background service resources on a preset model training page according to the model training operation instruction, and constructing a topological relation of a target model on the preset model training page according to the background service resources;
modeling the target model according to the topological relation, and evaluating the modeled target model;
and when the target model meets the preset evaluation condition, carrying out online deployment on the target model.
2. The method according to claim 1, wherein the deploying the target model online when the target model meets a preset evaluation condition comprises:
acquiring a model file corresponding to the target model and a processing node corresponding to the model file, and searching corresponding processing logic information according to the processing node;
and copying the model file and the processing logic information to a deployment environment, storing and analyzing the model file and the processing logic information through the deployment environment to obtain a deployment model and a prediction interface used for external system calling to perform model prediction, and completing model deployment.
3. The method of claim 1, wherein modeling the target model according to the topological relation and evaluating the modeled target model comprises:
when a first automatic modeling instruction is received, selecting an algorithm component supporting the first automatic modeling instruction to obtain first model parameter setting;
outputting corresponding model parameters according to the first model parameter setting;
setting a model evaluation standard when a model evaluation index instruction is received;
when an automatic parameter adjusting instruction is received, performing model training according to the model parameters to obtain a first candidate model;
determining a target model among the candidate models according to the model evaluation criteria.
4. The method of claim 1, wherein modeling the target model according to the topological relation and evaluating the modeled target model comprises:
when an automatic modeling component of the model training page is triggered to execute a second automatic modeling instruction, setting parameters of a second model;
performing model training based on the data algorithm and the model type triggered by the background service resource in the second model parameter setting to obtain a second candidate model;
and obtaining a target model and a model evaluation report corresponding to the target model according to the model evaluation index instruction and the second candidate model.
5. The method of claim 1, wherein modeling the target model according to the topological relation and evaluating the modeled target model comprises:
instantiating the component node based on a triggered component selection instruction of the component node of the model training page so as to configure a corresponding request interface through a background service resource;
according to the received dragging instruction of the component node and the connection instruction of the component node, the component node is connected directionally, the background service resource is requested to be configured with corresponding algorithm parameters through the request interface, so that a modeling process is carried out, a target model and a model evaluation report corresponding to the target model are obtained, and the algorithm parameters are stored in a block chain.
6. The method according to any one of claims 1 to 5, wherein after modeling the target model according to the topological relation and evaluating the modeled target model, the method further comprises:
when the model evaluation report does not meet the preset evaluation condition, adjusting training parameters according to the received parameter adjusting instruction;
and retraining the target model according to the adjusted training parameters until the target model reaches the preset evaluation condition.
7. The method of claim 1, further comprising:
and when an editing training operation instruction triggered by the historical model of the model training page is received, modifying the historical model based on the editing training operation instruction.
8. A model deployment apparatus, comprising:
the acquisition module is used for acquiring a model training operation instruction to be executed;
the loading module is used for loading background service resources on a preset model training page according to the model training operation instruction and constructing a topological relation of a target model on the preset model training page according to the background service resources;
the modeling module is used for modeling the target model according to the topological relation and evaluating the modeled target model;
and the deployment module is used for deploying the target model on line when the target model meets the preset evaluation condition.
9. A computer device, wherein the computer device comprises a memory and a processor;
the memory is used for storing a computer program;
the processor for executing the computer program and implementing the model deployment method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the model deployment method of any one of claims 1 to 7.
CN202010734650.4A 2020-07-27 2020-07-27 Model deployment method, device, equipment and storage medium Pending CN111861020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010734650.4A CN111861020A (en) 2020-07-27 2020-07-27 Model deployment method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111861020A (en) 2020-10-30

Family

ID=72947913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010734650.4A Pending CN111861020A (en) 2020-07-27 2020-07-27 Model deployment method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111861020A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146081A (en) * 2017-06-27 2019-01-04 阿里巴巴集团控股有限公司 It is a kind of for quickly creating the method and device of model item in machine learning platform
CN108764808A (en) * 2018-03-29 2018-11-06 北京九章云极科技有限公司 Data Analysis Services system and its on-time model dispositions method
CN110598868A (en) * 2018-05-25 2019-12-20 腾讯科技(深圳)有限公司 Machine learning model building method and device and related equipment
CN110378463A (en) * 2019-07-15 2019-10-25 北京智能工场科技有限公司 A kind of artificial intelligence model standardized training platform and automated system
CN110991649A (en) * 2019-10-28 2020-04-10 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Deep learning model building method, device, equipment and storage medium

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112311605A (en) * 2020-11-06 2021-02-02 北京格灵深瞳信息技术有限公司 Cloud platform and method for providing machine learning service
CN112311605B (en) * 2020-11-06 2023-12-22 北京格灵深瞳信息技术股份有限公司 Cloud platform and method for providing machine learning service
CN112363967A (en) * 2020-11-09 2021-02-12 成都卫士通信息产业股份有限公司 Method, device, equipment and medium for unifying interface standards of password equipment
CN112363967B (en) * 2020-11-09 2023-11-14 成都卫士通信息产业股份有限公司 Method, device, equipment and medium for unifying interface standards of password equipment
CN112256537B (en) * 2020-11-12 2024-03-29 腾讯科技(深圳)有限公司 Model running state display method and device, computer equipment and storage medium
CN112256537A (en) * 2020-11-12 2021-01-22 腾讯科技(深圳)有限公司 Model running state display method and device, computer equipment and storage medium
CN112560938A (en) * 2020-12-11 2021-03-26 上海哔哩哔哩科技有限公司 Model training method and device and computer equipment
CN112560938B (en) * 2020-12-11 2023-08-25 上海哔哩哔哩科技有限公司 Model training method and device and computer equipment
CN112988119A (en) * 2021-03-10 2021-06-18 中国邮政储蓄银行股份有限公司 Data modeling method and device, storage medium and processor
CN113159040A (en) * 2021-03-11 2021-07-23 福建自贸试验区厦门片区Manteia数据科技有限公司 Method, device and system for generating medical image segmentation model
CN113159040B (en) * 2021-03-11 2024-01-23 福建自贸试验区厦门片区Manteia数据科技有限公司 Method, device and system for generating medical image segmentation model
CN113065843B (en) * 2021-03-15 2024-05-14 腾讯科技(深圳)有限公司 Model processing method and device, electronic equipment and storage medium
CN113065843A (en) * 2021-03-15 2021-07-02 腾讯科技(深圳)有限公司 Model processing method and device, electronic equipment and storage medium
CN113190582A (en) * 2021-05-06 2021-07-30 北京三维天地科技股份有限公司 Data real-time interactive mining flow modeling analysis system
CN113326113A (en) * 2021-05-25 2021-08-31 北京市商汤科技开发有限公司 Task processing method and device, electronic equipment and storage medium
WO2023004806A1 (en) * 2021-07-30 2023-02-02 西门子股份公司 Device deployment method for ai model, system, and storage medium
CN113741887A (en) * 2021-08-19 2021-12-03 北京百度网讯科技有限公司 Model production method, system, device and electronic equipment
CN113971032A (en) * 2021-12-24 2022-01-25 百融云创科技股份有限公司 Full-process automatic deployment method and system of machine learning model for code generation
CN114706864A (en) * 2022-03-04 2022-07-05 阿波罗智能技术(北京)有限公司 Model updating method and device for automatically mining scene data and storage medium
CN116069318B (en) * 2023-03-07 2023-05-30 北京麟卓信息科技有限公司 Rapid construction and deployment method and system for intelligent application
CN116069318A (en) * 2023-03-07 2023-05-05 北京麟卓信息科技有限公司 Rapid construction and deployment method and system for intelligent application
CN116578300A (en) * 2023-07-13 2023-08-11 江西云眼视界科技股份有限公司 Application creation method, device and storage medium based on visualization component
CN116578300B (en) * 2023-07-13 2023-11-10 江西云眼视界科技股份有限公司 Application creation method, device and storage medium based on visualization component
CN116737803A (en) * 2023-08-10 2023-09-12 天津神舟通用数据技术有限公司 Visual data mining arrangement method based on directed acyclic graph
CN116737803B (en) * 2023-08-10 2023-11-17 天津神舟通用数据技术有限公司 Visual data mining arrangement method based on directed acyclic graph
CN117035065A (en) * 2023-10-10 2023-11-10 浙江大华技术股份有限公司 Model evaluation method and related device

Similar Documents

Publication Publication Date Title
CN111861020A (en) Model deployment method, device, equipment and storage medium
Daradkeh et al. Technologies for making reliable decisions on a variety of effective factors using fuzzy logic
US11544625B2 (en) Computing system for training, deploying, executing, and updating machine learning models
US10902339B2 (en) System and method providing automatic completion of task structures in a project plan
CN114546365B (en) Flow visualization modeling method, server, computer system and medium
CN116034369A (en) Automated functional clustering of design project data with compliance verification
Silva et al. A multi-criteria decision model for the selection of a more suitable Internet-of-Things device
CN112817560B (en) Computing task processing method, system and computer readable storage medium based on table function
CN117149410A (en) AI intelligent model based training, scheduling, commanding and monitoring system
CN116645550A (en) Generalized image recognition method for airborne display system based on test case
CN112181511B (en) Executable information analysis flow interaction configuration generation method
CN110766163A (en) System for implementing a machine learning process
US20130346141A1 (en) Workflow modeling with workets and transitions
Bohács et al. Production logistics simulation supported by process description languages
CN115033212A (en) Avionics system primitive model integrated construction method and device and computer equipment
CN110928761B (en) Demand chain and system and method for application thereof
Chouchen et al. Predicting code review completion time in modern code review
US20230280991A1 (en) Extensibility recommendation system for custom code objects
TWI787669B (en) System and method of automated machine learning based on model recipes
US11803702B1 (en) Executing document workflows using document workflow orchestration runtime
KR20240053911A (en) Method and system for AI collaboration service based on source code automatic generation system
US20240004937A1 (en) Monitoring execution of document workflows using cloud platform independent document workflow orchestration runtime
US20240005243A1 (en) Creating document workflows using cloud platform independent representation
US20230368086A1 (en) Automated intelligence facilitation of routing operations
Perez-Castanos et al. Holistic Production Overview: Using XAI for Production Optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201030