CN117951089A - Model visualization method, device, electronic equipment and storage medium

Info

Publication number: CN117951089A
Application number: CN202311343235.6A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: model, processed, file, analysis data, identification information
Inventors: 李国冬, 李云彬
Current Assignee: Mashang Xiaofei Finance Co Ltd
Original Assignee: Mashang Xiaofei Finance Co Ltd
Legal status: Pending
Application filed by Mashang Xiaofei Finance Co Ltd; priority to CN202311343235.6A; publication of CN117951089A

Abstract

Embodiments of the present disclosure provide a model visualization method, an apparatus, an electronic device and a storage medium. The method includes: querying, according to first identification information of a to-be-processed model, whether model analysis data stored in association with the first identification information exists on a service platform; if the model analysis data does not exist, obtaining a model file of the to-be-processed model; determining a model type of the to-be-processed model according to the model file; analyzing the model file according to a model analysis mode corresponding to the model type to obtain the model analysis data; and rendering the model analysis data to obtain a visualization result of the to-be-processed model, so that the efficiency of model visualization can be improved.

Description

Model visualization method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a method and apparatus for visualizing a model, an electronic device, and a storage medium.
Background
With the development of electronic technology, artificial intelligence is used increasingly widely. In the field of artificial intelligence, each round of model training requires generating a corresponding model file and evaluating the index parameters of the model. The index parameters can reflect the accuracy of the model, but when the accuracy is low, the index parameters can hardly explain what specifically causes the low accuracy. In this case, model visualization can more intuitively reflect the internal structure and working logic of the model, which is beneficial to data analysis of the model.
However, for the same model, the user who has the model visualization requirement may be different from the user who performed the model training, the two users may use different devices, and there may be a long time interval between when the requirement arises and when the model was trained. These factors may all make model visualization inefficient. Therefore, how to improve the efficiency of model visualization is becoming an increasingly important problem.
Disclosure of Invention
The embodiment of the application provides a model visualization method, a device, electronic equipment and a storage medium, so as to improve the efficiency of model visualization.
In a first aspect, an embodiment of the present application provides a method for visualizing a model, including:
Inquiring whether model analysis data stored in association with the first identification information exists in a service platform according to the first identification information of the to-be-processed model;
if the model analysis data does not exist, a model file of the model to be processed is obtained;
determining the model type of the model to be processed according to the model file;
analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data;
rendering the model analysis data to obtain a visual result of the model to be processed.
In a second aspect, an embodiment of the present application provides a model visualization apparatus, including:
The query unit is used for querying whether model analysis data stored in association with the first identification information exists in the service platform according to the first identification information of the to-be-processed model;
the obtaining unit is used for obtaining a model file of the model to be processed if the model analysis data does not exist;
the determining unit is used for determining the model type of the model to be processed according to the model file;
The analysis unit is used for analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data;
And the rendering unit is used for rendering the model analysis data to obtain a visual result of the model to be processed.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to perform the model visualization method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the method of model visualization according to the first aspect.
It can be seen that in the embodiments of the present application, whether model analysis data stored in association with first identification information of a to-be-processed model exists is first queried on a service platform according to the first identification information; if the model analysis data does not exist, a model file of the to-be-processed model is obtained; the model type of the to-be-processed model is then determined according to the model file; the model file is analyzed according to a model analysis mode corresponding to the model type to obtain model analysis data; and finally, the model analysis data is rendered to obtain a visualization result of the to-be-processed model. On the one hand, by querying the service platform with the first identification information for model analysis data stored in association with it, and analyzing the model file only when no such data exists on the service platform, it can be determined whether reusable model analysis data exists, which reduces repeated analysis work and improves the efficiency of model visualization. On the other hand, when the model types of to-be-processed models differ, different model analysis modes can be adopted to analyze each to-be-processed model in a targeted manner, which improves the efficiency of model analysis and thus further improves the efficiency of model visualization.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present specification, and a person skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment of a model visualization method according to an embodiment of the present application;
FIG. 2 is a process flow diagram of a model visualization method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of format conversion of a model to be processed according to an embodiment of the present application;
FIG. 4 is a visual result of a conventional machine learning model provided by an embodiment of the present application;
FIG. 5 is a visualization result of a graph convolution model provided in an embodiment of the present disclosure;
FIG. 6 is a visual result of a decision tree model provided by an embodiment of the present application;
FIG. 7 is a diagram of a model visualization method for a machine learning platform according to an embodiment of the present application;
FIG. 8 is a process flow diagram of another method for visualizing models provided in accordance with an embodiment of the present application;
FIG. 9 is a functional block diagram of a model visualization system provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a model visualization device according to an embodiment of the present disclosure;
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the embodiments of the present application, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
This specification provides embodiments of a model visualization method.
Model visualization can intuitively reflect the internal topological structure of a model and the detailed information of each module in the model; it can further help explain the operating logic of the model and the reasons why the accuracy of the model does or does not reach an expected threshold.
For some models, the visualization result needs interaction to trigger the presentation of detailed information, so such a visualization result is not convenient to store and transmit over a network, and it is difficult to obtain it simply by reading a historical visualization result generated and stored by someone else. Therefore, when a user performs model visualization on a local device, the user needs to first obtain the model file, and then analyze and render it to obtain the visualization result.
However, the model file is generated after model training is completed, the user who has the model visualization requirement may be different from the user who performed the model training, and the two users may use different devices; the user who has the visualization requirement therefore needs to ask the user who performed the training for the model file. There may also be a long time interval, for example months or even years, between when the visualization requirement arises and when the model training was executed. These factors may all make model visualization inefficient, and in order to solve the above problems, an embodiment of the present application provides a model visualization method.
The model visualization method provided in one or more embodiments of the present disclosure may be applied to an implementation environment of a model visualization method, as shown in fig. 1, where the implementation environment includes at least a server 101 for performing model visualization.
The server 101 may be a server, or a server cluster formed by a plurality of servers, or one or more cloud servers in a cloud computing platform, for performing model visualization.
In this implementation environment, during model visualization the server 101 queries, according to first identification information of a to-be-processed model, whether model analysis data stored in association with the first identification information exists on a service platform; if the model analysis data does not exist, the server obtains a model file of the to-be-processed model, determines the model type of the to-be-processed model according to the model file, analyzes the model file according to a model analysis mode corresponding to the model type to obtain model analysis data, and renders the model analysis data to obtain a visualization result of the to-be-processed model. On the one hand, by querying the service platform with the first identification information for model analysis data stored in association with it, and analyzing the model file only when no such data exists on the service platform, it can be determined whether reusable model analysis data exists, which reduces repeated analysis work and improves the efficiency of model visualization. On the other hand, when the model types of to-be-processed models differ, different model analysis modes can be adopted to analyze each to-be-processed model in a targeted manner, which improves the efficiency of model analysis and thus further improves the efficiency of model visualization.
Fig. 2 is a process flow diagram of a model visualization method according to an embodiment of the present application. Referring to fig. 2, the method for visualizing a model provided in the present embodiment specifically includes steps S202 to S210.
Step S202, inquiring whether model analysis data stored in association with the first identification information exists in the service platform according to the first identification information of the to-be-processed model.
Step S202 may be performed after model training of the initial model is completed to obtain a model to be processed.
The initial model may be a machine learning model that has applied artificial intelligence techniques that has not been engaged in model training.
The model to be processed may be a model obtained by inputting a training sample set into an initial model for iterative training.
The machine learning model may be a mathematical model that is trained and learned by input data to enable prediction or classification of new data. Machine learning models include, but are not limited to: linear models, nonlinear models, decision trees, neural networks, support vector machines, naive bayes classifiers, and the like.
The purpose of the machine learning model is to learn the relationships between the data so that new data can be predicted or classified. The model typically learns these relationships using training data and adjusts its own parameters based on errors in the training data to improve the accuracy of the model.
And respectively inputting different training sample sets into the same initial model for iterative training, so that the to-be-processed models applicable to different business scenes can be obtained. The initial model has the same model structure as each model to be processed, and model parameters are different. The model parameters of the initial model may be pre-configured parameter initial values, and the model parameters of each model to be processed need to be obtained through iterative training.
For example, training sample set 1 is input into initial model A for iterative training to obtain to-be-processed model 1, and to-be-processed model 1 is applicable to service scene 1; training sample set 2 is input into initial model A for iterative training to obtain to-be-processed model 2, and to-be-processed model 2 is applicable to service scene 2; training sample set 3 is input into initial model A for iterative training to obtain to-be-processed model 3, and to-be-processed model 3 is applicable to service scene 3. Initial model A, to-be-processed model 1, to-be-processed model 2 and to-be-processed model 3 have the same model structure, and the model parameters of any two of these models are different.
The two models are identical in structure, namely the modules included by the two models are identical and the connection relation among the modules is identical.
Model parameters are configuration variables inside the model that change continuously during model training. For example, weight information of the neural network, coefficients of logistic regression, and the like.
During model training, the model structure usually remains unchanged, that is, no module is added, no module is removed, and the connection relations between modules are not changed; only the model parameters are adjusted based on the training loss, for example a weight value is increased. The service platform may be preconfigured with identification information identifying the respective models.
And a corresponding relation exists between the first identification information and the model to be processed. The unique corresponding model to be processed can be determined on the service platform through the first identification information.
Querying the service platform, according to the first identification information of the to-be-processed model, for model analysis data stored in association with the first identification information may be performed as soon as model training of the initial model is completed and the to-be-processed model is obtained. In this case, the length of time between the point at which training ends and the point at which the query operation is executed is smaller than a first time length threshold.
For example, the first time length threshold may be 5 minutes. And the service platform immediately executes the query operation every time the service platform completes one-time model training to obtain a new model to be processed.
Alternatively, the query may be performed some time after model training of the initial model is completed and the to-be-processed model is obtained. In this case, the length of time between the point at which training ends and the point at which the query operation is executed is greater than a second time length threshold.
The numbers "first", "second", etc. in this specification are merely for convenience in distinguishing similar features, and do not have a practical meaning, and are not described in detail below.
For example, the second time length threshold may be one day. Three months after the service platform finishes a round of model training and obtains a new to-be-processed model, a user who has a model visualization requirement actively triggers execution of the query operation.
The service platform may store at least one identification information, and model parsing data corresponding to each identification information.
The identification information is used for determining a unique corresponding model among a plurality of models stored in the service platform.
Querying the service platform, according to the first identification information of the to-be-processed model, for model analysis data stored in association with the first identification information may be implemented as follows:
First, whether first identification information exists is inquired in the identification information stored in the service platform.
And if the service platform does not have the first identification information, the service platform is indicated to have no model analysis data which is stored in association with the first identification information.
If the service platform has the first identification information, continuously inquiring whether the model analysis data with the association relation with the first identification information exists or not.
If the model analysis data with the corresponding relation with the first identification information exists, determining that the service platform has the model analysis data which is stored in association with the first identification information.
If the model analysis data with the corresponding relation with the first identification information does not exist, determining that the service platform does not exist the model analysis data stored in association with the first identification information.
The model analysis data may be data obtained by analyzing a model file of a model to be processed, and the data may be used for model visualization.
If the service platform has the model analysis data stored in association with the first identification information, the model analysis data can be read, rendering processing is carried out according to the model analysis data, and a visualization result of the model to be processed is obtained.
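As an illustration only (not part of the patent text), the query-then-parse flow of steps S202 to S208 could be sketched roughly as follows in Python, assuming a simple key-value store standing in for the service platform and hypothetical helper callables for type detection and parsing:

```python
from typing import Callable, Dict

# Hypothetical in-memory stand-in for the service platform's storage:
# first identification information -> model analysis data.
analysis_store: Dict[str, dict] = {}


def get_or_parse_analysis_data(first_id: str,
                               model_file_path: str,
                               detect_type: Callable[[str], str],
                               parse_file: Callable[[str, str], dict]) -> dict:
    """Return reusable model analysis data if it exists, otherwise analyze the model file."""
    cached = analysis_store.get(first_id)       # query by first identification information
    if cached is not None:
        return cached                           # reusable data exists: skip repeated analysis

    model_type = detect_type(model_file_path)                 # step S206: determine the model type
    analysis_data = parse_file(model_file_path, model_type)   # step S208: type-specific analysis
    analysis_store[first_id] = analysis_data    # store in association with first_id for later reuse
    return analysis_data
```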
Step S204, if the model analysis data does not exist, a model file of the model to be processed is obtained.
The model training can be performed by a service end of the service platform, or can be performed by other service ends except the service end of the service platform.
Taking the case where model training is performed by the server of the service platform as an example, after the server of the service platform finishes model training and obtains the to-be-processed model, it may generate a model file of the to-be-processed model.
The model file may be a file obtained by storing the model to be processed obtained by training through a preset frame.
It should be noted that, in a plurality of different servers, if each server performs model training and generates a model file by using a different framework, the format of the model file generated by each server may be different.
Step S206, determining the model type of the model to be processed according to the model file.
Model types are used to describe which type of model the model to be processed belongs to.
In specific implementation, the machine learning model may be divided into a plurality of model types in advance: a first type, a second type, a third type, etc., then the model type of the model to be processed is one of the plurality of model types.
For example, if the to-be-processed model is a decision tree model, the model type of the to-be-processed model is the type corresponding to tree models.
In a specific implementation, determining a model type of the model to be processed according to the model file includes: extracting a first model parameter from a model file; and carrying out query processing in a corresponding relation between the pre-configured model parameters and the model types according to the first model parameters to obtain the model types of the models to be processed.
The first model parameter may be parameter information indicating a model type, or may be parameter information having a correspondence with the model type.
The first model parameter may be a model name, a model class identification, or the like, for example.
The business platform may be preconfigured with correspondence of model parameters to model types, e.g., each model type corresponds to one or more model names, each model type corresponds to a model class identification, etc.
In a case where the first model parameter includes the model name of the to-be-processed model, querying the preconfigured correspondence between model parameters and model types according to the first model parameter to obtain the model type of the to-be-processed model may be implemented as: querying the preconfigured correspondence between model names and model types according to the model name of the to-be-processed model to obtain the model type corresponding to the model name, and determining that model type as the model type of the to-be-processed model.
In a case where the first model parameter includes a model class identification of the to-be-processed model, querying the preconfigured correspondence between model parameters and model types according to the first model parameter to obtain the model type of the to-be-processed model may be implemented as: querying the preconfigured correspondence between model class identifications and model types according to the model class identification of the to-be-processed model to obtain the model type corresponding to the model class identification, and determining that model type as the model type of the to-be-processed model.
Determining the model type of the to-be-processed model according to the model file may alternatively be implemented as: extracting the first model parameter from the model file, and inputting the first model parameter into a classification model for classification processing to obtain a classification result, where the classification result includes the model type of the to-be-processed model.
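As a minimal sketch (the mapping and names below are illustrative assumptions, not taken from the patent), the lookup in a preconfigured correspondence between model parameters and model types could look like this:

```python
# Hypothetical preconfigured correspondence between first model parameters
# (here: model names) and model types; a real platform would configure its own mapping.
MODEL_NAME_TO_TYPE = {
    "dnn": "first type",
    "logistic_regression": "first type",
    "decision_tree": "third type",
}


def detect_model_type(first_model_parameter: str) -> str:
    """Look up the model type for an extracted first model parameter such as a model name."""
    key = first_model_parameter.strip().lower()
    if key not in MODEL_NAME_TO_TYPE:
        raise ValueError(f"no preconfigured model type for parameter {first_model_parameter!r}")
    return MODEL_NAME_TO_TYPE[key]
```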
And step S208, analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data.
In specific implementation, the service platform may be preconfigured with a correspondence between a model type and a model parsing mode.
According to the model type of the model to be processed and the corresponding relation between the model type and the model analysis mode, the model analysis mode corresponding to the model type of the model to be processed can be inquired and obtained.
In a specific implementation, if the model type includes a first type, analyzing the model file according to the model analysis mode corresponding to the model type to obtain the model analysis data includes: querying the second identification information of the initial model according to the correspondence between the to-be-processed model and the initial model and the first identification information; querying, according to the second identification information, whether model structure information stored in association with the second identification information exists on the service platform; if the model structure information stored in association with the second identification information exists, reading the model structure information; extracting model parameter information from the model file; and analyzing the model parameter information and the model structure information to obtain the model analysis data. A to-be-processed model of the first type may be a deep neural network model, a logistic regression model, etc.
The deep neural network model may be a DNN (Deep Neural Network) model.
A neural network is an extension of the perceptron, and a DNN can be understood as a neural network with many hidden layers. A multilayer neural network and a deep neural network (DNN) are essentially the same thing in practice; a DNN is also called an MLP (Multilayer Perceptron).
The logistic regression model may be an LR (Logistic Regression) model.
The LR model is a linear classification model and also a generalized linear regression model; it can calculate the probability that an event occurs given certain sample features.
Considering that performing model training on the same initial model with different training sample sets can yield multiple to-be-processed models applicable to different service scenes, and that the initial model and each of these to-be-processed models share the same model structure and differ only in model parameters, the model structure information can be stored in association with the second identification information of the initial model in order to reduce repeated work. Thus, when a to-be-processed model trained from that initial model needs to be visualized, the model structure information corresponding to the initial model can be read directly, the model parameter information can be extracted from the model file of the to-be-processed model, and the model analysis data can be obtained by analyzing the model structure information and the model parameter information together.
In the scene of model batch visualization, the analysis workload of determining the model structure information from the model file can be reduced by repeatedly utilizing the model structure information stored in association with the second identification information, namely repeated analysis work is reduced, and the visualization efficiency is improved.
There may be a many-to-one correspondence between to-be-processed models and the initial model. A unique corresponding initial model can be determined from a to-be-processed model, but the reverse does not hold: from an initial model, several corresponding to-be-processed models may be determined rather than only one.
Querying the second identification information of the initial model according to the correspondence between the to-be-processed model and the initial model and the first identification information may be implemented as: determining, as the second identification information, the other identification information that has a correspondence with the first identification information.
Querying, according to the second identification information, whether model structure information stored in association with the second identification information exists on the service platform may be implemented as follows:
First, query whether the second identification information exists on the service platform.
If the second identification information does not exist, it is determined that no model structure information stored in association with the second identification information exists on the service platform.
If the second identification information exists, continuing to inquire whether the model structure information with the corresponding relation with the second identification information exists.
And if the model structure information with the corresponding relation with the second identification information exists, determining that the model structure information which is stored in association with the second identification information exists in the service platform.
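For illustration, reusing the structure information stored with the initial model's second identification information and combining it with parameters extracted from the model file could be sketched as follows (the store names and layout are assumptions, not the patent's implementation):

```python
from typing import Dict, Optional

# Hypothetical stores on the service platform.
id_correspondence: Dict[str, str] = {}   # first identification information -> second identification information
structure_store: Dict[str, dict] = {}    # second identification information -> model structure information


def parse_first_type(first_id: str, model_parameter_info: dict) -> Optional[dict]:
    """Combine reusable structure info of the initial model with parameters from the model file."""
    second_id = id_correspondence.get(first_id)      # query the second identification information
    if second_id is None:
        return None                                  # no associated initial model is known
    structure_info = structure_store.get(second_id)  # structure info stored with the initial model
    if structure_info is None:
        return None                                  # nothing to reuse: fall back to full analysis
    # Model analysis data = shared model structure + model-specific parameters.
    return {"structure": structure_info, "parameters": model_parameter_info}
```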
In a specific implementation, if the model type includes a second type, analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data includes: performing format conversion processing on the model file to obtain a first model file in a first preset format; extracting model parameter information from the first model file; performing deserialization processing on the model parameter information to obtain a second model file; and analyzing the model structure information in the first model file together with the second model file to obtain the model analysis data.
The second type of model to be processed may be a deep neural network model, a logistic regression model, etc.
The second type in this implementation may cover the same kinds of models as the first type described above, but correspond to a different model analysis mode.
The first preset format may be ONNX format, for example.
ONNX is an open format representing a deep learning model defining a set of standard formats independent of environment and platform.
An ONNX-format model file is generally composed of the structures ModelProto, GraphProto, NodeProto, AttributeProto, ValueInfoProto and TensorProto.
When we load the ONNX format model file into memory, we get a ModelProto that contains some version information, producer information and a very important GraphProto.
GraphProto contains four key repeated arrays: node (of type NodeProto), input (of type ValueInfoProto), output (of type ValueInfoProto) and initializer (of type TensorProto). All computing nodes of the model are stored in node, all input nodes of the model are stored in input, all output nodes of the model are stored in output, and all weights of the model are stored in initializer. The topology between nodes can be obtained from the pointing relations of the two string arrays input and output of each node, so a network topology graph of the deep learning model can be quickly constructed from this information. Finally, each computing node also contains an AttributeProto array describing the attributes of the node; for example, the attributes of a Conv layer include information such as pads (edge padding) and strides (step size).
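For illustration, the repeated arrays described above can be read with the official onnx Python package roughly as follows (the file name is illustrative):

```python
import onnx

model = onnx.load("model.onnx")   # ModelProto: version info, producer info and a GraphProto
graph = model.graph               # the GraphProto described above

for node in graph.node:           # NodeProto: all computing nodes of the model
    # node.input / node.output are string arrays; matching names across nodes
    # yields the directed edges of the network topology.
    print(node.op_type, list(node.input), "->", list(node.output))
    for attr in node.attribute:   # AttributeProto: e.g. pads and strides of a Conv layer
        print("  attribute:", attr.name)

inputs = [v.name for v in graph.input]                   # ValueInfoProto: input nodes
outputs = [v.name for v in graph.output]                 # ValueInfoProto: output nodes
weights = {t.name: t.dims for t in graph.initializer}    # TensorProto: all weights of the model
```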
Performing format conversion processing on the model file to obtain a first model file in a first preset format may be implemented as: performing format conversion processing on the model file with a preset model conversion tool to obtain a first model file in ONNX format.
Considering that the formats of model files generated with different frameworks may differ, for a DNN model or an LR model the formats of the model files are uniformly converted into a universal standard format recognized by multiple parties, which makes it convenient for subsequent processing to extract specified data from the model files according to a uniform flow and improves the efficiency of model visualization.
The extracting of the model parameter information from the first model file may be extracting the model parameter information from a specified area of the first model file.
The model parameter information may be configuration variables inside the model. For example, the model parameter information may include all weights of the model stored initializer in GraphProto.
Deserialization processing is performed on the model parameter information to obtain the second model file.
Deserialization may be understood as converting an unreadable binary string into a readable string, thereby forming the second model file.
For example, the model structure information in the first model file may include a node array, an input array, and an output array in GraphProto.
In analyzing the model structure information in the first model file together with the second model file to obtain the model analysis data, note that both the model parameter information and the model structure information originate from the first model file; the model parameter information is separated out and an independent file is generated because analysis is more difficult when the model parameter information and the model structure information are mixed together.
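A minimal sketch of this separation, assuming the onnx package and illustrative file names, might look like the following; the weights (initializer) are deserialized and written to an independent second model file while the structure arrays remain available for analysis:

```python
import pickle

import onnx
from onnx import numpy_helper

first_model = onnx.load("first_model.onnx")   # the first model file in ONNX format
graph = first_model.graph

# Model parameter information: deserialize the initializer tensors into readable arrays
# and write them out as an independent "second model file".
weights = {t.name: numpy_helper.to_array(t) for t in graph.initializer}
with open("second_model_file.pkl", "wb") as f:
    pickle.dump(weights, f)

# Model structure information stays with the first model file and is analyzed separately.
structure = {
    "nodes": [(n.name, n.op_type, list(n.input), list(n.output)) for n in graph.node],
    "inputs": [v.name for v in graph.input],
    "outputs": [v.name for v in graph.output],
}
```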
In a specific implementation, if the model type includes a third type, analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data includes: locating the root node in the model file and acquiring first attribute information of the root node; detecting, in the model file and starting from the root node, at least one child node that has a direct connection relation or an indirect connection relation with the root node, and acquiring second attribute information of each child node; and generating the model analysis data according to the root node, each child node, the first attribute information and the second attribute information.
The third type of model to be processed may be a tree model.
Tree Models (Tree Models) are a class of decision Tree-based machine learning algorithms. The method is mainly characterized in that a tree structure is adopted to display classification rules or regression processes.
It should be noted that, considering that the visualization result of the tree model may only include the topology structure of the tree model, and the parsing difficulty of the model file of the tree model is low, if the model file is first converted into the specified format and then parsed, the efficiency of the visualization of the model may be reduced, so in the case that the model type includes the third type, it is generally not necessary to perform the format conversion process on the model file.
In case the model type is the third type, only one root node is included in the model file.
And positioning the root node in the model file and acquiring first attribute information of the root node.
In the model file, at least one child node with a direct connection relationship or an indirect connection relationship with the root node is detected according to the root node, and second attribute information of each child node is obtained.
The root node may be connected to each of N nodes by a directed edge, where each of the N nodes is a child node that has a direct connection relation with the root node, and N is a natural number greater than or equal to 1. That is, the root node and the N nodes have parent-child relations: the root node is the parent node and the N nodes are child nodes.
For example, the root node is node 1, and node 2 has a direct connection relation with node 1. Node 1 and node 2 can be connected by edge 1, which points from node 1 to node 2, indicating that a parent-child relation exists between node 1 and node 2, with node 1 as the parent node and node 2 as the child node.
The root node may be connected to each of the M nodes by x nodes and y directed edges, x being a natural number greater than 0 and y being a natural number greater than 1. Each node in the M nodes is a child node which has an indirect connection relation with the root node. M is a natural number greater than or equal to 0.
For example, the root node is node 1, node 2 has a direct connection relation with node 1, node 3 has a direct connection relation with node 2, and node 1 has an indirect connection relation with node 3. Node 1 and node 2 can be connected by edge 1, which points from node 1 to node 2, indicating a parent-child relation in which node 1 is the parent node and node 2 is the child node. Node 2 and node 3 can be connected by edge 2, which points from node 2 to node 3, indicating a parent-child relation in which node 2 is the parent node and node 3 is the child node. Node 1 and node 3 can thus be connected through edge 1, node 2 and edge 2.
For another example, the root node is node 1, the child nodes having a direct connection relationship with node 1 include node 2 and node 3, the child nodes having a direct connection relationship with node 2 include node 4 and node 5, and the child nodes having a direct connection relationship with node 3 include node 6. Among the plurality of nodes, the child nodes having an indirect connection relationship with the root node include a node 4, a node 5, and a node 6.
According to the root node, each child node, the first attribute information and the second attribute information, model analysis data can be generated, and the model analysis data comprises a topological structure of a model to be processed.
Taking the case where the model type includes the third type and the to-be-processed model is a binary tree model as an example, analyzing the model file according to the model analysis mode corresponding to the model type to obtain the model analysis data may be implemented in the following manner:
positioning a child node of the root node in the model file, wherein the child node can be called a first child node for convenience of distinguishing, and acquiring second attribute information of the first child node;
Positioning a child node of a first child node in a model file, wherein the child node of the first child node can be called a second child node for convenience of distinguishing, and second attribute information of the second child node is acquired;
……
locating, in the model file, the child node of the (N-1)-th child node, which for convenience of distinction may be called the N-th child node, and acquiring second attribute information of the N-th child node; the N-th child node is a leaf node without child nodes. N is a natural number greater than or equal to 1.
Each child node may include a left child node and a right child node.
Among the above nodes, a direct connection relation exists between the root node and the first child node, and an indirect connection relation exists between the root node and each of the second child node through the N-th child node.
The first child node and the second child node have a direct connection relationship, and the first child node and the Nth child node have an indirect connection relationship under the condition that N is larger than 2.
And generating analysis data of the model to be processed according to the root node, each child node, the first attribute information and the second attribute information.
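As an illustrative sketch (the in-memory node layout is an assumption, not the patent's file format), walking a binary tree model from the root node and collecting each node's attribute information could look like this:

```python
from typing import Any, Dict, List, Optional, Tuple

Node = Dict[str, Any]   # illustrative node layout: {"attributes": {...}, "left": ..., "right": ...}


def collect_nodes(node: Optional[Node],
                  parent_id: Optional[int] = None,
                  out: Optional[List[Tuple[Optional[int], int, dict]]] = None) -> list:
    """Record (parent id, node id, attribute information) for the root node and every child node."""
    if out is None:
        out = []
    if node is None:
        return out
    node_id = len(out)
    out.append((parent_id, node_id, node.get("attributes", {})))   # first/second attribute information
    collect_nodes(node.get("left"), node_id, out)    # child with a direct connection to this node
    collect_nodes(node.get("right"), node_id, out)
    return out


# Tiny illustrative tree: a root node with two leaf child nodes.
root = {"attributes": {"feature": "age", "threshold": 30},
        "left": {"attributes": {"leaf": True, "value": 0}},
        "right": {"attributes": {"leaf": True, "value": 1}}}
analysis_data = collect_nodes(root)   # topology plus per-node attributes
```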
In a specific implementation, if the model type includes a fourth type, analyzing the model file according to the model analysis mode corresponding to the model type to obtain model analysis data includes: acquiring a configuration file of the to-be-processed model; reading, from the configuration file, configuration information of model training of a plurality of sub-models in the to-be-processed model; and analyzing the model file according to the configuration information to obtain the model analysis data.
The fourth type of model to be processed may be a large model.
A large model refers to a machine learning model with a large number of parameters and complex structures. Large models can be applied to handle large-scale data and complex problems.
It should be noted that, when the model type is the fourth type, the storage space occupied by the model file of the to-be-processed model may be very large, for example 50 GB. Performing format conversion processing on such a model file may consume a large amount of storage and computing resources and cause unnecessary waste, with the disadvantages far outweighing the benefits; therefore, when the model type is the fourth type, it is generally not necessary to perform format conversion processing on the model file.
The configuration file of the model to be processed may be a file generated after the model training is completed, and the file may include configuration information of model training performed by a plurality of sub-models in the model to be processed.
The larger the model, the higher the requirements that model training places on the device: the stronger the computing power required of a single card, i.e. the CPU (Central Processing Unit), in the device, and the larger the memory space required of the device. When single-card operation cannot meet the model training requirements, multiple cards or even multiple machines are often needed to coordinate and complete the training work together.
When the to-be-processed model is a large model, a parallel strategy can be adopted: the weight values in specific subgraphs of the to-be-processed model are evenly distributed to a plurality of cards for processing, which reduces the memory and computing power requirements that the model places on a single card. The parallel strategy may be used to represent how the large model is split into multiple sub-models and to which device the training of each sub-model is assigned.
When the model type of the to-be-processed model includes the fourth type, the to-be-processed model may have a plurality of sub-models. For each sub-model, the configuration information of model training of the sub-model may include splitting information describing how the local model structure belonging to the sub-model is split out of the to-be-processed model, and may further include device information of the model training of the sub-model. Analyzing the model file according to the configuration information to obtain the model analysis data may be implemented as: splitting the model file of the to-be-processed model according to the configuration information of model training of each sub-model to obtain the sub-model file of that sub-model, and then analyzing the sub-model file to obtain the model analysis data of the sub-model.
The model resolution data of the model to be processed may include model resolution data of each of a plurality of sub-models of the model to be processed.
For example, the configuration file includes configuration information for model training by sub-model 1, configuration information for model training by sub-model 2, and configuration information for model training by sub-model 3.
Configuration information of model training of sub model 1 describes that data of x1-x2 lines in a model file of a model to be processed corresponds to sub model 1, and that sub model 1 is responsible for model training by device 1. x1 and x2 are natural numbers greater than or equal to 1, and x1 is less than x2.
Configuration information for model training of sub-model 2 describes that the data of lines y1-y2 in the model file of the model to be processed corresponds to sub-model 2, and that sub-model 2 is responsible for model training by device 2. y1 and y2 are natural numbers greater than or equal to 1, y1 is less than y2, and y1 is greater than x2.
Configuration information of model training of sub-model 3 describes that data of lines z1-z2 in the model file of the model to be processed corresponds to sub-model 3, and that sub-model 3 is responsible for model training by device 3. z1 and z2 are natural numbers greater than or equal to 1, z1 is less than z2, and z1 is greater than y2.
Based on the configuration information of model training of the sub-model 1, the data of the x1-x2 rows can be split from the model file of the model to be processed to obtain the sub-model file 1 of the sub-model 1. And analyzing the sub-model file 1 to obtain model analysis data of the sub-model 1.
Based on the configuration information of model training of the sub-model 2, the data of the y1-y2 rows can be split from the model file of the model to be processed to obtain the sub-model file 2 of the sub-model 2. And analyzing the sub-model file 2 to obtain model analysis data of the sub-model 2.
Based on the configuration information of model training of the sub-model 3, the data of the z1-z2 rows can be split from the model file of the model to be processed to obtain the sub-model file 3 of the sub-model 3. And analyzing the sub-model file 3 to obtain model analysis data of the sub-model 3.
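A rough sketch of this splitting step, assuming the configuration file has already been parsed into 1-based inclusive line ranges per sub-model (names and ranges are illustrative):

```python
from typing import Dict, List, Tuple


def split_large_model_file(model_file_lines: List[str],
                           sub_model_ranges: Dict[str, Tuple[int, int]]) -> Dict[str, List[str]]:
    """Split a large-model file into per-sub-model files according to configuration information."""
    sub_model_files = {}
    for name, (start, end) in sub_model_ranges.items():
        # e.g. lines x1..x2 belong to sub-model 1 (1-based, inclusive).
        sub_model_files[name] = model_file_lines[start - 1:end]
    return sub_model_files


# Illustrative usage with made-up line ranges read from the configuration file.
lines = [f"row {i}" for i in range(1, 31)]
ranges = {"sub_model_1": (1, 10), "sub_model_2": (11, 20), "sub_model_3": (21, 30)}
sub_files = split_large_model_file(lines, ranges)   # each sub-model file is then analyzed separately
```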
The format conversion of the model to be processed for each model type may be exemplarily described below in connection with fig. 3.
Fig. 3 is a schematic diagram of format conversion of a to-be-processed model according to an embodiment of the present application.
As shown in fig. 3, the to-be-processed models of the three model types are respectively:
(a1) Deep neural network model / logistic regression model, i.e. the DNN/LR model.
(a2) Tree model.
(a3) Large model.
The to-be-processed models of the three model types are obtained by iterative training based on data, calculation force and an algorithm. Wherein, the data refers to training samples, the computing power refers to computing resources, and the algorithm refers to training methods.
For the DNN/LR model of (a1), the to-be-processed model is obtained and saved after model training ends, yielding the model file 302 of the DNN/LR model. Before data analysis, the model file 302 of the DNN/LR model needs to undergo format conversion processing to obtain the first model file 304 in ONNX format.
For the tree model of (a2), the to-be-processed model is obtained and saved after model training ends, yielding the model file 306 of the tree model. Considering that the model file 306 of the tree model is easy to analyze, no format conversion processing needs to be performed on it before data analysis.
For the large model of (a3), the to-be-processed model is obtained and saved after model training ends, yielding the model files 308 of the large model. Considering that the model files 308 of the large model occupy excessive storage space and that format conversion would consume a large amount of computing and storage resources, no format conversion processing needs to be performed on them before data analysis.
And step S210, performing rendering processing according to the model analysis data to obtain a visual result of the model to be processed.
Rendering may be the process of outputting a visualization result using various page resources.
The visualization results of the model to be processed may include static visualization results and dynamic visualization results.
The static visualization result may be, for example, a topological graph of the model to be processed presented in a picture format. In the dynamic visualization result, corresponding detail parameters can be displayed by each node in the topological graph of the model to be processed through interaction triggering.
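For illustration, a static visualization result (a topology picture) could be rendered from model analysis data with the graphviz Python package, assuming Graphviz is installed locally and the simple data layout shown below:

```python
from graphviz import Digraph  # assumes the graphviz package and the Graphviz binaries are installed


def render_static_result(analysis_data: dict, out_path: str = "model_topology") -> str:
    """Render model analysis data (nodes and edges) into a picture-format visualization result."""
    dot = Digraph(format="png")
    for node_id, label in analysis_data["nodes"]:   # e.g. [("n1", "input data"), ("n2", "linear classifier")]
        dot.node(node_id, label)
    for src, dst in analysis_data["edges"]:         # e.g. [("n1", "n2")]
        dot.edge(src, dst)
    return dot.render(out_path, cleanup=True)       # writes e.g. model_topology.png


example = {"nodes": [("n1", "input data"), ("n2", "linear classifier"), ("n3", "predicted class")],
           "edges": [("n1", "n2"), ("n2", "n3")]}
# render_static_result(example)   # uncomment to produce the picture when Graphviz is available
```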
The visualization results may be illustrated below in connection with fig. 4-6.
Fig. 4 is a visual result of a conventional machine learning model according to an embodiment of the present application.
The model type of the conventional machine learning model as shown in fig. 4 may be a second type, and the visualization result is a static visualization result, which reveals the topology of the conventional machine learning model.
As shown in fig. 4, the input data 402 is the input to a linear classifier 404. The output of the linear classifier 404 includes the input of the normalizer 406 and the input of the predictor 412. The output of normalizer 406 is the input to compression dictionary 408. The output of the compression dictionary 408 is the probability 410 for each target class. The output of predictor 412 is a predicted target class 414.
Fig. 5 is a visual result of a graph convolution model according to an embodiment of the present disclosure.
The model type of the graph rolling model shown in fig. 5 may be a second type, and the visualization result is a dynamic visualization result, which shows both the topology structure of the graph rolling model and the detail parameters of each node in the topology structure. Fig. 5 may be regarded as one screenshot of the dynamic visualization result.
As shown in fig. 5, the topology map 502 is located on the left side of fig. 5, and the node information 506 is located on the right side of fig. 5. The node information 506 is used to demonstrate the detail parameters corresponding to the selected node 504 in the topology map 502. That is, the node information 506 is popped up on the right in the case where the user interacts with the selected node 504, and the node information of the nodes participating in the interaction is popped up on the right in the case where the user interacts with other nodes than the selected node 504.
FIG. 6 is a visual result of a decision tree model according to an embodiment of the present application.
The model type of the decision tree model as shown in fig. 6 may be a third type, and the visualization result may be a static visualization result, showing the topology of the decision tree model.
As shown in fig. 6, the histogram 602 corresponds to the root node.
Pie chart 604 corresponds to the left child node of the root node and that node is a leaf node with no child nodes.
Bar graph 606 corresponds to the right child node of the root node.
The histogram 608 corresponds to the left child node of the right child node of the root node, and the node is a leaf node with no child nodes.
The histogram 610 corresponds to the right child node of the right child node of the root node, and this node is a leaf node without child nodes.
In a specific implementation, the model visualization method further includes: and storing the model analysis data and the first identification information in a service platform in a correlated manner.
After step S208 is performed, model analysis data is obtained, and the model analysis data may be stored in association with the first identification information in the service platform.
In a model visualization scene, by storing the model analysis data of the to-be-processed model in association with the first identification information corresponding to the to-be-processed model on the service platform, any user of the service platform can query it at any time to obtain the model analysis data of the to-be-processed model, and then perform model visualization with the model analysis data instead of re-analyzing the model file, which reduces repeated analysis workload and improves the reuse rate of the model analysis data.
In a specific implementation manner, after rendering the model analysis result to obtain a visualization result of the model to be processed, the model visualization method further includes: storing the visual result meeting the preset format in a service platform; and establishing an association relation between the visualization result and the first identification information.
The preset format can be a picture format or other file formats convenient to store.
Taking a preset format as a picture format as an example, the visual result meeting the preset format refers to the visual result stored according to the picture format.
For example, if the model type is the third type and the to-be-processed model is a decision tree model, the visualization result of the to-be-processed model can be saved in jpg (Joint Photographic Experts Group) format. This visualization result is a visualization result satisfying the preset format.
Jpg is an image format saved using a lossy compression method.
Storing the visualization result satisfying the preset format on the service platform may be implemented as storing the visualization result in picture format in a designated storage area of the service platform.
Establishing the association relation between the visualization result and the first identification information may be implemented as storing the visualization result in picture format in association with the first identification information.
When a user has a model visualization requirement, a visualization result of a corresponding picture format can be queried according to the first identification information, and then the picture is downloaded from the service platform to the local equipment for viewing.
In a specific implementation, the model visualization method further includes: responding to a visualization request of a to-be-processed model, and acquiring first identification information of the to-be-processed model; inquiring whether a visual result corresponding to the first identification information exists or not in the service platform according to the first identification information and the association relation; and if the visualization result corresponding to the first identification information exists, outputting the visualization result.
The visualization request may be used to request the business platform to visualize the pending model.
The visualization request of the model to be processed may carry first identification information corresponding to the model to be processed.
Responding to the visualization request of the model to be processed and acquiring the first identification information of the model to be processed may be extracting the first identification information from the visualization request.
Under the condition that a service platform receives a visualization request carrying first identification information, according to the association relation between the visualization result and the first identification information, the service platform inquires whether the visualization result corresponding to the first identification information exists or not. And if the visualization result corresponding to the first identification information exists, outputting the visualization result.
Considering that visualization results meeting the preset format can be stored in the service platform, while visualization results not meeting the preset format are inconvenient to store, the service platform holds a visualization result corresponding to the first identification information in the case that the visualization result of the model to be processed meets the preset format.
By inquiring the visual result by using the first identification information, the visual result generated in the past and stored in the service platform can be reused, repeated analysis work and rendering work are reduced, and the model visual efficiency is improved.
In a specific implementation, the model visualization method further includes: if the visualization result corresponding to the first identification information does not exist, inquiring, in the service platform, model analysis data corresponding to the first identification information according to the first identification information; and rendering the model analysis data to obtain a visualization result of the model to be processed.
And under the condition that the service platform does not have the visualization result corresponding to the first identification information, inquiring the corresponding model analysis data according to the first identification information.
For rendering the model analysis data to obtain the visualization result of the model to be processed, reference may be made to the corresponding description of step S210.
Querying the visualization result first through the first identification information, and querying the model analysis data only when no visualization result exists, reduces repeated analysis work and rendering work as far as possible, so that model visualization is achieved quickly and its efficiency is improved.
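The query-first, parse-last order described above can be summarized by the following sketch. The find_result, find_analysis, parse_model_file, and render callables are hypothetical stand-ins for the service platform's own lookup, parsing, and rendering facilities.

```python
from typing import Callable, Optional


def get_visualization(first_id: str,
                      find_result: Callable[[str], Optional[str]],
                      find_analysis: Callable[[str], Optional[dict]],
                      parse_model_file: Callable[[str], dict],
                      render: Callable[[dict], str]) -> str:
    """Return a visualization result for the model identified by first_id,
    reusing stored artifacts wherever possible."""
    # 1. Prefer a visualization result already associated with the identifier.
    stored_result = find_result(first_id)
    if stored_result is not None:
        return stored_result
    # 2. Otherwise fall back to stored model analysis data and only re-render.
    analysis = find_analysis(first_id)
    if analysis is None:
        # 3. Parse the model file only when nothing reusable exists.
        analysis = parse_model_file(first_id)
    return render(analysis)
```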
In the embodiment shown in fig. 2, firstly, according to first identification information of a to-be-processed model, inquiring whether model analysis data stored in association with the first identification information exists in a service platform; if the model analysis data does not exist, a model file of the model to be processed is obtained; then, determining the model type of the model to be processed according to the model file; then, according to a model analysis mode corresponding to the model type, analyzing the model file to obtain model analysis data; and finally, rendering the model analysis data to obtain a visual result of the model to be processed. On the one hand, whether the model analysis data which is stored in association with the first identification information exists or not is inquired through the first identification information of the to-be-processed model on the service platform, and the analysis processing of the model file is executed under the condition that the model analysis data does not exist on the service platform, so that whether the reusable model analysis data exists or not can be determined, repeated analysis work is reduced, and the visualization efficiency of the model is improved; on the other hand, under the condition that the model types of the models to be processed are different, different model analysis modes can be adopted to conduct targeted analysis on the models to be processed, so that the model analysis efficiency is improved, and the model visualization efficiency is further improved.
Fig. 7 is a schematic diagram of a model visualization method applied to a machine learning platform according to an embodiment of the present application.
In step S702, the portal sends a model training request to the front-end model management module.
In step S704, the front-end model management module returns response data of the model training request to the portal.
In step S706, the portal sends a model training instruction to the server through the gateway according to the response data.
After receiving the model training instruction, the server inputs the training sample set into the initial model for iterative training to obtain a model to be processed.
In step S708, a model file is generated.
In step S710, after the model file is generated, the model parsing service is automatically triggered.
Step S712, the model file is read.
It should be noted that, in fig. 7, the arrow corresponding to step S712 points from the model parsing service to the NFS (Network File System), which means that the model file is read from the NFS in the process of calling the model parsing service, but in reality, in this step, the model file is transmitted from the NFS to a processing module of the model parsing service.
Step S714, a second model file is generated according to the model file and transmitted to NFS.
The second model file contains model parameter information.
In step S716, the front-end model visualization module transmits response data of the model visualization request to the portal.
Before execution of step S716, the portal sends a model visualization request to the front end model visualization module.
In step S718, the portal sends a model visualization instruction to the backend business system through the gateway.
In step S720, the back-end service system reads the model structure information from the database.
It should be noted that, in fig. 7, the arrow corresponding to step S720 points to the database from the back-end service system, which means that the back-end service system reads the model structure information from the database, but in reality, in this step, the model structure information is transmitted from the database to the back-end service system.
In step S722, the backend service system reads the second model file from the NFS.
It should be noted that, in fig. 7, the arrow corresponding to step S722 points to the NFS from the back-end service system, which means that the back-end service system reads the second model file from the NFS, but in reality, in this step, the second model file is transmitted from the NFS to the back-end service system.
After the back-end service system acquires the model structure information and the second model file, the back-end service system can generate model analysis data of the to-be-processed model according to the model structure information and the second model file and render the model analysis data to obtain a visual result of the to-be-processed model.
Since the technical conception is the same, the description in this embodiment is relatively simple, and the relevant parts only need to refer to the corresponding descriptions of the method embodiments provided above.
FIG. 8 is a process flow diagram of another method for visualizing models provided in an embodiment of the present application.
Step S802, reading a model file.
Step S804, judging the model type of the model file.
In step S806, the model file is converted into a first model file in ONNX format.
In the case that the model type is the second type, the model file is converted into a first model file in ONNX format.
The second type of model to be processed may be a deep neural network model/logistic regression model.
Step S808, extracting model parameter information from the first model file.
Step S810, deserializing the model parameter information to obtain a second model file.
In step S812, the model structure information in the first model file is determined.
Step S814, analyzing according to the model structure information and the second model file.
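Steps S806 to S814 of this second-type path can be illustrated with the following sketch, which assumes a PyTorch model as the model to be processed and ONNX as the first preset format. The file name, the export arguments, and the shape of the returned model analysis data are assumptions for illustration rather than part of this disclosure.

```python
import torch
import onnx
from onnx import numpy_helper


def parse_second_type_model(model: torch.nn.Module,
                            example_input: torch.Tensor,
                            onnx_path: str = "first_model.onnx") -> dict:
    """Convert a second-type model to ONNX and extract parameter and structure information."""
    model.eval()
    # S806: format conversion -- export the model as a first model file in ONNX format.
    torch.onnx.export(model, example_input, onnx_path)
    onnx_model = onnx.load(onnx_path)
    # S808/S810: model parameter information, deserialized into named numpy arrays.
    parameters = {init.name: numpy_helper.to_array(init)
                  for init in onnx_model.graph.initializer}
    # S812: model structure information taken from the graph nodes.
    structure = [{"op": node.op_type,
                  "inputs": list(node.input),
                  "outputs": list(node.output)}
                 for node in onnx_model.graph.node]
    # S814: combine structure and parameter information into model analysis data.
    return {"structure": structure,
            "parameters": {name: arr.shape for name, arr in parameters.items()}}
```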
Step S816, converting the model analysis data into a picture format for storage.
And under the condition that a visualization result obtained by rendering the model analysis data is a static visualization result, converting the model analysis data into a picture format for storage.
Step S818, read the model file stream.
In case the model type is a third type, a tree model file stream is obtained from the model file.
The third type of model to be processed may be a tree model.
Step S820, parse the tree model file stream according to the tree model metadata format.
The format of the model files of the tree model is typically fixed. Metadata refers to data describing data, and tree model metadata formats may be used to describe the format of model files of a tree model.
After the execution of step S820, step S816 is executed.
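Steps S818 and S820 can be illustrated with the following sketch, which assumes an XGBoost booster as a concrete tree model. The xgboost JSON dump is used here as one example of a fixed tree model metadata format, and the collect_nodes helper is a hypothetical flattening step.

```python
import json

import xgboost as xgb


def parse_tree_model(model_path: str) -> list:
    """Read the tree model file stream and parse it according to the booster's
    fixed metadata layout (split feature, threshold, children, leaf value)."""
    booster = xgb.Booster()
    booster.load_model(model_path)  # S818: read the tree model file stream
    # S820: each dumped tree is one JSON document following the tree metadata format.
    return [json.loads(tree) for tree in booster.get_dump(dump_format="json")]


def collect_nodes(node: dict, out: list) -> list:
    """Flatten one dumped tree into a list of node attribute records."""
    out.append({key: node.get(key)
                for key in ("nodeid", "split", "split_condition", "leaf")})
    for child in node.get("children", []):
        collect_nodes(child, out)
    return out
```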
In step S822, the model file stream is read for each large model file in turn.
The model files of the large model may include a plurality of large model files.
In the case where the model type is the fourth type, a large model file stream is acquired from each large model file.
The fourth type of model to be processed may be a large model.
In step S824, merging is performed for the large models.
In step S826, the decomposition is performed based on the parallel policy of the large model.
For this step, reference may be made to the corresponding description, in the embodiment of fig. 2, of the implementation in which configuration information of model training of the plurality of sub-models in the model to be processed is read from the configuration file, and the model file is analyzed according to the configuration information to obtain the model analysis data.
After the execution of step S826, step S816 is executed.
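Steps S822 to S826 can be illustrated with the following sketch, which assumes PyTorch checkpoint shards as the plurality of large model files. The shard naming pattern is an assumption, and the sketch covers only the simplest merge case, with the parallel-policy-driven decomposition noted in comments.

```python
import glob

import torch


def merge_large_model_shards(pattern: str = "checkpoints/model_part_*.pt") -> dict:
    """Read each large model file stream in turn (S822) and merge the partial
    state dicts (S824).

    This assumes the simplest case, where shards hold disjoint parameter names.
    A real fourth-type model also needs the configuration of its parallel policy
    (S826) -- for example, which dimension each weight was split along -- to
    recombine or re-split tensors correctly.
    """
    merged = {}
    for shard_path in sorted(glob.glob(pattern)):
        shard = torch.load(shard_path, map_location="cpu")  # one large model file stream
        merged.update(shard)
    return merged
```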
Since the technical conception is the same, the description in this embodiment is relatively simple, and the relevant parts only need to refer to the corresponding descriptions of the method embodiments provided above.
Fig. 9 is a functional block diagram of a model visualization system according to an embodiment of the present application.
As shown in fig. 9, the algorithm engineer 902 and the data scientist 904 may be two users located at different places, or may be two users respectively using two different terminal devices in the model visualization process.
The algorithm engineer 902 parses model files through the model file parsing unit in the model visualization system, namely model file parsing 912:
(b1) Reading the model file 906 and the model file 908 locally at the terminal device of the algorithm engineer 902;
(b2) Determining a model type of model file 906 and determining a model type of model file 908;
(b3) The model file 906 is analyzed according to the model analysis mode corresponding to the model type of the model file 906 to obtain model analysis data of the model file 906, and the model file 908 is analyzed according to the model analysis mode corresponding to the model type of the model file 908 to obtain model analysis data of the model file 908.
Model file parsing 912 sends the model parsing data of the model file 906 to the model visualization presentation unit in the model visualization system, namely model visualization presentation 914, through network transmission.
Model visualization presentation 914 performs rendering processing according to the model parsing data of model file 906 to generate model A visualization results 916.
Model file parsing 912 sends model parsing data for model file 908 to model visualization presentation 914 over a network transmission.
The model visualization presentation 914 performs rendering processing according to the model parsing data of the model file 908 to generate a model B visualization result 918.
The data scientist 904 parses the model file through model file parsing 912:
(c1) Reading the model file 910 locally at the end device of the data scientist 904;
(c2) Determining a model type of the model file 910;
(c3) And analyzing the model file 910 according to a model analysis mode corresponding to the model type of the model file 910 to obtain model analysis data of the model file 910.
Model file parsing 912 sends model parsing data for model file 910 to model visualization presentation 914 over a network transmission.
The model visualization presentation 914 performs rendering processing according to the model parsing data of the model file 910 to generate a model C visualization result 920.
Since the technical conception is the same, the description in this embodiment is relatively simple, and the relevant parts only need to refer to the corresponding descriptions of the method embodiments provided above.
In the foregoing embodiments, a model visualization method is provided, and correspondingly, based on the same technical concept, the embodiments of the present application further provide a model visualization apparatus, which is described below with reference to the accompanying drawings.
Fig. 10 is a schematic diagram of a model visualization device according to an embodiment of the present application.
The present embodiment provides a model visualization apparatus 1000, including:
A query unit 1002, configured to query, according to first identification information of a to-be-processed model, whether there is model analysis data stored in association with the first identification information in a service platform;
An obtaining unit 1004, configured to obtain a model file of the model to be processed if the model analysis data does not exist;
A determining unit 1006, configured to determine a model type of the model to be processed according to the model file;
The parsing unit 1008 is configured to parse the model file according to a model parsing manner corresponding to the model type, so as to obtain model parsing data;
And the rendering unit 1010 is used for rendering the model analysis data to obtain a visual result of the model to be processed.
Optionally, the determining unit 1006 is specifically configured to:
Extracting a first model parameter from the model file;
And carrying out query processing in a corresponding relation between the pre-configured model parameters and the model types according to the first model parameters to obtain the model types of the model to be processed.
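As a minimal sketch of this query processing, the following snippet looks up the model type in a pre-configured correspondence table; the concrete parameter keys and type labels are assumptions for illustration, not values defined by this disclosure.

```python
# Hypothetical pre-configured correspondence between a first model parameter
# (here, a framework tag read from the model file) and the model type.
PARAM_TO_MODEL_TYPE = {
    "sklearn": "first_type",
    "pytorch": "second_type",
    "xgboost": "third_type",
    "megatron": "fourth_type",
}


def determine_model_type(first_model_parameter: str) -> str:
    """Query the pre-configured correspondence with the extracted first model parameter."""
    try:
        return PARAM_TO_MODEL_TYPE[first_model_parameter]
    except KeyError as exc:
        raise ValueError(f"no model type configured for {first_model_parameter!r}") from exc
```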
Optionally, if the model type includes a first type, the parsing unit 1008 is specifically configured to:
Inquiring to obtain second identification information of the initial model according to the corresponding relation between the to-be-processed model and the initial model and the first identification information;
Inquiring whether model structure information stored in association with the second identification information exists in the service platform according to the second identification information;
If the model structure information stored in association with the second identification information exists, the model structure information is read;
Extracting model parameter information from the model file;
and analyzing the model parameter information and the model structure information to obtain the model analysis data.
Optionally, if the model type includes a second type, the parsing unit 1008 is specifically configured to:
Performing format conversion processing on the model file to obtain a first model file with a first preset format;
Extracting model parameter information from the first model file;
Performing deserialization processing according to the model parameter information to obtain a second model file;
And analyzing according to the model structure information in the first model file and the second model file to obtain the model analysis data.
Optionally, if the model type includes a third type, the parsing unit 1008 is specifically configured to:
positioning a root node in the model file and acquiring first attribute information of the root node;
In the model file, detecting at least one child node which has a direct connection relation or an indirect connection relation with the root node according to the root node, and acquiring second attribute information of each child node;
And generating the model analysis data according to the root node, each child node, the first attribute information and the second attribute information.
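A minimal sketch of assembling the root node, the child nodes, and their attribute information into model analysis data is given below; the TreeNode structure and the nodes/edges output shape are illustrative assumptions about how a renderer might consume the data.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TreeNode:
    node_id: int
    attributes: dict                       # e.g. split feature, threshold, or leaf value
    children: List["TreeNode"] = field(default_factory=list)


def build_analysis_data(root: TreeNode) -> dict:
    """Walk from the located root node to every directly or indirectly connected
    child node and assemble the attribute information into model analysis data."""
    nodes, edges = [], []
    stack = [root]
    while stack:
        node = stack.pop()
        nodes.append({"id": node.node_id, **node.attributes})
        for child in node.children:
            edges.append({"from": node.node_id, "to": child.node_id})
            stack.append(child)
    return {"nodes": nodes, "edges": edges}
```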
Optionally, if the model type includes a fourth type, the parsing unit 1008 is specifically configured to:
acquiring a configuration file of the model to be processed;
reading configuration information of model training of a plurality of sub-models in the model to be processed from the configuration file;
and analyzing the model file according to the configuration information to obtain the model analysis data.
Optionally, the model visualization apparatus 1000 further includes:
and the storage unit is used for storing the model analysis data and the first identification information in the service platform in a correlated way.
Optionally, the storage unit is further configured to:
storing a visual result meeting a preset format in the service platform;
the model visualization apparatus 1000 further includes:
And the establishing unit is used for establishing the association relation between the visualization result and the first identification information.
Optionally, the acquiring unit 1004 is further configured to:
responding to a visualization request of a model to be processed, and acquiring first identification information of the model to be processed;
the query unit 1002 is further configured to:
inquiring whether a visual result corresponding to the first identification information exists in the service platform according to the first identification information and the association relation;
the model visualization apparatus 1000 further includes:
and the output unit is used for outputting the visual result if the visual result corresponding to the first identification information exists.
Optionally, the query unit 1002 is further configured to:
if the visualization result corresponding to the first identification information does not exist, inquiring, in the service platform, model analysis data corresponding to the first identification information according to the first identification information;
The rendering unit 1010 is further configured to:
rendering the model analysis data to obtain a visual result of the model to be processed.
The model visualization device provided by the embodiment of the application comprises the following components: the query unit is used for querying whether model analysis data stored in association with the first identification information exists in the service platform according to the first identification information of the to-be-processed model; the acquisition unit is used for acquiring a model file of the model to be processed if the model analysis data does not exist; the determining unit is used for determining the model type of the model to be processed according to the model file; the analysis unit is used for analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data; and the rendering unit is used for rendering the model analysis data to obtain a visual result of the model to be processed. On the one hand, whether the model analysis data which is stored in association with the first identification information exists or not is inquired through the first identification information of the to-be-processed model on the service platform, and the analysis processing of the model file is executed under the condition that the model analysis data does not exist on the service platform, so that whether the reusable model analysis data exists or not can be determined, repeated analysis work is reduced, and the visualization efficiency of the model is improved; on the other hand, under the condition that the model types of the models to be processed are different, different model analysis modes can be adopted to conduct targeted analysis on the models to be processed, so that the model analysis efficiency is improved, and the model visualization efficiency is further improved.
Corresponding to the above-described model visualization method, based on the same technical concept, the embodiment of the present application further provides an electronic device, where the electronic device is configured to execute the above-provided model visualization method, and fig. 11 is a schematic structural diagram of an electronic device provided by the embodiment of the present application.
As shown in fig. 11, the electronic device may differ considerably depending on configuration or performance, and may include one or more processors 1101 and a memory 1102, where the memory 1102 may store one or more application programs or data. The memory 1102 may be transient storage or persistent storage. The application programs stored in the memory 1102 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the electronic device. Still further, the processor 1101 may be arranged to communicate with the memory 1102 and execute, on the electronic device, a series of computer-executable instructions in the memory 1102. The electronic device may also include one or more power supplies 1103, one or more wired or wireless network interfaces 1104, one or more input/output interfaces 1105, one or more keyboards 1106, and the like.
In one particular embodiment, an electronic device includes a memory, and one or more programs, where the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the electronic device, and execution of the one or more programs by one or more processors includes instructions for:
Inquiring whether model analysis data stored in association with the first identification information exists in a service platform according to the first identification information of the to-be-processed model;
if the model analysis data does not exist, a model file of the model to be processed is obtained;
determining the model type of the model to be processed according to the model file;
analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data;
rendering the model analysis data to obtain a visual result of the model to be processed.
An embodiment of a computer-readable storage medium provided in the present specification is as follows:
Corresponding to the above-described model visualization method, the embodiment of the application further provides a computer readable storage medium based on the same technical concept.
The computer readable storage medium provided in this embodiment is configured to store computer executable instructions, where the computer executable instructions when executed by a processor may implement the following procedures:
Inquiring whether model analysis data stored in association with the first identification information exists in a service platform according to the first identification information of the to-be-processed model;
if the model analysis data does not exist, a model file of the model to be processed is obtained;
determining the model type of the model to be processed according to the model file;
analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data;
rendering the model analysis data to obtain a visual result of the model to be processed.
It should be noted that, in the present specification, the embodiments related to the computer readable storage medium and the embodiments related to the model visualization method in the present specification are based on the same inventive concept, so that the specific implementation of the embodiments may refer to the implementation of the corresponding method, and the repetition is omitted.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-readable storage media (including, but not limited to, magnetic disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Embodiments of the application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is by way of example only and is not intended to limit the present disclosure. Various modifications and changes may occur to those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. that fall within the spirit and principles of the present document are intended to be included within the scope of the claims of the present document.

Claims (13)

1. A method of visualizing a model, comprising:
Inquiring whether model analysis data stored in association with the first identification information exists in a service platform according to the first identification information of the to-be-processed model;
if the model analysis data does not exist, a model file of the model to be processed is obtained;
determining the model type of the model to be processed according to the model file;
analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data;
rendering the model analysis data to obtain a visual result of the model to be processed.
2. The method according to claim 1, wherein said determining a model type of the model to be processed from the model file comprises:
Extracting a first model parameter from the model file;
And carrying out query processing in a corresponding relation between the pre-configured model parameters and the model types according to the first model parameters to obtain the model types of the model to be processed.
3. The method according to claim 2, wherein if the model type includes a first type, the analyzing the model file according to the model analysis mode corresponding to the model type to obtain model analysis data includes:
Inquiring to obtain second identification information of the initial model according to the corresponding relation between the to-be-processed model and the initial model and the first identification information;
Inquiring whether model structure information stored in association with the second identification information exists in the service platform according to the second identification information;
If the model structure information stored in association with the second identification information exists, the model structure information is read;
Extracting model parameter information from the model file;
and analyzing the model parameter information and the model structure information to obtain the model analysis data.
4. The method according to claim 2, wherein if the model type includes a second type, the analyzing the model file according to the model analysis mode corresponding to the model type to obtain model analysis data includes:
Performing format conversion processing on the model file to obtain a first model file with a first preset format;
Extracting model parameter information from the first model file;
Performing deserialization processing on the model parameter information to obtain a second model file;
and analyzing the model structure information in the first model file and the second model file to obtain the model analysis data.
5. The method according to claim 2, wherein if the model type includes a third type, the analyzing the model file according to the model analysis mode corresponding to the model type to obtain model analysis data includes:
positioning a root node in the model file and acquiring first attribute information of the root node;
In the model file, detecting at least one child node which has a direct connection relation or an indirect connection relation with the root node according to the root node, and acquiring second attribute information of each child node;
And generating the model analysis data according to the root node, each child node, the first attribute information and the second attribute information.
6. The method according to claim 2, wherein if the model type includes a fourth type, the analyzing the model file according to the model analysis mode corresponding to the model type to obtain model analysis data includes:
acquiring a configuration file of the model to be processed;
reading configuration information of model training of a plurality of sub-models in the model to be processed from the configuration file;
and analyzing the model file according to the configuration information to obtain the model analysis data.
7. The method as recited in claim 1, further comprising:
And storing the model analysis data and the first identification information in the service platform in a correlated way.
8. The method according to claim 7, wherein after rendering the model analysis data to obtain the visualization result of the model to be processed, the method further comprises:
storing a visual result meeting a preset format in the service platform;
And establishing an association relation between the visualization result and the first identification information.
9. The method as recited in claim 8, further comprising:
responding to a visualization request of a model to be processed, and acquiring first identification information of the model to be processed;
inquiring whether a visual result corresponding to the first identification information exists in the service platform according to the first identification information and the association relation;
and if the visualization result corresponding to the first identification information exists, outputting the visualization result.
10. The method as recited in claim 9, further comprising:
if the visualization result corresponding to the first identification information does not exist, inquiring, in the service platform, model analysis data corresponding to the first identification information according to the first identification information;
rendering the model analysis data to obtain a visual result of the model to be processed.
11. A model visualization apparatus, comprising:
The query unit is used for querying whether model analysis data stored in association with the first identification information exists in the service platform according to the first identification information of the to-be-processed model;
the obtaining unit is used for obtaining a model file of the model to be processed if the model analysis data does not exist;
the determining unit is used for determining the model type of the model to be processed according to the model file;
The analysis unit is used for analyzing the model file according to a model analysis mode corresponding to the model type to obtain model analysis data;
And the rendering unit is used for rendering the model analysis data to obtain a visual result of the model to be processed.
12. An electronic device, the device comprising:
A processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to perform the model visualization method of any of claims 1-10.
13. A computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the model visualization method of any of claims 1-10.