CN114492321A - Neural network model generation method, device and storage medium based on XML - Google Patents

Neural network model generation method, device and storage medium based on XML

Info

Publication number
CN114492321A
Authority
CN
China
Prior art keywords
sub
neural network
xml
network model
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111673232.XA
Other languages
Chinese (zh)
Inventor
高嘉欣
廖名学
晁永越
李光耀
梁媛媛
吕品
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202111673232.XA priority Critical patent/CN114492321A/en
Publication of CN114492321A publication Critical patent/CN114492321A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/14Tree-structured documents
    • G06F40/143Markup, e.g. Standard Generalized Markup Language [SGML] or Document Type Definition [DTD]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/221Parsing markup language streams
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks


Abstract

The invention provides an XML-based method, device and storage medium for generating a neural network model. The method comprises: acquiring an XML (eXtensible Markup Language) instance text to be parsed, wherein a deep learning framework attribute and its attribute value are defined in the XML instance text; and parsing the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value. With the method, device and storage medium provided by the invention, only the deep learning framework attribute value needs to be modified to target a different deep learning framework, so the XML instance text is compatible with multiple deep learning frameworks, and a user does not need to design, implement and debug a framework-specific model for each deep learning framework, which greatly reduces the workload of generating neural network models.

Description

Neural network model generation method, device and storage medium based on XML
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an XML-based neural network model generation method, device and storage medium.
Background
With the rapid development of deep learning technology, deep learning frameworks such as TensorFlow, PyTorch and PaddlePaddle continue to emerge and evolve. These frameworks differ substantially in design patterns, features, usage and underlying support libraries.
Because of the differences between deep learning frameworks, researchers and engineers must invest a large amount of time learning the details of each framework before they can generate the corresponding neural network models, so model generation is inefficient. The differences also make model migration difficult: models and algorithms developed on one deep learning framework are not compatible with, and cannot be migrated to, another, which severely restricts the migration efficiency and shareability of models and algorithms. Moreover, because the frameworks differ so greatly, models and algorithms of the same type perform inconsistently across frameworks, which seriously affects fair evaluation in fields such as scientific research and industrial production.
In summary, neural network models generated with the current mainstream deep learning frameworks are poorly compatible with one another, which makes model migration difficult; as a result, a user must design, implement and debug a specific model for each deep learning framework, greatly increasing the workload of generating neural network models.
Disclosure of Invention
The invention provides an XML (eXtensible Markup Language)-based method, device and storage medium for generating a neural network model, to overcome the defect in the prior art that generating neural network models requires a large amount of work.
The invention provides an XML-based neural network model generation method, comprising:
acquiring an XML instance text to be parsed, wherein a deep learning framework attribute and a deep learning framework attribute value are defined in the XML instance text; and
parsing the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value.
According to the XML-based neural network model generation method provided by the invention, parsing the XML instance text to generate the target neural network model corresponding to the deep learning framework attribute value comprises:
initializing an initial neural network model based on the attributes of the root element in the XML instance text;
determining the first target sub-element currently to be parsed in the root element, and invoking the parsing algorithm corresponding to the type of the first target sub-element to parse it and update the initial neural network model; and
returning to the step of determining the first target sub-element currently to be parsed in the root element until all sub-elements in the root element have been parsed, and taking the most recently updated initial neural network model as the target neural network model.
According to the XML-based neural network model generation method provided by the invention, the root element comprises a hyper-parameter sub-element, and the parsing algorithm corresponding to the hyper-parameter sub-element executes the following step:
reading the attribute values of the hyper-parameter attributes in the hyper-parameter sub-element, and updating the initial neural network model based on the attribute values read.
According to the XML-based neural network model generation method provided by the invention, the root element comprises network layer sub-elements, and the parsing algorithm corresponding to a network layer sub-element executes the following steps:
reading the attribute values of the network layer attributes in the network layer sub-element, and constructing the target neural network layer to be added based on the attribute values read;
adding the target neural network layer to the initial neural network model.
The root element further comprises element-wise operation layer sub-elements, and the parsing algorithm corresponding to an element-wise operation layer sub-element executes the following steps:
reading the attribute values of the element-wise operation layer attributes in the element-wise operation layer sub-element, and constructing the target element-wise operation layer to be added based on the attribute values read;
adding the target element-wise operation layer to the initial neural network model.
According to the XML-based neural network model generation method provided by the invention, the root element comprises block sub-elements, and the parsing algorithm corresponding to a block sub-element executes the following steps:
initializing an initial block based on the attributes of the block sub-element;
determining the second target sub-element currently to be parsed within the block sub-element, and invoking the parsing algorithm corresponding to the type of the second target sub-element to parse it and update the initial block; and
returning to the step of determining the second target sub-element currently to be parsed within the block sub-element until all sub-elements in the block sub-element have been parsed, and adding the most recently updated initial block to the initial neural network model.
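The recursive block-parsing steps above can be sketched in Python. The patent gives no code, so this is an illustrative, framework-agnostic sketch: layers are held as plain dicts rather than actual framework layers, and the tag names `layer`, `block` and `vertex` are taken from the examples elsewhere in this description.

```python
import xml.etree.ElementTree as ET

def parse_block(block_elem):
    """Recursively parse a <block> sub-element: initialize an initial block
    from the block's own attributes, parse each child sub-element in document
    order with the parser matching its type, and return the fully updated
    block. Layers are kept as plain dicts so the sketch stays
    framework-agnostic."""
    block = {"kind": "block", "name": block_elem.get("name"), "children": []}
    for child in block_elem:
        if child.tag == "layer":              # network layer sub-element
            block["children"].append(dict(child.attrib))
        elif child.tag == "block":            # nested blocks recurse
            block["children"].append(parse_block(child))
        elif child.tag == "vertex":           # element-wise operation layer
            block["children"].append(dict(child.attrib))
    return block

# Example: a block containing one convolution layer and one nested block.
xml_text = ('<block name="vgg">'
            '<layer type="Convolution" out="64"/>'
            '<block name="inner"><layer type="Dropout" rate="0.1"/></block>'
            '</block>')
vgg_block = parse_block(ET.fromstring(xml_text))
```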
According to the XML-based neural network model generation method provided by the invention, the deep learning framework attribute is defined in the root element; and
initializing an initial neural network model based on the attributes of the root element in the XML instance text comprises:
initializing an initial neural network model corresponding to the deep learning framework attribute value, based on the deep learning framework attribute value of the root element.
According to the XML-based neural network model generation method provided by the invention, the XML instance text comprises a root element; the root element comprises hyper-parameter sub-elements, network layer sub-elements, element-wise operation layer sub-elements, block sub-elements and data set sub-elements; and a block sub-element comprises network layer sub-elements and element-wise operation layer sub-elements.
According to the XML-based neural network model generation method provided by the invention, acquiring the XML instance text to be parsed comprises:
acquiring an input XML text, and validating the XML text against a preset standardized definition specification;
if the XML text conforms to the standardized definition specification, taking the XML text as the XML instance text;
if the XML text does not conform to the standardized definition specification, issuing an error prompt, the error prompt prompting the user to modify the XML text; and
returning to the step of acquiring the input XML text until the modified XML text conforms to the standardized definition specification.
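The acquire-and-validate loop above can be sketched as follows. This is a minimal illustration using Python's standard `xml.etree.ElementTree`, in which the "standardized definition specification" is reduced to a well-formedness check plus a hypothetical required root tag; the patent's actual specification is richer than this.

```python
import xml.etree.ElementTree as ET

def validate_xml_text(xml_text, required_root="model"):
    """Return (ok, message). A stand-in for the patent's standardized
    definition specification: here we only verify that the text is
    well-formed XML and that the root element has the expected tag."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return False, f"error: not well-formed XML ({exc})"
    if root.tag != required_root:
        return False, f"error: root element must be <{required_root}>"
    return True, "ok"

def acquire_instance_text(read_input):
    """Loop until the supplied text passes validation, mirroring the
    prompt-modify-resubmit flow of the method."""
    while True:
        text = read_input()
        ok, msg = validate_xml_text(text)
        if ok:
            return text
        print(msg)  # error prompt asking the user to modify the XML text
```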
The invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any of the XML-based neural network model generation methods described above.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the XML-based neural network model generation method as described in any one of the above.
The present invention also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method for generating an XML-based neural network model as described in any one of the above.
The invention provides an XML-based neural network model generation method, device and storage medium, which acquire an XML instance text to be parsed, wherein a deep learning framework attribute and its attribute value are defined in the XML instance text, and parse the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value. Because the deep learning framework attribute and its value are defined in the XML instance text, parsing the text generates a neural network model for the specified deep learning framework; on this basis, only the deep learning framework attribute value needs to be modified to target a different deep learning framework.
Drawings
In order to illustrate the technical solutions of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a first schematic flow chart of the XML-based neural network model generation method provided by the invention;
FIG. 2 is a second schematic flow chart of the XML-based neural network model generation method provided by the invention;
FIG. 3 is a schematic structural diagram of an electronic device provided by the invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
With the rapid development of deep learning technology, deep learning frameworks such as TensorFlow, PyTorch and PaddlePaddle continue to emerge and evolve. These frameworks differ substantially in design patterns, features, usage and underlying support libraries. For example, early versions of TensorFlow build a neural network as a static graph; a static computation graph cannot be modified at runtime and usually runs faster, but program debugging is complex and not user-friendly. In PyTorch, by contrast, a user defines and runs a dynamic computation graph at the same time, which is more convenient for programming and debugging, but its dynamic nature makes the model run slightly slower.
Because of the differences between deep learning frameworks, researchers and engineers must invest a large amount of time learning the details of each framework before they can generate the corresponding neural network models, so model generation is inefficient. The differences also make model migration difficult: models and algorithms developed on one deep learning framework are not compatible with, and cannot be migrated to, another, which severely restricts the migration efficiency and shareability of models and algorithms. Moreover, because the frameworks differ so greatly, models and algorithms of the same type perform inconsistently across frameworks, which seriously affects fair evaluation in fields such as scientific research and industrial production.
In summary, neural network models generated with the current mainstream deep learning frameworks are poorly compatible with one another, which makes model migration difficult; as a result, a user must design, implement and debug a specific model for each deep learning framework, which greatly increases the workload of generating neural network models and reduces generation efficiency.
To address the above problems, the present invention provides an XML-based neural network model generation method. FIG. 1 is a first schematic flow chart of the method; as shown in FIG. 1, the method comprises:
step 110, obtaining an XML instance text to be analyzed, wherein the XML instance text defines a deep learning frame attribute and a deep learning frame attribute value thereof.
Here, the XML instance text is the text that needs to be parsed, that is, decoded. Specifically, parsing the XML instance text converts it into a neural network model.
The XML instance text can be written by a user; that is, the user writes the corresponding XML text according to the standardized definition specification and his or her own requirements. The standardized definition specification assists and constrains the user in writing the XML text.
Here, the deep learning framework attribute specifies which deep learning framework the model is generated for. For example, if the deep learning framework attribute is platform and its value is "Pytorch", the model is generated for the PyTorch deep learning framework.
In a specific embodiment, the deep learning framework attribute and its value may be defined in the root element of the XML instance text. For example, if the root element of the XML instance text is <model>, the deep learning framework attribute is platform, and the attribute value is Pytorch, the root element is written as <model platform="Pytorch">.
Specifically, the XML instance text includes information such as the network structure and network parameters. More specifically, the neural network components supported by the XML instance text include one or more of: convolutional neural networks, pooling layers, padding layers, batch normalization layers, Dropout layers, upsampling layers, residual connections, various element-wise operations, various tensor operations (e.g., Scatter, Unsqueeze, Reshape), various mathematical operations, as well as attention modules, activation function modules, loss function modules, parameter initialization modules, optimizer modules, evaluation metric modules, and so on.
The convolutional neural network may be a 2-dimensional convolutional neural network, a 3-dimensional convolutional neural network, or a convolutional neural network of another dimensionality, which is not limited here.
For the attention module, the attention mechanisms provided include, but are not limited to: additive attention, dot-product attention, fully connected attention, and so on.
The activation function module comprises all activation function layers supported by the various deep learning frameworks, for example all activation function layers supported by TensorFlow or PyTorch; an activation function layer may be a ReLU, Sigmoid or Softmax activation function layer, among others.
The parameter initialization module supports all built-in parameter initialization methods of the various deep learning frameworks, for example all built-in methods of TensorFlow or PyTorch, including random uniform initialization, random normal initialization, orthogonal initialization, Xavier initialization and the like. Further, fine-grained initialization may be provided on top of this module; for example, a user may specify the mean and variance of random normal initialization.
The optimizer module provides all optimizers supported by the various deep learning frameworks, for example all optimizers supported by TensorFlow or PyTorch, including the Stochastic Gradient Descent (SGD) optimizer, the Momentum optimizer, the Adam optimizer and the like. Further, the optimizer module may provide fine-grained control of the learning rate; for example, a user may specify an initial learning rate, a minimum learning rate, a learning rate decay rate, and so on.
The evaluation metric module supports metrics such as accuracy, precision, recall and F1 score for classification tasks; the mAP metric for object detection tasks; and metrics such as pixel accuracy and IoU for image segmentation tasks. Of course, other tasks may use more or fewer evaluation metrics, which are not limited here.
In a specific embodiment, each neural network component must conform to the standardized definition specification and be defined at the corresponding position in the XML instance text.
A convolutional neural network can be defined by a network layer sub-element, in which the output size of the convolutional neural network, its name, its convolution kernel size, its rate, and whether it can be reused may be defined.
For example, if the network layer sub-element is <layer>, the attribute value of its type attribute is set to Convolution, the output size attribute is out, the name attribute is name, the convolution kernel size attribute is kernel, and the reusability attribute is reuse, the following definitions may be used:
<layer type="Convolution" out="64" name="conv1_1"/>
<layer type="Convolution" out="1024" kernel="1,1" name="conv7"/>
<layer type="Convolution" out="1024" rate="6" name="conv6"/>
<layer type="Convolution" out="[0]" reuse="True" name="cls_pred"/>
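As an illustration of how such a <layer type="Convolution"> element might be turned into constructor arguments for a concrete framework layer (e.g. `torch.nn.Conv2d`), the sketch below reads the attributes named above. Treating `rate` as a dilation rate, and the default values, are our assumptions, not details given in the patent.

```python
import xml.etree.ElementTree as ET

def parse_conv_layer(layer_elem, in_channels):
    """Turn a <layer type="Convolution"/> element into keyword arguments for
    a 2-D convolution constructor. Attribute names follow the XML examples
    above; defaults and the dilation interpretation of 'rate' are
    illustrative assumptions."""
    assert layer_elem.get("type") == "Convolution"
    kernel = layer_elem.get("kernel", "3,3")           # e.g. "1,1" -> (1, 1)
    return {
        "in_channels": in_channels,
        "out_channels": int(layer_elem.get("out")),
        "kernel_size": tuple(int(k) for k in kernel.split(",")),
        "dilation": int(layer_elem.get("rate", "1")),  # 'rate' read as dilation
    }

elem = ET.fromstring('<layer type="Convolution" out="1024" kernel="1,1" name="conv7"/>')
kwargs = parse_conv_layer(elem, in_channels=512)
```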
Here, the network layer sub-element corresponding to the convolutional neural network may be a sub-element of the root element of the XML instance text, or a sub-element of a block sub-element of the root element.
The pooling layer may be defined by a network layer sub-element, in which the pooling kernel size, the name of the pooling layer, the padding mode, the pooling algorithm (max-pooling or average-pooling), the stride of the pooling layer, and so on may be defined.
For example, if the network layer sub-element is <layer>, the attribute value of its type attribute is set to Subsampling, the pooling kernel size attribute is kernel, the name attribute is name, the padding mode attribute is mode, the pooling algorithm attribute is algorithm, and the stride attribute is stride, the following definitions may be used:
<layer type="Subsampling" kernel="2,2" stride="2,2" mode="SAME" algorithm="MAX" name="pool2"/>
<layer type="Subsampling" kernel="3,3" mode="SAME" algorithm="MAX" name="pool5"/>
Here, the network layer sub-element corresponding to the pooling layer may be a sub-element of the root element in the XML instance text, or a sub-element of a block sub-element of the root element.
A Dropout layer may be defined by a network layer sub-element in which a drop probability of the Dropout layer, etc. may be defined.
For example, when the network layer sub-element is <layer>, the attribute value of its type attribute is Dropout, the drop probability attribute is rate, and the name attribute is name, the following definitions may be used:
<layer type="Dropout" rate="0.1"/>
<layer type="Dropout" rate="0.1" name="dropout2"/>
Here, the network layer sub-element corresponding to the Dropout layer may be a sub-element of the root element in the XML instance text, or a sub-element of a block sub-element of the root element.
The optimizer module may be defined in the hyper-parameter sub-element; that is, an optimizer sub-element is defined within the hyper-parameter sub-element. The optimizer sub-element may be <updater>, and the optimizer type may be defined in it, the optimizer type attribute being type. For example, if the hyper-parameter sub-element is <parameters>, the following may be defined:
<parameters>
<updater type="Adam"/>
</parameters>
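A parsing routine for this optimizer sub-element might look as follows. The `OPTIMIZERS` mapping and the constructor-name strings are illustrative assumptions; the optimizer types mirror those listed earlier in this description.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from the <updater type="..."/> value to a framework
# optimizer constructor; names here are placeholders, not a definitive API.
OPTIMIZERS = {
    "SGD": "torch.optim.SGD",
    "Momentum": "torch.optim.SGD",  # with a momentum argument
    "Adam": "torch.optim.Adam",
}

def parse_updater(parameters_elem):
    """Read the optimizer type from the <updater> child of <parameters> and
    check that it is one of the supported types."""
    updater = parameters_elem.find("updater")
    if updater is None:
        raise ValueError("no <updater> defined in <parameters>")
    opt_type = updater.get("type")
    if opt_type not in OPTIMIZERS:
        raise ValueError(f"unsupported optimizer: {opt_type}")
    return opt_type

params = ET.fromstring('<parameters><updater type="Adam"/></parameters>')
opt = parse_updater(params)
```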
In addition, more network structures or network parameters may be defined in the XML instance text, which are not described further here.
Step 120: parse the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value.
Specifically, the XML instance text is parsed by a preset parsing algorithm and converted into a computation graph instance, which is then further converted into the target neural network model corresponding to the deep learning framework attribute value. The execution of the preset parsing algorithm is described in the following embodiments and is not detailed here.
It can be understood that, by parsing the deep learning framework attribute value in the XML instance text, the target deep learning framework can be determined, and the target neural network model corresponding to that attribute value is then generated.
Here, the target neural network model is a trainable neural network model. For example, if the deep learning framework attribute value is Pytorch, a trainable PyTorch neural network model is generated.
The XML-based neural network model generation method provided by the embodiment of the invention acquires an XML instance text to be parsed, in which a deep learning framework attribute and its attribute value are defined, and parses the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value. Because the deep learning framework attribute and its value are defined in the XML instance text, parsing the text generates a neural network model for the specified deep learning framework; on this basis, only the deep learning framework attribute value needs to be modified to target a different deep learning framework.
Based on the above embodiment, FIG. 2 is a second schematic flow chart of the XML-based neural network model generation method provided by the invention. As shown in FIG. 2, step 120 comprises:
Step 121: initialize an initial neural network model based on the attributes of the root element in the XML instance text.
Specifically, when the XML instance text is parsed, the attributes of the root element are parsed first, and the initial neural network model is initialized from the result; as the sub-elements of the root element are then parsed in turn, the initial neural network model is continuously updated until all sub-elements have been parsed, and the most recently updated initial neural network model is taken as the finally generated target neural network model.
Here, the attributes of the root element may include, but are not limited to, one or more of the following: the deep learning framework attribute platform, the back-propagation attribute backpropagate, the model class attribute type, the pre-training attribute pretrain, and so on.
The platform attribute specifies the target deep learning framework, i.e., it is the deep learning framework attribute. The backpropagate attribute specifies whether to back-propagate. The type attribute specifies the class of the generated neural network model. The pretrain attribute specifies whether to pre-train.
For example, if the root element is <model>, it may be defined as follows:
<model type="SSD" pretrain="false" backpropagate="true" platform="Pytorch">
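Reading these root-element attributes can be sketched as follows; the boolean conversion and the default values are illustrative assumptions rather than details fixed by the patent.

```python
import xml.etree.ElementTree as ET

def parse_root_attributes(root):
    """Extract the root-element attributes named in the description
    (platform, type, pretrain, backpropagate). Defaults and the string-to-
    boolean conversion are illustrative assumptions."""
    return {
        "platform": root.get("platform"),             # target framework
        "type": root.get("type"),                     # model class, e.g. SSD
        "pretrain": root.get("pretrain", "false") == "true",
        "backpropagate": root.get("backpropagate", "true") == "true",
    }

root = ET.fromstring(
    '<model type="SSD" pretrain="false" backpropagate="true" platform="Pytorch"/>')
attrs = parse_root_attributes(root)
```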
In one embodiment, the deep learning framework attribute is defined in the root element, and step 121 comprises:
initializing an initial neural network model corresponding to the deep learning framework attribute value, based on the deep learning framework attribute value of the root element.
For example, if the root element of the XML instance text is <model>, the deep learning framework attribute is platform, and its value is Pytorch, the root element is written as <model platform="Pytorch">. On this basis, when the attributes of the root element are parsed, an initial neural network model corresponding to PyTorch is initialized from the result, and the target neural network model obtained by continuously updating this initial model is likewise a PyTorch neural network model.
Step 122: determine the first target sub-element currently to be parsed in the root element, and based on its type, invoke the corresponding parsing algorithm to parse the first target sub-element and update the initial neural network model.
Specifically, the root element may include, but is not limited to, one or more of the following: hyper-parameter sub-elements, network layer sub-elements, element-wise operation layer sub-elements, block sub-elements, data set sub-elements, and so on.
The hyper-parameter sub-element is used to set a hyper-parameter. For example, the hyper-parameter sub-element is < parameters >, which may be defined as
<parameters>
<updater type="Adam"/>
</parameters>
The network layer sub-elements are used to build the main components of the neural network model, and may be neural network layers such as a recurrent neural network (RNN), a convolutional neural network (CNN), a fully connected (Dense) layer, a Dropout layer, or a Transformer. For example, the network layer sub-element is <layer>, which may be defined as
<layer type="Convolution" out="64" name="conv1_1"/>
The per-element operation layer sub-elements are used to represent operators that operate on two or more input data, such as concatenation (CONCAT), element-wise addition (ELEMENT_ADD), element-wise multiplication (ELEMENT_MUL), element-wise maximum (ELEMENT_MAX), element-wise mean (ELEMENT_MEAN), and so on. For example, the per-element operation layer sub-element is <vertex>.
The block sub-elements may include network layer sub-elements, per-element operation layer sub-elements, and so on; within a block sub-element, a block type, a block name, an activation function for the block, a block output size, a kernel size for the block, etc. may be defined.
For example, a block sub-element is <block>, the block type attribute is type, the block name attribute is name, the block's activation function attribute is activation, the block output size attribute is out, and the block's kernel size attribute is kernel; in this case, the following definitions may be used:
<block type="VGG16" name="vgg" kernel="3,3" activation="relu"></block>
<block kernel="3,3" out="84,16" parent="vgg/conv4_3" name="output1"></block>
The data set sub-element is used to specify a data set and set related parameters. For example, the data set sub-element is <dataset>.
Furthermore, the dataset sub-element is typically the first sub-element of the root element, and the hyper-parameter sub-element is typically the second sub-element of the root element.
Specifically, if the first target sub-element is a hyper-parameter sub-element, a hyper-parameter analysis algorithm is called to analyze the first target sub-element so as to update the initial neural network model; if the first target sub-element is a network layer sub-element, a network layer analysis algorithm is called; if the first target sub-element is a per-element operation layer sub-element, a per-element operation layer analysis algorithm is called; if the first target sub-element is a block sub-element, a block analysis algorithm is called; and if the first target sub-element is a data set sub-element, a data set analysis algorithm is called, in each case to analyze the first target sub-element so as to update the initial neural network model. For each analysis algorithm, reference is made to the following embodiments, which are not repeated here.
Step 123: returning to the step of determining the first target sub-element to be analyzed currently in the root element until all sub-elements in the root element have been analyzed, and determining the most recently updated initial neural network model as the target neural network model.
Specifically, each sub-element in the root element is analyzed, and the initial neural network model is updated while each sub-element is analyzed, so that it can be understood that analyzing all sub-elements updates the same initial neural network model, that is, the updating effects of each sub-element are superimposed.
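The loop of steps 122 and 123 can be sketched, under the same stand-in assumptions, as a dispatch table mapping each sub-element tag to its analysis function; every function mutates the same model, so the update effects of the sub-elements accumulate:

```python
import xml.etree.ElementTree as ET

# Illustrative per-type analysis functions; each updates the shared model.
def parse_parameters(elem, model):
    model["hyperparams"] = dict(elem.find("updater").attrib)

def parse_layer(elem, model):
    model["layers"].append(dict(elem.attrib))

PARSERS = {"parameters": parse_parameters, "layer": parse_layer}

def build(xml_text):
    root = ET.fromstring(xml_text)
    model = {"layers": []}                    # initial neural network model (stand-in)
    for child in root:                        # steps 122/123: iterate sub-elements
        PARSERS[child.tag](child, model)      # call the matching analysis algorithm
    return model                              # most recently updated = target model

target = build('<model platform="Pytorch">'
               '<parameters><updater type="Adam"/></parameters>'
               '<layer type="Convolution" out="64" name="conv1_1"/>'
               '</model>')
```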
According to the XML-based neural network model generation method provided by the embodiment of the invention, an initial neural network model is obtained by initialization based on the attributes of the root element in the XML instance text; the first target sub-element to be analyzed currently in the root element is determined, and the corresponding analysis algorithm is called, based on the type of the first target sub-element, to analyze it so as to update the initial neural network model; the process then returns to the step of determining the first target sub-element until all sub-elements in the root element have been analyzed, and the most recently updated initial neural network model is determined as the target neural network model. In this manner, whatever the deep learning framework, the root element and sub-elements of the XML instance text are analyzed in the same way, and no dedicated analysis algorithm needs to be designed for each deep learning framework; algorithm migration is thereby achieved, further reducing the workload of generating neural network models.
Based on any of the above embodiments, in the method, the root element includes a hyper-parameter sub-element, and the analysis algorithm corresponding to the hyper-parameter sub-element executes the following step:
reading the attribute values of the hyper-parameter attributes in the hyper-parameter sub-element, and updating the initial neural network model based on the read attribute values.
Here, the hyper-parameter sub-element is used to set hyper-parameters. Attributes of the hyper-parameter sub-element include, but are not limited to: the random seed (seed), the optimization algorithm, the learning rate, the number of training rounds (epochs), the batch size, the weight type (weight), the optimizer (updater), and so on.
For example, the hyper-parameter sub-element is <parameters>, which may be defined using the attributes listed above. [The full example definition appears as an image in the original publication and is not reproduced here.]
Specifically, the attribute values of the hyper-parameter attributes in the hyper-parameter sub-element are read and analyzed, and the initial neural network model is updated based on the analysis result.
The method provided by the embodiment of the invention provides an algorithm for analyzing hyper-parameter sub-elements, so as to support the analysis of XML instance text.
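A minimal sketch of such a hyper-parameter analysis step follows; the attribute names (seed, lr, epochs, batch) and the converters are assumptions, the point being that XML attribute values arrive as strings and must be converted before updating the model:

```python
import xml.etree.ElementTree as ET

# Hypothetical converters: each known hyper-parameter attribute is parsed
# from its string form before the model is updated.
CONVERT = {"seed": int, "lr": float, "epochs": int, "batch": int}

def parse_hyperparams(elem, model):
    for name, raw in elem.attrib.items():
        value = CONVERT.get(name, str)(raw)   # fall back to the raw string
        model.setdefault("hyperparams", {})[name] = value

model = {}
parse_hyperparams(ET.fromstring('<parameters seed="42" lr="0.001" epochs="10"/>'),
                  model)
```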
Based on any of the above embodiments, in the method, the root element includes a network layer sub-element, and the execution steps of the parsing algorithm corresponding to the network layer sub-element are as follows:
reading the attribute value of the network layer attribute in the network layer sub-element, and constructing a target neural network layer to be added based on the read attribute value of the network layer attribute;
adding the target neural network layer to the initial neural network model.
Specifically, a type attribute value in a network layer sub-element is read to determine the type of a neural network layer to be added, and then attribute values of other network layer attributes in the network layer sub-element are read based on the type of the neural network layer to construct a target neural network layer to be added based on the read attribute values.
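The two-phase reading just described — the type attribute first, then the remaining attributes — can be sketched as a factory lookup; the factory functions and the dict layers below are illustrative stand-ins for real framework layers:

```python
import xml.etree.ElementTree as ET

# Hypothetical factories: the type attribute selects which factory runs and
# therefore which further network layer attributes are read.
def make_convolution(attrs):
    return {"kind": "Convolution", "out": int(attrs["out"]), "name": attrs.get("name")}

def make_dense(attrs):
    return {"kind": "Dense", "out": int(attrs["out"]), "name": attrs.get("name")}

LAYER_FACTORIES = {"Convolution": make_convolution, "Dense": make_dense}

def parse_layer(elem, model):
    layer = LAYER_FACTORIES[elem.attrib["type"]](elem.attrib)
    model["layers"].append(layer)             # add the target layer to the model

model = {"layers": []}
parse_layer(ET.fromstring('<layer type="Convolution" out="64" name="conv1_1"/>'),
            model)
```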
The root element further includes per-element operation layer sub-elements, and the analysis algorithm corresponding to a per-element operation layer sub-element executes the following steps:
reading the attribute values of the per-element operation layer attributes in the per-element operation layer sub-element, and constructing the target per-element operation layer to be added based on the read attribute values;
adding the target per-element operation layer to the initial neural network model.
Specifically, the type attribute value in the per-element operation layer sub-element is read to determine the type of per-element operation to be added; then, based on that type, the attribute values of the other per-element operation layer attributes in the sub-element are read, so that the per-element operation layer to be added is constructed from the read attribute values.
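A sketch of the per-element operation layer parser under the same assumptions: the type attribute selects a two-input element-wise operator, with plain Python lists standing in for tensors:

```python
import xml.etree.ElementTree as ET

# Operator table mirroring the per-element operation types listed above.
OPS = {
    "ELEMENT_ADD": lambda a, b: [x + y for x, y in zip(a, b)],
    "ELEMENT_MUL": lambda a, b: [x * y for x, y in zip(a, b)],
    "ELEMENT_MAX": lambda a, b: [max(x, y) for x, y in zip(a, b)],
    "ELEMENT_MEAN": lambda a, b: [(x + y) / 2 for x, y in zip(a, b)],
}

def parse_vertex(elem, model):
    op = OPS[elem.attrib["type"]]             # which per-element operation to add
    model["layers"].append({"kind": "vertex", "op": op})

model = {"layers": []}
parse_vertex(ET.fromstring('<vertex type="ELEMENT_ADD"/>'), model)
summed = model["layers"][0]["op"]([1, 2, 3], [4, 0, 3])
```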
The method provided by the embodiment of the invention provides algorithms for analyzing network layer sub-elements and per-element operation layer sub-elements, so as to support the analysis of XML instance text.
Based on any of the above embodiments, in the method, the root element includes a block sub-element, and the execution steps of the parsing algorithm corresponding to the block sub-element are as follows:
initializing to obtain an initial block based on the attribute of the block sub-element;
determining a second target sub-element to be analyzed currently in the block sub-elements, and calling a corresponding analysis algorithm to analyze the second target sub-element based on the type of the second target sub-element so as to update the initial block;
and returning to the step of determining the second target sub-element to be analyzed currently in the block sub-elements until all sub-elements in the block sub-elements are analyzed, and adding the latest updated initial block to the initial neural network model.
Specifically, a type attribute value in a block sub-element is read to determine the type of a block to be added, and then attribute values of other block attributes in the block sub-element are read based on the type of the block, so that an initial block is obtained through initialization based on the read attribute values.
Then, if the second target sub-element is a network layer sub-element, a network layer analysis algorithm is called to analyze the second target sub-element so as to update the initial block; and if the second target sub-element is a per-element operation layer sub-element, a per-element operation layer analysis algorithm is called to analyze the second target sub-element so as to update the initial block. For each analysis algorithm, reference is made to the above embodiments, which are not repeated here.
The method provided by the embodiment of the invention provides an algorithm for analyzing the block sub-elements so as to provide support for an analysis algorithm of the XML instance text.
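The block analysis steps above can be sketched as follows: the block is initialized from its own attributes, its children update the block, and the finished block joins the model. Dicts again stand in for real structures, and appending the raw child attributes is a stand-in for calling the layer/vertex parsers:

```python
import xml.etree.ElementTree as ET

def parse_block(elem, model):
    # Initialize the initial block from the block sub-element's attributes.
    block = {"type": elem.get("type"), "name": elem.get("name"), "layers": []}
    for child in elem:                        # the second target sub-elements
        block["layers"].append(dict(child.attrib))  # stand-in for layer/vertex parsers
    model["layers"].append(block)             # add the fully updated block

model = {"layers": []}
parse_block(ET.fromstring('<block type="VGG16" name="vgg">'
                          '<layer type="Convolution" out="64"/></block>'),
            model)
```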
Based on any of the above embodiments, the XML instance text includes a root element; the root element includes a hyper-parameter sub-element, a network layer sub-element, a per-element operation layer sub-element, a block sub-element, and a data set sub-element; and the block sub-element includes a network layer sub-element and a per-element operation layer sub-element.
According to any of the above embodiments, the step 110 includes:
acquiring an input XML text, and verifying the XML text based on a preset standardized definition specification;
if the XML text conforms to the standardized definition specification, determining the XML text as the XML instance text;
if the XML text does not conform to the standardized definition specification, sending error prompt information, wherein the error prompt information is used for prompting the modification of the XML text;
and returning to the step of acquiring the input XML text until the modified XML text conforms to the standardized definition specification.
The standardized definition specification is used to assist and constrain a user in writing the corresponding XML text, that is, in writing the corresponding neural network structure; the user writes the XML text according to the standardized definition specification and his or her requirements for the neural network structure.
The standardized definition specification may be used to check the input XML text, in particular, to check whether the XML text meets the encoding requirements and meets the relevant restrictions defined by the specification according to the standardized definition specification.
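A minimal validation sketch using only the Python standard library: it checks well-formedness plus two of the kinds of restrictions mentioned (root tag, required first sub-element). A real implementation might instead validate against an XML Schema, and the specific rules here are assumptions:

```python
import xml.etree.ElementTree as ET

def validate(xml_text):
    # Well-formedness check: malformed text yields an error prompt.
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return False, f"not well-formed: {exc}"
    if root.tag != "model":
        return False, "root element must be <model>"
    tags = [child.tag for child in root]
    if tags[:1] != ["dataset"]:               # the dataset sub-element is typically first
        return False, "first sub-element should be <dataset>"
    return True, "ok"

ok, msg = validate('<model platform="Pytorch"><dataset/><parameters/></model>')
```

On failure, the returned message plays the role of the error prompt information, pointing the user at what to fix before resubmitting the XML text.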
The standardized definition specifications include, but are not limited to, one or more of the following: a network organization architecture specification, an inclusion element specification, a length specification of an element, a type specification of an element, and the like, which are not specifically limited in the embodiments of the present invention.
It can be appreciated that, by using the standardized definition specification to determine whether the input XML text is problematic, the writing of XML text becomes disciplined rather than arbitrary.
Specifically, the standardized definition specification includes a general information interaction protocol specification corresponding to a text organization structure of an XML text, and also includes some restrictions on elements formulated for a neural network.
In one embodiment, the standardized definition specification is as follows:
the root element may include one or more of the following sub-elements: a hyper-parameter sub-element (e.g. <parameters>), a network layer sub-element (e.g. <layer>), a per-element operation layer sub-element (e.g. <vertex>), a block sub-element (e.g. <block>), and a data set sub-element (e.g. <dataset>), with the element definitions and examples as described above; furthermore, the data set sub-element is typically the first sub-element of the root element, and the hyper-parameter sub-element is typically the second sub-element of the root element.
Here, the error prompt information is used to prompt the user to modify the XML text, that is, the user can modify the XML text according to the error prompt information. The error prompt message may include a location where the XML text does not meet the specification, so that the user may quickly determine where the XML text is wrongly written, thereby speeding up the XML text modification. In addition, after the user finishes modifying, the modified XML text is input again.
The XML-based neural network model generation method provided by the embodiment of the invention acquires the input XML text and verifies it based on a preset standardized definition specification; if the XML text conforms to the standardized definition specification, the XML text is determined as the XML instance text; if the XML text does not conform to the standardized definition specification, error prompt information is sent out to prompt modification of the XML text; and the process returns to the step of acquiring the input XML text until the modified XML text conforms to the standardized definition specification. In this manner, the XML text is verified against the standardized definition specification, ensuring that only XML instance text conforming to the specification is analyzed, so that the target neural network model can be obtained.
The following describes a generation apparatus of a neural network model provided by the present invention, and the generation apparatus of the neural network model described below and the generation method of the neural network model based on XML described above may be referred to in correspondence with each other.
In this embodiment, the apparatus for generating a neural network model includes:
an acquisition module, configured to acquire an XML instance text to be analyzed, wherein a deep learning framework attribute and a deep learning framework attribute value are defined in the XML instance text;
and an analysis module, configured to analyze the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value.
The neural network model generation apparatus provided by the embodiment of the invention acquires an XML instance text to be analyzed, in which a deep learning framework attribute and a deep learning framework attribute value are defined, and analyzes the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value. Because the deep learning framework attribute and its value are defined in the XML instance text, the neural network model corresponding to the defined deep learning framework can be generated when the XML instance text is subsequently analyzed; on this basis, for different deep learning frameworks, only the deep learning framework attribute value needs to be modified.
Based on any of the above embodiments, the parsing module is further configured to:
initializing to obtain an initial neural network model based on the attribute of the root element in the XML instance text;
determining a first target sub-element to be analyzed currently in the root element, and calling a corresponding analysis algorithm to analyze the first target sub-element based on the type of the first target sub-element so as to update the initial neural network model;
and returning to the step of determining the first target sub-element to be analyzed currently in the root element until all sub-elements in the root element are analyzed, and determining the initial neural network model after the last update as the target neural network model.
Based on any of the above embodiments, the root element includes a hyper-parameter sub-element, and the analysis algorithm corresponding to the hyper-parameter sub-element executes the following step:
reading the attribute values of the hyper-parameter attributes in the hyper-parameter sub-element, and updating the initial neural network model based on the read attribute values.
Based on any of the above embodiments, the root element includes a network layer sub-element, and the execution steps of the parsing algorithm corresponding to the network layer sub-element are as follows:
reading the attribute value of the network layer attribute in the network layer sub-element, and constructing a target neural network layer to be added based on the read attribute value of the network layer attribute;
adding the target neural network layer to the initial neural network model;
the root element further includes per-element operation layer sub-elements, and the analysis algorithm corresponding to a per-element operation layer sub-element executes the following steps:
reading the attribute values of the per-element operation layer attributes in the per-element operation layer sub-element, and constructing the target per-element operation layer to be added based on the read attribute values;
adding the target per-element operation layer to the initial neural network model.
Based on any of the above embodiments, the root element includes a block sub-element, and the execution steps of the parsing algorithm corresponding to the block sub-element are as follows:
initializing to obtain an initial block based on the attribute of the block sub-element;
determining a second target sub-element to be analyzed currently in the block sub-elements, and calling a corresponding analysis algorithm to analyze the second target sub-element based on the type of the second target sub-element so as to update the initial block;
and returning to the step of determining the second target sub-element to be analyzed currently in the block sub-elements until all sub-elements in the block sub-elements are analyzed, and adding the latest updated initial block to the initial neural network model.
According to any of the above embodiments, the deep learning framework attribute is defined in the root element;
initializing to obtain an initial neural network model based on the attribute of the root element in the XML instance text, wherein the initializing comprises the following steps:
and initializing, based on the deep learning framework attribute value of the root element, an initial neural network model corresponding to that deep learning framework attribute value.
Based on any of the above embodiments, the XML instance text includes a root element; the root element includes a hyper-parameter sub-element, a network layer sub-element, a per-element operation layer sub-element, a block sub-element, and a data set sub-element; and the block sub-element includes a network layer sub-element and a per-element operation layer sub-element.
Based on any of the above embodiments, the obtaining module is further configured to:
acquiring an input XML text, and verifying the XML text based on a preset standardized definition specification;
if the XML text conforms to the standardized definition specification, determining the XML text as the XML instance text;
if the XML text does not conform to the standardized definition specification, sending error prompt information, wherein the error prompt information is used for prompting the modification of the XML text;
and returning to the step of acquiring the input XML text until the modified XML text conforms to the standardized definition specification.
Fig. 3 illustrates a physical structure diagram of an electronic device. As shown in fig. 3, the electronic device may include: a processor 310, a communication interface 320, a memory 330, and a communication bus 340, wherein the processor 310, the communication interface 320, and the memory 330 communicate with each other via the communication bus 340. The processor 310 may invoke logic instructions in the memory 330 to perform an XML-based neural network model generation method, the method comprising: acquiring an XML instance text to be analyzed, wherein a deep learning framework attribute and a deep learning framework attribute value are defined in the XML instance text; and analyzing the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value.
In addition, the logic instructions in the memory 330 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, the computer program product including a computer program stored on a non-transitory computer-readable storage medium, wherein, when the computer program is executed by a processor, the computer is capable of executing the XML-based neural network model generation method provided by the above methods, the method including: acquiring an XML instance text to be analyzed, wherein a deep learning framework attribute and a deep learning framework attribute value are defined in the XML instance text; and analyzing the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the XML-based neural network model generation method provided by the above methods, the method including: acquiring an XML instance text to be analyzed, wherein a deep learning framework attribute and a deep learning framework attribute value are defined in the XML instance text; and analyzing the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A neural network model generation method based on XML is characterized by comprising the following steps:
acquiring an XML instance text to be analyzed, wherein a deep learning framework attribute and a deep learning framework attribute value are defined in the XML instance text;
and analyzing the XML instance text to generate a target neural network model corresponding to the deep learning framework attribute value.
2. The method according to claim 1, wherein the parsing the XML instance text to generate the target neural network model corresponding to the deep learning framework attribute value includes:
initializing to obtain an initial neural network model based on the attribute of the root element in the XML instance text;
determining a first target sub-element to be analyzed currently in the root element, and calling a corresponding analysis algorithm to analyze the first target sub-element based on the type of the first target sub-element so as to update the initial neural network model;
and returning to the step of determining the first target sub-element to be analyzed currently in the root element until all sub-elements in the root element are analyzed, and determining the initial neural network model after the last update as the target neural network model.
3. The method of generating an XML-based neural network model according to claim 2, wherein the root element includes a hyper-parameter sub-element, and the parsing algorithm corresponding to the hyper-parameter sub-element is executed as follows:
reading the attribute values of the hyper-parameter attributes in the hyper-parameter sub-element, and updating the initial neural network model based on the read attribute values.
4. The method of generating an XML-based neural network model according to claim 2, wherein the root element includes a network layer sub-element, and the parsing algorithm corresponding to the network layer sub-element is executed as follows:
reading the attribute value of the network layer attribute in the network layer sub-element, and constructing a target neural network layer to be added based on the read attribute value of the network layer attribute;
adding the target neural network layer to the initial neural network model;
the root element further includes per-element operation layer sub-elements, and the parsing algorithm corresponding to a per-element operation layer sub-element is executed as follows:
reading the attribute values of the per-element operation layer attributes in the per-element operation layer sub-element, and constructing the target per-element operation layer to be added based on the read attribute values;
adding the target per-element operation layer to the initial neural network model.
5. The method of generating an XML-based neural network model according to claim 2, wherein the root element includes a block sub-element, and the parsing algorithm corresponding to the block sub-element is executed as follows:
initializing to obtain an initial block based on the attribute of the block sub-element;
determining a second target sub-element to be analyzed currently in the block sub-elements, and calling a corresponding analysis algorithm to analyze the second target sub-element based on the type of the second target sub-element so as to update the initial block;
and returning to the step of determining the second target sub-element to be analyzed currently in the block sub-elements until all sub-elements in the block sub-elements are analyzed, and adding the latest updated initial block to the initial neural network model.
6. The XML-based neural network model generation method according to claim 2, wherein a deep learning framework attribute is defined on the root element;
the initializing an initial neural network model based on the attributes of the root element in the XML instance text comprises:
and initializing an initial neural network model corresponding to the deep learning framework attribute value, based on the deep learning framework attribute value of the root element.
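The framework dispatch of claim 6 amounts to branching on one root-element attribute. In this sketch the attribute name (`framework`) and the framework identifiers are illustrative; a real implementation would construct a PyTorch or TensorFlow model object here.

```python
import xml.etree.ElementTree as ET

def init_model(root):
    # Branch on the root element's deep learning framework attribute
    # and return a framework-specific initial model (illustrative names).
    framework = root.get("framework")
    if framework in ("pytorch", "tensorflow"):
        return {"framework": framework, "layers": []}
    raise ValueError(f"unsupported framework: {framework}")

root = ET.fromstring('<network framework="pytorch"></network>')
model = init_model(root)
```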
7. The method according to any one of claims 1 to 6, wherein the XML instance text comprises a root element; the root element comprises a hyper-parameter sub-element, a network layer sub-element, an element-wise operation layer sub-element, a block sub-element, and a dataset sub-element; and the block sub-element comprises a network layer sub-element and an element-wise operation layer sub-element.
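A hypothetical XML instance text exhibiting the hierarchy of claim 7 (all tag and attribute names are invented for illustration, as the patent text does not fix a concrete vocabulary): the root holds hyper-parameter, network layer, element-wise operation layer, block, and dataset sub-elements, and the block nests layer and element-wise sub-elements of its own.

```python
import xml.etree.ElementTree as ET

xml_text = """
<network framework="pytorch">
  <hyperparams lr="0.001" epochs="10"/>
  <layer type="Dense" units="64"/>
  <elementwise op="relu"/>
  <block name="b1">
    <layer type="Dense" units="32"/>
    <elementwise op="add"/>
  </block>
  <dataset path="data.csv"/>
</network>
"""
root = ET.fromstring(xml_text)
tags = [child.tag for child in root]  # direct sub-elements of the root
block_tags = [child.tag for child in root.find("block")]
```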
8. The method according to any one of claims 1 to 6, wherein the obtaining the XML instance text to be parsed comprises:
acquiring an input XML text, and verifying the XML text against a preset standardized definition specification;
if the XML text conforms to the standardized definition specification, determining the XML text as the XML instance text;
if the XML text does not conform to the standardized definition specification, sending an error prompt message, wherein the error prompt message prompts modification of the XML text;
and returning to the step of acquiring the input XML text until the modified XML text conforms to the standardized definition specification.
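The verify-or-prompt step of claim 8 can be sketched as below. This is a simplified stand-in for the "standardized definition specification": a production system would validate against an XSD or DTD, whereas here the check is only well-formedness plus a hypothetical set of required sub-elements.

```python
import xml.etree.ElementTree as ET

REQUIRED_CHILDREN = {"hyperparams", "layer"}  # stand-in for a full schema

def validate(xml_text):
    # Return (conforms, message); on failure the message serves as the
    # error prompt asking the user to modify the XML text.
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False, "XML text is not well-formed; please modify it"
    missing = REQUIRED_CHILDREN - {child.tag for child in root}
    if missing:
        return False, f"missing required sub-elements: {sorted(missing)}"
    return True, "ok"

ok, _ = validate('<network><hyperparams/><layer/></network>')
bad, err = validate('<network><layer/></network>')
```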
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the XML-based neural network model generation method of any one of claims 1 to 8.
10. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor, performs the steps of the XML-based neural network model generation method of any one of claims 1 to 8.
CN202111673232.XA 2021-12-31 2021-12-31 Neural network model generation method, device and storage medium based on XML Pending CN114492321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111673232.XA CN114492321A (en) 2021-12-31 2021-12-31 Neural network model generation method, device and storage medium based on XML


Publications (1)

Publication Number Publication Date
CN114492321A true CN114492321A (en) 2022-05-13

Family

ID=81507326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111673232.XA Pending CN114492321A (en) 2021-12-31 2021-12-31 Neural network model generation method, device and storage medium based on XML

Country Status (1)

Country Link
CN (1) CN114492321A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114911465A (en) * 2022-05-19 2022-08-16 Beijing Baidu Netcom Science and Technology Co., Ltd. Operator generation method, device, equipment and storage medium
CN114911465B (en) * 2022-05-19 2023-01-10 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, device and equipment for generating operator and storage medium

Similar Documents

Publication Publication Date Title
Huang et al. Gamepad: A learning environment for theorem proving
US11681925B2 (en) Techniques for creating, analyzing, and modifying neural networks
US20230229978A1 (en) Debugging correctness issues in training machine learning models
Nelli Python data analytics with Pandas, NumPy, and Matplotlib
WO2021190597A1 (en) Processing method for neural network model, and related device
CN106537333A (en) Systems and methods for a database of software artifacts
US20210081841A1 (en) Visually creating and monitoring machine learning models
US11640539B2 (en) Techniques for visualizing the operation of neural networks using samples of training data
CN112199086A (en) Automatic programming control system, method, device, electronic device and storage medium
Bagnall et al. Certifying the true error: Machine learning in Coq with verified generalization guarantees
TWI826702B (en) Techniques for defining and executing program code specifying neural network architectures
Djurfeldt et al. Efficient generation of connectivity in neuronal networks from simulator-independent descriptions
Paassen et al. Mapping python programs to vectors using recursive neural encodings
US11615321B2 (en) Techniques for modifying the operation of neural networks
US11593076B2 (en) Method for merging architecture data
CN114492321A (en) Neural network model generation method, device and storage medium based on XML
US20240061653A1 (en) Collaborative industrial integrated development and execution environment
CN113966494A (en) System, method and storage medium for supporting graphical programming based on neuron blocks
WO2023107207A1 (en) Automated notebook completion using sequence-to-sequence transformer
US11726775B2 (en) Source code issue assignment using machine learning
Köstler et al. A Scala prototype to generate multigrid solver implementations for different problems and target multi-core platforms
CN112860534A (en) Hardware architecture performance evaluation and performance optimization method and device
Louw et al. Applying recent machine learning approaches to accelerate the algebraic multigrid method for fluid simulations
Bernstein Abstractions for Probabilistic Programming to Support Model Development
Bugnion et al. Scala: applied machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination