CN113537496A - Deep learning model visual construction system and application and design method thereof - Google Patents


Info

Publication number
CN113537496A
CN113537496A (application CN202110632104.4A)
Authority
CN
China
Prior art keywords
model
deep learning
user
module
training
Prior art date
Legal status
Withdrawn
Application number
CN202110632104.4A
Other languages
Chinese (zh)
Inventor
李晖
李一水
周彧
Current Assignee
Guizhou Youlian Borui Technology Co ltd
Original Assignee
Guizhou Youlian Borui Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Youlian Borui Technology Co ltd filed Critical Guizhou Youlian Borui Technology Co ltd
Priority to CN202110632104.4A priority Critical patent/CN113537496A/en
Publication of CN113537496A publication Critical patent/CN113537496A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F8/00 Arrangements for software engineering
                    • G06F8/10 Requirements analysis; Specification techniques
                    • G06F8/20 Software design
                    • G06F8/30 Creation or generation of source code
                        • G06F8/34 Graphical or visual programming
                • G06F18/00 Pattern recognition
                    • G06F18/20 Analysing
                        • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                        • G06F18/24 Classification techniques
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/045 Combinations of networks
                            • G06N3/048 Activation functions
                        • G06N3/08 Learning methods
                        • G06N3/10 Interfaces, programming languages or software development kits, e.g. for simulating neural networks
                            • G06N3/105 Shells for specifying net layout
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00 Image analysis
                    • G06T7/0002 Inspection of images, e.g. flaw detection
                        • G06T7/0004 Industrial image inspection
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                        • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning model visual construction system and an application and design method thereof. The system comprises a user management module and a business function module. The user management module classifies users and assigns permissions. The business function module comprises a data management module, a model definition module, a model training module and a model visualization module: the data management module manages and accesses training data sets, the model definition module defines the model structure and provides common deep learning algorithms for the user, and the model visualization module visualizes the pre-trained model. The system can define, configure and construct a deep learning model visually in a browser Web page, visually display the model's inference and prediction process during deep learning training, capture the output information of each network layer, and dynamically display the flow of data between layers.

Description

Deep learning model visual construction system and application and design method thereof
Technical Field
The invention relates to the technical field of deep learning, and in particular to a deep learning model visual construction system and an application and design method thereof.
Background
In recent years, deep learning has been widely applied in industries such as medicine, manufacturing and finance, bringing new modes of production to traditional industries. Deep learning models have achieved strong performance on many tasks. However, end-to-end training makes a neural network behave like a "black box": its internal working mechanism lacks interpretability, and when a deep learning model is used for inference and prediction, it is often difficult to understand how the model works. This "black box" nature of neural networks has motivated research on deep learning visualization techniques, which mainly use neural network feature visualization methods to analyze and explain the working mechanism of a model.
Today, the popularization and application of deep learning techniques face two problems: first, the development process of a deep learning model is complex, and mainstream deep learning frameworks do not provide a simple, easy-to-use, integrated model construction workflow; second, the construction and prediction processes of a deep learning model are poorly interpretable, and it is difficult to understand the internal working mechanism of a deep neural network.
Disclosure of Invention
The invention mainly aims to provide a deep learning model visualization construction system and an application and design method thereof, in order to solve the problems of the complex development process of deep learning models and the weak interpretability of their construction and prediction processes.
To solve this technical problem, the invention provides a deep learning model visualization construction system, comprising:
a user management module, used for classifying users and assigning permissions;
a business function module, comprising a data management module, a model definition module, a model training module and a model visualization module, wherein the data management module is used for managing and accessing training data sets, the model definition module is used for defining and configuring the model structure visually in a browser Web page and providing common deep learning algorithms for the user, the model training module is used for hyper-parameter setting, model training, model evaluation and model management, and the model visualization module is used for visualizing the pre-trained model.
Optionally, the user management module further comprises an administrator module and a general user module; the administrator module is used by administrators to manage users and their permissions, and the general user module is used for personal login, registration and management of personal information.
Optionally, the model definition module comprises a neural network structure design submodule and a general deep learning algorithm library submodule; the neural network structure design submodule enables the user to visually design the structure of a neural network and configure model parameters in the browser, and the general deep learning algorithm library submodule enables the user to directly invoke classical algorithms in the deep learning field.
Optionally, the model visualization module comprises a network structure visualization submodule and a feature map visualization submodule; the network structure visualization submodule presents the structure of the neural network in 3D and dynamically, and the feature map visualization submodule visually presents the feature maps generated by each network layer.
Optionally, the system further comprises a database, wherein the database comprises a user information table, a model information table, a data set information table and a model training workflow table, which store, respectively, user information, model information, data set information and information relevant to model training.
Optionally, the user information comprises a user number, a user name, a user password and a user level; the model information comprises a model number, a model name, a model storage path, the user number of the model's creator, the model creation time, a model description and a pre-trained model; the data set information comprises a data set number, a data set name, a data set storage path and data set creation information; the information relevant to model training comprises a training workflow number, a user number, the training start and finish times, a model number, a data set number and a log file storage path.
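The four tables described above can be sketched as plain record types. The field names below are illustrative renderings of the listed fields, not the patent's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record types mirroring the four tables; field names are
# illustrative, not taken from the system's real database schema.

@dataclass
class UserInfo:
    user_id: int
    user_name: str
    password_hash: str
    user_level: str          # e.g. "admin" or "general"

@dataclass
class ModelInfo:
    model_id: int
    model_name: str
    storage_path: str
    creator_user_id: int
    created_at: datetime
    description: str
    pretrained: bool

@dataclass
class DatasetInfo:
    dataset_id: int
    dataset_name: str
    storage_path: str
    creation_info: str

@dataclass
class TrainingWorkflow:
    workflow_id: int
    user_id: int
    started_at: datetime
    finished_at: datetime
    model_id: int
    dataset_id: int
    log_path: str
```

In a MySQL deployment each dataclass would correspond to one table, with the numeric `*_id` fields acting as primary and foreign keys.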
Optionally, the database is a MySQL database.
Optionally, the system further comprises a hardware module, wherein the hardware module comprises a CPU, a GPU and a display.
The invention also provides an application of the deep learning model visual construction system to industrial product defect detection, characterized in that it is used for detecting popping-bead defects in filter cigarettes.
The invention also provides a design method of a deep learning model visual construction system, comprising the following steps:
s1, setting functional requirements and non-functional requirements of a system, wherein the functional requirements comprise acquisition of training data, design of a neural network structure, provision of a general algorithm library, creation of a model training task workflow, management of a model and model visualization; the non-functional requirements include security, ease of use, extensibility, and compatibility;
s2, designing the system according to the system requirement, and designing a layered architecture with at least four layers including an interaction layer, a service layer, a data layer and a basic platform layer;
s3, performing definite division of labor for each layer, wherein the basic platform layer is used for providing hardware facilities required by the system to realize functions and computing resources required by model training, the data layer comprises a database and a file system and is used for storing personal information, operation information, data set information and model information of a user, the business layer is used for realizing system functions, the business layer comprises an application service layer, an algorithm library layer and a computing layer, and the interaction layer is used for providing a visual operation mode for the user;
s4, designing a functional module and a database according to the division of labor of each layer;
and S5, realizing each module and the database.
Optionally, the deep learning model visualization construction system adopts a B/S architecture; the development environment is the CentOS 7 operating system, the development tools are Visual Studio Code and PyCharm, and development follows a front-end/back-end separation model.
According to the technical scheme of the invention, a user management module and a business function module are provided to classify users and assign permissions, and the modules within the business function module have a clear division of labor: the data management module manages and accesses training data sets, the model definition module visually defines and configures the model structure in a browser Web page and provides common deep learning algorithms for the user, the model training module performs hyper-parameter setting, model training, model evaluation and model management, and finally the pre-trained model is visualized by the visualization module. The system can visually display the model's inference and prediction process during deep learning training, capture the output information of each network layer, and dynamically display the flow of data between layers.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the structures shown in these drawings without creative effort.
FIG. 1 is a functional block diagram of an embodiment of a deep learning model visualization construction system provided by the present invention;
FIG. 2 is a schematic flow diagram of the neural network structure design submodule of FIG. 1;
FIG. 3 is an architectural diagram of a model visualization module of FIG. 1;
FIG. 4 is a schematic flow diagram of an implementation of the model visualization module of FIG. 1;
FIG. 5 is a diagram of a physical data model of the database in the embodiment of FIG. 1;
FIG. 6 is a network structure diagram of a YOLOv3 Tiny model adopted in an embodiment of the deep learning model visualization construction system provided by the present invention;
FIG. 7 is a sample image of a popped bead according to an embodiment of the application of the deep learning model visualization construction system provided by the present invention;
FIG. 8 is another sample image of a popped bead according to an embodiment of the application of the deep learning model visualization construction system provided in the present invention;
FIG. 9 is a further sample image of a popped bead according to an embodiment of the application of the deep learning model visualization construction system provided in the present invention;
FIG. 10 is a sample image of a popped bead used in an embodiment of the application of the deep learning model visualization construction system provided by the present invention;
FIG. 11 is an image of the test results of the popped bead sample of FIG. 10;
FIG. 12 is a 3D visualization of the YOLOv3 Tiny model of the popped bead sample of FIG. 10;
FIG. 13 is a YOLOv3 Tiny model signature plot visualization of the popped bead sample of FIG. 10;
FIG. 14 is a schematic flow chart of an embodiment of the design method of a deep learning model visualization construction system provided by the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is obvious that the described embodiments are only some, and not all, of the embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without creative effort, fall within the protection scope of the present invention.
It should be noted that, if directional indications (such as up, down, left, right, front and back) are involved in an embodiment of the present invention, they are only used to explain the relative positional relationship and movement of components in a specific posture (as shown in the drawing); if the specific posture changes, the directional indications change accordingly.
In addition, if descriptions such as "first" and "second" appear in an embodiment of the present invention, they are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. The meaning of "and/or" throughout covers three cases: "A and/or B" includes A alone, B alone, or both A and B. Furthermore, technical solutions from different embodiments may be combined with each other, but only insofar as a person skilled in the art can realize the combination; when solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present invention.
The popularization and application of existing deep learning technology face two problems. First, the development process of a deep learning model is complex: mainstream deep learning frameworks do not provide a simple, easy-to-use, integrated model construction workflow, and developing a deep learning model requires a high level of professional knowledge as well as development experience with the corresponding framework and strong programming ability. Second, the construction and prediction processes of a deep learning model are poorly interpretable, and it is difficult to understand the internal working mechanism of a deep neural network. In view of this, the present invention provides a deep learning model visualization construction system. FIG. 1 is a schematic structural diagram of an embodiment of this system, which comprises a user management module and a business function module. The user management module is used for classifying users, assigning permissions, and managing user registration. The business function module comprises four submodules: a data management module, a model definition module, a model training module and a model visualization module. The data management module is mainly responsible for the management and access of training data sets; the model definition module is mainly responsible for defining the model structure, which can be defined and configured visually in a browser Web page, and provides common deep learning algorithms for the user; the model training module provides hyper-parameter setting, model training, model evaluation, model management and related functions; and the model visualization module visualizes the pre-trained model. The model definition, model training and model visualization modules realize the core functions of the system.
It should be noted that hyper-parameter setting concerns the hyper-parameters of the deep learning model, mainly the number of training iterations, the batch size and the learning rate. One complete pass of model training over all the data in the training data set is called an iteration (epoch), and the "number of iterations" defines how many such passes the training comprises. During training, the whole training data set is usually divided into several batches; the "batch size" is the size of each batch, and the model weights are updated once per batch. The learning rate controls the speed of the model weight updates. These hyper-parameters affect the convergence speed, accuracy, robustness and complexity of model training. The system provides the user with a visual hyper-parameter configuration interface, so the hyper-parameters can be set with simple operations. Model training is divided into two modes. For algorithms in the user-defined algorithm library, the model is trained at the browser end: the user's own computing resources are used to train the model directly in the browser environment. For algorithms in the general deep learning algorithm library, the model is trained at the server end; after defining the model, the user can choose to train it on a GPU or a CPU.
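The epoch/batch bookkeeping described above can be stated directly. The following stdlib sketch (function names are ours, not the system's) shows how the number of weight updates follows from the three hyper-parameters:

```python
import math

def updates_per_epoch(num_samples: int, batch_size: int) -> int:
    """One weight update per batch; a partial final batch still counts."""
    return math.ceil(num_samples / batch_size)

def total_updates(num_samples: int, batch_size: int, epochs: int) -> int:
    """Total weight updates over the whole training run."""
    return epochs * updates_per_epoch(num_samples, batch_size)

# Example: 10,000 training samples, batch size 32, 50 epochs.
per_epoch = updates_per_epoch(10_000, 32)      # -> 313
overall = total_updates(10_000, 32, 50)        # -> 15650
```

This is why batch size and epoch count jointly determine how often the learning rate is applied: halving the batch size doubles the number of updates per epoch.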
Before training starts, a suitable loss function and optimization algorithm must be selected. The system provides common loss functions and optimizers, such as the cross-entropy loss, the mean squared error loss, stochastic gradient descent and batch gradient descent, and the user makes the relevant settings through the configuration interface. For commonly used deep learning algorithms, the system provides pre-trained models trained on classical data sets, which the user can use for transfer learning: when training a model, the user can initialize its weights from a pre-trained model, and can also freeze some network layers of the pre-trained model so that only the unfrozen layers are trained. As for model evaluation, during training the loss value and the accuracy are important evaluation indicators that reflect the convergence of training and the performance of the model. The model evaluation function is mainly responsible for monitoring the loss value and accuracy; it dynamically displays their changes in real time as trend graphs, supports displaying the parameters of each network layer as lists, histograms and other charts, and visually presents the state of the model at different training stages, helping the user tune parameters and reach optimal model performance faster. Model management mainly provides storage of trained models and management of model files.
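As a toy illustration of two of the named choices, the cross-entropy loss and a stochastic gradient descent update, here is a pure-Python sketch; this is didactic only, not the system's TensorFlow implementation:

```python
import math

def cross_entropy(probs, one_hot):
    """Cross-entropy loss for one sample: -sum(y * log(p))."""
    eps = 1e-12  # guard against log(0)
    return -sum(y * math.log(p + eps) for p, y in zip(probs, one_hot))

def sgd_step(weights, grads, lr):
    """One stochastic-gradient-descent update: w <- w - lr * g."""
    return [w - lr * g for w, g in zip(weights, grads)]

# A confident, correct prediction gives a small loss...
low = cross_entropy([0.9, 0.05, 0.05], [1, 0, 0])
# ...and a wrong one a large loss, which the optimizer then reduces.
high = cross_entropy([0.1, 0.8, 0.1], [1, 0, 0])
```

The learning rate `lr` here is exactly the hyper-parameter discussed earlier: it scales how far each batch's gradient moves the weights.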
After training finishes, two types of files are generated: one stores the model structure information, and the other stores the model weight information. The user can keep the model files on the system's server or download them to a local machine.
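The structure/weights split can be illustrated with a minimal structure-only JSON writer. The file layout below is an assumption for illustration; the system's actual formats (h5, pb, JSON) are named later in the description:

```python
import json
import os
import tempfile

# Illustrative sketch: the structure file holds only the layer description,
# kept separate from the (typically binary) weight files.

def save_structure(layers, path):
    """Write the model-structure description (no weights) as JSON."""
    with open(path, "w") as f:
        json.dump({"format": "structure-only", "layers": layers}, f)

def load_structure(path):
    """Read the layer description back from a structure file."""
    with open(path) as f:
        return json.load(f)["layers"]
```

Keeping the structure in a text format is what later lets the browser-side visualization engine rebuild the network topology without loading the full weights.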
It should be noted that there are many mature frameworks in the deep learning field, and TensorFlow is currently the most popular. In the system of this embodiment, models are constructed using TensorFlow and TensorFlow.js. TensorSpace is a framework for building 3D visualization applications of neural networks: it provides an API for model visualization, allows the user to construct an interactive 3D visualization model in the browser, can display the structure of a neural network, and can present processes such as the network's internal feature extraction, the data interaction of intermediate layers and the final result prediction. TensorSpace supports models trained with TensorFlow, Keras and TensorFlow.js. This embodiment implements the system's model visualization function based on TensorSpace.
Furthermore, the user management module comprises an administrator module and a general user module. The administrator module is used by administrators to manage users and their permissions; the general user module is used for personal login, registration and personal information management. That is, the system's users are mainly divided into administrators and general users: administrators are responsible for user management and permission management, while general users must register an account and log in before using the system's functions. This module allows the system's users to be managed effectively.
Specifically, the model definition module comprises a neural network structure design submodule and a general deep learning algorithm library submodule. The neural network structure design submodule enables the user to visually design the structure of a neural network and configure model parameters in a browser Web page, and the general deep learning algorithm library submodule enables the user to directly call classical algorithms in the deep learning field. Preferably, referring to FIG. 2, which shows the basic flow of interactive neural network structure design, the neural network structure design submodule establishes a neural network component library based on TensorFlow.js, provides the basic components that make up a neural network structure, and, taking the network layer as the basic unit, allows the user to select different network layer components in an interactive interface and design the neural network structure by dragging components. The specific process is as follows: first, the user selects network layer components from the neural network component library and adds or deletes them in the editing area according to the design requirements; then the user orders and connects the selected components by dragging, and can click a network layer component to set its model parameters in a pop-up parameter configuration interface; finally, the user can save the designed neural network algorithm into a user-defined algorithm library, load a training data set, and call the system's model training function to train the model.
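The select-order-configure flow above might be modeled, purely illustratively, as follows; the component format, helper names and parameter-count bookkeeping are our assumptions, not the system's component API:

```python
# Each network layer component the user drags in is a small dict; the
# user's ordering is a list; per-layer parameters are set on the component.

def dense(units, activation="relu"):
    """A fully-connected layer component with its configurable parameters."""
    return {"type": "Dense", "units": units, "activation": activation}

def param_counts(input_dim, layers):
    """Trainable parameters per Dense layer: (in + 1 bias) * units."""
    counts = []
    dim = input_dim
    for layer in layers:
        counts.append((dim + 1) * layer["units"])
        dim = layer["units"]  # this layer's output feeds the next layer
    return counts

# The user drags three Dense components into the editing area, in order:
network = [dense(128), dense(64), dense(10, activation="softmax")]
sizes = param_counts(784, network)  # -> [100480, 8256, 650]
```

A real implementation would also validate connections between components before training, but the chain-of-layers representation is the essential idea.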
The general deep learning algorithm library submodule is a library of common deep learning algorithms established on the TensorFlow framework, such as AlexNet, VGG, MobileNet, YOLOv3, Word2Vec and LSTM. When constructing a deep learning model, the user can directly use an algorithm from this library, either keeping the default model parameter configuration or resetting the parameters; all model definition operations can be completed through the visual interactive interface without writing code. Pre-trained models are provided for some of the general algorithms, which the user can use for transfer learning.
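A library of named algorithms with overridable defaults is commonly implemented as a registry. A hedged sketch follows; the entries and default values are illustrative, not the system's actual configuration:

```python
# Hypothetical algorithm registry: each entry maps a name to default
# hyper-parameters that a user's visual configuration may override.
ALGORITHM_LIBRARY = {
    "AlexNet":   {"input_size": 227, "classes": 1000},
    "MobileNet": {"input_size": 224, "classes": 1000},
    "YOLOv3":    {"input_size": 416, "classes": 80},
}

def instantiate(name, **overrides):
    """Build a model description: library defaults, then user overrides."""
    config = dict(ALGORITHM_LIBRARY[name])  # copy so defaults stay intact
    config.update(overrides)                # user-supplied settings win
    return {"name": name, "config": config}
```

This mirrors the behavior described in the text: keep the defaults or reset only the parameters you care about, all without writing model code.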
Specifically, the model visualization module comprises a network structure visualization submodule and a feature map visualization submodule; the former presents the structure of the neural network in 3D and dynamically, and the latter visually presents the feature maps generated by each network layer. Through these two submodules, when the user loads a trained deep learning model at the browser end and uses it for tasks such as image classification and object detection, the structure of the neural network is presented interactively in 3D, the feature maps generated by the network layers are presented visually, and the transformation and transmission of data within the model are presented dynamically, helping the user understand the model's inference and prediction process. Preferably, referring to FIG. 3, which is an architecture diagram of the model visualization module, the module can be divided into three layers from bottom to top: a data resource layer at the bottom, a visualization layer in the middle and an interaction layer on top. The data resource layer consists of pre-trained models and data sets; a pre-trained model is a model that the user has trained on a training data set, and its structure and weight information are usually stored in an h5, pb or JSON file.
The visualization layer is the core of the model visualization module. It provides the model visualization engine and comprises a model preprocessing mechanism, a visualization model generation mechanism and a data preprocessing mechanism. The model preprocessing mechanism converts a pre-trained model into a uniform format: a JSON file containing the model structure information and several weight files containing the model weight information, which are used to generate the visualization model. The visualization model generation mechanism automatically generates the corresponding visualization model from the model's structure information file; the input data of the visualization model must be in tensor form. The data preprocessing mechanism uniformly converts input images in JPEG, PNG and other formats into tensor form and stores them in JSON files. The interaction layer is mainly responsible for loading the visualization model in the interactive interface and presenting the model's network structure interactively in 3D. When the user inputs data for inference and prediction, the interaction layer visually displays the model's inference process, captures the output of each network layer, visualizes the generated feature maps, and dynamically shows the flow of data between layers.
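The data preprocessing step (converting an input image into a tensor stored as JSON) can be sketched with the standard library alone. The function name and JSON layout below are assumptions, since the patent does not specify them:

```python
import json

def image_to_json_tensor(pixels, width, height, channels=3):
    """Hypothetical sketch: convert a flat list of 0-255 pixel values
    (row-major, interleaved channels) into a normalized H x W x C nested
    list and serialize it as JSON, as the data preprocessing mechanism
    is described to do."""
    assert len(pixels) == width * height * channels
    tensor = [[[pixels[(r * width + c) * channels + ch] / 255.0
                for ch in range(channels)]
               for c in range(width)]
              for r in range(height)]
    return json.dumps({"shape": [height, width, channels], "data": tensor})
```

The visualization model can then read such a JSON file and feed the `data` field to the network as its input tensor.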
After the user has trained a model, the model visualization function of the system can render it in 3D at the browser end; please refer to fig. 4, which shows the specific flow for implementing model visualization. First, the user selects the model to be visualized; the model preprocessing mechanism preprocesses it, converting the model file into a JSON file containing the model structure information and several weight files containing the model weight information. The visualization model generation mechanism then reads the model's JSON file and automatically generates the corresponding visualization model in the background according to the structure information. When the user opens the visualization page, the browser automatically renders the visualization model. If the user wants to visually display the model's inference and prediction process in 3D, an image must be selected as input to the model; the data preprocessing mechanism preprocesses the input image and converts it into a JSON file, the visualization model then reads this JSON file to perform inference and prediction, and the browser automatically renders the model's prediction process. In the interactive area, the user can drag the model with the mouse, change its position, observe the structure of the neural network and the relations between network layers from different angles, and click on a network layer to observe the feature maps it generates.
In order to classify and access system data, the deep learning model visualization construction system further comprises a database. The database comprises a user information table, a model information table, a data set information table and a model training workflow table, which are respectively used for storing user information, model information, data set information and information related to model training. Referring to FIG. 5, FIG. 5 is a diagram of the physical data model of the database.
Specifically, the user information includes a user number, a user name, a user password, and a user level; the model information comprises a model number, a model name, a model storage path, a user number for creating the model, model creation time, model description and a pre-training model; the data set information comprises a data set number, a data set name, a data set storage path and data set creation information; the relevant information of the model training comprises a training workflow number, a user number, training starting and finishing time, a model number, a data set number and a log file storage path. The detailed information of the system database tables is as follows:
(1) The User information table (User) mainly stores user-related information, including the user number, user name, user password and user level. The table structure is shown in Table 1-1.
TABLE 1-1 User information Table (User)
(2) The Model information table (Model) mainly stores relevant information of the Model, including Model numbers, Model names, Model storage paths, user numbers for creating the Model, Model creation time, description of the Model, pre-training models and the like. The table structure is shown in tables 1-2.
TABLE 1-2 Model information Table (Model)
(3) The Data set information table (Data) mainly stores the related information of the Data set, including the Data set number, the Data set name, the Data set storage path and the creation information of the Data set. The table structures are shown in tables 1-3.
Tables 1-3 Data set information Table (Data)
(4) The model training workflow table (Train _ workflow) mainly stores relevant information of model training, including a training workflow number, a user number, training start and end times, a model number, a data set number and a log file storage path. The table structures are shown in tables 1-4.
Table 1-4 model training workflow table (Train _ workflow)
Preferably, the database is a MySQL database. MySQL is a relational database management system that stores data in separate tables rather than putting all data in one large repository, which improves speed and flexibility.
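The four tables can be sketched as SQL DDL. The column names and types below are inferred from the field descriptions above and are illustrative only (the patent's actual column definitions appear in tables reproduced as images), and SQLite stands in for MySQL here solely to keep the sketch self-contained:

```python
import sqlite3

# Illustrative schema inferred from the field lists; not the patent's exact DDL.
SCHEMA = """
CREATE TABLE User (
    user_id       INTEGER PRIMARY KEY,
    user_name     TEXT NOT NULL,
    user_password TEXT NOT NULL,
    user_level    INTEGER
);
CREATE TABLE Model (
    model_id     INTEGER PRIMARY KEY,
    model_name   TEXT NOT NULL,
    storage_path TEXT,
    user_id      INTEGER REFERENCES User(user_id),
    created_at   TEXT,
    description  TEXT,
    pretrained   INTEGER
);
CREATE TABLE Data (
    dataset_id   INTEGER PRIMARY KEY,
    dataset_name TEXT NOT NULL,
    storage_path TEXT,
    created_info TEXT
);
CREATE TABLE Train_workflow (
    workflow_id INTEGER PRIMARY KEY,
    user_id     INTEGER REFERENCES User(user_id),
    start_time  TEXT,
    end_time    TEXT,
    model_id    INTEGER REFERENCES Model(model_id),
    dataset_id  INTEGER REFERENCES Data(dataset_id),
    log_path    TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

In the described system the same schema would live in MySQL, with the foreign keys tying each training workflow record to its user, model and data set.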
In addition, the data management module in this embodiment is mainly responsible for the management and access of data sets. For model training, a user can upload their own data set to the system or use a data set provided by the system. The system provides common data sets in the deep learning field that the user can call directly for model training, along with download links so the user can download a data set locally. The data sets provided by the system include the MNIST data set, the CIFAR-10 data set, the COCO data set, the ImageNet data set, the ILSVRC2012 data set and so on. The details of each data set are as follows:
(1) The MNIST data set is a classic data set in the field of machine learning. It consists of 60,000 training samples and 10,000 test samples, each sample being a 28×28 gray-scale image of a handwritten digit.
(2) The CIFAR-10 data set contains 10 categories of everyday objects: airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks, with 6,000 images per category; each image has a size of 32×32×3.
(3) The images in the COCO data set carry not only category and position information but also semantic text descriptions, and the data set can be used for image recognition and image segmentation.
(4) ImageNet is a computer vision recognition project and is currently the largest image recognition data set in the world. It covers more than 20,000 object categories and more than 14 million manually annotated images, and object bounding boxes are provided for at least one million of the images.
(5) The ILSVRC2012 dataset is a subset of the ImageNet dataset and is commonly used for training image classification models.
In industrial production, owing to the influence of equipment and processes, industrial products often exhibit defects of varying degrees, and defect detection is an essential link in the manufacturing process. For industrial product defect detection, traditional manual inspection suffers from low efficiency, high cost and strong subjectivity, and is prone to false detections and missed detections. Applying machine vision technologies such as image processing and pattern recognition to industrial defect detection is therefore a trend in industrial automation. Compared with traditional machine vision methods, deep learning reduces the influence of manually extracted features on recognition accuracy and offers high detection speed and accuracy, so it is gradually being applied to industrial product defect detection. In view of this, the invention further provides an application of the deep learning model visualization construction system to industrial product defect detection, namely detecting bursting-bead defects in filter cigarettes; figs. 6 to 13 are schematic diagrams of this application embodiment of the deep learning model visualization construction system provided by the invention. In this application, an industrial product defect detection model based on the YOLOv3 Tiny algorithm is constructed using a bead image data set acquired during industrial bead production and is used in the defect detection link of the production process. The network structure of the model and the feature maps generated by its network layers are then visualized, and the visualization results are analyzed.
The bead is a small liquid-filled capsule embedded in the cigarette filter tip. It encloses flavored liquids of different types, so that a smoker can crush it while smoking to release the liquid, giving the cigarette a richer taste and a more fragrant, moist flavor.
In the production of the blasting beads, defective products such as bubbles, conjoined beads, capped beads, oversized beads and undersized beads can occur. To ensure the beads meet quality requirements, defect detection must be performed and unqualified beads sorted out. A computer vision processing system is established to detect bead defects; it consists of an adsorption rod, an industrial camera, an image processing unit and an electric control system. The specific process is as follows: the beads are first adsorbed onto the adsorption rod; the industrial camera then photographs the beads on the rod and transmits the pictures to the image processing unit, which analyzes the bead images, identifies defective products that do not conform to the normal shape, color and size, and sends a signal to the electric control system, which removes the defective products through a high-speed pulse electromagnetic valve. The image processing unit is the core of the defect detection system, and the recognition rate of the defect detection algorithm determines the quality of the final sorting.
In order to sort out defective beads quickly and accurately, this application embodiment constructs a defect detection model based on the YOLOv3 Tiny algorithm provided by the system, using a bead image data set acquired in industrial bead production. The YOLOv3 Tiny model consists mainly of 13 convolutional layers and 6 max-pooling layers; the normalization method is Batch Normalization and the activation function is the Leaky ReLU (LReLU) function. Referring to fig. 6, fig. 6 shows the network structure of the model. In the network structure diagram of YOLOv3 Tiny, Conv denotes a convolutional layer, Pool a pooling layer, Upsample an upsampling operation, and Concat a tensor concatenation operation. The upsampling operation enlarges the feature map, and the tensor concatenation operation joins feature maps along the channel dimension. The YOLOv3 Tiny model performs target detection on 2 feature maps of different sizes: the 32× down-sampled and the 16× down-sampled feature maps of the input image. In these experiments, the input image size is 416×416×3, and the model's output layers produce a 13×13×18 feature map and a 26×26×18 feature map, i.e., the image is divided into 13×13 and 26×26 grid cells respectively. Regression is then performed on each cell to obtain the position of the target and the probability of the class it belongs to.
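The architecture described above can be sketched in Keras. This is an illustrative reconstruction from the description (13 convolutional layers, 6 max-pooling layers, one upsampling and one concatenation operation, Batch Normalization and Leaky ReLU), not the patent's actual code; the layer ordering and branch point are assumptions consistent with the common Tiny-YOLOv3 layout:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_bn_lrelu(x, filters, kernel_size):
    """Convolution -> Batch Normalization -> Leaky ReLU, as described for YOLOv3 Tiny."""
    x = layers.Conv2D(filters, kernel_size, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.LeakyReLU(0.1)(x)

def build_yolov3_tiny(num_outputs=18, input_size=416):
    inputs = layers.Input(shape=(input_size, input_size, 3))
    x = conv_bn_lrelu(inputs, 16, 3)
    x = layers.MaxPooling2D(2, 2)(x)                  # 416 -> 208
    x = conv_bn_lrelu(x, 32, 3)
    x = layers.MaxPooling2D(2, 2)(x)                  # 208 -> 104
    x = conv_bn_lrelu(x, 64, 3)
    x = layers.MaxPooling2D(2, 2)(x)                  # 104 -> 52
    x = conv_bn_lrelu(x, 128, 3)
    x = layers.MaxPooling2D(2, 2)(x)                  # 52 -> 26
    route_26 = conv_bn_lrelu(x, 256, 3)               # 26x26x256 branch point
    x = layers.MaxPooling2D(2, 2)(route_26)           # 26 -> 13
    x = conv_bn_lrelu(x, 512, 3)
    x = layers.MaxPooling2D(2, 1, padding="same")(x)  # stride-1 pool keeps 13x13
    x = conv_bn_lrelu(x, 1024, 3)
    route_13 = conv_bn_lrelu(x, 256, 1)
    x = conv_bn_lrelu(route_13, 512, 3)
    out_13 = layers.Conv2D(num_outputs, 1)(x)         # 13x13x18 detection head

    y = conv_bn_lrelu(route_13, 128, 1)
    y = layers.UpSampling2D(2)(y)                     # 13 -> 26
    y = layers.Concatenate()([y, route_26])           # join along channel dimension
    y = conv_bn_lrelu(y, 256, 3)
    out_26 = layers.Conv2D(num_outputs, 1)(y)         # 26x26x18 detection head
    return tf.keras.Model(inputs, [out_13, out_26])
```

With one defect class and 3 anchors per scale, each head outputs 3 × (5 + 1) = 18 channels, matching the 13×13×18 and 26×26×18 output shapes stated above.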
1.1 Industrial product Defect detection model construction
1.1.1 application example data
The data for this application embodiment comes from a tobacco enterprise. The bead diameters are in the range of 2 mm to 5 mm. The beads were photographed with an industrial camera, yielding 100 bead images at a resolution of 1600×1200. To reduce the computational load of the model, the original images were divided into 416×416 image blocks used as experimental data, expanding the data volume from 100 to 200 images.
The bead images include normal beads and defective beads; the model constructed in this application embodiment targets only the "bubble" defect. Referring to fig. 7, 5 beads are held on one adsorption rod; the beads marked with boxes are samples with the "bubble" defect, and the remaining beads are normal samples.
There are 5 beads in each sample image, and two cases occur:
(1) The image contains no defective beads; referring to fig. 8, all beads in the image are normal;
(2) The image contains one or more defective beads; referring to fig. 9, 4 of the 5 beads have the "bubble" defect.
Based on these characteristics, the application embodiment uses the annotation tool LabelImg to manually label the image data, with "defect" denoting a defective bead. The experimental data set contains 200 images, in which 303 defect targets are labeled.
The application embodiment divides the experimental data set into a training set and a test set at a ratio of 7:3. The numbers of normal and defective beads in the training and test sets are shown in Table 2-1: the training set contains 140 images, with 480 normal beads and 220 defective beads in total; the test set contains 60 images, with 217 normal beads and 83 defective beads in total.
TABLE 2-1 number of normal and defective blasting beads
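The 7:3 split described above can be sketched as follows. This is a minimal illustration; the patent does not state how the split was performed, so the shuffle-and-cut approach and the fixed seed are assumptions:

```python
import random

def split_dataset(samples, train_ratio=0.7, seed=42):
    """Shuffle the samples reproducibly and cut them into a training set
    and a test set at the given ratio."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = round(len(items) * train_ratio)
    return items[:cut], items[cut:]
```

For the 200-image bead data set this yields 140 training images and 60 test images, matching the counts reported for Table 2-1.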
1.2.2 model training and results analysis
The YOLOv3 Tiny algorithm is packaged in the general algorithm library of the deep learning model visualization construction system based on TensorFlow. In this experiment, the preprocessed bead data set is uploaded to the system, and the YOLOv3 Tiny algorithm in the general algorithm library is called for model training. The experimental environment is: CUDA 9.0 acceleration, an Nvidia GeForce GTX 1660 Ti graphics card with 6 GB of video memory, 16 GB of CPU memory, and the CentOS 7 operating system.
The hyper-parameter settings of the application embodiment are shown in Table 2-2: the batch size is set to 6 according to the number of model parameters and the hardware resources of the experimental platform, the learning rate is set to 0.001 based on empirical values for training the model, and training runs for 100 iterations.
TABLE 2-2 Superparameter
After 100 training iterations, the loss stabilizes, and the model weights from the 100th iteration are used for target detection.
The images of the test data set are input into the model, which draws prediction boxes in the images and predicts the target class and the confidence of that class. Referring to figs. 10 and 11, the sample shown in fig. 10 is input into the model and the detection result is shown in fig. 11. The model detects 3 targets with the "bubble" defect in the input sample and marks them with 3 prediction boxes on the image, with confidences of 0.9, 0.99 and 1 respectively.
To evaluate the detection performance of the model, the application embodiment selects Accuracy, Precision, False Positive Rate (FPR), False Negative Rate (FNR) and average detection Time as evaluation indicators, calculated as follows.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
where TP denotes the number of defective beads correctly detected by the model in the test samples, FN the number of defective beads that were missed, FP the number of normal beads falsely detected as defective, and TN the number of normal beads correctly identified as normal.
FPR = FP / (FP + TN), FNR = FN / (TP + FN)
Time = (1/n) Σ (t_end(k) − t_start(k)), summed over k = 1 to n
where t_start(k) and t_end(k) respectively represent the times at which the model starts and finishes detection on the k-th sample, and n represents the number of samples.
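The five indicators above can be computed directly from the confusion counts and per-sample timestamps; a minimal sketch using their standard definitions:

```python
def detection_metrics(tp, fn, fp, tn, detect_times):
    """Compute the five evaluation indicators from the confusion counts
    and per-sample (start, end) detection timestamps in seconds."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "fpr": fp / (fp + tn),  # false positive rate
        "fnr": fn / (tp + fn),  # false negative rate
        "avg_time": sum(end - start for start, end in detect_times) / len(detect_times),
    }
```

Note that the accuracy counts both defective and normal beads recognized correctly, while precision considers only the predicted defect boxes.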
In this application embodiment, the confidence threshold is set to 0.7: a prediction box with confidence above 0.7 counts as a correct recognition, otherwise the recognition is counted as incorrect. Experiments were performed on the test data set, and the model's detection results are shown in Tables 2-3.
Table 2-3 model test results
The performance evaluation of the model, calculated with the above formulas, is shown in Tables 2-4.
Tables 2-4 model Performance evaluation
The model achieves an accuracy of 97.36%, a precision of 94.94%, a false positive rate of 1.79%, a false negative rate of 5.06%, and an average detection time of 45.31 ms. The evaluation results show that, for detecting the "bubble" defect of beads, the model has high accuracy and precision, low false positive and false negative rates and a short average detection time, achieving the expected detection effect.
These results demonstrate that the deep learning algorithms and model construction functions provided by the deep learning model visualization construction system can meet the requirements of practical applications, improving model construction efficiency and simplifying the construction process.
2.3 model visualization presentation and result analysis
Based on the model visualization function of the deep learning model visualization construction system, this section visualizes the network structure of the YOLOv3 Tiny defect detection model constructed in the previous section and the feature maps generated by its network layers.
2.3.1 network Structure visualization
Fig. 12 is a 3D visualization of the YOLOv3 Tiny model's network structure. As the visualization shows, the network structure of the model consists of 13 convolutional layers, 6 pooling layers, 1 upsampling layer and 1 tensor concatenation layer.
In the visualization, squares of different colors represent different network layer types: yellow squares represent convolutional layers, blue squares pooling layers, green squares upsampling layers, red squares tensor concatenation layers, and gray squares the input layer. The visualization shows that the network structure has two branches; when target detection is performed on an input image, the output layers of the model generate 2 feature maps of different scales. Clicking on a network layer shows the visualization of the feature maps it generates. Feature map visualization is detailed in the next subsection.
2.3.2 feature map visualization
Bead image data (see fig. 10) is input into the YOLOv3 Tiny model for target detection, and each network layer processes the input in turn. The filters of a convolutional layer are convolved with the input to produce a feature map containing the input image's feature information; a pooling layer down-samples the feature map produced by the preceding convolutional layer, reducing its size while retaining the most important feature information. The upsampling layer enlarges a feature map generated by a given network layer so that it can be fused with feature maps from other layers; the tensor concatenation layer joins feature maps from different layers along the channel dimension, fusing them.
The size information of the feature maps generated by each network layer of the YOLOv3 Tiny model is shown in Tables 2-5. The first column of the table lists the network layers; a layer name consists of the layer type and a number, e.g. Conv_1 denotes the 1st convolutional layer, MaxPooling_1 the 1st pooling layer, Upsample_1 the 1st upsampling layer, and Concatenate_1 the 1st tensor concatenation layer. The second column gives the sizes of the convolution and pooling kernels, and the third column the size of the feature map generated by the corresponding layer.
Tables 2-5 sizing information for feature maps
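The feature map sizes in Tables 2-5 follow from standard convolution/pooling size arithmetic; a sketch of the "same"-padding case (an assumption consistent with the 416 → 13 downsampling described for the model):

```python
import math

def out_size(in_size, stride, padding="same", kernel=None):
    """Spatial output size of a conv/pool layer. With 'same' padding the
    size depends only on the stride; with 'valid' padding the kernel
    size matters too."""
    if padding == "same":
        return math.ceil(in_size / stride)
    return (in_size - kernel) // stride + 1

# Trace the YOLOv3 Tiny downsampling path for a 416x416 input:
sizes = [416]
for _ in range(5):  # five stride-2 max-pooling layers
    sizes.append(out_size(sizes[-1], 2))
# sizes is now [416, 208, 104, 52, 26, 13]: the 32x down-sampled map is
# 13x13 and the 16x down-sampled map, one pooling layer earlier, is 26x26.
```

The sixth pooling layer uses stride 1, so it leaves the 13×13 size unchanged, which is why the model's deeper layers all report 13×13 feature maps.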
The model visualization function of the deep learning model visualization construction system can also visualize the feature maps generated by each network layer. Referring to fig. 13, clicking a network layer in the model's 3D network structure shows the visualization of the feature maps that layer generates, and selecting a feature map shows its size information. The connecting lines between feature maps of different layers present the transformation and transmission of feature maps between layers.
The invention further provides a design method for the deep learning model visualization construction system. Fig. 14 is a flow diagram of an embodiment of the design method provided by the invention, which includes the following steps:
s1, setting functional requirements and non-functional requirements of a system, wherein the functional requirements comprise acquisition of training data, design of a neural network structure, provision of a general algorithm library, creation of a model training task workflow, management of a model and model visualization; the non-functional requirements include security, ease of use, extensibility, and compatibility.
In this embodiment, the requirements are specifically as follows. Acquiring training data: the system provides the user with data sets commonly used in deep learning model construction; the user can train models with these data sets or upload their own. Designing the neural network structure: the system provides the components that make up a neural network, and the user designs the network in the interactive interface by dragging components. Providing a general algorithm library: the system encapsulates common deep learning algorithms such as LeNet, AlexNet, VGG and MobileNet; when training a deep learning model with these algorithms, the user only configures the relevant hyper-parameters and writes no code. Creating model training task workflows: the system provides functional components for training deep learning models, including data loading, hyper-parameter setting, model training and model evaluation; the user can quickly create a training task workflow from these components to complete model training. Managing models: the user can store trained models on the server side and download them locally. Model visualization: the system provides a model visualization function; during inference and prediction, the data output between the network layers of the model is visualized and the transformation of data within the model is displayed, helping the user understand the internal working mechanism of the neural network. Security: the system must ensure that users' personal information is not leaked and that users can only access authorized resources.
Users can upload their own data sets, and the system must store them while ensuring the training data is not damaged, altered or leaked. Ease of use: the system provides an attractive and concise interface; each functional component is easy to learn and operate, so users can quickly master the basic functions, meeting the needs of novice developers. Extensibility: the system's functions support extension, so that new functions can be added during use and the system is gradually improved. Compatibility: the system runs in mainstream browsers, and its pages are adaptive and can be displayed on screens of different resolutions.
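The model training task workflow named in the requirements (data loading, hyper-parameter setting, training, evaluation chained in order) can be sketched as a minimal pipeline; all names here are illustrative assumptions, not the patent's implementation:

```python
class TrainingWorkflow:
    """Hypothetical sketch: ordered, named steps sharing one context dict,
    mirroring how a drag-and-drop UI might compose functional components."""
    def __init__(self):
        self.steps = []

    def add_step(self, name, func):
        self.steps.append((name, func))
        return self  # allow chaining

    def run(self):
        context = {}
        for name, func in self.steps:
            context[name] = func(context)  # each step sees earlier results
        return context

# Compose the four functional components named in the requirements:
workflow = (TrainingWorkflow()
            .add_step("load_data", lambda ctx: [1, 2, 3])
            .add_step("set_hyperparams", lambda ctx: {"lr": 0.001, "batch": 6})
            .add_step("train", lambda ctx: f"trained on {len(ctx['load_data'])} samples")
            .add_step("evaluate", lambda ctx: {"accuracy": 0.97}))
result = workflow.run()
```

In the described system each step would be a real component (data loader, hyper-parameter form, trainer, evaluator); the lambdas here are placeholders showing only the chaining mechanism.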
S2, designing the system architecture according to the system requirements: a layered architecture with at least four layers, comprising an interaction layer, a business layer, a data layer and a basic platform layer.
Referring to fig. 7, the layered architecture is the most common system architecture. In the design method of this embodiment, the system architecture comprises four main layers: the interaction layer, the business layer, the data layer and the basic platform layer. The business layer can be further divided into an application service layer, an algorithm library layer and a computing layer. The system is divided into four horizontal layers; each lower layer is the foundation of the layer above it, and each layer has a clear division of responsibilities.
S3, assigning clear responsibilities to each layer: the basic platform layer provides the hardware facilities required by the system functions and the computing resources required for model training; the data layer comprises a database and a file system and stores users' personal information, operation information, data set information and model information; the business layer implements the system functions and comprises an application service layer, an algorithm library layer and a computing layer; the interaction layer provides the user with a visual mode of operation.
In specific implementation, the specific functions of each layer are as follows:
(1) base platform layer
The basic platform layer is the lowest layer of the system architecture. It mainly provides the hardware facilities required to realize each system function and the computing resources required for model training; the user can choose to train models on a GPU or a CPU. This layer is the basic guarantee for the normal operation of the system.
(2) Data layer
The data layer consists of a database and a file system. The database mainly stores users' personal information, operation information, data set information and model information; models and data sets are stored as files, with the related model and data information recorded in the database. The system uses a MySQL database.
(3) Business layer
The business layer is the core layer for realizing the system functions and can be subdivided into an application service layer, an algorithm library layer and a computing layer.
The computing layer sits at the bottom of the business layer and mainly comprises the deep learning framework TensorFlow, which supports the implementation of deep learning algorithms. On top of the computing layer, a deep learning algorithm library is provided, divided into a user-defined algorithm library and a general algorithm library: the user-defined algorithm library holds deep learning algorithms designed by users, while the general algorithm library holds common deep learning algorithms encapsulated on the TensorFlow framework. The application service layer manages and defines the APIs and methods involved in the system's core functions and is divided into four parts: data management, model definition, model training and model visualization. Data management provides methods for acquiring and storing data and mainly interacts with the data layer. Model definition encapsulates the network layer components of the neural network based on TensorFlow.js and provides a way to design neural networks by dragging components at the browser end. Model training defines the related training interfaces and supports the creation of model training task workflows. Model visualization provides the related methods for visualizing models.
(4) Interaction layer
The system is based on the B/S (browser/server) architecture. The interaction layer is the browser used by the user; it is mainly responsible for providing the user with a visual mode of operation and serves as the system's external interface. The user designs neural networks at the browser end by dragging components, quickly creates deep learning model training workflows from the corresponding functional components, and can use the model visualization function through simple operations.
And S4, designing a functional module and a database according to the division of the layers.
In this embodiment, the modules and the database are designed according to the responsibilities and functional requirements of each layer, and specifically include hardware modules and software modules. The software modules are as described in the embodiments above and are not repeated here.
And S5, realizing each module and the database.
Furthermore, the deep learning model visualization construction system adopts a B/S architecture; the development environment uses the CentOS 7 operating system, the development tools are Visual Studio Code and PyCharm, and development follows a front-end/back-end separation model, which greatly improves the convenience of user operation.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A deep learning model visualization construction system is characterized by comprising:
the user management module is used for classifying users and providing authority;
the business function module comprises a data management module, a model definition module, a model training module and a model visualization module, wherein the data management module is used for managing and accessing a training data set, the model definition module is used for defining and setting a model structure in a visual mode in a browser Web page and providing common deep learning algorithms for a user, the model training module is used for hyper-parameter setting, model training, model evaluation and model management, and the model visualization module is used for visualizing a pre-training model.
2. The deep learning model visualization construction system of claim 1, wherein the user management module further comprises an administrator module and a general user module, the administrator module is used for the administrator to manage users and their authority, and the general user module is used for personal login, registration and personal information management.
3. The deep learning model visualization construction system of claim 1, wherein the model definition module comprises a neural network structure design submodule and a universal deep learning algorithm library submodule, the neural network structure design submodule is used for enabling a user to visually design the structure of the neural network and configure model parameters based on a browser, and the universal deep learning algorithm library submodule is used for enabling the user to directly invoke a classic algorithm in the deep learning field.
4. The deep learning model visualization construction system of claim 1, wherein the model visualization module comprises a network structure visualization submodule and a feature map visualization submodule, the network structure visualization submodule is used for 3D and dynamic presentation of the structure of the neural network, and the feature map visualization submodule is used for visually displaying the feature map generated by each network layer.
5. The deep learning model visualization construction system of claim 1, further comprising a database, wherein the database comprises a user information table, a model information table, a data set information table and a model training workflow table, and is used for storing the user information, the model information, the data set information and the relevant information of model training respectively.
6. The deep learning model visualization construction system of claim 5, wherein the user information comprises a user number, a user name, a user password, and a user level; the model information comprises a model number, a model name, a model storage path, the user number of the model's creator, model creation time, a model description and a pre-training model; the data set information comprises a data set number, a data set name, a data set storage path and data set creation information; the relevant information of model training comprises a training workflow number, a user number, training start and finish times, a model number, a data set number and a log file storage path.
7. The deep learning model visualization building system of claim 5, wherein the database employs a MYSQL database.
8. Use of the deep learning model visualization construction system according to any one of claims 1 to 7 in industrial product defect detection, for detecting defects of burst beads in cigarette filter paper.
9. A design method of the deep learning model visualization construction system according to any one of claims 1 to 7, characterized by comprising the following steps:
S1, setting the functional requirements and non-functional requirements of the system, wherein the functional requirements comprise acquisition of training data, design of a neural network structure, provision of a general algorithm library, creation of a model training task workflow, management of a model and model visualization; the non-functional requirements comprise security, ease of use, extensibility and compatibility;
S2, designing the system according to the system requirements as a layered architecture with at least four layers: an interaction layer, a business layer, a data layer and a basic platform layer;
S3, assigning a definite division of labor to each layer, wherein the basic platform layer is used for providing the hardware facilities required for the system to realize its functions and the computing resources required for model training, the data layer comprises a database and a file system and is used for storing a user's personal information, operation information, data set information and model information, the business layer is used for realizing the system functions and comprises an application service layer, an algorithm library layer and a computing layer, and the interaction layer is used for providing a visual mode of operation for the user;
S4, designing the functional modules and the database according to the division of labor of each layer;
S5, implementing each module and the database.
10. The design method of the deep learning model visualization construction system as claimed in claim 9, wherein the deep learning model visualization construction system adopts a B/S architecture, the development environment adopts the CentOS 7 operating system, the development tools adopt Visual Studio Code and Pycharm, and the development mode adopts a front-end and back-end separation mode.
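For illustration, the four information tables enumerated in claims 5 and 6 could be sketched as the following schema. Claim 7 specifies a MYSQL database; sqlite3 is used here only as a runnable stand-in, and every table and column name is an assumption rather than the system's actual schema.

```python
import sqlite3

# In-memory sqlite3 stand-in for the MYSQL schema implied by claims 5-6:
# user info, model info, data set info, and a model training workflow table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_info (
    user_id       INTEGER PRIMARY KEY,
    user_name     TEXT,
    user_password TEXT,
    user_level    INTEGER
);
CREATE TABLE model_info (
    model_id         INTEGER PRIMARY KEY,
    model_name       TEXT,
    model_path       TEXT,
    creator_user_id  INTEGER REFERENCES user_info(user_id),
    created_at       TEXT,
    description      TEXT,
    pretrained_model TEXT
);
CREATE TABLE dataset_info (
    dataset_id    INTEGER PRIMARY KEY,
    dataset_name  TEXT,
    dataset_path  TEXT,
    creation_info TEXT
);
CREATE TABLE training_workflow (
    workflow_id INTEGER PRIMARY KEY,
    user_id     INTEGER REFERENCES user_info(user_id),
    start_time  TEXT,
    end_time    TEXT,
    model_id    INTEGER REFERENCES model_info(model_id),
    dataset_id  INTEGER REFERENCES dataset_info(dataset_id),
    log_path    TEXT
);
""")

# Smoke test: store and read back one user record.
conn.execute("INSERT INTO user_info VALUES (1, 'alice', 'x', 2)")
row = conn.execute("SELECT user_name, user_level FROM user_info").fetchone()
```

The workflow table's foreign keys tie each training run to its user, model and data set, matching the fields listed in claim 6.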
CN202110632104.4A 2021-06-07 2021-06-07 Deep learning model visual construction system and application and design method thereof Withdrawn CN113537496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110632104.4A CN113537496A (en) 2021-06-07 2021-06-07 Deep learning model visual construction system and application and design method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110632104.4A CN113537496A (en) 2021-06-07 2021-06-07 Deep learning model visual construction system and application and design method thereof

Publications (1)

Publication Number Publication Date
CN113537496A true CN113537496A (en) 2021-10-22

Family

ID=78124598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110632104.4A Withdrawn CN113537496A (en) 2021-06-07 2021-06-07 Deep learning model visual construction system and application and design method thereof

Country Status (1)

Country Link
CN (1) CN113537496A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066828A (en) * 2021-11-03 2022-02-18 深圳市创科自动化控制技术有限公司 Image processing method and system based on multifunctional bottom layer algorithm
CN114139728A (en) * 2021-12-06 2022-03-04 神州数码系统集成服务有限公司 Visual full-flow machine learning platform, control method, client and application
CN114237635A (en) * 2022-02-24 2022-03-25 视睿(杭州)信息科技有限公司 Method, system and storage medium for rapid deployment, operation and maintenance of semiconductor visual inspection
CN114237635B (en) * 2022-02-24 2022-07-15 视睿(杭州)信息科技有限公司 Method, system and storage medium for rapid deployment, operation and maintenance of semiconductor visual inspection
CN115361051A (en) * 2022-07-12 2022-11-18 中国科学院国家空间科学中心 Frequency sharing analysis system for large-scale space internet constellation
CN115361051B (en) * 2022-07-12 2023-06-13 中国科学院国家空间科学中心 Frequency sharing analysis system for large-scale space internet constellation
CN116360759A (en) * 2023-03-10 2023-06-30 青软创新科技集团股份有限公司 Visual system and method of artificial intelligence algorithm
CN117474125A (en) * 2023-12-21 2024-01-30 环球数科集团有限公司 Automatic training machine learning model system
CN117474125B (en) * 2023-12-21 2024-03-01 环球数科集团有限公司 Automatic training machine learning model system

Similar Documents

Publication Publication Date Title
CN113537496A (en) Deep learning model visual construction system and application and design method thereof
CN110533045B (en) Luggage X-ray contraband image semantic segmentation method combined with attention mechanism
CN106778682B (en) A kind of training method and its equipment of convolutional neural networks model
CN108491858A (en) Method for detecting fatigue driving based on convolutional neural networks and system
CN105574550A (en) Vehicle identification method and device
CN107341518A (en) A kind of image classification method based on convolutional neural networks
CN108664971A (en) Pulmonary nodule detection method based on 2D convolutional neural networks
CN111339935B (en) Optical remote sensing picture classification method based on interpretable CNN image classification model
McKeown Jr et al. Automating knowledge acquisition for aerial image interpretation
CN108596329A (en) Threedimensional model sorting technique based on end-to-end Deep integrating learning network
CN108426994A (en) Digital holographic microscopy data are analyzed for hematology application
CN105975573A (en) KNN-based text classification method
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
WO2020228283A1 (en) Feature extraction method and apparatus, and computer readable storage medium
CN111402979A (en) Method and device for detecting consistency of disease description and diagnosis
CN105989336A (en) Scene recognition method based on deconvolution deep network learning with weight
TWI672637B (en) Patern recognition method of autoantibody immunofluorescence image
Domik et al. User modeling for adaptive visualization systems
CN110210380A (en) The analysis method of personality is generated based on Expression Recognition and psychology test
CN110188662A (en) A kind of AI intelligent identification Method of water meter number
CN110321867A (en) Shelter target detection method based on part constraint network
Anderson Visual Data Mining: The VisMiner Approach
CN114295967A (en) Analog circuit fault diagnosis method based on migration neural network
CN112115779B (en) Interpretable classroom student emotion analysis method, system, device and medium
CN113627522A (en) Image classification method, device and equipment based on relational network and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211022