CN111967570A - Implementation method, device and machine equipment of visual neural network system - Google Patents

Implementation method, device and machine equipment of visual neural network system

Info

Publication number
CN111967570A
CN111967570A
Authority
CN
China
Prior art keywords
neural network
layer
network structure
data
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010617504.3A
Other languages
Chinese (zh)
Other versions
CN111967570B (en)
Inventor
杨韵帆
孙术仁
邓琪敏
李俊昊
谈国禹
杨婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dingji Technology Co ltd
Original Assignee
Jiaxing Diji Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiaxing Diji Technology Co ltd filed Critical Jiaxing Diji Technology Co ltd
Publication of CN111967570A publication Critical patent/CN111967570A/en
Application granted granted Critical
Publication of CN111967570B publication Critical patent/CN111967570B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an implementation method, an apparatus and machine equipment of a visual neural network system. The method comprises the following steps: collecting the laid-out neural network structure to obtain network structure data corresponding to the layer components in the neural network structure; parameterizing the neural network structure according to the network structure data to obtain network data, the network data corresponding to the layer components through the contained network structure data; and calling methods of the neural network layers under the neural network structure through the network data to convert the neural network structure into a neural network. Thus, a neural network with a visualized structure is obtained from the visualized neural network structure, that is, a visual neural network system is realized, so that the architecture of the neural network is no longer hidden from the user, and conditions are provided for the adjustability of the neural network.

Description

Implementation method, device and machine equipment of visual neural network system
Technical Field
The invention relates to the technical field of internet application, in particular to a method, a device and machine equipment for realizing a visual neural network system.
Background
Under the continuous development and application of neural network technology, more and more scenes achieve the solution of problems through the implementation of a neural network system.
The existing neural network system is realized based on a back-end server. Specifically, the neural network system deploys a neural network at a back-end server, performs various neural network operations through operations of a front-end client, and then outputs a result to a user facing the front-end client.
However, the neural network deployed in the back-end server is developed according to the business requirements of the front-end client, and is transparent to the front-end client and the user, and the architecture of the neural network cannot be known or adjusted.
Therefore, how to obtain a neural network with a visualized structure, in other words, how to implement a neural network on a neural network structure, so as to obtain a visualized neural network system, is a technical problem to be solved at present.
Disclosure of Invention
The invention provides an implementation method, an apparatus and machine equipment for a visual neural network system, aiming to solve the technical problem in the related art of converting a neural network structure into an operable neural network so as to obtain a visual neural network system.
An implementation method of a visual neural network system, the method comprising:
collecting the laid-out neural network structure to obtain network structure data corresponding to a layer component in the neural network structure;
parameterizing the neural network structure according to the network structure data to obtain network data, wherein the network data correspond to layer components through the contained network structure data;
and calling a method of a neural network layer under the neural network structure through the network data to convert the neural network structure into a neural network.
An apparatus for implementing a visual neural network system, the apparatus comprising:
a structure acquisition module for collecting the laid-out neural network structure to obtain network structure data corresponding to the layer components in the neural network structure;
a parameterization module for parameterizing the neural network structure according to the network structure data to obtain network data, the network data corresponding to a layer component through the contained network structure data;
and the method calling module is used for calling a method of a neural network layer under the neural network structure through the network data and converting the neural network structure into a neural network.
A machine device, comprising:
a processor; and
a memory having computer readable instructions stored thereon which, when executed by the processor, implement a method as described above.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
the method comprises the steps of acquiring a distributed neural network structure to obtain network structure data of layer components, wherein the network structure data is used for describing the structure relationship between the distributed layer components in the neural network structure and the layer components, parameterizing the neural network structure according to the network structure data to obtain network data, and finally calling a neural network layer method under the neural network structure through the network structure data to convert the neural network structure into the neural network, so that the neural network with a visualized structure is obtained from the visualized neural network structure, namely a visualized neural network system is realized, the architecture of the neural network is not transparent any more, and conditions are provided for the adjustability of the neural network.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of an implementation environment in accordance with the present invention;
FIG. 2 is a block diagram illustrating an apparatus in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method of implementing a visual neural network system in accordance with an exemplary embodiment;
FIG. 4 is a flowchart illustrating a description of step 310 according to a corresponding embodiment of FIG. 3;
FIG. 5 is a flowchart illustrating a description of step 330 according to a corresponding embodiment of FIG. 3;
FIG. 6 is a flowchart illustrating a description of step 350 according to a corresponding embodiment of FIG. 3;
FIG. 7 is a flowchart illustrating a description of step 351 shown in a corresponding embodiment in FIG. 6;
FIG. 8 is a flowchart illustrating a description of step 353 according to a corresponding embodiment of FIG. 6;
FIG. 9 is a flowchart illustrating the steps of obtaining a laid-out neural network structure through the layout and connection of selected layer components on a user interface, according to an exemplary embodiment;
FIG. 10 is a flowchart illustrating a method of implementation of a visualization neural network system in accordance with another exemplary embodiment;
FIG. 11 is a schematic diagram of a system architecture implemented in a particular application;
FIG. 12 is a schematic diagram illustrating a node interpretation process in accordance with an exemplary embodiment;
FIG. 13 is a schematic diagram illustrating an application of neural network training in accordance with an exemplary embodiment;
FIG. 14 is a block diagram illustrating an apparatus for implementing a visual neural network system in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
FIG. 1 is a schematic illustration of an implementation environment to which the present invention is directed. In an exemplary embodiment, the implementation environment, as shown in FIG. 1, includes a front-end client 110 and a back-end server 130. The front-end client 110 is configured to realize the layout of the neural network structure, so that the laid-out neural network structure forms a visual neural network system. The front-end client 110 serves as the portal for setting up the neural network structure, and under its action an operational neural network is obtained by means of the back-end server 130.
In other words, the front-end client 110 is a visual neural network development tool based on a graphical user interface: a neural network designer supporting drag-and-drop operation and interactive use, which helps users develop a neural network system with zero programming and no entry threshold. The visual neural network system is realized at the front-end client 110, and the user enjoys a what-you-see-is-what-you-get development experience as the laid-out neural network structure is adjusted, so that the tool can be widely applied in various fields.
The back-end server 130 will provide various support for the front-end client 110, such as implementation of corresponding method calls in component calls, etc., to support implementation of functions in the front-end client 110 and operation of the visual neural network system implemented by the front-end client 110.
The back-end server 130 serves at least one front-end client 110. Correspondingly, the back-end server 130 is not limited to a single business requirement for neural network deployment; it can meet multiple business requirements by realizing neural network deployment for the front-end clients 110, so as to obtain different visual neural network systems.
FIG. 2 is a block diagram illustrating an apparatus according to an example embodiment. For example, the apparatus 200 may be the front-end client 110 in the implementation environment shown in FIG. 1. For example, the front-end client 110 may be a terminal device such as a smart phone or a tablet computer.
Referring to fig. 2, the apparatus 200 includes at least the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, a sensor component 214, and a communication component 216.
The processing component 202 generally controls the overall operation of the device 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 202 includes at least one or more processors 218 to execute instructions to perform all or a portion of the steps of the methods described below. Further, the processing component 202 includes one or more modules that facilitate interaction between the processing component 202 and other components. For example, the processing component 202 can include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the apparatus 200. Examples of such data include instructions for any application or method operating on the apparatus 200. The memory 204 is implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. Also stored in the memory 204 are one or more modules configured to be executed by the one or more processors 218 to perform all or a portion of the steps of any of the methods described below with reference to FIGS. 3 to 14.
The power supply component 206 provides power to the various components of the device 200. The power components 206 include at least a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 200.
The multimedia component 208 includes a screen that provides an output interface between the device 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a touch panel. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. The screen further includes an Organic Light-Emitting Display (OLED).
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a Microphone (MIC) configured to receive external audio signals when the device 200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 also includes a speaker for outputting audio signals.
The sensor component 214 includes one or more sensors for providing various aspects of status assessment for the device 200. For example, the sensor component 214 detects the open/closed status of the device 200 and the relative positioning of components; it also detects a change in position of the device 200 or a component of the device 200, and a change in temperature of the device 200. In some embodiments, the sensor component 214 also includes a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the apparatus 200 and other devices. The device 200 accesses a wireless network based on a communication standard, such as WiFi (Wireless Fidelity). In an exemplary embodiment, the communication component 216 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth technology, and other technologies.
In an exemplary embodiment, the apparatus 200 is implemented by one or more Application Specific Integrated Circuits (ASICs), digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors or other electronic components for performing the methods described below.
FIG. 3 is a flow chart illustrating an implementation method of a visualization neural network system in accordance with an exemplary embodiment. In an exemplary embodiment, the implementation method of the visualized neural network system, as shown in fig. 3, includes at least the following steps.
In step 310, the deployed neural network structure is collected to obtain network structure data corresponding to layer components in the neural network structure.
It should be noted, first, that the neural network structure describes the network layers deployed in the corresponding neural network and their relationships with one another. The neural network structure is a graphical description of the corresponding neural network, and it may be laid out by a user or initialized by the front-end client 110. Specifically, the neural network structure comprises a number of layer components connected with one another, each layer component corresponding to a network layer, so that the connected network layers in the corresponding neural network are realized through the interconnected layer components in the neural network structure.
In an exemplary embodiment, the neural network structure is laid out by the user dragging a number of layer components. Various layer components are initialized and configured on the toolbar of the front-end client 110; the user selects the layer component corresponding to a required network layer according to the requirements of the neural network to be used, drags it to the building area of the graphical interface, and connects the dragged layer component with other layer components until the neural network structure is formed.
In an exemplary embodiment, the layer-component dragging performed by the user is guided: the corresponding neural network application scenario is set first, algorithm models are recommended for the application scenario customized by the user, each algorithm model corresponds to a neural network structure, and the user obtains the required neural network structure through selection and optimization. At this point, the user can select the automatic network parameter-tuning function to automatically optimize the neural network structure.
Further, in an exemplary embodiment, the layer components may be obtained through JS-framework encapsulation, for example, layer components corresponding to the network layers Dense (fully connected neural network layer), CNN (convolutional neural network layer), RNN (recurrent neural network layer), and LSTM (long short-term memory network layer). It can be understood that the layer components are independent, reusable widgets on the graphical interface of the front-end client 110; for the neural network to be constructed, each layer component maps to a neural network layer method, and the layer components connected in the neural network structure realize the network layers of the corresponding neural network.
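As a concrete illustration of such encapsulation, the following TypeScript sketch registers the four layer-component types named above. All identifiers here (`LayerComponent`, `layerRegistry`, the label strings) are hypothetical and chosen for the example, not taken from the patent.

```typescript
// Hypothetical registry of reusable layer-component widgets, assuming a
// JS/TS front end. Each entry maps a layer type to the widget metadata.
interface LayerComponent {
  type: "Dense" | "CNN" | "RNN" | "LSTM"; // layer type mapped to a layer method
  label: string;                          // text shown on the toolbar widget
}

const layerRegistry: Record<string, LayerComponent> = {
  Dense: { type: "Dense", label: "Fully connected layer" },
  CNN:   { type: "CNN",   label: "Convolutional layer" },
  RNN:   { type: "RNN",   label: "Recurrent layer" },
  LSTM:  { type: "LSTM",  label: "Long short-term memory layer" },
};
```

A toolbar would iterate over `layerRegistry` to render its draggable widgets, so adding a new layer component only means adding one entry.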
As indicated above, the neural network structure describes the layer components laid out by the user and the connection relationships among them. Therefore, collecting the neural network structure means reading the layer components and the connection relationships among them, and the resulting network structure data describes the layer components in the neural network structure. The collection of the neural network structure is thus the collection of its layer components, and the obtained network structure data likewise corresponds to the layer components: from the network structure data, the corresponding layer component and the other layer components connected to it can be known.
In an exemplary embodiment, the network structure data corresponding to a layer component includes a self component identifier, a predecessor component identifier, and a successor component identifier of the corresponding layer component, where the component identifier is used to uniquely identify the layer component, the predecessor component identifier is used to uniquely identify a previous layer component to which the layer component is connected, and the successor component identifier is used to uniquely identify a next layer component to which the layer component is connected.
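The network structure data described here can be pictured as a small record per layer component. The following TypeScript fragment is purely illustrative; the field names (`id`, `predecessor`, `successor`) and the identifier strings are assumptions, not taken from the patent.

```typescript
// Illustrative shape of the network structure data for one layer component.
// null marks the ends of the chain (no predecessor / no successor).
interface NetworkStructureDatum {
  id: string;                 // self component identifier, unique per component
  predecessor: string | null; // identifier of the connected previous component
  successor: string | null;   // identifier of the connected next component
}

const denseEntry: NetworkStructureDatum = {
  id: "dense-1",
  predecessor: "input-0",
  successor: "output-2",
};
```

From such a record, both the component itself and its neighbors in the laid-out structure can be recovered.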
For the obtained neural network structure, for example, the neural network structure built by the user through the dragging operation, and for example, the neural network structure read by the front-end client 110, are triggered to be collected to obtain the network structure data.
In step 330, the neural network structure is parameterized according to the network structure data to obtain network data, the network data corresponding to the layer components through the contained network structure data.
Because the network structure data indicates the corresponding layer component and its predecessor and successor layer components through component identifiers, parameter attributes are collected for the layer components in the neural network structure identified by the component identifiers, and the neural network structure is thereby parameterized to obtain the network data. The network data still corresponds to the layer components and includes both the network structure data and the read parameter attributes.
The parameterization of the neural network structure yields network data that describe the layer components and make the implemented network layers operable. The network data serves as the bridge between the neural network structure and the neural network, ensuring the conversion from the neural network structure to the neural network.
In step 350, a method of a neural network layer under the neural network structure is called through the network data, and the neural network structure is converted into a neural network.
After the network data is obtained through the parameterization of the neural network structure in the previous step, the method mapped by the corresponding network layer can be obtained from the network data; since each layer component under the neural network structure has corresponding network data, the methods of the neural network layers can be called through the network data, realizing the conversion of the neural network structure into the neural network.
Furthermore, the network data comprises parameter attributes, and the parameter attributes at least comprise the layer type of the corresponding layer component; since the layer types are mapped to the methods of the neural network layers, the method calls of the neural network layers can be achieved through the network data.
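The mapping from layer type to neural network layer method can be sketched as a lookup table of factory functions. This is a minimal, hypothetical illustration; the names (`layerMethods`, `buildLayer`) and the factory signatures are assumptions rather than the patent's actual implementation.

```typescript
// Hypothetical table mapping a layer type (from the parameter attributes)
// to the method that constructs the corresponding neural network layer.
type LayerFactory = (units: number) => { type: string; units: number };

const layerMethods: Record<string, LayerFactory> = {
  Dense: (units) => ({ type: "Dense", units }),
  LSTM:  (units) => ({ type: "LSTM", units }),
};

// Convert one piece of network data into an operable layer by calling the
// method mapped to its layer type.
function buildLayer(networkDatum: { layerType: string; units: number }) {
  const factory = layerMethods[networkDatum.layerType];
  if (!factory) throw new Error(`unknown layer type: ${networkDatum.layerType}`);
  return factory(networkDatum.units);
}

const layer = buildLayer({ layerType: "Dense", units: 64 });
```

Iterating `buildLayer` over all per-component network data, in predecessor-to-successor order, would yield the stacked layers of the converted neural network.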
By the method, the graphical neural network structure is converted into the neural network in the front-end client 110, so that the graphical design of the neural network is realized, and once the neural network structure is obtained, the neural network with a visualized structure can be obtained correspondingly, so that a visualized neural network system is obtained.
Therefore, the front-end client becomes a neural network visualization development tool based on a graphical user interface, a user can build and adjust the neural network according to the self requirement, the flexibility is enhanced, and the neural network can be widely used for realizing various services of the user.
Fig. 4 is a flowchart illustrating the description of step 310 according to the corresponding embodiment of fig. 3. In an exemplary embodiment, as shown in FIG. 4, this step 310 includes:
In step 311, in the neural network structure, the layer component's own component identifier, predecessor component identifier and successor component identifier are read for each layer component according to the distance offsets between the laid-out layer components.
The layer components connected with one another in the neural network structure are arranged in a front-to-back order, and each layer component has its own distance offset, different from those of its predecessor and successor layer components. Therefore, according to the distance offsets, the corresponding predecessor and successor layer components can be identified for each layer component, and the component identifiers are obtained.
Thus, the obtained component identifiers comprise the layer component's own component identifier, the predecessor component identifier and the successor component identifier.
In one exemplary embodiment, the neural network structure starts with an input layer component and connects the layer components in front-to-back order until finally connecting to an output layer component. Correspondingly, starting from the input layer component, the component identifier is read for each layer component, the predecessor and successor layer components are determined according to the distance offsets, and the predecessor component identifier corresponding to the predecessor layer component and the successor component identifier corresponding to the successor layer component are read.
Through the execution of step 311, the layer components present in the neural network structure are collected by means of the distance offsets. Each collected layer component is uniquely marked by its component identifier, and its connection relationships with other components in the neural network structure are described by the predecessor and successor component identifiers.
The collection of the neural network structure is thus realized through the distance offsets existing between the laid-out layer components, and on the basis of the laid-out neural network structure, a visual neural network system corresponding to it is realized.
It should be added here that, in the neural network structure, for a given layer component, the corresponding predecessor layer component is the layer component whose neural network operation is performed before that of the given layer component, and the corresponding successor layer component is the layer component whose neural network operation is performed after it. In other words, the predecessor and successor layer components of a layer component are consistent with the layer-by-layer processing order in the neural network: a neural network formed by stacking several network layers processes its input information in a definite layer-by-layer order.
In one exemplary embodiment, step 311 includes: starting from the default input layer component in the neural network structure, the self component identifier, predecessor component identifier and successor component identifier are read in sequence for the layer components according to the distance offsets between them, until the output layer component is read and the reading ends.
Whatever the neural network structure, an input layer component and an output layer component are present; therefore, the collection of the neural network structure starts from the input layer component and ends when the collection of the output layer component is completed.
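The traversal of step 311 can be sketched as reading components in order of their distance offsets, from the input layer component through to the output layer component. The TypeScript below is a hedged illustration: the `offset` field, the `readOrder` function and the identifier strings are invented for the example.

```typescript
// Illustrative stand-in for reading laid-out components by distance offset:
// components farther from the start of the layout have larger offsets.
interface LaidOutComponent { id: string; offset: number }

function readOrder(components: LaidOutComponent[]): string[] {
  // Sort ascending by offset so the input layer component comes first
  // and the output layer component comes last.
  return [...components].sort((a, b) => a.offset - b.offset).map((c) => c.id);
}

const order = readOrder([
  { id: "output-2", offset: 200 },
  { id: "input-0", offset: 0 },
  { id: "dense-1", offset: 100 },
]);
```

Given the sorted order, each component's neighbors in the list would supply its predecessor and successor component identifiers.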
In step 313, the network structure data of the layer component is formed with the layer component's own component identifier as the index and the predecessor and successor component identifiers as the index values.
After all the layer components in the neural network structure have been read in step 311 to obtain the self component identifier, predecessor component identifier and successor component identifier, the network structure data of each layer component is obtained by storing the predecessor and successor component identifiers with the component identifier as the index.
Accordingly, the network structure data of a layer component indicates the corresponding layer component together with its predecessor and successor layer components, and the network structure data of all the layer components together describe the collected neural network structure.
In an exemplary embodiment, step 313 includes: while reading the component identifiers of the layer components in the neural network structure, the read component identifier is taken as the index and the predecessor and successor component identifiers as the index values, which are updated into the network structure data of the neural network structure, thereby forming the network structure data of each layer component within the network structure data of the neural network structure.
Wherein, with the reading of the component identification in the neural network structure, the network structure data updating of the neural network structure is continuously carried out.
Therefore, the acquisition of the neural network structure is completed, each layer of component in the neural network structure acquires network structure data through acquisition, the conversion of the neural network structure to the neural network is realized by taking the network structure data as a basis, and the acquisition of the neural network structure and the acquisition of the network structure data provide possibility for the recognition of the neural network structure and the realization of a visual neural network system.
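As a minimal sketch of steps 311 and 313, the indexing scheme can be expressed in Python; the function name, tuple layout, and identifier strings below are illustrative assumptions, not the patented implementation:

```python
# Sketch: store, for each layer component, its predecessor and successor
# identifiers under its own identifier as the index.
def collect_structure(readings):
    """readings: (own_id, predecessor_id, successor_id) tuples, read in
    order of distance offset from the input layer component."""
    structure = {}
    for own_id, pred_id, succ_id in readings:
        structure[own_id] = {"pred": pred_id, "succ": succ_id}
    return structure

# Input layer -> one hidden layer component -> output layer.
layout = [("input", None, "dense_1"),
          ("dense_1", "input", "output"),
          ("output", "dense_1", None)]
structure = collect_structure(layout)
```

Reading starts at the input layer component and ends once the output layer component's entry has been stored, matching the collection order described above.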
Fig. 5 is a flow chart illustrating a description of step 330 according to a corresponding embodiment of fig. 3. In an exemplary embodiment, as shown in FIG. 5, this step 330 includes:
in step 331, the component identifiers used as indexes in the network structure data are traversed one by one, and parameter attributes are collected for the corresponding layer components.
After the collection of the neural network structure is completed and the network structure data is obtained, each layer component is parameterized, and the resulting parameter attributes together with the network structure data form the network data of the layer component. Since every layer component has network structure data, the parameter attributes are collected using the component identifier that serves as the index in the network structure data.
Specifically, each layer component in the neural network structure has associated parameter attributes, which may be preset or dynamically configured by the user. The parameter attributes indicate the network layer that the layer component implements and the parameters the network layer needs in order to execute neural network operations, namely the network layer structure parameters, such as hyper-parameters.
In one exemplary embodiment, the parameter attributes corresponding to a layer component are stored in a configuration file of the neural network structure and are also rendered into the corresponding settings window when the layer component is triggered. Accordingly, parameter attribute collection is performed on the configuration file or on the corresponding settings window, so as to obtain the parameter attributes of the layer component.
In step 333, the parameter attributes are stored under the index of the component identifier and, together with the network structure data, form the network data of the layer component.
Each layer component has network structure data indexed by its component identifier, and the collected parameter attributes are stored under that index to form the network data. That is, each layer component has corresponding network data, which uses the component identifier as the index and the parameter attributes, predecessor component identifier, and successor component identifier as the index values.
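Under assumed data shapes, the parameterization of step 333 amounts to merging the collected parameter attributes into the entry indexed by each component identifier; the names below are illustrative only:

```python
# Sketch: merge collected parameter attributes into the structure entry
# indexed by each component identifier, yielding per-component network data.
def parameterize(structure, attributes):
    """structure: {own_id: {"pred": ..., "succ": ...}};
    attributes: {own_id: parameter attributes collected from the
    configuration file or the settings window}."""
    return {cid: {**links, "params": attributes.get(cid, {})}
            for cid, links in structure.items()}

structure = {"input": {"pred": None, "succ": "dense_1"},
             "dense_1": {"pred": "input", "succ": None}}
network_data = parameterize(structure, {"dense_1": {"type": "Dense", "units": 8}})
```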
Fig. 6 is a flowchart illustrating a description of step 350 according to a corresponding embodiment of fig. 3. In an exemplary embodiment, as shown in FIG. 6, this step 350 includes:
in step 351, a linked list corresponding to the neural network structure is constructed by parsing the network data, and nodes in the linked list respectively correspond to layer components in the neural network structure.
After the network data is obtained by parameterizing the neural network structure, the conversion of the neural network structure into a neural network is carried out. Realizing this conversion through a linked list yields a data storage mode oriented to the components of the neural network structure and enhances the flexibility and orderliness of the conversion.
The nodes in the linked list correspond to the layer components in the neural network structure, so the constructed linked list corresponds to the deployed neural network structure. Every layer component in the neural network structure has corresponding network data; by parsing the network data, the network data of each layer component is stored node by node, and the nodes are linked in the front-to-back order of the layer components to construct the linked list corresponding to the neural network structure.
In step 353, the neural network structure is transformed into a runnable neural network by calling the neural network layer methods mapped by the nodes in the linked list.
It can be understood that each node in the linked list stores the network data of the corresponding layer component, and this network data includes parameter attributes, for example the layer type indicating a neural network layer method; on this basis, each node in the linked list can be mapped to a particular neural network layer method.
The mapped neural network layer methods are obtained in the link order of the nodes, and these methods, parameterized by the parameter attributes in the network data stored in the nodes, realize a runnable neural network.
Fig. 7 is a flow chart illustrating a description of step 351 according to a corresponding embodiment of fig. 6. In an exemplary embodiment, step 351, as shown in FIG. 7, includes:
in step 401, nodes are constructed and network data are stored in the nodes one by one according to the sequence of the corresponding layer components in the neural network structure.
In step 403, the nodes are sequentially linked to form a linked list corresponding to the neural network structure.
The sequence of the layer components in the neural network structure is the sequence from the input layer component to the output layer component, which is the order in which the corresponding network layers execute neural network operations and also the processing order of the input information. Each layer component in the neural network structure has corresponding network data, so a node is constructed for each layer component according to its front-to-back position in the neural network structure, and the network data of the layer component is stored in the constructed node.
By analogy, nodes are constructed for each layer of components in the neural network structure to complete storage of network data, and the nodes are sequentially linked together to form a linked list corresponding to the neural network structure. The data conversion of the graphical neural network structure is simply, conveniently and quickly realized in a linked list mode, and the reliability and the flexibility are enhanced.
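The node construction and linking of steps 401 and 403 can be sketched as follows; the Node class and field names are assumptions for illustration:

```python
# Sketch: build one node per layer component in front-to-back order and
# link the nodes into a singly linked list.
class Node:
    def __init__(self, data):
        self.data = data   # network data of one layer component
        self.next = None

def build_linked_list(network_data, head_id):
    head = tail = None
    cid = head_id
    while cid is not None:
        node = Node(network_data[cid])
        if head is None:
            head = node
        else:
            tail.next = node
        tail = node
        cid = network_data[cid]["succ"]   # follow successor identifiers
    return head

data = {"input": {"succ": "dense_1", "params": {}},
        "dense_1": {"succ": "output", "params": {"type": "Dense"}},
        "output": {"succ": None, "params": {}}}
head = build_linked_list(data, "input")
```

Following the successor identifiers reproduces the front-to-back order of the layer components, so the list ends at the output layer component's node.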
Fig. 8 is a flow chart illustrating the description of step 353 according to the corresponding embodiment of fig. 6. In an exemplary embodiment, the parameter attributes in the network data include the layer type and the network layer structure parameters of the corresponding layer component, and each node stores the layer type of its layer component; as shown in fig. 8, this step 353 includes:
in step 501, the mapped neural network layer method is invoked according to the layer types stored in the nodes in the linked list, and the neural network layer method is organized in a model manner to form the network layer in the neural network.
The parameter attribute in the network data is used for realizing the network layer corresponding to the layer component, the layer type contained in the network data is used for indicating the neural network layer method required to be called, and the neural network layer method is used for executing the neural network operation so as to complete the calculation of the corresponding network layer. That is, the layer type stored in the node is also the network layer type, and the front-end client can use the layer type to implement the neural network layer method. For example, the layer types may be Dense, RNN, LSTM, GRU, and the like.
For a network layer, the neural network layer methods called via the same layer type will form a neural network model, and then implement this network layer in a model manner.
In step 503, the network layer structure parameters stored in the node are loaded into the called neural network layer method.
The network layer structure parameters contained in the network data are parameters set before the neural network layer method is trained, such as hyper-parameters. After the neural network layer method is called, the network layer structure parameters stored in the node are passed into the method to enable the computation of the called neural network layer method.
In step 505, a neural network of the neural network layer structure map is formed from a number of neural network layer methods that are called and loaded with network layer structure parameters.
Several neural network layer methods, each called according to its layer type, are stacked together to form a neural network, realizing the conversion from the neural network structure to the neural network. Under the action of the graphical neural network structure, the neural network system realized by this neural network is visual: what the user obtains is the result of the neural network structure that is seen.
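A hedged sketch of this stacking: the registry below stands in for the real neural network layer methods (the document names Keras-style types such as Dense and LSTM), so its keys and constructors are purely illustrative:

```python
# Hypothetical stand-ins for real neural network layer methods; a real
# implementation would call e.g. Keras layer constructors instead.
LAYER_METHODS = {
    "Dense": lambda units: f"Dense({units})",
    "LSTM": lambda units: f"LSTM({units})",
}

def stack_network(layer_specs):
    """layer_specs: ordered (layer_type, structure_params) pairs; call
    the layer method mapped by each layer type and stack the results."""
    return [LAYER_METHODS[t](**params) for t, params in layer_specs]

model = stack_network([("LSTM", {"units": 16}), ("Dense", {"units": 1})])
```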
In another exemplary embodiment, the method for implementing a visualized neural network system as described above further includes:
and initializing the network layer structure parameters of the layer components in the distributed neural network structure, wherein the network layer structure parameters are used for the operation of the layer components.
For the layer components provided by the front-end client 110, the network layer structure parameters are initialized in advance, so that the operation of the layer components and the corresponding network layers is ensured through the initialized network layer structure parameters, the normal operation of the corresponding neural network can be realized even if the user does not set the network layer structure parameters, and the reliability of the visual neural network system is ensured.
In one exemplary embodiment, once the laying of the neural network structure is completed, i.e., once the neural network structure is obtained, network layer structure parameter initialization of layer components in the neural network structure is performed, configuring initial network layer structure parameters for each layer component.
In another exemplary embodiment, after initializing the network layer structure parameters of the layer components in the deployed neural network structure, the method for implementing the visualized neural network system further comprises:
and if the monitoring obtains the control of the layer component on the neural network structure, rendering the network layer structure parameters initialized by the layer component to be presented in a corresponding setting window.
On one hand, the layer components in the neural network structure are obtained by dragging from the front-end client 110, and on the other hand, the layer components in the neural network structure can display the network layer structure parameters along with the click of the user on the layer components, so as to provide an entrance for the user to view and modify the network layer structure parameters.
Specifically, when the control of the layer component on the neural network structure is obtained by interception, for example, a user clicks the layer component, a corresponding setting window is rendered for the layer component, and the network layer structure parameters are controlled to be presented in the setting window.
In another exemplary embodiment, after the step of controlling the rendering of the network layer structure parameters initialized by the layer components to be presented in the corresponding setting window, the implementation method of the visualization neural network system further includes:
and if the modified network layer structure parameters are intercepted, updating the modified network layer structure parameters to the network data corresponding to the layer component in real time.
The network layer structure parameters are all modifiable parameters along with the rendering display of the network layer structure parameters in the setting window, and a user can modify the network layer structure parameters in a self-defined manner. As previously mentioned, the network layer structure parameters are stored in the network data, and the custom modified network layer structure parameters are updated to the network data in real time.
In another exemplary embodiment, the method for implementing the visualized neural network system further includes:
and obtaining the distributed neural network structure through the layout and connection of the selected layer components under the user interface of the fixed layer components.
The toolbar of the front-end client provides a number of layer components for users to select, and several layer components are fixedly arranged in the building area used to construct the neural network structure. These layer components are fixed in the building area because they are very likely to be needed by whatever neural network structure is constructed, which simplifies user operation. Specifically, the fixed layer components include the input layer component, the output layer component, and the layer component corresponding to the fully connected layer.
The layer components selected by the user in the toolbar are dragged to the positions between the layer components in the fixed configuration so as to lay and connect the layer components, and finally a laid neural network structure is formed. Of course, it can be understood that, for a layer component with a fixed configuration, the layer component can be deleted under the control of a user under the condition that the layer component is not suitable for the situation, for example, the layer component can be dragged to a trash can by the user to be deleted, so that the flexibility of laying the neural network structure by the user is ensured, and the laying of the neural network structure is not limited by the layer component with the fixed configuration.
FIG. 9 is a flowchart illustrating the step of obtaining a deployed neural network structure through the layout and connection of selected layer components under a user interface with fixed layer components, according to an exemplary embodiment. In an exemplary embodiment, as shown in FIG. 9, this step includes:
in step 601, the selected layer component is placed between the input layer component and the output layer component at the user interface where the input layer component and the output layer component are secured.
In step 603, connections between layer components in the user interface are made to form a neural network structure, the layer components including fixed input layer components, output layer components, and selected layer components.
The neural network structure is formed by placing and connecting selected layer components starting with the input layer components and ending with the output layer components. Of course, for the user interface, the fixed layer components are not limited to the input layer components and the output layer components, and may include other layer components, so that the obtaining of the neural network structure further includes the adjustment of the original layer components, for example, the deletion of unnecessary layer components, the position adjustment of layer components, and the like.
That is, in another exemplary embodiment, the fixed number of layer components includes a deletable layer component, and the step of obtaining the deployed neural network structure by deploying and connecting the selected layer components under the user interface of the fixed number of layer components further includes:
and if the user deletes the deletable layer component by interception, deleting the deletable layer component under the user interface for fixing the plurality of layer components.
In one exemplary embodiment, step 601 includes: the selected set of layers is placed in a user interface having fixed input layer components and output layer components according to the order of the selected layer components back and forth in the neural network structure to be laid down.
Wherein the offset of the distance of the selected layer components placed relative to the input layer components corresponds to their sequential order in the desired neural network architecture.
It will be appreciated that, for a layer component, the smaller its distance offset relative to the input layer component, the earlier it comes in the sequence, and a layer component with a relatively larger distance offset comes later. The user selects a layer component according to the network layer to be implemented and that network layer's stacking relationship with the other network layers in the neural network, and thereby sets the front-to-back order of the selected layer components in the neural network structure to be laid out.
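The offset-based ordering can be sketched in a few lines of Python; the function and identifier names are assumptions:

```python
# Sketch: smaller distance offset relative to the input layer component
# means earlier in the neural network structure.
def order_by_offset(offsets):
    """offsets: {component_id: distance offset from the input layer}."""
    return sorted(offsets, key=offsets.get)

order = order_by_offset({"output": 30, "dense_1": 10, "dense_2": 20})
```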
In another exemplary embodiment, the method for implementing the visualized neural network system as described above further includes:
and uploading data through operation triggered by a user on the input layer component on the user interface, wherein the uploaded data is used for training the neural network converted by the neural network structure.
It should be noted here that, first, the input layer component has an upload function, and the upload function is used for uploading data required for performing corresponding neural network training and prediction, for example, uploading training data and test data for neural network training and learning.
And triggering the input layer assembly on the neural network structure by the user to execute an uploading function, and uploading data imported by the user to the neural network structure.
The data uploading referred to is not limited to the data transmission from the front-end client 110 to the back-end server 130 via the internet, and in some embodiments, the data uploading refers to a process of importing data from the front-end client 110 by a user.
Fig. 10 is a flowchart illustrating an implementation method of a visualization neural network system, according to another example embodiment. In another exemplary embodiment, as shown in fig. 10, the method for implementing a visualized neural network system further includes the following steps after performing step 350:
in step 810, the format of the uploaded data is converted through the calling of the corresponding control in the front-end client, so as to form a model dictionary containing training data and test data, and the front-end client is used for realizing a user interface.
As mentioned above, the visualized neural network system is implemented by the front-end client 110, and the corresponding neural network, i.e. the neural network obtained by the transformation of the neural network structure through the foregoing steps, needs training data and test data in order to perform training learning.
As the input layer component is triggered to perform the upload function, the front-end client 110 obtains uploaded data. In an exemplary embodiment this is mostly in the form of table files, such as files with the xlsx or csv suffix; the training data and test data are therefore extracted from the table files and converted into the model dictionary format.
Specifically, if the training of the neural network is performed in the back-end server 130, the model dictionary also needs to be packaged into a specified format, such as json format or pandas.DataFrame format, so that it is easily read by the back-end server 130 and transmitted.
For example, the training data in the uploaded data corresponds to two broad categories, features and labels; that is, the training data includes feature data and label data, and the two categories are arranged in columns in the table file so that they can be read orderly and accurately and stored in the dictionary format { feature: feature data, label: label data }. The same is true of the test data in the uploaded data.
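A sketch of this extraction and packaging, assuming a csv table whose label column is named explicitly; all column names and dictionary keys are illustrative:

```python
import csv
import io
import json

def table_to_model_dict(csv_text, label_col):
    """Split the columns of an uploaded table file into the dictionary
    format { feature: feature data, label: label data }."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return {
        "feature": [{k: v for k, v in r.items() if k != label_col}
                    for r in rows],
        "label": [r[label_col] for r in rows],
    }

model_dict = table_to_model_dict("x1,x2,y\n1,2,0\n3,4,1\n", "y")
payload = json.dumps(model_dict)   # packaged into json for transfer
```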
In step 830, the deep learning framework is invoked by the front-end client, and the local resource is invoked under the deep learning framework.
In the specific implementation of this exemplary embodiment, the neural network obtained through the conversion of the neural network structure does not depend on training in the back-end server; it can be trained and used entirely locally at the front-end client.
Therefore, for the uploaded data obtained by the front-end client for the neural network structure, a deep learning framework is called, for example the TensorFlow.js framework, a JavaScript-based deep learning framework; under this framework, the browser calls the hardware resources of the local computer, such as the CPU and GPU, to carry out the training and use of the model.
In step 850, training and use of the neural network is performed using the called local resources and the model dictionary under the deep learning framework.
It should be understood that the operation of using the model under the deep learning framework is the use process of the neural network, and is a series of neural network operation processes performed based on the neural network training learning realized by the training data and the test data.
With the present exemplary embodiment, the above-mentioned series of steps are all executed locally at the front-end client 110, so that server resources are not consumed, and operations related to training and learning, etc. which occupy the server resources, are avoided.
In another exemplary embodiment, the method for implementing the visualized neural network system as described above further includes:
and performing local saving of the model on the neural network through a model saving interface of the deep learning framework.
After the training of the neural network is completed locally at the front-end client according to the exemplary embodiment shown in fig. 10, the model of the neural network is saved locally at the front-end client 110 through the model saving interface of the deep learning framework, so that the neural network can be reused and called at any time when needed.
In another exemplary embodiment, the method for implementing the visualized neural network system as described above further includes:
and executing the downloading function of the neural network through the operation triggered by the user on the output layer component on the user interface.
The output layer component has a download function, similar to the input layer component, for downloading data returned after the neural network evaluation, data output by using the neural network, and the like. The output layer components, like the input layer components, all serve as default settings for the construction of the neural network architecture by the front-end client 110.
With the training learning or the use of the neural network converted by the neural network structure, the user downloads the output data through the triggering of the output layer component, so as to obtain the training learning evaluation and prediction result of the neural network.
Based on the implementation method of the visualized neural network system, the neural network can be implemented on the graphical neural network structure at the front-end client 110, and then the visualized neural network system is implemented, that is, what network layers are stacked in the neural network system, how to stack the network layers is clear and visible for corresponding users, and the neural network can be accurately optimized in real time.
Through the front-end client 110, while the graphical user interface-based neural network visualization development is realized, the neural network structure formed in various modes such as layer component dragging helps a user develop a neural network without threshold and zero programming, a rear-end server and local various neural network training can be carried, and the occupation and consumption of resources are reduced.
The implementation of the visualized neural network system is illustrated below in combination with the above method, taking the drag-and-drop construction of layer components in the front-end client 110 as an example.
The user builds a neural network by dragging and uploads training data and test data at the front-end client 110. The front-end client 110 collects the neural network structure dragged and constructed by the user and the uploaded data, and on one hand, the neural network structure can be sent to the back-end server 130, and on the other hand, the neural network structure can be locally processed.
At this time, for the acquired neural network structure, the front-end client 110 converts the neural network structure into a neural network with the cooperation of the back-end server, thereby implementing the developed visual neural network system for the user.
For the uploaded data, after a series of processing such as abnormal data elimination and normalization is performed, iterative training of the neural network is performed, the neural network is evaluated after the training is completed, and an evaluation result is returned to the front-end client 110. Of course, on the other hand, the processing and training performed may also be performed locally at the front-end client 110, and is not limited herein.
The trained neural network stores the corresponding model for the subsequent calling of the user.
FIG. 11 is a block diagram of a system architecture implemented in a particular application. In this particular application, the uploaded data is processed at the back end, i.e., the back end server 130 referred to above, and then returned to the front end.
As shown in fig. 11, the front-end client 110 is implemented by a local client browser. The user calls a deep learning framework, such as the TensorFlow.js framework, through the browser at the front end, and under the client browser the deep learning framework calls the hardware resources of the local computer, such as the CPU and GPU, to train and use models in the manner of Keras and TensorFlow, i.e., the training and use of the aforementioned neural network layer methods.
That is, the front end collects data for a neural network constructed by dragging a user, that is, a neural network structure, and the collected neural network structure and data, that is, network data, are packaged and sent to the back end.
The backend receives the data, and as depicted in fig. 11, the backend will perform operations such as "process data," "store to database," and "return data" on the received data. The back end processes the data in series, stores partial information in the database, packages the data to be returned and sends the data back to the front end when appropriate, and the front end displays the data returned by the back end and displays the data to the user.
Further, after receiving the data, the back end unpacks the data, and constructs a corresponding neural network according to the transmitted parameters, so as to obtain the neural network corresponding to the neural network structure set at the front end.
For the neural network construction, as indicated in the foregoing description, the network data obtained by collecting and parameterizing the neural network structure is stored for each layer component through the nodes of a linked list. That is, after the back end receives the network data, it first builds a linked list according to the content of each layer component in the network data. Each node of the linked list stores the information of one layer of the neural network, including the layer type and the network layer structure parameters (such as the number of units, the dropout value, and so on). Since the layer type is mapped to a neural network layer method, a node storing a layer type also carries the means to build a neural network layer from its own information, namely the invocation of the aforementioned neural network layer method: for example, different Keras functions (Dense, SimpleRNN, LSTM, and GRU) are invoked according to the layer type contained in the node, and the network layer structure parameters stored in the node are passed into the Keras function in the form of **kwargs. In this way the network data is transformed into a neural network.
The established linked list is traversed from the head node to the tail node, the neural network layer method mapped by the layer type in each node is called in turn, and finally the data stored in the entire linked list is completely converted into a trainable neural network.
In other words, each layer of the trainable neural network is created by reading and interpreting the information in one node, until all nodes have been interpreted.
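The traversal just described can be sketched with plain dictionaries standing in for nodes; the registry entries stand in for the real Keras functions, and all names are assumptions:

```python
# Hypothetical registry in place of real Keras constructors (Dense,
# SimpleRNN, LSTM, GRU); each entry just records what it was called with.
REGISTRY = {"Dense": lambda **kw: ("Dense", kw),
            "LSTM": lambda **kw: ("LSTM", kw)}

def linked_list_to_layers(head):
    """head: first node of the linked list; each node is a dict with
    'type' (layer type), 'kwargs' (network layer structure parameters)
    and 'next' (the node pointed to, or None at the tail)."""
    layers = []
    node = head
    while node is not None:
        ctor = REGISTRY[node["type"]]
        layers.append(ctor(**node["kwargs"]))   # parameters passed as **kwargs
        node = node["next"]
    return layers

tail = {"type": "Dense", "kwargs": {"units": 1}, "next": None}
head = {"type": "LSTM", "kwargs": {"units": 16, "dropout": 0.2}, "next": tail}
layers = linked_list_to_layers(head)
```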
FIG. 12 is a schematic diagram illustrating a node interpretation process according to an exemplary embodiment. In an exemplary embodiment, taking a Keras network as an example, the network data built into the linked list is dictionary-type data, so each node's content stored in the linked list includes the node number, the type of the corresponding network layer, other parameters, the number of the next node pointed to, and so on. Here the node number is the identifier of the corresponding layer component, the type of the corresponding network layer is the layer type, and the other parameters are the network layer structure parameters.
As shown in fig. 12, each node corresponds to a network layer, for example, the node numbered 1 corresponds to an RNN network layer in a Keras network.
For the converted trainable neural network, the user uploads data through the input layer component on the neural network structure; from the data uploaded by the user, the front end extracts features and labels and stores them in the dictionary format { feature: feature data, label: label data }.
The data packets stored in the dictionary format are sent to the back-end server 130. The user may use his or her own uploaded data or choose to use data provided by the front-end client 110 for training learning of the neural network.
The data uploaded by the user is processed before use. The processing involved includes preprocessing, normalization, and the like. For example, preprocessing discards abnormal data, and normalization is performed on the label data and the feature data. The normalization method used may be selected by the user, for example dividing all data by the mean, or dividing all data by the difference between the maximum and minimum values. The back-end server 130 then selects the corresponding method from the configured dictionary of normalization methods according to the user's selection at the front end and applies it to the feature data and the label data.
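A sketch of such a configured normalization method dictionary, mirroring the two example methods in the text (divide by the mean, divide by the max-min range); the dictionary keys and function names are illustrative:

```python
# Two normalization methods from the text, selectable via a dictionary.
def divide_by_mean(xs):
    mean = sum(xs) / len(xs)
    return [x / mean for x in xs]

def divide_by_range(xs):
    span = max(xs) - min(xs)
    return [x / span for x in xs]

# The back end picks the method matching the user's front-end selection.
NORMALIZERS = {"mean": divide_by_mean, "range": divide_by_range}

normalized = NORMALIZERS["mean"]([1.0, 2.0, 3.0])
```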
The processed data can then be input into the neural network for training; after each training round, the network is evaluated and the evaluation result is returned to the front end.
FIG. 13 is a schematic diagram illustrating an application of neural network training according to an exemplary embodiment. In this exemplary embodiment, as shown in FIG. 13, the training model, i.e., the neural network, is trained using the uploaded and processed data; each training round is evaluated under the evaluation model, and the evaluation result obtained is returned to the front end for presentation to the user.
The data returned to the front end includes: the current loss value, the current training-set accuracy (for classification tasks), the current validation-set accuracy (for classification tasks), the current training-set prediction results (for regression tasks), the current validation-set prediction results (for regression tasks), and the predicted remaining time of the training task, among others.
The data returned to the front end is still dictionary-type and is packaged into a specified format, such as JSON format or a pandas DataFrame.
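Assembling and packaging one such per-round result can be sketched as follows. The field names and the simple remaining-time estimate are illustrative assumptions, not the system's actual payload schema.

```python
import json

def package_epoch_result(epoch, total_epochs, logs, seconds_per_epoch):
    """Assemble the dictionary-type payload returned to the front end after a
    training round, then package it in JSON (field names are assumed)."""
    payload = {
        "loss": logs.get("loss"),
        "train_accuracy": logs.get("accuracy"),      # classification tasks
        "val_accuracy": logs.get("val_accuracy"),    # classification tasks
        # naive remaining-time estimate: rounds left times seconds per round
        "remaining_seconds": (total_epochs - epoch - 1) * seconds_per_epoch,
    }
    return json.dumps(payload)
```

Because the payload is plain JSON, the front end can parse it directly and feed it to whatever chart or table component presents the round's result.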
The front end performs automatic visualization of the data with the support of the back-end server 130. It should be noted that the data that can be automatically visualized at the front end includes both the data returned by the back-end server 130 and the data uploaded by the user.
In this implementation, the data is in pandas DataFrame format. The back-end server 130 pre-deploys a data visualization platform, with whose support each variable in the data is listed as a dimension in the form of a selection box for the user to choose from. After selecting the dimensions of interest, the user clicks the visualization button to trigger the data visualization platform to perform the visualization and obtain the corresponding visualization graph.
For the use of the neural network, in an exemplary embodiment, sensitivity analysis may further be performed on the prediction result to obtain the influence of each feature on it; the sensitivity analysis result may likewise be presented visually with the support of the data visualization platform.
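One simple form of such an analysis can be sketched as follows. This is an assumed perturbation-based approach chosen for illustration; the patent does not specify which sensitivity-analysis method is used.

```python
import numpy as np

def sensitivity(predict, X, delta=0.05):
    """Perturbation-based sensitivity sketch (an assumed approach): perturb
    each feature by a small fraction and record the mean absolute change in
    the prediction, giving each feature's influence on the result."""
    base = np.asarray(predict(X), dtype=float)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = Xp[:, j] * (1.0 + delta)  # perturb feature j only
        change = np.abs(np.asarray(predict(Xp), dtype=float) - base).mean()
        scores.append(float(change))
    return scores
```

The per-feature scores form exactly the kind of small table that the data visualization platform can render, for example as a bar chart of feature influence.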
Thus, a trained and usable neural network, i.e., a stack of network layers, is saved in the back-end server 130 and locally for use as a module component, which facilitates repeated use and sharing; it can also be added as a custom layer component to the toolbar of the front-end client 110.
As indicated above, the transformation of the neural network structure into a neural network includes invoking the neural network layer methods mapped by the neural network structure. It should be understood that each invoked neural network layer method implements the neural network operation of the corresponding network layer; specifically, a neural network layer method consists of a series of code information, and the neural network operation is realized by executing that code. The obtained neural network can therefore be exported with its corresponding code information, turning the neural network that the user constructed through the graphical interface into executable code and meeting the requirements of many scenarios.
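Such an export can be sketched by walking the linked-list nodes and emitting one layer-construction statement per node. The node field names and the Keras `Sequential` form of the emitted code are assumptions for illustration, not the patent's actual export format.

```python
def export_code(nodes, start_id=1):
    """Hypothetical sketch of exporting the converted network as executable
    code: walk the linked-list nodes in order and emit one Keras layer call
    per node (node field names and output form are assumed)."""
    lines = ["model = keras.Sequential()"]
    node = nodes.get(start_id)
    while node is not None:
        args = ", ".join(f"{k}={v!r}" for k, v in node["params"].items())
        lines.append(f"model.add(keras.layers.{node['layer_type']}({args}))")
        node = nodes.get(node["next"])  # follow the pointed next node number
    return "\n".join(lines)
```

Because the linked list already stores each layer's type and structure parameters, the export is a straightforward traversal: every node becomes one line of code.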
The data stored in the database by the back end includes at least: basic user information, network addresses and physical addresses, training execution times, resource occupation durations, neural network structures, and the like, so that quick search and retrieval can be realized relying on the data stored in the database.
By the method described above, a visualized neural network system is provided to the user at the front-end client 110, and any user can use the front-end client 110 to build the neural network he or she needs and apply it to the required scenario.
The following is an embodiment of the apparatus of the present invention, which may be used to execute the above-described embodiments of the method for implementing the visualized neural network system of the present invention. For details not disclosed in the apparatus embodiment, please refer to the embodiments of the method for implementing the visualized neural network system of the present invention.
Fig. 14 is a block diagram illustrating an apparatus for implementing a visualized neural network system according to an exemplary embodiment. In an exemplary embodiment, as shown in fig. 14, the apparatus for implementing the visualized neural network system includes, but is not limited to: a structure acquisition module 910, a parameterization module 930, and a method invoking module 950.
A structure acquisition module 910, configured to collect the laid-out neural network structure to obtain network structure data corresponding to the layer components in the neural network structure;
a parameterization module 930, configured to parameterize the neural network structure according to the network structure data to obtain network data, the network data corresponding to the layer components through the network structure data it contains;
a method invoking module 950, configured to invoke the methods of the neural network layers under the neural network structure through the network data, so as to convert the neural network structure into a neural network.
Optionally, the present invention further provides an electronic device, which may be used in the implementation environment shown in fig. 1 to execute all or part of the steps of the method shown in any one of fig. 3 to 10. The device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the implementation method of the visualized neural network system described above.
The specific manner in which the processor of the apparatus in this embodiment performs operations has been described in detail in relation to the foregoing embodiments and will not be elaborated upon here.
In an exemplary embodiment, a storage medium is also provided. The storage medium is a computer-readable storage medium, which may be, for example, a transitory or non-transitory computer-readable storage medium including instructions. The storage medium includes, for example, the memory 204 of instructions executable by the processor 218 of the device 200 to perform the methods described above.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. An implementation method of a visualized neural network system, the method comprising:
collecting the laid-out neural network structure to obtain network structure data corresponding to a layer component in the neural network structure;
parameterizing the neural network structure according to the network structure data to obtain network data, wherein the network data corresponds to the layer component through the network structure data it contains;
and calling a method of a neural network layer under the neural network structure through the network data to convert the neural network structure into a neural network.
2. The method of claim 1, wherein collecting the laid-out neural network structure to obtain network structure data corresponding to a layer component in the neural network structure comprises:
reading, according to the distance offsets between the laid-out layer components, a self component identifier, a predecessor component identifier and a successor component identifier of a layer component in the neural network structure;
and forming the network structure data of the layer component by taking the self component identifier of the layer component as an index and the predecessor component identifier and the successor component identifier as index values.
3. The method of claim 2, wherein reading the self component identifier, the predecessor component identifier and the successor component identifier of the layer components according to the distance offsets between the laid-out layer components in the neural network structure comprises:
starting from an initial default input layer component in the neural network structure, reading the self component identifier, predecessor component identifier and successor component identifier of each layer component in the neural network structure according to the distance offsets between the layer components, until the reading ends at an output layer component.
4. The method of claim 2, wherein forming the network structure data of the layer component by taking the self component identifier of the layer component as an index and the predecessor component identifier and the successor component identifier as index values comprises:
while reading the component identifiers of the layer components in the neural network structure, updating the network structure data of the neural network structure by taking each read component identifier as an index and the predecessor component identifier and successor component identifier as index values, thereby forming the network structure data of the layer components within the network structure data of the neural network structure.
5. The method of claim 1, wherein parameterizing the neural network structure from the network structure data to obtain network data comprises:
acquiring, one by one, the parameter attributes of the layer components corresponding to the component identifiers serving as indexes in the network structure data;
and updating the parameter attributes under the indexes of the component identifiers, forming, together with the network structure data, the network data of the layer components.
6. The method of claim 1, wherein invoking a method of a neural network layer under the neural network structure through the network data to convert the neural network structure into a neural network comprises:
building a linked list corresponding to the neural network structure by analyzing the network data, wherein nodes in the linked list respectively correspond to layer components in the neural network structure;
and converting the neural network structure into a runnable neural network by calling the neural network layer methods mapped by the nodes in the linked list.
7. The method of claim 6, wherein constructing a linked list corresponding to the neural network structure by parsing the network data comprises:
constructing nodes according to the front-back sequence of the corresponding layer components in the neural network structure and storing the network data in the nodes one by one;
the nodes are linked in sequence to form a linked list corresponding to the neural network structure.
8. The method of claim 6, wherein the parameter attributes in the network data include the layer type and the network layer structure parameters of the corresponding layer component, the nodes store the layer types of the layer components, and converting the neural network structure into a runnable neural network through invocation of the neural network layer methods mapped by the nodes in the linked list comprises:
calling the mapped neural network layer method according to the layer type stored in each node of the linked list, the neural network layer method being organized in the model to form a network layer in the neural network;
loading the network layer structure parameters stored in the nodes into the called neural network layer method;
a neural network of the neural network layer structure map is formed by a number of neural network layer methods that are invoked and loaded with the network layer structure parameters.
9. An apparatus for implementing a visual neural network system, the apparatus comprising:
the structure acquisition module is used for collecting the laid-out neural network structure to obtain network structure data corresponding to the layer components in the neural network structure;
a parameterization module for parameterizing the neural network structure according to the network structure data to obtain network data, the network data corresponding to a layer component by including the network structure data;
and the method calling module is used for calling a method of a neural network layer under the neural network structure through the network data and converting the neural network structure into a neural network.
10. A machine device, characterized in that the machine device comprises:
a processor; and
a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method of any of claims 1 to 8.
CN202010617504.3A 2019-07-01 2020-06-30 Implementation method, device and machine equipment of visual neural network system Active CN111967570B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910591931 2019-07-01
CN2019105919316 2019-07-01

Publications (2)

Publication Number Publication Date
CN111967570A true CN111967570A (en) 2020-11-20
CN111967570B CN111967570B (en) 2024-04-05

Family

ID=73360922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010617504.3A Active CN111967570B (en) 2019-07-01 2020-06-30 Implementation method, device and machine equipment of visual neural network system

Country Status (1)

Country Link
CN (1) CN111967570B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130239006A1 (en) * 2012-03-06 2013-09-12 Sergey F. Tolkachev Aggregator, filter and delivery system for online context dependent interaction, systems and methods
US20160217368A1 (en) * 2015-01-28 2016-07-28 Google Inc. Batch normalization layers
WO2016197054A1 (en) * 2015-06-05 2016-12-08 Google Inc. Whitened neural network layers
WO2017196929A1 (en) * 2016-05-10 2017-11-16 Google Llc Audio processing with neural networks
US20180144242A1 (en) * 2016-11-23 2018-05-24 Microsoft Technology Licensing, Llc Mirror deep neural networks that regularize to linear networks
CN108319456A (en) * 2018-01-29 2018-07-24 徐磊 A kind of development approach for exempting to program deep learning application
CN108369664A (en) * 2015-11-30 2018-08-03 谷歌有限责任公司 Adjust the size of neural network
CN108475345A (en) * 2015-11-12 2018-08-31 谷歌有限责任公司 Generate larger neural network
CN108470213A (en) * 2017-04-20 2018-08-31 腾讯科技(深圳)有限公司 Deep neural network configuration method and deep neural network configuration device
CN108537328A (en) * 2018-04-13 2018-09-14 众安信息技术服务有限公司 Method for visualizing structure neural network
US20180288086A1 (en) * 2017-04-03 2018-10-04 Royal Bank Of Canada Systems and methods for cyberbot network detection
US20180285740A1 (en) * 2017-04-03 2018-10-04 Royal Bank Of Canada Systems and methods for malicious code detection
CN109283469A (en) * 2018-09-21 2019-01-29 四川长虹电器股份有限公司 Battery management system failure prediction method, device and readable storage medium storing program for executing
CN109492747A (en) * 2017-09-13 2019-03-19 杭州海康威视数字技术股份有限公司 A kind of the network structure generation method and device of neural network
US20190122360A1 (en) * 2017-10-24 2019-04-25 General Electric Company Deep convolutional neural network with self-transfer learning
CN109767001A (en) * 2019-01-07 2019-05-17 深圳增强现实技术有限公司 Construction method, device and the mobile terminal of neural network model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIN, ZHUWEI et al.: "How convolutional neural networks see the world - A survey of convolutional neural network visualization methods", arXiv preprint arXiv:1804.11191, 31 May 2018 (2018-05-31), pages 1-32 *
ZHANG, KAIJIAO: "Visualized Linen Yarn Quality Prediction System Based on Python Machine Learning", China Master's Theses Full-text Database, Engineering Science and Technology I, no. 1, 15 January 2018 (2018-01-15), pages 024-29 *

Also Published As

Publication number Publication date
CN111967570B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
JP6944548B2 (en) Automatic code generation
US11368415B2 (en) Intelligent, adaptable, and trainable bot that orchestrates automation and workflows across multiple applications
US11392855B1 (en) GUI for configuring machine-learning services
US11182697B1 (en) GUI for interacting with analytics provided by machine-learning services
CN108664331A (en) Distributed data processing method and device, electronic equipment, storage medium
US11868739B2 (en) Device and method for providing application translation information
US20210208854A1 (en) System and method for enhancing component based development models with auto-wiring
CN108304201A (en) Object updating method, device and equipment
US20180081642A1 (en) Connectors framework
US11442704B2 (en) Computerized system and method for a distributed low-code / no-code computing environment
US20230206420A1 (en) Method for detecting defect and method for training model
US11467553B2 (en) Efficient configuration of scenarios for event sequencing
CN111967570B (en) Implementation method, device and machine equipment of visual neural network system
US20230117893A1 (en) Machine learning techniques for environmental discovery, environmental validation, and automated knowledge repository generation
CN108804088A (en) protocol processing method and device
CN111694637B (en) Online full-automatic multi-agent control simulation compiling system
CN111819536A (en) Method, device and machine equipment for realizing construction and operation of artificial intelligence application
US11132374B2 (en) Property painter
CN111868683B (en) Operation realization method, device and machine equipment in artificial intelligent application construction
CN109783152B (en) Physical hardware control method, device and computer readable storage medium
CN111310425A (en) System implementation method for list intellectualization and intelligent list system
US20230289358A1 (en) Analysis System
JP6916330B2 (en) Image analysis program automatic build method and system
US20240221318A1 (en) Solution of body-garment collisions in avatars for immersive reality applications
US20230168923A1 (en) Annotation of a Machine Learning Pipeline with Operational Semantics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240227

Address after: Building 683, 2nd Floor, No. 5 Zhongguancun Street, Haidian District, Beijing, 100000, 20209

Applicant after: Beijing Dingji Technology Co.,Ltd.

Country or region after: China

Address before: Room 420, 4th Floor, Building 4, No. 133 Development Avenue, Tongxiang Economic Development Zone, Tongxiang City, Jiaxing City, Zhejiang Province, 314500

Applicant before: Jiaxing Diji Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant
GR01 Patent grant