CN108470213A - Deep neural network configuration method and deep neural network configuration device - Google Patents
- Publication number: CN108470213A
- Application number: CN201710262693.5A
- Authority
- CN
- China
- Prior art keywords: configuration, layer, area, neural network, deep neural
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications (G — Physics; G06 — Computing, calculating or counting; G06N — Computing arrangements based on specific computational models; G06N3/00 — based on biological models; G06N3/02 — Neural networks)
- G06N3/082 — Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G06N3/04 — Architecture, e.g. interconnection topology
- G06N3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
Abstract
The present invention provides a deep neural network configuration method and a deep neural network configuration device. A layer component selection area, a hierarchy model definition area and a layer parameter setting area are shown on a configuration interface for the deep neural network. The deep neural network configuration method comprises: obtaining layer structure configuration information through the configuration interface; according to the layer structure configuration information, selecting a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area; obtaining layer parameter setting information through the layer parameter setting area; and, according to the layer parameter setting information, invoking the corresponding layer parameter setting component to configure the corresponding layer parameters for the layer component at the selected hierarchy position. With the technical solution of the present invention, a deep neural network can be configured simply by operating a graphical configuration interface, which greatly simplifies the configuration work of the deep neural network, reduces deployment cost and error rate, and improves the performance of the deep neural network.
Description
【Technical field】
The present invention relates to the field of computer technology, and more particularly to a deep neural network configuration method and a deep neural network configuration device.
【Background technology】
At present, users often configure a deep neural network by editing a text configuration file. A simple text configuration file is shown in Fig. 1, where NN (Neural Network) refers to the neural network: the part above NN is used for the global configuration of the deep neural network model, and the part below NN is the hierarchy configuration of the model, divided into multiple layers, each layer having its own configuration parameters.
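The patent does not reproduce the contents of Fig. 1; purely as an illustration of the kind of file described above (global settings above NN, per-layer blocks below), a hypothetical configuration file of this sort might look like:

```text
momentum: 0.9
learning_rate: 0.01
NN
layer1: data
layer2: convolution
  output_dim: 64
layer3: loss
```

Every key and value here is invented for illustration; the patent fixes no file grammar.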
When editing the text configuration file, the user needs to edit the global configuration and the parameters of every layer in the hierarchy configuration one by one. The more layers there are, the more complicated and tedious the editing becomes, so it is easy to make mistakes during configuration, which affects the operation of the deep neural network.
Therefore, how to provide a simpler and more efficient way of configuring a deep neural network has become a technical problem that urgently needs to be solved.
【Invention content】
Embodiments of the present invention provide a deep neural network configuration method and a deep neural network configuration device, which aim to solve the technical problem in the related art that configuring a deep neural network is overly complicated and tedious, and which can provide a simpler and more efficient way of configuring a deep neural network.
In a first aspect, an embodiment of the present invention provides a deep neural network configuration method. A layer component selection area, a hierarchy model definition area and a layer parameter setting area are provided on a configuration interface for the deep neural network, and the deep neural network configuration method comprises: obtaining a layer structure configuration instruction through the configuration interface; according to the layer structure configuration instruction, selecting a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area; obtaining a layer parameter setting instruction through the layer parameter setting area; and, according to the layer parameter setting instruction, invoking the corresponding layer parameter setting component to configure the corresponding layer parameters for the layer component at the selected hierarchy position.
In the above embodiment of the present invention, optionally, the step of selecting a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area according to the layer structure configuration instruction specifically comprises: according to the layer structure configuration instruction, selecting a target layer component in the layer component selection area and selecting a target hierarchy position in the hierarchy model definition area; and placing the target layer component at the target hierarchy position.
In the above embodiment of the present invention, optionally, the method further comprises: obtaining a hierarchy quantity configuration instruction through the hierarchy model definition area; and, according to the hierarchy quantity configuration instruction, setting the number of hierarchy positions in the hierarchy model definition area.
In the above embodiment of the present invention, optionally, the configuration interface is further provided with a global configuration area, the global configuration area has at least one global parameter configuration item, each global parameter configuration item corresponds to a global parameter configuration component, and the deep neural network configuration method further comprises: obtaining a global parameter configuration instruction through a global parameter configuration item in the global configuration area; and, according to the global parameter configuration instruction, invoking the corresponding global parameter configuration component to configure a global parameter for the deep neural network.
In the above embodiment of the present invention, optionally, the configuration interface is further provided with a layer post-configuration area, the layer post-configuration area has at least one layer post-configuration item, each layer post-configuration item corresponds to a layer post-configuration component, and the deep neural network configuration method further comprises: obtaining a layer post-configuration instruction through a layer post-configuration item in the layer post-configuration area; and, according to the layer post-configuration instruction, invoking the corresponding layer post-configuration component to perform layer post-configuration for the deep neural network.
In the above embodiment of the present invention, optionally, one or more of the layer component selection area, the layer parameter setting area, the global configuration area and the layer post-configuration area has a corresponding component configuration item, and the deep neural network configuration method further comprises: adding or deleting components in the corresponding area through the component configuration item.
In a second aspect, an embodiment of the present invention provides a deep neural network configuration device. A layer component selection area, a hierarchy model definition area and a layer parameter setting area are provided on a configuration interface for the deep neural network, and the deep neural network configuration device comprises: a first acquisition unit, which obtains a layer structure configuration instruction through the configuration interface; a layer structure configuration unit, which, according to the layer structure configuration instruction, selects a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area; a second acquisition unit, which obtains a layer parameter setting instruction through the layer parameter setting area; and a layer parameter configuration unit, which, according to the layer parameter setting instruction, invokes the corresponding layer parameter setting component to configure the corresponding layer parameters for the layer component at the selected hierarchy position.
In the above embodiment of the present invention, optionally, the layer structure configuration unit is specifically configured to: according to the layer structure configuration instruction, select a target layer component in the layer component selection area, select a target hierarchy position in the hierarchy model definition area, and place the target layer component at the target hierarchy position.
In the above embodiment of the present invention, optionally, the device further comprises: a third acquisition unit, which obtains a hierarchy quantity configuration instruction through the hierarchy model definition area; and a hierarchy quantity setting unit, which, according to the hierarchy quantity configuration instruction, sets the number of hierarchy positions in the hierarchy model definition area.
In the above embodiment of the present invention, optionally, the configuration interface is further provided with a global configuration area, the global configuration area has at least one global parameter configuration item, each global parameter configuration item corresponds to a global parameter configuration component, and the deep neural network configuration device further comprises: a fourth acquisition unit, which obtains a global parameter configuration instruction through a global parameter configuration item in the global configuration area; and a global parameter configuration unit, which, according to the global parameter configuration instruction, invokes the corresponding global parameter configuration component to configure a global parameter for the deep neural network.
In the above embodiment of the present invention, optionally, the configuration interface is further provided with a layer post-configuration area, the layer post-configuration area has at least one layer post-configuration item, each layer post-configuration item corresponds to a layer post-configuration component, and the deep neural network configuration device further comprises: a fifth acquisition unit, which obtains a layer post-configuration instruction through a layer post-configuration item in the layer post-configuration area; and a layer post-configuration unit, which, according to the layer post-configuration instruction, invokes the corresponding layer post-configuration component to perform layer post-configuration for the deep neural network.
In the above embodiment of the present invention, optionally, one or more of the layer component selection area, the layer parameter setting area, the global configuration area and the layer post-configuration area has a corresponding component configuration item, and the deep neural network configuration device further comprises: a component editing unit, which adds or deletes components in the corresponding area through the component configuration item.
To address the technical problem in the related art that configuring a deep neural network is overly complicated and tedious, the above technical solution provides a graphical configuration interface showing a layer component selection area, a hierarchy model definition area and a layer parameter setting area, where the layer component selection area can hold multiple layer components, and the hierarchy model definition area can hold multiple hierarchy positions linked into a hierarchy model.
On the configuration interface, the user can select a corresponding layer component for a hierarchy position as the layer structure configuration information; for example, layer component A in the layer component selection area can be dragged to the first hierarchy position in the hierarchy model definition area. In this way, the system obtains the layer structure configuration information from the configuration interface and configures, for that hierarchy position in the hierarchy model, the layer component selected by the user.
Then, for the layer component at any hierarchy position, the user can set its layer parameters in the layer parameter setting area. The system receives the layer parameter setting information in the layer parameter setting area and, according to that information, configures the corresponding layer parameters for the layer component at the selected hierarchy position.
The above technical solution replaces the related-art practice of editing a text configuration file: instead of editing complicated and tedious configuration text, the deep neural network can be configured simply by operating a graphical configuration interface. This greatly simplifies the configuration work of the deep neural network and reduces deployment cost; at the same time, because the configuration work is simplified, the error rate is also reduced and the performance of the deep neural network is improved.
【Description of the drawings】
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 shows a schematic diagram of a text configuration file of a deep neural network in the related art;
Fig. 2 shows a schematic diagram of a configuration interface according to an embodiment of the present invention;
Fig. 3 shows a flowchart of a deep neural network configuration method according to an embodiment of the present invention;
Fig. 4 shows a flowchart of setting the number of hierarchy positions according to an embodiment of the present invention;
Fig. 5 shows a flowchart of global configuration according to an embodiment of the present invention;
Fig. 6 shows a flowchart of layer post-configuration according to an embodiment of the present invention;
Fig. 7 shows a block diagram of a deep neural network configuration device according to an embodiment of the present invention;
Fig. 8 shows a block diagram of a server provided by an embodiment of the present invention.
【Detailed description of the embodiments】
For a better understanding of the technical solutions of the present invention, the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a configuration interface which, as shown in Fig. 2, has a layer component selection area and a hierarchy model definition area.
The layer component selection area holds layer components such as data (data layer), convolution (convolution layer), fully connected (fully connected layer), loss (loss layer) and lstm (long short-term memory, a temporal recurrent neural network layer). In the hierarchy model definition area, the first hierarchy position is a data layer component, the second to sixth hierarchy positions are convolution layer components, the seventh to ninth hierarchy positions are fully connected layer components, and the tenth hierarchy position is a loss layer component.
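The patent gives no code; purely as an illustrative sketch (all names hypothetical), the hierarchy shown in Fig. 2 can be modeled as an ordered list of layer components, one entry per hierarchy position:

```python
# Illustrative only: the ten-position hierarchy model of Fig. 2 as a
# Python list. Positions are 1-based in the patent's text, 0-based here.
hierarchy_model = (
    ["data"]                    # position 1: data layer
    + ["convolution"] * 5       # positions 2-6: convolution layers
    + ["fully_connected"] * 3   # positions 7-9: fully connected layers
    + ["loss"]                  # position 10: loss layer
)
```

The component names mirror the labels in the layer component selection area and are not tied to any particular framework.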
In addition, on the right side of the configuration interface, a layer parameter setting area, a global configuration area and a layer post-configuration area are also provided.
It should be noted that Fig. 2 only schematically illustrates one implementation of the configuration interface; the positions, appearance and content of the layer component selection area, hierarchy model definition area, layer parameter setting area, global configuration area and layer post-configuration area can also be presented in any form other than the above implementation.
In the following, the deep neural network configuration method of the embodiments of the present invention is further described with reference to the configuration interface shown in Fig. 2.
Fig. 3 shows a flowchart of a deep neural network configuration method according to an embodiment of the present invention.
In the deep neural network configuration method of this embodiment, a graphical configuration interface is provided: a layer component selection area, a hierarchy model definition area and a layer parameter setting area are shown on the configuration interface of the deep neural network, where the layer component selection area can show multiple layer components and the hierarchy model definition area can show a hierarchy model linked from multiple hierarchy positions. As shown in Fig. 3, the method comprises:
Step 302: obtain layer structure configuration information through the configuration interface.
Step 304: according to the layer structure configuration information, select a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area.
Obtaining the layer structure configuration information can be realized through the user performing layer structure configuration operations on the configuration interface. With reference to the configuration interface shown in Fig. 2: when the user selects a target layer component in the layer component selection area and selects a target hierarchy position in the hierarchy model definition area, the system obtains the user's selection through the configuration interface, that is, obtains the corresponding layer structure configuration information. Further, the system can place the target layer component at the target hierarchy position in the hierarchy model definition area according to the layer structure configuration information, thereby quickly and conveniently completing the configuration of the hierarchical structure of the deep neural network and improving configuration efficiency.
The ways in which the system allows the user to perform layer structure configuration operations include, but are not limited to, the following.
First, drag operations can be performed on the configuration interface. For example, in the configuration interface shown in Fig. 2, the data layer component in the layer component selection area is dragged to the first hierarchy position of the hierarchy model definition area, and the convolution layer components in the layer component selection area are dragged to the second to sixth hierarchy positions of the hierarchy model definition area.
Second, a hierarchy position can first be selected in the hierarchy model definition area, and then a layer component can be selected in the layer component selection area; the system then determines that the layer component should fill the selected hierarchy position. For example, in the configuration interface shown in Fig. 2, the seventh hierarchy position can first be selected in the hierarchy model definition area, and then the fully connected layer component can be selected in the layer component selection area, so that the system fills the fully connected layer component into the seventh hierarchy position of the hierarchy model definition area. As another example, the seventh to ninth hierarchy positions can first be selected together in the hierarchy model definition area, and then the fully connected layer component can be selected in the layer component selection area, so that the system fills the fully connected layer component into the seventh to ninth hierarchy positions of the hierarchy model definition area.
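Both interaction styles reduce to the same operation: assign one chosen component to one or more chosen positions. A minimal sketch under that assumption (the patent specifies the behaviour, not an implementation; every name here is hypothetical):

```python
def fill_positions(model, positions, component):
    """Place the selected layer component at each selected hierarchy
    position (positions are 1-based, matching the description above)."""
    for pos in positions:
        model[pos - 1] = component
    return model

model = [None] * 10
fill_positions(model, [1], "data")                     # drag data to position 1
fill_positions(model, [2, 3, 4, 5, 6], "convolution")  # drag convolution to 2-6
fill_positions(model, [7, 8, 9], "fully_connected")    # multi-select 7-9, then pick
fill_positions(model, [10], "loss")
```

The multi-select example from the text maps directly onto passing several positions at once.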
Step 306: obtain layer parameter setting information through the layer parameter setting area.
Step 308: according to the layer parameter setting information, invoke the corresponding layer parameter setting component to configure the corresponding layer parameters for the layer component at the selected hierarchy position. That is, the system can directly assign the layer parameters set in the layer parameter setting information to the layer component at the selected hierarchy position.
The layer parameter setting area is provided with multiple layer parameter setting components; alternatively, the layer parameter setting area is provided with multiple layer parameter setting options, each of which corresponds to one layer parameter setting component.
For example, in the configuration interface shown in Fig. 2, the layer parameter setting area shows layer parameter configuration items such as output dimension, weight initialization and bias initialization. The user can click any layer parameter configuration item, upon which the system pops up the corresponding layer parameter setting window on the configuration interface according to the user's click instruction. In that window, the user can input or select layer parameters for the layer component at the selected hierarchy position, and the system thereby receives the layer parameter setting information.
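As a rough sketch of what the system might store when it receives layer parameter setting information (the parameter names are taken from the items listed above; the data structure is an assumption, not the patent's):

```python
layer_params = {}  # hierarchy position -> {parameter name: value}

def set_layer_param(position, name, value):
    """Record one parameter entered in the layer parameter setting
    window for the layer component at the selected position."""
    layer_params.setdefault(position, {})[name] = value

# The three items named above, set for the convolution layer at position 2:
set_layer_param(2, "output_dim", 64)
set_layer_param(2, "weight_init", "xavier")
set_layer_param(2, "bias_init", "zero")
```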
In summary, the system fills the layer structure configuration information and layer parameter setting information obtained through the configuration interface into the hierarchy model and directly forms a configuration file (including, but not limited to, txt format). As a result, the user does not need to edit the configuration file manually; instead, the configuration file can be generated automatically from the user's operations on the configuration interface. In other words, the above technical solution replaces the related-art practice of editing a text configuration file: instead of editing complicated and tedious configuration text, the deep neural network can be configured simply by operating a graphical configuration interface, which greatly simplifies the configuration work of the deep neural network and reduces deployment cost. At the same time, because the configuration work is simplified, the error rate is also reduced and the performance of the deep neural network is improved.
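A minimal sketch of this final step, rendering the on-screen state as a text configuration file; since the patent fixes no file grammar, the output format below is invented for illustration:

```python
def generate_config(global_params, layers):
    """Render the graphical configuration as plain text in the spirit
    of Fig. 1: global settings above NN, one block per layer below.
    The file grammar here is hypothetical."""
    lines = [f"{name}: {value}" for name, value in global_params.items()]
    lines.append("NN")
    for i, (component, params) in enumerate(layers, start=1):
        lines.append(f"layer{i}: {component}")
        lines.extend(f"  {k}: {v}" for k, v in params.items())
    return "\n".join(lines)

text = generate_config(
    {"momentum": 0.9},
    [("data", {}), ("convolution", {"output_dim": 64})],
)
print(text)
```

The point is only that the file is derived mechanically from the GUI state, so the user never edits it by hand.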
On the basis of the embodiment shown in Fig. 3, the present invention can also include a step of setting the number of hierarchy positions. This step can be placed before, after, or between any two of the steps of the embodiment shown in Fig. 3. As shown in Fig. 4, the step of setting the number of hierarchy positions specifically comprises:
Step 402: obtain hierarchy quantity configuration information through the hierarchy model definition area.
A hierarchy quantity configuration option can be shown in the hierarchy model definition area and corresponds to the hierarchy quantity configuration information; the user can select or set the number of hierarchy positions in the hierarchy quantity configuration option, which serves as the hierarchy quantity configuration information.
In one optional implementation, the hierarchy quantity configuration option is displayed directly on the configuration interface.
In another optional implementation, the hierarchy quantity configuration option is placed in a menu of the hierarchy model definition area, and the menu is opened by the user through operations such as double-clicking or right-clicking in the hierarchy model definition area.
Step 404: according to the hierarchy quantity configuration information, set the number of hierarchy positions in the hierarchy model definition area. In this way, when the system obtains the hierarchy quantity configuration information, it sets the number of hierarchy positions in the hierarchy model definition area accordingly.
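The effect of step 404 can be sketched as resizing the list of hierarchy positions (illustrative only; the patent does not prescribe what happens to existing assignments):

```python
def set_position_count(model, count):
    """Grow or shrink the hierarchy model to the configured number of
    positions, keeping existing assignments where possible. Empty
    positions are represented as None."""
    if count >= len(model):
        return model + [None] * (count - len(model))
    return model[:count]

model = ["data", "convolution", "loss"]
grown = set_position_count(model, 5)   # two empty positions appended
shrunk = set_position_count(model, 2)  # trailing positions dropped
```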
On the basis of any of the above embodiments of the present invention, the configuration interface is also provided with a global configuration area; the global configuration area has at least one global parameter configuration item, and each global parameter configuration item corresponds to a global parameter configuration component. As shown in Fig. 5, the global configuration step can be placed before, after, or between any two steps of any of the above embodiments of the present invention:
Step 502: obtain global parameter configuration information through a global parameter configuration item in the global configuration area.
In one optional implementation, the global parameter configuration items are displayed directly on the configuration interface. Taking the configuration interface shown in Fig. 2 as an example, the global configuration area has global parameter configuration items such as epoch, momentum and the learning-rate update policy; each configuration item has corresponding parameters that can be selected or set by the user.
In another optional implementation, the global parameter configuration items are placed in a menu of the global configuration area, and the menu is opened by the user through operations such as double-clicking or right-clicking in the global configuration area.
Step 504: according to the global parameter configuration information, invoke the corresponding global parameter configuration component to configure the global parameters for the deep neural network.
As a result, the user does not need to perform complicated text editing: simply selecting or setting each global parameter in the global configuration area completes a simple and efficient global configuration, simplifying user operations.
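Sketched as data, the global configuration is simply a small key-value store keyed by configuration item; the item names come from the Fig. 2 example above, and the values are hypothetical:

```python
global_params = {"epoch": 30, "momentum": 0.9, "lr_policy": "step"}

def configure_global(name, value):
    """Stand-in for a global parameter configuration component: record
    the value selected or entered for one configuration item."""
    global_params[name] = value

configure_global("momentum", 0.95)  # user adjusts momentum in the area
```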
On the basis of any of the above embodiments of the present invention, the configuration interface is further provided with a layer post-configuration area; the layer post-configuration area has at least one layer post-configuration item, and each layer post-configuration item corresponds to a layer post-configuration component. As shown in Fig. 6, the layer post-configuration step comprises:
Step 602: obtain layer post-configuration information through a layer post-configuration item in the layer post-configuration area.
In one optional implementation, the layer post-configuration items are displayed directly on the configuration interface. Taking the configuration interface shown in Fig. 2 as an example, the layer post-configuration area has layer post-configuration items such as the activation function, pooling and LRN (Local Response Normalization); each layer post-configuration item has corresponding parameters that can be selected or set by the user.
In another optional implementation, the layer post-configuration items are placed in a menu of the layer post-configuration area, and the menu is opened by the user through operations such as double-clicking or right-clicking in the layer post-configuration area.
Step 604: according to the layer post-configuration information, invoke the corresponding layer post-configuration component to perform layer post-configuration for the deep neural network.
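A sketch of the same idea for layer post-configuration, attaching the items named above to a given hierarchy position (the data structure and parameter values are assumptions for illustration):

```python
post_config = {}  # hierarchy position -> {post-configuration item: value}

def set_post_config(position, item, value):
    """Record one layer post-configuration item (e.g. activation,
    pooling, LRN) for the layer at the given hierarchy position."""
    post_config.setdefault(position, {})[item] = value

set_post_config(2, "activation", "relu")
set_post_config(2, "pooling", "max")
set_post_config(6, "lrn", {"local_size": 5})
```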
It should be added that one or more of the layer component selection area, layer parameter setting area, global configuration area and layer post-configuration area has a corresponding component configuration item, and the deep neural network configuration method further comprises: adding or deleting components in the corresponding area through the component configuration item.
That is, the components in each area can be deleted or added according to actual needs, so as to accommodate the user's actual configuration requirements. For example, in the configuration interface shown in Fig. 2, an "add" option is also provided in the layer post-configuration area; by selecting the "add" option, the user can open an editing interface for layer post-configuration components, so as to add or delete components on the configuration interface.
Although not shown in Fig. 2, an "add" option can likewise be provided in the layer component selection area, hierarchy model definition area, layer parameter setting area and global configuration area, so that the user can add, delete or modify the components, options and other contents of those areas through the "add" option.
Of course, the above "add" option is only one implementation for editing the components or options of each area on the configuration interface. In an actual scenario, the editing function for a component or option can also be triggered by various operations such as clicking, double-clicking or touching in the corresponding region; alternatively, a single component or option, or a selected batch of components or options, can be directly added, deleted or otherwise edited.
In addition, the layer components in the layer component selection area are not limited to those listed above; similarly, the items in the layer parameter setting area, global configuration area and layer post-configuration area are not limited to those listed above, and any other items can be provided as needed.
Fig. 7 shows a block diagram of a deep neural network configuration device according to an embodiment of the present invention.
As shown in Fig. 7, in an embodiment of the present invention, a layer component selection area, a hierarchy model definition area and a layer parameter setting area are shown on the configuration interface of the deep neural network, and the corresponding deep neural network configuration device 700 comprises: a first acquisition unit 702, a layer structure configuration unit 704, a second acquisition unit 706 and a layer parameter configuration unit 708.
The first acquisition unit 702 is configured to obtain layer structure configuration information through the configuration interface.
The layer structure configuration unit 704 is configured to, according to the layer structure configuration information, select a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area.
The second acquisition unit 706 is configured to obtain layer parameter setting information through the layer parameter setting area.
The layer parameter configuration unit 708 is configured to, according to the layer parameter setting information, call the corresponding layer parameter setting component to configure the corresponding layer parameters for the layer component at the selected hierarchy position.
In the above embodiment of the present invention, optionally, the layer structure configuration unit 704 is specifically configured to: according to the layer structure configuration information, select a target layer component in the layer component selection area, select a target hierarchy position in the hierarchy model definition area, and set the target layer component at the target hierarchy position.
In the above embodiment of the present invention, optionally, the deep neural network configuration device 700 further includes a third acquisition unit 710 and a level quantity setting unit 712.
The third acquisition unit 710 is configured to obtain level quantity configuration information through the hierarchy model definition area; the level quantity setting unit 712 is configured to set the number of hierarchy positions in the hierarchy model definition area according to the level quantity configuration information.
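One way to realize units 710/712 is to resize the set of hierarchy positions; a sketch under the assumption that positions are dict keys (the patent does not prescribe this data structure):

```python
def set_level_count(hierarchy, n):
    """Illustrative level-quantity setting (units 710/712): keep the first
    n hierarchy positions, adding empty slots for any new positions."""
    return {i: hierarchy.get(i) for i in range(n)}


hierarchy = {0: "Convolution", 1: "Pooling", 2: "FullyConnected"}
resized = set_level_count(hierarchy, 2)    # shrink to 2 positions
expanded = set_level_count(hierarchy, 4)   # grow to 4; position 3 is empty
```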
In the above embodiment of the present invention, optionally, the configuration interface further displays a global configuration area, the global configuration area displays at least one global parameter configuration item, each global parameter configuration item corresponds to a global parameter configuration component, and the deep neural network configuration device 700 further includes a fourth acquisition unit 714 and a global parameter configuration unit 716.
The fourth acquisition unit 714 is configured to obtain global parameter configuration information through the global parameter configuration items in the global configuration area.
The global parameter configuration unit 716 is configured to, according to the global parameter configuration information, call the corresponding global parameter configuration component to configure global parameters for the deep neural network.
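The mapping from global parameter configuration items to configuration components (units 714/716) might look like the following sketch; the parameter names (learning_rate, batch_size) are assumptions for illustration only, not parameters named by the patent:

```python
class GlobalConfigArea:
    """Illustrative global configuration area: each global parameter
    configuration item corresponds to a configuration component,
    modeled here as a callable applied to the network settings."""

    def __init__(self):
        # Hypothetical items; the real global parameters depend on the network.
        self.components = {
            "learning_rate": lambda net, v: net.update(learning_rate=v),
            "batch_size": lambda net, v: net.update(batch_size=v),
        }

    def apply(self, net, item, value):
        # Unit 716: call the component matching the configuration item.
        self.components[item](net, value)


net = {}
area = GlobalConfigArea()
area.apply(net, "learning_rate", 0.01)
area.apply(net, "batch_size", 32)
```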
In the above embodiment of the present invention, optionally, the configuration interface is further provided with a layer follow-up configuration area, the layer follow-up configuration area has at least one layer follow-up configuration item, each layer follow-up configuration item corresponds to a layer follow-up configuration component, and the deep neural network configuration device 700 further includes a fifth acquisition unit 718 and a layer follow-up configuration unit 720.
The fifth acquisition unit 718 is configured to obtain layer follow-up configuration information through the layer follow-up configuration items in the layer follow-up configuration area; the layer follow-up configuration unit 720 is configured to, according to the layer follow-up configuration information, call the corresponding layer follow-up configuration component to perform layer follow-up configuration for the deep neural network.
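Layer follow-up configuration (units 718/720) can be modeled the same way, with follow-up items applied after a layer is already in place. The item names below (activation, dropout) are purely illustrative assumptions:

```python
FOLLOW_UP_COMPONENTS = {
    # Hypothetical layer follow-up configuration components (unit 720);
    # each follow-up item name maps to a component applied to a layer.
    "activation": lambda layer, v: layer.update(activation=v),
    "dropout": lambda layer, v: layer.update(dropout=v),
}


def layer_follow_up(layer, item, value):
    """Unit 720: call the component for a layer follow-up configuration item."""
    FOLLOW_UP_COMPONENTS[item](layer, value)


layer = {"component": "Convolution", "params": {"kernel_size": 3}}
layer_follow_up(layer, "activation", "relu")
```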
In addition, in the above embodiment of the present invention, optionally, one or more of the layer component selection area, the layer parameter setting area, the global configuration area, and the layer follow-up configuration area have corresponding component configuration items, and the deep neural network configuration device 700 further includes a component editing unit 722, which may be used to add or delete components in the corresponding area through the component configuration items.
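The add/delete behavior of unit 722 via a component configuration item can be sketched as follows (hypothetical; an area is modeled simply as a set of component names):

```python
def edit_components(area, operation, component):
    """Illustrative component editing unit 722: add or delete a component
    in the given area (a set of component names)."""
    if operation == "add":
        area.add(component)
    elif operation == "delete":
        area.discard(component)  # no error if the component is absent
    else:
        raise ValueError(f"unsupported operation: {operation}")
    return area


palette = {"Convolution", "Pooling"}
edit_components(palette, "add", "BatchNorm")
edit_components(palette, "delete", "Pooling")
```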
Since the deep neural network configuration device 700 shown in Fig. 7 adopts the technical solution of any of the embodiments shown in Fig. 2 to Fig. 6, it has all the technical effects of any of those embodiments, which are not repeated here.
Fig. 8 shows a block diagram of the server provided by one embodiment of the present invention.
As shown in Fig. 8, the server 800 may include a processor 802 connected to one or more data storage facilities, which may include a storage medium 804 and a memory unit 806. The server 800 may further include an input interface 808 and an output interface 810 for communicating with another device or system. Program code executed by the CPU 8022 of the processor 802 may be stored in the storage medium 804 or the memory unit 806.
The processor 802 in the server 800 calls the program code stored in the storage medium 804 or the memory unit 806 to execute the following steps:
obtaining layer structure configuration information through the configuration interface; according to the layer structure configuration information, selecting a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area; obtaining layer parameter setting information through the layer parameter setting area; and according to the layer parameter setting information, calling the corresponding layer parameter setting component to configure the corresponding layer parameters for the layer component at the selected hierarchy position.
In one specific implementation, the processor 802 may further execute:
according to the layer structure configuration information, selecting a target layer component in the layer component selection area and selecting a target hierarchy position in the hierarchy model definition area; and setting the target layer component at the target hierarchy position.
In one specific implementation, the processor 802 may further execute:
obtaining level quantity configuration information through the hierarchy model definition area; and setting the number of hierarchy positions in the hierarchy model definition area according to the level quantity configuration information.
In one specific implementation, the processor 802 may further execute:
obtaining global parameter configuration information through the global parameter configuration items in the global configuration area; and according to the global parameter configuration information, calling the corresponding global parameter configuration component to configure global parameters for the deep neural network.
In one specific implementation, the processor 802 may further execute:
obtaining layer follow-up configuration information through the layer follow-up configuration items in the layer follow-up configuration area; and according to the layer follow-up configuration information, calling the corresponding layer follow-up configuration component to perform layer follow-up configuration for the deep neural network.
In one specific implementation, the processor 802 may further execute:
adding or deleting components in the corresponding area through the component configuration items.
The technical solution of the present invention has been described in detail above with reference to the accompanying drawings. With the technical solution of the present invention, instead of editing a text configuration file as in the related art, the deep neural network can be configured simply by operating a graphical configuration interface, without editing complicated and cumbersome configuration text. This greatly simplifies the configuration work of the deep neural network and reduces the deployment cost; at the same time, the simplification of the configuration work also reduces the error rate and improves the performance of the deep neural network.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
The terms used in the embodiments of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the present invention. The singular forms "a", "an", "the", and "said" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely exemplary: the division of units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent substitution, improvement, or the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (12)
1. A deep neural network configuration method, characterized in that a layer component selection area, a hierarchy model definition area, and a layer parameter setting area are provided in a configuration interface of a deep neural network, and the deep neural network configuration method comprises:
obtaining a layer structure configuration command through the configuration interface;
according to the layer structure configuration command, selecting a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area;
obtaining a layer parameter setting command through the layer parameter setting area; and
according to the layer parameter setting command, calling the corresponding layer parameter setting component to configure the corresponding layer parameters for the layer component at the selected hierarchy position.
2. The deep neural network configuration method according to claim 1, characterized in that the step of selecting, according to the layer structure configuration command, a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area specifically comprises:
according to the layer structure configuration command, selecting a target layer component in the layer component selection area and selecting a target hierarchy position in the hierarchy model definition area; and
setting the target layer component at the target hierarchy position.
3. The deep neural network configuration method according to claim 1 or 2, characterized by further comprising:
obtaining a level quantity configuration command through the hierarchy model definition area; and
setting the number of hierarchy positions in the hierarchy model definition area according to the level quantity configuration command.
4. The deep neural network configuration method according to claim 1 or 2, characterized in that the configuration interface is further provided with a global configuration area, the global configuration area has at least one global parameter configuration item, and each global parameter configuration item corresponds to a global parameter configuration component; and
the deep neural network configuration method further comprises:
obtaining a global parameter configuration command through the global parameter configuration items in the global configuration area; and
according to the global parameter configuration command, calling the corresponding global parameter configuration component to configure global parameters for the deep neural network.
5. The deep neural network configuration method according to claim 4, characterized in that the configuration interface is further provided with a layer follow-up configuration area, the layer follow-up configuration area has at least one layer follow-up configuration item, and each layer follow-up configuration item corresponds to a layer follow-up configuration component; and
the deep neural network configuration method further comprises:
obtaining a layer follow-up configuration command through the layer follow-up configuration items in the layer follow-up configuration area; and
according to the layer follow-up configuration command, calling the corresponding layer follow-up configuration component to perform layer follow-up configuration for the deep neural network.
6. The deep neural network configuration method according to claim 5, characterized in that one or more of the layer component selection area, the layer parameter setting area, the global configuration area, and the layer follow-up configuration area have corresponding component configuration items; and
the deep neural network configuration method further comprises:
adding or deleting components in the corresponding area through the component configuration items.
7. A deep neural network configuration device, characterized in that a layer component selection area, a hierarchy model definition area, and a layer parameter setting area are provided in a configuration interface of a deep neural network, and the deep neural network configuration device comprises:
a first acquisition unit, which obtains a layer structure configuration command through the configuration interface;
a layer structure configuration unit, which, according to the layer structure configuration command, selects a corresponding layer component in the layer component selection area for each hierarchy position in the hierarchy model definition area;
a second acquisition unit, which obtains a layer parameter setting command through the layer parameter setting area; and
a layer parameter configuration unit, which, according to the layer parameter setting command, calls the corresponding layer parameter setting component to configure the corresponding layer parameters for the layer component at the selected hierarchy position.
8. The deep neural network configuration device according to claim 7, characterized in that the layer structure configuration unit is specifically configured to:
according to the layer structure configuration command, select a target layer component in the layer component selection area, select a target hierarchy position in the hierarchy model definition area, and set the target layer component at the target hierarchy position.
9. The deep neural network configuration device according to claim 7 or 8, characterized by further comprising:
a third acquisition unit, which obtains a level quantity configuration command through the hierarchy model definition area; and
a level quantity setting unit, which sets the number of hierarchy positions in the hierarchy model definition area according to the level quantity configuration command.
10. The deep neural network configuration device according to claim 7 or 8, characterized in that the configuration interface is further provided with a global configuration area, the global configuration area has at least one global parameter configuration item, and each global parameter configuration item corresponds to a global parameter configuration component; and
the deep neural network configuration device further comprises:
a fourth acquisition unit, which obtains a global parameter configuration command through the global parameter configuration items in the global configuration area; and
a global parameter configuration unit, which, according to the global parameter configuration command, calls the corresponding global parameter configuration component to configure global parameters for the deep neural network.
11. The deep neural network configuration device according to claim 10, characterized in that the configuration interface is further provided with a layer follow-up configuration area, the layer follow-up configuration area has at least one layer follow-up configuration item, and each layer follow-up configuration item corresponds to a layer follow-up configuration component; and
the deep neural network configuration device further comprises:
a fifth acquisition unit, which obtains a layer follow-up configuration command through the layer follow-up configuration items in the layer follow-up configuration area; and
a layer follow-up configuration unit, which, according to the layer follow-up configuration command, calls the corresponding layer follow-up configuration component to perform layer follow-up configuration for the deep neural network.
12. The deep neural network configuration device according to claim 11, characterized in that one or more of the layer component selection area, the layer parameter setting area, the global configuration area, and the layer follow-up configuration area have corresponding component configuration items; and
the deep neural network configuration device further comprises:
a component editing unit, which adds or deletes components in the corresponding area through the component configuration items.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710262693.5A CN108470213A (en) | 2017-04-20 | 2017-04-20 | Deep neural network configuration method and deep neural network configuration device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108470213A true CN108470213A (en) | 2018-08-31 |
Family
ID=63266907
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710262693.5A Pending CN108470213A (en) | 2017-04-20 | 2017-04-20 | Deep neural network configuration method and deep neural network configuration device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108470213A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150242761A1 (en) * | 2014-02-26 | 2015-08-27 | Microsoft Corporation | Interactive visualization of machine-learning performance |
US20160300156A1 (en) * | 2015-04-10 | 2016-10-13 | Facebook, Inc. | Machine learning model tracking platform |
CN105809248A (en) * | 2016-03-01 | 2016-07-27 | 中山大学 | Method for configuring DANN onto SDN and an interaction method between them |
Non-Patent Citations (6)
Title |
---|
JASON.HEVEY: "paper 75: Creating a Neural Network with MATLAB's Neural Network Toolbox", https://www.cnblogs.com/molakejin/p/5563088.html *
MATLAB Technology Alliance: "MATLAB Network Super Learning Manual", 31 May 2014, Posts & Telecom Press *
ZEALER_V: "Creating a Neural Network with MATLAB's Neural Network Toolbox", http://blog.sina.com.cn/s/blog_ab8de1320102wcph.html *
Nan Jun: "paper75: Creating a Neural Network with MATLAB's Neural Network Toolbox", https://www.bbsmax.com/A/RV57MAMWZP/ *
Baars, B.J.: "Cognition, Brain, and Consciousness", 31 December 2015, Shanghai People's Publishing House *
Zhu Kai: "Mastering MATLAB Neural Networks", 31 January 2010 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020172829A1 (en) * | 2019-02-27 | 2020-09-03 | 华为技术有限公司 | Method and apparatus for processing neural network model |
CN111967570A (en) * | 2019-07-01 | 2020-11-20 | 嘉兴砥脊科技有限公司 | Implementation method, device and machine equipment of mysterious neural network system |
CN111967570B (en) * | 2019-07-01 | 2024-04-05 | 北京砥脊科技有限公司 | Implementation method, device and machine equipment of visual neural network system |
WO2022073398A1 (en) * | 2020-10-07 | 2022-04-14 | Huawei Technologies Co., Ltd. | Location prediction using hierarchical classification |
US11438734B2 (en) | 2020-10-07 | 2022-09-06 | Huawei Technologies Co., Ltd. | Location prediction using hierarchical classification |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7703018B2 (en) | Apparatus and method for automating the diagramming of virtual local area networks | |
CN104202473B (en) | Merge the method and mobile terminal of session | |
Pawson et al. | Naked objects | |
EP2159743A1 (en) | Dynamic node extensions and extension fields for business objects | |
CN105955617B (en) | For selecting the gesture of text | |
CN108470213A (en) | Deep neural network configuration method and deep neural network configuration device | |
CN106844019A (en) | Application control method, application program redirect associated configuration method and device | |
CN108205467A (en) | The intelligence auxiliary of repetitive operation | |
US9237073B2 (en) | Inferring connectivity among network segments in the absence of configuration information | |
CN104866585B (en) | A kind of experiment test flight data total system | |
US8898261B1 (en) | Configuring agent services operable by agents in a storage area network | |
CN104182232B (en) | A kind of method and user terminal for creating context-aware applications | |
EP2199933A1 (en) | UI-driven binding of extension fields to business objects | |
US11429264B1 (en) | Systems and methods for visually building an object model of database tables | |
CA2505158A1 (en) | Techniques for managing multiple hierarchies of data from a single interface | |
CN103518360B (en) | Multiple service treatment methods and browser | |
CN110221859A (en) | A kind of online management method of the deployment of application, device, equipment and storage medium | |
CN104750490A (en) | Interface animation implementation method and system | |
CN105988933B (en) | The operable node recognition methods in interface, application testing method, apparatus and system | |
CN109933402A (en) | A kind of methods of exhibiting of function menu, system and relevant device | |
CN109634607A (en) | A kind of method and device of Code automatic build | |
US9304746B2 (en) | Creating a user model using component based approach | |
CN108459792A (en) | A kind of flow switching method, device and computer equipment | |
CN109284152A (en) | A kind of menu visual configuration method, equipment and computer readable storage medium | |
US20160267063A1 (en) | Hierarchical navigation control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180831 |