CN111985644B - Neural network generation method and device, electronic equipment and storage medium - Google Patents

Neural network generation method and device, electronic equipment and storage medium

Info

Publication number
CN111985644B
CN111985644B
Authority
CN
China
Prior art keywords
neural network
channel
channel number
target
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010882925.9A
Other languages
Chinese (zh)
Other versions
CN111985644A (en)
Inventor
游山
苏修
黄涛
王飞
钱晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202010882925.9A
Publication of CN111985644A
Application granted
Publication of CN111985644B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application provides a neural network generation method and apparatus, an electronic device, and a storage medium. First, a plurality of channel-number configuration schemes corresponding to a target neural network are determined based on a preset load. Then, for each channel-number configuration scheme, multiple neural network structures under that scheme are determined based on a preset free weight and the number of channels each network layer has under the scheme. Next, the target number of channels for each network layer of the target neural network is determined based on the measurement accuracy of each neural network structure under each scheme, and the target neural network is finally generated from those target channel numbers. Because the preset free weight lets channels be selected with a certain degree of freedom during channel-number learning, the quality and the search efficiency of channel-number learning are improved, which in turn improves the accuracy and speed of the generated target neural network.

Description

Neural network generation method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computers and deep learning technologies, and in particular, to a neural network generation method and device, an electronic device, and a storage medium.
Background
Before a neural network is generated, the number of channels in each of its network layers must first be learned, and the quality of this channel-number learning is crucial to the network's accuracy and speed. At present, channel numbers are usually set manually based on experience, which cannot guarantee the network's accuracy or speed; moreover, the extensive manual configuration and testing makes learning too costly, leaving little room to improve accuracy and speed.
There are also methods that learn the channel number automatically, but they typically select channels in order from front to back. As a result, channels near the front are trained far more often than channels near the back, which lowers the quality of channel-number learning; such methods also incur a huge computational cost and search inefficiently.
Disclosure of Invention
The embodiments of the present application provide at least a neural network generation method and apparatus that improve the quality of channel-number learning and thereby the accuracy of the generated neural network.
In a first aspect, an embodiment of the present application provides a neural network generating method, including:
Determining a plurality of channel number configuration schemes corresponding to the target neural network based on a preset load; each channel number configuration scheme comprises the channel number included in each network layer in the target neural network;
determining a plurality of neural network structures under each channel number configuration scheme based on a preset free weight and the channel number included in each network layer in the channel number configuration scheme;
determining the number of target channels included in each network layer in the target neural network based on the measurement precision corresponding to each neural network structure under each channel number configuration scheme;
and generating the target neural network based on the target channel number included in each network layer in the target neural network.
In this aspect, multiple neural network structures are constructed using a preset free weight to learn the number of channels, which avoids the drawbacks of setting channel numbers manually, namely limited learning quality and high learning cost. At the same time, the preset free weight allows the channels that participate in training to be selected flexibly within a certain range, overcoming the fixed channel selection of prior-art automatic channel-number learning; this improves both the quality and the search efficiency of channel-number learning, and hence the accuracy and speed of the generated target neural network.
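As an illustration only, the search loop described in this aspect can be sketched in plain Python. Everything here is a toy stand-in: `candidate_configs`, the cost model (sum of channels), and `measure_accuracy` are hypothetical placeholders for the patent's actual components, not an implementation of them.

```python
import random

def candidate_configs(preset_load, num_layers, max_channels):
    """Sample channel-number configuration schemes whose total cost
    stays within the preset load (toy cost model: sum of channels)."""
    configs = []
    for _ in range(8):
        cfg = [random.randint(1, max_channels) for _ in range(num_layers)]
        if sum(cfg) <= preset_load:
            configs.append(cfg)
    return configs

def sample_structures(cfg, free_weight, num_samples=4):
    """For each layer, pick cfg[i] channel positions starting at a
    random offset bounded by the free weight (flexible selection)."""
    structures = []
    for _ in range(num_samples):
        structure = []
        for n in cfg:
            offset = random.randint(0, free_weight)
            structure.append(list(range(offset, offset + n)))
        structures.append(structure)
    return structures

def measure_accuracy(structure):
    """Placeholder for supernet evaluation; returns a dummy score."""
    return sum(len(ch) for ch in structure) / 100.0

def search_target_channels(preset_load, num_layers=3, max_channels=8, free_weight=2):
    """Keep the scheme whose best sampled structure scores highest."""
    best_cfg, best_acc = None, -1.0
    for cfg in candidate_configs(preset_load, num_layers, max_channels):
        accs = [measure_accuracy(s) for s in sample_structures(cfg, free_weight)]
        if max(accs) > best_acc:
            best_cfg, best_acc = cfg, max(accs)
    return best_cfg

random.seed(0)
cfg = search_target_channels(preset_load=18)
print(cfg)
```

The returned scheme, if any, always respects the preset load; with a real supernet, `measure_accuracy` would be replaced by validation-set evaluation.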
In one possible implementation, determining the plurality of neural network structures includes:
for each channel number configuration scheme, determining a free channel area in each network layer under the channel number configuration scheme based on the preset free weight and the channel number included in each network layer in the channel number configuration scheme; wherein the free channel region comprises a plurality of channels in a corresponding network layer;
and selecting target channels from the free channel areas for the corresponding network layers respectively based on the preset free weights to form various neural network structures under the channel number configuration scheme.
In this embodiment, a free channel area is defined using the preset free weight, and the channels making up each neural network structure are selected from that area. This overcomes the fixed channel selection of prior-art automatic channel-number learning, allows channels in the neural network to be chosen flexibly, and improves the quality and search efficiency of channel-number learning.
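A minimal sketch of how a free channel area might be derived for one layer. The patent does not fix the exact proportions, so the split below (a base region of always-kept indices plus `2 * free_weight` free candidate indices) is an assumption for illustration only.

```python
def channel_regions(n_channels, free_weight):
    """For a layer assigned n_channels under a scheme, keep the first
    n_channels - free_weight indices as the base region and expose
    2 * free_weight candidate indices as the free region.
    (Toy split; the text does not specify these proportions.)"""
    n_base = max(1, n_channels - free_weight)  # base has at least one channel
    base = list(range(n_base))
    free = list(range(n_base, n_base + 2 * free_weight))
    return base, free

base, free = channel_regions(n_channels=6, free_weight=2)
print(base, free)  # [0, 1, 2, 3] [4, 5, 6, 7]
```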
In one possible implementation manner, the selecting, based on the preset free weight, a target channel from each free channel area for a corresponding network layer, to form multiple neural network structures under the channel number configuration scheme includes:
Determining various optional position information of the target channel in the free channel area based on the preset free weight; wherein the free channel regions in different network layers comprise the same number of channels;
and selecting a target channel according to the optional position information in the free channel area of each network layer according to each piece of optional position information, and forming a neural network structure corresponding to the optional position information based on the selected target channel in each network layer.
In this embodiment, the channels of the neural network are selected based on the selectable position information. This likewise overcomes the fixed channel selection of prior-art automatic channel-number learning, allows channels to be chosen flexibly, and improves the quality and search efficiency of channel-number learning.
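The shared "selectable position information" can be illustrated as an offset applied identically to every layer's free region (the free regions must be the same size across layers, as the text notes; the index values below are invented for illustration):

```python
def structures_from_offsets(layer_free_regions, n_targets, free_weight):
    """Enumerate shared offsets (the selectable position information)
    and, for each offset, take the same relative slice of every
    layer's free region as that layer's target channels."""
    structures = []
    for offset in range(free_weight + 1):
        structure = [region[offset:offset + n_targets]
                     for region in layer_free_regions]
        structures.append((offset, structure))
    return structures

regions = [[4, 5, 6, 7], [8, 9, 10, 11]]  # equal-size free regions per layer
for offset, s in structures_from_offsets(regions, n_targets=2, free_weight=2):
    print(offset, s)
```

Each offset yields one candidate neural network structure, so one free weight produces a family of structures per channel-number scheme.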
In a possible implementation manner, the forming a neural network structure corresponding to the optional location information based on the target channels in the selected network layers includes:
determining a basic channel area in each network layer under the channel number configuration scheme based on the preset free weight and the channel number included in each network layer in the channel number configuration scheme; wherein the basic channel zone comprises at least one channel in a corresponding network layer;
A neural network structure corresponding to the selectable location information is formed based on the basic channel region in each network layer and the target channel in each network layer.
In this embodiment, each neural network structure is built from the basic channel region together with the target channels selected in each network layer. Constraining selection in this way limits the degrees of freedom and reduces the diversity of candidate structures, which cuts the computation required for channel-number learning while preserving its quality, thereby improving learning efficiency.
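Combining the basic channel region with the selected target channels, as this embodiment describes, then amounts to a per-layer union (a toy sketch with hypothetical index lists):

```python
def build_structure(base_regions, target_channels):
    """Each layer's active channels = its basic channel region
    plus the target channels picked from its free region."""
    return [sorted(b + t) for b, t in zip(base_regions, target_channels)]

net = build_structure([[0, 1], [0, 1, 2]], [[3], [4, 5]])
print(net)  # [[0, 1, 3], [0, 1, 2, 4, 5]]
```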
In one possible implementation, after determining the plurality of neural network structures, the neural network generation method further includes:
constructing a super network based on various neural network structures under various channel number configuration schemes;
and determining the measurement precision corresponding to each neural network structure under each channel number configuration scheme based on the super network.
According to the embodiment, the super network is used for testing the measurement precision corresponding to each neural network structure, so that the learning efficiency of the neural network channel number can be improved.
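A weight-sharing super network of the kind this embodiment relies on can be caricatured as follows; the "accuracy" returned here is a purely illustrative stand-in for evaluating each sub-structure on real validation data with shared weights:

```python
import random

class SuperNet:
    """Toy weight-sharing supernet: one shared weight vector per layer;
    a sub-structure is scored using only its selected channel slices,
    so all structures are evaluated without separate training runs."""
    def __init__(self, layer_widths):
        self.weights = [[random.random() for _ in range(w)] for w in layer_widths]

    def evaluate(self, structure):
        # Stand-in for validation accuracy: mean shared weight over
        # the channels the structure selects (purely illustrative).
        vals = [self.weights[i][c]
                for i, chans in enumerate(structure) for c in chans]
        return sum(vals) / len(vals)

random.seed(1)
net = SuperNet([8, 8])
acc = net.evaluate([[0, 1, 2], [0, 1]])
print(round(acc, 3))
```

The efficiency gain claimed in the text comes precisely from this sharing: every candidate structure reuses the same underlying weights instead of being trained from scratch.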
In one possible implementation manner, the determining, based on the measurement accuracy corresponding to each neural network structure in each channel number configuration scheme, the number of target channels included in each network layer in the target neural network includes:
Determining the measurement precision of the neural network corresponding to each channel number configuration scheme based on the measurement precision of each neural network structure under the channel number configuration scheme;
and taking the channel number included in each network layer in the channel number configuration scheme corresponding to the highest neural network measurement accuracy as the target channel number included in each network layer in the target neural network.
According to the embodiment, the channel number included in the channel number configuration scheme with the highest neural network measurement accuracy is taken as the target channel number, so that the quality of the neural network channel number learning can be ensured.
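The selection rule of this embodiment (score each channel-number scheme by its best structure's accuracy, then keep the scheme with the highest score) can be written directly; the scheme tuples and accuracy values below are invented examples:

```python
def pick_target_channels(scheme_accuracies):
    """scheme_accuracies maps a channel-number scheme (tuple of
    per-layer channel counts) to the measured accuracies of its
    sampled structures; a scheme's score is its best structure."""
    best_scheme, _ = max(scheme_accuracies.items(),
                         key=lambda kv: max(kv[1]))
    return list(best_scheme)  # target channel number per network layer

target = pick_target_channels({(4, 8): [0.71, 0.74], (6, 6): [0.78, 0.69]})
print(target)  # [6, 6]
```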
In one possible implementation manner, after determining the number of target channels included in each network layer in the target neural network, the neural network generating method further includes:
acquiring a new preset load, and determining a plurality of new channel number configuration schemes corresponding to the target neural network based on the new preset load; the new preset load is smaller than the preset load in the previous iteration, and the previous iteration is the process of determining the number of target channels included in each network layer in the target neural network in the previous time;
For each new channel number configuration scheme, determining a plurality of neural network structures under the new channel number configuration scheme based on a preset free weight, the channel number included in each network layer in the new channel number configuration scheme, and the target channel number included in each network layer in the target neural network under a preset load in a previous iteration;
and determining the target channel number included in each network layer in the target neural network under a new preset load based on the measurement precision corresponding to each neural network structure under each new channel number configuration scheme.
According to the embodiment, the new channel number configuration scheme is determined by utilizing the preset load smaller than the preset load in the previous iteration, and the new target channel is selected from the channel space corresponding to the previous iteration based on the new channel number configuration scheme, so that the efficiency and the quality of channel number learning can be improved.
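The iterative shrinking described here can be sketched as a loop over strictly decreasing preset loads, where each round searches inside the channel space produced by the previous round; `search_fn` below is a hypothetical stand-in for one full search round:

```python
def iterative_search(loads, search_fn, initial_space):
    """Run the channel search once per preset load (each load must be
    smaller than the previous one); every round restricts itself to
    the channel space returned by the round before it."""
    assert all(a > b for a, b in zip(loads, loads[1:])), "loads must decrease"
    space = initial_space
    for load in loads:
        space = search_fn(load, space)
    return space

# Toy search_fn: keep as many leading channels per layer as the load allows.
shrink = lambda load, space: [chs[:max(1, load // len(space))] for chs in space]
result = iterative_search([8, 4], shrink, [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]])
print(result)  # [[0, 1], [0, 1]]
```

Because each round's search space is nested inside the previous round's result, the amount of computation per round shrinks along with the load.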
In one possible implementation manner, the determining, based on the preset free weight, the number of channels included in each network layer in the new channel number configuration scheme, and the number of target channels included in each network layer in the target neural network under the preset load in the previous iteration, the multiple neural network structures under the new channel number configuration scheme includes:
Determining an optional channel area corresponding to each network layer in the current iteration based on the target channel number included by each network layer in the target neural network under the preset load in the previous iteration;
and determining various neural network structures under the new channel number configuration scheme based on the selectable channel area corresponding to each network layer, the preset free weight and the channel number included by each network layer in the new channel number configuration scheme.
According to the embodiment, the selectable channel area is constructed by utilizing the target channel number corresponding to the preset load in the previous iteration, and the new target channel is selected from the selectable channel area to learn the channel number of the neural network, so that the learning space of the channel number can be reduced, the calculated amount of the channel number learning is reduced, and the efficiency and quality of the channel number learning are improved.
In a possible implementation manner, the generating the target neural network based on the target channel number included in each network layer in the target neural network includes:
training the target neural network based on the number of target channels included in each network layer, and determining the positions of the channels included in each network layer in the target neural network and the parameter information corresponding to each channel to obtain the trained target neural network.
In this embodiment, after the number of channels of each network layer of the target neural network is determined, the target neural network is retrained based on the determined number of channels, so that the measurement accuracy of the generated target neural network can be improved.
In one possible implementation manner, before determining the plurality of channel number configuration schemes, the neural network generation method further includes:
determining influence degree information of a single channel in each network layer of the target neural network on measurement accuracy of the target neural network;
determining the number of single channels included in each channel group in the network layer based on the influence degree information;
dividing a single channel in the network layer based on the determined number to obtain a plurality of channel groups;
the channel group will be obtained as a new channel in the network layer.
In this embodiment, the single channels in each network layer are partitioned into channel groups according to each channel's degree-of-influence information, and each group is treated as a new channel during channel-number learning, which improves the efficiency of channel-number learning.
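Grouping single channels into composite channels, as this embodiment proposes, coarsens the search granularity. A toy partition is shown below; in practice the group size would come from the influence analysis, which is not detailed here:

```python
def group_channels(num_channels, group_size):
    """Partition a layer's single channels into consecutive groups of
    group_size; each group is then searched as one new 'channel',
    shrinking the channel-number search space by that factor."""
    return [list(range(i, min(i + group_size, num_channels)))
            for i in range(0, num_channels, group_size)]

groups = group_channels(num_channels=7, group_size=3)
print(groups)  # [[0, 1, 2], [3, 4, 5], [6]]
```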
In a second aspect, the present application provides a neural network generation apparatus, including:
The channel design module is used for determining a plurality of channel number configuration schemes corresponding to the target neural network based on a preset load; each channel number configuration scheme comprises the channel number included in each network layer in the target neural network;
the network construction module is used for determining various neural network structures under each channel number configuration scheme based on a preset free weight and the channel number included by each network layer in the channel number configuration scheme;
the channel determining module is used for determining the target channel number included in each network layer in the target neural network based on the measurement precision corresponding to each neural network structure under each channel number configuration scheme;
and the network generation module is used for generating the target neural network based on the target channel number included in each network layer in the target neural network.
In one possible implementation, the network construction module, when determining the plurality of neural network structures, is configured to:
for each channel number configuration scheme, determining a free channel area in each network layer under the channel number configuration scheme based on the preset free weight and the channel number included in each network layer in the channel number configuration scheme; wherein the free channel region comprises a plurality of channels in a corresponding network layer;
And selecting target channels from the free channel areas for the corresponding network layers respectively based on the preset free weights to form various neural network structures under the channel number configuration scheme.
In one possible implementation manner, the channel determining module is configured to, when determining the number of target channels included in each network layer in the target neural network based on the measurement accuracy corresponding to each neural network structure under each channel number configuration scheme,:
determining the measurement precision of the neural network corresponding to each channel number configuration scheme based on the measurement precision of each neural network structure under the channel number configuration scheme;
and taking the channel number included in each network layer in the channel number configuration scheme corresponding to the highest neural network measurement accuracy as the target channel number included in each network layer in the target neural network.
In a possible implementation manner, the channel design module is further configured to obtain a new preset load, and determine a plurality of new channel number configuration schemes corresponding to the target neural network based on the new preset load; the new preset load is smaller than the preset load in the previous iteration, and the previous iteration is the process of determining the number of target channels included in each network layer in the target neural network in the previous time;
The network construction module is further configured to determine, for each new channel number configuration scheme, a plurality of neural network structures under the new channel number configuration scheme based on a preset free weight, a channel number included in each network layer in the new channel number configuration scheme, and a target channel number included in each network layer in the target neural network under a preset load in a previous iteration;
the channel determining module is further configured to determine, based on measurement accuracy corresponding to each neural network structure under each new channel number configuration scheme, a target channel number included in each network layer in the target neural network under a new preset load.
In one possible embodiment, the channel design module is further configured to, prior to determining the plurality of channel number configuration schemes:
determining influence degree information of a single channel in each network layer of the target neural network on measurement accuracy of the target neural network;
determining the number of single channels included in each channel group in the network layer based on the influence degree information;
dividing a single channel in the network layer based on the determined number to obtain a plurality of channel groups;
and taking each obtained channel group as a new channel in the network layer.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the neural network generation method as described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon, which when executed by a processor performs steps of a neural network generation method as described above.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are necessary for use in the embodiments are briefly described below, which drawings are incorporated in and form a part of the present description, these drawings illustrate embodiments consistent with the present application and together with the description serve to explain the technical solutions of the present application. It is to be understood that the following drawings illustrate only certain embodiments of the present application and are therefore not to be considered limiting of its scope, for the person of ordinary skill in the art may derive other relevant drawings from the drawings without inventive effort.
Fig. 1 shows a flowchart of a neural network generation method provided in an embodiment of the present application;
FIG. 2 shows a schematic diagram of a neural network structure in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a neural network generating device according to an embodiment of the present application;
fig. 4 shows a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
To address the drawbacks of existing neural-network channel-number learning, namely low efficiency, high learning cost, and no guarantee of measurement accuracy or speed, the present application provides a neural network generation method and apparatus, an electronic device, and a storage medium. The application constructs multiple neural network structures using a preset free weight to learn the channel number, avoiding the limited learning quality and high cost of setting channel numbers manually. At the same time, the preset free weight allows the channels that participate in training to be selected flexibly within a certain range, overcoming the fixed channel selection of prior-art automatic channel-number learning; this improves both the quality and the search efficiency of channel-number learning, and with it the accuracy and speed of the generated neural network.
The neural network generation method and device, the electronic device and the storage medium provided by the application are described in detail below.
The neural network generation method provided by the present application is executed by a server. It learns a suitable target channel number, ensuring the quality of channel-number learning and the accuracy of the target neural network while improving the efficiency of the channel-number search, and it avoids the limited learning quality and high cost of setting channel numbers manually. Specifically, as shown in fig. 1, the neural network generation method provided in the present application may include the following steps:
s110, determining a plurality of channel number configuration schemes corresponding to a target neural network based on a preset load; each channel number configuration scheme comprises the channel number included in each network layer in the target neural network.
Here, the preset load is set in advance for the application scenario and may be determined from the computing capability of the device on which the target neural network will run. When the target channel number of each network layer is determined over multiple iterations, several preset loads can be set, with each iteration's preset load smaller than that of the previous iteration. The channel-number configuration schemes determined from these loads therefore contain fewer and fewer channels, and each new iteration searches within the space spanned by the target channel numbers determined in the previous iteration, which reduces the computation of channel-number learning and improves its quality.
S120, determining a plurality of neural network structures under each channel number configuration scheme based on a preset free weight and the channel number included in each network layer in the channel number configuration scheme.
Here, when constructing the neural network structure based on the preset free weight, specifically, according to the number of channels of each network layer included in each channel number configuration scheme, channels in different positions are selected for each channel number configuration scheme based on the preset free weight, so as to obtain different neural network structures.
In the prior art, channels at the corresponding positions are selected sequentially from front to back according to the preset channel number, so channels near the front are selected and trained more often than channels near the back, which degrades the quality of channel-number learning. With the preset free weight, channels at different positions can be selected at random from the network, overcoming this drawback and improving the quality of channel-number learning.
After determining the various neural network structures under each channel number configuration scheme, a super network can be constructed by utilizing the various neural network structures under each channel number configuration scheme, and training is performed on the constructed super network to obtain the corresponding measurement precision of each neural network structure under each channel number configuration scheme. The super network is used for testing the measurement precision corresponding to each neural network structure, so that the learning efficiency of the neural network channel number can be improved.
S130, determining the number of target channels included in each network layer in the target neural network based on the measurement precision corresponding to each neural network structure under each channel number configuration scheme.
Specifically, the neural network measurement accuracy of the target neural network under each channel number configuration scheme is first determined based on the measurement accuracy corresponding to each neural network structure under that scheme; then, the channel number of each network layer in the scheme with the highest neural network measurement accuracy is taken as the target channel number of each network layer in the target neural network.
When determining the neural network measurement accuracy of the target neural network under a certain channel number configuration scheme, specifically, the highest measurement accuracy among the neural network structures under that scheme can be used as the neural network measurement accuracy of the scheme.
The channel number included in the channel number configuration scheme with the highest neural network measurement accuracy is used as the target channel number, so that the quality of the neural network channel number learning can be ensured.
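The selection rule of S130 — score each scheme by its best structure, then take the channel counts from the best-scoring scheme — can be sketched as follows; the example schemes and accuracy values are invented for illustration:

```python
def pick_target_channel_numbers(scheme_results):
    """scheme_results: list of (scheme, accuracies), where `scheme` is a tuple
    of per-layer channel counts and `accuracies` lists the measured accuracy
    of each neural network structure built under that scheme. A scheme's
    accuracy is the max over its structures; the counts of the best scheme
    become the target channel numbers."""
    best_scheme, _ = max(
        ((scheme, max(accs)) for scheme, accs in scheme_results),
        key=lambda pair: pair[1],
    )
    return best_scheme

results = [
    ((5, 3, 4), [0.71, 0.74]),
    ((4, 4, 4), [0.69, 0.78]),  # best structure overall -> winning scheme
    ((6, 2, 4), [0.66, 0.72]),
]
print(pick_target_channel_numbers(results))  # -> (4, 4, 4)
```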
And S140, generating the target neural network based on the number of target channels included in each network layer in the target neural network.
Specifically, after the number of target channels included in each network layer is determined, the neural network is retrained to generate the target neural network. Retraining the network based on the number of target channels included in each network layer improves the measurement accuracy of the generated target neural network.
In some embodiments, the determination of the plurality of neural network structures in step S120 may specifically include the following steps:
step one, aiming at each channel number configuration scheme, determining a free channel area in each network layer under the channel number configuration scheme based on the preset free weight and the channel number included in each network layer in the channel number configuration scheme; wherein the free channel region includes a plurality of channels in a corresponding network layer.
Here, when determining the free channel area for a certain network layer, specifically, the channel whose ordinal position equals the number of channels of that network layer in the channel number configuration scheme may be used as the channel at the center of the free channel area; the free channel area of the network layer is then formed by that center channel, the N channels before it, and the N channels after it. N corresponds to the preset free weight and can be set flexibly according to the application scenario.
As shown in fig. 2, the preset free weight is set to 1, and the first network layer in the channel number configuration scheme includes 5 channels, so the 5th channel of that layer is the channel at the center of the free channel area, and this center channel together with the 1 channel before it and the 1 channel after it forms the free channel area 21 of the first network layer. Similarly, the second network layer in the scheme includes 3 channels, so the 3rd channel of that layer is the center of the free channel area, and this center channel together with the 1 channel before it and the 1 channel after it forms the free channel area 22 of the second network layer. Likewise, the third network layer includes 4 channels, so the 4th channel of that layer is the center of the free channel area, and this center channel together with the 1 channel before it and the 1 channel after it forms the free channel area 23 of the third network layer.
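Assuming 1-indexed channels as in the figure description, the free channel area of a layer can be computed as below; `free_channel_region` is a hypothetical helper name, not from the patent:

```python
def free_channel_region(num_channels, free_weight):
    """Channels are 1-indexed. The channel whose index equals the scheme's
    channel count for this layer is the center of the free channel area, and
    the area spans `free_weight` channels on each side of that center."""
    center = num_channels
    return list(range(center - free_weight, center + free_weight + 1))

# Reproduces the fig. 2 example: free weight 1, layers with 5, 3, 4 channels.
print(free_channel_region(5, 1))  # -> [4, 5, 6]  (area 21)
print(free_channel_region(3, 1))  # -> [2, 3, 4]  (area 22)
print(free_channel_region(4, 1))  # -> [3, 4, 5]  (area 23)
```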
And step two, selecting target channels from the free channel areas for the corresponding network layers respectively based on the preset free weights, and forming various neural network structures under the channel number configuration scheme.
When selecting the target channels for each network layer, specifically, a number of channels equal to the preset free weight is removed from the free channel area of the network layer at random positions, and the channels remaining in the free channel area are used as the target channels.
According to the above method, the free channel area is set using the preset free weight, and the channels forming the neural network structure are selected from the free channel area. This overcomes the fixed channel selection of prior-art automatic learning of network channel numbers, allows the channels in the neural network to be selected flexibly, and improves the quality and search efficiency of neural network channel number learning.
As can be seen from the above description, the number of channels included in the free channel area of all the network layers is equal, so that when a target channel is selected for each network layer, the target channel can be selected for each network layer randomly, and the relative position information of the target channels in the free channel area of different network layers may be the same or different.
In order to reduce the amount of computation in channel number learning, the relative position information of the target channels may be set to be the same in every network layer of the same neural network structure. As shown in fig. 2, in the first neural network structure 24, the relative position of the target channels in the corresponding free channel area is the same for every network layer: the target channels occupy the two leftmost positions of the free channel area. In the second neural network structure 25, the target channels occupy the two outer positions of the free channel area in every network layer. In the third neural network structure 26, the target channels occupy the two rightmost positions of the free channel area in every network layer.
In particular implementations, the various neural network structures 24, 25, and 26 shown in FIG. 2 may be formed using the following steps:
step one, determining various optional position information of the target channel in the free channel area based on the preset free weight; wherein the free channel regions in different network layers comprise the same number of channels.
Here, the positions of the channels removed from each free channel area can be determined based on the preset free weight, and the remaining channel positions are the selectable channel positions. As shown in fig. 2, based on the preset free weight of 1, the structures of the free channel areas corresponding to the three pieces of determined optional position information are shown as 27, 28 and 29: in the optional position information corresponding to reference numeral 27, the removed channel is at the rightmost side of the free channel area; in the optional position information corresponding to reference numeral 28, the removed channel is at the center of the free channel area; and in the optional position information corresponding to reference numeral 29, the removed channel is at the leftmost side of the free channel area.
Step two, for each piece of optional position information, selecting target channels in the free channel area of each network layer according to that optional position information, and forming the neural network structure corresponding to the optional position information based on the selected target channels in each network layer.
As shown in fig. 2, in the optional position information corresponding to reference numeral 27, the removed channel is at the rightmost side of the free channel area, and the two channels on the left are taken as target channels. In the optional position information corresponding to reference numeral 28, the removed channel is at the center of the free channel area, and the two channels on either side are taken as target channels. In the optional position information corresponding to reference numeral 29, the removed channel is at the leftmost side of the free channel area, and the two channels on the right are taken as target channels.
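Enumerating the optional position information amounts to choosing which `free_weight` slots of the `2 * free_weight + 1`-slot free channel area to drop; for a free weight of 1 this yields exactly the three options 27, 28 and 29 described above. A sketch using 0-indexed relative offsets (helper name is an assumption):

```python
from itertools import combinations

def optional_positions(free_weight):
    """All ways of dropping `free_weight` channels from a free channel area of
    2*free_weight + 1 slots. Each result is the tuple of kept relative offsets,
    shared by every network layer of one neural network structure."""
    size = 2 * free_weight + 1
    kept = []
    for removed in combinations(range(size), free_weight):
        kept.append(tuple(i for i in range(size) if i not in removed))
    return kept

# Free weight 1: drop the left, center, or right slot of a 3-slot area.
print(optional_positions(1))  # -> [(1, 2), (0, 2), (0, 1)]
```

The three tuples correspond to reference numerals 29, 28 and 27 respectively (removed channel at the left, center, and right of the free channel area).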
By utilizing the steps, in the same neural network structure, the relative position information of the target channels in different network layers in the corresponding free channel areas is the same, so that the number of the constructed neural network structures can be reduced, and the calculation amount of channel number learning can be reduced.
The embodiment selects the channels of the neural network based on the selectable position information, overcomes the defect that the channel selection in the number of the channels of the automatic learning network in the prior art is fixed, can flexibly select the channels in the neural network, and improves the quality and the searching efficiency of the channel number learning of the neural network.
After the target channel is selected for each network layer, the target channel in that network layer may be merged with the channel preceding the free channel region in that network layer to form the channel structure of that network layer. The channels preceding the free channel region in the network layer may form a basic channel region, such that the basic channel region in the network layer and the target channel in the network layer are merged to form a channel structure of the network layer. The channel structures of the various network layers merge to form a neural network structure.
In a specific implementation, the basic channel area may be determined in either of two ways: after the free channel area is formed, all channels before the free channel area are used as the basic channel area based on the position of the free channel area; or the number of channels in the basic channel area is determined from the preset free weight and the channel number included in the network layer under the channel number configuration scheme, and that many channels, counted from the left of the network layer, are used as the basic channel area. The basic channel area includes at least one channel in the corresponding network layer.
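Under the first of the two determinations above — everything before the free channel area forms the basic channel area — a layer's channel structure can be sketched as follows (1-indexed channels; the helper name is hypothetical):

```python
def channel_structure(num_channels, free_weight):
    """Split a layer under a channel number configuration scheme into its
    basic channel area (always kept) and its free channel area (from which
    target channels are chosen). The center of the free area is the channel
    whose index equals the layer's channel count in the scheme."""
    free = list(range(num_channels - free_weight, num_channels + free_weight + 1))
    basic = list(range(1, free[0]))  # every channel before the free area
    return basic, free

basic, free = channel_structure(5, 1)
print(basic)  # -> [1, 2, 3]
print(free)   # -> [4, 5, 6]
```

A structure's channels for this layer are then the basic area merged with the target channels picked from the free area, as described above.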
According to the embodiment, the neural network structure is constructed by utilizing the basic channel region and the target channels selected from each network layer, so that the freedom degree of selecting the channels can be limited, the diversity of the neural network structure is reduced, the calculated amount of the neural network channel number learning is reduced on the premise of ensuring the neural network channel number learning quality, and the learning efficiency is improved.
In some embodiments, after determining the number of target channels included in each network layer in the target neural network, a next iteration may be performed to determine a new number of target channels, specifically:
Step one, acquiring a new preset load, and determining a plurality of new channel number configuration schemes corresponding to a target neural network based on the new preset load; the new preset load is smaller than the preset load in the previous iteration, and the previous iteration is the process of determining the number of target channels included in each network layer in the target neural network.
Here, the new preset load is likewise determined based on the computing capability of the device where the target neural network is located, and the next round of channel number training can be performed based on the new preset load. A smaller preset load can be set for each new round of channel number training: the more rounds have been performed, the smaller the preset load, and the closer it is to the computing capability of the device where the target neural network is located. In this way, the channel number search space is gradually reduced, improving the search efficiency and the quality of channel number learning.
And step two, determining a plurality of neural network structures under the new channel number configuration scheme based on the preset free weight, the channel number included in each network layer in the new channel number configuration scheme and the target channel number included in each network layer in the target neural network under the preset load in the previous iteration.
Here, for each network layer, a search space for the current iteration is determined based on the number of target channels of that layer determined in the previous iteration, and the channel structure of the layer in the current iteration is then formed by screening channels from that search space. Specifically: the selectable channel area corresponding to each network layer in the current iteration, i.e. the search space, is determined based on the number of target channels of each network layer in the target neural network under the preset load of the previous iteration; then, a new free channel area corresponding to each network layer is determined from that search space based on the preset free weight and the channel number of each network layer in the new channel number configuration scheme, new target channels are screened from the new free channel area for each network layer, and a new channel structure of each network layer is formed from the new target channels together with the channels in front of the new free channel area.
And then forming a neural network structure in the iteration by utilizing the new channel structure corresponding to each network layer.
In summary, this step first determines, based on the number of target channels included in each network layer in the target neural network, an optional channel area corresponding to each network layer; and then determining various neural network structures under the new channel number configuration scheme based on the selectable channel area corresponding to each network layer, the preset free weight and the channel number included by each network layer in the new channel number configuration scheme.
Here, the selectable channel area is constructed by utilizing the target channel number corresponding to the preset load in the previous iteration, and the new target channel is selected from the selectable channel area to learn the channel number of the neural network, so that the learning space of the channel number can be reduced, the calculated amount of the channel number learning is reduced, and the efficiency and quality of the channel number learning are improved.
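The overall iteration — strictly decreasing preset loads, each round searching inside the previous round's result — can be sketched abstractly; the `search` callback and the toy shrink rule below are assumptions for illustration, not the patent's concrete procedure:

```python
def iterative_channel_search(loads, search, initial_space):
    """loads: preset loads, strictly decreasing toward the device's computing
    capability. `search(load, space)` returns the per-layer target channel
    counts found under `load` within the selectable channel area `space`.
    Each round's result bounds the next round's search space."""
    space = initial_space
    for load in loads:
        space = search(load, space)
    return space

# Toy search rule: keep 80% of the previous space, capped by the load.
toy = lambda load, space: [min(load, max(1, int(c * 0.8))) for c in space]
print(iterative_channel_search([64, 32, 16], toy, [100, 100, 100]))  # -> [16, 16, 16]
```

Because each round only searches within the previous round's selectable channel areas, the search space shrinks monotonically, which is what yields the efficiency gain described above.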
And thirdly, determining the target channel number included in each network layer in the target neural network under a new preset load based on the measurement precision corresponding to each neural network structure under each new channel number configuration scheme.
The method for determining the number of target channels by using the measurement accuracy is the same as that in the above embodiment, and will not be described in detail.
The above embodiment determines new channel number configuration schemes using a new preset load smaller than the preset load of the previous iteration, and selects new target channels within the search space corresponding to the previous iteration's preset load based on these schemes, thereby shrinking the search space for channel number learning. The above embodiment can reduce the search space by about 10^8 while maintaining the same search granularity, effectively improving the efficiency and quality of channel number learning.
In some embodiments, the generating the target neural network based on the target channel number included in each network layer in the target neural network may specifically be implemented by the following steps:
training the target neural network based on the number of target channels included in each network layer, and determining the positions of the channels included in each network layer in the target neural network and the parameter information corresponding to each channel to obtain the trained target neural network.
The channel positions and channel parameter information obtained during the training process that determines the number of target channels may not yield the best measurement accuracy for the target neural network; the target neural network therefore needs to be retrained based on the determined number of target channels to obtain a target neural network with higher measurement accuracy.
According to the embodiment, after the channel number of each network layer of the target neural network is determined, the target neural network is retrained based on the determined channel number, so that the measurement accuracy of the generated target neural network can be improved.
Any channel mentioned above may be a channel group containing a plurality of individual channels; using a channel group as one channel reduces the amount of computation. When the individual channels of each network layer are divided into channel groups, the influence of an individual channel in each layer on the measurement accuracy of the target neural network can be taken into account, so that every channel group in different network layers has the same influence on the measurement accuracy of the target neural network. Specifically, this can be achieved by the following steps:
Step one, determining influence degree information of a single channel in each network layer on measurement accuracy of the target neural network according to each network layer.
The influence degree information of all the single channels in each network layer on the measurement accuracy of the target neural network is the same. The influence degree information of single channels in different network layers on the measurement accuracy of the target neural network can be the same or different.
In a specific implementation, the influence degree information may be the number of floating-point operations (FLOPs) corresponding to a single channel.
And step two, determining the number of single channels included in each channel group in the network layer based on the influence degree information.
Here, before determining the number of individual channels included in a channel group of a certain network layer, the comprehensive influence degree information of a channel group on the measurement accuracy of the target neural network needs to be preset. Then, the number of individual channels included in each channel group of the network layer is determined from the influence degree information of an individual channel in that layer and the comprehensive influence degree information.
The comprehensive influence degree information corresponding to different network layers is the same.
In a specific implementation, the comprehensive influence degree information may be the number of floating-point operations (FLOPs) corresponding to a channel group.
And thirdly, dividing a single channel in the network layer based on the determined quantity to obtain a plurality of channel groups.
And step four, taking the obtained channel group as a new channel in the network layer.
According to this embodiment, the individual channels in each network layer are divided into channel groups according to the influence degree information of an individual channel, and each channel group is treated as a new channel for channel number learning, which improves the efficiency of channel number learning.
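The group sizing of step two — choosing each layer's group size so that every group contributes the same comprehensive FLOPs — can be sketched as follows; the layer names and FLOPs values are illustrative assumptions:

```python
def channels_per_group(per_channel_flops, target_group_flops):
    """Each layer groups its channels so that every group contributes roughly
    the preset comprehensive FLOPs `target_group_flops`; layers with cheaper
    channels therefore get larger groups."""
    return {layer: max(1, round(target_group_flops / flops))
            for layer, flops in per_channel_flops.items()}

# A channel in conv1 costs 2.0 MFLOPs, in conv2 0.5 MFLOPs; every group
# should cost about 4.0 MFLOPs regardless of the layer it belongs to.
print(channels_per_group({"conv1": 2.0e6, "conv2": 0.5e6}, 4.0e6))
# -> {'conv1': 2, 'conv2': 8}
```

With group sizes chosen this way, every channel group has the same influence on the target neural network's measurement accuracy (as measured by FLOPs), matching the condition stated above.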
The target neural network obtained by training according to the preset load in the above embodiment is suitable for more devices with different computing capacities, and can be suitable for small-sized devices, portable devices and the like, and also suitable for devices with large computing capacities.
Corresponding to the neural network generation method, the application also discloses a neural network generation device, which is applied to the server, and each module in the device can realize each step in the neural network generation method of each embodiment and can obtain the same beneficial effects, so that the description of the same parts is omitted here. Specifically, as shown in fig. 3, the neural network generation device includes:
the channel design module 310 is configured to determine a plurality of channel number configuration schemes corresponding to the target neural network based on a preset load; each channel number configuration scheme comprises the channel number included in each network layer in the target neural network.
The network construction module 320 is configured to determine, for each channel number configuration scheme, a plurality of neural network structures under the channel number configuration scheme based on a preset free weight and the channel number included in each network layer in the channel number configuration scheme.
The channel determining module 330 is configured to determine, based on measurement accuracy corresponding to each neural network structure under each channel number configuration scheme, a target channel number included in each network layer in the target neural network;
the network generation module 340 is configured to generate the target neural network based on the number of target channels included in each network layer in the target neural network.
In some embodiments, the network construction module 320, when determining the plurality of neural network structures, is to:
for each channel number configuration scheme, determining a free channel area in each network layer under the channel number configuration scheme based on the preset free weight and the channel number included in each network layer in the channel number configuration scheme; wherein the free channel region comprises a plurality of channels in a corresponding network layer;
and selecting target channels from the free channel areas for the corresponding network layers respectively based on the preset free weights to form various neural network structures under the channel number configuration scheme.
In some embodiments, the channel determining module 330 is configured to, when determining the number of target channels included in each network layer in the target neural network based on the measurement accuracy corresponding to each neural network structure under each channel number configuration scheme:
determining the measurement precision of the neural network corresponding to each channel number configuration scheme based on the measurement precision of each neural network structure under the channel number configuration scheme;
and taking the channel number included in each network layer in the channel number configuration scheme corresponding to the highest neural network measurement accuracy as the target channel number included in each network layer in the target neural network.
In some embodiments, the channel design module 310 is further configured to obtain a new preset load, and determine a plurality of new channel number configuration schemes corresponding to the target neural network based on the new preset load; the new preset load is smaller than the preset load in the previous iteration, and the previous iteration is the process of determining the number of target channels included in each network layer in the target neural network in the previous time;
the network construction module 320 is further configured to determine, for each new channel number configuration scheme, a plurality of neural network structures under the new channel number configuration scheme based on a preset free weight, a channel number included in each network layer in the new channel number configuration scheme, and a target channel number included in each network layer in the target neural network under a preset load in a previous iteration;
The channel determining module 330 is further configured to determine, based on the measurement accuracy corresponding to each neural network structure under each new channel number configuration scheme, a target channel number included in each network layer in the target neural network under a new preset load.
In some embodiments, the channel design module 310 is further configured to, prior to determining the plurality of channel number configuration schemes:
determining influence degree information of a single channel in each network layer of the target neural network on measurement accuracy of the target neural network;
determining the number of single channels included in each channel group in the network layer based on the influence degree information;
dividing a single channel in the network layer based on the determined number to obtain a plurality of channel groups;
the channel group will be obtained as a new channel in the network layer.
Corresponding to the above neural network generation method, the embodiment of the present application further provides an electronic device 400, as shown in fig. 4, which is a schematic structural diagram of the electronic device 400 provided in the embodiment of the present application, including:
a processor 41, a memory 42, and a bus 43; memory 42 is used to store execution instructions, including memory 421 and external memory 422; the memory 421 is also referred to as an internal memory, and is used for temporarily storing operation data in the processor 41 and data exchanged with the external memory 422 such as a hard disk, and the processor 41 exchanges data with the external memory 422 through the memory 421, and when the electronic device 400 operates, the processor 41 and the memory 42 communicate with each other through the bus 43, so that the processor 41 executes the following instructions: determining a plurality of channel number configuration schemes corresponding to the target neural network based on a preset load; each channel number configuration scheme comprises the channel number included in each network layer in the target neural network; determining a plurality of neural network structures under each channel number configuration scheme based on a preset free weight and the channel number included in each network layer in the channel number configuration scheme; determining the number of target channels included in each network layer in the target neural network based on the measurement precision corresponding to each neural network structure under each channel number configuration scheme; and generating the target neural network based on the target channel number included in each network layer in the target neural network.
The present application also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor performs the steps of the neural network generation method described in the above method embodiments. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The computer program product of the neural network generation method provided in the embodiments of the present application includes a computer-readable storage medium storing program code, where the program code includes instructions for executing the steps of the neural network generation method described in the method embodiments; for details, reference may be made to the method embodiments above, which are not repeated here.
The present application also provides a computer program which, when executed by a processor, implements any of the methods of the previous embodiments. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present application, and are not intended to limit the scope of the present application, but the present application is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, the present application is not limited thereto. Any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or make equivalent substitutions for some of the technical features within the technical scope of the disclosure of the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A neural network generation method, comprising:
determining a plurality of channel number configuration schemes corresponding to the target neural network based on a preset load; each channel number configuration scheme comprises the channel number included in each network layer in the target neural network; the preset load is determined according to the computing capability of the device on which the target neural network to be generated is located;
determining a plurality of neural network structures under each channel number configuration scheme based on a preset free weight and the channel number included in each network layer in the channel number configuration scheme;
determining the number of target channels included in each network layer in the target neural network based on the measurement accuracy corresponding to each neural network structure under each channel number configuration scheme;
and generating the target neural network based on the number of target channels included in each network layer in the target neural network, wherein the target neural network is suitable for small-sized devices and/or portable devices.
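The first step of claim 1, determining channel number configuration schemes that respect a preset load, can be read as a budgeted enumeration. The sketch below is only one plausible reading, not the patented implementation; the candidate widths, the per-channel cost model, and the budget value are all assumptions.

```python
# Illustrative sketch only: candidate widths, per-channel costs, and the
# budget are assumptions, not values from the patent.
from itertools import product

def enumerate_channel_configs(candidate_widths, cost_per_channel, preset_load):
    """Return every per-layer channel-count assignment whose estimated
    total cost does not exceed the preset load."""
    configs = []
    for widths in product(*candidate_widths):
        cost = sum(w * c for w, c in zip(widths, cost_per_channel))
        if cost <= preset_load:
            configs.append(widths)
    return configs

# Two layers, each choosing its channel number from a small candidate set.
configs = enumerate_channel_configs(
    candidate_widths=[(8, 16), (16, 32)],
    cost_per_channel=[1.0, 2.0],  # assumed per-channel cost of each layer
    preset_load=72.0,             # budget derived from the device's compute capability
)
print(configs)  # -> [(8, 16), (8, 32), (16, 16)]
```

In practice the preset load would be a FLOPs or latency budget estimated from the target device's compute capability, rather than the toy per-channel costs used here.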
2. The neural network generation method of claim 1, wherein determining the plurality of neural network structures comprises:
for each channel number configuration scheme, determining a free channel region in each network layer under the channel number configuration scheme based on the preset free weight and the channel number included in each network layer in the channel number configuration scheme; wherein the free channel region comprises a plurality of channels in the corresponding network layer;
and selecting target channels from the free channel regions for the corresponding network layers respectively based on the preset free weight, to form a plurality of neural network structures under the channel number configuration scheme.
3. The neural network generation method according to claim 2, wherein the selecting, based on the preset free weight, target channels from the free channel regions for the corresponding network layers respectively to form a plurality of neural network structures under the channel number configuration scheme comprises:
determining a plurality of pieces of selectable position information of the target channels in the free channel region based on the preset free weight; wherein the free channel regions in different network layers comprise the same number of channels;
and, for each piece of selectable position information, selecting target channels in the free channel region of each network layer according to the selectable position information, and forming a neural network structure corresponding to the selectable position information based on the selected target channels in the respective network layers.
4. The neural network generation method according to claim 3, wherein forming a neural network structure corresponding to the selectable position information based on the selected target channels in the respective network layers comprises:
determining a basic channel region in each network layer under the channel number configuration scheme based on the preset free weight and the channel number included in each network layer in the channel number configuration scheme; wherein the basic channel region comprises at least one channel in the corresponding network layer;
and forming a neural network structure corresponding to the selectable position information based on the basic channel region in each network layer and the target channels in each network layer.
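One plausible reading of claims 2 to 4 is sketched below: each layer keeps a fixed block of basic channels, the remaining channels of the configured width form the free channel region, and each selectable position picks a contiguous window of target channels from that region. The window-per-offset interpretation, and all names and sizes, are illustrative assumptions rather than the patented scheme.

```python
# Illustrative reading: "basic" channels are always kept, and the preset
# free weight is taken here as the number of target channels chosen from
# the free region; both names and the windowed selection are assumptions.
def structures_for_config(layer_width, basic, free_weight):
    """Enumerate channel index sets for one layer: the basic channels plus
    a contiguous window of `free_weight` target channels placed at every
    selectable position inside the free channel region."""
    free = list(range(basic, layer_width))    # free channel region
    n_positions = max(len(free) - free_weight + 1, 0)
    structures = []
    for offset in range(n_positions):
        target = free[offset:offset + free_weight]
        structures.append(list(range(basic)) + target)
    return structures

# Layer of width 6: channels 0-1 are basic, 2-5 are free, 2 targets chosen.
variants = structures_for_config(layer_width=6, basic=2, free_weight=2)
print(variants)  # -> [[0, 1, 2, 3], [0, 1, 3, 4], [0, 1, 4, 5]]
```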
5. The neural network generation method according to any one of claims 1 to 4, characterized by further comprising, after determining the plurality of neural network structures:
constructing a super network based on various neural network structures under various channel number configuration schemes;
and determining, based on the super network, the measurement accuracy corresponding to each neural network structure under each channel number configuration scheme.
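The super network of claim 5 can be read as standard weight sharing: one over-parameterised weight tensor per layer, from which each candidate structure slices only the channels it uses, so candidates are evaluated without being trained separately. The toy layer below illustrates only that slicing idea; the shapes, names, and plain-Python linear layer are assumptions.

```python
# Toy super-network layer: shapes, names, and the hand-rolled linear layer
# are assumptions; only the channel-slicing idea corresponds to the claim.
import random

random.seed(0)
IN_FEATURES = 4
MAX_CHANNELS = 8
# One shared weight row per output channel of the super-network layer.
super_weight = [[random.uniform(-1.0, 1.0) for _ in range(IN_FEATURES)]
                for _ in range(MAX_CHANNELS)]

def evaluate_substructure(channel_idx, x):
    """Forward pass that uses only the selected output channels of the
    shared weights; a real system would measure validation accuracy here."""
    return [sum(w * xi for w, xi in zip(super_weight[c], x)) for c in channel_idx]

x = [0.5, -1.0, 0.25, 2.0]
out_small = evaluate_substructure(range(3), x)  # 3-channel candidate
out_large = evaluate_substructure(range(6), x)  # 6-channel candidate
```

Because both candidates read the same shared rows, the wider candidate's first outputs coincide with the narrower one's, which is exactly what makes shared evaluation cheap.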
6. The neural network generation method according to any one of claims 1 to 4, wherein determining the number of target channels included in each network layer in the target neural network based on the measurement accuracy corresponding to each neural network structure under each channel number configuration scheme comprises:
determining the measurement accuracy of the neural network corresponding to each channel number configuration scheme based on the measurement accuracy of each neural network structure under the channel number configuration scheme;
and taking the channel number included in each network layer in the channel number configuration scheme corresponding to the highest neural network measurement accuracy as the number of target channels included in each network layer in the target neural network.
7. The neural network generation method according to any one of claims 1 to 4, characterized by further comprising, after determining the number of target channels included in each network layer in the target neural network:
acquiring a new preset load, and determining a plurality of new channel number configuration schemes corresponding to the target neural network based on the new preset load; wherein the new preset load is smaller than the preset load in a previous iteration, and the previous iteration is the most recent process of determining the number of target channels included in each network layer in the target neural network;
for each new channel number configuration scheme, determining a plurality of neural network structures under the new channel number configuration scheme based on the preset free weight, the channel number included in each network layer in the new channel number configuration scheme, and the number of target channels included in each network layer in the target neural network under the preset load in the previous iteration;
and determining the number of target channels included in each network layer in the target neural network under the new preset load based on the measurement accuracy corresponding to each neural network structure under each new channel number configuration scheme.
8. The neural network generation method of claim 7, wherein determining the plurality of neural network structures under the new channel number configuration scheme based on the preset free weight, the channel number included in each network layer in the new channel number configuration scheme, and the target channel number included in each network layer in the target neural network under the preset load in the previous iteration, comprises:
determining a selectable channel region corresponding to each network layer in the current iteration based on the number of target channels included in each network layer in the target neural network under the preset load in the previous iteration;
and determining a plurality of neural network structures under the new channel number configuration scheme based on the selectable channel region corresponding to each network layer, the preset free weight, and the channel number included in each network layer in the new channel number configuration scheme.
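Claims 7 and 8 describe an iterative search in which each round uses a smaller preset load and restricts candidates to the channel counts kept by the previous round. A minimal sketch of that loop follows, with a stand-in accuracy oracle and an assumed even per-layer split of the budget; both are illustrative assumptions, not the patented procedure.

```python
# Hedged sketch of progressive shrinking: each new (smaller) load searches
# only within the widths that survived the previous iteration.
def progressive_search(initial_widths, loads, accuracy_of):
    """Per layer, keep the best-scoring width that fits the shrinking
    per-layer budget implied by each new load."""
    selectable = [list(range(1, w + 1)) for w in initial_widths]
    chosen = list(initial_widths)
    for load in loads:  # loads are assumed strictly decreasing
        per_layer = load // len(chosen)
        # restrict the search to widths kept by the previous iteration
        selectable = [[w for w in opts if w <= c]
                      for opts, c in zip(selectable, chosen)]
        chosen = [max((w for w in opts if w <= per_layer), key=accuracy_of)
                  for opts in selectable]
    return chosen

best = progressive_search(
    initial_widths=[8, 8],
    loads=[12, 8],
    accuracy_of=lambda w: w,  # toy oracle: wider is better, up to the budget
)
print(best)  # -> [4, 4]
```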
9. The neural network generation method according to any one of claims 1 to 4, wherein generating the target neural network based on the number of target channels included in each network layer in the target neural network comprises:
training the target neural network based on the number of target channels included in each network layer, and determining the positions of the channels included in each network layer in the target neural network and the parameter information corresponding to each channel, to obtain the trained target neural network.
10. The neural network generation method according to any one of claims 1 to 4, characterized by further comprising, before determining the plurality of channel number configuration schemes:
determining influence degree information of a single channel in each network layer of the target neural network on measurement accuracy of the target neural network;
determining the number of single channels included in each channel group in the network layer based on the influence degree information;
dividing a single channel in the network layer based on the determined number to obtain a plurality of channel groups;
and taking each obtained channel group as a new channel in the network layer.
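The grouping step of claim 10 can be sketched as follows, under an assumed rule: channels whose individual influence on measured accuracy falls below a threshold are merged into fixed-size groups, and each group is then treated as one new channel during the search. The threshold, the group size, and the influence values are illustrative assumptions.

```python
# Assumed grouping rule: low-influence channels are merged `group_size` at
# a time; high-influence channels remain singleton groups.
def group_channels(influence, low_threshold, group_size):
    """Partition channel indices into groups based on per-channel influence
    on measurement accuracy."""
    low = [i for i, v in enumerate(influence) if v < low_threshold]
    high = [i for i, v in enumerate(influence) if v >= low_threshold]
    groups = [low[i:i + group_size] for i in range(0, len(low), group_size)]
    groups += [[i] for i in high]
    return groups

# Six channels; the last two matter most for measured accuracy.
groups = group_channels(
    influence=[0.01, 0.02, 0.015, 0.03, 0.4, 0.5],
    low_threshold=0.1,
    group_size=2,
)
print(groups)  # -> [[0, 1], [2, 3], [4], [5]]
```

Grouping shrinks the search space: the search now chooses among four "channels" instead of six, while every original channel still belongs to exactly one group.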
11. A neural network generation device, comprising:
the channel design module is used for determining a plurality of channel number configuration schemes corresponding to the target neural network based on a preset load; each channel number configuration scheme comprises the channel number included in each network layer in the target neural network; the preset load is determined according to the computing capability of the device on which the target neural network to be generated is located;
the network construction module is used for determining a plurality of neural network structures under each channel number configuration scheme based on a preset free weight and the channel number included in each network layer in the channel number configuration scheme;
the channel determining module is used for determining the number of target channels included in each network layer in the target neural network based on the measurement accuracy corresponding to each neural network structure under each channel number configuration scheme;
and the network generation module is used for generating the target neural network based on the number of target channels included in each network layer in the target neural network, the target neural network being suitable for small-sized devices and/or portable devices.
12. The neural network generation device of claim 11, wherein the network construction module, when determining the plurality of neural network structures, is to:
for each channel number configuration scheme, determining a free channel region in each network layer under the channel number configuration scheme based on the preset free weight and the channel number included in each network layer in the channel number configuration scheme; wherein the free channel region comprises a plurality of channels in the corresponding network layer;
and selecting target channels from the free channel regions for the corresponding network layers respectively based on the preset free weight, to form a plurality of neural network structures under the channel number configuration scheme.
13. The neural network generation device according to claim 11 or 12, wherein the channel determining module, when determining the number of target channels included in each network layer in the target neural network based on the measurement accuracy corresponding to each neural network structure under each channel number configuration scheme, is configured to:
determining the measurement accuracy of the neural network corresponding to each channel number configuration scheme based on the measurement accuracy of each neural network structure under the channel number configuration scheme;
and taking the channel number included in each network layer in the channel number configuration scheme corresponding to the highest neural network measurement accuracy as the target channel number included in each network layer in the target neural network.
14. The neural network generation device according to claim 11 or 12, wherein,
the channel design module is also used for acquiring a new preset load and determining a plurality of new channel number configuration schemes corresponding to the target neural network based on the new preset load; the new preset load is smaller than the preset load in the previous iteration, and the previous iteration is the process of determining the number of target channels included in each network layer in the target neural network in the previous time;
the network construction module is further configured to determine, for each new channel number configuration scheme, a plurality of neural network structures under the new channel number configuration scheme based on a preset free weight, the channel number included in each network layer in the new channel number configuration scheme, and the target channel number included in each network layer in the target neural network under the preset load in the previous iteration;
the channel determining module is further configured to determine, based on measurement accuracy corresponding to each neural network structure under each new channel number configuration scheme, a target channel number included in each network layer in the target neural network under a new preset load.
15. The neural network generation device according to claim 11 or 12, wherein the channel design module, prior to determining the plurality of channel number configuration schemes, is further configured to:
determining influence degree information of a single channel in each network layer of the target neural network on measurement accuracy of the target neural network;
determining the number of single channels included in each channel group in the network layer based on the influence degree information;
dividing a single channel in the network layer based on the determined number to obtain a plurality of channel groups;
taking each obtained channel group as a new channel in the network layer.
16. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the neural network generation method of any of claims 1 to 10.
17. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the neural network generation method according to any one of claims 1 to 10.
CN202010882925.9A 2020-08-28 2020-08-28 Neural network generation method and device, electronic equipment and storage medium Active CN111985644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010882925.9A CN111985644B (en) 2020-08-28 2020-08-28 Neural network generation method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111985644A CN111985644A (en) 2020-11-24
CN111985644B true CN111985644B (en) 2024-03-08

Family

ID=73440146


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819138A (en) * 2021-01-26 2021-05-18 上海依图网络科技有限公司 Optimization method and device of image neural network structure

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107247991A (en) * 2017-06-15 2017-10-13 北京图森未来科技有限公司 A kind of method and device for building neutral net
CN108229647A (en) * 2017-08-18 2018-06-29 北京市商汤科技开发有限公司 The generation method and device of neural network structure, electronic equipment, storage medium
CN108256646A (en) * 2018-01-22 2018-07-06 百度在线网络技术(北京)有限公司 model generating method and device
CN110490323A (en) * 2019-08-20 2019-11-22 腾讯科技(深圳)有限公司 Network model compression method, device, storage medium and computer equipment
CN111401516A (en) * 2020-02-21 2020-07-10 华为技术有限公司 Neural network channel parameter searching method and related equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant