CN109359727A - Method, apparatus, device and readable medium for determining the structure of a neural network - Google Patents

Method, apparatus, device and readable medium for determining the structure of a neural network

Info

Publication number
CN109359727A
CN109359727A
Authority
CN
China
Prior art keywords
network structure
network
exist
function value
target function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811494899.1A
Other languages
Chinese (zh)
Other versions
CN109359727B (en)
Inventor
胡耀全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201811494899.1A priority Critical patent/CN109359727B/en
Publication of CN109359727A publication Critical patent/CN109359727A/en
Application granted granted Critical
Publication of CN109359727B publication Critical patent/CN109359727B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Telephonic Communication Services (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose a method, apparatus, device and readable medium for determining the structure of a neural network. The method includes: sampling a current network structure with a sampler; calculating an objective function value of the current network structure; adjusting parameters of the sampler according to the objective function value; and returning to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold. Embodiments of the present disclosure realize automatic search of network structures: instead of being limited to a single fixed network structure, new and higher-quality network structures are continuously obtained through the objective function value, making the approach suitable for data processing in almost any scenario.

Description

Method, apparatus, device and readable medium for determining the structure of a neural network
Technical field
Embodiments of the present disclosure relate to computer vision technology, and in particular to a method, apparatus, device and readable medium for determining the structure of a neural network.
Background
With the development of computer vision, data such as images and sound can be processed by neural networks, for example to perform object detection, object tracking, segmentation and classification on objects in an image.
With rising user demands and the development of terminal technology, higher requirements are placed on the accuracy and speed of data processing, which calls for neural networks with better processing performance. In the prior art, mature neural networks such as RCNN, YOLO and SSD are mostly used for image processing. However, the inventor found during research that these mature neural networks are not suitable for processing all data; for example, when processing certain images, their processing results are poor.
Summary of the invention
Embodiments of the present disclosure provide a method, apparatus, device and readable medium for determining the structure of a neural network, so as to realize automatic search of network structures for data processing in almost any scenario.
In a first aspect, an embodiment of the present disclosure provides a method for determining the structure of a neural network, comprising:
sampling a current network structure with a sampler;
calculating an objective function value of the current network structure;
adjusting parameters of the sampler according to the objective function value;
returning to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for determining the structure of a neural network, comprising:
a sampling module, configured to sample a current network structure with a sampler;
a computing module, configured to calculate an objective function value of the current network structure;
an adjusting module, configured to adjust parameters of the sampler according to the objective function value;
a return module, configured to return to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, comprising:
one or more processing devices;
a storage device, configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processing devices, the one or more processing devices implement the method for determining the structure of a neural network described in any embodiment.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable medium storing a computer program which, when executed by a processing device, implements the method for determining the structure of a neural network described in any embodiment.
In the embodiments of the present disclosure, a current network structure is sampled with a sampler and its objective function value is calculated, so that the quality of the network structure is characterized by the objective function value. By adjusting the parameters of the sampler according to the objective function value and returning to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold, automatic search of network structures is realized; moreover, by repeatedly adjusting the parameters of the sampler, the sampler is able to sample higher-quality network structures, so that better results are obtained in subsequent use. In a concrete application scenario, the embodiments of the present disclosure are not limited to a single fixed network structure; instead, new and higher-quality network structures are continuously obtained through the objective function value, which makes them suitable for data processing in almost any scenario.
Detailed description of the invention
Fig. 1 is a flowchart of a method for determining the structure of a neural network provided by Embodiment 1 of the present disclosure;
Fig. 2 is a flowchart of a method for determining the structure of a neural network provided by Embodiment 2 of the present disclosure;
Fig. 3a is a flowchart of a method for determining the structure of a neural network provided by Embodiment 3 of the present disclosure;
Fig. 3b is a schematic diagram of predefined network-layer codes provided by Embodiment 3 of the present disclosure;
Fig. 3c is a schematic structural diagram of a sampler provided by Embodiment 3 of the present disclosure;
Fig. 3d is a schematic diagram of a network structure provided by Embodiment 3 of the present disclosure;
Fig. 3e is a schematic diagram of another network structure provided by Embodiment 3 of the present disclosure;
Fig. 4 is a schematic structural diagram of an apparatus for determining the structure of a neural network provided by Embodiment 4 of the present disclosure;
Fig. 5 is a schematic structural diagram of an electronic device provided by Embodiment 5 of the present disclosure.
Detailed description of the embodiments
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the present disclosure and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present disclosure rather than the entire structure. In the following embodiments, optional features and examples are provided in each embodiment at the same time; the features recorded in the embodiments can be combined to form multiple optional solutions, and each numbered embodiment should not be regarded as only a single technical solution.
Embodiment one
Fig. 1 is a flowchart of a method for determining the structure of a neural network provided by Embodiment 1 of the present disclosure. This embodiment is applicable to the case of determining the structure of a neural network. The method can be executed by an apparatus for determining the structure of a neural network, which can be composed of hardware and/or software and integrated in an electronic device; the electronic device can be a server or a terminal. With reference to Fig. 1, the method provided by this embodiment of the present disclosure specifically includes the following operations:
S110: sample a current network structure with a sampler. Proceed to S120.
The sampler is used to sample network structures of a neural network. Each run of the sampler can sample at least one current network structure.
S120: calculate the objective function value of the current network structure. Proceed to S130.
The objective function value is used to characterize the quality of the current network structure. Taking a recognition model as an example, the higher the objective function value, the more accurate the recognition result of the recognition model; taking a classification model as an example, the higher the objective function value, the greater the confidence of the classification model.
Optionally, the objective function is constructed from at least one element, and the objective function value of the current network structure is calculated accordingly. The elements used to construct the objective function are determined by the actual requirements on the network structure. For example, if the network structure needs to run on a terminal and is required to occupy little space, the elements include the space occupied by the network structure; as another example, if the network structure is required to have high accuracy, the elements include the accuracy of the network structure.
S130: judge whether the objective function value reaches a preset function value and/or whether the number of adjustments reaches a count threshold. If yes, jump to S140; if not, jump to S150.
This embodiment uses at least one of two factors, the preset function value and the count threshold, as the termination condition of the loop. Optionally, only one of the factors is judged and the other discarded, for example judging only whether the objective function value reaches the preset function value, or judging only whether the number of adjustments reaches the count threshold. Optionally, the two factors may also be judged jointly, for example judging at the same time whether the objective function value reaches the preset function value and whether the number of adjustments reaches the count threshold; if either judgment is affirmative, jump to S140; if both judgments are negative, i.e. the objective function value has not reached the preset function value and the number of adjustments has not reached the count threshold, jump to S150.
In a specific embodiment, if the objective function value reaches the preset function value, the current network structure meets the actual requirements, and the current network structure is determined as the final network structure for subsequent use. Conversely, if the objective function value does not reach the preset function value, the current network structure does not meet the actual requirements; the parameters of the sampler can be adjusted according to the objective function value and the next network structure sampled, or it can further be judged whether the number of adjustments has reached the count threshold. This is because, in some cases, the objective function value still fails to reach the preset function value after many adjustments; in order to save time and computing resources, if the number of adjustments reaches the count threshold, the current network structure is determined as the final network structure, and if not, the parameters of the sampler are adjusted according to the objective function value and the next network structure is sampled.
S140: determine the current network structure, and end this operation.
S150: adjust the parameters of the sampler according to the objective function value. Return to S110.
Optionally, if the preset function value is a small value, i.e. a smaller objective function value indicates a higher-quality current network structure, the parameters of the sampler are adjusted by minimizing the objective function value; if the preset function value is a large value, i.e. a larger objective function value indicates a higher-quality current network structure, the parameters of the sampler are adjusted by maximizing the objective function value. Optionally, parameter adjustment methods include, but are not limited to, policy gradient algorithms, gradient descent, and so on.
By adjusting the parameters of the sampler, the objective function value of the next network structure is brought closer to the preset function value than that of the current network structure, which helps the sampler sample higher-quality network structures.
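As a concrete illustration of S110-S150, the following Python sketch (not part of the patent; sampler, compute_objective and policy_gradient_step are hypothetical placeholders) shows one way such a sample-evaluate-adjust loop could be organized, assuming a larger objective function value indicates a better structure:

def search(sampler, compute_objective, preset_value, max_adjustments):
    # Minimal sketch of the S110-S150 loop; `sampler`, `compute_objective` and
    # `policy_gradient_step` are hypothetical placeholders, not APIs defined by the patent.
    adjustments = 0
    while True:
        structure = sampler.sample()               # S110: sample a current network structure
        q = compute_objective(structure)           # S120: objective function value
        if q >= preset_value or adjustments >= max_adjustments:   # S130: termination conditions
            return structure                       # S140: keep the current network structure
        sampler.policy_gradient_step(reward=q)     # S150: adjust sampler parameters (e.g. policy gradient)
        adjustments += 1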
In this embodiment of the present disclosure, a current network structure is sampled with a sampler and its objective function value is calculated, so that the quality of the network structure is characterized by the objective function value. By adjusting the parameters of the sampler according to the objective function value and returning to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold, automatic search of network structures is realized; moreover, by repeatedly adjusting the parameters of the sampler, the sampler is able to sample higher-quality network structures, so that better results are obtained in subsequent use. In a concrete application scenario, this embodiment of the present disclosure is not limited to a single fixed network structure; instead, new and higher-quality network structures are continuously obtained through the objective function value, which makes it suitable for data processing in almost any scenario.
Embodiment two
Fig. 2 is a flowchart of a method for determining the structure of a neural network provided by Embodiment 2 of the present disclosure. This embodiment further optimizes the optional implementations of the above embodiment. Optionally, the operation "calculate the objective function value of the current network structure" is refined into "calculate the accuracy and/or running time of the current network structure; obtain the objective function value according to the accuracy and/or running time of the current network structure", so as to sample network structures with higher accuracy or shorter running time. Optionally, after the operation "adjust the parameters of the sampler according to the objective function value", the operations "initialize the network parameters of the current network structure; calculate the network parameters of the current network structure from a data set" are added, so that not only the network structure but also suitable network parameters are obtained, and the resulting neural network can be applied directly to subsequent data processing. With reference to Fig. 2, the method provided by this embodiment specifically includes the following operations:
S210: sample a current network structure with a sampler.
S220: calculate the accuracy and/or running time of the current network structure.
Optionally, the accuracy of the current network structure, or its running time, or both, are calculated. Here, the accuracy refers to the accuracy on a validation set, and the running time refers to the running time on a target device, such as a terminal.
To calculate the accuracy of the current network structure, first, the network parameters of the current network structure are trained with a training set; then, the accuracy of the network structure is calculated with a validation set. Optionally, the whole training set is divided into two parts, one part, e.g. 70%, used as the training set and the other part, e.g. 30%, used as the validation set. Optionally, when there are two or more network structures, the network parameters of each network structure are trained with the training set, and then the accuracy of each network structure is calculated with the validation set; for example, the accuracies of three network structures are 80%, 85% and 90% respectively. In some application scenarios, the accuracy calculated on the validation set may also be called a score. Further, the highest accuracy is selected from the two or more accuracies, or the two or more accuracies are averaged, to obtain the accuracy used for calculating the objective function value. If the highest accuracy is selected, only the network structure corresponding to the highest accuracy and its network parameters are retained, and the other network structures and their network parameters are deleted.
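Where two or more structures are sampled, the selection just described can be sketched as follows (a hedged illustration only; the candidate structures and the train_fn / eval_fn callables are placeholders for whatever training and evaluation routines are actually used):

import random

def score_candidates(candidates, dataset, train_fn, eval_fn, split=0.7):
    # Sketch of the split-train-validate selection described above.
    data = list(dataset)
    random.shuffle(data)
    cut = int(len(data) * split)                    # e.g. 70% training, 30% validation
    train_set, val_set = data[:cut], data[cut:]
    scored = []
    for structure in candidates:
        params = train_fn(structure, train_set)     # train this structure's network parameters
        acc = eval_fn(structure, params, val_set)   # validation accuracy ("score")
        scored.append((acc, structure, params))
    # keep only the structure (and parameters) with the highest validation accuracy
    return max(scored, key=lambda item: item[0])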
In some cases, the network parameters obtained from the training set do not yield a high accuracy and need further optimization. Optionally, for each current network structure, before the accuracy of the current network structure is calculated with the validation set, the loss value (loss) of the current network structure is first calculated with the validation set: if the network structure is a classification model, the loss value is obtained from the cross entropy between the output values and the reference values; if the network structure is a regression model, the loss value is obtained from the Euclidean distance between the output values and the reference values. The network parameters are then adjusted according to the loss value, specifically by minimizing the loss value so that it falls below a loss threshold; the loss calculation operation is repeated until the loss value is below the loss threshold.
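A minimal sketch of this refinement loop, assuming NumPy arrays and caller-supplied forward and update routines (both hypothetical), might look as follows; the cross-entropy and Euclidean-distance losses mirror the classification and regression cases above:

import numpy as np

def cross_entropy(outputs, targets, eps=1e-12):
    # classification: cross entropy between outputs and reference values
    return float(-np.mean(np.sum(targets * np.log(outputs + eps), axis=1)))

def euclidean(outputs, targets):
    # regression: Euclidean distance between outputs and reference values
    return float(np.mean(np.linalg.norm(outputs - targets, axis=1)))

def refine(params, val_inputs, val_targets, forward, update_params,
           is_classifier=True, loss_threshold=0.1, max_steps=1000):
    loss_fn = cross_entropy if is_classifier else euclidean
    for _ in range(max_steps):
        loss = loss_fn(forward(params, val_inputs), val_targets)
        if loss < loss_threshold:              # stop once the loss is below the threshold
            break
        params = update_params(params, loss)   # adjust the network parameters to reduce the loss
    return params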
To calculate the running time of the current network structure, optionally, the current network structure is run on a terminal and its running time is obtained. Optionally, the network parameters of the current network structure may be initial values or default values. When there are two or more current network structures, any one of them, or the current network structure with the highest accuracy, together with its network parameters and the input data, is provided to the terminal, so that the terminal processes the input data with that current network structure and its network parameters.
In most cases, the network parameters and input data are in floating-point format, which requires substantial computing resources and long running times. Considering the limited computing capability of a terminal, in order to save the terminal's running time and computing resources, before running the current network structure on the terminal and obtaining its running time, the network parameters of the current network structure and the input data are converted to fixed-point format, and the network structure, the fixed-point network parameters and the input data are provided to the terminal.
Specifically, the input data, output data and network parameters of each layer in the current network structure are obtained, and the maximum data range -2^fl to +2^fl covering the input data, output data and network parameters of each layer is determined, yielding the exponent fl. The network parameters and input data of the current network structure are divided by 2^fl and the quotients are rounded, thereby converting them to fixed-point format before being provided to the terminal. Optionally, after the terminal has run the current network structure several times, the shortest running time or the average running time is taken as the running time used for calculating the objective function value.
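The following is a generic 8-bit fixed-point quantization sketch in the spirit of the conversion just described (choose an exponent from the observed data range, rescale, round); it is an illustrative common scheme, not necessarily the exact formula used in the patent:

import numpy as np

def to_fixed_point(tensor, bit_width=8):
    # pick the exponent needed to cover the data range (plays the role of the index fl)
    max_abs = float(np.max(np.abs(tensor)))
    fl = int(np.ceil(np.log2(max_abs))) if max_abs > 0 else 0
    frac_bits = bit_width - 1 - fl              # remaining bits carry the fraction
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(tensor * scale),
                -(2 ** (bit_width - 1)), 2 ** (bit_width - 1) - 1)
    return q.astype(np.int8), frac_bits

weights = np.random.randn(3, 3).astype(np.float32)   # hypothetical floating-point parameters
q_weights, frac_bits = to_fixed_point(weights)
approx = q_weights.astype(np.float32) / (2.0 ** frac_bits)  # approximate recovery on the terminal side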
To calculate both the accuracy and the running time of the current network structure, optionally, the network parameters of the current network structure are trained with the training set; the accuracy of the network structure is calculated with the validation set; and the current network structure is run on a terminal to obtain its running time. For details, see the descriptions above of calculating the accuracy of the current network structure and calculating the running time of the current network structure. The difference from the above description is that, when the current network structure is run on the terminal, its network parameters are those obtained with the training set, or those obtained by further adjustment according to the loss value of the network structure. The network parameters obtained from the training set are usually floating-point; therefore, before running the current network structure on the terminal and obtaining its running time, the network parameters of the current network structure and the input data are converted to fixed-point format, and the network structure, the fixed-point network parameters and the input data are provided to the terminal.
S230: obtain the objective function value according to the accuracy and/or running time of the current network structure.
In one optional implementation, the reciprocal of the running time of the current network structure, or its accuracy, is used directly as the objective function value. In that case, the shorter the running time of the current network structure, or the higher its accuracy, the higher the objective function value, and the parameters of the sampler need to be adjusted by maximizing the objective function value. Conversely, if the reciprocal of the accuracy of the current network structure, or its running time, is used directly as the objective function value, the parameters of the sampler need to be adjusted by minimizing the objective function value.
In another optional implementation, the objective function value Q(m) of the current network structure m is calculated according to formula (1): Q(m) = ACC(m) × [t(m)/T]^r.
Here ACC(m) is the accuracy of the current network structure m, t(m) is the running time of the current network structure m, r is a preset exponent, and T is a constant denoting a running-time threshold. Optionally, the value of r is given by formula (2).
It can be seen that the smaller t(m) and the larger ACC(m), the larger Q(m), so the parameters of the sampler need to be adjusted by maximizing the objective function value. If r takes a piecewise-function form, the curve of Q(m) against t(m) is not smooth enough, so r is instead set to a constant; extensive experiments show that with r = -0.07 the determined network structure is fast, with high accuracy and short running time.
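For concreteness, the objective of formula (1) can be evaluated with a few lines of Python; the numbers below are illustrative only and do not come from the patent:

def objective(accuracy, runtime, runtime_threshold, r=-0.07):
    # Q(m) = ACC(m) * (t(m) / T) ** r, as in formula (1) above
    return accuracy * (runtime / runtime_threshold) ** r

# illustrative numbers only: 85% accuracy, 30 ms runtime against an 80 ms budget T
q = objective(accuracy=0.85, runtime=30.0, runtime_threshold=80.0)
# a shorter runtime or a higher accuracy both increase Q(m), so the sampler's
# parameters are adjusted by maximizing the objective value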
S240: judge whether the objective function value reaches the preset function value and/or whether the number of adjustments reaches the count threshold. If yes, jump to S250; if not, jump to S260.
S250: initialize the network parameters of the current network structure. Proceed to S251.
S251: calculate the network parameters of the current network structure from the data set.
The validation set and training set described in S220 are split off from the whole training set for the purpose of calculating the accuracy and loss value of the network structure and thereby selecting a good network structure, so the requirements on that validation set and training set are not strict. However, once a good network structure has been determined, more suitable network parameters still need to be calculated in order to improve the accuracy of the network. Therefore, the network parameters are initialized, for example set to initial values. The network parameters of the current network structure are then trained with the whole training set, and the accuracy of the network structure is calculated with a validation set, where the validation set consists of sample data and labels from the current application scenario, so as to verify the accuracy of the neural network in the current application scenario.
S260: adjust the parameters of the sampler according to the objective function value, and return to S210.
In this embodiment of the present disclosure, the accuracy and/or running time of the current network structure are calculated, and the objective function value is obtained from the accuracy and/or running time of the current network structure, so that network structures with higher accuracy or shorter running time are sampled. By initializing the network parameters of the current network structure and calculating the network parameters of the current network structure from the data set, not only the network structure but also suitable network parameters are obtained, yielding a final neural network that can be applied directly to subsequent data processing and improving the accuracy of the neural network.
Embodiment three
Fig. 3 a is a kind of flow chart of the structure determination methodology for neural network that the embodiment of the present disclosure three provides.This implementation Example advanced optimizes each optional embodiment of the various embodiments described above, optionally, " will go out current net by sampler samples Network structure " is optimized for " by network layer coding input predetermined into sampler, obtaining Exist Network Structure coding;According to Exist Network Structure coding, constructs Exist Network Structure ", provide the method for sampling of network structure.In conjunction with Fig. 3 a, this implementation The method that example provides specifically includes following operation:
S310: input predefined network-layer codes into the sampler to obtain a current network structure code.
A network unit comprises at least one network layer, such as a convolutional layer, a pooling layer and a connection layer. If the stacking number of the network unit is N, then N network units are connected in sequence, and the network layers contained in the N network units and their connection relationships are all identical. N sequentially connected network units constitute a module (block). Some network structures contain only one module, while others contain at least two modules, each module containing multiple sequentially connected network units; different modules may contain different numbers of network units and different network layers. In summary, the current network structure comprises: the network layers in the network units, the connection relationships of the network layers, and the stacking numbers of the network units.
This embodiment samples the above network structure through network-layer codes and the sampler. In one example, Fig. 3b shows predefined network-layer codes, including a 1×1 convolutional layer code, a 3×3 convolutional layer code, a 5×5 convolutional layer code, a pooling layer code, a connection layer code, and the stacking number N of the network unit (cell), where N is a preset value such as 3 or 5.
By inputting the predefined network-layer codes into the sampler, multiple interconnected network structure sub-codes are obtained, which together constitute the current network structure code. The current network structure code comprises: the codes of the network layers in the network units, the connection relationships of the network layers, and the stacking numbers of the network units.
As shown in Fig. 3c, the sampler comprises multiple cascaded long short-term memory (LSTM) networks; optionally, the parameters of each LSTM network (i.e. the parameters of the sampler) may be the same or different, and the output of each LSTM network is connected to an output layer. On this basis, first, the predefined network-layer codes are input into the first LSTM network, and each network structure sub-code is obtained from the output layer connected to each LSTM network. Optionally, the output layer is a softmax layer, used to select the largest network structure sub-code from the output. With reference to Fig. 3c, inputting the network-layer codes shown in Fig. 3b into the first LSTM network produces the result A_1 = [0.9 0.2 0.3 0.5 0.1 0]^T; the output layer connected to the first LSTM selects the largest network structure sub-code, 0.9, i.e. the 1×1 convolutional layer code. A_1 is then input into the second LSTM network, producing A_2 = [0 0.8 0.3 0.5 0.1 0]^T; the output layer connected to the second LSTM selects the largest network structure sub-code, 0.8, i.e. the 3×3 convolutional layer code. A_2 is then input into the third LSTM network, producing A_3 = [0 0.8 0.3 0.5 0.1 0.9]^T; the output layer connected to the third LSTM selects the largest network structure sub-code, 0.9, i.e. the stacking number, so the network unit formed by the preceding 1×1 convolutional layer code and 3×3 convolutional layer code is stacked N times, e.g. 2 times. Next, A_3 is input into the fourth LSTM network, producing A_4 = [0 0.8 0.3 0.5 0.1 0.1]^T; the output layer connected to the fourth LSTM selects the largest network structure sub-code, 0.8, i.e. the 3×3 convolutional layer code. Then A_4 is input into the fifth LSTM network, producing A_5 = [0 0.6 0.3 0.5 0.1 0.9]^T; the output layer connected to the fifth LSTM selects the largest network structure sub-code, 0.9, i.e. the stacking number, so the network unit formed by the sub-codes from the first code after the previous stacking number up to the last code before the current stacking number is stacked N times, e.g. 2 times. In this example, the current network unit contains only the 3×3 convolutional layer code, so the 3×3 convolutional layer code is stacked 2 times. Thus, in this example, the current network structure comprises two modules, the first module containing 2 network units and the second module containing 2 network units.
Then, as shown in Fig. 3c, the multiple network structure sub-codes are ordered according to the cascading order of the multiple LSTM networks to obtain the current network structure code. Each network structure sub-code comprises either the code of a network layer in a network unit or the stacking number of a network unit.
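The cascaded sampling of Fig. 3c can be illustrated with the following simplified sketch, in which the LSTM cells are replaced by random linear steps purely for brevity; the layer-code vocabulary and all shapes are hypothetical, and the point is only the chain A_1 → A_2 → ... in which each step's softmax output selects one network-structure sub-code:

import numpy as np

LAYER_CODES = ["conv1x1", "conv3x3", "conv5x5", "pool", "connect", "stack_N"]

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sample_structure_code(num_steps, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(LAYER_CODES)
    a = rng.normal(size=dim)                  # stands in for the predefined network-layer codes
    sub_codes = []
    for _ in range(num_steps):
        w = rng.normal(size=(dim, dim))       # stands in for one LSTM cell's parameters
        a = np.tanh(w @ a)                    # A_{k+1} computed from A_k
        probs = softmax(a)                    # output layer (softmax)
        sub_codes.append(LAYER_CODES[int(np.argmax(probs))])  # pick the largest sub-code
    return sub_codes

print(sample_structure_code(num_steps=5))     # e.g. a sequence of layer codes and "stack_N" markers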
S320: construct the current network structure according to the current network structure code.
In one optional implementation, the current network structure is constructed directly from the current network structure code: the current network structure code is replaced by the corresponding network layers, and the corresponding network layers are stacked N times, as shown in Fig. 3d.
In another optional implementation, in order to reduce the amount of data, an initial network structure is constructed from the current network structure code, and downsampling layers are inserted into the initial network structure to form the current network structure. Optionally, since convolutional layers are computationally expensive, a downsampling layer is inserted before a convolutional layer; for example, a downsampling layer with stride = 2 is inserted at the head of the first network unit of each module, so that the amount of data is gradually reduced while the accuracy of the network structure is maintained, as shown in Fig. 3e.
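As a hypothetical illustration of Fig. 3e (using PyTorch only as an example framework; the layer choices, channel counts and the decoding of codes into layers are assumptions, not the patent's implementation), a module could be assembled with a stride-2 downsampling layer at the head of its first network unit:

import torch.nn as nn

def build_module(layer_codes, in_channels, out_channels, stack_n=2):
    # downsampling layer inserted at the head of the module's first network unit (stride = 2)
    layers = [nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1)]
    for _ in range(stack_n):                          # stack the network unit N times
        for code in layer_codes:
            if code == "conv1x1":
                layers.append(nn.Conv2d(out_channels, out_channels, kernel_size=1))
            elif code == "conv3x3":
                layers.append(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1))
    return nn.Sequential(*layers)

# e.g. two modules as in the example above: the first built from the unit
# [conv1x1, conv3x3] stacked twice, the second from [conv3x3] stacked twice
net = nn.Sequential(build_module(["conv1x1", "conv3x3"], 3, 32),
                    build_module(["conv3x3"], 32, 64))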
S330: calculate the objective function value of the current network structure.
S340: judge whether the objective function value reaches the preset function value and/or whether the number of adjustments reaches the count threshold. If yes, jump to S350; if not, jump to S360.
S350: determine the current network structure, and end this operation.
S360: adjust the parameters of the sampler according to the objective function value. Return to S310.
In this embodiment of the present disclosure, predefined network-layer codes are input into the sampler to obtain a current network structure code, and the current network structure is constructed according to the current network structure code, which provides a sampling method for network structures. Moreover, this layer-by-layer sampling matches the multilayer structure of a neural network and further helps to sample good network structures.
Embodiment four
Fig. 4 is a schematic structural diagram of an apparatus for determining the structure of a neural network provided by Embodiment 4 of the present disclosure, comprising: a sampling module 41, a computing module 42, an adjusting module 43 and a return module 44.
The sampling module 41 is configured to sample a current network structure with a sampler;
the computing module 42 is configured to calculate the objective function value of the current network structure;
the adjusting module 43 is configured to adjust the parameters of the sampler according to the objective function value;
the return module 44 is configured to return to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold.
In this embodiment of the present disclosure, a current network structure is sampled with a sampler and its objective function value is calculated, so that the quality of the network structure is characterized by the objective function value. By adjusting the parameters of the sampler according to the objective function value and returning to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold, automatic search of network structures is realized; moreover, by repeatedly adjusting the parameters of the sampler, the sampler is able to sample higher-quality network structures, so that better results are obtained in subsequent use. In a concrete application scenario, this embodiment of the present disclosure is not limited to a single fixed network structure; instead, new and higher-quality network structures are continuously obtained through the objective function value, which makes it suitable for data processing in almost any scenario.
Optionally, when calculating the objective function value of the current network structure, the computing module 42 is specifically configured to: calculate the accuracy and/or running time of the current network structure; and obtain the objective function value according to the accuracy and/or running time of the current network structure.
Optionally, when calculating the accuracy and running time of the current network structure, the computing module 42 is specifically configured to: train the network parameters of the current network structure with a training set; calculate the accuracy of the network structure with a validation set; and run the current network structure on a terminal and obtain the running time of the current network structure.
Optionally, the apparatus further comprises a network parameter adjustment module configured to: before the accuracy of the current network structure is calculated with the validation set, calculate the loss value of the current network structure with the validation set; adjust the network parameters according to the loss value; and return to the loss calculation operation until the loss value is below a loss threshold.
Optionally, the apparatus further comprises a fixed-point conversion module configured to: before the current network structure is run on the terminal and its running time obtained, convert the network parameters of the current network structure and the input data to fixed-point format; and provide the network structure, the fixed-point network parameters and the input data to the terminal.
Optionally, when obtaining the objective function value according to the accuracy and/or running time of the current network structure, the computing module 42 is specifically configured to: calculate the objective function value Q(m) of the current network structure m according to the formula Q(m) = ACC(m) × [t(m)/T]^r, where ACC(m) is the accuracy of the current network structure m, t(m) is the running time of the current network structure m, T is a constant, and r is a preset exponent.
Optionally, when sampling a current network structure with the sampler, the sampling module 41 is specifically configured to: input predefined network-layer codes into the sampler to obtain a current network structure code; and construct the current network structure according to the current network structure code. The current network structure code comprises the codes of the network layers in the network units, the connection relationships of the network layers and the stacking numbers of the network units; the current network structure comprises the network layers in the network units, the connection relationships of the network layers and the stacking numbers of the network units.
Optionally, the sampler comprises multiple cascaded long short-term memory (LSTM) networks, and the output of each LSTM network is connected to an output layer. When inputting the predefined network-layer codes into the sampler to obtain the current network structure code, the sampling module 41 is specifically configured to: input the predefined network-layer codes into the first LSTM network, and obtain each network structure sub-code from the output layer connected to each LSTM network; and order the multiple network structure sub-codes according to the cascading order of the multiple LSTM networks to obtain the current network structure code. Each network structure sub-code comprises either the code of a network layer in a network unit or the stacking number of a network unit.
Optionally, when constructing the current network structure according to the current network structure code, the sampling module 41 is specifically configured to: construct an initial network structure according to the current network structure code; and insert downsampling layers into the initial network structure to form the current network structure.
Optionally, the apparatus further comprises a network parameter calculation module configured to: after the return to the operation of sampling a current network structure with the sampler, until the objective function value reaches the preset function value and/or the number of adjustments reaches the count threshold, initialize the network parameters of the current network structure and calculate the network parameters of the current network structure from a data set.
The apparatus for determining the structure of a neural network provided by this embodiment of the present disclosure can execute the method for determining the structure of a neural network provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing that method.
Embodiment five
Referring now to Fig. 5, a schematic structural diagram of an electronic device 500 suitable for implementing embodiments of the present disclosure is shown. The electronic device in embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and vehicle-mounted terminals (e.g. vehicle navigation terminals), fixed terminals such as digital TVs and desktop computers, and servers of various forms, such as stand-alone servers or server clusters. The electronic device shown in Fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 5, the electronic device 500 may include a processing device (e.g. a central processing unit, a graphics processor, etc.) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502 and the RAM 503 are connected to each other via a bus 504, and an input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices can be connected to the I/O interface 505: input devices 506 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output devices 507 such as a liquid crystal display (LCD), speakers and vibrators; storage devices 508 such as magnetic tape and hard disks; and a communication device 509. The communication device 509 allows the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 5 shows an electronic device 500 with various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods of the embodiments. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 509, installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to electric wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the processing device, cause the electronic device to: sample a current network structure with a sampler; calculate the objective function value of the current network structure; adjust the parameters of the sampler according to the objective function value; and return to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment or part of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations the functions marked in the blocks may occur in a different order from that marked in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a module does not in some cases constitute a limitation on the module itself; for example, the sampling module may also be described as "a module for sampling a current network structure".
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example technical solutions formed by replacing the above features with technical features of similar functions disclosed in (but not limited to) the present disclosure.

Claims (13)

1. A method for determining the structure of a neural network, characterized by comprising:
sampling a current network structure with a sampler;
calculating an objective function value of the current network structure;
adjusting parameters of the sampler according to the objective function value;
returning to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold.
2. The method according to claim 1, characterized in that calculating the objective function value of the current network structure comprises:
calculating an accuracy and/or running time of the current network structure;
obtaining the objective function value according to the accuracy and/or running time of the current network structure.
3. The method according to claim 2, characterized in that calculating the accuracy and running time of the current network structure comprises:
training network parameters of the current network structure with a training set;
calculating the accuracy of the network structure with a validation set;
running the current network structure on a terminal, and obtaining the running time of the current network structure.
4. The method according to claim 3, characterized by further comprising, before calculating the accuracy of the current network structure with the validation set:
calculating a loss value of the current network structure with the validation set;
adjusting the network parameters according to the loss value;
returning to the operation of calculating the loss value, until the loss value is below a loss threshold.
5. The method according to claim 3, characterized by further comprising, before running the current network structure on the terminal and obtaining the running time of the current network structure:
converting the network parameters of the current network structure and input data to fixed-point format;
providing the network structure, the fixed-point network parameters and the input data to the terminal.
6. The method according to claim 2, characterized in that obtaining the objective function value according to the accuracy and running time of the current network structure comprises:
calculating the objective function value Q(m) of the current network structure m according to the formula Q(m) = ACC(m) × [t(m)/T]^r;
wherein ACC(m) is the accuracy of the current network structure m, t(m) is the running time of the current network structure m, T is a constant, and r is a preset exponent.
7. The method according to claim 1, characterized in that sampling a current network structure with the sampler comprises:
inputting predefined network-layer codes into the sampler to obtain a current network structure code;
constructing the current network structure according to the current network structure code;
wherein the current network structure code comprises codes of network layers in network units, connection relationships of the network layers and stacking numbers of the network units, and the current network structure comprises the network layers in the network units, the connection relationships of the network layers and the stacking numbers of the network units.
8. The method according to claim 7, characterized in that the sampler comprises multiple cascaded long short-term memory (LSTM) networks, and an output of each LSTM network is connected to an output layer;
inputting the predefined network-layer codes into the sampler to obtain the current network structure code comprises:
inputting the predefined network-layer codes into the first LSTM network, and obtaining each network structure sub-code from the output layer connected to each LSTM network;
ordering the multiple network structure sub-codes according to the cascading order of the multiple LSTM networks to obtain the current network structure code;
wherein each network structure sub-code comprises the code of a network layer in a network unit or the stacking number of a network unit.
9. The method according to claim 7, characterized in that constructing the current network structure according to the current network structure code comprises:
constructing an initial network structure according to the current network structure code;
inserting a downsampling layer into the initial network structure to form the current network structure.
10. The method according to any one of claims 1-9, characterized by further comprising, after returning to the operation of sampling a current network structure with the sampler, until the objective function value reaches the preset function value and/or the number of adjustments reaches the count threshold:
initializing the network parameters of the current network structure;
calculating the network parameters of the current network structure from a data set.
11. An apparatus for determining the structure of a neural network, characterized by comprising:
a sampling module, configured to sample a current network structure with a sampler;
a computing module, configured to calculate an objective function value of the current network structure;
an adjusting module, configured to adjust parameters of the sampler according to the objective function value;
a return module, configured to return to the operation of sampling a current network structure with the sampler, until the objective function value reaches a preset function value and/or the number of adjustments reaches a count threshold.
12. An electronic device, characterized in that the electronic device comprises:
one or more processing devices;
a storage device, configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processing devices, the one or more processing devices implement the method for determining the structure of a neural network according to any one of claims 1-10.
13. A computer-readable medium storing a computer program, characterized in that the program, when executed by a processing device, implements the method for determining the structure of a neural network according to any one of claims 1-10.
CN201811494899.1A 2018-12-07 2018-12-07 Method, device and equipment for determining structure of neural network and readable medium Active CN109359727B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811494899.1A CN109359727B (en) 2018-12-07 2018-12-07 Method, device and equipment for determining structure of neural network and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811494899.1A CN109359727B (en) 2018-12-07 2018-12-07 Method, device and equipment for determining structure of neural network and readable medium

Publications (2)

Publication Number Publication Date
CN109359727A (en) 2019-02-19
CN109359727B (en) 2022-01-11

Family

ID=65331758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811494899.1A Active CN109359727B (en) 2018-12-07 2018-12-07 Method, device and equipment for determining structure of neural network and readable medium

Country Status (1)

Country Link
CN (1) CN109359727B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140067738A1 (en) * 2012-08-28 2014-03-06 International Business Machines Corporation Training Deep Neural Network Acoustic Models Using Distributed Hessian-Free Optimization
CN107111782A (en) * 2014-11-26 2017-08-29 卡里尔斯公司 Neural network structure and its method
CN105760933A (en) * 2016-02-18 2016-07-13 清华大学 Method and apparatus for fixed-pointing layer-wise variable precision in convolutional neural network
WO2017142397A1 (en) * 2016-02-19 2017-08-24 Scyfer B.V. Device and method for generating a group equivariant convolutional neural network
CN108009625A (en) * 2016-11-01 2018-05-08 北京深鉴科技有限公司 Method for trimming and device after artificial neural network fixed point
CN107480770A (en) * 2017-07-27 2017-12-15 中国科学院自动化研究所 The adjustable neutral net for quantifying bit wide quantifies the method and device with compression
CN108229647A (en) * 2017-08-18 2018-06-29 北京市商汤科技开发有限公司 The generation method and device of neural network structure, electronic equipment, storage medium
CN108228325A (en) * 2017-10-31 2018-06-29 深圳市商汤科技有限公司 Application management method and device, electronic equipment, computer storage media
CN107909583A (en) * 2017-11-08 2018-04-13 维沃移动通信有限公司 A kind of image processing method, device and terminal
CN108564165A (en) * 2018-03-13 2018-09-21 上海交通大学 The method and system of convolutional neural networks fixed point optimization
CN108921210A (en) * 2018-06-26 2018-11-30 南京信息工程大学 A kind of cloud classification method based on convolutional neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DARRYL D. LIN et al.: "Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks", arXiv *
TAO GONG et al.: "GPU-based parallel optimization of immune convolutional neural network and embedded system", Engineering Applications of Artificial Intelligence *
ZHAO ZHONG et al.: "BlockQNN: Efficient Block-wise Neural Network Architecture Generation", arXiv *
FENG GAN: "Comprehensive Research on the Hardware Architecture of Floating-Point Fourier Transform", China Master's Theses Full-text Database, Information Science and Technology *
CHEN HAOGUANG: "Research on Adaptive Control Design for Nonlinear Uncertain Systems Based on Extended Neural Networks", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109905880A (en) * 2019-03-22 2019-06-18 苏州浪潮智能科技有限公司 A kind of network partitioning method, system and electronic equipment and storage medium
CN110084172A (en) * 2019-04-23 2019-08-02 北京字节跳动网络技术有限公司 Character recognition method, device and electronic equipment
CN110084172B (en) * 2019-04-23 2022-07-29 北京字节跳动网络技术有限公司 Character recognition method and device and electronic equipment
CN112283889A (en) * 2020-10-10 2021-01-29 广东美的暖通设备有限公司 Method, device and equipment for controlling pre-starting time of air conditioner and storage medium
CN115080796A (en) * 2022-06-20 2022-09-20 北京沃东天骏信息技术有限公司 Network structure searching method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN109359727B (en) 2022-01-11

Similar Documents

Publication Publication Date Title
CN111860573B (en) Model training method, image category detection method and device and electronic equipment
CN109359727A (en) Structure determination methodology, device, equipment and the readable medium of neural network
CN110458107B (en) Method and device for image recognition
CN110956202B (en) Image training method, system, medium and intelligent device based on distributed learning
CN108830235A (en) Method and apparatus for generating information
CN108898086A (en) Method of video image processing and device, computer-readable medium and electronic equipment
CN108171191B (en) Method and apparatus for detecting face
CN108491816A (en) The method and apparatus for carrying out target following in video
JP2017536635A (en) Picture scene determination method, apparatus and server
CN111414953B (en) Point cloud classification method and device
JP2023545423A (en) Point cloud segmentation method, device, equipment and storage medium
CN111340131A (en) Image annotation method and device, readable medium and electronic equipment
CN109308490A (en) Method and apparatus for generating information
CN109360028A (en) Method and apparatus for pushed information
CN110674349B (en) Video POI (Point of interest) identification method and device and electronic equipment
CN108062416B (en) Method and apparatus for generating label on map
CN110019939A (en) Video temperature prediction technique, device, terminal device and medium
CN111222557A (en) Image classification method and device, storage medium and electronic equipment
CN109902190A (en) Image encrypting algorithm optimization method, search method, device, system and medium
CN109583367A (en) Image text row detection method and device, storage medium and electronic equipment
CN112149699A (en) Method and device for generating model and method and device for recognizing image
CN110298850A (en) The dividing method and device of eye fundus image
CN110069997B (en) Scene classification method and device and electronic equipment
CN109978058B (en) Method, device, terminal and storage medium for determining image classification
CN111126358A (en) Face detection method, face detection device, storage medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant