CN111027668B - Neural network self-recommendation method based on greedy algorithm - Google Patents

Neural network self-recommendation method based on greedy algorithm

Info

Publication number
CN111027668B
CN111027668B (application CN201911237773.0A)
Authority
CN
China
Prior art keywords
network model
neural network
detection precision
exploration
test data
Prior art date
Legal status
Active
Application number
CN201911237773.0A
Other languages
Chinese (zh)
Other versions
CN111027668A (en)
Inventor
常一志
彭图胜
张漠松
Current Assignee
Niu Niu Tu Technology Co ltd
Original Assignee
Niu Niu Tu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Niu Niu Tu Technology Co ltd filed Critical Niu Niu Tu Technology Co ltd
Priority to CN201911237773.0A
Publication of CN111027668A
Application granted
Publication of CN111027668B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks


Abstract

The invention discloses a neural network self-recommendation method based on a greedy algorithm, comprising three stages of intelligent network selection. In the first stage, D/3 groups of test data are taken and network models are selected with an exploration probability Pe of 0.45; in the second stage, network models are selected with Pe of 0.1 and then 0.05; in the third stage, with Pe of 0.01 and then 0.005. Conditions for stopping network selection are set, and the network model whose detection precision ranking, Mean_30 fluctuation rate and detection speed all meet the requirements is taken as the optimal network model. With only about D/2 iterative operations, the invention obtains the optimal network model together with an accurate ranking of all network models in the collection under the new detection environment, greatly reducing computational complexity, improving evaluation efficiency, shortening the time needed to update the detection system when the detection environment changes, and providing an intelligent, efficient tool for the upgrading and networking of distributed network systems in industrial automation.

Description

Neural network self-recommendation method based on greedy algorithm
Technical Field
The invention relates to the technical field of networks, in particular to a neural network self-recommendation method based on a greedy algorithm.
Background
In industrial and commercial application scenarios, different network structures are built using deep-learning positioning and recognition methods for bar codes, two-dimensional codes or OCR (optical character recognition), according to the use scenario, the resolution of the captured pictures, their bit depth (8 or 24), and so on. For example, for an application with few categories to identify, a basic neural network model with a shallow feature-extraction layer and few filters (convolution kernels) is adopted, preventing overfitting and improving recognition efficiency and real-time performance; for an application where the camera is far from the detection target, or where high camera resolution makes the target small in the picture, a model with a high-resolution input is built to reduce the miss rate and improve detection precision/recall.
In addition, when the working platform and environment change (for example, the camera distance is adjusted or cameras with different resolutions are substituted), the network system is generally upgraded in one of two ways to ensure that the bar code and OCR recognition system keeps working normally: 1. build one large, all-purpose neural network suitable for every condition; such a network has an obvious defect, namely low detection and recognition efficiency due to heavy redundancy in the network input, layer depth and filters (convolution kernels), and cannot meet industrial-grade real-time detection requirements at all; 2. reselect the most appropriate neural network recognition system according to the environment change.
In a traditional upgrading system, every candidate network must traverse all the test data to obtain its performance parameters (such as detection precision and detection speed), and only after comparing all the results can the optimal network be selected. Assuming there are N candidate networks in the network pool and the test data set used to verify the networks in the new environment has size D, the amount of computation required by this network selection process is N × D × C_f, where C_f is the computation needed for a network to detect one picture. For example, with 5 models verified on 15000 groups of test data, the traditional method needs 5 × 15000 iterative operations: the iterative workload is large, network selection is inefficient, and the process is time-consuming.
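As a quick sanity check of the cost figures above, the iteration counts can be worked out directly (a minimal sketch; the ~D/2 figure for the greedy method is the claim made later in this description, and C_f cancels out when comparing counts):

```python
# Cost comparison using the example figures (N = 5 networks, D = 15000 samples).
# The per-picture cost C_f multiplies both methods equally, so iteration
# counts alone determine the speed-up ratio.
N = 5
D = 15000

traditional_iterations = N * D   # every network detects every sample
greedy_iterations = D // 2       # roughly D/2 iterations, per the method's claim

print(traditional_iterations)                       # 75000
print(greedy_iterations)                            # 7500
print(traditional_iterations // greedy_iterations)  # 10, i.e. ~10x faster
```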
Disclosure of Invention
The invention provides a neural network self-recommendation method based on a greedy algorithm, aiming at selecting a network model by setting different exploration probabilities, attenuating the exploration probability according to a network selection result, simplifying a network selection process and improving the intelligent network selection speed.
In order to achieve the purpose, the neural network self-recommendation method based on the greedy algorithm provided by the invention comprises the following steps:
step S1: preparing a test data set and a pre-trained neural network model set, wherein the sample capacity of the test data set is D groups of test data, the number of network models in the neural network model set is N, and D and N are both natural numbers.
Step S2: in the first stage of network selection, D/3 groups of test data are taken from the test data set and the exploration probability Pe is set to a. Each group of test data is iteratively explored with the neural network model collection; after exploration, the average detection precision of each neural network model is counted and the network models are ranked, forming a first ordered neural network model set, where 1 > a > 0.
Step S3: in the second stage of network selection, the exploration probability Pe is first set to b and the remaining 2D/3 groups of test data are iteratively explored with the first ordered neural network model set; after exploration, the average detection precision of each neural network model is counted and the models are ranked. When the detection precision of the best explored network model exceeds 90%, or the number of iterative operations reaches D, Pe is decayed to 50% of its value and the remaining 2D/3 groups of test data are iteratively explored again with the reordered set; after exploration, the average detection precision of each neural network model is counted and the models are ranked again, forming a second ordered neural network model set; where 1 > a > b > 0.
Step S4: in the third stage of network selection, when the average detection precision of every neural network model in the second ordered set fluctuates by less than 0.1% over 200 iterations, Pe is successively decayed to 20% and then to 50% of its value, and the test data in the test data set are iteratively explored twice with the second ordered set. After each iterative exploration, the average detection precision of each neural network model is counted, the models are ranked, and the reordered set is used for the next exploration, finally forming a third ordered neural network model set.
Step S5: and setting a network selection stopping condition, and selecting an optimal neural network model.
Each iterative exploration operation in the first, second and third network selection stages comprises:
selecting a neural network model: the voting module of the intelligent network selection generates a random number v in (0, 1); if v falls in the interval (0, Pe], one network model is selected at random from the neural network model set, and if v falls in the interval (Pe, 1), the best-performing network model so far is selected;
and then calculating the current average detection precision of the network model according to the previous selected times and the previous average detection precision of the network model.
Preferably, the average detection precision of the network model is denoted A_M_R_n; then:
A_M_R_n = ((S_C - 1) × A_M_R_(n-1) + A_R) / S_C
wherein:
S_C is the cumulative number of times the selected network has been chosen;
A_M_R_(n-1) is the network model's average detection precision at its previous selection;
A_R is the precision return: A_R = 1 if the network model is selected, and A_R = 0 if it is not.
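The running-average update above can be sketched as follows (illustrative Python; the function and variable names are not from the patent, and the reward argument is read as the 0/1 precision return A_R defined above):

```python
def update_average_precision(prev_avg: float, reward: float, count: int) -> float:
    """Incremental mean: A_M_R_n = ((S_C - 1) * A_M_R_(n-1) + A_R) / S_C,
    with count = S_C (cumulative selections) and reward = A_R (0 or 1)."""
    return ((count - 1) * prev_avg + reward) / count

# Feeding in returns 1, 1, 0, 1 gives a running precision of 3/4.
avg = 0.0
for i, reward in enumerate([1, 1, 0, 1], start=1):
    avg = update_average_precision(avg, reward, i)
print(avg)  # 0.75
```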
Preferably, a in the step S2 is 0.45; b in the step S3 and the step S4 is 0.1.
Preferably, step S5 sets the following conditions for stopping network selection:
(1) The average detection precision of the neural network model with the highest rank meets the requirement;
(2) If the average detection precision of several candidate neural network models meets the requirement, the precision ranking order of those candidates must remain unchanged for at least 50 iterations;
(3) A neural network model satisfying condition (1) or (2) must further satisfy: over the last 150 iterations, its 5 successive Mean_30 values fluctuate by less than 0.1%, where Mean_30 is the mean of the model's average detection precision over every 30 iterations;
if conditions (1), (2) and (3) are all met and only one candidate network meets the detection precision requirement, that candidate network is taken as the optimal neural network model;
if conditions (1), (2) and (3) are met but several candidate networks meet the precision requirement, the candidate network model with the highest detection speed is selected as the optimal neural network model;
if any of conditions (1), (2) or (3) cannot be met, intelligent network selection continues until the number of iterations reaches N × D, at which point it ends with no candidate network model meeting the requirements.
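Condition (3), five successive Mean_30 values fluctuating by less than 0.1% over the last 150 iterations, can be sketched like this (a hypothetical helper; the name, signature and list representation are illustrative, not from the patent):

```python
from statistics import mean

def mean30_stable(precision_history, window=30, blocks=5, tol=0.001):
    """Stopping condition (3): split the last blocks*window (= 150) recorded
    average-precision values into `blocks` windows of `window` iterations,
    compute each window's mean (Mean_30), and require the spread of those
    means to be below `tol` (0.1%)."""
    needed = window * blocks
    if len(precision_history) < needed:
        return False  # not enough history yet
    recent = precision_history[-needed:]
    means = [mean(recent[i * window:(i + 1) * window]) for i in range(blocks)]
    return max(means) - min(means) < tol
```

A flat history such as `[0.95] * 150` satisfies the condition, while a steadily drifting one does not.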
Preferably, the detection precision requirement is an average detection precision of the network model of at least 95%.
Preferably, the number of network models N is 5 and the sample capacity of the test data set D is 15000.
Compared with the prior art, the invention has the beneficial effects that:
and calculating the current average detection precision of the network model according to the previous selected times and the previous average detection precision of the network model, and ranking the network model in real time. Meanwhile, in the exploration process, the exploration probability attenuation condition and the attenuation rate are set according to the average detection precision change condition, so that the network selection process has real-time performance.
The first stage of network selection sets a larger exploration probability to accelerate convergence and reduce the number of iterations; the second and third stages gradually decay the exploration probability, preferring the network model with the best current performance while still selecting potential network models with a small probability, so that no model is missed owing to accidental factors.
The method obtains the optimal network model with only about D/2 iterative operations, and the resulting average detection precisions are consistent with those obtained by the traditional method of detecting every test datum in the whole test data set, as is the ranking trajectory of each network model. The method greatly reduces computational complexity, improves evaluation efficiency, shortens the time needed to update the detection system when the detection environment changes, and provides an intelligent, efficient tool for the upgrading and networking of distributed network systems in industrial automation.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a neural network self-referral method of the present invention;
FIG. 2 is a diagram of the result of the average detection accuracy test of the network model according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The invention provides a neural network self-recommendation method based on a greedy algorithm, which comprises the following steps:
preparing a test data set and a pre-trained neural network model set, wherein the sample capacity of the test data set is D groups of test data, the number of network models in the neural network model set is N, and D and N are both natural numbers. The neural network model set comprises industrial detection one-dimensional codes, two-dimensional codes and ASCII OCR, and a plurality of neural network models which are suitable for a plurality of detection environment configurations. And randomly disordering the test data in the test data set, and then starting intelligent network selection.
The intelligent network selection process selects network models in an exploratory manner: an exploration probability Pe is set each time, a decay condition for Pe is set according to the current detection results so that the exploration probability decays gradually, and the current average detection precision of a network model is computed from its number of previous selections and its previous average detection precision.
The average detection precision of the network model is denoted A_M_R_n; then:
A_M_R_n = ((S_C - 1) × A_M_R_(n-1) + A_R) / S_C
wherein:
S_C is the cumulative number of times the selected network has been chosen;
A_M_R_(n-1) is the network model's average detection precision at its previous selection;
A_R is the precision return: A_R = 1 if the network model is selected, and A_R = 0 if it is not.
Specifically, the intelligent network selection comprises the following three stages.
In the first stage of network selection, D/3 groups of test data are taken from the test data set and the exploration probability Pe is set to a. Each group of test data is iteratively explored with the neural network model collection; after exploration, the average detection precision of each neural network model is counted and the network models are ranked, forming a first ordered neural network model set, where 1 > a > 0.
Specifically, the exploration probability Pe is set to 0.45 and each group of test data is iteratively explored with the neural network model collection. The voting module of the intelligent network selection generates a random number v1 in (0, 1). If v1 falls in the interval (0, 0.45], one network model is selected at random from the collection (45% probability); if v1 falls in (0.45, 1), the best-performing network model so far is selected (55% probability). The current average detection precision of the selected model is then computed from its number of previous selections and its previous average detection precision, its selection count is updated and recorded, and a first reordered neural network model set is obtained. The larger exploration probability in the first stage accelerates convergence and reduces the number of iterations.
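The voting step described above is a classic epsilon-greedy choice; a minimal sketch follows (the model names and the dictionary representation are illustrative assumptions, not from the patent):

```python
import random

def epsilon_greedy_pick(avg_precision, pe):
    """One voting step: draw v in (0, 1); explore a random model if v <= Pe,
    otherwise exploit the model with the best average precision so far.
    `avg_precision` maps model name -> current average detection precision."""
    v = random.random()
    if v < pe:  # v falls in (0, Pe]: explore
        return random.choice(list(avg_precision))
    return max(avg_precision, key=avg_precision.get)  # (Pe, 1): exploit

models = {"model1": 0.81, "model2": 0.98, "model3": 0.65}
print(epsilon_greedy_pick(models, pe=0.0))  # Pe = 0 always exploits: model2
```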
In the second stage of network selection, the exploration probability Pe is first set to b and the remaining 2D/3 groups of test data are iteratively explored with the first ordered neural network model set; after exploration, the average detection precision of each neural network model is counted and the models are ranked. When the detection precision of the best explored network model exceeds 90%, or the number of iterative operations reaches D, Pe is decayed to 50% of its value and the remaining 2D/3 groups of test data are iteratively explored again with the reordered set; after exploration, the average detection precision of each model is counted and the models are ranked again, forming a second ordered neural network model set; where 1 > a > b > 0.
Specifically, the second stage of network selection comprises the following steps:
Step S31: the exploration probability Pe is set to 0.1 and the remaining 2D/3 groups of test data are iteratively explored with the first ordered neural network model set. The voting module of the intelligent network selection generates a random number v2 in (0, 1). If v2 falls in the interval (0, 0.10], one network model is selected at random from the first ordered set (10% probability); if v2 falls in (0.10, 1), the best-performing network model so far is selected (90% probability). The current average detection precision of the selected model is computed from its number of previous selections and its previous average detection precision, its selection count is updated and recorded, and all network models in the first ordered set are reordered.
Step S32: on the basis of the selection in step S31, when the average detection precision of the best network model exceeds 90%, or the number of iterative operations reaches D, Pe is decayed to 50% of its value, giving Pe = 0.05. For each group of test data, the voting module continues to generate a random number v3 in (0, 1). If v3 falls in the interval (0, 0.05], one network model is selected at random from the set ordered in step S31 (5% probability); if v3 falls in (0.05, 1), the best-performing network model so far is selected (95% probability). The current average detection precision is computed from the previous selection count and previous average, the selection count is updated and recorded, and a reordered second ordered neural network model set is obtained.
In the third stage of network selection, on the basis of the selection in step S32, when the average detection precision of every neural network model in the second ordered set fluctuates by less than 0.1% over 200 iterations, Pe is successively decayed to 20% and then to 50% of its value, and the test data in the test data set are iteratively explored twice with the second ordered set. After each iterative exploration, the average detection precision of each neural network model is counted, the models are ranked, and the reordered set is used for the next exploration, finally forming a third ordered neural network model set.
Specifically, the third stage of network selection comprises:
step S41, on the basis that the search probability Pe in step S32 is 0.05, when the average detection precision of all the neural network models in the second ordered neural network model set fluctuates in 200 iterations<Setting the exploration probability Pe to attenuate by 20% when the exploration probability Pe is 0.1%, iteratively exploring the test data in the test data set by adopting the second sequencing neural network model set, and generating a random number v between one (0, 1) for each group of test data by a voting module of the intelligent network selection 4 If v is 4 Falls in the interval (0,0.01)]Randomly selecting a network model in the second ordered set of neural network models with a probability of 1%; if v is 4 And if the network model falls into the interval (0.01, 1), selecting the network model with the optimal performance so far with the probability of 99%, calculating the current average detection precision of the network model according to the previous selected times of the network model and the last average detection precision, updating and recording the selected times of the network model, and reordering all the network models in the second ordered neural network model set.
Step S42: on the basis of Pe = 0.01 in step S41, when the average detection precision of every neural network model in the set reordered in step S41 fluctuates by less than 0.1% over 200 iterations, Pe is further decayed to 50% of its value, giving Pe = 0.005, and the test data in the test data set are iteratively explored with the set reordered in step S41. For each group of test data, the voting module generates a random number v5 in (0, 1). If v5 falls in the interval (0, 0.005], one network model is selected at random from the set reordered in step S41 (0.5% probability); if v5 falls in (0.005, 1), the best-performing network model so far is selected (99.5% probability). The current average detection precision is computed from the previous selection count and previous average, the selection count is updated and recorded, and a third reordered neural network model set is obtained.
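The decay schedule traced through steps S31-S42 can be summarized in a short sketch (reading "decayed to X%" as multiplication by X/100, which reproduces the Pe values 0.1, 0.05, 0.01 and 0.005 stated above; the helper name is illustrative):

```python
def decayed(pe, factor):
    """Decay the exploration probability to `factor` of its current value."""
    return pe * factor

pe = 0.1                # stage two starts at Pe = 0.1 (step S31)
pe = decayed(pe, 0.5)   # step S32: decayed to 50% -> 0.05
pe = decayed(pe, 0.2)   # step S41: decayed to 20% -> 0.01
pe = decayed(pe, 0.5)   # step S42: decayed to 50% -> 0.005
print(round(pe, 6))     # 0.005
```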
A condition for stopping network selection is set, and the performance of all network models, i.e. all network models in the third ordered neural network model set, is examined; the network model whose detection precision ranking, Mean_30 fluctuation rate and detection speed all meet the requirements is taken as the optimal network model.
Wherein, the conditions for stopping network selection are as follows:
(1) The average detection precision of the network model with the highest ranking meets the detection precision requirement;
(2) If the average detection precision of several candidate network models meets the detection precision requirement, the precision ranking order of those candidates must remain unchanged for at least 50 iterations;
(3) A network model satisfying condition (1) or (2) must further satisfy: over the last 150 iterations, its 5 successive Mean_30 values fluctuate by less than 0.1%, where Mean_30 is the mean of the model's average detection precision over every 30 iterations.
If conditions (1), (2) and (3) are all met and only one candidate network meets the detection precision requirement, that candidate network is taken as the optimal network model;
if conditions (1), (2) and (3) are met but several candidate networks meet the precision requirement, the candidate network model with the highest detection speed is selected as the optimal network model;
if any of conditions (1), (2) or (3) cannot be met, intelligent network selection continues until the number of iterations reaches N × D, at which point it ends with no candidate network model meeting the requirements.
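The final selection among ranked candidates can be sketched as follows (a minimal illustration: the tuple layout, speed units and helper name are assumptions; the 95% threshold follows the preferred embodiment):

```python
def pick_optimal(candidates, precision_req=0.95):
    """Keep candidates whose average detection precision meets the requirement;
    if several qualify, the fastest one wins; None means selection failed.
    Each candidate is (name, avg_precision, detection_speed)."""
    qualified = [c for c in candidates if c[1] >= precision_req]
    if not qualified:
        return None
    return max(qualified, key=lambda c: c[2])[0]

nets = [("model1", 0.81, 120), ("model2", 0.98, 90), ("model4", 0.92, 150)]
print(pick_optimal(nets))  # only model2 meets the 95% requirement: model2
```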
And finally, the system automatically starts and loads the optimal network model, initializes the network and runs into a working state.
In this embodiment, the number of selected network models N is 5 and the sample capacity of the test data set D is 15000. FIG. 2 shows how the average detection precision of the 5 network models changes over the whole network selection process, with the number of iterations on the abscissa and the average detection precision on the ordinate.
In the traditional method, all 15000 groups of test data in the whole test data set are detected by every network, i.e. 5 × 15000 = 75000 iterative operations in total. The intelligent network selection system of the invention obtains the average-detection-precision trajectories of the 5 network models with only about D/2 iterative operations. In the later stage of network selection, the average detection precision and ranking of the 5 models stabilize; when network selection stops, the average detection precisions of models 1-5 are 0.81, 0.98, 0.65, 0.92 and 0.40 respectively, consistent with the values obtained by the traditional method of detecting all the test data. With the detection precision requirement set at an average detection precision of at least 95%, only model 2 qualifies, and it ranks first throughout the iterative exploration, so model 2 is selected as the optimal network model.
The intelligent network selection method consumes only about 10% of the time of the traditional method, making network selection roughly 10 times faster. The method greatly reduces computational complexity, improves evaluation efficiency, shortens the time needed to update the detection system when the detection environment changes, provides an intelligent network selection expert system requiring no human intervention, and offers an intelligent, efficient tool for the upgrading and networking of distributed network systems in industrial automation.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (7)

1. A neural network self-recommendation method based on a greedy algorithm is characterized by comprising the following steps:
step S1: preparing a test data set and a pre-trained neural network model set, wherein the sample capacity of the test data set is D groups of test data, the number of network models in the neural network model set is N, and D and N are both natural numbers;
step S2: a first stage of network selection, in which D/3 groups of test data are taken from the test data set, the exploration probability Pe is set to a, and each group of test data is iteratively explored with the neural network model collection; after the iterative exploration, the average detection precision of each neural network model is counted and the network models are ranked, forming a first ordered neural network model set, where 1 > a > 0;
step S3: a second stage of network selection, in which the exploration probability Pe is first set to b and the remaining 2D/3 groups of test data are iteratively explored with the first ordered neural network model set; after exploration, the average detection precision of each neural network model is counted and the models are ranked; when the detection precision of the best explored network model exceeds 90%, or the number of iterative operations reaches D, Pe is decayed to 50% of its value and the unused test data among the remaining 2D/3 groups are iteratively explored again with the reordered set; after exploration, the average detection precision of each neural network model is counted and the models are ranked again, forming a second ordered neural network model set; where 1 > a > b > 0;
step S4: a third stage of network selection, in which, when the average detection precision of every neural network model in the second ordered set fluctuates by less than 0.1% over 200 iterations, Pe is successively decayed to 20% and then to 50% of its value, and the test data in the test data set are iteratively explored twice with the second ordered set; after each iterative exploration, the average detection precision of each neural network model is counted, the models are ranked, and the reordered set is used for the next exploration, finally forming a third ordered neural network model set;
step S5: set a network-selection stopping condition and select the optimal neural network model.
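As an illustrative sketch only (not part of the claims), the exploration-probability schedule implied by steps S2–S4 can be written out explicitly. The values a = 0.45 and b = 0.1 are taken from claim 4, and "attenuation by 50%/20%" is read here as multiplying Pe by 0.5 and 0.8 respectively — both readings are assumptions:

```python
def pe_schedule(a=0.45, b=0.1):
    """Hypothetical Pe schedule across the three network-selection stages.

    Assumes "attenuation by X%" means multiplying Pe by (1 - X/100).
    """
    pes = [a]            # stage 1 (step S2): Pe = a
    pe = b
    pes.append(pe)       # stage 2 (step S3) starts with Pe = b
    pe *= 0.5            # 50% attenuation once best precision > 90% or D iterations
    pes.append(pe)
    pe *= 0.8            # stage 3 (step S4): first a 20% attenuation
    pes.append(pe)
    pe *= 0.5            # then a further 50% attenuation
    pes.append(pe)
    return pes
```

Under these assumptions the schedule decays monotonically, shifting the procedure from exploration of the full model set toward exploitation of the best-ranked models.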
2. The greedy algorithm-based neural network self-recommendation method of claim 1, wherein each iterative exploration operation in the first, second, and third stages of network selection comprises:
selecting a neural network model: the voting module of the intelligent network selection generates a random number v in (0, 1); if v falls in the interval (0, Pe), a network model is selected at random from the neural network model set, and if v falls in the interval (Pe, 1), the best-performing network model so far is selected;
then calculating the current average detection precision of the selected network model from its previous selection count and its previous average detection precision.
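The selection step above is a standard ε-greedy choice. A minimal sketch, with the function name and arguments chosen for illustration (they do not appear in the patent):

```python
import random

def select_model(models, pe, best_index):
    """Epsilon-greedy choice: with probability pe pick a random model
    (explore), otherwise pick the best-performing model so far (exploit)."""
    v = random.random()              # random number v in (0, 1)
    if v < pe:                       # v falls in the interval (0, Pe)
        return random.randrange(len(models))
    return best_index                # v falls in the interval (Pe, 1)
```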
3. The greedy algorithm-based neural network self-recommendation method of claim 2, wherein the average detection precision of the network model is denoted A_M_R_n, and:

A_M_R_n = ((S_C − 1) · A_M_R_{n−1} + A_R) / S_C

wherein:
S_C is the accumulated number of times the selected network has been chosen;
A_M_R_{n−1} is the average detection precision of the selected network model at its previous selection;
A_R is the precision return: A_R = 1 if the network model is selected, and A_R = 0 if it is not.
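The update in claim 3 is the standard incremental-mean formula: the previous average is reweighted by the old count and the new precision return is folded in. A one-line sketch (argument names are illustrative):

```python
def update_avg_precision(prev_avg, s_c, a_r):
    """Incremental mean: A_M_R_n = ((S_C - 1) * A_M_R_{n-1} + A_R) / S_C,
    where s_c is the cumulative selection count and a_r the precision return."""
    return ((s_c - 1) * prev_avg + a_r) / s_c
```

After returns of 1, 0, 1 the running average is 2/3, matching the plain arithmetic mean of the three returns.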
4. The greedy algorithm-based neural network self-recommendation method according to claim 1, wherein a in the step S2 is 0.45; b in the step S3 and the step S4 is 0.1.
5. The greedy algorithm-based neural network self-recommendation method according to claim 1, wherein step S5 specifically comprises setting the conditions for stopping network selection as follows:
(1) The average detection precision of the neural network model with the highest ranking meets the requirement;
(2) If the average detection precision of several candidate neural network models meets the requirement, the precision ranking order of these candidates must persist for at least 50 iterations;
(3) A neural network model satisfying condition (1) or (2) must also satisfy: over the last 150 iterations, the fluctuation among its 5 values of Mean_30 is less than 0.1%, wherein Mean_30 is the mean of the model's average detection precision taken over every 30 iterations;
if the requirements of the conditions (1), (2) and (3) are met and only one candidate network meets the detection precision requirement, taking the candidate network as an optimal neural network model;
if the requirements of the conditions (1), (2) and (3) are met, but a plurality of candidate networks meet the precision requirement, selecting the candidate network model with the highest detection speed as the optimal neural network model;
if any of conditions (1), (2), or (3) cannot be met, intelligent network selection continues; when the number of iterations reaches N×D, the intelligent network selection ends with no candidate network model meeting the requirements.
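Condition (3) — stability of five successive 30-iteration means over the last 150 iterations — can be sketched as follows. The function name and defaults are illustrative, and "fluctuation" is read here as the max-minus-min spread of the five Mean_30 values, which is an assumption:

```python
def means_stable(precision_history, window=30, groups=5, tol=0.001):
    """Check condition (3): the spread of the last `groups` per-window means
    of a model's average detection precision stays below `tol` (0.1%)."""
    needed = groups * window                     # 150 iterations
    if len(precision_history) < needed:
        return False
    recent = precision_history[-needed:]
    means = [sum(recent[i * window:(i + 1) * window]) / window
             for i in range(groups)]             # the 5 Mean_30 values
    return max(means) - min(means) < tol
```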
6. The greedy algorithm based neural network self-recommendation method according to claim 5, wherein an average detection precision of the network model of greater than or equal to 95% is required to meet the detection precision requirement.
7. The greedy algorithm-based neural network self-recommendation method of claim 1, wherein the number of network models N is 5 and the number of test data sets D is 15000.
CN201911237773.0A 2019-12-05 2019-12-05 Neural network self-recommendation method based on greedy algorithm Active CN111027668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911237773.0A CN111027668B (en) 2019-12-05 2019-12-05 Neural network self-recommendation method based on greedy algorithm

Publications (2)

Publication Number Publication Date
CN111027668A CN111027668A (en) 2020-04-17
CN111027668B true CN111027668B (en) 2023-04-07

Family

ID=70208014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911237773.0A Active CN111027668B (en) 2019-12-05 2019-12-05 Neural network self-recommendation method based on greedy algorithm

Country Status (1)

Country Link
CN (1) CN111027668B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201717651D0 (en) * 2017-10-26 2017-12-13 Gb Gas Holdings Ltd Determining operating state from complex sensor data
WO2018222204A1 (en) * 2017-06-02 2018-12-06 Google Llc Systems and methods for black-box optimization
CN109120610A (en) * 2018-08-03 2019-01-01 上海海事大学 A kind of fusion improves the intrusion detection method of intelligent ant colony algorithm and BP neural network
EP3573068A1 (en) * 2018-05-24 2019-11-27 Siemens Healthcare GmbH System and method for an automated clinical decision support system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10572800B2 (en) * 2016-02-05 2020-02-25 Nec Corporation Accelerating deep neural network training with inconsistent stochastic gradient descent

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on neural network structure optimization based on a high-dimensional particle swarm algorithm; Huang Yu; Modern Electronics Technique; 2017-02-01 (No. 03); full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant