CN113139582B - Image recognition method, system and storage medium based on artificial bee colony - Google Patents

Image recognition method, system and storage medium based on artificial bee colony

Info

Publication number
CN113139582B
Authority
CN
China
Prior art keywords
deep learning
bee colony
learning model
model
artificial bee
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110323612.4A
Other languages
Chinese (zh)
Other versions
CN113139582A (en)
Inventor
刘廉如
王永斌
张忠平
肖益珊
季文翀
丁雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Yitong Lianyun Intelligent Information Co ltd
Original Assignee
Guangdong Yitong Lianyun Intelligent Information Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Yitong Lianyun Intelligent Information Co ltd filed Critical Guangdong Yitong Lianyun Intelligent Information Co ltd
Priority to CN202110323612.4A priority Critical patent/CN113139582B/en
Publication of CN113139582A publication Critical patent/CN113139582A/en
Application granted granted Critical
Publication of CN113139582B publication Critical patent/CN113139582B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image recognition method, system and storage medium based on an artificial bee colony. The method comprises the following steps: acquiring a plurality of image files carrying labels to form training samples, training a deep learning model on the training samples, and compressing the deep learning model through the artificial bee colony; acquiring an image file to be recognized, and classifying and recognizing it with the compressed deep learning model. The step of compressing the deep learning model through the artificial bee colony comprises: reducing the channel combinations of the deep learning model, and obtaining the pruning structure of the channel combinations through artificial bee colony search. The method effectively handles the intractably large number of pruning-structure combinations in a deep network, converts the search for the optimal pruning structure into an optimization problem that is solved automatically by the artificial bee colony algorithm, reduces manual intervention, effectively improves the performance of the image recognition model, and can be widely applied in the technical field of deep learning.

Description

Image recognition method, system and storage medium based on artificial bee colony
Technical Field
The invention relates to the technical field of deep learning, in particular to an image recognition method, an image recognition system and a storage medium based on artificial bee colony.
Background
Deep learning has solved many challenging problems, and its results are widely used in computer vision, speech recognition, natural language processing and other fields. In particular, deep-learning-based image recognition technology has great application prospects and demand on the end devices of edge computing systems. However, a conventional deep learning model usually contains a large amount of parameter redundancy, so it is prone to overfitting and it is difficult to guarantee an optimal result during training.
Disclosure of Invention
In view of the above, in order to at least partially solve one of the above technical problems, embodiments of the present application provide an image recognition method based on an artificial bee colony, which can alleviate the overfitting problem in the training of a deep learning model; the application also provides a corresponding system for implementing the method and a computer-readable storage medium.
In a first aspect, the present invention provides an image recognition method based on artificial bee colony, including the steps of: acquiring a plurality of image files carrying labels to form a training sample, training according to the training sample to obtain a deep learning model, and compressing the deep learning model by using artificial bee colony;
Acquiring an image file to be identified, and classifying and identifying the image file to be identified according to the compressed deep learning model;
the step of compressing the deep learning model by artificial bee colony comprises:
and reducing the channel combination of the deep learning model, and searching through the artificial bee colony to obtain a pruning structure of the channel combination.
In a possible embodiment of the present application, the step of training to obtain a deep learning model according to the training sample includes:
determining the joint weight and the activation function of a node, and determining the deep learning model according to the joint weight and the activation function;
And determining a pruning model and the number of channels of the pruning model according to the network layer of the deep learning model.
In a possible embodiment of the present application, the step of training to obtain a deep learning model according to the training sample includes:
and determining an error according to the label and the deep learning model, and correcting the connection weight according to the error.
In a possible embodiment of the present application, the step of shrinking the channel combinations of the deep learning model to obtain the pruning structure of the channel combinations through the artificial bee colony search includes:
Generating first structure candidates for the structure of the pruning model by employing bees in the artificial bee colony;
And replacing the pruning model according to the first structure candidate.
In a possible embodiment of the present application, the step of replacing the pruning model according to the first structure candidate includes:
and determining the suitability of the first structure candidates through a greedy algorithm according to the first structure candidates and the pruning model, and replacing the pruning model with the first structure candidates according to the suitability.
In a possible embodiment of the present application, the step of shrinking the channel combinations of the deep learning model to obtain the pruning structure of the channel combinations through the artificial bee colony search further includes:
determining the suitability probability of the structural candidate through following bees in the artificial bee colony, and obtaining a second structural candidate according to the suitability probability;
And replacing the pruning model according to the second structure candidate.
In a possible embodiment of the present application, the step of shrinking the channel combinations of the deep learning model to obtain the pruning structure of the channel combinations through the artificial bee colony search further includes:
determining that the number of updates of the pruning model within the cycle period is smaller than a preset value, and obtaining a third structure candidate through the scout bees in the artificial bee colony;
And replacing the pruning model according to the third structure candidate.
In a second aspect, the technical solution of the present invention further provides a software system for image recognition based on artificial bee colony, including: the model training unit is used for obtaining a plurality of image files carrying labels to form a training sample, training according to the training sample to obtain a deep learning model, obtaining an image file to be identified, and classifying and identifying the image file to be identified according to the compressed deep learning model;
The model compression unit is used for compressing the deep learning model through the artificial bee colony; the step of compressing the deep learning model by artificial bee colony comprises: and reducing the channel combination of the deep learning model, and searching through the artificial bee colony to obtain a pruning structure of the channel combination.
In a third aspect, the present invention further provides a hardware system for image recognition based on artificial bee colony, including:
At least one processor;
At least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to perform the artificial bee colony based image recognition method of the first aspect.
In a fourth aspect, the present invention provides a storage medium having stored therein a processor executable program which when executed by a processor is for running the method of the first aspect.
Advantages and benefits of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention:
The application provides an image recognition method based on the artificial bee colony algorithm, which effectively finds the optimal pruning structure through automatically searched model channel pruning and thereby solves the problem of the intractably large number of pruning-structure combinations in a deep network; in addition, the method converts the search for the optimal pruning structure into an optimization problem and solves it automatically with the artificial bee colony algorithm, reducing manual intervention and effectively improving the performance of the image recognition model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart illustrating steps of an embodiment of an image recognition method based on artificial bee colony according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a convolutional neural network model in an image recognition method based on artificial bee colony according to an embodiment of the present invention;
FIG. 3 is a flow chart of convolutional neural network model construction in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a hardware system embodiment of image recognition based on artificial bee colony according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
First, the relevant terms in the present application are explained:
The artificial bee colony algorithm (Artificial Bee Colony, ABC) is an optimization method proposed by simulating the foraging behavior of honey bees and is a specific application of swarm intelligence. Its main characteristic is that no special information about the problem is required; only the relative quality of candidate solutions is compared, and through the local search behavior of each artificial bee the global optimum eventually emerges in the colony, giving the method a fast convergence rate.
The artificial bee colony algorithm has three basic components: employed bees, unemployed bees (onlooker bees and scout bees), and food sources. The employed and unemployed bees are responsible for finding better food sources. The position of a food source represents a feasible solution, while the nectar amount of the food source represents the quality of that solution.
1. Employed bees: each is associated with a particular food source; an employed bee becomes a scout bee after its food source is exhausted;
2. Onlooker bees (following bees): observe the information conveyed by the employed bees and select a food source accordingly;
3. Scout bees: produced from employed bees whose food sources are exhausted; they search for new food sources at random.
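For orientation, the following is a minimal Python sketch of how the three roles above cooperate in a generic ABC optimization loop; the helper functions `fitness`, `random_solution` and `neighbor`, and all parameter values, are illustrative assumptions rather than the specific procedure of the present application.

```python
import random

def abc_optimize(fitness, random_solution, neighbor, n_sources=10, limit=20, cycles=100):
    """Generic artificial bee colony loop (employed, onlooker and scout phases).

    Assumes fitness() returns a positive value; higher is better.
    """
    sources = [random_solution() for _ in range(n_sources)]  # food sources = candidate solutions
    fits = [fitness(s) for s in sources]                     # nectar amounts = solution quality
    trials = [0] * n_sources                                 # cycles without improvement per source
    best = max(sources, key=fitness)

    for _ in range(cycles):
        # Employed-bee phase: each bee searches in the neighbourhood of its own source.
        for i in range(n_sources):
            cand = neighbor(sources[i], sources)
            f = fitness(cand)
            if f > fits[i]:                                  # greedy selection
                sources[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Onlooker-bee phase: sources are revisited with fitness-proportional probability.
        total = sum(fits)
        for _ in range(n_sources):
            i = random.choices(range(n_sources), weights=[f / total for f in fits])[0]
            cand = neighbor(sources[i], sources)
            f = fitness(cand)
            if f > fits[i]:
                sources[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Scout-bee phase: abandon exhausted sources and restart them at random.
        for i in range(n_sources):
            if trials[i] > limit:
                sources[i] = random_solution()
                fits[i], trials[i] = fitness(sources[i]), 0
        best = max(sources + [best], key=fitness)
    return best
```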
In a first aspect, as shown in fig. 1, the present application provides an image recognition method based on artificial bee colony, comprising steps S01-S02:
S01, acquiring a plurality of image files carrying labels to form training samples, training on the training samples to obtain a deep learning model, and compressing the deep learning model through the artificial bee colony;
Specifically, the acquired image files form an image dataset that has been classified or identified in advance, where the label carried by an image file is its classification information, and the image dataset is split into a training set and a test set. The deep learning model selected in the embodiment is a convolutional neural network: the convolutional neural network is trained on the training set to obtain the deep learning model (the convolutional neural network model), the model is optimized and adjusted on the test set, and finally the search for the optimal pruning structure is converted into an optimization problem that is solved automatically with the artificial bee colony algorithm, thereby completing the compression of the convolutional neural network model.
In an embodiment, the process of training to obtain a deep learning model from training samples includes the more detailed steps S011-S012:
S011, determining a joint weight and an activation function of the node, and determining a deep learning model according to the joint weight and the activation function;
S012, determining a pruning model and the number of channels of the pruning model according to the network layer of the deep learning model.
Specifically, as shown in fig. 2, {x1, x2, x3} represents the input of the convolutional neural network model, and w_ij^(k) represents the connection weight between network layers, where i indexes the i-th element of the input, j indexes the j-th element of the output produced by that connection weight, and k indicates that the weight connects the k-th layer to the (k+1)-th layer. a_ij denotes an output after processing by the connection weights, where i denotes the i-th layer and j denotes the j-th output; H(x) denotes the activation function, and z_ij denotes the output of a_ij after the activation function, that is, z_ij = H(a_ij). {y1, y2, y3} represents the final output of the model after the neural network processing. In the embodiment, the convolutional neural network model is first constructed following the idea of forward propagation: the weight of every node in the given network is assigned an initial value and an activation function is selected. In this embodiment the sigmoid function is selected as the activation function, whose form is:
H(x) = 1 / (1 + e^(-x))
After the connection weights w and the activation function H(x) are determined, all values of a and z can be computed sequentially from front to back, and finally the y values of the output layer are obtained. The nodes in each hidden layer of the convolutional neural network, together with the connection weights before and after those nodes, serve as the channels of the pruning model; the pruning model therefore contains a plurality of channels, and c_j is the number of channels of the pruning model in the j-th layer.
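As an illustration of this forward pass, a small Python sketch is given below, assuming fully connected layers with the sigmoid activation H(x); the layer sizes and the use of NumPy are assumptions for the example, not taken from the embodiment.

```python
import numpy as np

def sigmoid(a):
    """Activation function H(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, weights):
    """Propagate input x through fully connected layers given connection-weight matrices w^(k).

    Returns the activations z of every layer (z_0 = x) so they can be reused by backpropagation.
    """
    zs = [x]
    for w in weights:
        a = w @ zs[-1]          # a_j = sum_i w_ij * z_i  (weighted input to the next layer)
        zs.append(sigmoid(a))   # z_j = H(a_j)
    return zs

# Toy usage: input {x1, x2, x3}, one hidden layer of 4 nodes, 3 outputs {y1, y2, y3}.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(3, 4))]
y = forward(np.array([0.5, -1.0, 2.0]), weights)[-1]
print(y)
```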
In an embodiment, the step of compressing the deep learning model through the artificial bee colony comprises: reducing the channel combinations of the deep learning model, and obtaining the pruning structure of the channel combinations through artificial bee colony search. In the pruning stage, the automatic search of the pruning network structure is realized mainly by the artificial bee colony algorithm. This step may further include step S013:
S013, generating a plurality of first structure candidates from the structure of the pruning model through the employed bees in the artificial bee colony, and replacing the pruning model according to the first structure candidates.
Wherein, for any pruning model N' obtained in step S012, its structure is represented as C' = (c'_1, c'_2, …, c'_L), where c'_j ≤ c_j and c_j is the number of channels of the j-th layer before pruning. Given the training set T_train and the test set T_test, the goal of the embodiment is to obtain the optimal combination C' such that the pruning model N' trained/fine-tuned on T_train achieves the best accuracy. The channel pruning problem is expressed in the embodiment as:
(C')* = argmax_{C'} acc( N'(C', W'); T_test )
where W' is the weight of the pruning model trained or fine-tuned on T_train, and acc(·) represents the accuracy on T_test of N' with structure C'. Because this equation is difficult to optimize directly, the combination space is further reduced: given the neural network model N, the number of possible pruning-structure combinations can be as large as c_1 × c_2 × … × c_L, which is computationally intractable for resource-constrained devices. Therefore, the equation is constrained by:
s.t. c'_i ∈ {0.1·c_i, 0.2·c_i, …, α·c_i}, i = 1, …, L
where α ∈ {10%, 20%, …, 100%} is a predetermined constant indicating that, for the i-th layer, at most α·c_i of the channels of the pretrained network N are kept in N', i.e. c'_i ≤ α·c_i and the value of c'_i is limited to the set {0.1·c_i, 0.2·c_i, …, α·c_i}.
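To make this constrained search space concrete, the following small sketch enumerates the allowed per-layer channel counts {0.1·c_i, 0.2·c_i, …, α·c_i}; the channel count and α used in the example are made-up values.

```python
def allowed_channels(c_i, alpha=1.0, step=0.1):
    """Allowed pruned-channel counts {0.1*c_i, 0.2*c_i, ..., alpha*c_i} for a layer with c_i channels."""
    n_ratios = int(round(alpha / step))
    return [max(1, round(step * k * c_i)) for k in range(1, n_ratios + 1)]

# Example: a layer with 64 channels and alpha = 50% may keep 6, 13, 19, 26 or 32 channels.
print(allowed_channels(64, alpha=0.5))
```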
In the embodiment, when the artificial bee colony algorithm is used to search the network structure, the honey-gathering behavior of the bee colony corresponds to the optimization of a specific pruning model during the search, a food source (honey source) corresponds to a feasible solution of the optimization problem, the position of the honey source is the position of the solution, the nectar amount of the honey source corresponds to the fitness value in the optimization process, the process of searching for and collecting honey sources corresponds to the problem-solving process, and the maximum nectar amount corresponds to the optimal solution of the problem. First, the initial honey-source positions are determined: for each element of each honey-source position matrix a random number within the allowed range is generated, filling the honey-source position matrix. Then the initial nectar amounts are calculated: all honey-source nodes are traversed, the parameters of each node are substituted into the cost function to compute the nectar amount of each honey source (replacing it with a positive-valued function if necessary) so as to fill the honey-source nectar-amount matrix, and at the same time the generation-0 optimal honey-source position is stored in the historical optimal honey-source matrix. Next, the employed bees look for the next honey source: all honey-source nodes are traversed, and for honey source j another honey source g ≠ j is selected at random; the employed bee then computes the next honey source, i.e. the first structure candidate. The i-th element of the new structure candidate G'_j generated by the employed bee for each pruning structure C'_j is defined as follows:
g'_ji = ⌈ c'_ji + r · (c'_ji − c'_gi) ⌉
where r is a random number within [−1, +1] and g ≠ j indexes the g-th pruning structure; the operator ⌈·⌉ returns the value in {0.1·c_i, 0.2·c_i, …, α·c_i} closest to its input.
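The employed-bee update above can be sketched as follows; `allowed_per_layer` holds the per-layer sets {0.1·c_i, …, α·c_i}, and all names are illustrative assumptions.

```python
import random

def snap(value, allowed):
    """Return the value in the allowed channel set closest to the input (the role of the rounding operator above)."""
    return min(allowed, key=lambda v: abs(v - value))

def employed_bee_candidate(structures, j, allowed_per_layer):
    """Generate candidate G'_j from pruning structure C'_j and a randomly chosen other structure C'_g."""
    g = random.choice([idx for idx in range(len(structures)) if idx != j])
    c_j, c_g = structures[j], structures[g]
    candidate = []
    for i, (cj_i, cg_i) in enumerate(zip(c_j, c_g)):
        r = random.uniform(-1.0, 1.0)                     # a fresh r per layer (illustrative choice)
        candidate.append(snap(cj_i + r * (cj_i - cg_i), allowed_per_layer[i]))
    return candidate
```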
In an embodiment, step S013 further includes step S013a:
S013a, determining the fitness of the first structure candidate through a greedy algorithm according to the first structure candidate and the pruning model, and replacing the pruning model with the first structure candidate according to the fitness.
Specifically, the nectar amount of the newly generated honey source is calculated and a greedy selection is carried out between the current honey source and the new honey source: if the new honey source is selected, the corresponding node is updated in the honey-source position matrix and the honey-source nectar-amount matrix; otherwise, the corresponding entry of the honey-source exploitation-count matrix M is increased by 1. In the embodiment, the employed bee decides whether to replace C'_j based on the fitness of the generated candidate G'_j. Consistent with the optimization objective above, the fitness of a structure is defined as the accuracy of the pruned model with that structure:
fit(G'_j) = acc( N'(G'_j, W'); T_test )
and C'_j is replaced by G'_j when the fitness of G'_j is higher.
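A possible sketch of this greedy replacement is given below; `evaluate_accuracy` stands in for training/fine-tuning the pruned model on T_train and measuring acc(·) on T_test, and is an assumed helper rather than part of the embodiment.

```python
def greedy_select(structures, fits, trials, j, candidate, evaluate_accuracy):
    """Keep the better of the current structure C'_j and its candidate G'_j (higher accuracy wins)."""
    cand_fit = evaluate_accuracy(candidate)   # fitness = accuracy of the pruned model with this structure
    if cand_fit > fits[j]:
        structures[j], fits[j], trials[j] = candidate, cand_fit, 0   # replace and reset the counter
    else:
        trials[j] += 1                        # one more cycle without improvement (used by the scout bees)
    return structures[j]
```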
In an embodiment, the process of shrinking the channel combinations of the deep learning model and obtaining the pruning structure of the channel combinations through the artificial bee colony search may further include step S014:
S014, determining the fitness-based selection probability of the structure candidates through the following bees (onlooker bees) in the artificial bee colony, obtaining a second structure candidate according to this probability, and replacing the pruning model according to the second structure candidate.
Specifically, a honey-source selection-probability matrix is generated and updated for the following bees: each following bee is traversed and first performs roulette-wheel selection with the calculated selection-probability matrix to choose one honey source from all honey sources, e.g. honey source k; a new candidate, i.e. the second structure candidate, is generated for it, and the greedy selection operation of the employed-bee phase is repeated for that honey source. In this process a pruning structure is selected using the fitness-proportional probability P_j, defined as:
P_j = fit(C'_j) / Σ_g fit(C'_g)
Thus, the better the fitness of C'_j, the higher the probability that C'_j is selected, which leads to new and better pruning structures.
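The following-bee (onlooker) phase with roulette-wheel selection over P_j can be sketched as below; it assumes positive fitness values, and the two callables stand for the employed-bee helpers sketched above.

```python
import random

def onlooker_phase(structures, fits, trials, make_candidate, select):
    """Each following bee picks a structure with probability P_j = fit_j / sum_g fit_g and refines it.

    `make_candidate(structures, j)` and `select(structures, fits, trials, j, candidate)` are the
    employed-bee helpers sketched above, passed in as callables.
    """
    total = sum(fits)
    probs = [f / total for f in fits]                    # fitter structures are chosen more often
    for _ in range(len(structures)):
        j = random.choices(range(len(structures)), weights=probs)[0]   # roulette-wheel selection
        candidate = make_candidate(structures, j)        # same neighbourhood move as the employed bees
        select(structures, fits, trials, j, candidate)   # same greedy replacement as the employed bees
    return structures
```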
In an embodiment, the process of shrinking the channel combinations of the deep learning model and obtaining the pruning structure of the channel combinations through artificial bee colony search may further include step S015:
S015, determining that the number of updates of the pruning model within the cycle period is smaller than a preset value, obtaining a third structure candidate through the scout bees in the artificial bee colony, and replacing the pruning model according to the third structure candidate.
Specifically, after all following bees complete the search process, if a solution has not been further updated after a preset limit number of cycles, the solution is considered to be trapped in a local optimum and the honey source is abandoned. If honey source k is abandoned, the employed bee associated with that honey source becomes a scout bee. That is, if the pruning structure C'_j has not been updated for more than the preset number of cycles, the scout bee reinitializes it to generate a new pruning structure, i.e. the third structure candidate. The scout bees traverse all honey-source nodes, find the nodes whose value in the exploitation-count matrix M exceeds the preset threshold, overwrite those nodes with randomly generated values, recalculate their nectar amounts, and reset the corresponding entries of the exploitation-count matrix to zero. Finally, all honey-source nodes are traversed and the optimal node is stored in the historical optimal honey-source matrix.
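A sketch of the scout-bee step follows; `limit` plays the role of the preset number of cycles in the text, and `random_structure` is an assumed helper that draws a fresh structure from the allowed channel sets.

```python
def scout_phase(structures, fits, trials, limit, random_structure, evaluate_accuracy):
    """Reinitialize any pruning structure that has not improved for more than `limit` cycles."""
    for j in range(len(structures)):
        if trials[j] > limit:                            # the honey source is exhausted: abandon it
            structures[j] = random_structure()           # the employed bee becomes a scout and restarts
            fits[j] = evaluate_accuracy(structures[j])   # recompute the nectar amount (fitness)
            trials[j] = 0                                # reset the exploitation counter
    return structures
```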
After the pruning structure has been automatically searched and solved by the artificial bee colony, i.e. after the model has been compressed, the embodiment's training of the deep learning model on the training samples further comprises step S016:
S016, determining an error according to the labels and the deep learning model, and correcting the connection weights according to the error.
Specifically, as shown in fig. 3, the output-layer values computed with the connection weights initialized in step S011 may deviate considerably from the actual values, so the embodiment needs to optimize the connection weights, for which the back-propagation algorithm is used. Let y_k denote the k-th output of the output layer computed by the forward-propagation algorithm, and let t_k denote the actual value given by the label of the training sample. The error function is then defined as:
E = (1/2) · Σ_k (y_k − t_k)²
The back-propagation algorithm optimizes the connection weights by gradient descent, so the partial derivative δ of the error function with respect to the connection weights needs to be calculated. The δ of the output layer is computed from the output-layer error, and the δ of each hidden layer is then derived backwards in turn. In effect, a given δ_k of the k-th layer receives a share of the error allocated from δ_(k+1) of the (k+1)-th layer, the allocation weights being the connection weights w used in the forward-propagation algorithm, so the computation proceeds as an iterative process.
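For the error function and weight correction just described, a small illustrative sketch of one backpropagation/gradient-descent step is given below; it is a textbook implementation under the assumption of sigmoid activations throughout, not the embodiment's exact code.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backprop_step(x, t, weights, lr=0.1):
    """One forward pass plus one gradient-descent update of the connection weights."""
    # Forward pass, remembering every layer's activation z (z_0 = x).
    zs = [x]
    for w in weights:
        zs.append(sigmoid(w @ zs[-1]))
    y = zs[-1]

    # Output-layer delta for E = 1/2 * sum_k (y_k - t_k)^2 with a sigmoid output layer.
    delta = (y - t) * y * (1.0 - y)
    new_weights = list(weights)
    # Push the delta backwards layer by layer and apply gradient descent to each weight matrix.
    for k in range(len(weights) - 1, -1, -1):
        grad = np.outer(delta, zs[k])                    # dE/dw^(k)
        new_weights[k] = weights[k] - lr * grad
        if k > 0:                                        # allocate the error to the previous layer's delta
            delta = (weights[k].T @ delta) * zs[k] * (1.0 - zs[k])
    return new_weights
```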
S02, acquiring the image file to be identified, and classifying and identifying the image file to be identified according to the compressed deep learning model.
Specifically, the image file to be recognized is classified and recognized by the neural network model obtained through the training, pruning, optimization and error correction of steps S01 and S011-S016, and the classification or recognition result is output by the model.
In a second aspect, a software system embodiment of the present invention, a software system based on image recognition of artificial bee colony, comprises:
The model training unit is used for obtaining a plurality of image files carrying labels to form a training sample, training the training sample to obtain a deep learning model, obtaining an image file to be identified, and classifying and identifying the image file to be identified according to the compressed deep learning model;
The model compression unit is used for compressing the deep learning model through the artificial bee colony; the step of compressing the deep learning model by artificial bee colony comprises: and (3) reducing the channel combination of the deep learning model, and obtaining the pruning structure of the channel combination through artificial bee colony search.
In a third aspect, as shown in fig. 4, an embodiment of the present invention further provides a hardware system for image recognition based on artificial bee colony, comprising at least one processor; at least one memory for storing at least one program; the at least one program, when executed by the at least one processor, causes the at least one processor to perform the artificial bee colony based image recognition method as in the first aspect.
For example, in one possible embodiment implemented in PyTorch, the GPU used for training is a Tesla P100 and the server operating system version is Linux 16.04. The algorithm was tested on the classical VGG16 network with the CIFAR-10 dataset; as shown in Table 1, the channel pruning method based on the artificial bee colony algorithm compresses the model to a large extent while maintaining accuracy, reducing the number of channels, the amount of computation and the number of parameters of the model.
TABLE 1
In addition, on the public image-classification datasets CIFAR-10 and CIFAR-100, the size of the deep learning model is effectively compressed without affecting recognition accuracy, so that the model can be deployed on resource-constrained edge devices and the application requirements of end devices in an edge computing system are met.
The embodiment of the invention also provides a storage medium storing a program, and the program is executed by a processor to implement the method as in the first aspect.
From the above specific implementation process it can be summarized that, compared with the prior art, the technical solution provided by the present invention has the following advantages or benefits:
Through automatically searched model channel pruning, the method effectively finds the optimal pruning structure and solves the problem of the intractably large number of pruning-structure combinations in a deep network; in addition, the search for the optimal pruning structure is converted into an optimization problem that is solved automatically with the artificial bee colony algorithm, which reduces manual intervention and effectively improves the performance of the image recognition model.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features may be integrated in a single physical device and/or software module or may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
Wherein the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium may even be paper or other suitable medium upon which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, Programmable Gate Arrays (PGAs), Field-Programmable Gate Arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (7)

1. An image recognition method based on an artificial bee colony, characterized by comprising the following steps:
obtaining a plurality of image files carrying labels to form training samples, training according to the training samples to obtain a deep learning model,
Compressing the deep learning model through artificial bee colony;
Acquiring an image file to be identified, and classifying and identifying the image file to be identified according to the compressed deep learning model;
the step of compressing the deep learning model by artificial bee colony comprises:
Shrinking the channel combination of the deep learning model, and searching through the artificial bee colony to obtain a pruning structure of the channel combination;
the step of training to obtain a deep learning model according to the training sample comprises the following steps:
determining the joint weight and the activation function of a node, and determining the deep learning model according to the joint weight and the activation function;
determining a pruning model and the number of channels of the pruning model according to the network layer of the deep learning model;
The step of shrinking the channel combination of the deep learning model and obtaining the pruning structure of the channel combination through the artificial bee colony search comprises the following steps:
Generating first structure candidates for the structure of the pruning model by employing bees in the artificial bee colony;
and determining the suitability of the first structure candidates through a greedy algorithm according to the first structure candidates and the pruning model, and replacing the pruning model with the first structure candidates according to the suitability.
2. The artificial bee colony based image recognition method according to claim 1, wherein the training to obtain the deep learning model based on the training sample comprises:
and determining an error according to the label and the deep learning model, and correcting the connection weight according to the error.
3. The image recognition method based on artificial bee colony as defined in claim 1, wherein the step of narrowing down the channel combination of the deep learning model to obtain a pruning structure of the channel combination by the artificial bee colony search further comprises:
determining the suitability probability of the structural candidate through following bees in the artificial bee colony, and obtaining a second structural candidate according to the suitability probability;
And replacing the pruning model according to the second structure candidate.
4. The image recognition method based on artificial bee colony as defined in claim 1, wherein the step of narrowing down the channel combination of the deep learning model to obtain a pruning structure of the channel combination by the artificial bee colony search further comprises:
determining that the number of updates of the pruning model within the cycle period is smaller than a preset value, and obtaining a third structure candidate through the scout bees in the artificial bee colony;
And replacing the pruning model according to the third structure candidate.
5. An image recognition system based on artificial bee colony, comprising:
The model training unit is used for obtaining a plurality of image files carrying labels to form a training sample, training according to the training sample to obtain a deep learning model, obtaining an image file to be identified, and classifying and identifying the image file to be identified according to the compressed deep learning model;
The model compression unit is used for compressing the deep learning model through the artificial bee colony; the step of compressing the deep learning model by artificial bee colony comprises: shrinking the channel combination of the deep learning model, and searching through the artificial bee colony to obtain a pruning structure of the channel combination;
The model training unit is specifically configured to: determining the joint weight and the activation function of a node, and determining the deep learning model according to the joint weight and the activation function; determining a pruning model and the number of channels of the pruning model according to the network layer of the deep learning model;
the model compression unit is specifically used for: generating first structure candidates for the structure of the pruning model by employing bees in the artificial bee colony; and determining the suitability of the first structure candidates through a greedy algorithm according to the first structure candidates and the pruning model, and replacing the pruning model with the first structure candidates according to the suitability.
6. An image recognition system based on artificial bee colony, comprising:
At least one processor;
At least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to perform the artificial bee colony based image recognition method of any of claims 1 to 4.
7. A storage medium having stored therein a program executable by a processor, characterized in that: the processor executable program when executed by a processor is for running the artificial bee colony based image recognition method as claimed in any of claims 1 to 4.
CN202110323612.4A 2021-03-26 2021-03-26 Image recognition method, system and storage medium based on artificial bee colony Active CN113139582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110323612.4A CN113139582B (en) 2021-03-26 2021-03-26 Image recognition method, system and storage medium based on artificial bee colony

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110323612.4A CN113139582B (en) 2021-03-26 2021-03-26 Image recognition method, system and storage medium based on artificial bee colony

Publications (2)

Publication Number Publication Date
CN113139582A CN113139582A (en) 2021-07-20
CN113139582B true CN113139582B (en) 2024-04-30

Family

ID=76809985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110323612.4A Active CN113139582B (en) 2021-03-26 2021-03-26 Image recognition method, system and storage medium based on artificial bee colony

Country Status (1)

Country Link
CN (1) CN113139582B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609630A (en) * 2017-08-02 2018-01-19 广东建设职业技术学院 A kind of depth confidence network parameter optimization method and system based on artificial bee colony
WO2019094729A1 (en) * 2017-11-09 2019-05-16 Strong Force Iot Portfolio 2016, Llc Methods and systems for the industrial internet of things

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Luo Hao; Liu Yu. An artificial bee colony algorithm with reinforced mutual learning. Computer Engineering and Applications, 2016, (16), full text. *

Also Published As

Publication number Publication date
CN113139582A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
US20230316149A1 (en) Permissions in a dataset management system
CN110366734B (en) Optimizing neural network architecture
US20200134456A1 (en) Video data processing method and apparatus, and readable storage medium
US11640714B2 (en) Video panoptic segmentation
Bochinski et al. Deep active learning for in situ plankton classification
KR102225579B1 (en) Method for semantic segmentation based on knowledge distillation with improved learning performance
CN116134454A (en) Method and system for training neural network models using knowledge distillation
CN111709493B (en) Object classification method, training device, object classification equipment and storage medium
KR102042168B1 (en) Methods and apparatuses for generating text to video based on time series adversarial neural network
CN112819099B (en) Training method, data processing method, device, medium and equipment for network model
CN116049459B (en) Cross-modal mutual retrieval method, device, server and storage medium
CN112948708A (en) Short video recommendation method
CN113065013B (en) Image annotation model training and image annotation method, system, equipment and medium
CN113469186B (en) Cross-domain migration image segmentation method based on small number of point labels
CN109710842B9 (en) Method and device for pushing service information and readable storage medium
CN113051914A (en) Enterprise hidden label extraction method and device based on multi-feature dynamic portrait
CN110866564A (en) Season classification method, system, electronic device and medium for multiple semi-supervised images
CN114417058A (en) Video material screening method and device, computer equipment and storage medium
CN115830324A (en) Semantic segmentation domain adaptive label correction method and device based on candidate label set
CN113420651B (en) Light weight method, system and target detection method for deep convolutional neural network
CN111626058B (en) Based on CR 2 Image-text double-coding realization method and system of neural network
CN113139582B (en) Image recognition method, system and storage medium based on artificial bee colony
CN112633246A (en) Multi-scene recognition method, system, device and storage medium in open scene
CN115169472A (en) Music matching method and device for multimedia data and computer equipment
CN116702835A (en) Neural network reasoning acceleration method, target detection method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 510630 room 1101, building 1, No.16 Keyun Road, Tianhe District, Guangzhou City, Guangdong Province (office use only)

Applicant after: Guangdong Yitong Lianyun Intelligent Information Co.,Ltd.

Address before: 510630 building 1101, No.16 Keyun Road, Tianhe District, Guangzhou City, Guangdong Province

Applicant before: YITONG CENTURY INTERNET OF THINGS RESEARCH INSTITUTE (GUANGZHOU) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant