CN113505628A - Target identification method based on lightweight neural network and application thereof - Google Patents
- Publication number
- CN113505628A (application number CN202110359604.5A)
- Authority
- CN
- China
- Prior art keywords
- convolution layer
- target
- layer
- back bone
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a target identification method based on a lightweight neural network and an application thereof, wherein a target picture is input into a trained Densenet improved model to obtain a target category. The Densenet improved model differs from the Densenet model in the Back Bone of the net block: in the Back Bone, Channel Split, a first convolution layer, a first depth separation convolution layer, a second convolution layer, Concat and Channel Shuffle are connected in sequence; a second depth separation convolution layer and a third convolution layer form a branch parallel to the first convolution layer, first depth separation convolution layer and second convolution layer, with the second depth separation convolution layer connected to Channel Split and the third convolution layer connected to Concat. The target identification method of the invention requires only a small amount of data processing while achieving high identification precision, and has broad application prospects.
Description
Technical Field
The invention belongs to the technical field of equipment identification, and relates to a target identification method based on a lightweight neural network and application thereof.
Background
Target detection is one of the important research directions in the field of computer vision. The traditional approach extracts features by constructing a feature descriptor and then classifies them with a classifier, for example a histogram of oriented gradients (HOG) with a support vector machine (SVM). With the excellent performance of deep learning in image classification, convolutional neural networks have come to be widely used across computer vision, and applying deep learning to target detection has become a new direction.
The traditional neural network connects layers with fully connected layers, whereas the weight sharing of the convolutional neural network greatly reduces the amount of computation and the complexity of the network model. The translation invariance of the convolutional neural network lets it handle image features well, and a large number of convolutional neural networks for image recognition have appeared. These algorithms modify the Backbone, evolving from the original LeNet, AlexNet, ZFNet, VGGNet, Inception series and ResNet series to lightweight neural networks that still obtain high precision, and the technology has gradually spread to other applications such as face recognition and intelligent warehousing.
Nevertheless, current techniques have the following drawback: residual connections make it possible to stack deeper networks while maintaining accuracy, but they increase the complexity and computation of the network model. Commercial computers can run these networks on powerful GPUs with ample memory, but for embedded devices with limited performance the amount of data to process is too large, and the requirements of timeliness and accuracy are difficult to meet.
Therefore, it is very significant to develop a target identification method with high precision, small data processing amount and applicability to embedded devices with limited performance.
Disclosure of Invention
The invention aims to overcome the defects of existing methods, which struggle to balance precision against data processing load and place high demands on device hardware, and provides a target identification method with high precision and a small data processing load that is suitable for embedded devices with limited performance. The method greatly reduces the amount of data processed while maintaining precision, and greatly lowers the hardware requirements. In addition, the invention provides a way to implement the target identification method with multiple cooperating embedded devices, which has broad application prospects.
In order to achieve the purpose, the invention provides the following technical scheme:
a target identification method based on a lightweight neural network inputs a target picture into a trained Densenet improved model, and the trained Densenet improved model outputs a target category;
the Densenet improved model is improved over the Densenet model in that Back Bone in the Net Block (Net Block) includes Channel Split, first convolutional layer (first Conv), second convolutional layer (second Conv), third convolutional layer (third Conv), first depth separation convolutional layer (first DWConv), second depth separation convolutional layer (second DWConv), Concat, Channel Shuffle, wherein the Channel Split layer, the first convolution layer (first Conv), the first depth separation convolution layer (first DWConv), the second convolution layer (second Conv), the Concat and the Channel Shuffle are connected in sequence, the second depth separation convolution layer (second DWConv) and the third convolution layer (third Conv) are connected in parallel with the first convolution layer (first Conv), the first depth separation convolution layer (first DWConv) and the second convolution layer (second Conv), the second depth separation convolution layer (second DWConv) is connected with the Channel Split layer, and the third convolution layer (third Conv) is connected with the Concat. Namely, after the output channels of Back Bone are distributed by Channel Split, one part of the output channels passes through a first convolution layer (first Conv), a first depth separation convolution layer (first DWConv) and a second convolution layer (second Conv), the other part of the output channels passes through a second depth separation convolution layer (second DWConv) and a third convolution layer (third Conv), and then the output channels are merged by Concat and then subjected to Channel scrambling by Channel Shuffle to be output to the next layer to be used as the input of the next layer.
The target identification method based on a lightweight neural network according to the invention is the first to obtain a Densenet improved model by combining the convolutional neural network with the Channel Shuffle of ShuffleNet, the residual connections of ResNet, the channel splitting of Inception, the multi-path dense connections of Densenet and the DWConv of MobileNet.
As a preferred technical scheme:
the object identification method based on the lightweight neural network comprises the following steps that a Densenet improved model comprises a main convolution Layer (main Conv), a feature extraction Layer (Pooling), a first net block, a Transition Layer (Transition Layer), a second net block and a Classification Layer (Classification Layer) which are connected in sequence; the first net block and the second net block have the same structure.
According to the target identification method based on the lightweight neural network, the network block comprises a first Back Bone, a second Back Bone, a third Back Bone and a fourth Back Bone which are sequentially connected, the output of the first Back Bone is simultaneously the input of the second Back Bone, the third Back Bone and the fourth Back Bone, and the output of the second Back Bone is simultaneously the input of the third Back Bone and the fourth Back Bone.
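The dense wiring just described can be sketched as follows, with the Back Bones passed in as plain callables. Merging the earlier outputs by channel concatenation is an assumption for illustration; the patent does not state the merge operation.

```python
import numpy as np

def net_block(x, back_bones):
    """Densenet-style wiring: the output of each Back Bone is forwarded to
    every later Back Bone; all earlier outputs are merged (here by channel
    concatenation, an assumption) before entering the next Back Bone."""
    outputs = [back_bones[0](x)]
    for bb in back_bones[1:]:
        merged = np.concatenate(outputs, axis=-1)  # all earlier outputs feed in
        outputs.append(bb(merged))
    return outputs[-1]
```

With four identity Back Bones and a 3-channel input, the channel count grows 3 → 3 → 6 → 12, reflecting the accumulating skip connections.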
In the target identification method based on the lightweight neural network described above, the Transition Layer comprises a convolution layer (Conv) and a feature extraction layer (Pooling);
the Classification Layer comprises a global average pooling layer and a Softmax classifier.
The Softmax classifier is used for calculating the classification probability of each sample, specifically:

$$y_i = \frac{e^{s_i}}{\sum_{k=1}^{n} e^{s_k}}$$

where $s_i$ represents the output value of the $i$-th neuron of the Softmax classifier, $s_i = \eta^{T} f$, $f$ is the image feature vector of a training sample, $\eta$ is the corresponding weight, and $n$ is the number of categories to be classified.

The training error is then calculated from the probabilities $y_i$ using the cross entropy

$$E = -\sum_{k=1}^{n} y_k^{*} \log y_k$$

where $y_k^{*} = \theta_{ik}$ is an indicator with $\theta_{ik} = 1$ when $i = k$ (and 0 otherwise), $i$ denotes the $i$-th class; that is, $y_k^{*} = 1$ when the original input belongs to class $k$.
The training error is back-propagated layer by layer from the last layer of the convolutional neural network. Cross entropy is used as the loss function and an Adam adaptive gradient optimizer is used for optimization; the initial learning rate is set to 0.01 and training runs for 100 epochs. Experiments are carried out with the TensorFlow 2.0 framework, and the trained model is saved.
In the target identification method based on the lightweight neural network described above, the number ratio of the training set to the test set of the Densenet improved model is 4:1, and the training end point is a preset number of training epochs. The scope of the invention is not limited thereto; those skilled in the art may set the sizes of the training and test sets according to the actual situation.
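The 4:1 partition can be carried out with a simple shuffled split; the function name and fixed seed below are illustrative.

```python
import random

def split_dataset(samples, ratio=4, seed=0):
    """Shuffle samples and split them into training and test sets at ratio:1."""
    rng = random.Random(seed)
    samples = samples[:]           # copy so the caller's list is untouched
    rng.shuffle(samples)
    cut = len(samples) * ratio // (ratio + 1)
    return samples[:cut], samples[cut:]
```

For 100 samples this yields 80 training and 20 test samples.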
The invention also provides equipment applying the target identification method based on the light weight neural network, which comprises one or more processors, one or more memories, one or more programs and an input unit;
the input unit is used for inputting a target picture;
the one or more programs are stored in the memory and, when executed by the processor, cause the apparatus to perform a method of target recognition based on a lightweight neural network as described above.
In addition, the invention also provides a device kit comprising the device described above: the kit includes a central host and one or more such devices. The hardware of the devices and of the central host may differ (for example, some devices may have strong performance and others weak); likewise, the deep learning model may be written in C++ or Python, and the Densenet improved model may even be split into sub-models, one per device, with the sub-models written in different languages such as C++ or Python;
the equipment is in communication connection with the central host.
The device kit draws on the idea of XGBoost: by employing multiple embedded devices running the lightweight-neural-network target identification method, it improves system precision and ensures that the remaining embedded devices still operate normally when some go down; it also realizes hot deployment (software is upgraded while the application is running, without restarting it), improving the availability of the system. Meanwhile, the device kit can adopt different neural networks according to the different hardware of the embedded devices, and the networks can be written with different frameworks and languages.
The invention builds a high-availability architecture for intelligent warehousing based on the lightweight neural network. A deep learning developer can run models written in different languages, or even different models, on each host: a model only needs to put its prediction result into RabbitMQ, and the central host only needs to take results from RabbitMQ, so the services are decoupled. Embedded devices can be added at any time; different models in different languages can be adopted in view of the devices' differing hardware performance; and when one embedded device goes down, the overall structure is unaffected. When a new model is added, the original models need not be stopped, truly realizing hot deployment. In the central host, different initialization weights can be manually assigned to the models in view of their differing complexity and precision, and the host can adaptively adjust the weights according to accuracy in subsequent training.
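The decoupling pattern can be illustrated with a thread-and-queue stand-in for RabbitMQ (an assumption for illustration only; a real deployment would use a RabbitMQ broker with a fanout exchange, and all names here are hypothetical). Each device gets its own inbox queue, consumes a copy of the picture, and publishes its prediction; host and devices share only the queues.

```python
import queue
import threading

def device_worker(device_id, model, inbox, predictions):
    """One embedded device: take pictures from its inbox, publish predictions."""
    while True:
        pic = inbox.get()
        if pic is None:            # shutdown signal
            break
        predictions.put((device_id, model(pic)))

def classify(picture, models):
    """Broadcast one picture to every device and collect all predictions."""
    predictions = queue.Queue()
    inboxes = [queue.Queue() for _ in models]   # one inbox per device (fanout)
    workers = [threading.Thread(target=device_worker,
                                args=(i, m, inboxes[i], predictions))
               for i, m in enumerate(models)]
    for w in workers:
        w.start()
    for box in inboxes:
        box.put(picture)           # broadcast the picture
        box.put(None)              # then tell the device to stop
    results = {}
    for _ in models:
        dev, label = predictions.get()
        results[dev] = label
    for w in workers:
        w.join()
    return results
```

Because the host only touches the queues, a device written in another language (or a new device added later) changes nothing on the host side.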
As a preferred technical scheme:
the device suite is characterized in that the device is an embedded device.
The device suite as described above, the central host running the following programs:
(1) the central host acquires heartbeat packets sent by each device at regular time, confirms the online condition of the device and eliminates the devices which are not online;
(2) the central host sends a target picture to each online device, and each online device operates the target identification method based on the lightweight neural network to obtain a target category;
(3) each online device sends the obtained target category to the central host, and the central host obtains the final result from the target categories returned by the online devices. The central host operates based on RabbitMQ (which can of course be replaced according to specific requirements); the specific operation flow is as follows: the central host puts the industrial device (goods) image into RabbitMQ, each embedded device takes the picture from RabbitMQ and puts its predicted result into RabbitMQ, and the central host takes the predicted results from RabbitMQ.
In the above device kit, the final result is obtained from the target categories of the online devices in one of two ways: by an election rule, confirming the category with the most predictions as correct, or by the weight coefficient of each online device together with a weight adjustment algorithm. Taking 5 embedded devices as an example, all initial weights are 0.2. If one device makes prediction errors n times, a number x is selected in $[0, 2^n]$ and the new weight is z = 0.2 − 0.01x; when z reaches 0 the system sends out an alarm and removes the device. Let the predicted results and weights of the 5 machines be y1, y2, y3, y4, y5 and z1, z2, z3, z4, z5. If y1, y2 and y3 are the same, y4 and y5 are the same, and z1 + z2 + z3 > z4 + z5, then the final result is y1. Different initial weight ratios can be set according to the different performance of the devices and models.
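The weighted election above can be sketched as follows, under the stated rules (initial weights 0.2, z = 0.2 − 0.01x, removal when z reaches 0); the function names are illustrative.

```python
def weighted_vote(predictions, weights):
    """Pick the category whose supporting devices have the largest total weight."""
    totals = {}
    for label, w in zip(predictions, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

def adjust_weight(weight, x):
    """Lower a device's weight by 0.01 * x; floor at 0, which triggers removal."""
    z = round(weight - x * 0.01, 10)
    return max(z, 0.0)

# 5 devices, equal initial weights: "cat" wins 0.6 to 0.4
preds = ["cat", "cat", "cat", "dog", "dog"]
weights = [0.2] * 5
final = weighted_vote(preds, weights)
```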
Has the advantages that:
(1) the target identification method based on a lightweight neural network according to the invention is the first to obtain a Densenet improved model by combining the Channel Shuffle of ShuffleNet, the residual connections of ResNet, the channel splitting of Inception, the multi-path dense connections of Densenet and the DWConv of MobileNet. Compared with traditional networks and lightweight networks, it further improves precision while shrinking the model structure; applying the Densenet improved model to target category identification requires little data processing yet yields high identification precision, giving the method broad application prospects;
(2) the equipment suite applying the target identification method based on the light weight neural network provides a high-availability architecture based on the light weight neural network, has good flexibility, can be applied to embedded equipment for step-by-step processing, and does not influence the overall structure after one of the embedded equipment is down; when a new model is added, the original model does not need to be stopped, the hot deployment is really realized, and the method has a great application prospect.
Drawings
FIG. 1 is a schematic structural diagram of a Densenet improved model of the present invention;
fig. 2 is a schematic operation flow chart of the equipment kit of the present invention.
Detailed Description
The present invention will be described in more detail below with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely illustrative of some, but not all, embodiments of the invention.
Example 1
A target identification method based on a lightweight neural network comprises the following steps:
(1) training a Densenet improved model:
the densenert improved model is shown in fig. 1 and comprises a main convolution layer (main Conv), a feature extraction layer, a first net block, a transition layer, a second net block and a classification layer which are sequentially connected;
the first net block and the second net block have the same structure, the net block comprises a first Back Bone, a second Back Bone, a third Back Bone and a fourth Back Bone which are sequentially connected, the output of the first Back Bone is the input of the second Back Bone, the third Back Bone and the fourth Back Bone at the same time, and the output of the second Back Bone is the input of the third Back Bone and the fourth Back Bone at the same time;
the Back Bone in the net block comprises a Channel Split layer, a first convolution layer, a second convolution layer, a third convolution layer, a first depth separation convolution layer, a second depth separation convolution layer, a Concat and a Channel Shuffle, wherein the Channel Split layer, the first convolution layer, the first depth separation convolution layer, the second convolution layer, the Concat and the Channel Shuffle are sequentially connected, the second depth separation convolution layer, the third convolution layer, the first depth separation convolution layer and the second convolution layer are connected in parallel, the second depth separation convolution layer is connected with the Channel Split layer, and the third convolution layer is connected with the Concat;
the transition layer comprises a convolution layer and a characteristic extraction layer which are connected in sequence;
the classification layer comprises an average pooling layer and a Softmax classifier which are sequentially connected;
the Softmax classifier is used for calculating the classification probability of each sample, and specifically comprises the following steps:
in the formula, siRepresents the output value, s, of the ith neuron of the Softmax classifieriF is an image feature vector of a certain training sample, η is a corresponding weight, and n is the number of categories to be classified;
and then according to the probability yiCalculating to obtain a training error:
when i is k, θikI denotes the ith class, and y is the original input belongs to class ik *=1;
A training set is selected to train the Densenet improved model and a test set is selected to verify the trained model, the number ratio of the training set to the test set being 4:1. The training process is as follows: the training error is back-propagated layer by layer from the last layer of the convolutional neural network, cross entropy is used as the loss function, an Adam adaptive gradient optimizer is used for optimization, the initial learning rate is set to 0.01 and training runs for 100 epochs; experiments are carried out with the TensorFlow 2.0 framework and the trained model is saved;
(2) inputting the target picture into the Densenet improved model trained in the step (1), and outputting the target category by the trained Densenet improved model.
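The training of step (1) can be illustrated framework-free with a linear Softmax classifier trained on the same cross-entropy objective. This is a simplified sketch: plain gradient descent stands in for the Adam optimizer used in the patent, and raw feature vectors stand in for the Densenet feature extractor.

```python
import numpy as np

def train_softmax(X, labels, n_classes, lr=0.01, epochs=100, seed=0):
    """Minimise the cross-entropy -log y_true for a linear Softmax classifier."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        s = X @ W                                   # scores s_i
        e = np.exp(s - s.max(axis=1, keepdims=True))
        y = e / e.sum(axis=1, keepdims=True)        # softmax probabilities
        grad = X.T @ (y - onehot) / len(X)          # d(cross-entropy)/dW
        W -= lr * grad                              # gradient step
    return W
```

On a small linearly separable toy set the learned weights classify every sample correctly, which is all the sketch is meant to show.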
In order to verify the performance of the Densenet improved model of the present invention, this embodiment performs an experiment on a private data set; the experimental results are shown in table 1:
TABLE 1
| Model | SqueezeNet | MobileNet | The invention |
|---|---|---|---|
| Accuracy | 94.92% | 95.93% | 98.71% |
These results show that the accuracy of the Densenet improved model is higher than that of the traditional lightweight networks.
Verification shows that the target identification method based on a lightweight neural network according to the invention is the first to obtain a Densenet improved model by combining the convolutional neural network with the Channel Shuffle of ShuffleNet, the residual connections of ResNet, the channel splitting of Inception, the multi-path dense connections of Densenet and the DWConv of MobileNet.
Example 2
A device kit comprises a central host and five embedded devices, wherein the embedded devices are in communication connection with the central host;
the embedded device comprises one or more processors, one or more memories, one or more programs and an input unit, wherein the input unit is used for inputting a target picture, the one or more programs are stored in the memories, and when the one or more programs are executed by the processors, the embedded device is enabled to execute the same target identification method based on the light weight neural network as the embodiment 1;
the central host runs the program shown in fig. 2:
(1) the central host (running based on the RabbitMQ message queue, labelled mq in fig. 2) acquires the heartbeat packets sent by each embedded device at fixed intervals, confirms which embedded devices are online, and eliminates those that are not;
(2) the central host sends a target picture to each online embedded device, and each online embedded device runs the target identification method based on the lightweight neural network to obtain a target category;
(3) each online embedded device sends the obtained target type to the central host, and the central host obtains a final result according to the target type obtained by each online embedded device (specifically, the final result is confirmed according to the weight coefficient of each online embedded device).
For verification, an experiment is performed on a private data set; the experimental results are shown in table 2:
TABLE 2
| Model | Single device | 5 embedded devices |
|---|---|---|
| Accuracy | 98.71% | 99.21% |
These results show that the system integrating 5 devices improves availability and slightly improves target identification accuracy.
Through verification, the equipment kit provided by the invention provides a high-availability architecture based on a lightweight neural network, has good flexibility, can be applied to embedded equipment for step-by-step processing, and does not influence the overall structure when one of the embedded equipment is down; when a new model is added, the original model does not need to be stopped, the hot deployment is really realized, and the method has a great application prospect.
Although specific embodiments of the present invention have been described above, it will be appreciated by those skilled in the art that these embodiments are merely illustrative and various changes or modifications may be made without departing from the principles and spirit of the invention.
Claims (10)
1. A target identification method based on a lightweight neural network is characterized in that a target picture is input into a trained Densenet improved model, and the trained Densenet improved model outputs a target category;
the improved Densenet model is improved compared with the Densenet model in that a Back Bone in a net block comprises a Channel Split, a first convolution layer, a second convolution layer, a third convolution layer, a first depth separation convolution layer, a second depth separation convolution layer, a Concat and a Channel Shuffle, wherein the Channel Split, the first convolution layer, the first depth separation convolution layer, the second convolution layer, the Concat and the Channel Shuffle are connected in sequence, the second depth separation convolution layer and the third convolution layer are connected with the first convolution layer, the first depth separation convolution layer and the second convolution layer in parallel, the second depth separation convolution layer is connected with the Channel Split, and the third convolution layer is connected with the Concat.
2. The target identification method based on the lightweight neural network according to claim 1, characterized in that the Densenet improved model comprises a main convolution layer, a feature extraction layer, a first net block, a transition layer, a second net block and a classification layer which are connected in sequence; the first net block and the second net block have the same structure.
3. The method for identifying the target based on the light weight neural network as claimed in claim 2, wherein the net block comprises a first Back Bone, a second Back Bone, a third Back Bone and a fourth Back Bone which are connected in sequence, the output of the first Back Bone is the input of the second Back Bone, the third Back Bone and the fourth Back Bone at the same time, and the output of the second Back Bone is the input of the third Back Bone and the fourth Back Bone at the same time.
4. The method for target recognition based on the light weight neural network, according to claim 2, wherein the transition layer comprises a convolutional layer and a feature extraction layer;
the classification layer comprises an average pooling layer and a Softmax classifier.
5. The method for identifying the target based on the lightweight neural network as claimed in claim 1, wherein the number ratio of the training set to the testing set of the Densenet improved model is 4: 1.
6. The device for applying the target identification method based on the light weight neural network as claimed in any one of claims 1 to 5, comprising one or more processors, one or more memories, one or more programs and an input unit;
the input unit is used for inputting a target picture;
the one or more programs are stored in the memory and, when executed by the processor, cause the apparatus to perform a method of lightweight neural network based object recognition as claimed in any one of claims 1 to 5.
7. A kit of devices comprising the device of claim 6, comprising one central host and more than one device of claim 6;
the equipment is in communication connection with the central host.
8. The kit of claim 7, wherein the device is an embedded device.
9. The kit according to claim 7, characterized in that said central host runs the following programs:
(1) the central host acquires heartbeat packets sent by each device at regular time, confirms the online condition of the device and eliminates the devices which are not online;
(2) the central host sends a target picture to each online device, and each online device operates the target identification method based on the light weight neural network according to any one of claims 1-5 to obtain a target category;
(3) each online device sends the obtained target category to the central host, and the central host obtains a final result according to the target categories obtained by the online devices.
10. The equipment set according to claim 9, wherein the specific scheme for obtaining the final result according to the target category of each online equipment is to confirm the final result according to an election rule or confirm the final result according to a weight coefficient of each online equipment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110359604.5A CN113505628A (en) | 2021-04-02 | 2021-04-02 | Target identification method based on lightweight neural network and application thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110359604.5A CN113505628A (en) | 2021-04-02 | 2021-04-02 | Target identification method based on lightweight neural network and application thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113505628A true CN113505628A (en) | 2021-10-15 |
Family
ID=78009201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110359604.5A Pending CN113505628A (en) | 2021-04-02 | 2021-04-02 | Target identification method based on lightweight neural network and application thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113505628A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108009525A (en) * | 2017-12-25 | 2018-05-08 | Beihang University | UAV ground-specific-target recognition method based on convolutional neural networks
CN109063666A (en) * | 2018-08-14 | 2018-12-21 | University of Electronic Science and Technology of China | Lightweight face recognition method and system based on depthwise separable convolution
CN109685017A (en) * | 2018-12-26 | 2019-04-26 | Sun Yat-sen University | Ultra-high-speed real-time target detection system and detection method based on a lightweight neural network
CN110532878A (en) * | 2019-07-26 | 2019-12-03 | Sun Yat-sen University | Driving behavior recognition method based on a lightweight convolutional neural network
CN111126333A (en) * | 2019-12-30 | 2020-05-08 | Qiqihar University | Garbage classification method based on a lightweight convolutional neural network
CN111307480A (en) * | 2020-02-20 | 2020-06-19 | Jilin Jianzhu University | Embedded heat pipe-based heat transfer management system, method and storage medium
Non-Patent Citations (1)
Title |
---|
MA, N. et al.: "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design", Proceedings of the European Conference on Computer Vision (ECCV), 31 December 2018 (2018-12-31), pages 116 - 131 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11093805B2 (en) | Image recognition method and apparatus, image verification method and apparatus, learning method and apparatus to recognize image, and learning method and apparatus to verify image | |
CN108182394B (en) | Convolutional neural network training method, face recognition method and face recognition device | |
WO2019228317A1 (en) | Face recognition method and device, and computer readable medium | |
WO2022042123A1 (en) | Image recognition model generation method and apparatus, computer device and storage medium | |
US20160171346A1 (en) | Image recognition method and apparatus, image verification method and apparatus, learning method and apparatus to recognize image, and learning method and apparatus to verify image | |
US20200134377A1 (en) | Logo detection | |
US11587356B2 (en) | Method and device for age estimation | |
CN112434655B (en) | Gait recognition method based on adaptive confidence map convolution network | |
CN109145717B (en) | Face recognition method for online learning | |
US20170083754A1 (en) | Methods and Systems for Verifying Face Images Based on Canonical Images | |
WO2022077646A1 (en) | Method and apparatus for training student model for image processing | |
CN109740679B (en) | Target identification method based on convolutional neural network and naive Bayes | |
Ravi et al. | Explicitly imposing constraints in deep networks via conditional gradients gives improved generalization and faster convergence | |
CN109754359B (en) | Pooling processing method and system applied to convolutional neural network | |
CN111914908B (en) | Image recognition model training method, image recognition method and related equipment | |
CN112861659A (en) | Image model training method and device, electronic equipment and storage medium | |
CN112183668A (en) | Method and device for training service models in parallel | |
WO2023035904A9 (en) | Video timing motion nomination generation method and system | |
CN112232395B (en) | Semi-supervised image classification method for generating countermeasure network based on joint training | |
CN116089883B (en) | Training method for improving classification degree of new and old categories in existing category increment learning | |
JP2023526899A (en) | Methods, devices, media and program products for generating image inpainting models | |
CN113673482A (en) | Cell antinuclear antibody fluorescence recognition method and system based on dynamic label distribution | |
CN113592008B (en) | System, method, device and storage medium for classifying small sample images | |
CN111079930A (en) | Method and device for determining quality parameters of data set and electronic equipment | |
CN117575044A (en) | Data forgetting learning method, device, data processing system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||