CN112668631B - Mobile terminal community pet identification method based on convolutional neural network - Google Patents
Mobile terminal community pet identification method based on convolutional neural network
- Publication number
- CN112668631B CN112668631B CN202011557450.2A CN202011557450A CN112668631B CN 112668631 B CN112668631 B CN 112668631B CN 202011557450 A CN202011557450 A CN 202011557450A CN 112668631 B CN112668631 B CN 112668631B
- Authority
- CN
- China
- Prior art keywords
- neural network
- convolutional neural
- training
- mobile terminal
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention discloses a mobile-end pet identification method based on a convolutional neural network. The method comprises the following steps: constructing a lightweight convolutional neural network, training it at the server side using a pre-training network model, deploying the trained model on mobile terminal equipment through a mobile-end deep learning framework, and acquiring images and performing target identification and risk judgment on the mobile terminal equipment. The invention enables detection of community pet types on mobile terminal equipment; residents can keep away from dangerous pets based on the identification results, effectively preventing incidents of community pets injuring people. In addition, the lightweight convolutional network constructed by the method has low hardware requirements and can run in most mobile operating environments while maintaining identification accuracy.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a mobile terminal community pet recognition method based on a convolutional neural network.
Background
With the continuous improvement of living standards, more and more people keep one or more pets for companionship. Particularly in cities, the number of pets keeps growing and their varieties are increasingly diverse. This causes a series of problems. Among the many pets, large dogs and fierce dogs pose risks: the former are large and can act suddenly beyond human control, while the latter are naturally aggressive and easily injure people or other pets. In addition, some small pets also have strong defensive and territorial instincts and can be aggressive, and pets may carry viruses or bacteria, so an injury can have serious consequences.
Although security personnel can monitor pet activity in a community and roughly judge whether a pet is dangerous from its size or behavior, this approach consumes considerable manpower and cannot effectively detect pets in areas that monitoring equipment does not cover.
In order to solve the above problems, a monitoring camera or a device with a camera can collect pet images, and the type and dangerousness of a pet can be obtained by detecting and analyzing the images. The current mainstream detection methods are pet detection algorithms based on deep learning. Pet features are extracted by a convolutional neural network; as the network deepens, the extracted feature information becomes richer, the hidden-layer perceptrons combine low-level features into more abstract high-level attribute categories or features, and a classifier finally outputs the image category. With the rapid development of artificial intelligence, deep learning algorithms are moving toward high accuracy and low latency. For example, the representative YOLOv3 algorithm achieves fast end-to-end target detection, reaching 74.8% detection accuracy and 29.8 frames per second in a GPU 1080Ti hardware environment. Another representative algorithm, Faster R-CNN, has higher detection accuracy but poorer real-time performance. Although deep-learning-based pet detection has good application prospects, problems remain with current algorithms of this type: most existing algorithms pursue extreme detection accuracy and real-time performance, so the network model scale and computation are huge. Such algorithms suit GPU server hardware environments, but on mobile terminal equipment or the embedded video monitoring systems installed in a community, their real-time performance cannot meet practical application requirements.
In summary, how to reasonably construct a lightweight convolutional neural network that suits most hardware environments while maintaining the accuracy and real-time performance of a convolutional network has become a problem to be solved urgently.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a mobile terminal community pet identification method based on a convolutional neural network, which comprises the following steps:
step 1, constructing a lightweight convolutional neural network L;
step 2, training a convolutional neural network L by using a pre-training network model training method at a server side;
step 3, deploying a trained convolutional neural network model on the mobile terminal equipment by using a mobile terminal deep learning framework;
and 4, acquiring an image through the mobile terminal equipment, and identifying the target pet and judging the danger.
Preferably, in step 1, layers 2 to 10 of the lightweight convolutional neural network L are a lightweight convolution structure (bneck), which combines depthwise separable convolution and a residual network.
Preferably, the lightweight convolution structure (bneck) performs feature extraction by depthwise separable convolution, adds a lightweight attention module to adjust the weight distribution, and then adds a residual connection between the input and output layers.
Depthwise separable convolution decomposes a complete convolution operation into two steps, a depthwise convolution (Depthwise Convolution) and a point-by-point 1 × 1 convolution (Pointwise Convolution); this convolution scheme achieves nearly the same result as ordinary convolution with far fewer parameters and operations.
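As a quick illustration of the parameter and computation savings described above, the following sketch counts parameters and multiplications for a standard convolution versus a depthwise separable one. The layer sizes used are hypothetical examples, not values from the patent.

```python
# Parameter/multiply counts for a standard conv vs. a depthwise separable conv.
# Illustrative sketch only; the layer dimensions below are hypothetical.

def standard_conv_cost(k, c_in, c_out, h, w):
    """k x k standard convolution: every filter spans all input channels."""
    params = k * k * c_in * c_out
    mults = params * h * w          # one filter application per output position
    return params, mults

def separable_conv_cost(k, c_in, c_out, h, w):
    """Depthwise k x k conv (one filter per channel) + pointwise 1x1 conv."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    params = depthwise + pointwise
    mults = params * h * w
    return params, mults

std_p, std_m = standard_conv_cost(3, 32, 64, 112, 112)
sep_p, sep_m = separable_conv_cost(3, 32, 64, 112, 112)
print(std_p, sep_p)               # 18432 vs 2336 parameters
print(round(std_m / sep_m, 1))    # roughly 7.9x fewer multiplications
```

For a 3 × 3 kernel the saving approaches 9×, which is why this decomposition is central to lightweight mobile networks.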
Preferably, the invention introduces a lightweight attention module that combines a spatial attention mechanism and a channel attention mechanism to significantly improve the accuracy of image classification and target detection. The channel attention and spatial attention modules may be combined in parallel or sequentially, but better results are usually achieved by placing channel attention first.
Preferably, the pre-trained network M used in step 2 is an open-source convolutional neural network model trained on an animal data set.
Preferably, the training process of step 2 further comprises the steps of:
(1) connecting a convolution neural network L after the convolution base of the pre-training network M;
(2) freezing the convolution basis of the convolutional neural network L;
(3) performing feature extraction on the marked pet learning sample by using the convolution base of the pre-training network M;
(4) the extracted features are used as training samples to train a fully connected classifier of the convolutional neural network L;
(5) randomly thawing one or more convolutional layers of the convolutional neural network L;
(6) and jointly training the unfrozen convolutional layer and the fully-connected classifier, and updating the parameters of the convolutional layer through back propagation.
Preferably, in step (1), the convolution base of the pre-trained network M is the series of pooling and convolution layers of that network, excluding its densely connected classifier; the convolution base outputs high-level features of the new training samples, and the classification task is handled by the added classifier.
Preferably, in step (4), the classifier is trained first, to avoid large error signals propagating through the network while the feature extraction layers are being trained.
Preferably, in step (5), a dropout-style method randomly unfreezes network layers with a certain probability, so that some nodes of the network are made trainable and their weights updated, while the rest of the process remains unchanged.
Preferably, the mobile-end deep learning framework used in step 3 is TensorFlow Mobile, and the mobile terminal device runs Android. Deploying the model to an Android device with TensorFlow Mobile involves the following three steps:
converting the trained model to the TensorFlow format;
adding TensorFlow Mobile to the Android application as a dependency;
performing inference with Java code written against the TensorFlow API in the application.
Preferably, in step 4, the output of the convolutional neural network L comprises: the position of the pet on the graph, the pet category, the prediction probability and the risk.
Specifically, the risk assessment criteria for the target pet include:
regarding pets whose prediction category is definitely forbidden in the urban management regulation as dangerous;
For pets not prohibited from being kept, danger is judged from the feature information extracted by the fully connected layer of the convolutional neural network L: among the facial features, if the degree of tooth exposure exceeds a preset safety threshold, the pet is judged dangerous; among the claw features, if the nail length exceeds a preset safety threshold, the pet is judged dangerous.
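The two-stage judgment above can be sketched as a simple rule function. This is a hedged illustration: the banned-breed list and feature names are hypothetical, and the numeric thresholds (nails longer than 1/3 of the paw, exposed teeth covering more than 1/2 of the mouth area) follow the evaluation criteria given later in the description.

```python
# Hedged sketch of the risk-judgment rules. BANNED_BREEDS and the keyword
# feature names are assumptions for illustration, not part of the patent.

BANNED_BREEDS = {"tibetan mastiff", "pit bull"}   # hypothetical prohibited list

def judge_danger(breed, nail_len=0.0, paw_len=1.0, tooth_area=0.0, mouth_area=1.0):
    """Return True when the pet is judged dangerous."""
    if breed.lower() in BANNED_BREEDS:            # explicitly prohibited category
        return True
    if nail_len > paw_len / 3:                    # claw-feature safety threshold
        return True
    if tooth_area > mouth_area / 2:               # facial-feature safety threshold
        return True
    return False

print(judge_danger("pit bull"))                           # True
print(judge_danger("poodle", nail_len=0.2, paw_len=1.0))  # False
```

In the patent's architecture, the first rule runs on-device inside the network's output, while the threshold rules run on the background server.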
In the method, for pets prohibited from being kept, a danger identifier is added to the pet sample label during network training, so that the convolutional neural network can directly judge the danger status of such pets.
The danger judgment for pets that are not prohibited is performed by a background server, which comprises:
an extraction module, which further extracts facial and claw features from the pet features extracted by the fully connected layer of the convolutional neural network;
an analysis module, which analyzes the facial and claw features extracted by the extraction module against the risk evaluation criteria;
an execution module, which takes corresponding measures for the dangerous pets identified by the convolutional neural network and the analysis module.
Further, the management measures taken for dangerous pets include: providing the pet image and location to the community security department and requesting professional handling; recording the pet information in the background server database for filing; and prompting the user on the mobile terminal interface, displaying the dangerous pet's information and reminding the user to take precautions.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a mobile terminal pet identification method based on a convolutional neural network, which comprises the steps of constructing a lightweight convolutional neural network, introducing a depth separable convolution to reduce the size and the calculated amount of a model, adding an attention mechanism into the network to improve the detection precision of the network, training a neural network model at a server terminal, deploying the model at a mobile terminal, and realizing the pet category detection on mobile terminal equipment; in addition, the background server connected with the mobile terminal equipment can judge the dangerousness of the pet according to the characteristics extracted by the convolutional network, and takes corresponding measures while feeding back the result to the user, so that the personal safety of community residents is effectively guaranteed.
Drawings
FIG. 1 is a schematic flow chart of a method for identifying a pet at a mobile terminal based on a convolutional neural network according to the present invention;
FIG. 2 is a schematic flow chart of training a convolutional neural network by using a pre-training network model at a server side according to the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods and instrumentalities well known to those skilled in the art are not described in detail so as not to obscure the present disclosure.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Fig. 1 illustrates a convolutional neural network-based mobile-end pet identification method according to one embodiment of the present disclosure. The method can be applied to mobile terminals such as smart phones, tablet computers and the like. As shown in fig. 1, the method comprises the following steps:
step 1, constructing a lightweight convolutional neural network L;
step 2, training a convolutional neural network by using a pre-training network model training method at a server side;
step 3, deploying a trained convolutional neural network model on the mobile terminal equipment by using a mobile terminal deep learning framework;
and 4, acquiring an image through the mobile terminal equipment, and identifying the target pet and judging the danger.
Step 1, constructing a lightweight convolutional neural network L:
A Convolutional Neural Network (CNN) is a kind of feed-forward neural network comprising convolutional layers (Conv), activation layers, pooling layers (Pool), and so on. To solve the efficiency problem and make CNNs widely applicable to mobile terminals, the inventors found through intensive study that lightweight convolutional neural network design reduces network parameters without losing accuracy.
Layers 2-10 of the lightweight convolutional neural network L constructed by the invention are a lightweight convolution structure (bneck), which combines depthwise separable convolution and a residual network. The structure performs feature extraction by depthwise separable convolution, adds a lightweight attention module to adjust channel weights, and then adds a residual connection between the input and output layers.
Depthwise separable convolution decomposes a standard convolution into a depthwise convolution layer and a 1 × 1 convolution layer; each convolution layer is followed by batch normalization (BN layer) and a ReLU activation function, formulated as (1):
f(x)=max(0,x) (1)
The depthwise convolution applies a single filter to each input channel for feature extraction, and the 1 × 1 convolution combines the outputs of the different depthwise convolutions; decomposing the convolution in this way greatly reduces the computation and size of the model.
The lightweight convolution structure constructed by the invention adds an attention module. This embodiment uses the CBAM (Convolutional Block Attention Module), which combines a spatial attention mechanism and a channel attention mechanism; the two can be combined in parallel or sequentially, but placing channel attention first usually gives better results. The overall attention process is given by formulas (2) and (3):

F' = Mc(F) ⊗ F        (2)
F'' = Ms(F') ⊗ F'     (3)

where F denotes the input feature map and ⊗ denotes element-wise multiplication, in which the attention values are broadcast: channel attention values are broadcast along the spatial dimensions, and vice versa. F'' denotes the final output feature map, and Mc(·) and Ms(·) denote channel attention and spatial attention, respectively.
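The sequential channel-then-spatial attention can be sketched in NumPy as below. This is a simplified stand-in: the real CBAM computes its masks with a shared MLP and a 7 × 7 convolution, while here plain pooling plus a sigmoid illustrate only the broadcasting structure of the two multiplications.

```python
import numpy as np

# Simplified sketch of the two-stage attention: a channel mask is broadcast
# over spatial dimensions, then a spatial mask is broadcast over channels.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F):                  # F: (C, H, W)
    avg = F.mean(axis=(1, 2))              # global average pool per channel
    mx = F.max(axis=(1, 2))                # global max pool per channel
    Mc = sigmoid(avg + mx)                 # (C,) channel weights in (0, 1)
    return Mc[:, None, None]               # broadcast over spatial dims

def spatial_attention(F):                  # F: (C, H, W)
    avg = F.mean(axis=0)                   # pool across channels
    mx = F.max(axis=0)
    Ms = sigmoid(avg + mx)                 # (H, W) spatial weights in (0, 1)
    return Ms[None, :, :]                  # broadcast over channels

F = np.random.rand(8, 4, 4)
F1 = channel_attention(F) * F              # F'  = Mc(F) * F
F2 = spatial_attention(F1) * F1            # F'' = Ms(F') * F'
print(F2.shape)                            # (8, 4, 4): shape is preserved
```

Because both masks lie in (0, 1), the module only re-weights features; it never changes the feature map's shape, which is what allows the surrounding residual connection.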
The second half of the convolutional neural network constructed by the invention, together with its attention module, uses the h-swish activation function, shown in formula (4):

h-swish(x) = x · ReLU6(x + 3) / 6        (4)

The ReLU6 function in the above formula is an ordinary ReLU with the maximum output value limited to 6.
The h-swish function has better performance in a shallow network and does not increase the calculation cost of the network.
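A plain-Python version of ReLU6 and h-swish from formula (4) makes the piecewise behaviour concrete; this is a sketch of the math, not a framework implementation.

```python
# ReLU6 and h-swish exactly as in formula (4):
#   h-swish(x) = x * ReLU6(x + 3) / 6

def relu6(x):
    return min(max(0.0, x), 6.0)   # ReLU capped at 6

def h_swish(x):
    return x * relu6(x + 3.0) / 6.0

print(h_swish(-4.0))   # 0.0   (ReLU6(-1) = 0, so output is zero)
print(h_swish(3.0))    # 3.0   (ReLU6(6) = 6, so x * 6/6 = x)
print(h_swish(10.0))   # 10.0  (for large x the function acts as identity)
```

The hard approximation replaces the sigmoid of the original swish with a piecewise-linear gate, which is cheap on mobile hardware and matches the "no added computation cost" property noted above.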
The structure of the convolutional neural network L is shown in Table 1:
TABLE 1
Column 1, Input Size, gives the size of each feature layer of the convolutional neural network; column 2, Operator, gives the block structure each feature layer passes through; column 3, Filter Shape, gives the filter size used by each feature layer; column 4, CBAM, indicates whether an attention module is introduced at that layer; column 5, NL, gives the activation function category, HS for h-swish and RE for ReLU; column 6, S, gives the stride used by each convolution structure.
Step 2, training a convolutional neural network by using a pre-training network model training method at a server side; referring to fig. 2, the specific process is as follows:
(1) Connect the lightweight convolutional neural network after the convolution base of the pre-training network. In this embodiment, a MobileNetV2 network trained on an animal training set serves as the pre-training network; the structure of MobileNetV2 is similar to the lightweight convolutional neural network constructed by the invention, and its feature extraction layers also use depthwise separable convolution.
(2) Freezing the convolution basis of the convolutional neural network L;
(3) Perform feature extraction on the labeled pet learning samples using the pre-training network MobileNetV2;
(4) Use the extracted features as training samples to train the fully connected classifier of the convolutional neural network L. In this embodiment, the features extracted by the convolution base of MobileNetV2 are fed directly into the fully connected layer of the convolutional neural network L, which combines the low-level features, and a Softmax classifier classifies them.
(5) Randomly unfreeze one or more convolutional layers of the convolutional neural network. The dropout idea of randomly deactivating neurons is adopted here: in this embodiment some layers are randomly opened so that their nodes work and weights update, while other processes remain unchanged.
(6) Jointly train the unfrozen convolutional layers and the fully connected classifier, updating the convolutional layer parameters through back propagation; this step is a complete convolutional neural network training process, including forward and backward propagation.
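The freeze/unfreeze workflow of steps (1)-(6) can be simulated with simple trainable flags. This is a toy sketch only: the `Layer` objects below are stand-ins for real network layers, and the actual feature extraction and backpropagation steps are omitted.

```python
import random

# Toy simulation of the fine-tuning schedule: freeze the convolution base,
# train only the classifier, then randomly thaw some conv layers dropout-style.

class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

def freeze(layers):
    for l in layers:
        l.trainable = False

def random_unfreeze(layers, p, rng):
    """Unfreeze each frozen layer with probability p (dropout-style thawing)."""
    for l in layers:
        if not l.trainable and rng.random() < p:
            l.trainable = True

conv_base = [Layer(f"conv{i}") for i in range(10)]
classifier = [Layer("fc"), Layer("softmax")]

freeze(conv_base)                                  # step (2): freeze the base
assert not any(l.trainable for l in conv_base)
# steps (3)-(4): extract features, train the classifier (omitted here)
random_unfreeze(conv_base, p=0.3, rng=random.Random(0))   # step (5)
thawed = [l.name for l in conv_base if l.trainable]
print(thawed)                                      # the randomly thawed subset
# step (6): jointly train thawed layers + classifier via backprop (omitted)
```

Training the classifier before any thawing mirrors the rationale in step (4): a randomly initialized classifier would otherwise propagate large errors into the pre-trained convolutional weights.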
Preferably, the pre-training network model may also be one of a series of open-source pre-trained models such as VGG or ResNet, but the model's training set must be an animal data set or a data set containing animals.
Preferably, the training data sets used to train the convolutional neural network are the animal-attribute AWA2 data set and the Stanford Dogs data set: AWA2 contains images of 50 animal categories, and the Stanford Dogs data set contains images of 120 dog breeds from around the world.
Preferably, since these data sets cover limited categories and the amount of data for certain pet types is insufficient, this embodiment adopts a data enhancement algorithm to balance the data sets, specifically as follows:
(1) Unify the original image size to 224 × 224 using a linear interpolation algorithm;
(2) unfold the three RGB channels of each image into one-dimensional arrays and splice them into one array of length 224 × 224 × 3 = 150528;
(3) traverse each array x in the minority class and compute Euclidean distances to obtain its k nearest neighbours among the arrays; let the n-th neighbour be xn;
(4) let each element a of array xn have position (a1, a2) in the original image; when the position satisfies 20 < a1 < 204 and 20 < a2 < 204, set the central-region weight ωC = rand(0, 1) and the edge weight ωB = 2 × ωC, and obtain a new sample through the algorithm formula shown in formula (5):
(5) divide the resulting new array into its three channels and restore it to an image.
Here, the recombined new samples do not differ much in subject; they mainly increase the complexity of environmental factors in the image.
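The channel unfolding and restoration of steps (2) and (5) can be sketched as below. This covers only the lossless flatten/splice round trip; the SMOTE-style neighbour mixing of steps (3)-(4) and the weight formula (5) are omitted.

```python
import numpy as np

# Steps (2) and (5) of the augmentation pipeline: unfold the three RGB
# channels into 1-D arrays, splice them into one vector of length
# 224 * 224 * 3 = 150528, then split by channel and restore the image.

img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# step (2): one 1-D array per channel, concatenated channel by channel
flat = np.concatenate([img[:, :, c].ravel() for c in range(3)])
print(flat.shape)                      # (150528,)

# step (5): split the spliced array by channel and restore the image
restored = np.stack(
    [flat[c * 224 * 224:(c + 1) * 224 * 224].reshape(224, 224) for c in range(3)],
    axis=-1,
)
assert (restored == img).all()         # the round trip is lossless
```

Working on the flattened vectors is what lets step (3) use ordinary Euclidean distance between whole images when picking nearest neighbours.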
Step 3, deploying a trained convolutional neural network model on the mobile terminal equipment by using a mobile terminal deep learning framework:
the method comprises the following three steps that a mobile terminal deep learning framework is a TensorFlow mobile, mobile terminal equipment uses an android terminal, and a TensorFlow mobile deployment model to android equipment is deployed:
converting the trained model to the TensorFlow format;
adding TensorFlow Mobile to the Android application as a dependency;
performing inference with Java code written against the TensorFlow API in the application.
Optionally, the mobile-end deep learning framework may also be TensorFlow Lite, which can be accelerated with the Neural Networks API on devices running Android 8 or above; furthermore, if the convolutional neural network is deployed on iOS, Core ML may be considered.
And 4, acquiring images through the mobile terminal equipment and carrying out target pet identification and risk judgment:
the image collected by the mobile terminal device includes a camera of the mobile terminal device such as a mobile phone or a tablet computer, and the like, which is directly shot or an image is input from an internal storage (album) of the mobile terminal device.
In a possible implementation, the collected image can be resized and pixel-normalized so that the input image size matches the input size of the convolutional neural network, and noise reduction can be applied to the image to further improve the recognition accuracy of the convolutional neural network.
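The resizing and pixel normalization mentioned above can be sketched as follows. As an assumption for brevity, nearest-neighbour index mapping stands in for the linear interpolation a real pipeline would use.

```python
import numpy as np

# Preprocessing sketch: resize an arbitrary camera frame to the network's
# 224 x 224 input and normalize pixel values to [0, 1].

def preprocess(img, size=224):
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size           # source row for each output row
    cols = np.arange(size) * w // size           # source col for each output col
    resized = img[rows][:, cols]                 # nearest-neighbour resize
    return resized.astype(np.float32) / 255.0    # pixel normalization

img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape)                                   # (224, 224, 3)
```

Matching the fixed 224 × 224 × 3 input shape of Table 1 is required before the frame can be fed to the deployed model.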
The predicted pet category is output by the convolutional network L; the network's output further includes the pet's position in the image, the prediction probability, and whether the pet is dangerous.
Further, the danger judgment made by the convolutional network L itself is limited to the pet categories labeled as dangerous in the training data set, i.e., pets the government explicitly prohibits from being kept in cities; for pets that are not prohibited but carry some risk, risk evaluation is further performed by the background server.
The mobile terminal device transmits the feature information extracted by the fully connected layer of the convolutional neural network L to a background server over a wired or wireless connection, and danger is judged by comparative analysis of the features of key body parts.
In this embodiment, risk evaluation is performed on the facial and claw features of the pet, with the following specific criteria: a pet whose predicted category is explicitly prohibited by urban management regulations is regarded as dangerous; for pets of non-prohibited categories, a pet is judged dangerous if its nails are longer than 1/3 of its paw or its exposed teeth cover more than 1/2 of its mouth area.
Preferably, the background server comprises:
an extraction module, responsible for further extracting the pet's facial and claw features from the features extracted by the fully connected layer of the convolutional neural network L;
an analysis module, which analyzes the facial and claw features extracted by the extraction module against the set safety thresholds;
an execution module, which takes corresponding measures for the dangerous pets identified by the analysis module and the convolutional neural network: providing the pet's image and specific location to the community security department, entering the pet's information into the background server database for filing, and displaying the dangerous pet's information on the mobile terminal interface to prompt and warn the user.
Although the present invention has been described in connection with specific embodiments, it is not limited to them; the embodiments are exemplary only and should not be construed as limiting the invention. Those skilled in the art can make various modifications within the scope of the present invention without departing from its spirit, and such modifications fall within the protection scope of the present invention.
Claims (3)
1. The method for identifying the pet at the mobile terminal based on the convolutional neural network is characterized by comprising the following steps of:
step 1, constructing a lightweight convolutional neural network L;
step 2, training a convolutional neural network L by using a pre-training network model training method at a server side, wherein the pre-training network model is formed by cascading a pre-training network M and a lightweight convolutional neural network L, and the pre-training network M is an open-source network model trained on an animal data set or a data set containing animals; the training method comprises the following steps: (1) connecting a lightweight convolution neural network L after the convolution base of the pre-training network M; (2) freezing the convolution neural network L convolution base; (3) performing feature extraction on the marked pet learning sample by using the convolution base of the pre-training network M; (4) taking the extracted features as training samples to train a fully-connected classifier of the convolutional neural network L; (5) randomly thawing one or more convolutional layers of the convolutional neural network L; (6) training the unfrozen convolutional layer and the fully-connected classifier in a combined manner, and updating parameters of the convolutional layer through back propagation;
step 3, deploying a trained convolutional neural network model on the mobile terminal equipment through a mobile terminal deep learning framework;
and 4, acquiring images through the mobile terminal equipment and carrying out pet identification and risk judgment.
2. The method for identifying the mobile terminal pet based on the convolutional neural network as claimed in claim 1, wherein layers 2-10 of the constructed lightweight convolutional neural network are a lightweight convolution structure (bneck), which is a convolution structure combining depthwise separable convolution and a residual network.
3. The convolutional neural network-based mobile terminal pet identification method as claimed in claim 2, wherein feature extraction is performed by depth separable convolution, and a lightweight attention module is added to adjust weight distribution, and then residual connection is added between input and output layers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011557450.2A CN112668631B (en) | 2020-12-24 | 2020-12-24 | Mobile terminal community pet identification method based on convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011557450.2A CN112668631B (en) | 2020-12-24 | 2020-12-24 | Mobile terminal community pet identification method based on convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112668631A CN112668631A (en) | 2021-04-16 |
CN112668631B true CN112668631B (en) | 2022-06-24 |
Family
ID=75408768
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011557450.2A Expired - Fee Related CN112668631B (en) | 2020-12-24 | 2020-12-24 | Mobile terminal community pet identification method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112668631B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113965722A (en) * | 2021-09-10 | 2022-01-21 | Zhejiang Xigu Digital Technology Co., Ltd. | Intelligent community security monitoring system based on Internet of things |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472883A (en) * | 2018-09-27 | 2019-03-15 | 中国农业大学 | Patrol pool method and apparatus |
CN110147772A (en) * | 2019-05-23 | 2019-08-20 | 河海大学常州校区 | Underwater dam surface crack recognition method based on transfer learning |
CN110298230A (en) * | 2019-05-06 | 2019-10-01 | 深圳市华付信息技术有限公司 | Silent liveness detection method, apparatus, computer equipment and storage medium |
CN111274954A (en) * | 2020-01-20 | 2020-06-12 | 河北工业大学 | Embedded platform real-time falling detection method based on improved attitude estimation algorithm |
CN112116560A (en) * | 2020-08-20 | 2020-12-22 | 华南理工大学 | Welding image defect identification method and device, storage medium and equipment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11222196B2 (en) * | 2018-07-11 | 2022-01-11 | Samsung Electronics Co., Ltd. | Simultaneous recognition of facial attributes and identity in organizing photo albums |
CN110647840B (en) * | 2019-09-19 | 2023-07-28 | 天津天地伟业信息系统集成有限公司 | Face recognition method based on improved mobileNetV3 |
CN110929603B (en) * | 2019-11-09 | 2023-07-14 | 北京工业大学 | Weather image recognition method based on lightweight convolutional neural network |
CN112016041B (en) * | 2020-08-27 | 2023-08-04 | 重庆大学 | Time sequence real-time classification method based on gram sum angle field imaging and Shortcut-CNN |
CN112101241A (en) * | 2020-09-17 | 2020-12-18 | 西南科技大学 | Lightweight expression recognition method based on deep learning |
- 2020-12-24 CN CN202011557450.2A patent/CN112668631B/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472883A (en) * | 2018-09-27 | 2019-03-15 | 中国农业大学 | Patrol pool method and apparatus |
CN110298230A (en) * | 2019-05-06 | 2019-10-01 | 深圳市华付信息技术有限公司 | Silent liveness detection method, apparatus, computer equipment and storage medium |
CN110147772A (en) * | 2019-05-23 | 2019-08-20 | 河海大学常州校区 | Underwater dam surface crack recognition method based on transfer learning |
CN111274954A (en) * | 2020-01-20 | 2020-06-12 | 河北工业大学 | Embedded platform real-time falling detection method based on improved attitude estimation algorithm |
CN112116560A (en) * | 2020-08-20 | 2020-12-22 | 华南理工大学 | Welding image defect identification method and device, storage medium and equipment |
Non-Patent Citations (1)
Title |
---|
Research on Animal Recognition Algorithms Based on Convolutional Neural Networks; Yuan Dongzhi; China Master's Theses Full-text Database, Information Science and Technology Series (Monthly); 20190115; pp. 20-53 * |
Also Published As
Publication number | Publication date |
---|---|
CN112668631A (en) | 2021-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110378381B (en) | Object detection method, device and computer storage medium | |
CN110188615B (en) | Facial expression recognition method, device, medium and system | |
CN109002766B (en) | Expression recognition method and device | |
CN111738044B (en) | Campus violence assessment method based on deep learning behavior recognition | |
EP3765995B1 (en) | Systems and methods for inter-camera recognition of individuals and their properties | |
CN112766355A (en) | Electroencephalogram signal emotion recognition method under label noise | |
US20210375441A1 (en) | Using clinical notes for icu management | |
Khosravi et al. | Crowd emotion prediction for human-vehicle interaction through modified transfer learning and fuzzy logic ranking | |
CN112668631B (en) | Mobile terminal community pet identification method based on convolutional neural network | |
KR102425522B1 (en) | Method for Establishing Prevention Boundary of Epidemics of Livestock Based On Image Information Analysis | |
Jain et al. | An explainable machine learning model for lumpy skin disease occurrence detection | |
Hoffman et al. | A benchmark for computational analysis of animal behavior, using animal-borne tags | |
Sirisha et al. | Nam-yolov7: An improved yolov7 based on attention model for animal death detection | |
US20220121953A1 (en) | Multi-task learning via gradient split for rich human analysis | |
Masilamani et al. | Art classification with pytorch using transfer learning | |
CN115240843A (en) | Fairness prediction system based on structure causal model | |
CN114358186A (en) | Data processing method and device and computer readable storage medium | |
Satheeshkumar et al. | Medical data analysis using feature extraction and classification based on machine learning and metaheuristic optimization algorithm | |
Jurj et al. | Real-time identification of animals found in domestic areas of Europe | |
İnkaya et al. | A YOLOv3-Based Smart City Application For Children’s Playgrounds | |
Kaya et al. | Binary classification of criminal tools from the images of the case using CNN | |
CN113887505A (en) | Cattle image classification method and device, electronic equipment and storage medium | |
Chen et al. | Artificial intelligence for image processing in agriculture | |
Viraktamath et al. | Wildlife monitoring and surveillance | |
CN111339952B (en) | Image classification method and device based on artificial intelligence and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20220624 |