AU2019100354A4 - An animal image search system based on convolutional neural network - Google Patents

An animal image search system based on convolutional neural network

Info

Publication number
AU2019100354A4
Authority
AU
Australia
Prior art keywords
animal
neural network
layer
model
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2019100354A
Inventor
Mingjie Chen
Yu Han
Xingchen Li
Chongwei Liu
Yuqing Liu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Han Yu Miss
Liu Yuqing Miss
Original Assignee
Han Yu Miss
Liu Yuqing Miss
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Han Yu Miss, Liu Yuqing Miss filed Critical Han Yu Miss
Priority to AU2019100354A priority Critical patent/AU2019100354A4/en
Application granted granted Critical
Publication of AU2019100354A4 publication Critical patent/AU2019100354A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0985Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

Abstract The invention involves an animal image search system that allows users to upload a picture of an animal and retrieve pictures of the same species. Said animal image search system is based on Convolutional Neural Network (CNN) technology. The invention mainly consists of the following steps: firstly, a Python web crawler is used to collect a large number of animal pictures from the Internet, which are divided into a training dataset and a test dataset. The training data are then fed in batches to our convolutional neural network. After training, the performance of the model is evaluated: the test dataset is put into the trained neural network, and the average accuracy over the different animals was 90.79%. Generally, this system can automatically recognize different kinds of animals without human intervention and then show pictures of the same animal species to the user on the screen. Said system can be helpful in the fields of animal classification, animal behavior monitoring and the popularization of animal knowledge. Figure 1 shows the overall pipeline: training and testing data are vectorized and preprocessed, the model is trained and evaluated, and the well-trained deep neural network produces predictions.

Description

TITLE
An animal image search system based on convolutional neural network
FIELD OF THE INVENTION
This invention is in the field of image processing and is specifically designed as an animal image search system based on convolutional neural network.
BACKGROUND OF THE INVENTION
In the field of zoology, there have been some problems that need to be solved urgently. With the development of science and technology, computer image recognition provides new ideas for solving these problems.
More and more animals are on the verge of extinction, which has a serious impact on the ecological environment and the balance of species; an animal image search system can be used to detect and protect endangered species. Global animal epidemics also continue to emerge, threatening human life and health, and animal image recognition can help with the diagnosis and prevention of epidemics. Also, with the gradual development of animal husbandry and aquaculture, an animal image search system can track and detect animals in batches, which effectively guarantees food safety and human health. Moreover, the improvement in human quality of life promotes the livestock trading industry, and an animal image search system can improve the management of small and medium-sized animals, as well as the monitoring of animal habits. To better support the development of science and technology, an image search system also serves as an important tool for further exploration in the field of animal science research.
The traditional image recognition algorithm first needs to acquire the underlying features of the image, then analyzes the acquired features through mathematical models, and finally uses an image matching algorithm to identify images[1]; for example, key points are detected and matched based on SIFT features and SURF features[2]. Since the features produced by such methods are shallow rather than deep, human intervention is always required in the feature extraction process to obtain better recognition results. As a result, traditional image algorithms have defects in extracting valuable features from original images and in producing final recognition results.
Nowadays, with the huge development of big data, deep learning shows better performance in the image recognition field. Different from traditionally hand-designed features, deep learning can automatically extract and learn features, avoid manual processing, and give the model more flexibility, since it adjusts automatically according to the data. The model can therefore implement an end-to-end process, from inputting original pixel-level data to extracting features layer by layer and finally outputting recognition results, so that humans need not extract the features by hand.
In our invention, we implemented a convolutional neural network model for animal image retrieval. At the same time, the model scales with the data, and the set of animal types can be flexibly extended within our structure.
SUMMARY OF THE INVENTION
The framework of the animal image recognition system involves: animal image dataset, convolutional neural networks, parameter optimization, and implementation of the recognition. Figure 1 shows the basic procedure of our project.
3.1 Data collection
To fulfil the function of animal picture searching, the first step is to collect a large amount of data, that is, different pictures of various animals. The search system we implemented classifies N kinds of animals.
3.2 Data processing
After that, we started the procedure of data cleaning and labeling as
shown in Figure 2. Firstly, we deleted the pictures that did not meet the requirements. Secondly, we reshaped the pictures to size M×M, which means each picture has M pixels per row and per column. Additionally, in order to balance the number of image samples across the animal species, we applied data augmentation to the species that did not meet the specified number, by rotating the images by d degrees or flipping them. After cleaning the original images, we went through each species of animal pictures and labeled them with the codes 0, ..., N−1, where each number refers to a specific kind of animal. Finally, we divided the dataset into a training set and a test set in proportion and stored the images to files.
3.3 Model
In this invention, we used a convolutional neural network to achieve animal image retrieval, as shown in Figure 3, which is composed of an input layer, convolutional layer, activation function layer, pooling layer, fully connected layer and SoftMax output layer.
3.3.1 Convolutional-Layer
The convolution layer performs automatic feature extraction. Each image can be represented as a three-dimensional matrix w × h × c, where w is the width of the image, h is the height and c is the number of color channels. Then we use the filter shown in Figure 4, i.e. the convolutional kernel in the CNN, to convolve with the input image and obtain the feature map. We usually use multiple filters to learn different features, and multiple layers of convolution to acquire deeper features.
Apart from the number of filters and the filter size, there are several other hyperparameters in a single convolutional layer: 1) Stride: the step size of the window sliding during convolution. 2) Padding: adding a boundary to the image in which all boundary elements are 0, so that the convolved feature map has the same dimensions as the input image. The formula for the size of the output convolution image matrix is:
$$\left( \left\lfloor \frac{n+2p-f}{s} \right\rfloor + 1 \right) \times \left( \left\lfloor \frac{n+2p-f}{s} \right\rfloor + 1 \right) \times K \qquad (1)$$
where n is the size of the input image matrix, p is the number of padding zeros, f is the size of the filter, s is the stride size and K is the number of filters.
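To make formula (1) concrete, the following sketch (not part of the patent; the helper name conv_output_size and the use of Python are our illustration) computes the output volume for the hyperparameter values used later in section 4.3.1.

```python
import math

def conv_output_size(n, f, p, s, k):
    """Output volume of a square convolution, following formula (1):
    spatial side = floor((n + 2p - f) / s) + 1, repeated for width and
    height, with k output channels (one per filter)."""
    side = math.floor((n + 2 * p - f) / s) + 1
    return (side, side, k)

# Values used in section 4.3.1: 32x32 input, 3x3 filter, padding 1,
# stride 1 and 32 filters.
print(conv_output_size(n=32, f=3, p=1, s=1, k=32))  # (32, 32, 32)
```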
3.3.2 Activation Function
The activation function introduces nonlinear factors into the neurons, enabling the neural network to approximate arbitrarily complex mappings from input to output, which makes the network more powerful and allows it to learn more complex features. There are several activation functions, such as the ReLU (Rectified Linear Unit) function (formula 2), the sigmoid function (formula 3) and the tanh function (formula 4).
$$\mathrm{relu}(x) = \begin{cases} x, & x \ge 0 \\ 0, & x < 0 \end{cases} \qquad (2)$$

$$\mathrm{sigmoid}(x) = \frac{1}{1+e^{-x}} \qquad (3)$$

$$\tanh(x) = \frac{e^{x}-e^{-x}}{e^{x}+e^{-x}} \qquad (4)$$
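For reference, the three activation functions in formulas (2) to (4) can be written directly in NumPy; this sketch is illustrative only and not part of the disclosed system.

```python
import numpy as np

def relu(x):
    # Formula (2): identity for non-negative inputs, zero otherwise.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Formula (3): squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Formula (4): squashes any real input into (-1, 1).
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
print(relu(x), sigmoid(x), tanh(x))
```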
3.3.3 Pooling Layer
In general, features with large dimensions are obtained after the convolutional layer. The pooling layer divides the features into several regions and takes their maximum or average value, which reduces the computational complexity of the network and produces new features with smaller dimensions.
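A minimal NumPy sketch of non-overlapping pooling may help illustrate the idea; the function name pool2d and the assumption of a square, evenly divisible feature map are ours, not part of the invention.

```python
import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    """Non-overlapping pooling of a square 2-D feature map whose side is
    divisible by `size`; returns a smaller map of region maxima or means."""
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(fm, size=2, mode="max"))  # 2x2 map of region maxima
print(pool2d(fm, size=2, mode="avg"))  # 2x2 map of region averages
```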
3.3.4 Fully Connected Layer
This layer combines all local features into global features and sends the output value to the classifier. The structure is shown in Figure 5.
3.3.5 SoftMax Output Layer
The SoftMax function is used to handle multi-class classification problems by converting the output values of the classes into relative probabilities. The function maps the inputs to real numbers between 0 and 1 and normalizes their sum to 1. The definition of the SoftMax function is shown below:
$$\mathrm{SoftMax}(z_j) = \frac{e^{z_j}}{\sum_{k=1}^{N} e^{z_k}} \qquad (5)$$
where N is the number of categories, j is the index of a category and z_j is the output of the previous layer.
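A small, numerically stable NumPy implementation of formula (5) could look as follows; the max-subtraction trick is a common convention and is not stated in the description.

```python
import numpy as np

def softmax(z):
    """Formula (5): map raw scores to probabilities that sum to 1.
    Subtracting max(z) keeps the exponentials numerically stable and
    does not change the result."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1, -1.0, 0.5])  # one score per animal class
probs = softmax(scores)
print(probs, probs.sum())  # probabilities in (0, 1), summing to 1
```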
3.4 Optimize
After building the neural network model, we optimize the model, for example by adjusting the hyperparameters and improving the running speed of the algorithm.
3.4.1 Regularization
Because the network structure may cause overfitting, that is, the model performs well on the training set but its accuracy is poor on the test set, we need regularization. L1 (formula 6) and L2 (formula 7) regularization are the most commonly used regularization methods. L1 regularization adds a regularization term to the target function to reduce the sum of the absolute values of the parameters; in L2 regularization, the purpose of the added regularization term is to reduce the sum of the squared parameters.

$$J_{L1} = \sum_{i=0}^{m} \left( y_i - h(x_i) \right)^2 + \lambda \sum \lvert w \rvert \qquad (6)$$

$$J_{L2} = \sum_{i=0}^{m} \left( y_i - h(x_i) \right)^2 + \lambda \sum w^2 \qquad (7)$$

In the formulas, i is the index of the samples, y_i is the true label of each sample, h(x_i) is the prediction value of our model, w denotes the network weights and λ is the regularization coefficient.
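As an illustration of formulas (6) and (7), the sketch below adds an L1 or L2 penalty to a squared-error data term; the coefficient value lam=1e-3 and the function name are illustrative assumptions, not values taken from the experiments.

```python
import numpy as np

def regularized_loss(y_true, y_pred, weights, lam=1e-3, kind="l2"):
    """Squared-error data term plus an L1 (formula 6) or L2 (formula 7)
    penalty on the weights, scaled by the coefficient `lam`."""
    data_term = np.sum((y_true - y_pred) ** 2)
    if kind == "l1":
        penalty = lam * sum(np.sum(np.abs(w)) for w in weights)
    else:
        penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return data_term + penalty

y_true = np.array([1.0, 0.0, 0.0])
y_pred = np.array([0.8, 0.1, 0.1])
weights = [np.array([[0.5, -0.2], [0.1, 0.3]])]
print(regularized_loss(y_true, y_pred, weights, kind="l2"))
```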
3.4.2 Dropout
Dropout is another effective way to prevent overfitting. Dropout means to temporarily discard neurons in each layer from the network according to a certain probability in the training process (as shown in Figure 6). In other words, during each training, some neurons in each layer do not work, which can simplify the complex network model and avoid overfitting.
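The dropout mechanism can be sketched in NumPy as follows; the inverted-dropout rescaling shown here is a common convention that the description does not specify, and the drop probability of 0.1 is taken from section 4.5.2.

```python
import numpy as np

def dropout(activations, drop_prob=0.1, training=True):
    """Inverted dropout: zero out each activation with probability
    `drop_prob` during training and rescale the survivors so the
    expected activation is unchanged; do nothing at test time."""
    if not training or drop_prob == 0.0:
        return activations
    keep_prob = 1.0 - drop_prob
    mask = (np.random.rand(*activations.shape) < keep_prob) / keep_prob
    return activations * mask

a = np.ones((4, 8))
print(dropout(a, drop_prob=0.1))  # roughly 10% of entries set to zero
```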
3.4.3 Gradient descent algorithm
For a neural network, we want the predicted result to be as close as possible to the real label; therefore, we use a loss function J(w,b) to reflect the gap between the predicted value and the true value. We continuously modify w and b through a gradient descent algorithm to bring J(w,b) closer to the global minimum. There are several gradient descent algorithms, such as Mini-batch Gradient Descent (formula 8), the Adam optimization algorithm (formula 9) and the Momentum algorithm (formula 10).
$$\theta_j = \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) \qquad (8)$$
where θ represents the network parameters (weight w or bias b), J(θ) is the loss function and α represents the learning rate.

$$M = \beta_1 M + (1-\beta_1)\frac{\partial J}{\partial \theta}, \qquad V = \beta_2 V + (1-\beta_2)\left(\frac{\partial J}{\partial \theta}\right)^2, \qquad \theta = \theta - \alpha \frac{M}{\sqrt{V} + \varepsilon} \qquad (9)$$
where β1 is the exponential decay rate of the first-order moment estimate, β2 is the exponential decay rate of the second-order moment estimate, ε is a very small number that prevents division by zero in the implementation, and M and V are intermediate variables initialized to 0.

$$V_{d\theta} = \beta V_{d\theta} + (1-\beta)\frac{\partial J}{\partial \theta}, \qquad \theta = \theta - \alpha V_{d\theta} \qquad (10)$$
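The three update rules of formulas (8) to (10) can be sketched as plain NumPy functions; the default hyperparameter values below (the β values, and the bias-correction step that standard Adam adds) are illustrative assumptions rather than the patent's settings, apart from the base learning rate of 0.001 that also appears in Table 1.

```python
import numpy as np

def sgd_step(theta, grad, lr=0.001):
    # Formula (8): plain (mini-batch) gradient descent.
    return theta - lr * grad

def momentum_step(theta, grad, velocity, lr=0.001, beta=0.9):
    # Formula (10): exponentially weighted average of past gradients.
    velocity = beta * velocity + (1 - beta) * grad
    return theta - lr * velocity, velocity

def adam_step(theta, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # Formula (9): per-parameter adaptive learning rates.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias correction (standard Adam refinement)
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

theta, grad = np.array([0.5, -0.3]), np.array([0.1, -0.2])
m = v = velocity = np.zeros_like(theta)
print(sgd_step(theta, grad))
print(momentum_step(theta, grad, velocity)[0])
print(adam_step(theta, grad, m, v, t=1)[0])
```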
3.4.4 Learning rate
The learning rate is a significant hyperparameter that determines how much the weights of the network are adjusted along the gradient of the loss function. The lower the learning rate, the more slowly the loss function changes, so it is important to choose an appropriate learning rate. The following formula represents the relationship described above.
$$\theta_j = \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) \qquad (11)$$
Here α is the learning rate. If α is too small, gradient descent can be slow; if α is too large, gradient descent can overshoot the minimum and may fail to converge or even diverge.
3.5 GUI
The interface of animal image search system based on convolutional neural network is shown in Figure 7 and Figure 8.
DESCRIPTION OF DRAWINGS
Figure 1 illustrates the basic procedure of the whole project.
Figure 2 illustrates the basic procedure of data processing.
Figure 3 illustrates the general framework of the model.
Figure 4 illustrates the operation of the filter.
Figure 5 illustrates the fully connected layer.
Figure 6 illustrates the network after applying dropout.
Figure 7 illustrates the user uploading an image and searching.
Figure 8 illustrates how our system returns results based on user input.
Figure 9 illustrates the detailed procedure of data processing.
Figure 10 illustrates the detailed procedure of the model.
Figure 11 illustrates the detailed framework of the model.
Figure 12 illustrates the structure of convolutional neural network.
Figure 13 illustrates the convolutional layer 1.
Figure 14 illustrates the convolutional layer 2.
Figure 15 illustrates the convolutional layer 3.
Figure 16 illustrates the convolutional layer 4.
Figure 17 illustrates the fully connected layer and the SoftMax layer.
DESCRIPTION OF PREFERRED EMBODIMENTS
In order to achieve our goal of animal detection, we first input the preprocessed data into the convolutional neural network in batches. Then we use the convolutional layers and the pooling layers to extract features; the number of network parameters and the learning complexity are reduced thanks to the local receptive field and weight sharing characteristics of the convolutional neural network. Ultimately, the final recognition result is output after the fully connected layers and the SoftMax layer, and the model is optimized by continuously adjusting the parameters to minimize the loss function.
Table 1. Recognition results

Dropout rate | Train batch size | Test batch size | Base learning rate | Iteration steps | Test average accuracy (%)
0.99 | 64 | 500 | 0.001 | 2000 | 85.68
0.9  | 64 | 500 | 0.001 | 2000 | 87.25
0.8  | 64 | 500 | 0.001 | 2000 | 84.11
0.99 | 64 | 500 | 0.001 | 5000 | 83.60
0.99 | 64 | 300 | 0.01  | 2000 | 87.68
0.9  | 32 | 300 | 0.001 | 2000 | 88.92
0.9  | 32 | 680 | 0.001 | 5000 | 89.33
0.99 | 32 | 680 | 0.001 | 2000 | 90.79
0.99 | 32 | 680 | 0.003 | 2000 | 87.29
4.1 Data Collection
The first step in our overall procedure is to collect a large number of different pictures of diverse animals. We used a Python web crawler to collect data from the Internet and downloaded data from a local database, in addition to traditional methods such as taking photos. In our search system, we classify 5 kinds of animals: tiger, cattle, dog, sheep and elephant.
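A minimal sketch of the image-downloading step of such a crawler is shown below; the example URL, the per-species folder layout and the use of the requests library are hypothetical, since the actual crawler implementation is not disclosed in the description.

```python
import os
import requests

def download_images(url_list, out_dir):
    """Fetch each image URL and save it locally; the URLs would come from
    the crawler's page-parsing stage, which is omitted here."""
    os.makedirs(out_dir, exist_ok=True)
    for i, url in enumerate(url_list):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            with open(os.path.join(out_dir, f"{i}.jpg"), "wb") as f:
                f.write(resp.content)
        except requests.RequestException:
            continue  # skip unreachable or broken links

# Hypothetical usage: one output folder per species.
download_images(["https://example.com/tiger_001.jpg"], out_dir="raw/tiger")
```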
4.2 Data Processing
We deleted pictures that were too obscure to identify or that had other confounding features, such as a human in the picture or a large herd of animals; such pictures are called noise.
After that, we reshaped the collected pictures to size 32 × 32, which will help with the training process in our convolutional neural network later on. Since our dataset was not sufficiently large, we applied data augmentation to avoid overfitting: we chose some of the pictures and rotated them by 30 degrees, clockwise or anticlockwise, to create new data samples, and we also mirror-reflected some of the pictures and added them to the new dataset.
To feed the collected data to our CNN model, we labeled the pictures with 0, 1, 2, 3, 4 to represent the different species. Next, we shuffled all the pictures and divided the dataset at a ratio of 4:1. We feed the 4/5 part to the model to train the parameters, and the remaining 1/5 part is used as the test set. Finally, to feed the collected data to our CNN model, we transformed the data format from '.jpg' to '.pkl'. The detailed procedure of data processing is shown in Figure 9.
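The preprocessing pipeline described above could be sketched as follows; the directory layout cleaned/<species>/*.jpg and the use of Pillow are assumptions for illustration, and the 30-degree rotation and mirroring could be added with img.rotate(30) and ImageOps.mirror(img).

```python
import glob
import pickle
import random
import numpy as np
from PIL import Image

LABELS = {"tiger": 0, "cattle": 1, "dog": 2, "sheep": 3, "elephant": 4}

def build_dataset(root="cleaned", size=32, split=0.8):
    """Resize every image to 32x32, attach its numeric label, shuffle,
    and split 4:1 into training and test sets."""
    samples = []
    for species, label in LABELS.items():
        for path in glob.glob(f"{root}/{species}/*.jpg"):
            img = Image.open(path).convert("RGB").resize((size, size))
            samples.append((np.asarray(img, dtype=np.uint8), label))
    random.shuffle(samples)
    cut = int(len(samples) * split)
    return samples[:cut], samples[cut:]

train_set, test_set = build_dataset()
with open("train.pkl", "wb") as f:
    pickle.dump(train_set, f)
with open("test.pkl", "wb") as f:
    pickle.dump(test_set, f)
```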
4.3 Model
After pre-processing the data, we started to use this dataset to train our CNN model. We first load our dataset into matrix form and reshape the sample matrix to 4 dimensions, of size [sample numbers, heights, widths, color channels].
To make the results for different kinds of animals easier to read, we applied one-hot encoding to the labels of the data. That is, we paired an array with each image matrix, and each kind of animal is linked to one specific one-dimensional array. For example, [1, 0, 0, 0, 0] means tiger, [0, 1, 0, 0, 0] refers to cattle, and so on.
Next, we compacted the three channels (RGB) into one channel (grayscale) by calculating the average of the three channels. After receiving the formatted input data, our model initializes the parameters in each layer, such as the weights and biases, randomly. The optimization of our model is achieved by minimizing the loss, which will be discussed later. The detailed procedure of the model is shown in Figure 10.
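The one-hot encoding and grayscale conversion described above amount to a few lines of NumPy; this sketch is illustrative only and the function names are ours.

```python
import numpy as np

def one_hot(labels, num_classes=5):
    """Map integer labels to one-hot rows, e.g. 0 -> [1, 0, 0, 0, 0]."""
    encoded = np.zeros((len(labels), num_classes), dtype=np.float32)
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

def to_grayscale(images):
    """Collapse the RGB channels to one channel by averaging them.
    `images` has shape [num_samples, height, width, 3]."""
    return images.mean(axis=-1, keepdims=True)

labels = np.array([0, 2, 4])           # tiger, dog, elephant
images = np.random.rand(3, 32, 32, 3)  # placeholder batch
print(one_hot(labels))
print(to_grayscale(images).shape)      # (3, 32, 32, 1)
```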
In practice, we used 4 convolutional layers, 2 max pooling layers, 2 fully connected layers and a SoftMax output layer. Figure 11 and Figure 12 show the structure of our convolutional neural network. The activation function we chose is ReLU, and we added a max pooling layer after the second and fourth convolutional layers to reduce the matrix dimensions. The extracted features are then assembled in the last two fully connected layers. Concrete explanations are given in the following parts.
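For illustration only, the described stack (four 3×3 same-padding convolutions of 32 filters with ReLU, max pooling after the second and fourth, fully connected layers of 2048 and 128 nodes, dropout and a five-way SoftMax) could be assembled as in the sketch below. The choice of TensorFlow/Keras, the L2 coefficient, the dropout placement and the cross-entropy loss are assumptions, since the description does not name a framework or specify these details.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_model(num_classes=5, weight_decay=1e-4, drop_prob=0.1):
    model = models.Sequential([
        layers.Input(shape=(32, 32, 1)),                        # grayscale input
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),                                 # -> 16x16x32
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),                                 # -> 8x8x32
        layers.Flatten(),                                       # -> 2048
        layers.Dense(2048, activation="relu",
                     kernel_regularizer=regularizers.l2(weight_decay)),
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(weight_decay)),
        layers.Dropout(drop_prob),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

build_model().summary()
```

With this layout the flattened feature map has 8 × 8 × 32 = 2048 elements, matching the size of the first fully connected layer described below.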
4.3.1 Convolutional Layer 1 (as shown in Figure 13)
After data processing, the three-dimensional matrix of each input image can be represented as [32×32×1]; we then convolve the input images with a [3×3×1] filter. Since the image shrinks after the convolution operation and information at the edges would be lost if the image shrank every time features were extracted, we choose to pad the edges with pixels of value 0. This padding method, called same padding, ensures that the output size of the layer is equal to the input size. The convolution kernel slides over the input matrix, sums the products within each filter-sized window and writes the result to the corresponding position of the new matrix. In this layer, we set the variables in formula 1 to n=32, p=1, f=3, s=1, K=32. Therefore, the convolution kernel generates (32 + 2×1 − 3)/1 + 1 = 32 pixels per dimension after sliding, and the size of the output convolution image matrix is [32×32×32].
As for the nonlinear activation function, because the sigmoid and tanh functions converge slowly and can easily cause vanishing gradients, we choose ReLU (Rectified Linear Unit) in both the convolutional layers and the fully connected layers. The ReLU function alleviates the vanishing gradient problem, decreases the training time and greatly speeds up the convergence of the model.
4.3.2 Convolutional Layer 2 (as shown in Figure 14)
The input data of the second layer is the output of the first layer, which is of size [32×32×32]. The next several steps are the same as in layer 1.
After getting the [32×32×32] data, we added a max pooling layer, which takes the maximum value of the characteristic points in each neighborhood and thereby reduces the deviation of the estimated mean caused by parameter errors in the convolution layer. The pooling layer reduces the dimension of the extracted feature information: on the one hand, it makes the feature map smaller, simplifies the computational complexity of the network and avoids overfitting to a certain extent; on the other hand, it compresses the features and extracts the main ones. The operation can be written as $a^{l}_{x,y} = \max_{(m,n)} a^{l-1}_{m,n}$, where (m, n) ranges over the region of layer l−1 covered by the pooling kernel.
The pooling layer operates independently on every depth slice of the input and resizes it spatially. We used a max pooling layer with filters of size [2×2] applied with a stride of 2. Every max operation selects the maximum of the numbers in the [2×2] region and writes it to the new matrix; the depth dimension remains unchanged.
In summary, in this layer the input volume of size [32×32×32] was pooled with p=0, f=2, s=2 into an output volume of size [16×16×32].
4.3.3 Convolutional Layer 3 (as shown in Figure 15)
The input data of the third layer is the output of the preceding pooling layer, which is of size [16×16×32]. In this layer, the operations are the same as in convolutional layer 1, and the output size is [16×16×32].
4.3.4 Convolutional Layer 4 (as shown in Figure 16)
Convolutional layer 4 operates in the same way as layer 2, and the output size is [8×8×32].
4.3.5 Fully Connected Layer (as shown in Figure 17)
If operations such as the convolutional layer, the pooling layer and the activation function layer map the original data to a hidden feature space, then the fully connected layer plays the role of mapping the learned distributed feature representation to the sample label space. The fully connected layer integrates the class-discriminative local information from the convolutional and pooling layers.
After the convolutional layers, we applied 2 fully connected layers and flattened the image matrix from [#batch×8×8×32] to [#batch×2048]. The first fully connected layer has 2048 nodes, each of which is fully connected to all of the flattened input data; the second fully connected layer has 128 nodes, producing an output of size [#batch×128].
4.3.6 SoftMax Layer (as shown in Figure 17)
The SoftMax layer is applied in the last part of our CNN model. After receiving the values from the fully connected layer, the SoftMax classifier outputs the probability of each category of animal, valued between 0 and 1, and then decides which label (0, 1, 2, 3 or 4) to output according to these probabilities.
After that, our results will be output from the whole convolutional neural network.
4.4 Activation Function Pick
In our convolutional neural network, we compared the sigmoid, tanh and ReLU functions as the activation function. We found that when the input of the ReLU function is greater than 0, its derivative is constant, while the derivatives of sigmoid and tanh are not: they resemble the bell shape of a Gaussian curve, and as the input moves towards either end, the derivative becomes smaller. A small derivative causes the back-propagated error to slow convergence when training the neural network, while the ReLU function avoids this. So finally, we chose ReLU as our activation function.
4.5 Optimize Method Pick
4.5.1 Regularization
Compared with L2 regularization, L1 regularization yields sparser weight values, that is, many weight values are zero. Its advantage may be saving storage space, but in fact L1 regularization has no advantage over L2 regularization in resolving high variance; moreover, the derivative of L1 is more complicated. Therefore, we finally chose L2 regularization as our regularization method.
4.5.2 Dropout
The key idea in dropout is to randomly drop units (along with their connections) from the neural network during training. In the forward propagation, we make the activation value of a neuron stop working with a certain probability, which can make the model more generalized, because it does not rely too much on some local features. In our network, we set the probability of dropout to 0.1. This significantly reduces overfitting and gives major improvements over other regularization methods.
4.5.3 Gradient descent algorithm
When choosing the optimizer in the convolutional layers, we compared three algorithms: Mini-batch Gradient Descent, Momentum algorithm and Adam (Adaptive Moment Estimation).
In our practical application, the Adam algorithm had the best effect. Mini-batch Gradient Descent uses the same learning rate for all parameter updates; for sparse data or features we may sometimes want larger updates for infrequent features and smaller updates for frequent ones, which that algorithm cannot provide. The Adam algorithm, however, computes an adaptive learning rate for each parameter. Compared with the other optimizers, it converges faster and learns more effectively, and it corrects problems present in other optimization techniques, such as a vanishing learning rate, slow convergence, and loss fluctuations caused by high-variance parameter updates.
4.5.4 Learning rate
We improve the speed of neural network training by reducing the learning rate, which is called learning rate decay:

$$\alpha = \frac{\alpha_0}{1 + \mathrm{decay\_rate} \times \mathrm{epoch}}$$

In this formula, α0 is the initial learning rate, decay_rate is an adjustable parameter and epoch is the number of times all samples have been trained. As the number of epochs goes up, α gets smaller and smaller.
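The decay schedule above is a one-liner in Python; the sample values α0 = 0.001 and decay_rate = 0.1 in this sketch are illustrative only, not the values used in the experiments.

```python
def decayed_learning_rate(alpha0, decay_rate, epoch):
    """Learning rate decay as in the formula above:
    alpha = alpha0 / (1 + decay_rate * epoch)."""
    return alpha0 / (1.0 + decay_rate * epoch)

# Illustrative values: base rate 0.001 and decay_rate 0.1.
for epoch in range(5):
    print(epoch, decayed_learning_rate(0.001, 0.1, epoch))
```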
4.6 Test
After training for a specified number of epochs, we use the test set to measure the capability of the current model, and adjust some hyperparameters based on the test results, such as the batch size, the number of training iterations, the initial learning rate and so on.
5. Application Scenarios
This invention can be applied to detect and protect endangered animal species, to track and detect animals in support of animal husbandry and aquaculture, and to monitor animal habits so as to better popularize animal knowledge.

Claims (2)

1. An image search system based on deep learning, comprising:
inputting the animal image dataset into the model, whereupon the features of the pictures are automatically extracted;
afterwards, using a classifier to identify images containing the target animals.
2. The image search system according to claim 1, wherein the structure of the convolutional neural network consists of four convolutional layers with ReLU as the activation function, two max pooling layers and two fully connected layers with ReLU functions, all of which have a great impact on the performance and accuracy of animal image recognition.
AU2019100354A 2019-04-04 2019-04-04 An animal image search system based on convolutional neural network Ceased AU2019100354A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2019100354A AU2019100354A4 (en) 2019-04-04 2019-04-04 An animal image search system based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2019100354A AU2019100354A4 (en) 2019-04-04 2019-04-04 An animal image search system based on convolutional neural network

Publications (1)

Publication Number Publication Date
AU2019100354A4 true AU2019100354A4 (en) 2019-05-16

Family

ID=66443169

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2019100354A Ceased AU2019100354A4 (en) 2019-04-04 2019-04-04 An animal image search system based on convolutional neural network

Country Status (1)

Country Link
AU (1) AU2019100354A4 (en)


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222630A (en) * 2019-06-03 2019-09-10 中国农业大学 One boar identification system
US11514567B2 (en) * 2019-06-24 2022-11-29 Inner Mongolia University Of Technology On-line real-time diagnosis system and method for wind turbine blade (WTB) damage
CN110781729A (en) * 2019-09-16 2020-02-11 长安大学 Evaluation model and evaluation method for fiber dispersibility of carbon fiber reinforced cement-based material
CN110781729B (en) * 2019-09-16 2023-04-07 长安大学 Evaluation model and evaluation method for fiber dispersibility of carbon fiber reinforced cement-based material
CN110738661A (en) * 2019-09-23 2020-01-31 山东工商学院 oral cavity CT mandibular neural tube segmentation method based on neural network
US11809485B2 (en) 2019-12-12 2023-11-07 Suzhou University of Science and Technology Method for retrieving footprint images
WO2021115123A1 (en) * 2019-12-12 2021-06-17 苏州科技大学 Method for footprint image retrieval
CN111242895A (en) * 2019-12-31 2020-06-05 福建工程学院 Bamboo chip wormhole and mildew spot detection method based on convolution flexible neural forest
CN111242895B (en) * 2019-12-31 2023-04-18 福建工程学院 Bamboo chip wormhole and mildew detection method based on convolution flexible neural forest
WO2021169723A1 (en) * 2020-02-27 2021-09-02 Oppo广东移动通信有限公司 Image recognition method and apparatus, electronic device, and storage medium
CN112116090B (en) * 2020-09-28 2022-08-30 腾讯科技(深圳)有限公司 Neural network structure searching method and device, computer equipment and storage medium
CN112116090A (en) * 2020-09-28 2020-12-22 腾讯科技(深圳)有限公司 Neural network structure searching method and device, computer equipment and storage medium
CN112329546A (en) * 2020-10-15 2021-02-05 杭州电子科技大学 Eye height measuring method based on deep learning
CN112613536A (en) * 2020-12-08 2021-04-06 燕山大学 Near infrared spectrum diesel grade identification method based on SMOTE and deep learning
CN113297956A (en) * 2021-05-22 2021-08-24 温州大学 Gesture recognition method and system based on vision
CN113297956B (en) * 2021-05-22 2023-12-08 温州大学 Gesture recognition method and system based on vision
CN113298791A (en) * 2021-05-31 2021-08-24 中电福富信息科技有限公司 Image detection method of mixed cartoon based on deep learning
CN113222991A (en) * 2021-06-16 2021-08-06 南京农业大学 Deep learning network-based field ear counting and wheat yield prediction
CN113657238A (en) * 2021-08-11 2021-11-16 南京精益安防系统科技有限公司 Fire early warning method based on neural network, storage medium and terminal equipment
CN113657238B (en) * 2021-08-11 2024-02-02 南京精益安防系统科技有限公司 Fire early warning method based on neural network, storage medium and terminal equipment
CN114972249A (en) * 2022-05-24 2022-08-30 广州市华奕电子科技有限公司 Liver tumor segmentation method based on lightweight convolutional neural network

Similar Documents

Publication Publication Date Title
AU2019100354A4 (en) An animal image search system based on convolutional neural network
Militante et al. Plant leaf detection and disease recognition using deep learning
Wang et al. The effectiveness of data augmentation in image classification using deep learning
Haridasan et al. Deep learning system for paddy plant disease detection and classification
Pare et al. An efficient method for multilevel color image thresholding using cuckoo search algorithm based on minimum cross entropy
Khan et al. Deep learning for apple diseases: classification and identification
Belay et al. Development of a chickpea disease detection and classification model using deep learning
Afework et al. Detection of bacterial wilt on enset crop using deep learning approach
Chen-McCaig et al. Convolutional neural networks for texture recognition using transfer learning
Zhang et al. Deep learning based rapid diagnosis system for identifying tomato nutrition disorders
Nihar et al. Plant disease detection through the implementation of diversified and modified neural network algorithms
Abisha et al. Brinjal leaf diseases detection based on discrete Shearlet transform and Deep Convolutional Neural Network
Omer et al. An image dataset construction for flower recognition using convolutional neural network
Alshehhi et al. Date palm leaves discoloration detection system using deep transfer learning
Lovitt et al. A New U-Net Based Convolutional Neural Network for Estimating Caribou Lichen Ground Cover from Field-Level RGB Images
Li et al. Assessing and improving intelligent physical education approaches using modified cat swarm optimization algorithm
Gupta et al. Potato disease prediction using machine learning, image processing and IoT–a systematic literature survey
Bansod Rice crop disease identification and classifier
Lakshmi et al. Whale Optimization based Deep Residual Learning Network for Early Rice Disease Prediction in IoT
Yigbeta et al. Enset (Enset ventricosum) Plant Disease and Pests Identification Using Image Processing and Deep Convolutional Neural Network.
Al Wajieh et al. Classification of Longan Types Using The Back-Propagation Neural Network Algorithm Based on Leaf Morphology With Shape Characteristics
GEBREYES IMAGE BASED COFFEE BEAN CLASSIFICATION USING DEEP LEARNING TECHNIQUE.
Kaur A CNN-based identification of honeybees' infection using augmentation
US20240104900A1 (en) Fish school detection method and system thereof, electronic device and storage medium
Hui Research on Rose Classification Based on Neural Network Model

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry