CN112800856A - Livestock position and posture recognition method and device based on YOLOv3

Info

Publication number
CN112800856A
CN112800856A
Authority
CN
China
Prior art keywords
livestock
candidate frame
candidate
posture
network
Prior art date: 2021-01-06
Legal status
Pending
Application number
CN202110011560.7A
Other languages
Chinese (zh)
Inventor
陈明
陶朝辉
王丰
Current Assignee
Nanjing Tongshenghong Data Co., Ltd.
Original Assignee
Nanjing Tongshenghong Data Co., Ltd.
Priority date: 2021-01-06
Filing date: 2021-01-06
Publication date: 2021-05-14
Application filed by Nanjing Tongshenghong Data Co., Ltd.
Priority to CN202110011560.7A
Publication of CN112800856A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a livestock position and posture recognition method and device based on YOLOv3. The method comprises the following steps: (1) establishing a position and posture recognition network, specifically an improved YOLOv3 network that mainly improves the candidate-frame size loss and the candidate-frame center-position offset loss in the loss function of the YOLOv3 network; (2) acquiring images with known livestock positions and posture features as training samples, and obtaining the candidate-frame (anchor box) sizes from the training samples with a Gaussian mixture model (GMM) candidate-frame generation algorithm; (3) training the position and posture recognition network with the training samples and the candidate-frame sizes obtained in step (2); (4) acquiring a video of the livestock to be recognized, splitting it into image frames, and inputting the frames into the trained position and posture recognition network, which recognizes the position and posture features of all livestock in the image. The invention achieves higher recognition performance and can recognize the posture features of livestock.

Description

Livestock position and posture recognition method and device based on YOLOv3
Technical Field
The invention relates to the technical field of computer vision and artificial intelligence, and in particular to a livestock position and posture recognition method and device based on YOLOv3.
Background
Managing livestock with computer vision is becoming the core technology of artificial-intelligence animal husbandry. At present, however, computer vision is used only to locate livestock and identify their species, while the acquisition of morphological (posture) information is neglected. In fact, posture plays an extremely important role in livestock management: how long an animal spends standing, lying or feeding is highly significant for assessing its health.
Because existing computer-vision livestock farming can only locate animals in a video image and identify their species, and this information cannot reflect posture, health assessment must be handled by other means, which raises livestock management costs and works against low-cost artificial-intelligence farming. In addition, farm surveillance video is wide-angle: a target at the edge of the frame appears smaller than one at the center of the image. Under these conditions, mainstream neural networks struggle to recognize edge targets and produce unreasonably sized candidate frames; that is, general-purpose image recognition networks deliver poor recognition performance on livestock video, which makes intelligent livestock management based on computer vision harder.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides a livestock position and posture recognition method and device based on YOLOv3 that achieves higher recognition performance and can recognize the postures of livestock.
The technical scheme is as follows: the livestock position and posture recognition method based on YOLOv3 comprises the following steps:
(1) establishing a position and posture recognition network, specifically an improved YOLOv3 network in which the loss function of the YOLOv3 network is changed to:

Loss = L_bbox2 + L_bbox1 + L_class + L_score

L_bbox1 = λ_coord1 Σ_{i=1}^{M×N} Σ_{j=1}^{B} 1_ij^obj [(x − x′)² + (y − y′)²]

L_bbox2 = λ_coord2 Σ_{i=1}^{M×N} Σ_{j=1}^{B} 1_ij^obj ω [(W − W′)² + (H − H′)²]

(the source renders these two component formulas only as equation images, Figure BDA0002885270660000011 and Figure BDA0002885270660000012; the forms above are reconstructed from the symbol definitions that follow, with the symbol ω introduced for the weighting factor whose closed form, Figure BDA0002885270660000021, is not recoverable from the source.)

In the formula, L_bbox2 represents the candidate-frame size loss, L_bbox1 the candidate-frame center-position offset loss, L_class the posture-recognition loss, and L_score the recognition-confidence loss; ω is a weighting factor; x and x′ are the predicted and actual center abscissa of the candidate frame, and y and y′ the predicted and actual center ordinate; λ_coord2 represents the weight of the candidate-frame size loss; M and N denote the numbers of rows and columns of the blocks into which the picture is divided as recognition regions; B denotes the number of candidate frames generated per recognition region; 1_ij^obj indicates whether the j-th candidate frame of the i-th recognition region contains a target (1 if it contains a target, 0 otherwise); W and W′ are the predicted and actual width of the candidate frame, and H and H′ the predicted and actual height; λ_coord1 represents the weight of the candidate-frame center-position offset loss;
(2) acquiring images with known livestock positions and posture features as training samples, and obtaining the candidate-frame (anchor box) sizes from the training samples with a Gaussian mixture model (GMM) candidate-frame generation algorithm;
(3) training the position and posture recognition network with the training samples and the candidate-frame sizes obtained in step (2);
(4) acquiring a video of the livestock to be recognized, splitting it into image frames, and inputting the frames into the trained position and posture recognition network, which recognizes the position and posture features of all livestock in the image.
Further, the Gaussian mixture model (GMM) candidate-frame generation algorithm specifically comprises:
A. Initialize the parameters of 9 Gaussian distributions: randomly select 9 samples from all candidate-frame samples to form the candidate-frame sample set X = {x_i = (w_i, h_i)^T | i = 1, 2, …, 9}, where w_i and h_i denote the width and height of the i-th selected candidate-frame sample. The parameters of the i-th Gaussian are then: mean μ_i = x_i = (w_i, h_i)^T, covariance matrix Σ_i = I (reconstructed here as the 2×2 identity matrix; the source shows the initial covariance only as an equation image), and probability of each Gaussian being selected π_i = 1/9.
B. For each sample x_i in the candidate-frame sample set X, compute the probability that it is generated by each Gaussian:
N(x_i; μ_k, Σ_k) = exp(−(x_i − μ_k)^T Σ_k^{-1} (x_i − μ_k) / 2) / (2π |Σ_k|^{1/2}), k = 1, …, 9, i = 1, …, 9;
determine the likelihood function on that basis: L = Π_i Σ_k π_k N(x_i; μ_k, Σ_k).
C. Compute the probability (responsibility) that each sample x_i in the candidate-frame sample set X belongs to each Gaussian:
γ_ik = π_k N(x_i; μ_k, Σ_k) / Σ_{l=1}^{9} π_l N(x_i; μ_l, Σ_l), k = 1, …, 9, i = 1, …, 9.
D. Update the parameters of the 9 Gaussians (the source shows these formulas only as equation images; the forms below are the standard EM re-estimation equations):
n_k = Σ_i γ_ik, μ_k = (1/n_k) Σ_i γ_ik x_i, Σ_k = (1/n_k) Σ_i γ_ik (x_i − μ_k)(x_i − μ_k)^T, π_k = n_k / n (n being the number of samples), k = 1, …, 9.
E. Judge whether the parameters of step D have converged; if not, repeat B to D; once converged, take the nine means {μ_i = (μ_iw, μ_ih)^T | i = 1, …, 9} as the final 9 candidate-frame sizes.
Further, the posture characteristics set in the training sample include standing, lying down, lying on side, feeding, defecation and scratching.
Further, the positions of all the livestock in the identified image are marked by rectangular frames.
The livestock position and posture recognition device based on YOLOv3 comprises a processor and a computer program stored in a memory and executable on the processor, and the processor implements the above method when executing the computer program.
Beneficial effects: compared with the prior art, the invention has the following remarkable advantages. Aiming at the poor recognition of image-edge targets and the oversized target frames of the prior art, the invention provides a brand-new loss function and replaces the original prior-frame generation algorithm. The new loss function strengthens the network's training on edge-target recognition and greatly improves the network's recognition of edge targets in livestock surveillance images. The replacement prior-frame generation algorithm yields more accurate sizes for the generated target frames. Meanwhile, to remedy the incomplete information gathered by current computer-vision livestock management, the method acquires not only the position of each animal in the video image but also, accurately, the posture features of each animal; this posture information is of great significance for evaluating livestock health.
Drawings
FIG. 1 is a flow chart of the livestock position and posture recognition method based on YOLOv3 provided by the invention;
FIG. 2 shows the structure of the output of the target and posture recognizer network and the components of the loss function of the invention;
FIG. 3 shows the structure of the Darknet convolutional neural network used to extract image features in the invention;
FIG. 4 is a schematic diagram of the feature data processing module of the invention;
FIG. 5 compares the livestock recognition results of the original YOLOv3 network and the improved YOLOv3 network of the invention;
FIG. 6 illustrates the acquisition of livestock posture features and positions by the invention.
Detailed Description
This embodiment provides a livestock position and posture recognition method based on YOLOv3 which, as shown in FIG. 1, comprises the following steps:
(1) Establishing a position and posture recognition network, specifically an improved YOLOv3 network.
To solve the problems the YOLOv3 network exposes in livestock recognition, the invention provides an optimized loss function for training the network parameters. As shown in FIG. 2, the new loss function is composed of a candidate-frame size loss, a candidate-frame center-position offset loss, a posture-recognition loss, and a confidence loss:
Candidate-frame size loss, a loss function for evaluating the reasonableness of the size of the target candidate frame:

L_bbox2 = λ_coord2 Σ_{i=1}^{M×N} Σ_{j=1}^{B} 1_ij^obj ω [(W − W′)² + (H − H′)²]

Candidate-frame center-position offset loss, a loss function for evaluating the reasonableness of the center position of the target candidate frame:

L_bbox1 = λ_coord1 Σ_{i=1}^{M×N} Σ_{j=1}^{B} 1_ij^obj [(x − x′)² + (y − y′)²]

Posture-recognition loss, a loss function for evaluating the difference between the recognized posture and the actual posture:

L_class = Σ_{i=1}^{M×N} 1_i^obj Σ_m (p_i(m) − p_i′(m))²

Recognition-confidence loss, a loss function for evaluating the confidence of the candidate frames:

L_score = Σ_{i=1}^{M×N} Σ_{j=1}^{B} 1_ij^obj (c_i − c_i′)² + λ_noobj Σ_{i=1}^{M×N} Σ_{j=1}^{B} 1_ij^noobj (c_i − c_i′)²

Total loss: Loss = L_bbox2 + L_bbox1 + L_class + L_score

(the source renders the four component formulas only as equation images; the forms above are reconstructed from the symbol definitions that follow, with the symbol ω introduced for the weighting factor whose closed form is not recoverable from the source.)

In the formula, L_bbox2 represents the candidate-frame size loss, L_bbox1 the candidate-frame center-position offset loss, L_class the posture-recognition loss, and L_score the recognition-confidence loss; ω is a weighting factor; x and x′ are the predicted and actual center abscissa of the candidate frame, and y and y′ the predicted and actual center ordinate; λ_coord2 represents the weight of the candidate-frame size loss; M and N denote the numbers of rows and columns of the blocks into which the picture is divided as recognition regions; B denotes the number of candidate frames generated per recognition region; 1_ij^obj indicates whether the j-th candidate frame of the i-th recognition region contains a target (1 if it contains a target, 0 otherwise); W and W′ are the predicted and actual width of the candidate frame, and H and H′ the predicted and actual height; λ_coord1 represents the weight of the candidate-frame center-position offset loss; p_i(m) and p_i′(m) denote the probability in the recognition result that the target is in posture m and the actual probability that the target is in posture m, respectively; c_i and c_i′ represent the confidence judged by the neural network for the i-th recognition region and the actual confidence; λ_noobj represents the weight in the confidence loss when a recognition region contains no target; and 1_ij^noobj indicates that the j-th candidate frame of the i-th recognition region contains no target.
In this embodiment, the improved YOLOv3 network mainly improves the loss function, and comprises three modules: a Darknet53-based feature extraction module, a feature data processing module, and a target and posture recognizer module. The feature extraction module extracts image features from the input image; the feature data processing module produces rectangular candidate frames for the targets in the picture from the image features; and the target and posture recognizer module detects, on the processed feature data, the targets in the image and the posture features corresponding to each target, the posture features comprising six kinds: standing, lying down, lying on the side, feeding, defecating and scratching.
The feature extraction module adopts a Darknet53 network, whose structure is shown in FIG. 3 and mainly comprises convolutional layers and residual network components. The network extracts rectangular features of the picture at three scales, and its main workflow can be divided into three steps. First, the input picture is reshaped into an RGB picture of size 416 × 416. Second, the picture data obtained in the first step is processed by a series of convolution and pooling operations, as listed in Table 1; the output of each row of Table 1 serves as the input of the next row, and the convolution kernel type and stride of each row are the parameters of the corresponding convolutional layer in FIG. 3. Third, as shown in FIG. 3, feature data at three different scales are extracted (scale 1, scale 2 and scale 3) from the data obtained in the second step; their sizes are 52 × 52 × 256, 26 × 26 × 512 and 13 × 13 × 1024 respectively.
Table 1. Darknet53 network processing flow (the table itself survives only as an image in the source)
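As a quick sanity check of the three scales, the grid sides follow from the 416 × 416 input and strides of 8, 16 and 32 (the strides are an assumption based on the standard Darknet53 backbone; the source states only the resulting sizes):

```python
input_size = 416
for stride, channels in [(8, 256), (16, 512), (32, 1024)]:
    side = input_size // stride          # 52, 26, 13
    print(f"scale: {side} x {side} x {channels}")
```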
The feature data processing module further processes the scale-1 and scale-2 feature data obtained in the third step of the feature extraction module, so that the feature data contain more deep information for the fully connected layers and the target and posture recognizer module. Its structure is shown in FIG. 4: the three inputs are the feature data of the three different scales obtained in the feature extraction module; up-sampling converts the high-scale data to the lower scale while retaining the information of the high-scale feature data; tensor merging concatenates the scale-converted feature data; and the processed feature data are sent to three separate target and posture recognizers.
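A minimal PyTorch-style sketch of this fusion step: up-sample the coarse features and merge them with the finer scale by tensor concatenation. The channel counts follow the scales named above; the real network also applies convolutional blocks around each merge, which are omitted here as an assumption of the sketch.

```python
import torch
import torch.nn.functional as F

# Dummy feature maps at the three scales produced by the backbone.
f13 = torch.randn(1, 1024, 13, 13)   # scale 3 (coarse)
f26 = torch.randn(1, 512, 26, 26)    # scale 2
f52 = torch.randn(1, 256, 52, 52)    # scale 1 (fine)

# Up-sample 13x13 -> 26x26, then merge with the 26x26 features.
up26 = F.interpolate(f13, scale_factor=2, mode="nearest")
merged26 = torch.cat([up26, f26], dim=1)          # tensor merging

# Up-sample the merged result 26x26 -> 52x52 and merge again.
up52 = F.interpolate(merged26, scale_factor=2, mode="nearest")
merged52 = torch.cat([up52, f52], dim=1)

print(merged26.shape, merged52.shape)
```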
The target and posture recognizer recognizes the position of each livestock target and judges its posture from the feature data produced by the feature data processing module. It mainly comprises a fully connected neural network. For a recognizer whose feature data scale is S × S, the working procedure is as follows: for each of the S × S cells, generate K different candidate frames from the prior frames, each candidate frame containing center-coordinate data, width and height data, confidence data and posture-recognition data; sort all S × S × K candidate frames so obtained by confidence; delete the candidate frames whose confidence is below score, keeping the remaining ordered candidate frames; starting from the first candidate frame, check the sorted candidate frames one by one and delete every later candidate frame whose overlap with the current one exceeds thred; finally, output the information of the remaining candidate frames and the posture-recognition results. In the invention S is 13, 26 and 52 respectively, corresponding to macro-scale, medium-scale and fine-grained recognition results; score is the confidence threshold used to screen out unreasonable candidate frames, and thred is the overlap threshold used to screen out repeated candidate frames that recognize the same target more than once.
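This post-processing amounts to confidence thresholding followed by non-maximum suppression. A NumPy sketch under the same parameter names used in the text (score and thred); representing boxes as (x1, y1, x2, y2) corners is an assumption of this sketch:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def postprocess(boxes, confidences, postures, score=0.5, thred=0.45):
    """Keep boxes above the confidence threshold, then suppress later
    boxes that overlap a kept box by more than thred."""
    order = [i for i in np.argsort(-confidences) if confidences[i] >= score]
    kept = []
    while order:
        best = order.pop(0)                     # highest remaining confidence
        kept.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= thred]
    return [(boxes[i], confidences[i], postures[i]) for i in kept]
```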
(2) Acquiring images with known livestock positions and posture features as training samples, and obtaining the candidate-frame (anchor box) sizes from the training samples with the Gaussian mixture model (GMM) candidate-frame generation algorithm.
The Gaussian mixture model (GMM) candidate-frame generation algorithm specifically comprises the following steps:
A. Initialize the parameters of 9 Gaussian distributions: randomly select 9 samples from all candidate-frame samples to form the candidate-frame sample set X = {x_i = (w_i, h_i)^T | i = 1, 2, …, 9}, where w_i and h_i denote the width and height of the i-th selected candidate-frame sample. The parameters of the i-th Gaussian are then: mean μ_i = x_i = (w_i, h_i)^T, covariance matrix Σ_i = I (reconstructed here as the 2×2 identity matrix; the source shows the initial covariance only as an equation image), and probability of each Gaussian being selected π_i = 1/9.
B. For each sample x_i in the candidate-frame sample set X, compute the probability that it is generated by each Gaussian:
N(x_i; μ_k, Σ_k) = exp(−(x_i − μ_k)^T Σ_k^{-1} (x_i − μ_k) / 2) / (2π |Σ_k|^{1/2}), k = 1, …, 9, i = 1, …, 9;
determine the likelihood function on that basis: L = Π_i Σ_k π_k N(x_i; μ_k, Σ_k).
C. Compute the probability (responsibility) that each sample x_i in the candidate-frame sample set X belongs to each Gaussian:
γ_ik = π_k N(x_i; μ_k, Σ_k) / Σ_{l=1}^{9} π_l N(x_i; μ_l, Σ_l), k = 1, …, 9, i = 1, …, 9.
D. Update the parameters of the 9 Gaussians (the source shows these formulas only as equation images; the forms below are the standard EM re-estimation equations):
n_k = Σ_i γ_ik, μ_k = (1/n_k) Σ_i γ_ik x_i, Σ_k = (1/n_k) Σ_i γ_ik (x_i − μ_k)(x_i − μ_k)^T, π_k = n_k / n (n being the number of samples), k = 1, …, 9.
E. Judge whether the parameters of step D have converged; if not, repeat B to D; once converged, take the nine means {μ_i = (μ_iw, μ_ih)^T | i = 1, …, 9} as the final 9 candidate-frame sizes.
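A compact NumPy sketch of steps A to E follows. The identity-matrix initialization of the covariances and the log-likelihood convergence test are assumptions where the source shows only equation images; the EM updates are the standard re-estimation equations.

```python
import numpy as np

def gmm_anchor_boxes(samples, k=9, tol=1e-6, max_iter=1000):
    """EM for a k-component Gaussian mixture over (width, height) pairs;
    the component means are returned as the anchor-box sizes (step E).
    samples: array-like of shape (n, 2) with candidate-frame widths/heights."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    rng = np.random.default_rng(0)
    mu = samples[rng.choice(n, size=k, replace=False)]  # step A: means
    cov = np.stack([np.eye(2)] * k)                     # assumed identity init
    pi = np.full(k, 1.0 / k)                            # equal mixing weights
    prev_ll = -np.inf
    for _ in range(max_iter):
        # Step B: density of each sample under each Gaussian.
        dens = np.empty((n, k))
        for j in range(k):
            diff = samples - mu[j]
            inv = np.linalg.inv(cov[j])
            expo = -0.5 * np.sum(diff @ inv * diff, axis=1)
            dens[:, j] = np.exp(expo) / (2 * np.pi * np.sqrt(np.linalg.det(cov[j])))
        ll = np.sum(np.log(dens @ pi + 1e-300))         # log-likelihood
        # Step C: responsibilities (posterior component probabilities).
        gamma = dens * pi
        gamma /= gamma.sum(axis=1, keepdims=True)
        # Step D: re-estimate means, covariances and mixing weights.
        nk = gamma.sum(axis=0)
        mu = (gamma.T @ samples) / nk[:, None]
        for j in range(k):
            diff = samples - mu[j]
            cov[j] = (gamma[:, j, None] * diff).T @ diff / nk[j]
            cov[j] += 1e-6 * np.eye(2)                  # numerical guard
        pi = nk / n
        # Step E: stop once the log-likelihood has converged.
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return mu   # the nine means, used as the final anchor-box sizes
```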
(3) Training the position and posture recognition network with the training samples and the candidate-frame sizes obtained in step (2).
(4) Acquiring a video of the livestock to be recognized, splitting it into image frames, and inputting the frames into the trained position and posture recognition network, which recognizes the position and posture features of all livestock in each image.
The positions of all livestock in the recognized image are marked with rectangular frames, together with the corresponding posture features.
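A minimal sketch of step (4) using OpenCV, with recognize as a hypothetical stand-in for the trained position and posture recognition network (the source does not specify the inference API):

```python
import cv2

def recognize_livestock(video_path, recognize):
    """Split a video into frames, run each frame through the trained
    network, and draw the rectangular frames and posture labels.
    `recognize` is a hypothetical callable returning a list of
    ((x1, y1, x2, y2), confidence, posture) tuples per frame."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        for (x1, y1, x2, y2), confidence, posture in recognize(frame):
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                          (0, 255, 0), 2)
            cv2.putText(frame, f"{posture} {confidence:.2f}",
                        (int(x1), int(y1) - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("livestock", frame)
        if cv2.waitKey(1) == 27:        # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```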
This embodiment also provides a livestock position and posture recognition device based on YOLOv3, comprising a processor and a computer program stored in a memory and executable on the processor, and the processor implements the above method when executing the program.
FIG. 5 and FIG. 6 show the recognition results obtained on goats with the network of this embodiment; it is clear that the optimized network of the invention brings a large improvement in both the size of the target frames and the recognition of edge targets.
The invention retains the original real-time performance of livestock position and species recognition while additionally acquiring the posture features that matter greatly for livestock health assessment, namely six different posture features: standing, lying down, lying on the side, feeding, defecating and scratching. Meanwhile, the accuracy of recognizing livestock at the image edge is greatly improved, so the network handles edge targets well, and the sizes of the target frames it identifies are more reasonable. The accuracy of the optimized network in recognizing goat postures reaches 99.6%, and the recognition rate is improved by 17.2%.

Claims (5)

1. A livestock position and posture recognition method based on YOLOv3 is characterized by comprising the following steps:
(1) establishing a position and posture recognition network, specifically an improved YOLOv3 network in which the loss function of the YOLOv3 network is changed to:

Loss = L_bbox2 + L_bbox1 + L_class + L_score

L_bbox1 = λ_coord1 Σ_{i=1}^{M×N} Σ_{j=1}^{B} 1_ij^obj [(x − x′)² + (y − y′)²]

L_bbox2 = λ_coord2 Σ_{i=1}^{M×N} Σ_{j=1}^{B} 1_ij^obj ω [(W − W′)² + (H − H′)²]

(the source renders these two component formulas only as equation images, Figure FDA0002885270650000011 and Figure FDA0002885270650000012; the forms above are reconstructed from the symbol definitions below, with the symbol ω introduced for the weighting factor whose closed form, Figure FDA0002885270650000013, is not recoverable from the source;)

in the formula, L_bbox2 represents the candidate-frame size loss, L_bbox1 the candidate-frame center-position offset loss, L_class the posture-recognition loss, and L_score the recognition-confidence loss; ω is a weighting factor; x and x′ are the predicted and actual center abscissa of the candidate frame, and y and y′ the predicted and actual center ordinate; λ_coord2 represents the weight of the candidate-frame size loss; M and N denote the numbers of rows and columns of the blocks into which the picture is divided as recognition regions; B denotes the number of candidate frames generated per recognition region; 1_ij^obj indicates whether the j-th candidate frame of the i-th recognition region contains a target (1 if it contains a target, 0 otherwise); W and W′ are the predicted and actual width of the candidate frame, and H and H′ the predicted and actual height; λ_coord1 represents the weight of the candidate-frame center-position offset loss;
(2) acquiring images with known livestock positions and posture features as training samples, and obtaining the candidate-frame (anchor box) sizes from the training samples with a Gaussian mixture model (GMM) candidate-frame generation algorithm;
(3) training the position and posture recognition network with the training samples and the candidate-frame sizes obtained in step (2);
(4) acquiring a video of the livestock to be recognized, splitting it into image frames, and inputting the frames into the trained position and posture recognition network, which recognizes the position and posture features of all livestock in the image.
2. The YOLOv3-based livestock position and posture recognition method according to claim 1, wherein the Gaussian mixture model (GMM) candidate-frame generation algorithm specifically comprises:
A. initializing the parameters of 9 Gaussian distributions: randomly selecting 9 samples from all candidate-frame samples to form the candidate-frame sample set X = {x_i = (w_i, h_i)^T | i = 1, 2, …, 9}, where w_i and h_i denote the width and height of the i-th selected candidate-frame sample, the parameters of the i-th Gaussian being: mean μ_i = x_i = (w_i, h_i)^T, covariance matrix Σ_i = I (reconstructed here as the 2×2 identity matrix; the source shows the initial covariance only as an equation image), and probability of each Gaussian being selected π_i = 1/9;
B. for each sample x_i in the candidate-frame sample set X, computing the probability that it is generated by each Gaussian,
N(x_i; μ_k, Σ_k) = exp(−(x_i − μ_k)^T Σ_k^{-1} (x_i − μ_k) / 2) / (2π |Σ_k|^{1/2}), k = 1, …, 9, i = 1, …, 9,
and determining the likelihood function on that basis: L = Π_i Σ_k π_k N(x_i; μ_k, Σ_k);
C. computing the probability (responsibility) that each sample x_i in the candidate-frame sample set X belongs to each Gaussian:
γ_ik = π_k N(x_i; μ_k, Σ_k) / Σ_{l=1}^{9} π_l N(x_i; μ_l, Σ_l), k = 1, …, 9, i = 1, …, 9;
D. updating the parameters of the 9 Gaussians (the source shows these formulas only as equation images; the forms below are the standard EM re-estimation equations):
n_k = Σ_i γ_ik, μ_k = (1/n_k) Σ_i γ_ik x_i, Σ_k = (1/n_k) Σ_i γ_ik (x_i − μ_k)(x_i − μ_k)^T, π_k = n_k / n (n being the number of samples), k = 1, …, 9;
E. judging whether the parameters of step D have converged; if not, repeating B to D; once converged, taking the nine means {μ_i = (μ_iw, μ_ih)^T | i = 1, …, 9} as the final 9 candidate-frame sizes.
3. The YOLOv3-based livestock position and posture recognition method according to claim 1, wherein the posture features set in the training samples include standing, lying down, lying on the side, feeding, defecating and scratching.
4. The YOLOv3-based livestock position and posture recognition method according to claim 1, wherein the positions of all livestock in the recognized image are marked with rectangular frames.
5. A YOLOv3-based livestock position and posture recognition device, comprising a processor and a computer program stored in a memory and operable on the processor, wherein the processor implements the method of any one of claims 1-4 when executing the program.
CN202110011560.7A (priority 2021-01-06, filed 2021-01-06): Livestock position and posture recognition method and device based on YOLOv3. Status: Pending. Published as CN112800856A.

Priority Applications (1)

Application Number: CN202110011560.7A
Priority Date: 2021-01-06
Filing Date: 2021-01-06
Title: Livestock position and posture recognition method and device based on YOLOv3

Publications (1)

Publication Number: CN112800856A
Publication Date: 2021-05-14

Family ID: 75809391

Family Applications (1)

Application Number: CN202110011560.7A (Pending)
Priority Date: 2021-01-06; Filing Date: 2021-01-06
Title: Livestock position and posture recognition method and device based on YOLOv3

Country Status (1): CN (CN112800856A)

Cited By (2)

* Cited by examiner, † Cited by third party

CN113537244A * 2021-07-23 2021-10-22 深圳职业技术学院 Livestock image target detection method and device based on lightweight YOLOv4
CN113537244B * 2021-07-23 2024-03-15 深圳职业技术学院 Livestock image target detection method and device based on lightweight YOLOv4


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination