CN115937791B - Poultry counting method and device suitable for multiple cultivation modes - Google Patents

Info

Publication number
CN115937791B
CN115937791B
Authority
CN
China
Prior art keywords
poultry
culture
image
cage
neural network
Prior art date
Legal status
Active
Application number
CN202310034920.4A
Other languages
Chinese (zh)
Other versions
CN115937791A (en)
Inventor
肖德琴
招胜秋
刘又夫
潘永琪
刘克坚
闫志广
殷建军
Current Assignee
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date
Filing date
Publication date
Application filed by South China Agricultural University
Priority to CN202310034920.4A
Publication of CN115937791A
Application granted
Publication of CN115937791B

Landscapes

  • Housing For Livestock And Birds (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a poultry counting method, and a counting device, suitable for multiple cultivation modes. The counting device comprises two mutually parallel horizontal transverse rails mounted on a fixed support, with a horizontal longitudinal rail perpendicular to the transverse rails arranged between them. First driving units at the two ends of the horizontal longitudinal rail are slidably mounted on the two transverse rails and run synchronously; a second driving unit is slidably mounted on the longitudinal rail and carries a slide-rail robot and a sensor unit. The slide-rail robot comprises a lifting unit, a camera unit and a computer unit: the lifting unit is mounted at the bottom of the second driving unit, and the camera unit at the bottom of the lifting unit. By analysis and calculation with pre-trained neural network models, the method achieves effective counting of poultry in the two main cultivation modes, cage rearing and fence rearing, eliminates the time- and labour-intensive process of manual counting, and improves production efficiency.

Description

Poultry counting method and device suitable for multiple cultivation modes
Technical Field
The invention relates to the technical field of poultry-count statistics in cultivation, and in particular to a poultry counting method and a poultry counting device suitable for multiple cultivation modes.
Background
China's intensive poultry (chicken, duck, goose and the like) farming industry started late and suffers from low intelligence and lagging breeding technology. To know the stock quantity in real time during cultivation, managers must carry out multiple rounds of counting. Unlike counting larger livestock (pigs, cattle and the like), poultry are kept in large numbers in stacked cage houses and at high density in fenced houses, and they are prone to strong stress reactions, so counting is more challenging: the work is tedious and complicated, a miscount forces the whole process to be repeated, and it consumes a great deal of human resources.
Existing poultry counting methods fall into two categories. The first is manual counting within a delimited counting area; because the birds move continuously, they are hard to track and locate by eye, so counting accuracy is low. The second is aisle counting, in which a channel and a counting device are built and the birds are driven through the channel to trigger the count; for example, CN214546359U designs a T-shaped livestock and poultry counting device with a limit gate and an anti-jump structure to count birds passing through the channel. This method is labour-intensive, since the birds must be driven to the channel and transferred, and it is unsuitable for counting in large poultry houses.
In summary, the technology currently used in large-scale poultry farming cannot count flocks effectively and accurately. With the vigorous development of smart agriculture, intelligent, Internet-of-things cultivation equipment is the future trend, so a device and method are needed that can count poultry under different cultivation modes (cage-reared or fence-reared), save human resources and improve production efficiency.
Disclosure of Invention
Aiming at the above defects of the prior art, the invention provides a poultry counting method and counting device applicable to multiple cultivation modes, solving the problems of difficult poultry counting and high labour consumption in the prior art.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
the utility model provides a poultry counting assembly suitable for multiple mode of breeding, it includes the fixed bolster, be provided with two horizontal cross rails that are parallel to each other on the fixed bolster, be provided with between two horizontal cross rails with cross rail vertically horizontal rail, the both ends of horizontal rail all are provided with first drive unit, two first drive units slide respectively and set up on two horizontal cross rails and synchronous operation, slide on the horizontal rail and be provided with the second drive unit, be provided with slide rail robot and sensor unit on the second drive unit, the tip of cross rail is provided with the contact type fills electric pile that is used for slide rail robot to charge, slide rail robot includes the elevating unit, camera unit and computer unit, the elevating unit sets up in the bottom of second drive unit, camera unit sets up in the bottom of elevating unit, first drive unit, the second drive unit, elevating unit, camera unit and sensor unit all with computer unit electric connection.
In another aspect, there is provided a method of counting poultry suitable for use in a plurality of modes of farming, comprising the steps of:
S1: acquiring cage-rearing image data of each cultivation cage, fence-rearing image data of each cultivation fence, and the corresponding position data;
S2: optimizing the cage-rearing image data and the fence-rearing image data;
S3: inputting the cage-rearing image data into a pre-trained first neural network model, and performing image segmentation, recognition confirmation and summation to obtain the number of poultry in the corresponding cultivation cage;
S4: inputting the fence-rearing image data into a pre-trained second neural network model, generating a density image, and integrally summing the density image to obtain the number of poultry in the corresponding cultivation fence;
S5: comparing the number of poultry counted in each cultivation cage and cultivation fence with the number recorded for that cage or fence in the database, and calculating the average inspection-number error of each cultivation cage and cultivation fence;
S6: comparing the average inspection-number error with a set error value; if it does not exceed the set error value, sending inspection information that the poultry number in the cultivation cage and/or fence is safe, otherwise sending early-warning information that the poultry number in the cultivation cage and/or fence does not match;
S7: uploading the inspection information and early-warning information to the Internet of things platform through the communication module.
The beneficial effects of the invention are as follows:
1. The poultry counting device effectively improves farm operating efficiency and solves the time- and labour-intensive problem of counting poultry. It is highly adaptable, suiting the two main poultry-house cultivation modes of cultivation cages and cultivation fences. The device can stop and acquire data at any position in the three-dimensional space above the cages and/or fences; the computer unit processes the quantity information and transmits it to the Internet of things platform in real time, making it convenient for staff to monitor and manage flock numbers. Because the device never touches the birds, their stress response to foreign objects is reduced.
2. The first neural network model counts caged poultry accurately. Unlike a common object-detection counter, it semantically segments the collected cage-rearing image data and uses the depth information of the image to better recognize the birds' inherent physical features, reducing recognition errors caused by occlusion and improving cage-count accuracy.
3. The second neural network model counts fenced poultry accurately. Unlike object detection, which is limited when targets are dense, it adapts well to high-density counting: it recognizes and distinguishes different rearing densities, generates density maps, applies CNNs of different scales to images of different densities, and finally obtains the bird count by integration, better solving missed counts caused by birds overlapping and occluding one another.
Drawings
Fig. 1 is a schematic structural diagram of the poultry counting device according to the scheme.
Fig. 2 is a schematic view of the poultry counting device when the poultry house is a cultivation cage.
Fig. 3 is a schematic view of the structure of the poultry counting device when the poultry house is a cultivation fence.
Fig. 4 is a logic structure diagram of the first neural network model.
Fig. 5 is a logic structure diagram of a second neural network model.
Wherein: 1, fixed support; 2, horizontal transverse rail; 3, horizontal longitudinal rail; 4, first driving unit; 5, second driving unit; 6, sensor unit; 7, contact charging pile; 8, lifting unit; 9, camera unit; 10, computer unit; 11, cultivation cage; 12, cultivation fence; 13, lifting motor; 14, rotary pan-tilt head; 15, RGB camera; 16, depth camera; 17, LED fill light; 18, light intensity sensor.
Detailed Description
The following description of embodiments of the invention is provided to help those skilled in the art understand it, but the invention is not limited to the scope of these embodiments: to those skilled in the art, all variations that make use of the inventive concept fall within the protection of the invention as defined by the appended claims.
As shown in fig. 1, the poultry counting device of this scheme comprises a fixed support 1 carrying two mutually parallel horizontal transverse rails 2, with a horizontal longitudinal rail 3 perpendicular to the transverse rails arranged between them. First driving units 4 at both ends of the horizontal longitudinal rail 3 are slidably mounted on the two transverse rails 2 and run synchronously; a second driving unit 5 is slidably mounted on the longitudinal rail 3 and carries a slide-rail robot and a sensor unit 6. A contact charging pile 7 for charging the slide-rail robot is provided at the end of a transverse rail. The slide-rail robot comprises a lifting unit 8, a camera unit 9 and a computer unit 10: the lifting unit 8 is mounted at the bottom of the second driving unit 5 and the camera unit 9 at the bottom of the lifting unit 8. The first driving units 4, second driving unit 5, lifting unit 8, camera unit 9 and sensor unit 6 are all electrically connected to the computer unit 10.
The poultry counting device is installed above the poultry house, fixed either by ceiling suspension or by column support. Through the first driving units 4, the second driving unit 5 and the lifting unit 8, the camera unit 9 can move anywhere in the three-dimensional space above the poultry house to count all the birds. The length of the I-beam track of the two horizontal transverse rails 2 is set according to the total length of the poultry house, and that of the horizontal longitudinal rail 3 according to its total width.
As shown in figs. 2 and 3, the scheme suits the two main cultivation modes: the poultry house as cultivation cages 11 and as cultivation fences 12. Fig. 2 shows an embodiment with cultivation cages 11 stacked in a staircase arrangement, each layer holding any number of cages; fig. 3 shows an embodiment with cultivation fences 12, which may be laid out in multiple rows and columns and adapt to various net-rearing and floor-rearing modes. The scheme also suits a mixed mode combining cultivation cages 11 and cultivation fences 12.
Specifically, the computer unit 10 contains the necessary hardware: a CPU, memory, storage, graphics card and communication module. The storage holds a navigation program and a slide-rail robot control program, which convert position coordinates into control signals that drive each driving unit, as well as the first and second neural network models; the graphics card provides enough computing power to support both models, and the communication module communicates with the Internet of things platform.
The sensor unit 6 integrates or combines temperature and humidity sensors, oxygen sensors, carbon dioxide sensors and the like. The lifting unit 8 uses an existing scissor lift structure driven by a lifting motor 13, with a rotary pan-tilt head 14 between the lifting unit 8 and the camera unit 9 to let the camera unit rotate. The camera unit 9 comprises an RGB camera 15, a depth camera 16, an LED fill light 17 and a light intensity sensor 18; it is fixed to the rotary pan-tilt head 14 through a concave-convex structure and contains a micro motor so that its shooting angle can be adjusted.
Assuming the horizontal transverse rails 2 have total length m, the horizontal longitudinal rail 3 total length n, and the slide-rail robot ground clearance h, the robot can move freely within the m × n area spanned by the rails and can inspect, stop and acquire data at any point within that area and within height h.
The storage in the computer unit 10 records the robot's stay position for every cultivation cage 11 or cultivation fence 12 in the format ((C/P), (X, Y, Z), (C, L, W, H), (F, L, W, H)), where (C/P) indicates whether a cage or a fence is currently being inspected; (X, Y, Z) are the robot's position coordinates; (C, L, W, H) means the robot is at cage C, whose length is L and width W, with ground clearance H; and (F, L, W, H) means the robot is at fence F, whose length is L and width W, with ground clearance H. A sketch of this record format follows.
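As an illustration only, the stay-position record can be pictured as the following minimal Python sketch; the field names are illustrative assumptions, not the patent's actual storage schema:

```python
# A minimal sketch of one stay-position record, ((C/P), (X, Y, Z), (C, L, W, H), ...).
# Field names are assumptions for illustration; only the semantics follow the text.
stay_position = {
    "unit_type": "C",             # "C" = cultivation cage, "F" = cultivation fence
    "coords":    (120, 340, 80),  # (X, Y, Z) robot position, 1 unit = 1 cm
    "unit_id":   3,               # number of the cage/fence being inspected
    "length":    200,             # unit length L in cm
    "width":     60,              # unit width W in cm
    "clearance": 150,             # robot ground clearance H in cm
}
```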
The slide-rail robot takes, as the positive X direction, the direction along the horizontal transverse rail 2 from the end with the contact charging pile 7 toward the other end; as the positive Y direction, the direction along the horizontal longitudinal rail 3 from the transverse rail holding the charging pile toward the other transverse rail; and as the positive Z direction, the extension direction of the lifting unit 8. Each increment of 1 in an X, Y or Z coordinate moves the robot 1 cm along that axis, and the X, Y, Z coordinates are all non-negative. The robot's contact position with the charging pile is set as the coordinate origin, recorded as (0, 0, 0).
When the slide-rail robot inspects for the first time, or starts a new round of counting and monitoring, it must acquire and store the relevant parameters: the number, stay position, size and stocking quantity of every cultivation cage 11 and of every cultivation fence 12. These build the database, and the robot then inspects all scheduled poultry houses according to the inspection timetable.
On the other hand, the counting method of the poultry counting device comprises the following steps:
S1: the first driving units 4, the second driving unit 5 and the lifting unit 8 drive the camera unit 9 on an inspection tour through the three-dimensional space above the cultivation cages 11 and cultivation fences 12, collecting the cage-rearing image data of each cultivation cage 11, the fence-rearing image data of each cultivation fence 12, and the corresponding position data.
The cage-rearing image data comprise cage-rearing RGB images and corresponding Depth images; preferably, they are acquired directly in front of and directly above the cultivation cage 11, at a 45-degree angle to the ground. The fence-rearing image data comprise fence-rearing RGB images.
the method for acquiring the fence image data of each fence 12 by the image pickup unit 9 includes the steps of:
c1: the computer unit 10 compares the length and width data of the cultivation fence 12 with the set length threshold, if the length and/or width of the cultivation fence 12 is greater than the set length threshold, executing the step C2, otherwise executing the step C4;
c2: the computer unit 10 equally divides the cultivation fence 12 into a plurality of subareas along the length and/or width direction of the cultivation fence 12 until the length and the width of the subareas are not more than a set length threshold value;
and C3: the slide rail robot runs to the intersection point of the diagonal line of each sub-area, and the camera unit 9 collects the column culture image data of each sub-area and jointly forms the column culture image data of the culture column 12;
and C4: the slide rail robot runs to the intersection point of the diagonal lines of the cultivation columns 12, and the camera unit 9 collects the column cultivation image data.
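As promised above, a minimal Python sketch of this subdivision logic, assuming the fence dimensions and the length threshold are given in centimetres (the function and parameter names are illustrative, not from the patent):

```python
import math

def subregion_centers(length, width, max_side):
    """Split a fence into equal sub-areas whose sides do not exceed max_side
    (steps C1-C2) and return each sub-area's diagonal intersection, i.e. its
    centre, as the robot's stopping point (steps C3-C4). Units are cm."""
    nx = max(1, math.ceil(length / max_side))  # equal splits along the length
    ny = max(1, math.ceil(width / max_side))   # equal splits along the width
    sx, sy = length / nx, width / ny           # resulting sub-area dimensions
    return [((i + 0.5) * sx, (j + 0.5) * sy)
            for i in range(nx) for j in range(ny)]

# A 7 m x 2.5 m fence with a 3 m threshold is split into 3 x 1 sub-areas:
print(subregion_centers(700, 250, 300))
```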
S2: the computer unit 10 optimizes the cage-rearing and fence-rearing image data through the following steps (an OpenCV sketch follows these steps):
M1: the computer unit 10 detects image blur in the cage-rearing and fence-rearing RGB images with Laplacian edge detection; if the detected response variance is below a set threshold, the corresponding cage-rearing and/or fence-rearing RGB image is discarded and the camera unit 9 re-acquires it until the detected response variance exceeds the threshold;
M2: the computer unit 10 trims the edges of the cage-rearing RGB images, Depth images and fence-rearing RGB images to remove redundant edge pixel information.
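The OpenCV sketch below illustrates step M1's variance-of-Laplacian blur test and step M2's edge trim; the threshold and margin values are assumptions, since the patent leaves both unspecified:

```python
import cv2

def is_sharp(img_bgr, threshold=100.0):
    """Step M1: a blurred image has few edges, so the variance of the
    Laplacian response is low. The threshold value is an assumed example."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    variance = cv2.Laplacian(gray, cv2.CV_64F).var()
    return variance >= threshold

def trim_edges(img, margin=8):
    """Step M2: remove redundant edge pixels; the margin is an assumed value."""
    return img[margin:-margin, margin:-margin]
```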
S3: the cage-rearing image data are input into the pre-trained first neural network model, which performs image segmentation, recognition confirmation and summation to obtain the number of poultry in the corresponding cultivation cage 11; the same method yields the count for every cultivation cage 11. The details are as follows:
as shown in fig. 4, the first neural network model includes an RGB extraction layer, a Depth extraction layer, a fusion layer, and an output layer.
The RGB extraction layer and the Depth extraction layer extract features from the cage-rearing RGB image and the Depth image respectively, as follows (a PyTorch sketch of one extraction branch follows):
The RGB extraction layer receives the acquired cage-rearing RGB image and passes it through a 7×7 common convolution layer and a 3×3 max-pooling layer to expand the channel count. It then connects, in sequence, a 1×1, a 3×3 and a 1×1 common convolution with channels 64→256 to obtain effective feature A of the cage-rearing RGB image; another 1×1, 3×3, 1×1 stack with channels 128→512 to obtain effective feature B; another with channels 256→1024 to obtain effective feature C; and finally another with channels 128→512 to obtain effective feature D. Effective features A, B, C and D are successively deeper condensations of the RGB image's features.
The Depth extraction layer receives the acquired Depth image and likewise passes it through a 7×7 common convolution and 3×3 max pooling for channel expansion, followed by the same four 1×1, 3×3, 1×1 stacks (channels 64→256, 128→512, 256→1024 and 128→512), yielding effective features a, b, c and d of the Depth image, again with successively increasing depth.
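A minimal PyTorch sketch of one extraction branch under the stated layer order; strides, padding and activation placement are assumptions needed to make the shapes work, not details given in the patent:

```python
import torch.nn as nn

def conv_stack(in_ch, mid_ch, out_ch):
    """One 1x1 -> 3x3 -> 1x1 common-convolution stage, e.g. channels 64->256."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, 1), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, 1), nn.ReLU(inplace=True),
    )

class ExtractionBranch(nn.Module):
    """RGB or Depth branch: 7x7 conv + 3x3 max-pool stem, then stages A-D.
    For the Depth branch, pass in_ch=1."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.stage_a = conv_stack(64, 64, 256)      # effective feature A / a
        self.stage_b = conv_stack(256, 128, 512)    # effective feature B / b
        self.stage_c = conv_stack(512, 256, 1024)   # effective feature C / c
        self.stage_d = conv_stack(1024, 128, 512)   # effective feature D / d

    def forward(self, x):
        x = self.stem(x)
        a = self.stage_a(x)
        b = self.stage_b(a)
        c = self.stage_c(b)
        d = self.stage_d(c)
        return a, b, c, d  # successively deeper feature condensations
```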
The fusion layer performs multi-modal fusion of the features extracted at each level by the RGB and Depth extraction layers and outputs a segmented image, as follows:
The fusion layer pairs the effective features of the cage-rearing RGB image with those of the Depth image to form dual-modal feature data: effective features A and a are input together to part A of the DFF module; B and b to part B; C and c to part C; and D and d to part D.
Each of parts A, B, C and D of the DFF module has a double-residual cascade structure. First, the RGB and Depth features are each passed through a common 1×1 convolution to reduce dimensionality and avoid parameter explosion. Each path then passes through two cascaded ReLU activations and a 3×3 common convolution, with an addition operation in between. Each path then undergoes a further 3×3 common convolution, and the two paths are added and passed through a final 3×3 common convolution before output. RGB features discriminate better in segmentation than depth features, so the subsequent convolutions learn the complementary (residual) depth features to improve the RGB features' discrimination of confusable patterns.
The DFF module fuses the RGB and depth features with a fusion expression (the original formula is preserved only as an image) combining pixel-wise addition ⊕ and pixel-wise multiplication ⊗, where Fuse denotes the fusion operation and f_RGB and f_Dep are the RGB and depth features obtained by the two residual paths in the DFF module.
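Since the exact fusion formula survives only as an image, the following one-line PyTorch sketch shows one plausible reading that uses exactly the operations the text defines; it is an assumption, not the patent's confirmed expression:

```python
import torch

def dff_fuse(f_rgb: torch.Tensor, f_dep: torch.Tensor) -> torch.Tensor:
    """One plausible reading of Fuse(f_RGB, f_Dep): add the RGB features to
    their pixel-wise product with the depth features, so depth acts as a
    residual correction. An assumption, not the patent's confirmed formula."""
    return f_rgb + f_rgb * f_dep  # + = pixel addition, * = pixel multiplication
```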
The refined features output by DFF module parts A, B, C and D are input to PURE modules A, B, C and D respectively, and the output of each of PURE modules A, B and C also feeds the next PURE module. The PURE module has a double-residual structure: the refined RGB and Depth features each pass through a 1×1 common convolution to match channel counts and then through their own residual module, after which the two paths are fused into a higher-resolution feature map, finishing with two chained 5×5 max-pooling operations.
The output layer connects PURE module D to a 1×1 common convolution and outputs the segmented image of the caged poultry; it identifies and confirms target birds from the actual features of the segments, and sums the confirmed birds to obtain the number of poultry in the corresponding cultivation cage 11 (a counting sketch follows).
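The patent does not spell out how confirmed segments are summed; the sketch below shows one conventional way to count instances in a binary segmentation mask via connected components, with the minimum-area filter as an assumed detail:

```python
import numpy as np
from scipy import ndimage

def count_from_mask(seg_mask: np.ndarray, min_area: int = 50) -> int:
    """Count poultry instances in a binary segmentation mask by labelling
    connected components; min_area filters out speckle and is an assumed
    value, as the patent does not state how segments are confirmed as birds."""
    binary = seg_mask > 0
    labels, n = ndimage.label(binary)
    if n == 0:
        return 0
    areas = ndimage.sum(binary, labels, index=range(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))
```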
The pre-training method of the first neural network model comprises the following steps:
A1: collecting several cage-rearing RGB images and corresponding Depth images, then calibrating and edge-cropping them; the acquisition position is directly in front of and directly above the cultivation cage 11, at a 45-degree angle to the ground; using the depth information, the birds' physical features in the cultivation cage 11 are evaluated for segmentation so that single individuals can be determined;
A2: marking target edge points on the poultry in the cage-rearing RGB images with the Labelme annotation tool, producing a label file recording the point marks and a mask image; the points are placed on the visible parts of each bird's body and head;
A3: each cage-rearing RGB image, its corresponding mask image and its Depth image together form the cage-rearing data set, which is split into training, test and verification sets at the set ratio of 7:2:1;
A4: inputting the RGB images and corresponding mask images of the training set into the RGB extraction layer of the first neural network model, inputting the Depth images of the training set into the Depth extraction layer, and training the first neural network model with a transfer-learning training method;
A5: the first neural network model evaluates the accuracy of its training result on the verification set and determines whether its loss value has converged; if so, the pre-training model of the first neural network model is obtained; otherwise training continues after hyper-parameter optimization, and the pre-training model is obtained once the loss value of the first neural network model converges.
S4: the fence-rearing image data are input into the pre-trained second neural network model, which generates a density image and integrates over it to obtain the number of poultry in the corresponding cultivation fence 12; the same method yields the count for every cultivation fence 12. The details are as follows:
as shown in fig. 5, the second neural network model includes an input layer, an evaluation layer, a density layer, and an output layer.
The input layer receives the fence-rearing RGB image and segments it into several segmented images.
The evaluation layer performs density evaluation and classification of the segmented images and consists of six parts, PartI to PartVI. PartII to PartV are each a stack of residual modules (a sketch of one residual module follows). PartI is a 7×7 convolution module that raises the input image's channel count to 64. PartII is 2 identical first residual modules in series; each main branch connects a 1×1, a 3×3 and a 1×1 common convolution in sequence and is finally added to the first residual branch, with channels 64→256. PartIII is 3 identical second residual modules in series, structured likewise, with channels 128→512; PartIV is 5 identical third residual modules in series with channels 256→1024; PartV is 3 identical fourth residual modules in series with channels 128→512. PartVI consists of a global average pooling module, a fully connected module and a normalized exponential (softmax) function, and outputs the classification result to the density layer.
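As referenced above, a PyTorch sketch of one such residual module; the 1×1 shortcut projection and padding are assumptions needed to make the addition shape-compatible:

```python
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    """A 1x1 -> 3x3 -> 1x1 main branch added to a shortcut branch, as in
    PartII-PartV; e.g. PartII stacks two of these with channels 64->256."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1),
        )
        # Project the shortcut when channel counts differ (an assumed detail).
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1))

    def forward(self, x):
        return torch.relu(self.main(x) + self.shortcut(x))

part2 = nn.Sequential(ResidualModule(64, 64, 256), ResidualModule(256, 64, 256))
```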
The density layer selects, according to the evaluation layer's density classification, one of several convolutional neural network models (CNNs) to generate the segmented image's density map. It consists of five parts, PartVII to PartXI. PartVII is the convolutional neural network selector: after receiving the classification from the softmax layer, it fuses the density information and selects one of the CNN columns PartVIII to PartX. PartVIII consists of, in sequence, a 5×5 common convolution, 2×2 max pooling, a 3×3 common convolution and a 1×1 common convolution; PartIX of a 7×7 common convolution, 2×2 max pooling, a 5×5, a 3×3 and a 1×1 common convolution; PartX of a 9×9 common convolution, 2×2 max pooling, a 7×7, a 5×5 and a 1×1 common convolution. The result finally enters the PartXI density-correction module, which judges whether it is correct; if so the density map is output, and if not it is returned to the selected CNN for density generation again.
The output layer splices the density maps together and sums them integrally to obtain the number of poultry in the corresponding cultivation fence 12 (see the sketch below).
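The sketch below builds the three CNN columns from their stated kernel sequences and shows the selection and integral count; channel widths are assumptions, the class-to-column mapping follows step B10 below, and the PartXI correction step is omitted:

```python
import torch
import torch.nn as nn

def density_column(kernels):
    """Build one density-layer column from its stated kernel sizes; the 2x2
    max pool follows the first convolution, and the final 1x1 convolution
    emits a one-channel density map. Channel widths are assumptions."""
    layers, ch = [], 3
    for i, k in enumerate(kernels):
        out = 1 if i == len(kernels) - 1 else 32
        layers += [nn.Conv2d(ch, out, k, padding=k // 2), nn.ReLU(inplace=True)]
        ch = out
        if i == 0:
            layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

columns = {
    "high":   density_column([5, 3, 1]),     # PartVIII
    "low":    density_column([7, 5, 3, 1]),  # PartIX
    "medium": density_column([9, 7, 5, 1]),  # PartX
}

def count_birds(img: torch.Tensor, density_class: str) -> float:
    """Select the column for the predicted density class (PartVII's role)
    and integrate the density map to obtain the estimated bird count."""
    density_map = columns[density_class](img)
    return density_map.sum().item()

print(count_birds(torch.rand(1, 3, 256, 256), "medium"))
```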
The pre-training method of the second neural network model comprises the following steps:
B1: collecting several fence-rearing RGB images, edge-cropping them, and constructing a fence-rearing data set. The data set covers low stocking density (1-5 birds/m²), medium stocking density (6-10 birds/m²) and high stocking density (11-15 birds/m²), with several top-view RGB images collected for each of the three densities; preferably the data set also includes poultry of different ages in days, covering fence-rearing RGB images across the whole 1-15 birds/m² range, which helps improve the accuracy of the second neural network model;
B2: randomly rotating and randomly occluding the fence-rearing RGB images to obtain several new fence-rearing RGB images, which are merged with the original data set to form a fence-rearing enhanced data set;
B3: splitting the fence-rearing enhanced data set into training, test and verification sets at the set ratio of 7:2:1;
B4: training the evaluation layer's classification network with a transfer-learning training method;
B5: the evaluation layer evaluates the accuracy of its training result on the verification set and determines whether the model loss value has converged; if so, go to step B6; otherwise training continues after hyper-parameter optimization, and the pre-training model of the evaluation layer is obtained once the model loss value converges;
B6: marking a target center point on each bird in the fence-rearing RGB images with Labelme; the point is placed at the midpoint of the line from the bird's head to its tail, producing a poultry point-mark image;
B7: converting the poultry point-mark image into a poultry density image through a geometry-adaptive Gaussian kernel function, whose calculation formula is:
$$F(x)=\sum_{i=1}^{N}\delta(x-x_i)\ast G_{\sigma_i}(x),\qquad \sigma_i=\beta\,\bar{d}_i$$

where x_i is the pixel position of the i-th mark point in the image, G_{σ_i} is the Gaussian kernel with variance σ_i, δ(x − x_i) is the impulse function representing a bird's position in the image, N is the total number of poultry in the image, d̄_i is the average distance from the i-th mark point to its nearest mark points, and β takes the value 0.4.
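A NumPy/SciPy sketch of this conversion, realising each mark as an impulse blurred by a Gaussian whose width scales with the average distance to its nearest marks; the number of neighbours k = 3 and the lone-bird fallback sigma are assumptions, since the patent only fixes β = 0.4:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import KDTree

def density_map(points, shape, beta=0.4, k=3):
    """Convert point marks [(x, y), ...] into a density image of the given
    (height, width): place an impulse delta(x - x_i) at each mark and blur it
    with G_sigma_i, where sigma_i = beta * mean distance to the k nearest marks."""
    dmap = np.zeros(shape, dtype=np.float64)
    if not points:
        return dmap
    tree = KDTree(points)
    for (x, y) in points:
        delta = np.zeros(shape)
        delta[min(int(y), shape[0] - 1), min(int(x), shape[1] - 1)] = 1.0
        if len(points) > 1:
            d, _ = tree.query((x, y), k=min(k + 1, len(points)))
            sigma = beta * float(np.mean(d[1:]))  # skip the zero self-distance
        else:
            sigma = 15.0  # fallback width for a single bird (assumed)
        dmap += gaussian_filter(delta, sigma)
    return dmap  # dmap.sum() approximates the number of birds
```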
B8: classifying and corresponding a plurality of poultry density images and poultry point mark images, and constructing a density data set;
B9: density-classifying the density data set into low-density data (1-5 birds/m²), medium-density data (6-10 birds/m²) and high-density data (11-15 birds/m²), and splitting the density data set into a training set and a verification set at the set ratio of 4:1;
B10: inputting the low-density training data into the PartIX convolutional neural network model of the density layer, the medium-density data into the PartX model, and the high-density data into the PartVIII model, and training the three convolutional neural network models independently;
B11: PartVIII, PartIX and PartX evaluate the accuracy of their training results on their respective verification sets, determining whether each convolutional neural network model's loss value has converged; if so, go to step B12; otherwise training continues after hyper-parameter optimization, and once every model's loss value has converged they together form the pre-training model of the density layer;
B12: the second neural network model is composed of the evaluation-layer pre-training model and the density-layer pre-training model.
S5: the poultry counts of each cultivation cage 11 and cultivation fence 12 obtained by the first and second neural network models are compared with the counts recorded in the database for the corresponding cage 11 or fence 12, and the average inspection-number error of each cultivation cage 11 and cultivation fence 12 is calculated; the calculation formula of the average inspection-number error is:
$$\bar{E}=\frac{1}{n}\sum_{i=1}^{n}\frac{|N_i-N_r|}{N_r}$$

where \bar{E} is the average inspection-number error, n is the number of inspection rounds, N_i is the stock count obtained in a given inspection round, and N_r is the recorded stock count;
s6: comparing the average error value of the inspection quantity with the set error value, if the average error value of the inspection quantity is not larger than the set error value, sending out inspection information of safe quantity of the poultry in the cultivation cage 11 and/or the cultivation fence 12, otherwise, sending out early warning information of unmatched quantity of the poultry in the cultivation cage 11 and/or the cultivation fence 12, which comprises the following steps:
when (when)
Figure 51090DEST_PATH_IMAGE010
If the average error value of the inspection number of the cultivation cage 11 is +.>
Figure 210938DEST_PATH_IMAGE010
If the number of the poultry in the breeding cage 11 is less than or equal to 10%, sending out inspection information of safety of the number of the poultry in the breeding cage 11, otherwise, sending out early warning information of mismatching of the number of the poultry in the breeding cage 11; when->
Figure 937586DEST_PATH_IMAGE010
If the average error value of the inspection number of the cultivation fence 12 is + ->
Figure 570692DEST_PATH_IMAGE010
And if the number of the poultry in the cultivation fence 12 is less than or equal to 3%, sending out inspection information of safety of the number of the poultry in the cultivation fence 12, otherwise, sending out early warning information of mismatching of the number of the poultry in the cultivation fence 12.
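A short Python sketch of the S5/S6 check under the stated thresholds; the function shape and report strings are illustrative assumptions:

```python
def inspection_report(counted, recorded, unit="cage"):
    """Average relative error over inspection rounds (step S5), checked
    against the stated limits: 10% for cages, 3% for fences (step S6).
    `counted` lists the per-round counts N_i; `recorded` is N_r."""
    err = sum(abs(n_i - recorded) / recorded for n_i in counted) / len(counted)
    limit = 0.10 if unit == "cage" else 0.03
    if err <= limit:
        return f"{unit}: poultry number safe (average error {err:.1%})"
    return f"{unit}: WARNING, poultry number mismatch (average error {err:.1%})"

print(inspection_report([98, 99, 101], 100, unit="cage"))
```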
S7: the computer unit 10 uploads the inspection information and early-warning information to the Internet of things platform through the communication module, for the staff to monitor and manage the poultry numbers.

Claims (8)

1. A method of counting poultry suitable for use in a plurality of modes of farming comprising the steps of:
S1: acquiring cage-rearing image data of each cultivation cage (11), fence-rearing image data of each cultivation fence (12) and the corresponding position data, wherein the cage-rearing image data comprise cage-rearing RGB images and corresponding Depth images, and the fence-rearing image data comprise fence-rearing RGB images;
S2: optimizing the cage-rearing image data and the fence-rearing image data by the following steps:
M1: performing image blur detection on the cage-rearing RGB images and fence-rearing RGB images using Laplacian edge detection; if the detected response variance is below a set threshold, discarding the corresponding cage-rearing and/or fence-rearing RGB image and re-acquiring until the detected response variance exceeds the threshold;
M2: edge-cropping the cage-rearing RGB images, Depth images and fence-rearing RGB images;
S3: inputting the cage-rearing image data into a pre-trained first neural network model and performing image segmentation, recognition confirmation and summation to obtain the number of poultry in the corresponding cultivation cage (11);
S4: inputting the fence-rearing image data into a pre-trained second neural network model, generating a density image, and integrally summing the density image to obtain the number of poultry in the corresponding cultivation fence (12); the second neural network model comprises an input layer, an evaluation layer, a density layer and an output layer; the input layer receives the fence-rearing RGB image and segments it into several segmented images; the evaluation layer performs density evaluation and classification of the segmented images; the density layer selects, according to the evaluation layer's classification, different convolutional neural network models to generate density maps of the segmented images; the output layer splices the density maps and integrally sums them to obtain the number of poultry in the corresponding cultivation fence (12);
S5: comparing the number of poultry in each cultivation cage (11) and cultivation fence (12) with the number recorded for the corresponding cage (11) or fence (12) in a database, and calculating the average inspection-number error of each cultivation cage (11) and cultivation fence (12);
S6: comparing the average inspection-number error with a set error value; if it does not exceed the set error value, sending inspection information that the poultry number in the cultivation cage (11) and/or cultivation fence (12) is safe, otherwise sending early-warning information that the poultry number in the cultivation cage (11) and/or cultivation fence (12) does not match;
S7: uploading the inspection information and early-warning information to the Internet of things platform through the communication module.
2. The poultry counting method of claim 1, wherein the first neural network model comprises an RGB extraction layer, a Depth extraction layer, a fusion layer and an output layer;
the RGB extraction layer and the Depth extraction layer extract features from the cage-rearing RGB image and the Depth image respectively;
the fusion layer performs multi-modal data fusion of the different-level features extracted by the RGB and Depth extraction layers and outputs a segmented image;
and the output layer identifies and confirms poultry objects from the actual features of the segmented image, and sums the identified and confirmed poultry objects to obtain the number of poultry in the corresponding cultivation cage (11).
3. The poultry counting method according to claim 2, wherein the pre-training method of the first neural network model comprises the steps of:
A1: collecting several cage-rearing RGB images and corresponding Depth images, and calibrating and edge-cropping them;
A2: point-marking the poultry in the cage-rearing RGB images and generating mask images from the point marks, the points being placed on the visible parts of each bird's body and head;
A3: forming a cage-rearing data set from each cage-rearing RGB image together with its corresponding mask image and Depth image, and splitting the data set into training, test and verification sets at a set ratio;
A4: inputting the RGB images and corresponding mask images of the training set into the RGB extraction layer of the first neural network model, inputting the Depth images of the training set into the Depth extraction layer, and training the first neural network model with a transfer-learning training method;
A5: the first neural network model evaluates the accuracy of its training result on the verification set and determines whether its loss value has converged; if so, the pre-training model of the first neural network model is obtained; otherwise training continues after hyper-parameter optimization, and the pre-training model is obtained once the loss value of the first neural network model converges.
4. The poultry counting method according to claim 1, wherein the pre-training method of the second neural network model comprises the steps of:
B1: collecting several fence-rearing RGB images, edge-cropping them, and constructing a fence-rearing data set;
B2: randomly rotating and randomly occluding the fence-rearing RGB images to obtain several new fence-rearing RGB images, which are merged with the original data set to form a fence-rearing enhanced data set;
B3: splitting the fence-rearing enhanced data set into training, test and verification sets at a set ratio;
B4: training the evaluation layer's classification network with a transfer-learning training method;
B5: the evaluation layer evaluates the accuracy of its training result on the verification set and determines whether the model loss value has converged; if so, go to step B6; otherwise training continues after hyper-parameter optimization, and the pre-training model of the evaluation layer is obtained once the model loss value converges;
B6: point-marking the poultry in the fence-rearing RGB images, the point being placed at the midpoint of the line from each bird's head to its tail, and generating a poultry point-mark image;
B7: converting the poultry point-mark image into a poultry density image through a geometry-adaptive Gaussian kernel function;
B8: pairing each poultry density image with its poultry point-mark image by class, and constructing a density data set;
B9: density-classifying the density data set, and splitting it into a training set and a verification set at a set ratio;
B10: inputting the different classes of density data in the training set into the corresponding convolutional neural network models in the density layer, and training each convolutional neural network model independently;
B11: evaluating the accuracy of each training result on the corresponding verification set, and determining whether each convolutional neural network model's loss value has converged; if so, go to step B12; otherwise training continues after hyper-parameter optimization, and once every model's loss value has converged they together form the pre-training model of the density layer;
B12: the second neural network model is composed of the evaluation-layer pre-training model and the density-layer pre-training model.
5. The poultry counting method according to claim 4, wherein the calculation formula of the geometrically adapted gaussian kernel function is:
$$F(x)=\sum_{i=1}^{N}\delta(x-x_i)\ast G_{\sigma_i}(x),\qquad \sigma_i=\beta\,\bar{d}_i$$

wherein x_i is the pixel position of a mark point in the image, G_{σ_i} is the Gaussian kernel, δ(x − x_i) is the impulse function representing a bird's position in the image, N is the total number of poultry in the image, d̄_i is the average distance from the nearest mark points to the mark point, and β takes the value 0.4.
6. The poultry counting method according to claim 1, wherein the calculation formula of the average error value of the patrol number in step S5 is:
$$\bar{E}=\frac{1}{n}\sum_{i=1}^{n}\frac{|N_i-N_r|}{N_r}$$

wherein \bar{E} is the average inspection-number error, n is the number of inspection rounds, N_i is the stock count obtained by this inspection, and N_r is the recorded stock count;
the comparison method in step S6 is specifically: if the average inspection-number error \bar{E} of a cultivation cage (11) satisfies \bar{E} ≤ 10%, sending inspection information that the poultry number in the cultivation cage (11) is safe, otherwise sending early-warning information that the poultry number in the cultivation cage (11) does not match; if the average inspection-number error of a cultivation fence (12) satisfies \bar{E} ≤ 3%, sending inspection information that the poultry number in the cultivation fence (12) is safe, otherwise sending early-warning information that the poultry number in the cultivation fence (12) does not match.
7. The poultry counting method according to claim 1, wherein the method by which the camera unit (9) acquires the fence-rearing image data of each cultivation fence (12) in step S1 comprises the following steps:
C1: comparing the length and width of the cultivation fence (12) with a set length threshold; if the length and/or width of the cultivation fence (12) exceeds the threshold, executing step C2, otherwise step C4;
C2: dividing the cultivation fence (12) equally along its length and/or width into several sub-areas until no sub-area side exceeds the set length threshold;
C3: collecting the fence-rearing image data of each sub-area at the intersection of that sub-area's diagonals, the sub-area data together forming the fence-rearing image data of the cultivation fence (12);
C4: collecting the fence-rearing image data at the intersection of the diagonals of the cultivation fence (12).
8. A poultry counting device based on the poultry counting method suitable for multiple cultivation modes of any one of claims 1-7, characterized in that it comprises a fixed support (1); two mutually parallel horizontal transverse rails (2) are arranged on the fixed support (1), and a horizontal longitudinal rail (3) perpendicular to the transverse rails is arranged between the two horizontal transverse rails (2); first driving units (4) are arranged at both ends of the horizontal longitudinal rail (3), the two first driving units (4) being slidably mounted on the two horizontal transverse rails (2) respectively and running synchronously; a second driving unit (5) is slidably mounted on the horizontal longitudinal rail (3), and a slide-rail robot and a sensor unit (6) are arranged on the second driving unit (5); a contact charging pile (7) for charging the slide-rail robot is arranged at the end of a transverse rail; the slide-rail robot comprises a lifting unit (8), a camera unit (9) and a computer unit (10), the lifting unit (8) being arranged at the bottom of the second driving unit (5) and the camera unit (9) at the bottom of the lifting unit (8); and the first driving units (4), the second driving unit (5), the lifting unit (8), the camera unit (9) and the sensor unit (6) are all electrically connected to the computer unit (10).
CN202310034920.4A 2023-01-10 2023-01-10 Poultry counting method and device suitable for multiple cultivation modes Active CN115937791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310034920.4A CN115937791B (en) 2023-01-10 2023-01-10 Poultry counting method and device suitable for multiple cultivation modes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310034920.4A CN115937791B (en) 2023-01-10 2023-01-10 Poultry counting method and device suitable for multiple cultivation modes

Publications (2)

Publication Number Publication Date
CN115937791A CN115937791A (en) 2023-04-07
CN115937791B (en) 2023-05-16

Family

ID=85818453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310034920.4A Active CN115937791B (en) 2023-01-10 2023-01-10 Poultry counting method and device suitable for multiple cultivation modes

Country Status (1)

Country Link
CN (1) CN115937791B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020258977A1 (en) * 2019-06-28 2020-12-30 北京海益同展信息科技有限公司 Object counting method and device
KR102264281B1 * 2020-12-31 2021-06-14 Korea Livestock Data Co., Ltd. (agricultural corporation) Livestock weight estimation system and method using livestock image

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102009000173A1 (en) * 2009-01-13 2010-07-15 Robert Bosch Gmbh Device for counting objects, methods and computer program
US9946952B2 (en) * 2013-06-25 2018-04-17 University Of Central Florida Research Foundation, Inc. Multi-source, multi-scale counting in dense crowd images
CN106570557A (en) * 2015-10-13 2017-04-19 富士通株式会社 Device and method for counting moving objects
JP6803749B2 (en) * 2016-12-28 2020-12-23 パナソニックi−PROセンシングソリューションズ株式会社 Number measurement area setting method, number measurement area setting program, flow line analysis system, camera device and number measurement program
JP7130368B2 (en) * 2017-01-13 2022-09-05 キヤノン株式会社 Information processing device and information processing system
CN208459295U (en) * 2018-08-03 2019-02-01 广西职业技术学院 A kind of product appearance inspection device based on machine vision
CN109241941A (en) * 2018-09-28 2019-01-18 天津大学 A method of the farm based on deep learning analysis monitors poultry quantity
CN112307828A (en) * 2019-07-31 2021-02-02 梅特勒-托利多(常州)测量技术有限公司 Count verification device, count system and method
KR20190099155A (en) * 2019-08-06 2019-08-26 엘지전자 주식회사 Method and device for people counting
CN110956094B (en) * 2019-11-09 2023-12-01 北京工业大学 RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
CN111488794A (en) * 2020-02-24 2020-08-04 华中科技大学 Adaptive receptive wildman population density estimation method based on hole convolution
CN111626985A (en) * 2020-04-20 2020-09-04 北京农业信息技术研究中心 Poultry body temperature detection method based on image fusion and poultry house inspection system
US11263454B2 (en) * 2020-05-25 2022-03-01 Jingdong Digits Technology Holding Co., Ltd. System and method for video-based pig counting in the crowd
US11602132B2 (en) * 2020-10-06 2023-03-14 Sixgill, LLC System and method of counting livestock
CN113222889B (en) * 2021-03-30 2024-03-12 大连智慧渔业科技有限公司 Industrial aquaculture counting method and device for aquaculture under high-resolution image
CN113379561A (en) * 2021-05-28 2021-09-10 广州朗国电子科技有限公司 Intelligent calculation method, equipment and medium for poultry number
CN113724250A (en) * 2021-09-26 2021-11-30 新希望六和股份有限公司 Animal target counting method based on double-optical camera
CN114332096A (en) * 2021-12-14 2022-04-12 厦门农芯数字科技有限公司 Pig farm pig example segmentation method based on deep learning
CN114494863A (en) * 2022-01-12 2022-05-13 北京小龙潜行科技有限公司 Animal cub counting method and device based on Blend Mask algorithm


Also Published As

Publication number Publication date
CN115937791A (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Liu et al. YOLOv5-Tassel: Detecting tassels in RGB UAV imagery with improved YOLOv5 based on transfer learning
CN109785337B (en) In-column mammal counting method based on example segmentation algorithm
CN103324937B (en) The method and apparatus of label target
CN110378909B (en) Single wood segmentation method for laser point cloud based on Faster R-CNN
CN110096059A (en) Automatic Pilot method, apparatus, equipment and storage medium
CN112669348B (en) Fish body posture estimation and fish body surface type data measurement method and device
CN114037552B (en) Method and system for polling physiological growth information of meat ducks
CN114407051A (en) Livestock and poultry farm inspection method and livestock and poultry farm robot
CN113674216A (en) Subway tunnel disease detection method based on deep learning
CN112861666A (en) Chicken flock counting method based on deep learning and application
CN114092378A (en) Animal health detection method, device, equipment and storage medium
CN115937791B (en) Poultry counting method and device suitable for multiple cultivation modes
CN115541030A (en) Method and device for identifying temperature distribution of blast furnace top charge level and storage medium
CN113569971B (en) Image recognition-based catch target classification detection method and system
Tonachella et al. An affordable and easy-to-use tool for automatic fish length and weight estimation in mariculture
CN117029673B (en) Fish body surface multi-size measurement method based on artificial intelligence
CN116452967B (en) Fish swimming speed identification method based on machine vision
CN117036337A (en) Beef cattle body condition scoring method based on key points
CN116109633B (en) Window detection method and device for bearing retainer
CN117372854A (en) Real-time detection method for hidden danger diseases of deep water structure of dam
CN105824308B (en) Feed robot control system fault diagnosis expert system and diagnostic method
CN114898100A (en) Point cloud data extraction method, device, system, equipment and storage medium
CN107545754A (en) A kind of acquisition methods and device of road signs information threshold value
CN113379738A (en) Method and system for detecting and positioning epidemic trees based on images
Mu et al. Small scale dog face detection using improved Faster RCNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant