CN111241908B - Device and method for identifying biological characteristics of young poultry

Info

Publication number
CN111241908B
CN111241908B CN201911172403.3A CN201911172403A CN111241908B CN 111241908 B CN111241908 B CN 111241908B CN 201911172403 A CN201911172403 A CN 201911172403A CN 111241908 B CN111241908 B CN 111241908B
Authority
CN
China
Prior art keywords
neural network
submodule
layer
convolutional neural
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911172403.3A
Other languages
Chinese (zh)
Other versions
CN111241908A (en)
Inventor
杨光华
陈奕宏
邓长兴
马少丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan University
Original Assignee
Jinan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan University filed Critical Jinan University
Priority to CN201911172403.3A priority Critical patent/CN111241908B/en
Publication of CN111241908A publication Critical patent/CN111241908A/en
Application granted granted Critical
Publication of CN111241908B publication Critical patent/CN111241908B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02 Agriculture; Fishing; Forestry; Mining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Mining & Mineral Resources (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Animal Husbandry (AREA)
  • Agronomy & Crop Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a young-bird biometric recognition device based on a deep convolutional neural network, which comprises: a poultry positioning module, comprising a plurality of cascaded convolutional neural network submodules and at least one fully connected layer submodule, which takes as input an original image containing the young bird and, after operation, outputs the predicted position information of the young bird in the original image; a cropping module, which crops the original image according to the position information and outputs an appearance image of the young bird with most of the background removed; and an identification module, comprising an identification submodule formed by cascading at least a first convolutional neural network submodule, an extended convolution submodule and a second convolutional neural network submodule, together with at least one fully connected layer submodule, which takes the appearance image of the young bird as input and outputs an identification result after operation. In this way, the incremental addition of convolutional layers completes a coarse-to-fine feature extraction process, so that recognition can be completed with high accuracy.

Description

Device and method for identifying biological characteristics of young poultry
Technical Field
The invention relates to a device and a method for identifying biological characteristics of young birds, in particular to a device and a method for identifying biological characteristics of young birds based on a deep convolutional neural network.
Background
Current work on sorting young birds by sex, particularly chicks, relies mostly on workers inspecting the chick's vent. Although the accuracy meets requirements, the sorting speed is slow and the work demands highly experienced workers. A large-scale hatchery would prefer that a machine perform the sorting, realizing a fully automatic production line, thereby reducing cost and improving production efficiency.
In addition, newborn chicks have weak resistance; in a centralized hatching environment, a diseased chick can easily cause mass infection of the flock, and chicks that are underdeveloped or malformed at hatching can hardly survive. If such chicks are not discovered and handled promptly, they die during transportation or breeding, which makes it easy for bacteria to breed and threatens the health of the other chicks. Hatcheries therefore want a machine that can recognize dysplasia or lesions in newborn chicks at an early stage so that they can be handled accordingly, improving the survival rate and production efficiency.
Deep neural networks have developed rapidly in recent years, have played a vital role in advancing natural language processing and computer vision, and are now applied in many parts of daily life and industrial production, helping people solve many problems. Compared with traditional shallow machine learning architectures such as SVMs, deep neural networks perform better in complex scenarios. In the natural image classification branch of computer vision, deep neural networks reach accuracies above 95%, almost exceeding human resolving power, and many application fields and much potential value remain to be developed and explored. However, when deep neural networks are applied to chick sex classification and lesion identification, recognition is slow, accuracy is extremely low, or recognition fails altogether, which hardly meets actual production requirements. As a result, to date, chick sex classification is still done by hand.
Disclosure of Invention
The invention aims to provide a device and a method for identifying biological characteristics of young birds, so as to solve at least one of the technical problems in the prior art or the related art.
In order to solve the technical problem, the invention provides a young-bird biometric recognition device based on a deep convolutional neural network, which comprises: a poultry positioning module, which comprises a plurality of cascaded convolutional neural network submodules and at least one fully connected layer submodule, each convolutional neural network submodule comprising a plurality of convolutional layers, a pooling layer and an activation function; the positioning module takes as input an original image containing the young bird and, after operation, outputs the predicted position information of the young bird in the original image; a cropping module, which comprises a Crop layer, performs cropping on the original image according to the position information, and outputs an appearance image of the young bird with most of the background removed; and an identification module, which comprises an identification submodule formed by cascading at least a first convolutional neural network submodule, an extended convolution submodule and a second convolutional neural network submodule, together with at least one fully connected layer submodule; the identification module takes the appearance image of the young bird as input and outputs an identification result after operation.
Preferably, the recognition module includes a plurality of recognition submodules, and each recognition submodule combines the data processed sequentially by the first convolutional neural network submodule and the extended convolution submodule with the data processed sequentially by the first convolutional neural network submodule, the extended convolution submodule and the second convolutional neural network submodule, and uses the combined data as the input of the next-stage recognition submodule.
Preferably, the first convolutional neural network submodule further includes a convolutional layer and an extended convolutional layer connected in parallel, which separately process the input data and then output to the extended convolution submodule; the identification module further comprises a soft-max layer.
Preferably, the cropping module further comprises an image enhancement submodule, which performs image enhancement or normalization on the appearance image of the young bird after most of the background has been cropped away.
Preferably, the biological characteristic comprises at least one of sex, developmental status, and health status of the hatchling.
The young-poultry biometric recognition system of the invention comprises any one of the above deep-convolutional-neural-network-based recognition devices, and further includes: an image acquisition device that acquires images of the young birds to be identified; and an image preprocessing device that preprocesses the images acquired by the image acquisition device, the preprocessing including at least one of frame cropping, scaling and image enhancement.
Preferably, the image acquisition device is provided with a posture adjusting mechanism so as to acquire images of the young birds in a specified posture.
The invention discloses a method for identifying biological characteristics of young poultry based on a deep convolutional neural network, which comprises the following steps: a positioning step, in which convolution, pooling and activation operations are performed in sequence on an input original image containing the young bird by a plurality of cascaded convolutional neural networks, the result is input into at least one fully connected layer submodule for operation, and the predicted position information of the young bird in the original image is output; a cropping step, in which a Crop layer crops the original image according to the position information and outputs an appearance image of the young bird with most of the background removed; and an identification step, which comprises a convolution sub-step, in which the input appearance image is processed by an identification submodule formed by cascading at least a first convolutional neural network submodule, an extended convolution submodule and a second convolutional neural network submodule, and a judgment sub-step, in which the output data of the convolution sub-step is input to at least one fully connected layer submodule for processing, an identification function is applied, and the final identification result is output. In the convolution sub-step, each identification submodule combines the data processed sequentially by the first convolutional neural network submodule and the extended convolution submodule with the data processed sequentially by the first convolutional neural network submodule, the extended convolution submodule and the second convolutional neural network submodule, and uses the combined data as the input of the next-stage identification submodule; the last identification submodule, after processing its input sequentially through the first convolutional neural network submodule, the extended convolution submodule and the second convolutional neural network submodule, passes it to the at least one fully connected layer submodule.
Preferably, the first convolutional neural network submodule further includes a convolutional layer and an extended convolutional layer connected in parallel, and the convolution sub-step further includes performing convolution and pooling on the input data separately with the convolutional layer and the extended convolutional layer, and outputting the data to the extended convolution submodule after activation.
Preferably, the method further comprises a training step of training the deep convolutional neural network with a large amount of young-bird data, wherein the positioning step, the cropping step and the identification step are executed during training, a loss function is used to optimize the model parameters, and the trained model is saved.
Preferably, the cropping step further comprises an image enhancement sub-step, in which image enhancement or normalization is applied to the appearance image of the young bird after most of the background has been cropped away.
Preferably, the biological characteristic comprises at least one of sex, developmental status, and health status of the hatchling.
The computer-readable recording medium of the present invention stores a computer program for executing the method for identifying biological characteristics of young birds based on a deep convolutional neural network as described in any one of the above.
Effects of the invention
By constructing a deep convolutional neural network and continuously optimizing and adjusting it through a large number of experiments, the invention obtains a deep convolutional neural network model that recognizes quickly and accurately and is suitable for many breeds of chicks. According to the technical scheme of the invention, the appearance image of the chick is collected in real time and, after image preprocessing, is input into the neural network model, which automatically separates the chick from the background and identifies its sex. This improves identification efficiency and accuracy, allows chicks to be raised separately by sex, reduces cost, and improves production efficiency.
Drawings
Fig. 1 is a schematic diagram of one embodiment of a poultry hatchling biometric identification system of the present invention.
FIG. 2 is a schematic diagram of an overall structure of an example of the deep convolutional neural network model according to the present invention.
Fig. 3 is an explanatory diagram of an operation module of the deep convolutional neural network model shown in fig. 2.
Fig. 4 is a schematic diagram of an embodiment of the method for identifying biological characteristics of young birds according to the invention.
Fig. 5 is a schematic diagram of an embodiment of a method for chicken gender identification by using a deep convolutional neural network according to the present invention.
Fig. 6 is a schematic diagram illustrating an overall structure of an example of the deep convolutional neural network model according to the present invention.
Fig. 7 is an explanatory diagram of an operation block of the deep convolutional neural network model shown in fig. 6.
Description of the reference numerals
10. Chick sex recognition system
11. Image acquisition device
12. Image preprocessing apparatus
13. Identification device
100. Deep neural network model
101. Chick positioning module
102. Cutting module
103. Gender prediction module
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings.
[Example 1]
Fig. 1 shows an embodiment of a chick sex recognition system as an example of a young bird biometric recognition apparatus based on a deep convolutional neural network of the present invention, and as shown in fig. 1, a chick sex recognition system 10 of the present invention includes: the image acquisition device 11, the image preprocessing device 12 and the recognition device 13.
The image acquisition device 11 captures an image of the chick to be identified. It may include a plurality of capture units to photograph the chick's appearance from various angles; for example, 3 cameras may be provided to capture images from the front, the side, and obliquely above. The acquisition unit can be installed at a point on the hatchery production line that the chicks pass, photographing automatically as they pass, or it can be set up independently outside the line, with the chicks to be identified picked up by a mechanical gripper or by hand for image acquisition. Because chicks move around constantly, the image acquisition device can be provided with a posture adjusting mechanism as needed, so as to acquire chick images in a specified posture; for example, a slope and a step can be arranged to stimulate the feathers of passing chicks, or a square or cylindrical container can be used to fix the chick's standing posture. By providing the posture adjusting mechanism, standardized images can be acquired, the computation required for later image processing and recognition is reduced, recognition precision is improved, and working efficiency is improved.
The image acquired by the image acquisition device 11 is transmitted to the image preprocessing device 12 for preprocessing. The image preprocessing device 12 performs preprocessing such as frame cropping, scaling and image enhancement on the acquired image, so as to reduce the influence of factors such as illumination intensity, the chick's standing posture, and shooting angle on the prediction of the chick's sex.
In this embodiment, as an example, the image preprocessing device 12 performs contrast stretching on the input image according to the following formula (1):

$$ I_{\mathrm{out}} = \frac{I - I_{\min}}{I_{\max} - I_{\min}} \times (\mathrm{MAX} - \mathrm{MIN}) + \mathrm{MIN} \qquad (1) $$

where $I_{\min}$ and $I_{\max}$ are the minimum and maximum grayscale values of the original image, and MIN and MAX are the minimum and maximum grayscale values of the gray space to be stretched to; as an example, the maximum value is 255 and the minimum value is 0, but the invention is not limited thereto, and the values may be adjusted according to the chick breed, age in days, lighting conditions, and the like.
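For illustration, the stretch in formula (1) can be written in a few lines of NumPy. This is a minimal sketch; the function name and the default output range [0, 255] are assumptions for this example:

```python
import numpy as np

def contrast_stretch(img: np.ndarray, out_min: float = 0.0,
                     out_max: float = 255.0) -> np.ndarray:
    # Linearly map the image's gray range [Imin, Imax] onto [MIN, MAX],
    # as in formula (1).
    i_min, i_max = float(img.min()), float(img.max())
    if i_max == i_min:                 # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=np.float32)
    scale = (out_max - out_min) / (i_max - i_min)
    return (img.astype(np.float32) - i_min) * scale + out_min
```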
After the image preprocessing device 12 has preprocessed the chick images, they are input to the recognition device 13 for sex recognition. The recognition device 13 of the invention adopts a trained deep convolutional neural network, in which each independent stage of the cascaded deep convolutional neural network comprises a plurality of layers, including: convolutional layers, pooling layers, fully connected layers, and a soft-max layer. The pooling layer may be a maximum pooling layer, a minimum pooling layer, or an average pooling layer.
Further, the deep convolutional neural network can be roughly divided into 2 parts:
1) First stage: target detection is carried out on the input chick appearance image, the image is automatically cropped, the image edges are automatically padded, and an image containing the whole chick is output.
2) Second stage: the chick image output by the first-stage deep convolutional neural network serves as the input of the second stage, and passes through multiple convolutional neural network modules, a fully connected layer and a soft-max layer; finally, the sex judgment result for the chick is output.
In the deep convolutional neural network model adopted by the invention, the input of each independent convolutional neural network module is the output of the previous module, connected layer by layer. As convolutional layers are added, each subsequent convolution module performs a finer computation on the basis of the previous one, completing a coarse-to-fine feature extraction process. Each independent convolutional neural network module consists of a convolutional layer, a pooling layer and an activation layer, and the last convolutional layer is followed by a fully connected layer and a soft-max layer, which together complete the task of determining the chick's sex from its appearance. With this approach, training time can be shortened, and the speed and accuracy of chick sex identification can be effectively improved.
The deep convolutional neural network model is trained before being applied to automatic identification of chicks.
Model training phase
1) The image acquisition device 11 photographs the appearance of a large number of male and female chicks from every angle; the images are stored as image files and labeled with the sex.
2) The image preprocessing device 12 automatically applies image processing and image enhancement algorithms to the acquired chick appearance image data.
3) The preprocessed chick appearance images are input into the deep convolutional neural network built in the recognition device 13 for training. The parameters are optimized with a cross-entropy loss as the loss function, as shown in the following formula (2), where $y$ is the true sex label of the chick and $\hat{y}$ is the predicted probability output by the network:

$$ L = -\left[\, y \log \hat{y} + (1 - y) \log (1 - \hat{y}) \,\right] \qquad (2) $$

Training is finished when the value of $L$ reaches a specified level, and the trained model is saved.
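For illustration, formula (2) can be sketched in PyTorch as follows. The function name is hypothetical; in practice this computation is equivalent to torch.nn.BCELoss applied to probability outputs:

```python
import torch

def sex_cross_entropy(y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Formula (2): L = -[ y*log(y_hat) + (1-y)*log(1-y_hat) ],
    # averaged over the batch. y is 0/1, y_hat a probability in (0, 1).
    eps = 1e-7                          # guard against log(0)
    y_hat = y_hat.clamp(eps, 1.0 - eps)
    return -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()

loss = sex_cross_entropy(torch.tensor([0.9, 0.2]), torch.tensor([1.0, 0.0]))
```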
After training, the system of the present invention can be used for automatic identification: as before, the image acquisition device 11 acquires chick appearance image data; after preprocessing by the image preprocessing device 12, the data is input into the trained deep convolutional neural network model in the recognition device 13, which automatically detects, segments and identifies the chick and outputs the predicted sex.
Deep convolutional neural network model
Hereinafter, a specific model structure is illustrated to describe the deep convolutional neural network employed in the present invention more concretely; however, the specific structure is provided only for a better understanding of the invention and its advantages, and is not intended to limit the invention.
Fig. 2 is a schematic diagram of the overall structure of an example of the deep convolutional neural network model 100 used in the present invention, and fig. 3 is an explanatory diagram of its operation modules. As shown in figs. 2 and 3, the model 100 is a trained deep convolutional neural network in which each stage comprises multiple layers, including: convolutional layers, pooling layers, fully connected layers, and a soft-max layer. The pooling layer may be a maximum pooling layer, a minimum pooling layer, or an average pooling layer.
In fig. 2:
1) Conv layer denotes a convolutional neural network submodule. In the figure, each convolutional neural network submodule comprises several layers of convolution operations, a pooling layer and an activation function.
2) Fully connected layer denotes a fully connected layer submodule, which finally outputs the chick's position information or the chick sex prediction result.
Further, the deep neural network model 100 can be roughly divided into a chick positioning module 101, a cropping module 102, and a gender prediction module 103:
chick positioning module
The image acquired by the image acquisition device 11 is subjected to preprocessing such as cropping, scaling and image enhancement by the image preprocessing device 12, and then is input to the chick positioning module 101 of the neural network model.
The convolutional neural network of the chick positioning module 101 consists of 6 cascaded Conv layers, namely Conv layers 1 to 6, and 2 fully connected layers, Fully connected layers 1 to 2.
Conv layer 1~6
The simply preprocessed chick appearance image is passed to the first convolutional neural network submodule, Conv layer 1, for convolution, pooling and activation operations. Using the information of the three RGB channels of the chick image, 32 feature maps are extracted. The output of the first layer serves as the input of the second convolutional neural network submodule (Conv layer 2), which computes 64 feature maps. Proceeding in this way, with the output of each layer serving as the input of the next, the 6 layers together produce 1024 feature maps.
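A minimal PyTorch sketch of this cascade is shown below. It assumes 3x3 convolutions, 2x2 max pooling, ReLU activations, and a channel progression that doubles from 32 to 1024; the patent fixes only the 32, 64 and 1024 figures, so the intermediate widths and the 256x256 input size are assumptions:

```python
import torch
import torch.nn as nn

def conv_module(c_in: int, c_out: int) -> nn.Sequential:
    # One "Conv layer" submodule: convolution -> pooling -> activation.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.MaxPool2d(2),
        nn.ReLU(inplace=True),
    )

channels = [3, 32, 64, 128, 256, 512, 1024]   # doubling beyond 64 is assumed
positioning_backbone = nn.Sequential(
    *[conv_module(channels[i], channels[i + 1]) for i in range(6)])

x = torch.randn(1, 3, 256, 256)               # hypothetical input resolution
feats = positioning_backbone(x)               # -> (1, 1024, 4, 4)
```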
Fully connected layer 1~2
Fully connected layer 1 divides the 1024 feature maps output by Conv layer 6 evenly into 4 parts; each part is converted into one-dimensional data with a reshape operation and input into one of 4 small fully-connected layers. The outputs of the 4 small fully-connected layers are then combined into a single overall output, which is passed to the next layer, Fully connected layer 2; after this operation, the predicted pixel position of the chick in the original image is output.
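The split-and-merge head could look like the following sketch. The hidden width, the 4x4 spatial size carried over from the backbone sketch above, and the (x, y, w, h) output format are all assumptions, since the patent only specifies the four-way split and the two fully connected layers:

```python
import torch
import torch.nn as nn

class SplitFCHead(nn.Module):
    # Fully connected layers 1~2: split the 1024 feature maps into 4 equal
    # groups, flatten each, pass each through its own small FC layer,
    # concatenate, then regress the chick's position with a second FC layer.
    def __init__(self, spatial: int = 4, hidden: int = 128, out_dim: int = 4):
        super().__init__()
        in_per_part = 256 * spatial * spatial        # 1024 / 4 = 256 maps each
        self.small_fcs = nn.ModuleList(
            [nn.Linear(in_per_part, hidden) for _ in range(4)])
        self.fc2 = nn.Linear(4 * hidden, out_dim)    # e.g. (x, y, w, h)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        parts = torch.chunk(feats, 4, dim=1)         # 4 x (N, 256, H, W)
        outs = [fc(p.flatten(1)) for fc, p in zip(self.small_fcs, parts)]
        return self.fc2(torch.cat(outs, dim=1))

box = SplitFCHead()(torch.randn(1, 1024, 4, 4))      # -> (1, 4)
```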
Cropping module
The cropping module 102 includes a Crop layer and an Image enhancement layer.
Crop layer:
Input: the position information of the chick in the original image output by Fully connected layer 2, together with the original chick image;
Output: the automatically cropped image, i.e. the chick image with most of the background removed.
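In tensor terms, the Crop layer reduces to simple slicing. This sketch assumes the predicted position arrives as a pixel box (x, y, w, h), which is one plausible format for the position prediction described above:

```python
import torch

def crop_layer(image: torch.Tensor, box: torch.Tensor) -> torch.Tensor:
    # Cut the predicted chick region out of the original image tensor of
    # shape (C, H, W). The (x, y, w, h) box format is an assumption.
    x, y, w, h = (int(v) for v in box)
    return image[..., y:y + h, x:x + w]

patch = crop_layer(torch.randn(3, 480, 640),
                   torch.tensor([200, 120, 180, 220]))   # -> (3, 220, 180)
```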
Image enhancement layer:
Input: the chick image with most of the background removed, output by the Crop layer;
Output: the chick image processed with image enhancement and normalization algorithms.
Gender prediction module
The convolutional neural network of the gender prediction module 103 consists of 6 cascaded Conv layers, namely Conv layers 7 to 12, 3 extended Conv layers used for channel-number fusion, and 1 fully connected layer, Fully connected layer 3.
Conv layers 8, 10 and 12 use Conv blocks, the basic unit of the convolutional neural network. Conv layers 7, 9 and 11 are each composed of 2 different convolutional neural network submodules running in parallel: a basic Conv block and a Deep Conv block. A Deep Conv block is formed by combining a Conv block with a 1x1 extended Conv layer and is used to extract deeper features from the data. Its concrete structure can be seen in the legend of the model diagram.
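A sketch of these two block types might look as follows. Treating the merge of the two parallel branches as channel concatenation is an assumption (the patent only says the outputs are combined), and pooling inside the branches is omitted for brevity; with c_branch = 32 the combined output reproduces the 64 feature maps described for Conv layer 7 below:

```python
import torch
import torch.nn as nn

class DeepConvBlock(nn.Module):
    # A Conv block followed by a 1x1 "extended" convolution,
    # used to extract deeper features.
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, kernel_size=1),   # 1x1 extended Conv layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

class ParallelConvLayer(nn.Module):
    # Conv layers 7, 9, 11: a plain Conv block and a Deep Conv block
    # process the same input in parallel; their outputs are merged.
    def __init__(self, c_in: int, c_branch: int):
        super().__init__()
        self.plain = nn.Sequential(
            nn.Conv2d(c_in, c_branch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        self.deep = DeepConvBlock(c_in, c_branch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.plain(x), self.deep(x)], dim=1)

y = ParallelConvLayer(3, 32)(torch.randn(1, 3, 64, 64))   # -> (1, 64, 64, 64)
```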
Flow of the chick image data: the RGB three-channel data of the chick image produced by the Image enhancement layer is normalized and input into the 2 different submodules of Conv layer 7 for convolution and pooling; after the activation operation, the outputs of the 2 independent submodules are combined into 64 feature maps, which are output.
The data output by Conv layer 7 undergoes the fusion operation of extended Conv layer 2 and is then input into Conv layer 8. Conv layer 8 convolves and pools the input data and, after the activation operation, outputs 64 feature maps. The output of Conv layer 8 is combined with the input of Conv layer 8 and then fed into Conv layer 9.
The subsequent pairs Conv layer 9 → Conv layer 10 and Conv layer 11 → Conv layer 12 are connected and transmit data in the same way. Through the Conv layer 9 → Conv layer 10 operations, the 64 input feature maps yield 256 output feature maps, which serve as the input of Conv layer 11. After the Conv layer 11 → Conv layer 12 operations, 1024 feature maps are extracted at a deeper level from the 256 input feature maps.
Fully connected layer 3: the 1024 feature maps output by Conv layer 12 are flattened into one-dimensional data and input into Fully connected layer 3; the output is then computed through a softmax function, and the final chick sex prediction result is output.
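As a minimal sketch, this final head flattens the feature maps and produces two class probabilities (male/female); the 4x4 spatial size is an assumption carried over from the earlier sketches:

```python
import torch
import torch.nn as nn

# Fully connected layer 3 plus softmax: flatten the 1024 feature maps,
# apply one fully connected layer, and normalize the two logits into
# male/female probabilities.
sex_head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(1024 * 4 * 4, 2),
    nn.Softmax(dim=1),
)

probs = sex_head(torch.randn(1, 1024, 4, 4))   # -> (1, 2), rows sum to 1
```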
Thus, in the deep convolutional neural network model 100 of the present invention, the input of each independent convolutional neural network module is the output of the previous module, connected layer by layer. By adding convolutional layers, each subsequent convolution module performs a finer computation on the basis of the previous one, completing the coarse-to-fine feature extraction process and the task of determining the chick's sex from its appearance.
[Example 2]
Fig. 4 shows an embodiment of a chick sex recognition method as an example of the young-bird biometric recognition method based on the deep convolutional neural network of the present invention. As shown in fig. 4, the chick sex recognition method of the invention includes: an image acquisition step S11, an image preprocessing step S12, and a recognition step S13.
In the image acquisition step S11, an image of the chick to be identified is captured by the image acquisition device. The device may include a plurality of capture units to photograph the chick's appearance from various angles; for example, 3 cameras may be arranged to capture images from the front, the side, and obliquely above. The acquisition unit can be installed at a point on the hatchery production line that the chicks pass, photographing automatically as they pass, or set up independently outside the line, with the chicks to be identified picked up by a mechanical gripper or by hand for image acquisition. Because chicks move around constantly, the image acquisition device can be provided with a posture adjusting mechanism as needed, so as to acquire chick images in a specified posture; for example, a slope and a step can be arranged to stimulate the feathers of passing chicks, or a square or cylindrical container can be used to fix the chick's standing posture. By providing the posture adjusting mechanism, standardized images can be acquired, the computation required for later image processing and recognition is reduced, recognition precision is improved, and working efficiency is improved.
In the image preprocessing step S12, preprocessing including frame cropping, scaling and image enhancement is performed on the image acquired by the image acquisition device, so as to reduce the influence of factors such as illumination intensity, the chick's standing posture, and shooting angle on the prediction of the chick's sex.
In this embodiment, as an example, the image preprocessing step performs contrast stretching on the input image according to formula (1):

$$ I_{\mathrm{out}} = \frac{I - I_{\min}}{I_{\max} - I_{\min}} \times (\mathrm{MAX} - \mathrm{MIN}) + \mathrm{MIN} \qquad (1) $$

where $I_{\min}$ and $I_{\max}$ are the minimum and maximum gray values of the original image, and MIN and MAX are the minimum and maximum gray values of the gray space to be stretched to; as an example, the maximum value is 255 and the minimum value is 0, but the present invention is not limited thereto, and the values may be adjusted according to the chick breed, age in days, lighting conditions, and the like.
In the identification step S13, sex recognition is performed on the preprocessed image. The invention adopts a recognition method based on a trained deep convolutional neural network, in which each independent stage of the cascaded deep convolutional neural network comprises a plurality of layers, including: convolutional layers, pooling layers, fully connected layers, and a soft-max layer. The pooling layer may be a maximum pooling layer, a minimum pooling layer, or an average pooling layer.
The deep convolutional neural network model is trained before being applied to automatic identification of chicks.
Stage of model training
1) The image acquisition step S11 is executed to photograph the appearance of a large number of male and female chicks from every angle; the images are stored as image files and labeled with the sex.
2) The image preprocessing step S12 is executed, automatically applying image processing and image enhancement algorithms to the acquired chick appearance image data.
3) The recognition step S13 is executed: the preprocessed chick appearance images are input into the deep convolutional neural network constructed by the invention for training. The parameters are optimized with a cross-entropy loss as the loss function, as shown in formula (2), where $y$ is the true sex label of the chick and $\hat{y}$ is the predicted probability output by the network:

$$ L = -\left[\, y \log \hat{y} + (1 - y) \log (1 - \hat{y}) \,\right] \qquad (2) $$

Training is finished when the value of $L$ reaches a specified level, and the trained model is saved for prediction.
After training, the system of the present invention can be used for automatic identification: as before, the image acquisition step S11 acquires chick appearance image data; after preprocessing in the image preprocessing step S12, the recognition step S13 inputs the data into the trained deep convolutional neural network model, which automatically detects, segments and identifies the chick and outputs the predicted sex.
The deep convolutional neural network can be roughly divided into 2 parts:
1) First stage: target detection is carried out on the input chick appearance image, the image is automatically cropped, the image edges are automatically padded, and an image containing the whole chick is output.
2) Second stage: the chick image output by the first-stage deep convolutional neural network serves as the input of the second stage, and passes through multiple convolutional neural network modules, a fully connected layer and a soft-max layer; finally, the sex judgment result for the chick is output.
Fig. 5 shows an embodiment of a method for identifying the sex of a chick using the deep convolutional neural network built by the invention. As shown in fig. 5, the recognition method based on the deep convolutional neural network comprises the following steps.
A chick positioning step S101: an original image containing the young bird is input into the first stage, convolution, pooling and activation operations are performed in sequence by a plurality of cascaded convolutional neural networks, the result is input into at least one fully connected layer submodule for operation, and the predicted position information of the young bird in the original image is output;
a cropping step S102: a Crop layer is used to crop the original image according to the position information, and an appearance image of the young bird with most of the background removed is output; and
a gender prediction step S103: the appearance image of the young bird is convolved by an identification submodule formed by cascading at least a first convolutional neural network submodule, an extended convolution submodule and a second convolutional neural network submodule, the result is input to at least one fully connected layer submodule for processing, an identification function is applied, and the final identification result is output. Each identification submodule combines the data processed sequentially by the first convolutional neural network submodule and the extended convolution submodule with the data processed sequentially by the first convolutional neural network submodule, the extended convolution submodule and the second convolutional neural network submodule, and uses the combined data as the input of the next-stage identification submodule; the last identification submodule, after processing its input sequentially through the first convolutional neural network submodule, the extended convolution submodule and the second convolutional neural network submodule, passes it to the at least one fully connected layer submodule.
In the deep convolutional neural network model adopted in this embodiment, the input of each independent convolutional neural network module is the output of the previous module, connected layer by layer. As convolutional layers are added, each subsequent convolution module performs a finer computation on the basis of the previous one, completing the coarse-to-fine feature extraction process. Each independent convolutional neural network module consists of a convolutional layer, a pooling layer and an activation layer, and the last convolutional layer is followed by a fully connected layer and a soft-max layer, which together complete the task of determining the chick's sex from its appearance. With the method of this embodiment, training time can be shortened, and the speed and accuracy of chick sex identification can be effectively improved.
Hereinafter, a specific model structure is illustrated to describe the deep convolutional neural network employed in this embodiment more concretely; however, the specific structure is provided only for a better understanding of the embodiment and its advantages, and is not intended to limit the invention.
Fig. 6 is a schematic diagram of the overall structure of an example of the deep convolutional neural network model used in this embodiment, and fig. 7 is an explanatory diagram of its operation modules. As shown in figs. 6 and 7, the model is a trained deep convolutional neural network in which each stage comprises multiple layers, including: convolutional layers, pooling layers, fully connected layers, and a soft-max layer. The pooling layer may be a maximum pooling layer, a minimum pooling layer, or an average pooling layer.
In the figure:
1) Conv layer denotes a convolutional neural network submodule. In the figure, each convolutional neural network submodule comprises several layers of convolution operations, a pooling layer and an activation function.
2) Fully connected layer denotes a fully connected layer submodule, which finally outputs the chick's position information or the chick sex prediction result.
The recognition method based on the deep convolutional neural network of this embodiment can be roughly divided into the chick positioning step S101, the cropping step S102, and the gender prediction step S103, as described above.
Chick positioning step
The convolutional neural network adopted in the chick positioning step S101 consists of 6 cascaded Conv layers, namely Conv layers 1 to 6, and 2 fully connected layers, Fully connected layers 1 to 2.
Conv layer 1~6
The simply preprocessed chick appearance image is passed to the first convolutional neural network submodule, Conv layer 1, for convolution, pooling and activation operations. Using the information of the three RGB channels of the chick image, 32 feature maps are extracted. The output of the first layer serves as the input of the second convolutional neural network submodule (Conv layer 2), which computes 64 feature maps. Proceeding in this way, with the output of each layer serving as the input of the next, the 6 layers together produce 1024 feature maps.
Fully connected layer 1~2
Fully connected layer 1 divides the 1024 feature maps output by Conv layer 6 evenly into 4 parts; each part is converted into one-dimensional data with a reshape operation and input into one of 4 small fully-connected layers. The outputs of the 4 small fully-connected layers are then combined into a single overall output, which is passed to the next layer, Fully connected layer 2; after this operation, the predicted pixel position of the chick in the original image is output.
Cropping step
In the cropping step S102, a Crop layer and an Image enhancement layer are used for processing.
Crop layer:
Input: the position information of the chick in the original image output by Fully connected layer 2, together with the original chick image;
Output: the automatically cropped image, i.e. the chick image with most of the background removed.
Image enhancement layer:
Input: the chick image with most of the background removed, output by the Crop layer;
Output: the chick image processed with image enhancement and normalization algorithms.
Sex prediction step
The convolutional neural network used in the gender prediction step S103 consists of 6 cascaded Conv layers, namely Conv layers 7 to 12, 3 extended Conv layers used for channel-number fusion, and 1 fully connected layer, Fully connected layer 3.
Conv layers 8, 10 and 12 use Conv blocks, the basic unit of the convolutional neural network. Conv layers 7, 9 and 11 are each composed of 2 different convolutional neural network submodules running in parallel: a basic Conv block and a Deep Conv block. A Deep Conv block is formed by combining a Conv block with a 1x1 extended Conv layer and is used to extract deeper features from the data. Its concrete structure can be seen in the legend of the model diagram.
Flow of the chick image data: the RGB three-channel data of the chick image produced by the Image enhancement layer is normalized and input into the 2 different submodules of Conv layer 7 for convolution and pooling; after the activation operation, the outputs of the 2 independent submodules are combined into 64 feature maps, which are output.
The data output by Conv layer 7 undergoes the fusion operation of extended Conv layer 2 and is then input into Conv layer 8. Conv layer 8 convolves and pools the input data and, after the activation operation, outputs 64 feature maps. The output of Conv layer 8 is combined with the input of Conv layer 8 and then fed into Conv layer 9.
The pairs Conv layer 9 → Conv layer 10 and Conv layer 11 → Conv layer 12 are connected and transmit data in the same way. After the Conv layer 9 → Conv layer 10 operations, the 64 input feature maps yield 256 output feature maps, which serve as the input of Conv layer 11. After the Conv layer 11 → Conv layer 12 operations, 1024 feature maps are extracted at a deeper level from the 256 input feature maps.
Fully connected layer 3: the 1024 feature maps output by Conv layer 12 are flattened into one-dimensional data and input into Fully connected layer 3; the output is then computed through a softmax function, and the final chick sex prediction result is output.
Therefore, in the deep convolutional neural network model adopted by the invention, the input of each independent convolutional neural network module is the output of the previous module, connected layer by layer. By adding convolutional layers, each subsequent convolution module performs a finer computation on the basis of the previous one, completing the coarse-to-fine feature extraction process and the task of determining the chick's sex from its appearance.
The experimental results are as follows:
To test the recognition performance of the model, 3000 day-old chicks were divided into 3 groups, and sex recognition tests were carried out with Deep VGG Net and Deep Resnet (obtained by fine-tuning VGG Net and Resnet for the chick sex recognition experiment) and with the model of the present invention; the recognition accuracy and recognition speed were computed from the results and recorded in the table.
[Table: recognition accuracy and speed of Deep VGG Net, Deep Resnet, and the model of the invention; the table image is not reproduced in the text record.]
As the experimental comparison in the table shows, the method provided by the invention shortens training time, effectively improves the speed and accuracy of chick sex identification, and has the best overall performance.
It should be noted that in the preferred embodiments of the present invention, the biometric recognition of young birds is described taking chick sex recognition as an example. In other embodiments, however, the biological characteristic may also be a developmental condition, pathological feature, health condition, and so on, and the subject is not limited to chickens: the deep convolutional neural network model of the invention is also applicable to other young poultry such as ducklings, likewise exhibiting good recognition accuracy and speed, and is suitable for the large-scale breeding industry. It will be understood by those skilled in the art that the foregoing embodiments merely illustrate the present invention and are not to be construed as limiting it, and that modifications and equivalents may be made by those skilled in the art without departing from the spirit and scope of the present invention as set forth in the appended claims.

Claims (11)

1. A device for identifying biological characteristics of young poultry based on a deep convolutional neural network, characterized by comprising:
a poultry positioning module, which comprises a plurality of cascaded convolutional neural network submodules and at least one fully connected layer submodule, each convolutional neural network submodule comprising a plurality of convolutional layers, a pooling layer and an activation function; the poultry positioning module takes as input an original image containing the young bird and, after operation, outputs the predicted position information of the young bird in the original image;
a cropping module, which comprises a Crop layer, performs cropping on the original image according to the position information, and outputs an appearance image of the young bird with most of the background removed; and
an identification module, which comprises an identification submodule formed by cascading at least a first convolutional neural network submodule, an extended convolution submodule and a second convolutional neural network submodule, together with at least one fully connected layer submodule; the identification module takes the appearance image of the young bird as input and outputs an identification result after operation, wherein
the identification module comprises a gender prediction submodule, the gender prediction submodule adopts a convolutional neural network consisting of 6 cascaded Conv layers, namely Conv layers 7 to 12, 3 extended Conv layers used for channel-number fusion, and 1 fully connected layer, wherein Conv layers 8, 10 and 12 use Conv blocks, the basic unit of the convolutional neural network, Conv layers 7, 9 and 11 each consist of 2 different convolutional neural network submodules composed in parallel of the basic Conv blocks and Deep Conv blocks, and the Deep Conv blocks are formed by combining a Conv block with a 1x1 extended Conv layer and are used for extracting features of the data at a deeper level.
2. The device for identifying the biological characteristics of the young birds based on the deep convolutional neural network as claimed in claim 1, wherein:
the identification module further comprises a soft-max layer.
3. The device for identifying the biological characteristics of the young birds based on the deep convolutional neural network as claimed in claim 1, wherein:
the cutting module further comprises an image enhancement sub-module which is used for performing image enhancement or normalization processing on the images of the appearances of the young birds after most of backgrounds are cut.
4. The device for poultry hatchling biometric identification based on deep convolutional neural network as claimed in any of claims 1 to 3, wherein:
the biological characteristic includes at least one of a gender, a developmental status, and a health status of the hatchling.
5. A poultry hatchling biometric identification system, comprising the device for identifying biological characteristics of young poultry based on a deep convolutional neural network of any one of claims 1 to 4, and further comprising:
an image acquisition device that acquires images of poultries as identification targets;
and an image preprocessing device that preprocesses the image acquired by the image acquisition device, the preprocessing including at least one of frame cropping, scaling and image enhancement.
6. The hatchling biometric identification system according to claim 5, wherein:
the image acquisition device is provided with a posture adjusting mechanism so as to acquire images of the young birds in the specified postures.
7. A method for identifying biological characteristics of young poultry based on a deep convolutional neural network, characterized by comprising the following steps:
a positioning step, in which convolution, pooling and activation operations are performed in sequence on an input original image containing the young bird by a plurality of cascaded convolutional neural networks, the result is input into at least one fully connected layer submodule for operation, and the predicted position information of the young bird in the original image is output;
a cropping step, in which a Crop layer crops the original image according to the position information, and an appearance image of the young bird with most of the background removed is output; and
an identification step, which comprises a convolution sub-step, in which the input appearance image of the young bird is processed by an identification submodule formed by cascading at least a first convolutional neural network submodule, an extended convolution submodule and a second convolutional neural network submodule, and a judgment sub-step, in which the output data of the convolution sub-step is input to at least one fully connected layer submodule for processing, an identification function is applied, and the final identification result is output, wherein
in the convolution sub-step, each identification submodule combines the data processed sequentially by the first convolutional neural network submodule and the extended convolution submodule with the data processed sequentially by the first convolutional neural network submodule, the extended convolution submodule and the second convolutional neural network submodule, and uses the combined data as the input of the next-stage identification submodule, and the last identification submodule, after processing its input sequentially through the first convolutional neural network submodule, the extended convolution submodule and the second convolutional neural network submodule, inputs it to the at least one fully connected layer submodule, wherein
the identification step comprises a gender prediction sub-step, the gender prediction sub-step adopts a convolutional neural network consisting of 6 cascaded Conv layers, namely Conv layers 7 to 12, 3 extended Conv layers used for channel-number fusion, and 1 fully connected layer, wherein Conv layers 8, 10 and 12 use Conv blocks, the basic unit of the convolutional neural network, Conv layers 7, 9 and 11 each consist of 2 different convolutional neural network submodules composed in parallel of the basic Conv blocks and Deep Conv blocks, and the Deep Conv blocks are formed by combining a Conv block with a 1x1 extended Conv layer and are used for extracting features of the data at a deeper level.
8. The method for identifying biological characteristics of young birds based on a deep convolutional neural network according to claim 7, wherein:
the method further comprises a training step of training the deep convolutional neural network with a large amount of young-bird data, wherein the positioning step, the cropping step and the identification step are executed during the training step to train the convolutional neural network, a loss function is used to optimize the model parameters, and the trained model is saved.
9. The method for identifying biological characteristics of young birds based on deep convolutional neural network as claimed in claim 7, wherein:
the cropping step further comprises an image enhancement sub-step, in which image enhancement or normalization is applied to the appearance image of the young bird after most of the background has been cropped away.
10. The method for identifying the biological characteristics of the poultries based on the deep convolutional neural network as claimed in any one of claims 7 to 9, wherein:
the biological characteristic includes at least one of a gender, a developmental status, and a health status of the hatchling.
11. A computer-readable recording medium in which is stored a computer program for executing the method for identifying biological characteristics of young poultry based on a deep convolutional neural network as claimed in any one of claims 7 to 10.
CN201911172403.3A 2019-11-26 2019-11-26 Device and method for identifying biological characteristics of young poultry Active CN111241908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911172403.3A CN111241908B (en) 2019-11-26 2019-11-26 Device and method for identifying biological characteristics of young poultry


Publications (2)

Publication Number Publication Date
CN111241908A CN111241908A (en) 2020-06-05
CN111241908B true CN111241908B (en) 2023-04-14

Family

ID=70863948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911172403.3A Active CN111241908B (en) 2019-11-26 2019-11-26 Device and method for identifying biological characteristics of young poultry

Country Status (1)

Country Link
CN (1) CN111241908B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112956427B (en) * 2021-01-29 2022-06-24 四川省畜牧科学研究院 Portable young bird sex identification anus pen
CN113231341A (en) * 2021-06-02 2021-08-10 黎一川 Poultry seedling sorting system based on convolutional neural network


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364281A (en) * 2018-01-08 2018-08-03 佛山市顺德区中山大学研究院 A kind of ribbon edge hair defect defect inspection method based on convolutional neural networks
CN108491765A (en) * 2018-03-05 2018-09-04 中国农业大学 A kind of classifying identification method and system of vegetables image
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network

Also Published As

Publication number Publication date
CN111241908A (en) 2020-06-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant