CN113095441A - Pig herd huddling detection method, device, equipment and readable storage medium - Google Patents

Pig herd huddling detection method, device, equipment and readable storage medium

Info

Publication number
CN113095441A
CN113095441A
Authority
CN
China
Prior art keywords
picture
pig
mask
huddling
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110485028.9A
Other languages
Chinese (zh)
Inventor
张玉良
黄煜
尤园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Muyuan Intelligent Technology Co Ltd
Original Assignee
Henan Muyuan Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Muyuan Intelligent Technology Co Ltd filed Critical Henan Muyuan Intelligent Technology Co Ltd
Priority to CN202110485028.9A priority Critical patent/CN113095441A/en
Publication of CN113095441A publication Critical patent/CN113095441A/en
Pending legal-status Critical Current


Classifications

    • G06F 18/24: PHYSICS > COMPUTING; CALCULATING OR COUNTING > ELECTRIC DIGITAL DATA PROCESSING > Pattern recognition > Analysing > Classification techniques
    • G06F 18/214: PHYSICS > COMPUTING; CALCULATING OR COUNTING > ELECTRIC DIGITAL DATA PROCESSING > Pattern recognition > Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation > Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: PHYSICS > COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > Computing arrangements based on biological models > Neural networks > Architecture, e.g. interconnection topology > Combinations of networks
    • G06N 3/08: PHYSICS > COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > Computing arrangements based on biological models > Neural networks > Learning methods
    • G06V 10/267: PHYSICS > IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > Image preprocessing > Segmentation of patterns in the image field; Detection of occlusion > by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 20/52: PHYSICS > IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > Scenes; Scene-specific elements > Context or environment of the image > Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pig herd huddling detection method: a picture of a breeding pen in a pigsty is acquired, a pre-trained neural network model is called to extract the pig herd foreground, and a classification model then automatically identifies whether the herd is huddling. The method detects huddling automatically, solving the heavy labor cost, time cost, and delayed monitoring inherent in today's manual inspection; because monitoring and identification are automated, contact between humans and the herd is reduced, which helps prevent disease transmission and protects herd health; and the method produces accurate huddling detection results that provide effective, reliable reference information for pigsty environment control and for reasoning about and diagnosing pigsty diseases. The invention also discloses a corresponding pig herd huddling detection device, equipment, and readable storage medium, which have corresponding technical effects.

Description

Pig herd huddling detection method, device, equipment and readable storage medium
Technical Field
The invention relates to the technical field of intelligent breeding, and in particular to a pig herd huddling detection method, device, equipment, and readable storage medium.
Background
In pig farming, a huddling herd (multiple pigs piling on top of one another) indicates that the temperature in the pen is too low or that the herd is diseased. Detecting huddling accurately and promptly, so that staff can intervene early, therefore has considerable production value.
At present, most farms rely on people to judge whether a herd is huddling; even automated farms typically provide only basic environment monitoring and cannot identify huddling automatically. Herds are therefore generally checked for huddling by manual patrol, which is time-consuming and labor-intensive, delays detection, places high demands on personnel, and is unsuited to intensive production.
In conclusion, how to overcome the labor cost and delayed monitoring of manual huddling detection is a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a pig herd huddling detection method, device, equipment, and readable storage medium that automatically detect whether a pig herd is huddling.
In order to solve the technical problems, the invention provides the following technical scheme:
a pig herd bunding detection method comprises the following steps:
acquiring pictures of breeding columns of the pigsty;
calling a pre-trained neural network model to perform mask recognition on the picture, and determining a pig picture in a target column in the picture;
and calling a pre-trained bunching classification model to perform bunching identification on the pig pictures to obtain a bunching detection result.
Optionally, before calling the pre-trained neural network model to perform mask recognition on the picture, the method further includes:
judging whether the hue, sharpness, pixel values, or acquisition angle of the picture is abnormal;
if an abnormality exists, returning to the step of acquiring a picture of a breeding pen in the pigsty;
and if no abnormality exists, executing the step of calling the pre-trained neural network model to perform mask recognition on the picture.
Optionally, calling the pre-trained neural network model to perform mask recognition on the picture and determining the pig picture of the target pen in the picture includes:
calling a semantic segmentation neural network model to perform mask recognition on the picture to obtain a semantic recognition result;
and cutting the target pen picture according to the pig mask in the semantic recognition result to obtain the pig picture.
Optionally, cutting the target pen picture according to the pig mask in the semantic recognition result includes:
setting the pixel values of the non-pig portions of the picture to 0 according to the pig mask in the semantic recognition result, to obtain a pixel-setting picture;
and cutting out of the pixel-setting picture, according to the coordinate values of the pig mask, the smallest rectangle that contains all the pigs, as the pig picture.
Optionally, before calling the pre-trained neural network model to perform mask recognition on the picture, the method further includes:
calling an instance segmentation neural network model to perform mask recognition on the picture to obtain an instance recognition result;
judging whether the instance data of the pen mask in the instance recognition result conforms to the rule for a complete target pen, the instance data including the instance number and the instance pixel coordinates;
if not, returning to the step of acquiring a picture of a breeding pen in the pigsty;
and if so, executing the step of calling the pre-trained neural network model to perform mask recognition on the picture.
Optionally, before the step of calling the pre-trained neural network model to perform mask recognition on the picture is executed, the method further includes:
determining, according to the pixel coordinates of the pen mask, whether the number of pig mask instances inside the target pen in the instance recognition result reaches a detection threshold;
if so, executing the step of calling the pre-trained neural network model to perform mask recognition on the picture;
and if not, generating a huddling detection result of no huddling.
Optionally, the pig herd huddling detection method further includes:
if the huddling detection result shows that the pig herd is huddling, outputting huddling alarm information.
A pig herd huddling detection device includes:
a picture acquisition unit for acquiring pictures of the breeding pens of a pigsty;
a pig herd foreground extraction unit for calling a pre-trained neural network model to perform mask recognition on the picture and determine the pig picture of the target pen in the picture;
and a huddling identification unit for calling a pre-trained huddling classification model to perform huddling identification on the pig picture to obtain a huddling detection result.
A pig herd huddling detection apparatus includes:
a memory for storing a computer program;
and a processor for implementing the steps of the above pig herd huddling detection method when executing the computer program.
A readable storage medium has a computer program stored thereon which, when executed by a processor, implements the steps of the above pig herd huddling detection method.
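The three claimed steps can be sketched end to end in Python. This is a minimal illustration under stated assumptions: both models are replaced by toy stand-ins, and every function name and rule here (segment_pigs, classify_huddling, the brightness and density heuristics) is hypothetical rather than taken from the patent.

```python
import numpy as np

def segment_pigs(picture):
    """Stand-in for the pre-trained segmentation model: returns a boolean
    pig mask with the same height/width as the picture (toy rule:
    'pigs' are pixels whose red channel is bright)."""
    return picture[..., 0] > 128

def classify_huddling(pig_picture):
    """Stand-in for the huddling classification model: returns a huddling
    probability in [0, 1] (toy rule: denser foreground, higher score)."""
    foreground = pig_picture.any(axis=-1)
    return float(foreground.mean())

def detect_huddling(picture, threshold=0.5):
    mask = segment_pigs(picture)             # S102: mask recognition
    pig_picture = picture * mask[..., None]  # keep only pig pixels
    prob = classify_huddling(pig_picture)    # S103: huddling identification
    return {"huddling": prob > threshold, "probability": prob}

picture = np.zeros((4, 4, 3), dtype=np.uint8)
picture[:2, :2] = 200  # a small 'pig' blob in one corner
result = detect_huddling(picture)
```

In a real deployment the two stubs would be replaced by the trained segmentation network and the trained huddling classifier described in the detailed description below.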
According to the method provided by the embodiment of the invention, a picture of a breeding pen in the pigsty is acquired, a pre-trained neural network model is called to extract the pig herd foreground, and a classification model automatically identifies whether the herd is huddling. The method detects huddling automatically, solving the heavy labor cost, time cost, and delayed monitoring inherent in today's manual inspection; because monitoring and identification are automated, contact between humans and the herd is reduced, which helps prevent disease transmission and protects herd health; and the method produces accurate huddling detection results that provide effective, reliable reference information for pigsty environment control and for reasoning about and diagnosing pigsty diseases.
Correspondingly, the embodiment of the invention also provides a pig herd huddling detection device, apparatus, and readable storage medium corresponding to the above method, which have the same technical effects and are not described again here.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the related art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of a pig herd huddling detection method according to the present invention;
fig. 2 is a schematic structural diagram of a pig herd huddling detection device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a pig herd huddling detection apparatus according to an embodiment of the present invention.
Detailed Description
The core of the invention is to provide a pig herd huddling detection method that can automatically detect whether a pig herd is huddling.
In order that those skilled in the art may better understand the disclosure, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments. The described embodiments are merely some, not all, of the embodiments of the invention; all other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of a pig herd huddling detection method according to an embodiment of the present invention; the method includes the following steps:
S101, acquiring a picture of a breeding pen in a pigsty;
A pigsty contains multiple breeding pens, each housing several pigs. A picture captured of a given breeding pen is obtained, and huddling detection for that pen is then performed on the picture.
It should be noted that this embodiment does not limit how the picture is acquired; this can be set according to the needs of the actual application scene. For example, a (visible light) camera mounted on an inspection trolley running on a track above the pigsty can capture a picture at the middle of each pen, or a dedicated camera can be installed above each breeding pen to image that pen. The former requires fewer cameras and is cheaper to deploy. Only these two acquisition schemes are described here as examples; other implementations can refer to this description and are not repeated.
S102, calling a pre-trained neural network model to perform mask recognition on the picture, and determining the pig picture of the target pen in the picture (by analogy: if the picture contains a round object and a circle of the same size is cut out of a sheet of paper, laying that sheet over the picture leaves only the round object visible; the sheet is the mask);
In this embodiment, a pre-trained neural network model (a mathematical model that imitates the behavioral characteristics of biological neural networks and processes information in a distributed, parallel fashion, adjusting the interconnections among a large number of internal nodes according to the complexity of the system) is called to infer masks, and the pig picture is extracted using the masks output by the model, chiefly the recognized pig mask. The target pen is the pen for which huddling is being detected, i.e., the main subject of the acquired picture (for example, when huddling detection is performed for pen 1, a picture whose main subject is pen 1 is obtained). The pig picture is a picture containing only pigs, also called the pig herd foreground picture (a foreground picture retains only the pixel values of the specified objects and sets all other pixel values to 0).
This embodiment does not limit the network structure type or the specific architecture of the neural network model called to perform mask recognition on the picture; it can be chosen according to the actual accuracy requirements.
To improve pig detection precision, a semantic segmentation neural network model (one that classifies the image pixel by pixel, assigning each pixel a category label) can be called so that pigs are detected accurately. Accordingly, calling the pre-trained neural network model to perform mask recognition on the picture and determine the pig picture of the target pen specifically comprises the following steps:
(1) calling a semantic segmentation neural network model to perform mask recognition on the picture to obtain a semantic recognition result;
the semantic segmentation means specifically selected in the semantic segmentation neural network model is not limited in this embodiment, and for example, the semantic segmentation neural network model of a swinery may be trained based on a deepabv 3plus (a semantic segmentation method) model, and other semantic segmentation means are not described herein again, and refer to the description of this embodiment. In addition, it should be noted that the training process of the semantic segmentation neural network model is not limited in this embodiment, and a model training method in the related art may be referred to, for example, a sample picture may be input into a convolutional neural network (a type of feed-forward neural network that includes convolutional calculation and has a deep structure) model to extract deep features, and then a mask of a pig in the picture is calculated through upsampling and convolutional calculation, and after a large number of inferences verify convergence, the trained semantic segmentation neural network model is obtained.
The picture is input into the semantic segmentation neural network model, the model extracts pig mask features from it and computes the pig mask through upsampling and convolution, and the resulting pig mask is taken as the semantic recognition result.
(2) Cutting the target pen picture according to the pig mask in the semantic recognition result to obtain the pig picture.
The pig mask blocks out every non-pig object, so combining the original picture with the pig mask yields a picture containing only pigs, which can then be cropped. The concrete cropping steps can be set according to actual needs and are not limited in this embodiment; optionally, cutting the target pen picture according to the pig mask in the semantic recognition result proceeds as follows:
(2.1) setting the pixel values of the non-pig portions of the picture to 0 according to the pig mask in the semantic recognition result, to obtain a pixel-setting picture;
Based on the pig mask, every pixel of the original input picture that does not belong to a pig is set to 0, i.e., black; the result is the pixel-setting picture.
(2.2) cutting out of the pixel-setting picture, according to the coordinate values of the pig mask, the smallest rectangle that contains all the pigs, as the pig picture.
From the maximum and minimum x and y coordinates of the pig mask, the smallest rectangle containing all the pigs is cropped out to obtain the pig picture (a picture containing only pigs), which is then saved.
Only this pixel-setting-based cropping approach is described here as an example; other picture cropping approaches can refer to this description and are not repeated.
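Steps (2.1) and (2.2) can be sketched in a few lines of NumPy; the function name is illustrative, and the pig mask is assumed to arrive as a boolean array of the same height and width as the picture.

```python
import numpy as np

def crop_pig_picture(picture, pig_mask):
    """Minimal sketch of the cropping sub-steps: zero out non-pig pixels,
    then cut the smallest rectangle that contains every pig-mask pixel."""
    # (2.1) pixel-setting picture: non-pig pixels become 0 (black)
    pixel_set = picture * pig_mask[..., None].astype(picture.dtype)
    # (2.2) bounding rectangle from the mask's min/max x and y coordinates
    ys, xs = np.nonzero(pig_mask)
    if ys.size == 0:
        return None  # no pig pixels recognised in this picture
    return pixel_set[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

img = np.full((6, 6, 3), 50, dtype=np.uint8)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True            # one 'pig' region
crop = crop_pig_picture(img, mask)
```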
S103, calling a pre-trained huddling classification model to perform huddling identification on the pig picture to obtain a huddling detection result.
A huddling classification model is trained on the basis of a neural network to identify whether a herd is huddling. This embodiment does not limit the model's type, specific structure, or training process; for example, MobileNetV2 (an image classification method and backbone network) can be selected. Taking that network as an example, its training can proceed as follows: pig sample pictures are annotated with a picture category (huddling or not huddling) and input into the MobileNetV2 model to extract deep features, a fully connected layer (which integrates the extracted features) then computes the picture's category, and after several rounds of training a usable MobileNetV2 huddling classification model is obtained.
The huddling classification model generates a huddling detection result after huddling identification. The form of this result is not limited in this embodiment: it may be a huddling probability value, a density degree, or a direct classification such as huddling or not huddling. When a non-categorical result is produced, a huddling threshold must be set; for example, the herd is judged to be huddling when the huddling probability exceeds 0.5.
After the huddling detection result is generated, if it shows that the herd is huddling, huddling alarm information can further be output (the recipient is not limited; it may, for example, be sent only to the breeder), so that the breeder can intervene in time and the herd's normal growth is ensured.
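As a minimal sketch of this thresholding-and-alarm step (the function name and alarm text are illustrative; the 0.5 threshold follows the example above):

```python
def huddling_decision(probability, threshold=0.5):
    """Map a huddling probability to a detection result; emit alarm text
    only when the herd is judged to be huddling."""
    huddled = probability > threshold
    alarm = "huddling alarm: please check the pen" if huddled else None
    return huddled, alarm
```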
Based on the above, the pig herd huddling detection method of this embodiment acquires a picture of a breeding pen in the pigsty, calls a pre-trained neural network model to extract the pig herd foreground, and automatically identifies through a classification model whether the herd is huddling. The method detects huddling automatically, solving the heavy labor cost, time cost, and delayed monitoring inherent in today's manual inspection; automated monitoring and identification reduce contact between humans and the herd, which helps prevent disease transmission and protects herd health; and the method produces accurate huddling detection results that provide effective, reliable reference information for pigsty environment control and for reasoning about and diagnosing pigsty diseases.
It should be noted that the embodiments of the present invention also provide corresponding improvements on the basis of the above embodiments. Steps in the preferred/improved embodiments that are the same as, or correspond to, steps in the above embodiments (and their advantageous effects) can be cross-referenced and are not described in detail again below.
To ensure the accuracy of picture recognition and exclude the influence of environmental factors, the following steps can be executed before the pre-trained neural network model is called to perform mask recognition on the picture:
(1) judging whether the hue, sharpness, pixel values, or acquisition angle of the picture is abnormal;
(2) if an abnormality exists, returning to the step of acquiring a picture of a breeding pen in the pigsty;
(3) if no abnormality exists, executing the step of calling the pre-trained neural network model to perform mask recognition on the picture.
Hue anomaly judgment amounts to detecting abnormal illumination: abnormally lit pictures are removed from the collected pictures so that they do not interfere with pig detection. The concrete steps are not limited in this embodiment; for ease of understanding, one implementation is as follows: call OpenCV (an open-source, cross-platform computer vision and machine learning library) to convert the picture to HSV (Hue, Saturation, Value), a color space built on the intuitive characteristics of color whose parameters are hue (H), saturation (S), and value/brightness (V), and reject as abnormally lit any picture whose H is below 20.
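A sketch of this hue screen, assuming the H < 20 rule is applied to the picture's mean hue (the text does not say how per-pixel hue values are aggregated). To stay self-contained it re-implements the RGB-to-hue conversion on OpenCV's 0-180 H scale (OpenCV stores H as degrees divided by 2) instead of calling OpenCV itself:

```python
import numpy as np

def mean_hue_opencv_scale(rgb):
    """Mean hue of an RGB picture on OpenCV's 0-180 H scale (H = degrees / 2),
    re-implemented in NumPy so the check runs without OpenCV installed."""
    rgb = rgb.astype(np.float32) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    diff = np.where(mx == mn, 1.0, mx - mn)      # avoid divide-by-zero
    h = np.zeros_like(mx)
    h = np.where(mx == r, (60.0 * (g - b) / diff) % 360.0, h)
    h = np.where(mx == g, 60.0 * (b - r) / diff + 120.0, h)
    h = np.where(mx == b, 60.0 * (r - g) / diff + 240.0, h)
    h = np.where(mx == mn, 0.0, h)               # grey pixels carry no hue
    return float(h.mean()) / 2.0

def lighting_ok(rgb, min_hue=20.0):
    """Reject a picture as abnormally lit when its mean H falls below 20."""
    return mean_hue_opencv_scale(rgb) >= min_hue
```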
Sharpness anomaly judgment amounts to measuring how blurred the picture is: blurred pictures are removed from the collected pictures so that low sharpness does not interfere with pig detection. The concrete steps are not limited in this embodiment; for ease of understanding, one implementation is as follows: convert the original picture to a grayscale image (an image represented by gray levels), divide it evenly into 4 regions (for example, split 2x2 like the character 田, which detects well), and compute the Laplacian of each region (used here to obtain second derivatives: a normal picture has clearer boundaries and thus a larger variance, while a blurred picture contains less boundary information and a smaller variance). If the result for any region is below the specified threshold of 100, the picture is blurred and is rejected. The Laplacian edge blur value of each region can be computed following the related art; one method is as follows. First, the Laplacian of each region is computed:
d²f/dx² = f(x+1, y) - 2f(x, y) + f(x-1, y)
d²f/dy² = f(x, y+1) - 2f(x, y) + f(x, y-1)
∇²f = d²f/dx² + d²f/dy²
∇²f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)
where x and y are the coordinates of the pixel and f (x, y) is the value of the pixel.
Then the edge blur value of each region is computed as the variance of its Laplacian responses:

D = (1/n) * Σ (x_i - x̄)² for i = 1 … n, with x̄ = (1/n) * Σ x_i

where x_1, x_2, x_3, …, x_n are the Laplacian values of the pixels in the region and the result D is that region's Laplacian edge blur value.
Only this calculation of the Laplacian edge blur value is described here as an example; other calculation methods can refer to this description and are not repeated.
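The blur check above can be sketched in NumPy using the discrete Laplacian, the even 2x2 split into four regions, and the variance threshold of 100; function names are illustrative:

```python
import numpy as np

def laplacian(gray):
    """Discrete Laplacian f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y),
    computed on the interior pixels of a grayscale region."""
    f = gray.astype(np.float64)
    return (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
            - 4.0 * f[1:-1, 1:-1])

def is_blurry(gray, threshold=100.0):
    """Split the image into 2x2 quadrants and flag it as blurred when the
    variance of the Laplacian in any quadrant falls below the threshold."""
    h, w = gray.shape
    quads = [gray[:h // 2, :w // 2], gray[:h // 2, w // 2:],
             gray[h // 2:, :w // 2], gray[h // 2:, w // 2:]]
    return any(laplacian(q).var() < threshold for q in quads)
```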
Pixel value anomaly judgment amounts to detecting fog on the acquisition lens: pictures taken while the lens was fogged are removed so that lens fog does not interfere with pig detection. The concrete steps are not limited in this embodiment; for ease of understanding, one implementation is as follows:
Minimum value filtering is first applied to the picture (each output pixel takes the minimum value over its neighborhood):

v'(x, y) = min_{(i, j) ∈ Ω(x, y)} v(i, j)

where x and y are the coordinates of a pixel, v(x, y) is its value, Ω(x, y) is the specified pixel region around it, and min takes the minimum pixel value over that region.
The number of pixels whose filtered value exceeds 35 is then counted; if more than 300 such pixels remain, fog was present on the lens when the picture was captured, and the picture is rejected.
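A NumPy sketch of this fog screen, applying a minimum filter and then the 35/300 thresholds from the text; the 3x3 neighbourhood size is an assumed value:

```python
import numpy as np

def min_filter(gray, k=3):
    """Minimum value filter: each output pixel becomes the minimum of its
    k x k neighbourhood (borders handled by replicating edge values)."""
    pad = k // 2
    f = np.pad(gray.astype(np.int32), pad, mode="edge")
    h, w = gray.shape
    out = np.full((h, w), 255, dtype=np.int32)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, f[dy:dy + h, dx:dx + w])
    return out

def lens_foggy(gray, value_thresh=35, count_thresh=300):
    """Fog is assumed when, after min filtering, more than count_thresh
    pixels still exceed value_thresh (35 and 300 follow the text)."""
    return int((min_filter(gray) > value_thresh).sum()) > count_thresh
```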
Acquisition angle anomaly judgment removes pictures captured at an abnormal angle so that the camera angle does not interfere with pig detection. The concrete steps are not limited in this embodiment; for ease of understanding, one implementation is as follows:
First, the gradient of the picture is computed following a line detector (a method for detecting straight lines), whose formulas are:
gx=f(x+1,y)-f(x,y)+f(x+1,y+1)-f(x,y+1)
gy=f(x,y+1)-f(x,y)+f(x+1,y+1)-f(x+1,y)
g = sqrt(gx² + gy²)
where x and y are the coordinates of a pixel and f(x, y) is its value. The angle associated with each pixel forms a level-line field, whose angle is computed by the following formula:
angle = arctan(gx / (-gy))
The pixels are then sorted by gradient magnitude, and region growing starts from the pixel with the largest gradient: the angle of the current pixel is compared with those of its 8 surrounding pixels, a neighbor is added to the region when the angle difference is below a threshold, and each newly added pixel becomes the reference for the next comparison; a region containing too few pixels is discarded. After all pixels are traversed, the outermost pixels in the four directions are found and the region's bounding rectangle is obtained, along with the coordinates (x1, y1) and (x2, y2) of the midpoints of its two short sides, the short-side length (width), the barycenter coordinates (centerx, centery), the principal direction angle, its cosine (degreeX) and sine (degreeY), the probability that a pixel's angle agrees with the principal direction, and the threshold prec on the difference between a pixel's level-line direction and the principal direction; the probability is taken directly as a fraction of 180 degrees, i.e., 1/8, and prec as the corresponding radian threshold of 22.5 degrees. Finally, straight lines reaching a specified length in the picture are counted and their angle distribution is gathered. If the most frequent angle is not within the designated angle range, the pen angle is abnormal and the picture is rejected.
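A simplified sketch of this angle check: it computes the 2x2 gradients given above and histograms the resulting line (level-line) angles of strong-gradient pixels, which stands in for the full region-growing line detector; the expected-angle range is an assumed parameter:

```python
import numpy as np

def level_line_angles(gray):
    """Gradient from the text's 2x2 line-detector formulas and the
    corresponding line (level-line) angle per position, in [0, 180)."""
    f = gray.astype(np.float64)
    gx = f[:-1, 1:] - f[:-1, :-1] + f[1:, 1:] - f[1:, :-1]
    gy = f[1:, :-1] - f[:-1, :-1] + f[1:, 1:] - f[:-1, 1:]
    mag = np.hypot(gx, gy)
    # the level line is perpendicular to the gradient direction
    ang = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    return mag, ang

def dominant_line_angle(gray, mag_thresh=1.0, bin_deg=10):
    """Centre of the most frequent angle bin among strong-gradient pixels,
    standing in for the patent's straight-line angle statistics."""
    mag, ang = level_line_angles(gray)
    strong = ang[mag > mag_thresh]
    hist, edges = np.histogram(strong, bins=180 // bin_deg, range=(0.0, 180.0))
    i = int(hist.argmax())
    return float(edges[i] + edges[i + 1]) / 2.0

def camera_angle_ok(gray, lo=0.0, hi=20.0):
    """Reject the picture when the dominant line angle is outside the
    expected range (lo/hi are illustrative values, not from the patent)."""
    return lo <= dominant_line_angle(gray) <= hi
```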
The specific implementation of the hue, definition, pixel-value, and acquisition-angle abnormality determinations is not limited in this embodiment; the implementations above are described only as examples to deepen understanding, and other implementations of these abnormality checks may refer to the descriptions in this application and are not repeated here.
Based on the above method embodiment, before the pre-trained neural network model is called in step S102 to perform mask recognition on the picture, the following steps may further be performed in order to prevent abnormal data from degrading recognition accuracy:
(1) calling an instance segmentation neural network model (instance segmentation, on the basis of semantic segmentation, assigns a distinct label to each independent instance of the same object class) to perform mask recognition on the picture to obtain an instance recognition result;
the example recognition result at least includes a field mask, the field may refer to a field form such as an iron railing, an acrylic board, and the like, and the corresponding model may be specifically set according to the needs of the actual application object, which is not limited herein. The neural network model segmented by the examples can realize accurate identification and segmentation of a large object, namely a column in the picture.
This embodiment does not limit the specific model structure or the training and inference procedures of the instance segmentation neural network model; one training process and the corresponding inference process are introduced here to aid understanding.
The training process is as follows: visible-light pictures are collected, and the outlines of the objects in the pictures — the pigs and the iron railings and acrylic boards of the main column — are (manually) annotated. A Mask R-CNN (an instance segmentation method) model is then trained on the annotated pictures: a picture is input into a convolutional neural network to extract deep features; a region proposal network generates proposal windows; the proposal windows are mapped onto the last feature map of the convolutional network; a region-of-interest alignment layer produces fixed-size feature maps; and fully connected classification, bounding-box, and mask branches perform regression, yielding the instance segmentation neural network model. The main purpose of this model is to obtain the masks of the pigs, iron railings, and acrylic boards. The model identifies large objects such as iron railings and acrylic boards with high precision, but its mask recognition accuracy for pigs is lower, which is why this embodiment additionally selects a semantic segmentation neural network model to segment the pig masks accurately.
One inference process corresponding to the above training is as follows: the trained instance segmentation model reasons over an input picture — the picture is fed into the convolutional neural network to extract features, the region proposal network generates proposal windows, the windows are mapped onto the last feature map of the convolutional network, the region-of-interest alignment layer produces fixed-size feature maps, and the fully connected classification, bounding-box, and mask branches perform regression — and the model outputs the masks of the various objects (mainly pigs, iron railings, and acrylic boards).
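The inference output of such a model is a set of per-instance soft masks with class labels and confidence scores. A minimal post-processing sketch is shown below; the class ids and thresholds are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

# Illustrative class ids; the patent does not fix a label mapping.
CLASSES = {1: "pig", 2: "iron_railing", 3: "acrylic_board"}

def collect_masks(labels, scores, soft_masks, score_thresh=0.5, mask_thresh=0.5):
    """Group thresholded instance masks by class name.

    labels: (N,) int class ids; scores: (N,) confidence scores;
    soft_masks: (N, H, W) per-instance probability maps, in the style of a
    Mask R-CNN detector's output."""
    out = {name: [] for name in CLASSES.values()}
    for lbl, score, m in zip(labels, scores, soft_masks):
        lbl = int(lbl)
        # drop low-confidence detections and unknown classes
        if score < score_thresh or lbl not in CLASSES:
            continue
        out[CLASSES[lbl]].append(m >= mask_thresh)   # binarize the soft mask
    return out
```

The resulting per-class binary masks feed the column-completeness and target-column checks described next.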
(2) judging whether the instance data of the column mask in the instance recognition result conforms to a complete target column rule;
the example recognition result includes a column mask (such as masks of iron railings and acrylic boards), each column forms a column, for example, four connected iron railings enclose one column, the column mask displays the position and the number of the columns, if example data (including example numbers and example pixel coordinates) of the iron railings and the acrylic boards do not accord with a complete target column rule (specific rule setting can be set according to a column setting mode of an application scene, and this is not limited in this embodiment), it is indicated that the target column is incomplete, the target column is not completely illuminated in the picture, a pig of the target column which is not in the picture may exist, at this time, the precision of bunching detection on the pig is low, the picture with the incomplete column is removed in this embodiment, and step (3) is executed.
(3) If not, executing the step of obtaining the picture of the piggery breeding column;
(4) if so, executing the step of calling the pre-trained neural network model to perform mask recognition on the picture.
If the instance data of the column mask conforms to the complete target column rule, proving that the picture contains a complete target column, the step of calling the pre-trained neural network model to perform mask recognition on the picture can be executed to carry out the subsequent bunching detection.
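One hypothetical form such a completeness rule could take is sketched below; the patent leaves the actual rule to the column layout of the deployment, so both the instance-count requirement and the border test here are illustrative assumptions:

```python
import numpy as np

def column_is_complete(column_masks, min_instances=4):
    """Hypothetical completeness rule: the target column counts as fully in
    frame if there are at least `min_instances` column-structure masks
    (railings / acrylic boards) and their union does not touch the image
    border (a column cut off by the frame edge leaves pixels on the border)."""
    if len(column_masks) < min_instances:
        return False
    union = np.zeros_like(column_masks[0], dtype=bool)
    for m in column_masks:
        union |= m.astype(bool)
    border = np.concatenate(
        [union[0], union[-1], union[:, 0], union[:, -1]])
    return not border.any()
```

A picture failing this check would be discarded and a new picture acquired, per steps (3) and (4).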
Further, when step (4) determines conformity, the following steps may also be performed before executing the step of calling the pre-trained neural network model to perform mask recognition on the picture:
(5) determining, according to the pixel coordinates of the column mask, whether the number of pig mask instances within the target column in the instance recognition result reaches a detection threshold;
Because the picture may contain pigs from other columns besides the target column, and this application judges the pig bunching condition of one column (the target column) at a time, pigs of other columns must not be allowed to interfere with the detection. The pigs within the target column are therefore determined from the pixel coordinates of the pig masks and the column masks (iron railings and acrylic boards); pigs outside the main column are filtered out, and the instance pig count is corrected. If the number of pigs in the target column does not reach the detection threshold (the specific value is not limited in this embodiment and can be set according to actual needs, e.g. 3), bunching cannot occur, so a no-bunching detection result is generated directly and subsequent bunching detection is skipped, which speeds up detection and avoids wasting detection resources. If the threshold is reached, the step of calling the pre-trained neural network model to perform mask recognition on the picture is executed and bunching detection begins.
(6) If so, executing a step of calling a pre-trained neural network model to perform mask recognition on the picture;
(7) if not, generating a bunching detection result indicating no bunching.
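Steps (5)-(7) can be sketched as follows. The centroid-in-region test and the threshold value are illustrative assumptions; the patent only requires that pigs be attributed to the target column from mask pixel coordinates:

```python
import numpy as np

def pigs_in_target_column(pig_masks, column_region, detect_thresh=3):
    """Keep only pigs whose mask centroid falls inside the target column
    region (a boolean mask derived from the railing/board pixel coordinates),
    then decide whether bunching detection should run at all.
    `detect_thresh` is an example value; the patent leaves it configurable."""
    kept = []
    for m in pig_masks:
        ys, xs = np.nonzero(m)
        if ys.size == 0:
            continue
        cy, cx = int(ys.mean()), int(xs.mean())   # mask centroid
        if column_region[cy, cx]:
            kept.append(m)
    run_detection = len(kept) >= detect_thresh
    return kept, run_detection
```

When `run_detection` is False, a no-bunching result is emitted directly and the classification model is never invoked, saving detection resources.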
Corresponding to the above method embodiments, the present invention further provides a swinery bunching detection apparatus; the apparatus described below and the swinery bunching detection method described above may be cross-referenced with each other.
Referring to fig. 2, the apparatus includes the following modules:
the picture acquiring unit 110 is mainly used for acquiring pictures of the pigsty breeding columns;
the swinery foreground extraction unit 120 is mainly used for calling a pre-trained neural network model to perform mask recognition on the picture, and determining the pig picture in the target column in the picture;
the bunching recognition unit 130 is mainly configured to call a pre-trained bunching classification model to perform bunching recognition on the pig pictures to obtain a bunching detection result.
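The three units above can be wired together roughly as follows; the callables are hypothetical stand-ins for the acquisition source and the two models, not interfaces defined by the patent:

```python
class HerdBunchingDetector:
    """Illustrative composition of units 110/120/130."""

    def __init__(self, segment_model, bunching_classifier):
        self.segment_model = segment_model            # unit 120: mask recognition
        self.bunching_classifier = bunching_classifier  # unit 130: bunching classification

    def acquire(self, camera):
        # unit 110: picture acquisition from the breeding-column camera
        return camera()

    def detect(self, camera):
        picture = self.acquire(camera)
        pig_picture = self.segment_model(picture)       # pig foreground in target column
        return self.bunching_classifier(pig_picture)    # bunching detection result
```

This mirrors the method's three-step flow: acquire, segment the pig foreground, then classify bunching.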
Corresponding to the above method embodiment, an embodiment of the present invention further provides pig herd bunching detection equipment; the equipment described below and the pig herd bunching detection method described above may be cross-referenced with each other.
The pig herd bunching detection equipment includes:
a memory for storing a computer program;
and the processor is used for realizing the steps of the pig herd bunching detection method of the embodiment of the method when executing the computer program.
Specifically, referring to fig. 3, which shows a schematic structural diagram of the pig herd bunching detection device provided in this embodiment: the device may differ considerably depending on configuration and performance, and may include one or more central processing units (CPUs) 322 (e.g., one or more processors) and a memory 332, where the memory 332 stores one or more computer application programs 342 or data 344. The memory 332 may be transient or persistent storage. The program stored in the memory 332 may include one or more modules (not shown), and each module may include a series of instruction operations on the device. Further, the central processing unit 322 may be configured to communicate with the memory 332 and execute, on the pig herd bunching detection device 301, the series of instruction operations stored in the memory 332.
The pig herd bunching detection device 301 may also include one or more power supplies 326, one or more wired or wireless network interfaces 350, one or more input/output interfaces 358, and one or more operating systems 341.
The steps of the pig herd bunching detection method described above may be implemented by this structure of the pig herd bunching detection device.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a readable storage medium; the readable storage medium described below and the pig herd bunching detection method described above may be cross-referenced with each other.
A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the herd bunching detection method of the above-mentioned method embodiments.
The readable storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and various other readable storage media capable of storing program codes.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

Claims (10)

1. A pig herd bunching detection method, characterized by comprising the following steps:
acquiring pictures of breeding columns of the pigsty;
calling a pre-trained neural network model to perform mask recognition on the picture, and determining a pig picture in a target column in the picture;
and calling a pre-trained bunching classification model to perform bunching identification on the pig pictures to obtain a bunching detection result.
2. The pig herd bunching detection method of claim 1, wherein before the calling of the pre-trained neural network model to perform mask recognition on the picture, the method further comprises:
judging whether the hue, the definition, the pixel value and the acquisition angle in the picture are abnormal or not;
if the abnormal condition exists, the step of obtaining the picture of the piggery breeding column is executed;
and if no abnormity exists, executing the step of calling the pre-trained neural network model to perform mask recognition on the picture.
3. The swinery bunching detection method of claim 1, wherein the calling a pre-trained neural network model to perform mask recognition on the picture and determine the picture of the pig in the target field in the picture comprises:
calling a semantic segmentation neural network model to perform mask recognition on the picture to obtain a semantic recognition result;
and cutting the target column picture according to the pig mask in the semantic recognition result to obtain the pig picture.
4. The swinery bunching detection method of claim 3, wherein the cropping the target field picture according to the pig mask in the semantic recognition result comprises:
setting the pixel value of the non-pig part image in the image to be 0 according to the pig mask in the semantic recognition result to obtain a pixel setting image;
and cutting out a minimum rectangle comprising all pigs in the pixel setting picture according to the coordinate value of the pig mask to be used as the pig picture.
5. The pig herd bunching detection method of claim 1, wherein before the calling of the pre-trained neural network model to perform mask recognition on the picture, the method further comprises:
calling an instance segmentation neural network model to perform mask recognition on the picture to obtain an instance identification result;
judging whether the instance data of the column mask in the instance identification result conforms to a complete target column rule; the instance data includes an instance number and instance pixel coordinates;
if not, executing the step of acquiring the picture of the piggery breeding column;
and if so, executing the step of calling the pre-trained neural network model to perform mask recognition on the picture.
6. The pig herd bunching detection method according to claim 5, wherein before the step of calling the pre-trained neural network model to perform mask recognition on the picture, the method further comprises:
determining whether the number of the pig mask instances in the instance identification result in the target field reaches a detection threshold value according to the pixel coordinates of the field mask;
if so, executing the step of calling the pre-trained neural network model to perform mask recognition on the picture;
if not, generating a bunching detection result indicating no bunching.
7. The pig herd bunching detection method of claim 1, further comprising:
and if the bundling detection result shows that the pig herds are bundled and gathered, outputting bundling alarm information.
8. A pig herd bunching detection device, characterized by comprising:
the picture acquisition unit is used for acquiring pictures of the piggery breeding columns;
the pig farm foreground extraction unit is used for calling a pre-trained neural network model to perform mask recognition on the picture and determining a pig picture in a target column in the picture;
and the bunching identification unit is used for calling a pre-trained bunching classification model to perform bunching identification on the pig picture to obtain a bunching detection result.
9. Pig herd bunching detection equipment, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the herd bunching detection method as defined in any one of claims 1 to 7 when the computer program is executed.
10. A readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of swine herd bunching detection as defined in any one of claims 1 to 7.
CN202110485028.9A 2021-04-30 2021-04-30 Pig herd bundling detection method, device, equipment and readable storage medium Pending CN113095441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110485028.9A CN113095441A (en) 2021-04-30 2021-04-30 Pig herd bundling detection method, device, equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN113095441A true CN113095441A (en) 2021-07-09

Family

ID=76681121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110485028.9A Pending CN113095441A (en) 2021-04-30 2021-04-30 Pig herd bundling detection method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113095441A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658414A (en) * 2018-12-13 2019-04-19 北京小龙潜行科技有限公司 A kind of intelligent checking method and device of pig
CN109785337A (en) * 2018-12-25 2019-05-21 哈尔滨工程大学 Mammal counting method in a kind of column of Case-based Reasoning partitioning algorithm
CN110222664A (en) * 2019-06-13 2019-09-10 河南牧业经济学院 A kind of feeding monitoring system of intelligent pigsty based on the analysis of video activity
CN110751117A (en) * 2019-10-25 2020-02-04 兰州大学 Unmanned aerial vehicle-based sheep flock and cattle flock quantity monitoring method and device
CN111178197A (en) * 2019-12-19 2020-05-19 华南农业大学 Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN111199535A (en) * 2019-11-28 2020-05-26 北京海益同展信息科技有限公司 Animal state monitoring method and device, electronic equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Chan (陈婵): "Research on the application of instance segmentation algorithms to object inventory counting" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505795A (en) * 2021-07-16 2021-10-15 河南牧原智能科技有限公司 Method and system for detecting diarrhea of herd of pigs
CN114189627A (en) * 2021-11-24 2022-03-15 河南牧原智能科技有限公司 Method and product for acquiring preset angle of camera and monitoring breeding fence
CN114543674A (en) * 2022-02-22 2022-05-27 成都睿畜电子科技有限公司 Detection method and system based on image recognition
CN114543674B (en) * 2022-02-22 2023-02-07 成都睿畜电子科技有限公司 Detection method and system based on image recognition
CN115359410A (en) * 2022-10-21 2022-11-18 正大农业科学研究有限公司 Tie-pile behavior detection method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210709