CN117115754B - Intelligent duck shed monitoring method based on computer vision - Google Patents
Intelligent duck shed monitoring method based on computer vision Download PDFInfo
- Publication number
- CN117115754B CN117115754B CN202311369932.9A CN202311369932A CN117115754B CN 117115754 B CN117115754 B CN 117115754B CN 202311369932 A CN202311369932 A CN 202311369932A CN 117115754 B CN117115754 B CN 117115754B
- Authority
- CN
- China
- Prior art keywords
- duck
- outlier
- individual
- individuals
- cluster
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 241000272525 Anas platyrhynchos Species 0.000 title claims abstract description 259
- 238000012544 monitoring process Methods 0.000 title claims abstract description 101
- 238000000034 method Methods 0.000 title claims abstract description 50
- 241000272522 Anas Species 0.000 claims abstract description 58
- 241000405070 Percophidae Species 0.000 claims abstract description 45
- 235000015872 dietary supplement Nutrition 0.000 claims abstract description 5
- 238000012545 processing Methods 0.000 claims abstract description 4
- 230000011218 segmentation Effects 0.000 claims description 21
- 238000013528 artificial neural network Methods 0.000 claims description 18
- 238000003064 k means clustering Methods 0.000 claims description 13
- 239000003550 marker Substances 0.000 claims description 10
- 238000010606 normalization Methods 0.000 claims description 9
- 238000000513 principal component analysis Methods 0.000 claims description 5
- 239000013589 supplement Substances 0.000 claims description 5
- 230000004927 fusion Effects 0.000 claims description 4
- 238000004364 calculation method Methods 0.000 claims description 3
- 238000013527 convolutional neural network Methods 0.000 description 8
- 230000008569 process Effects 0.000 description 5
- 238000013459 approach Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 230000001502 supplementing effect Effects 0.000 description 3
- 238000001514 detection method Methods 0.000 description 2
- 230000036541 health Effects 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 208000035943 Aphagia Diseases 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000037213 diet Effects 0.000 description 1
- 235000005911 diet Nutrition 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000011478 gradient descent method Methods 0.000 description 1
- 230000005802 health problem Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K17/00—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
- G06K17/0022—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisious for transferring data to distant stations, e.g. from a sensing device
- G06K17/0025—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisious for transferring data to distant stations, e.g. from a sensing device the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K17/00—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
- G06K17/0022—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisious for transferring data to distant stations, e.g. from a sensing device
- G06K17/0029—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisious for transferring data to distant stations, e.g. from a sensing device the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/457—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/763—Non-hierarchical techniques, e.g. based on statistics of modelling distributions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Abstract
The invention relates to the technical field of image data processing, in particular to a duck shed intelligent monitoring method based on computer vision, which comprises the following steps: the method comprises the steps of clustering marking points corresponding to duck individuals in a monitoring image, dividing a plurality of duck individuals into a plurality of clusters, and obtaining the population density of the duck individuals according to the positions of the duck individuals in the clusters and the number of other duck individuals in a certain range; in addition, the duckbill direction of the duckbill in the monitoring image and the relative position direction of the duckbill and the feeder are combined to obtain the outlier factor of the duckbill, and different duckbill in the monitoring image are subjected to food supplement by combining the population density of the duckbill and the outlier factor. According to the invention, through analyzing the behavior patterns of the ducks in the monitoring images, the accuracy of food supplement of the ducks by the feeders is improved, the manual intervention is reduced, the management efficiency and the safety of the duck shed are improved, and the automatic degree is higher.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to an intelligent duck shed monitoring method based on computer vision.
Background
The intelligent monitoring system of the duck shed is generally used for real-time monitoring of the state of the duck shed, monitoring of environmental parameters and safety management functions, and the feeding state of the duck shed influences the overall health level of the duck shed, so that the monitoring of the duck shed is generally required.
In the prior art, duck shed monitoring images are typically analyzed in real time by a computer vision-based method. However, since a large number of ducks exist in the duck shed, the ducks are mutually influenced and interfered, and each duck has certain character characteristics, so that a part of ducks cannot eat all the time, and the ducks are different from ducks with strong eating will, namely, have strong outliers, the behavior patterns of the ducks in the monitoring images need to be analyzed, ducks with strong outliers are identified, and the ducks are supplemented with food.
Disclosure of Invention
The invention provides an intelligent duck shed monitoring method based on computer vision, which aims to solve the existing problems: there are a large amount of ducks in the duck house, influence each other and interfere between the ducks, and every duck all has certain character characteristic, leads to there being some ducks unable to eat all the time, and with the duck that has strong willingness to eat distinguishing, there is stronger outlier promptly, consequently need the analysis to monitor the behavior pattern of ducks in the image, discern the ducks that have stronger outlier, carry out food supplementation to these ducks.
The intelligent duck shed monitoring method based on computer vision adopts the following technical scheme:
the embodiment of the invention provides an intelligent duck shed monitoring method based on computer vision, which comprises the following steps of:
acquiring a monitoring image of the duck shed and the positions and distances between the duck individuals in the duck shed and the feeder, and marking the duck individuals in the monitoring image through example segmentation to obtain marking areas corresponding to the duck individuals;
marking the mass center of a mark area corresponding to any duck individual as mark points, clustering all the mark points by using a K-means clustering algorithm to obtain a plurality of cluster clusters, obtaining the cluster center of each cluster, obtaining the cluster radius according to the distance between the mark points in the cluster and the cluster center, and obtaining the population density of the mark points according to the number of pixel points in the mark area, the number of mark points in the cluster and the number of mark points in the cluster radius range, wherein each duck individual corresponds to one population density;
the corresponding distance between the duck individuals and the feeders closest to the duck individuals is recorded as the nearest feeding distance of the duck individuals; identifying a duckbill region of each duck in the monitoring image, and obtaining a duckbill direction according to coordinates of pixel points in the duckbill region; obtaining the relative position direction of the duck individuals according to the positions of the duck individuals and the feeders, and recording the fusion result of the relative position direction, the nearest feeding distance and the duckbill direction of the duck individuals as outlier factors of the duck individuals in the corresponding monitoring images when the feeders are opened each time; obtaining the outlier of the duck individuals according to the population density and the outlier factor;
and (5) supplementing food to the individual ducks through the feeder according to the size of the outlier.
Further, the method for obtaining the monitoring image of the duck shed and the position and distance between the duck individuals in the duck shed and the feeder, and marking the duck individuals in the monitoring image by example segmentation to obtain marking areas corresponding to the duck individuals comprises the following specific steps:
firstly, taking monitoring images of continuous frames of a duck shed by using a monitoring camera in overlooking mode, and marking individual ducks in the monitoring images by using an example segmentation neural network with a duck identification function to obtain a corresponding marking area of each individual duck in the monitoring images;
then, an RFID chip is implanted into the body of each duck individual in the duck shed, an RFID reader is arranged in each feeder in the duck shed, and the positions of the feeder and each duck in the duck shed and the distance between each duck and any feeder are obtained through the RFID chip and the RFID reader.
Further, the method for clustering all the mark points by using the K-means clustering algorithm to obtain a plurality of clusters, obtaining a cluster center of each cluster, and obtaining a cluster radius according to the distance between the mark point and the cluster center in the cluster, comprises the following specific steps:
firstly, marking the mass center of a marking area corresponding to each duck individual as a marking point to obtain a plurality of marking points, wherein one duck individual corresponds to one marking point, and clustering all marking points in a monitoring image by using a K-means clustering algorithm to obtain a plurality of clustering clusters;
and then, obtaining the cluster centers of all the clusters, marking the maximum value of the distance between the mark point and the cluster center in any one cluster as the radius factor of the cluster, and marking the average value of the radius factors of all the clusters as the cluster radius.
Further, the method for obtaining the population density of the mark points according to the number of the pixel points in the mark area, the number of the mark points in the cluster and the number of the mark points in the cluster radius range comprises the following specific steps:
firstly, the number of pixel points of a mark region corresponding to all mark points in any cluster is obtained and is marked as a first number of corresponding clusters, and the number of mark points in the cluster is marked as a second number of clusters;
then, taking any one mark point in the cluster as a center, obtaining the number of the mark points in the cluster radius range, and marking the number as a density factor of the mark points as the center, wherein each duck individual corresponds to one density factor;
finally, the specific calculation method of the population density of the marked points in the cluster comprises the following steps:
;
wherein,indicate->The>Population density of individual marker points; />Indicate->A second number of clusters; />Indicate->A first number of clusters; />Indicate->The>The density factor of the individual marker points.
Further, the method for identifying the duckbill region of each duck in the monitoring image and obtaining the duckbill direction according to the coordinates of the pixel points in the duckbill region comprises the following specific steps:
detecting duckbill areas of each duck in the monitoring image by using a YOLO3 image recognition network, wherein the input of the YOLO3 image recognition network is the monitoring image, and the output is a duckbill positioning frame; and detecting the communicating domain of the duckbill positioning frame, acquiring coordinates of all pixel points in the communicating domain, and performing principal component analysis on the coordinates of all pixel points in the communicating domain to acquire a principal component direction of the communicating domain, wherein the principal component direction of the communicating domain is recorded as the duckbill direction of the duckbill corresponding to the communicating domain.
Further, the method includes obtaining a relative position direction of the duck individual according to positions of the duck individual and the feeder, and recording a fusion result of the relative position direction, the nearest feeding distance and the duckbill direction of the duck individual as an outlier factor of the duck individual in a corresponding monitoring image when the feeder is opened each time, wherein the method comprises the following steps:
firstly, obtaining the distance corresponding to the distance between a duck individual and the feeder closest to the duck individual after each opening of the feeder, recording the distance as the nearest feeding distance of the duck individual, obtaining the total number of times the feeder is opened within any period, carrying out normalization processing on the nearest feeding distances of all the duck individuals by using a linear normalization method to obtain a normalized nearest feeding distance, and recording the opening times corresponding to the feeder as the condition number of times of the duck individuals when the normalized nearest feeding distance of the duck individual is smaller than a preset distance parameter for the first time within the period; when the density factor of any duck individual is smaller than a preset density factor threshold value, presetting a first weight of the duck individual; when the density factor of the duck individual is greater than or equal to a preset density factor threshold value, presetting a second weight of the duck individual; the first weight is larger than the second weight;
then, under the condition that the feeder is opened for a plurality of times, combining the nearest feeding distance, the duckbill direction and the relative position direction to respectively obtain a first outlier coefficient and a second outlier coefficient of any duck in the monitoring image;
and finally, multiplying the first outlier coefficient by a first weight to obtain a first factor, multiplying the second outlier coefficient by a second weight to obtain a second factor, and recording the sum of the first factor and the second factor as the outlier factor of the duck individual.
Further, the specific acquisition method of the first outlier coefficient is as follows:
;
wherein,representing a first outlier coefficient; />Indicating that the feeder is at +.>Duckbill direction of the duck individual when the duck is opened for the second time;indicating that the feeder is at +.>The relative position direction of the duck individuals when the duck is opened for the second time; />Indicating that the feeder is at +.>The nearest feeding distance of the duck individuals when the duck individuals are opened for the second time; />Indicating that the feeder is at +.>The nearest feeding distance of the duck individuals when the duck individuals are opened for the second time; />The condition times of the individual ducks are represented; />Representing the acquisition of absolute values.
Further, the specific acquisition method of the second outlier coefficient is as follows:
;
wherein,representing a second outlier coefficient; />Indicating the total number of times the feeder was opened.
Further, the method for obtaining the outlier of the duck individuals according to the population density and the outlier factor comprises the following specific steps:
the ratio of the outlier factor of any duck individual to the normalized population density is marked as an outlier ratio, the outlier ratios of all the duck individuals are subjected to linear normalization, and the normalized result is marked as the outlier of the duck individuals.
Further, the method for supplementing food to the duck individuals through the feeder according to the size of the outlier comprises the following specific steps:
firstly, marking duck individuals with outliers larger than or equal to a preset outlier threshold as strong outlier ducks;
then, at the middle moment when the feeders are opened twice correspondingly, clustering the marking points corresponding to the strong outlier ducks in the monitoring image by using a K-means clustering algorithm, marking the obtained clusters as strong outlier clusters, and opening the feeder closest to the cluster center of each strong outlier cluster in the monitoring image so as to supplement food for the strong outlier ducks.
The technical scheme of the invention has the beneficial effects that: clustering marking points corresponding to the duck individuals in the monitoring image, primarily dividing the duck individuals into clusters which belong to according to the distances between the marking points, and obtaining the population density of the duck individuals, which is the degree characteristic reflecting the distance between the duck individuals and the population, according to the positions of the duck individuals in the clusters and the number of other duck individuals in a certain range; in addition, the duckbill direction reflecting the direction of the line of sight of the ducks in the monitoring image and the relative direction of the ducks and the feeders are combined to obtain the outlier factor reflecting the eating willingness of the ducks, the ducks in the monitoring image are labeled by combining the population density of the ducks and the outlier factor, the accuracy of the ducks in the monitoring image in food supplementing is improved through intelligent analysis of the behavior patterns of the ducks in the monitoring image, automatic feeding is realized, manual intervention is reduced, the management efficiency and safety of the duck shed are improved, and the automatic feeding machine has higher automation degree.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of steps of the intelligent monitoring method for duck shed based on computer vision.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of specific implementation, structure, characteristics and effects of the intelligent duck shed monitoring method based on computer vision according to the invention with reference to the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the intelligent duck shed monitoring method based on computer vision provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a computer vision-based intelligent duck shed monitoring method according to an embodiment of the present invention is shown, the method includes the following steps:
step S001: and acquiring a real-time monitoring image, marking the duck individuals in the monitoring image by using example segmentation, and acquiring the distance between the duck individuals and the feeder by using an RFID chip.
It should be noted that, the specific scenario targeted by this embodiment is: the method comprises the steps of identifying different individuals in a monitoring image by using an example segmentation network, and obtaining the overall health condition of the ducks according to the individual performances under different frames, for example, in the process of putting food, detecting the diet condition of the individuals by using the network, wherein in the process, the ducks tend to get together to eat, so that part of the strongly-outlier ducks are repelled, the food quantity is not matched, and the health problem gradually occurs.
Specifically, in order to implement the intelligent monitoring method for duck shed based on computer vision provided in this embodiment, firstly, a monitoring image of the duck shed needs to be collected, and the specific process is as follows:
firstly, taking monitoring images of continuous frames of a duck shed by using a monitoring camera in overlooking mode, and marking individual ducks in the monitoring images by using an example segmentation neural network with a duck identification function to obtain a marking area corresponding to each individual duck in the monitoring images.
It should be noted that, the example segmentation neural network is an existing example segmentation task for solving two problems of object detection and semantic segmentation, in this embodiment, a Mask R-CNN neural network is selected as the example segmentation neural network to perform example segmentation on a monitored image, and a marking area of each duck individual in the monitored image is obtained, and the training process of the example segmentation neural network is as follows: firstly, obtaining a large number of monitoring images of different duck houses at different times, marking all duck individuals in the monitoring images by professionals in the field of computer vision, taking any monitoring image containing marking information as one sample, and forming a data set for training a Mask R-CNN neural network by a large number of samples; then, selecting a Mask R-CNN neural network as a network model of an example segmentation neural network, taking a data set as input of the Mask R-CNN neural network, initializing parameters of the Mask R-CNN neural network, and optimizing the parameters of the Mask R-CNN neural network by utilizing a random gradient descent method in combination with a cross entropy loss function until the Mask R-CNN neural network model converges to obtain the trained example segmentation neural network; and finally, detecting and labeling the duck individuals in the monitoring image by using the trained example segmentation neural network. It should be noted that: in this embodiment, the Mask R-CNN neural network is selected as the example segmentation neural network to segment the monitoring image, and as other embodiments, other example segmentation neural networks may be used to segment the monitoring image.
Then, an RFID chip is implanted into the body of each duck individual in the duck shed, an RFID reader is arranged in each feeder in the duck shed, and information interaction is carried out between the RFID chip and the RFID reader to obtain the positions of the feeders and each duck in the duck shed and the distance between each duck and any feeder.
So far, the method is used for obtaining a multi-frame monitoring image, a plurality of ducks in the monitoring image and the distance between each duck and the feeder.
Step S002: and obtaining the marking points of each marking area and clustering, and obtaining the group density of the clustering cluster where each marking point is located according to the number of the marking points in the clustering clusters and the number of the pixel points in the marking area.
It should be noted that, in order to obtain whether the duck individuals in the duck group are outliers from the monitoring images of the continuous frames, in this embodiment, referred to as the outlier of the duck individuals, the feeding state of the duck individuals needs to be analyzed, and when the density of the group where the duck individuals are located is greater, the outlier of the duck individuals is weaker; conversely, the smaller the density of the population in which the duck individuals are located, the stronger the outlier of the duck individuals. In addition, in order to more accurately obtain the outlier of the duck individuals, the outlier of the duck individuals needs to be analyzed by combining the distance and the relative direction of the duck individuals and the feeder.
The positions of the duck individuals and the corresponding population densities thereof are obtained, so that different duck individuals are classified, and the higher the population density of the population where the duck individuals are located is, the more close the label with strong outlier is correspondingly given to the duck individuals. In order to accurately acquire the population density in the monitoring image, the population to which a plurality of duck individuals subjected to example segmentation belong needs to be acquired, and then the density of the population to which the duck individuals belong is obtained.
Specifically, in the step (1), firstly, the mass center of a marking area corresponding to each duck individual is marked as a marking point to obtain a plurality of marking points, one duck individual corresponds to one marking point, and all marking points in a monitoring image are clustered by using a K-means clustering algorithm to obtain a plurality of clustering clusters.
It should be noted that, the K value of the K-means clustering algorithm is preset to be 5 according to experience, and can be adjusted according to actual conditions, and the embodiment is not particularly limited; in addition, the K-means clustering algorithm is an existing algorithm, so this embodiment is not repeated.
And then, the number of pixels of the mark areas corresponding to all the mark points in any cluster is obtained and is marked as the first number of the corresponding cluster, and the number of the mark points in the cluster is marked as the second number of the cluster.
Secondly, obtaining cluster centers of all clusters, marking the maximum value of the distance between a mark point in any cluster and the cluster center as a radius factor of the cluster, marking the average value of the radius factors of all clusters as a cluster radius, taking any mark point in the cluster as the center, obtaining the number of mark points in the range of the cluster radius, marking the number of mark points as the density factor of the mark point as the center, and each duck individual corresponds to one density factor.
Finally, according to the first quantity, the second quantity and the density factors of the marked points, the population density of the marked points in the cluster is obtained, and the specific calculation method is as follows:
;
wherein,indicate->The>Population density of individual marker points; />Indicate->A second number of clusters; />Indicate->A first number of clusters; />Indicate->The>The density factor of the individual marker points.
It should be noted that, the larger the value of the density factor, the more the corresponding marker points are located in the cluster center, the larger the population density difference between the marker points and other marker points in the cluster is, i.e. the weaker the outlier of the marker points is.
Because each duck corresponds to a mark point, each individual duck also corresponds to a population density.
And (2) obtaining the population density of each marking point in different clustering clusters, and normalizing the population densities of all marking points by using a linear normalization method to obtain the normalized population density of the marking points.
So far, the normalized population density of each marking point in the monitoring image is obtained through the method.
Step S003: and obtaining the outlier factor of the duck individuals according to the positions and directions between the duck individuals and the feeders, and obtaining the outlier of the ducks by combining the population density of the duck individuals and the outlier factor.
In order to obtain the outliers of different duck individuals more objectively, further analyzing and obtaining factors affecting the outliers to obtain the outliers of the duck individuals; the present embodiment obtains outlier factors by correlating the location between the individual ducks and the feeder and the behavior of the individual ducks.
Specifically, step (1), firstly, detecting duckbill regions of each duck individual in a monitoring image by using a YOLO3 image recognition network, wherein the input of the YOLO3 image recognition network is the monitoring image, and the output is a duckbill positioning frame; and detecting the communicating domain of the duckbill positioning frame, acquiring coordinates of all pixel points in the communicating domain, and performing principal component analysis on the coordinates of all pixel points in the communicating domain to acquire a principal component direction of the communicating domain, wherein the principal component direction of the communicating domain is recorded as the duckbill direction of the duckbill corresponding to the communicating domain.
It should be noted that YOLO3 is an existing target detection algorithm, and the principal component analysis algorithm is also an existing algorithm, so that in this embodiment, too much description is not made on the YOLO3 image recognition network and the principal component analysis algorithm.
Then, the distance between the duck individual and the nearest feeder after each opening of the feeder is obtained and is recorded as the nearest distance of the duck individualFeeding distance, obtaining total opening times of the feeder in any period of timeNormalizing the nearest feeding distances of all the duck individuals by using a linear normalization method to obtain normalized nearest feeding distances, and recording the opening times corresponding to the feeders as the condition times of the duck individuals when the normalized nearest feeding distances of the duck individuals in the period of time are smaller than a preset distance parameter for the first time; when the density factor of any duck individual is smaller than the preset density factor threshold value, the first weight of the duck individual is marked as +.>When the density factor of the duck individual is greater than or equal to a preset density factor threshold value, the second weight of the duck individual is marked as +.>。
The distance parameter is preset to 0.1 according to experience, and may be adjusted according to actual conditions, and the embodiment is not particularly limited.
It should be noted that, in this embodiment, the sum of the first weight and the second weight should be 1 and the first weight is greater than the second weight, the first weight is preset empiricallySecond weight->The specific values of the first weight and the second weight may be adjusted according to the actual situation, which is not specifically limited in this embodiment.
It should be noted that, the density factor threshold is preset to be 5 according to experience, and may be adjusted according to practical situations, and the embodiment is not specifically limited.
Step (2), according to the positions of any one duck body and the feeder, obtaining a straight line where the duck body and the feeder are located, and recording an included angle between any one duck body and the straight line corresponding to the feeder closest to the duck body in the horizontal direction as a relative position direction of the duck body; acquiring the relative position direction, the nearest feeding distance and the duckbill direction of a duck individual in a corresponding monitoring image when the feeder is opened each time; the method for obtaining the outlier factor of any duck individual in the monitoring image by combining the nearest feeding distance, the duckbill direction and the relative position direction comprises the following steps of:
;
wherein,an outlier factor representing the individual ducks; />And->Respectively representing a first weight and a second weight; />Indicating that the feeder is at +.>Duckbill direction of the duck individual when the duck is opened for the second time; />Indicating that the feeder is at +.>The relative position direction of the duck individuals when the duck is opened for the second time; />Indicating that the feeder is at +.>The nearest feeding distance of the duck individuals when the duck individuals are opened for the second time; />Indicating that the feeder is at the firstThe nearest feeding distance of the duck individuals when the duck individuals are opened for the second time; />Indicating the total number of times the feeder is opened; />The condition times of the individual ducks are represented; />Representing the acquisition of absolute values.
First outlier coefficientAnd a second outlier coefficient->Respectively show the direction difference between the viewing angle direction of the duck individual and the corresponding direction of the nearest feeder before and after the nearest feeding distance between the first time and the feeder is smaller than the distance parameter, namely +.>At the same time, the difference of the nearest feeding distance between the individual ducks and the feeder when the feeder is opened twice is reflected, namely +.>。
In the method for calculating the outlier factor of the individual duck, the case that the denominator is 0 is avoided in order to ensure that the denominator is established when the numerical value is 0.1.
It should be noted that, the first weight and the second weight are utilized to adjust the outlier of the duck individual under different conditions, when the outlier of the duck individual is weak, the duck individual actively searches for and approaches the feeder to try to eat, and when the outlier of the duck individual is strong, the willingness of the duck individual to search for and approach the feeder is lower, so when the duck individual has willingness to actively approach the feeder in the monitoring image, the outlier of the duck individual is weak, and the second weight corresponding to the second outlier coefficient is smaller than the first weight corresponding to the first outlier coefficient.
It should be noted that the difference between the viewing angle direction of the duck individual and the direction corresponding to the nearest feeder, namelyIndicating whether the individual ducks are looking at the feeder that has been opened and attempting to eat, that is, the smaller the difference in direction, the stronger the desire to eat the ducks, and the weaker the outlier of the individual ducks.
It should be noted that the difference of the nearest feeding distance between the individual ducks and the feeder when the feeder is opened twice adjacent to each other, i.e.The greater the difference in (2), the more pronounced the change in position of the duck, the more intense the desire to eat, and the lower the outlier.
And (3) marking the ratio of the outlier factor of any duck individual to the normalized population density as an outlier ratio, carrying out linear normalization on the outlier ratios of all the duck individuals, and marking the normalized result as the outlier of the duck individuals.
So far, the outlier of each duck individual in the monitoring image is obtained through the method.
Step S004: utilize the individual outlier nature of duck to carry out automatic feeding through the feeder to the individual duck, realize the intelligent monitoring of duck house.
It should be noted that, this embodiment combines the outlier to annotate the classification to every duck individual, because the weak outlier's duck individual can all eat after the feeder opens at every turn, and the strong outlier's duck is not enough to eat, consequently need to put in supplementary food to the strong outlier's duck individual through the nearest feeder according to the classification condition, because the population density that the strong outlier's duck individual corresponds is lower, there is great distance with weak outlier's duck individual between, consequently can provide food for the strong outlier's duck individual accurately, realize intelligent monitoring's effect.
Specifically, first, the duck individuals with the outlier greater than or equal to a preset outlier threshold are marked as strong outlier ducks, and the duck individuals with the outlier less than the preset outlier threshold are marked as weak outlier ducks.
It should be noted that, the outlier threshold is preset to be 0.5 according to experience, and may be adjusted according to actual situations, which is not specifically limited in this embodiment.
Then, at the middle moment when the feeders are opened twice correspondingly, clustering the marking points corresponding to the strong outlier ducks in the monitoring image by using a K-means clustering algorithm, marking the obtained plurality of clusters as strong outlier clusters, and opening the feeder closest to the cluster center of each strong outlier cluster in the monitoring image so as to supplement food for the strong outlier ducks.
It should be noted that, in this embodiment, the middle time corresponding to two adjacent times of opening of the feeder is selected to supplement food for the strong outlier ducks, and in specific implementation, other times may be selected to open the feeder to supplement food.
It should be noted that, when clustering the mark points corresponding to the strong outlier ducks in the monitoring image, the K value of the K-means clustering algorithm may be set to half the number of feeders, and other values may be selected in the implementation process, which is not specifically limited in this embodiment.
This embodiment is completed.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.
Claims (8)
1. The intelligent duck shed monitoring method based on computer vision is characterized by comprising the following steps of:
acquiring a monitoring image of the duck shed and the positions and distances between the duck individuals in the duck shed and the feeder, and marking the duck individuals in the monitoring image through example segmentation to obtain marking areas corresponding to the duck individuals;
marking the mass center of a mark area corresponding to any duck individual as mark points, clustering all the mark points by using a K-means clustering algorithm to obtain a plurality of cluster clusters, obtaining the cluster center of each cluster, obtaining the cluster radius according to the distance between the mark points in the cluster and the cluster center, and obtaining the population density of the mark points according to the number of pixel points in the mark area, the number of mark points in the cluster and the number of mark points in the cluster radius range, wherein each duck individual corresponds to one population density;
the corresponding distance between the duck individuals and the feeders closest to the duck individuals is recorded as the nearest feeding distance of the duck individuals; identifying a duckbill region of each duck in the monitoring image, and obtaining a duckbill direction according to coordinates of pixel points in the duckbill region; obtaining the relative position direction of the duck individuals according to the positions of the duck individuals and the feeders, and recording the fusion result of the relative position direction, the nearest feeding distance and the duckbill direction of the duck individuals as outlier factors of the duck individuals in the corresponding monitoring images when the feeders are opened each time; obtaining the outlier of the duck individuals according to the population density and the outlier factor;
food supplement is carried out on the individual ducks through the feeder according to the size of the outlier;
the method for obtaining the population density of the mark points according to the number of the pixel points in the mark area, the number of the mark points in the cluster and the number of the mark points in the cluster radius range comprises the following specific steps:
firstly, the number of pixel points of a mark region corresponding to all mark points in any cluster is obtained and is marked as a first number of corresponding clusters, and the number of mark points in the cluster is marked as a second number of clusters;
then, taking any one mark point in the cluster as a center, obtaining the number of the mark points in the cluster radius range, and marking the number as a density factor of the mark points as the center, wherein each duck individual corresponds to one density factor;
finally, the specific calculation method of the population density of the marked points in the cluster comprises the following steps:
;
wherein,indicate->The>Population density of individual marker points; />Indicate->A second number of clusters; />Indicate->A first number of clusters; />Indicate->The>Density factors of the individual marker points;
the method for obtaining the outlier of the duck individuals according to the population density and the outlier factor comprises the following specific steps:
the ratio of the outlier factor of any duck individual to the normalized population density is marked as an outlier ratio, the outlier ratios of all the duck individuals are subjected to linear normalization, and the normalized result of the outlier ratio is marked as the outlier of the duck individuals.
2. The intelligent duck shed monitoring method based on computer vision according to claim 1, wherein the steps of marking individual ducks in the monitoring image by example segmentation to obtain marking areas corresponding to the individual ducks comprise the following specific steps:
and shooting monitoring images of continuous frames of the duck shed by using a monitoring camera in a overlooking mode, and marking the duck individuals in the monitoring images by using an example segmentation neural network with a duck identification function to obtain a marking area corresponding to each duck individual in the monitoring images.
3. The intelligent duck shed monitoring method based on computer vision according to claim 1, wherein the clustering of all the mark points by using a K-means clustering algorithm to obtain a plurality of clusters, obtaining a cluster center of each cluster, and obtaining a cluster radius according to the distance between the mark point and the cluster center in the cluster, comprises the following specific steps:
firstly, marking the mass center of a marking area corresponding to each duck individual as a marking point to obtain a plurality of marking points, wherein one duck individual corresponds to one marking point, and clustering all marking points in a monitoring image by using a K-means clustering algorithm to obtain a plurality of clustering clusters;
and then, obtaining the cluster centers of all the clusters, marking the maximum value of the distance between the mark point and the cluster center in any one cluster as the radius factor of the cluster, and marking the average value of the radius factors of all the clusters as the cluster radius.
4. The intelligent duck shed monitoring method based on computer vision according to claim 1, wherein the specific method for identifying the duckbill region of each individual duck in the monitoring image and obtaining the duckbill direction according to the coordinates of the pixel points in the duckbill region comprises the following steps:
detecting duckbill areas of each duck in the monitoring image by using a YOLO3 image recognition network, wherein the input of the YOLO3 image recognition network is the monitoring image, and the output is a duckbill positioning frame; and detecting the communicating domain of the duckbill positioning frame, acquiring coordinates of all pixel points in the communicating domain, and performing principal component analysis on the coordinates of all pixel points in the communicating domain to acquire a principal component direction of the communicating domain, wherein the principal component direction of the communicating domain is recorded as the duckbill direction of the duckbill corresponding to the communicating domain.
5. The intelligent monitoring method of duck shed based on computer vision according to claim 1, wherein the method for obtaining the relative position direction of the duck individual according to the positions of the duck individual and the feeder, and recording the fusion result of the relative position direction, the nearest feeding distance and the duckbill direction of the duck individual as the outlier factor of the duck individual in the corresponding monitoring image when the feeder is opened each time comprises the following specific steps:
firstly, obtaining the distance corresponding to the distance between a duck individual and the feeder closest to the duck individual after each opening of the feeder, recording the distance as the nearest feeding distance of the duck individual, obtaining the total number of times the feeder is opened within any period, carrying out normalization processing on the nearest feeding distances of all the duck individuals by using a linear normalization method to obtain a normalized nearest feeding distance, and recording the opening times corresponding to the feeder as the condition number of times of the duck individuals when the normalized nearest feeding distance of the duck individual is smaller than a preset distance parameter for the first time within the period; when the density factor of any duck individual is smaller than a preset density factor threshold value, presetting a first weight of the duck individual; when the density factor of the duck individual is greater than or equal to a preset density factor threshold value, presetting a second weight of the duck individual; the first weight is larger than the second weight, and the sum of the first weight and the second weight is 1;
then, under the condition that the feeder is opened for a plurality of times, combining the nearest feeding distance, the duckbill direction and the relative position direction to respectively obtain a first outlier coefficient and a second outlier coefficient of any duck in the monitoring image;
and finally, multiplying the first outlier coefficient by a first weight to obtain a first factor, multiplying the second outlier coefficient by a second weight to obtain a second factor, and recording the sum of the first factor and the second factor as the outlier factor of the duck individual.
6. The intelligent duck shed monitoring method based on computer vision as in claim 5, wherein the specific acquisition method of the first outlier coefficient is as follows:
;
wherein,representing a first outlier coefficient; />Indicating that the feeder is at +.>Duckbill direction of the duck individual when the duck is opened for the second time; />Indicating that the feeder is at +.>The relative position direction of the duck individuals when the duck is opened for the second time; />Indicating that the feeder is at +.>The nearest feeding distance of the duck individuals when the duck individuals are opened for the second time; />Indicating that the feeder is at +.>The nearest feeding distance of the duck individuals when the duck individuals are opened for the second time; />The condition times of the individual ducks are represented; />Representing the acquisition of absolute values.
7. The intelligent monitoring method for duck shed based on computer vision as in claim 6, wherein the specific acquisition method for the second outlier coefficient is as follows:
;
wherein,representing a second outlier coefficient; />Indicating the total number of times the feeder was opened.
8. The intelligent monitoring method for duck shed based on computer vision according to claim 1, wherein the food supplement is carried out on the individual ducks through the feeder according to the size of the outlier, comprising the following specific steps:
firstly, marking duck individuals with outliers larger than or equal to a preset outlier threshold as strong outlier ducks;
then, at the middle moment when the feeders are opened twice correspondingly, clustering the marking points corresponding to the strong outlier ducks in the monitoring image by using a K-means clustering algorithm, marking the obtained clusters as strong outlier clusters, and opening the feeder closest to the cluster center of each strong outlier cluster in the monitoring image so as to supplement food for the strong outlier ducks.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311369932.9A CN117115754B (en) | 2023-10-23 | 2023-10-23 | Intelligent duck shed monitoring method based on computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311369932.9A CN117115754B (en) | 2023-10-23 | 2023-10-23 | Intelligent duck shed monitoring method based on computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117115754A CN117115754A (en) | 2023-11-24 |
CN117115754B true CN117115754B (en) | 2023-12-26 |
Family
ID=88805958
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311369932.9A Active CN117115754B (en) | 2023-10-23 | 2023-10-23 | Intelligent duck shed monitoring method based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117115754B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101405054A (en) * | 2005-12-15 | 2009-04-08 | 雀巢技术公司 | Compositions and methods for preserving brain function |
CN102016745A (en) * | 2008-01-23 | 2011-04-13 | 加州大学评议会 | Systems and methods for behavioral monitoring and calibration |
CN107223019A (en) * | 2013-09-25 | 2017-09-29 | 胺细拉健康公司 | Composition and preparation and its generation and application method for maintaining and improving muscle quality, intensity and performance |
CN108175389A (en) * | 2017-12-22 | 2018-06-19 | 北京农业信息技术研究中心 | A kind of Multi-source Information Fusion milk cow behavior monitoring system and method |
KR20190089468A (en) * | 2018-01-22 | 2019-07-31 | 명홍철 | Method and apparatus of controlling a lighting system for raising chickens |
CN110264471A (en) * | 2019-05-21 | 2019-09-20 | 深圳壹账通智能科技有限公司 | A kind of image partition method, device, storage medium and terminal device |
CN111507179A (en) * | 2020-03-04 | 2020-08-07 | 杭州电子科技大学 | Live pig feeding behavior analysis method |
CN113255873A (en) * | 2021-06-02 | 2021-08-13 | 广州大学 | Clustering longicorn herd optimization method, system, computer equipment and storage medium |
CN114155377A (en) * | 2021-11-18 | 2022-03-08 | 潘磊 | Poultry self-adaptive feeding method based on artificial intelligence and growth cycle analysis |
-
2023
- 2023-10-23 CN CN202311369932.9A patent/CN117115754B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101405054A (en) * | 2005-12-15 | 2009-04-08 | 雀巢技术公司 | Compositions and methods for preserving brain function |
CN102016745A (en) * | 2008-01-23 | 2011-04-13 | 加州大学评议会 | Systems and methods for behavioral monitoring and calibration |
CN107223019A (en) * | 2013-09-25 | 2017-09-29 | 胺细拉健康公司 | Composition and preparation and its generation and application method for maintaining and improving muscle quality, intensity and performance |
CN108175389A (en) * | 2017-12-22 | 2018-06-19 | 北京农业信息技术研究中心 | A kind of Multi-source Information Fusion milk cow behavior monitoring system and method |
KR20190089468A (en) * | 2018-01-22 | 2019-07-31 | 명홍철 | Method and apparatus of controlling a lighting system for raising chickens |
CN110264471A (en) * | 2019-05-21 | 2019-09-20 | 深圳壹账通智能科技有限公司 | A kind of image partition method, device, storage medium and terminal device |
CN111507179A (en) * | 2020-03-04 | 2020-08-07 | 杭州电子科技大学 | Live pig feeding behavior analysis method |
CN113255873A (en) * | 2021-06-02 | 2021-08-13 | 广州大学 | Clustering longicorn herd optimization method, system, computer equipment and storage medium |
CN114155377A (en) * | 2021-11-18 | 2022-03-08 | 潘磊 | Poultry self-adaptive feeding method based on artificial intelligence and growth cycle analysis |
Non-Patent Citations (2)
Title |
---|
四川省石渠县西藏盘羊种群与分布现状调查;周华明;吴猛;李静;李志明;王杰;;兽类学报(第04期);346-354页 * |
贵州省威宁自治县肉牛产业调研及高质量发展建议;欧仁等;《贵州畜牧兽医》;第47卷(第2期);1-3页 * |
Also Published As
Publication number | Publication date |
---|---|
CN117115754A (en) | 2023-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106778902B (en) | Dairy cow individual identification method based on deep convolutional neural network | |
CN110046631B (en) | System and method for automatically inferring changes in spatiotemporal images | |
CN110163236B (en) | Model training method and device, storage medium and electronic device | |
Yang et al. | A review of video-based pig behavior recognition | |
CN112016527B (en) | Panda behavior recognition method, system, terminal and medium based on deep learning | |
Li et al. | Group-housed pig detection in video surveillance of overhead views using multi-feature template matching | |
Kaixuan et al. | Target detection method for moving cows based on background subtraction | |
CN112131927A (en) | Sow delivery time prediction system based on posture transformation characteristics in later gestation period | |
CN109376584A (en) | A kind of poultry quantity statistics system and method for animal husbandry | |
Noe et al. | Automatic detection and tracking of mounting behavior in cattle using a deep learning-based instance segmentation model | |
CN113470076A (en) | Multi-target tracking method for yellow-feather chickens in flat-breeding henhouse | |
CN116778531A (en) | Beef cattle face recognition method based on Yolov5-ASFF | |
CN115861721A (en) | Livestock and poultry breeding spraying equipment state identification method based on image data | |
Wang et al. | Pig face recognition model based on a cascaded network | |
CN114581948A (en) | Animal face identification method | |
CN108280516B (en) | Optimization method for mutual-pulsation intelligent evolution among multiple groups of convolutional neural networks | |
CN117115754B (en) | Intelligent duck shed monitoring method based on computer vision | |
CN116206208B (en) | Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence | |
CN113420709A (en) | Cattle face feature extraction model training method and system and cattle insurance method and system | |
Xiao et al. | Group-housed pigs and their body parts detection with Cascade Faster R-CNN | |
KR102584357B1 (en) | Apparatus for identifying a livestock using a pattern, and system for classifying livestock behavior pattern based on images using the apparatus and method thereof | |
CN115457468A (en) | Intelligent livestock monitoring method and system for large grassland | |
Campos et al. | Global localization with non-quantized local image features | |
CN111222473B (en) | Analysis and recognition method for clustering faces in video | |
CN114283366A (en) | Method and device for identifying individual identity of dairy cow and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |