CN113331135A - Statistical method for death and washout rate and survival rate of pressed piglets - Google Patents


Info

Publication number
CN113331135A
Authority
CN
China
Prior art keywords
piglet
sound
pressed
microphone
survival rate
Prior art date
Legal status
Pending
Application number
CN202110725058.2A
Other languages
Chinese (zh)
Inventor
杜晓冬
樊士冉
张瑞雪
张志勇
陈麒麟
闫雪冬
赵铖
Current Assignee
Beijing Xinliu Agriculture And Animal Husbandry Technology Co ltd
New Hope Group Co ltd
Shandong New Hope Liuhe Agriculture And Animal Husbandry Technology Co ltd
Sichuan New Hope Liuhe Pig Breeding Technology Co ltd
Xiajin New Hope Liuhe Agriculture And Animal Husbandry Co ltd
Tibet Xinhao Technology Co ltd
Shandong New Hope Liuhe Group Co Ltd
New Hope Liuhe Co Ltd
Original Assignee
Beijing Xinliu Agriculture And Animal Husbandry Technology Co ltd
New Hope Group Co ltd
Shandong New Hope Liuhe Agriculture And Animal Husbandry Technology Co ltd
Sichuan New Hope Liuhe Pig Breeding Technology Co ltd
Xiajin New Hope Liuhe Agriculture And Animal Husbandry Co ltd
Tibet Xinhao Technology Co ltd
Shandong New Hope Liuhe Group Co Ltd
New Hope Liuhe Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xinliu Agriculture And Animal Husbandry Technology Co ltd, New Hope Group Co ltd, Shandong New Hope Liuhe Agriculture And Animal Husbandry Technology Co ltd, Sichuan New Hope Liuhe Pig Breeding Technology Co ltd, Xiajin New Hope Liuhe Agriculture And Animal Husbandry Co ltd, Tibet Xinhao Technology Co ltd, Shandong New Hope Liuhe Group Co Ltd, New Hope Liuhe Co Ltd filed Critical Beijing Xinliu Agriculture And Animal Husbandry Technology Co ltd
Priority to CN202110725058.2A
Publication of CN113331135A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A01 - AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K - ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K67/00 - Rearing or breeding animals, not otherwise provided for; New breeds of animals
    • A01K67/02 - Breeding vertebrates
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/22 - Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements

Abstract

The invention discloses a statistical method for the death-and-culling rate and survival rate of piglets crushed by the sow, and relates to the technical field of livestock breeding. The technical scheme is as follows: a microphone array is arranged in the pens of a pig farrowing room, mounted at the middle of the pen wall shared by two adjacent pens; the array comprises an even number of microphones distributed symmetrically towards the two pens. Sound sources from the two pens are collected by the microphones. Based on the collected sound, a model of piglet crushing vocalization is applied and the direction of the sound is judged, so that the pen in which a piglet is being crushed is determined, and the death-and-culling rate and survival rate attributable to crushing are then counted in detail. The beneficial effects of the invention are: combined with actual production performance indexes, the method can analyze whether detected crushing events were handled effectively and evaluate the feasibility of applying crushing detection, that is, an evaluation carried out before mature commercial products and schemes are deployed, so as to raise the level of digital, intelligent production management.

Description

Statistical method for death and washout rate and survival rate of pressed piglets
Technical Field
The invention relates to the technical field of livestock breeding, and in particular to a statistical method for the death-and-culling rate and survival rate of piglets that have been crushed.
Background
According to statistics, about 1 in every 10 piglets is crushed by the sow during the farrowing period, and nearly half of piglet deaths occur in the first 3 days after birth. A crushed piglet still has a chance of surviving if the sow stands up within about 3 minutes of pressing it. In current practice, when a piglet is crushed, the stockperson rushes to the sow, drives her to stand by slapping her, and picks the piglet up; sometimes the sow does not stand even after being slapped. Relief therefore depends on subjective human judgment and action, is time-consuming and labor-intensive, and does not effectively solve this pain point.
Furthermore, the complexity of the livestock-farming environment poses a significant challenge to the operational stability of automated farming equipment. Research shows that this problem can be addressed with image or sound technology: the sound near the farrowing crate where the sow lies can be acquired in real time and without contact, pattern-recognition techniques can identify whether a piglet is emitting a distress call, and timely feedback can be given to an actuator (such as a robotic arm or an electrical-stimulation device) and to professional stock managers so that the piglet is rescued at the first moment. However, no mature commercial product or scheme is yet on the market, and the accuracy of piglet-crushing detection based on image/sound technology is in some respects lower than detection by the human ear or eye and cannot reach one-hundred-percent recognition, so such technology still has broad room for research and improvement. Aiming at this problem, and in combination with actual production performance indexes, the invention provides a statistical method for evaluating the effectiveness of piglet-crushing detection: the method can analyze whether a detected crushing event was handled effectively and thereby evaluate the application feasibility of the technology.
Disclosure of Invention
Aiming at the above technical problems, the invention provides a statistical method for the death-and-culling rate and survival rate of crushed piglets.
The technical scheme is as follows. S1: microphone arrays are arranged in the pens of the pig farrowing room; each microphone array is mounted on the pen wall shared by two adjacent farrowing pens, at the middle of the wall; the array comprises an even number of microphones, which are distributed symmetrically towards the two pens;
S2: sound sources from the two pens are collected by the microphones;
S3: based on the sound collected in S2, a model of piglet crushing vocalization is applied and the direction of the sound is judged, so that the pen in which the piglet is being crushed is determined;
S4: the crushing situations judged in S3 are collected, and the death-and-culling rate and survival rate of piglets attributable to crushing are counted accordingly.
Preferably, in S3 the sound collected by the microphones is converted into spectral energy; the spectral energy feature is calculated as follows:
[spectral subband energy formula (image in original)]
where n = 1, 2, 3, ..., L denotes the index of the frequency-subband feature vector and X(n) is the energy of the nth subband; before the subband spectral energies are used as input features, each subband spectral energy is normalized.
Preferably, in S3 the sound collected by the microphones is also converted into time-domain energy, which distinguishes sound segments from silent segments by measuring the change of amplitude in the time domain of the sound signal; for a sound signal, the time-domain energy E(i) of the ith sample is defined as:
E(i) = y²(i)·Δt
where y(i) is the amplitude of the sound signal at the ith sample, in V, and Δt is the duration of one sample, in s.
Preferably, in S3 the direction of the sound is judged and the sound source is located as follows:
each microphone array comprises 4 microphones, and a sound source can be located in a two-dimensional plane with any 3 of them;
let the sound source position be P(x, y) and the 3 microphones be located at points S1(-a, 0), S2(0, 0) and S3(b, 0); the point P(x, y) is represented by θ and the segment PS2, where θ is the angle between segment S2S3 and segment PS2 and the length of PS2 is r2; these quantities are obtained by geometric calculation:
r1 = sqrt((x + a)² + y²)
r2 = sqrt(x² + y²)
r3 = sqrt((x - b)² + y²)
where ri (i = 1, 2, 3) is the distance from the sound source point P to microphone Si; a is the distance between microphones S1 and S2; b is the distance between microphones S2 and S3; ri, a and b are in m;
assume the speed of sound c is 340 m/s; t12 denotes the time difference of arrival of the sound-source signal at microphones S1 and S2, and t23 the time difference of arrival at microphones S2 and S3; t12 and t23 are in s. The time-delay relations are obtained with the law of cosines:
r1² = r2² + a² + 2·a·r2·cos θ
r3² = r2² + b² - 2·b·r2·cos θ
r1 - r2 = c·t12
r2 - r3 = c·t23
from which r2 and cos θ are obtained as:
r2 = [a·b·(a + b) - c²·(b·t12² + a·t23²)] / [2·c·(b·t12 - a·t23)]
cos θ = (2·c·t12·r2 + c²·t12² - a²) / (2·a·r2)
preferably, modeling of piglet pressed sound production is carried out by means of a Support Vector Machine (SVM), the Support Vector Machine (SVM) is selected as a main classifier, the model is developed based on a statistical learning theory, and the model is widely applied due to the strong generalization capability of the SVM. The SVM classifier is very suitable for a nonlinear divisible classification task, converts low-dimensional features into a high-dimensional feature space by means of a nonlinear transformation function, and then performs linear classification in the high-dimensional space. The SVM has an excellent effect on the aspect of animal voice recognition, and can well solve the problem of small sample classification. The SVM algorithm utilizes different kernel functions to construct various learning machines, in the present case, radial basis Gaussian function is selected as kernel function,
radial basis function with Gaussian kernel:
K(x, y) = exp(-gamma·|x - y|²)
To avoid over-fitting of the data, K-fold cross-validation is adopted in the test; the indexes used to evaluate the performance of the classification/prediction model are mainly sensitivity (recall) and accuracy.
Preferably, in S4 the statistical analysis of the piglet death-and-culling rate and survival rate counts them for each time period, the time period being a day or a week, according to the following formulas:
ε = (number of piglets that died + number of piglets culled) / total number of piglets × 100%
β = (total number of piglets - number of piglets that died) / total number of piglets × 100%
where ε is the piglet death-and-culling rate and β is the piglet survival rate.
Preferably, the microphone array is connected to an edge-computing board to form a data path, and the edge-computing board is connected to the server side through a network to form a further data path;
the edge-computing boards form distributed computing nodes and perform front-end data analysis, i.e. they judge piglet-crushing events;
the server stores the judgment results of the edge-computing boards and is used to trace back the frequency and number of historical piglet squeals caused by crushing.
The technical scheme provided by the embodiments of the invention has the following beneficial effects: combined with actual production performance indexes, the method can analyze whether detected crushing events were handled effectively and evaluate the feasibility of applying crushing detection, that is, an evaluation carried out before mature commercial products and schemes are deployed, so as to raise the level of digital, intelligent production management. In addition, the method has important value when evaluated from the productivity perspective: saving more piglets helps reduce the death-and-culling rate and raise the piglet survival rate, and if the surviving piglets can be converted into growing-finishing pigs, more economic benefit is created.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
Fig. 2 is a diagram of a microphone array arrangement according to an embodiment of the invention.
Fig. 3 is a schematic diagram of sound source location coordinates according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a distributed sound computing node according to an embodiment of the present invention.
In the drawings, reference numeral 1 denotes the microphone array.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. Of course, the specific embodiments described herein are merely illustrative of the invention and are not intended to be limiting.
It should be noted that the embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in the orientation or positional relationship indicated in the drawings, which are merely for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be construed as limiting the invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the invention, the meaning of "a plurality" is two or more unless otherwise specified.
In the description of the invention, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "connected" and "disposed" are to be construed broadly, e.g. as being fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the creation of the present invention can be understood by those of ordinary skill in the art through specific situations.
Example 1
Referring to Figs. 1 to 4, the invention provides a statistical method for the death-and-culling rate and survival rate of crushed piglets. S1: microphone arrays 1 are arranged in the pens of the pig farrowing room; each microphone array 1 is mounted on the pen wall shared by two adjacent farrowing pens, at the middle of the wall; the array comprises an even number of microphones, which are distributed symmetrically towards the two pens;
S2: sound sources from the two pens are collected by the microphones;
S3: based on the sound collected in S2, a model of piglet crushing vocalization is applied and the direction of the sound is judged, so that the pen in which the piglet is being crushed is determined;
S4: the crushing situations judged in S3 are collected, and the death-and-culling rate and survival rate of piglets attributable to crushing are counted accordingly.
In S3, the sound collected by the microphones is converted into spectral energy; the spectral energy feature is calculated as follows:
[spectral subband energy formula (image in original)]
where n = 1, 2, 3, ..., L denotes the index of the frequency-subband feature vector and X(n) is the energy of the nth subband; before the subband spectral energies are used as input features, each subband spectral energy is normalized.
In S3, the sound collected by the microphones is also converted into time-domain energy, which distinguishes sound segments from silent segments by measuring the change of amplitude in the time domain of the sound signal; for a sound signal, the time-domain energy E(i) of the ith sample is defined as:
E(i) = y²(i)·Δt
where y(i) is the amplitude of the sound signal at the ith sample, in V, and Δt is the duration of one sample, in s.
In S3, the direction of the sound is judged and the sound source is located as follows:
each microphone array 1 comprises 4 microphones, and a sound source can be located in a two-dimensional plane with any 3 of them;
as can be seen from Fig. 3, let the sound source position be P(x, y) and the 3 microphones be located at points S1(-a, 0), S2(0, 0) and S3(b, 0); the point P(x, y) is represented by θ and the segment PS2, where θ is the angle between segment S2S3 and segment PS2 and the length of PS2 is r2; these quantities are obtained by geometric calculation:
r1 = sqrt((x + a)² + y²)
r2 = sqrt(x² + y²)
r3 = sqrt((x - b)² + y²)
where ri (i = 1, 2, 3) is the distance from the sound source point P to microphone Si; a is the distance between microphones S1 and S2; b is the distance between microphones S2 and S3; ri, a and b are in m;
assume the speed of sound c is 340 m/s; t12 denotes the time difference of arrival of the sound-source signal at microphones S1 and S2, and t23 the time difference of arrival at microphones S2 and S3; t12 and t23 are in s. The time-delay relations are obtained with the law of cosines:
r1² = r2² + a² + 2·a·r2·cos θ
r3² = r2² + b² - 2·b·r2·cos θ
r1 - r2 = c·t12
r2 - r3 = c·t23
from which r2 and cos θ are obtained as:
r2 = [a·b·(a + b) - c²·(b·t12² + a·t23²)] / [2·c·(b·t12 - a·t23)]
cos θ = (2·c·t12·r2 + c²·t12² - a²) / (2·a·r2)
the modeling of piglet pressed sound production is carried out by means of a Support Vector Machine (SVM), the Support Vector Machine (SVM) is selected as a main classifier, the modeling is developed based on a statistical learning theory, and the modeling is widely applied due to the strong generalization capability of the modeling. The SVM classifier is very suitable for a nonlinear divisible classification task, converts low-dimensional features into a high-dimensional feature space by means of a nonlinear transformation function, and then performs linear classification in the high-dimensional space. The SVM has an excellent effect on the aspect of animal voice recognition, and can well solve the problem of small sample classification. The SVM algorithm utilizes different kernel functions to construct various learning machines, in the present case, radial basis Gaussian function is selected as kernel function,
radial basis function with Gaussian kernel:
K(x, y) = exp(-gamma·|x - y|²)
To avoid over-fitting of the data, K-fold cross-validation is adopted in the test; the indexes used to evaluate the performance of the classification/prediction model are mainly sensitivity (recall) and accuracy.
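As an illustration only (not the patent's actual implementation), the following Python sketch shows how an RBF-kernel SVM could be trained and evaluated with K-fold cross-validation for the crushing-call classifier described above; the feature matrix X (for example, normalized subband energies), the labels y, and all function and parameter names are assumptions.

```python
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_crush_call_classifier(X, y, gamma=0.1, C=1.0, k=5):
    """K-fold evaluation of an RBF-kernel SVM for 'crushing call' vs 'other sound'.

    X: (n_samples, n_features) acoustic features, e.g. normalized subband energies.
    y: binary labels (1 = piglet crushing call, 0 = other sound).
    Returns the mean recall (sensitivity) and mean accuracy over the k folds.
    """
    model = make_pipeline(
        StandardScaler(),                     # scale each feature before the SVM
        SVC(kernel="rbf", gamma=gamma, C=C),  # K(x, y) = exp(-gamma * |x - y|^2)
    )
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    scores = cross_validate(model, X, y, cv=cv, scoring=("recall", "accuracy"))
    return scores["test_recall"].mean(), scores["test_accuracy"].mean()
```

With k = 5 this mirrors the K-fold validation and the sensitivity/accuracy indexes mentioned above; gamma and C would in practice be tuned on the training folds.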
When the sound of a crushed piglet is detected, the central control device sends a signal to an actuator that stimulates the sow to stand up, and the judgment result is simultaneously stored on the local server. Any prior-art mechanism that stimulates the sow to stand can be chosen as the actuator; since this is not the focus of the present scheme, it is not described further.
In S4, the statistical analysis of the piglet death-and-culling rate and survival rate counts them for each time period with the farrowing room as the unit, the time period being a day or a week, according to the following formulas:
ε = (number of piglets that died + number of piglets culled) / total number of piglets × 100%
β = (total number of piglets - number of piglets that died) / total number of piglets × 100%
where ε is the piglet death-and-culling rate and β is the piglet survival rate.
Production data and piglet-crushing detection information are combined to analyze in depth the relation between the piglet survival rate, the death-and-culling rate and detected crushing events; experience shows that the number of crushing events is directly proportional to the death-and-culling rate and inversely proportional to the survival rate. Because the other two sounds listed in Table 1 are acoustically very close to the crushing call, the method can effectively count and screen the results of the crushing-recognition model: if a piglet milk-competition call is misjudged as a crushing call, the production indexes are not affected, whereas a genuine crushing event can cause the piglet's death, and the probability of survival falls if the piglet is not rescued in time.
In addition, the method can determine precisely at which farrowing crate in the room a piglet is being crushed, together with the death-and-culling rate and piglet survival rate of that crate, so that precise management is achieved through distributed monitoring and statistical analysis. For example, if crushing-detection events occur frequently at a particular farrowing crate and the production performance indexes of its piglets keep falling, additional management measures and manual intervention are needed; such resolution is not available when the production indexes are counted only with the whole farrowing room as the unit, as is usually done.
The microphone array is connected to an edge-computing board to form a data path, and the edge-computing board is connected to the server side through a network to form a further data path;
the edge-computing boards form distributed computing nodes and perform front-end data analysis, i.e. they judge piglet-crushing events;
the server stores the judgment results of the edge-computing boards and is used to trace back the frequency and number of historical piglet squeals caused by crushing.
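A minimal sketch of the edge-node side of this data path, assuming the server simply exposes an HTTP endpoint for storing judgment results; the URL, field names and surrounding detection logic are illustrative assumptions, not taken from the patent:

```python
import json
import time
import urllib.request

SERVER_URL = "http://192.168.1.10:8080/api/crush-events"  # assumed server endpoint

def report_crush_event(pen_id: str, confidence: float) -> None:
    """Send one piglet-crushing judgment from the edge board to the server."""
    event = {
        "pen_id": pen_id,          # farrowing pen the sound was localized to
        "confidence": confidence,  # classifier score for the crushing call
        "timestamp": time.time(),  # detection time, kept for later tracing
    }
    request = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()  # the server stores the record for statistics and tracing
```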
Example 2
On the basis of Embodiment 1, the microphone array 1 uses the K053 professional recording microphone: sensitivity -38 ± 3 dB, cardioid directivity, frequency response 50 Hz to 16 kHz, output impedance ≤ 680 Ω, signal-to-noise ratio ≥ 70 dB, with a 3.5 mm plug or a USB adapter. The purpose of the sound-detection process is, first, to separate sound segments from silent segments: because detection runs continuously without interruption, a large number of redundant, useless silent segments inevitably occur, and these data should not be stored or recorded, so as not to occupy extra hard-disk space. Second, it must be judged whether a sound source is the cry of a sow or piglet, filtering out complex background noise (such as heater noise, feeding noise and fan noise). These processes allow the distress calls of crushed piglets to be captured more accurately.
The microphone array predicts the specific pen in which a piglet is being crushed through sound-source localization, as shown in Fig. 2.
In Fig. 2 the microphone array is mounted at the center of the wall between two adjacent farrowing pens, and the direction of the sound is judged by comparing the spectral energy or the time-domain energy of the sound sources from the two pens. If the number of microphones is increased from 2 (the default) to 4, the position of a sound source within the same pen can be resolved further, for example the regions labeled +2 (sound-source angle 15-29°) and +1 (sound-source angle 0-15°) in the left pen of Fig. 2. The spectral energy feature is calculated as follows:
[formula (1): spectral subband energy feature (image in original)]
where n = 1, 2, 3, ..., L denotes the index of the frequency-subband feature vector and X(n) is the energy of the nth subband; before being used as input features, the subband spectral energies are normalized.
The time-domain energy distinguishes sound segments from silent segments by measuring the change of amplitude in the time domain of the sound signal; for a sound signal, the time-domain energy E(i) of the ith sample is defined as:
E(i) = y²(i)·Δt    formula (2)
where y(i) is the amplitude of the sound signal at the ith sample, in V, and Δt is the duration of one sample, in s.
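To make the two features concrete, the following Python sketch (an illustration, not the patent's code) computes normalized subband spectral energies and the time-domain energy of formula (2) for one audio frame, and then makes the simple left/right pen decision described above by comparing the energy picked up by the microphones facing each pen; the frame length, band count and function names are assumptions.

```python
import numpy as np

def subband_energies(frame, n_bands=8):
    """Normalized spectral energy of n_bands equal-width frequency subbands."""
    power = np.abs(np.fft.rfft(frame)) ** 2        # power spectrum of the frame
    bands = np.array_split(power, n_bands)
    x = np.array([band.sum() for band in bands])   # X(n): energy of subband n
    return x / (x.sum() + 1e-12)                   # normalization before use as a feature

def time_domain_energy(frame, fs):
    """Sum over the frame of y(i)^2 * dt, following formula (2)."""
    dt = 1.0 / fs                                  # duration of one sample, in s
    return float(np.sum(np.asarray(frame, dtype=float) ** 2) * dt)

def louder_pen(left_frame, right_frame, fs):
    """Crude direction decision: the pen whose microphone received more energy."""
    e_left = time_domain_energy(left_frame, fs)
    e_right = time_domain_energy(right_frame, fs)
    return "left pen" if e_left >= e_right else "right pen"
```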
Taking sound-source localization with 4 microphones in a linear array as an example, the method relies on the fact that any 3 microphones can locate a sound source in a two-dimensional plane.
As can be seen from Fig. 3, let the sound source position be P(x, y) and the 3 microphones be located at points S1(-a, 0), S2(0, 0) and S3(b, 0); the point P(x, y) is represented by θ and the segment PS2, where θ is the angle between segment S2S3 and segment PS2 and the length of PS2 is r2; these quantities are obtained by geometric calculation:
r1 = sqrt((x + a)² + y²)    formula (3)
r2 = sqrt(x² + y²)    formula (4)
r3 = sqrt((x - b)² + y²)    formula (5)
where ri (i = 1, 2, 3) is the distance from the sound source point P to microphone Si; a is the distance between microphones S1 and S2; b is the distance between microphones S2 and S3; ri, a and b are in m;
assume the speed of sound c is 340 m/s; t12 denotes the time difference of arrival of the sound-source signal at microphones S1 and S2, and t23 the time difference of arrival at microphones S2 and S3; t12 and t23 are in s. The time-delay relations are obtained with the law of cosines:
r1² = r2² + a² + 2·a·r2·cos θ    formula (6)
r3² = r2² + b² - 2·b·r2·cos θ    formula (7)
r1 - r2 = c·t12    formula (8)
r2 - r3 = c·t23    formula (9)
Substituting formulas (8) and (9) into formulas (6) and (7) and solving gives r2 and cos θ:
r2 = [a·b·(a + b) - c²·(b·t12² + a·t23²)] / [2·c·(b·t12 - a·t23)]    formula (10)
cos θ = (2·c·t12·r2 + c²·t12² - a²) / (2·a·r2)    formula (11)
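The closed-form solution above can be checked numerically; the following Python sketch implements the reconstructed formulas (10) and (11) with illustrative microphone spacings (not values from the patent):

```python
import math

def locate_source(t12, t23, a=0.10, b=0.10, c=340.0):
    """Estimate r2, theta and (x, y) from the two time differences of arrival.

    t12: arrival-time difference between microphones S1 and S2, in s
    t23: arrival-time difference between microphones S2 and S3, in s
    a, b: spacings S1-S2 and S2-S3, in m; c: speed of sound, in m/s
    """
    denominator = 2.0 * c * (b * t12 - a * t23)
    if abs(denominator) < 1e-12:
        raise ValueError("degenerate geometry: b*t12 equals a*t23")
    r2 = (a * b * (a + b) - c ** 2 * (b * t12 ** 2 + a * t23 ** 2)) / denominator
    cos_theta = (2.0 * c * t12 * r2 + (c * t12) ** 2 - a ** 2) / (2.0 * a * r2)
    cos_theta = max(-1.0, min(1.0, cos_theta))  # guard against rounding error
    theta = math.acos(cos_theta)
    return r2, theta, (r2 * cos_theta, r2 * math.sin(theta))

# Self-check with a source 1 m from S2 at 60 degrees to the array axis
x, y = math.cos(math.radians(60)), math.sin(math.radians(60))
r1, r2, r3 = (math.hypot(x + 0.10, y), math.hypot(x, y), math.hypot(x - 0.10, y))
print(locate_source((r1 - r2) / 340.0, (r2 - r3) / 340.0))
# expected: approximately (1.0, 1.047 rad, (0.5, 0.866))
```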
Taking a machine-learning recognition model as an example, the modeling of piglet crushing vocalization is realized with a support vector machine (SVM). The invention selects the SVM as the main classifier; it was developed from statistical learning theory and is widely applied because of its strong generalization ability. An SVM classifier is well suited to non-linearly separable classification tasks: it maps low-dimensional features into a high-dimensional feature space through a non-linear transformation function and then performs linear classification in that space. SVMs perform very well in animal-vocalization recognition and handle small-sample classification problems well. The SVM algorithm can construct different learning machines with different kernel functions; of the commonly used kernel functions, the radial-basis Gaussian function is selected as the kernel in this example.
Radial basis function with Gaussian kernel:
K(x, y) = exp(-gamma·|x - y|²)
To avoid over-fitting of the data, K-fold cross-validation is adopted in the test; the indexes used to evaluate the performance of the classification/prediction model are mainly sensitivity (recall) and accuracy. The results of the SVM-based sound-detection test are shown in Table 1. The recall is computed with respect to the total number of samples, i.e. how many positive samples are predicted correctly (true positives, TP), the misjudged positive samples being false negatives (FN); accuracy is not counted in this test because sound detection does not distinguish between the different sound types.
TABLE 1 Statistics of the sound-detection test results

Sound type | TP | TP+FN | Recall
Piglet crushing call | 360 | 556 | 64.7%
Piglet dry cry | 30 | 102 | 29.4%
Piglet milk-competition call | 359 | 592 | 58.7%
The test results of the machine-learning recognition model are shown in Table 2:
TABLE 2 Piglet vocalization recognition results based on SVM (RBF kernel function)

 | 15 sound classes | 8 sound classes
Recognition rate (accuracy) | 66.7% | 78.8%
When the sound of a crushed piglet is detected, the central control device sends a signal to the actuator to stimulate the sow to stand up, and the judgment result is simultaneously stored on the local server. The sound system can be designed as edge-computing boards plus a server side, which improves the efficiency of the whole data-acquisition, analysis and action-execution pipeline. Following the idea of distributed computing nodes, more of the front-end data is analyzed at the front end, i.e. piglet-crushing events are judged there; an edge computing node can be based on an FET3399-C AI board or an NVIDIA developer kit with the algorithm integrated. The local server side stores every judgment result so that the frequency and number of historical piglet crushing calls can be traced, and it integrates the crushing information with the statistical-analysis functions, so that the production indexes, namely the piglet death-and-culling rate and survival rate, can be analyzed statistically and precisely. The current method is to count the death-and-culling rate and survival rate of piglets per day or per week with the farrowing room as the unit, for example: the death-and-culling rate ε on a given day is 3% and the piglet survival rate β on that day is 98%. (Note that culled piglets are also counted as surviving.)
ε = (number of piglets that died + number of piglets culled) / total number of piglets × 100%
β = (total number of piglets - number of piglets that died) / total number of piglets × 100%
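A small sketch of this daily statistic, following the counting convention in the note above (culled piglets count as surviving); the function and argument names are illustrative assumptions:

```python
def daily_piglet_stats(total, dead, culled):
    """Death-and-culling rate and survival rate for one farrowing room and one day.

    total : piglets present in the room that day
    dead  : piglets that died (for example after crushing events)
    culled: piglets culled/removed; they still count as surviving
    Returns (epsilon, beta) as percentages.
    """
    if total <= 0:
        raise ValueError("total must be positive")
    epsilon = (dead + culled) / total * 100.0  # death-and-culling rate
    beta = (total - dead) / total * 100.0      # survival rate (culled piglets survive)
    return epsilon, beta

# Example consistent with the figures in the text: 100 piglets, 2 dead, 1 culled
print(daily_piglet_stats(100, 2, 1))  # -> (3.0, 98.0)
```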
Production data and piglet-crushing detection information are combined to analyze in depth the relation between the piglet survival rate, the death-and-culling rate and detected crushing events; experience shows that the number of crushing events is directly proportional to the death-and-culling rate and inversely proportional to the survival rate. Because the other two sounds listed in Table 1 are acoustically very close to the crushing call, the method can effectively count and screen the results of the crushing-recognition model: if a piglet milk-competition call is misjudged as a crushing call, the production indexes are not affected, whereas a genuine crushing event can cause the piglet's death, and the probability of survival falls if the piglet is not rescued in time. In addition, the method can determine precisely at which farrowing crate in the room a piglet is being crushed, together with the death-and-culling rate and piglet survival rate of that crate, and precise management is achieved through distributed monitoring and statistical analysis.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A statistical method for the death-and-culling rate and survival rate of crushed piglets, characterized by comprising the following steps:
S1: microphone arrays (1) are arranged in the pens of the pig farrowing room; each microphone array (1) is mounted on the pen wall shared by two adjacent farrowing pens, at the middle of the wall; the array comprises an even number of microphones, which are distributed symmetrically towards the two pens;
S2: sound sources from the two pens are collected by the microphones;
S3: based on the sound collected in S2, a model of piglet crushing vocalization is applied and the direction of the sound is judged, so that the pen in which the piglet is being crushed is determined;
S4: the crushing situations judged in S3 are collected, and the death-and-culling rate and survival rate of piglets attributable to crushing are counted accordingly.
2. The statistical method for the death-and-culling rate and survival rate of crushed piglets according to claim 1, characterized in that in S3 the sound collected by the microphones is converted into spectral energy, the spectral energy feature being calculated as follows:
[spectral subband energy formula (image in original)]
where n = 1, 2, 3, ..., L denotes the index of the frequency-subband feature vector and X(n) is the energy of the nth subband; before the subband spectral energies are used as input features, each subband spectral energy is normalized.
3. The statistical method for the death-and-culling rate and survival rate of crushed piglets according to claim 1, characterized in that in S3 the sound collected by the microphones is converted into time-domain energy, and for a sound signal the time-domain energy E(i) of the ith sample is defined as:
E(i) = y²(i)·Δt
where y(i) is the amplitude of the sound signal at the ith sample and Δt is the duration of one sample.
4. The statistical method for the death-and-culling rate and survival rate of crushed piglets according to claim 3, characterized in that in S3 the direction of the sound is judged and the sound source is located as follows:
each microphone array comprises 4 microphones, and a sound source can be located in a two-dimensional plane with any 3 of them;
let the sound source position be P(x, y) and the 3 microphones be located at points S1(-a, 0), S2(0, 0) and S3(b, 0); the point P(x, y) is represented by θ and the segment PS2, where θ is the angle between segment S2S3 and segment PS2 and the length of PS2 is r2; r2 and cos θ are obtained from the following formulas:
r2 = [a·b·(a + b) - c²·(b·t12² + a·t23²)] / [2·c·(b·t12 - a·t23)]
cos θ = (2·c·t12·r2 + c²·t12² - a²) / (2·a·r2)
where a is the distance between microphones S1 and S2, b is the distance between microphones S2 and S3, c is the speed of sound, t12 is the time difference of arrival of the sound-source signal at microphones S1 and S2, and t23 is the time difference of arrival of the sound-source signal at microphones S2 and S3.
5. The statistical method for the death-and-culling rate and survival rate of crushed piglets according to claim 4, characterized in that the modeling of piglet crushing vocalization is carried out with a support vector machine, a radial-basis Gaussian function being selected as the kernel function:
K(x, y) = exp(-gamma·|x - y|²)
and a K-fold cross-validation method is adopted to evaluate the performance indexes of the classification/prediction model, the performance indexes comprising sensitivity and accuracy.
6. The statistical method for the death-and-culling rate and survival rate of crushed piglets according to claim 5, characterized in that in S4 the statistical analysis of the piglet death-and-culling rate and survival rate counts them for each time period with the farrowing room as the unit, according to the following formulas:
ε = (number of piglets that died + number of piglets culled) / total number of piglets × 100%
β = (total number of piglets - number of piglets that died) / total number of piglets × 100%
where ε is the piglet death-and-culling rate and β is the piglet survival rate.
7. The statistical method for the death-and-culling rate and survival rate of crushed piglets according to claim 6, characterized in that the microphone array is connected to an edge-computing board to form a data path, and the edge-computing board is connected to the server side through a network to form a further data path;
the edge-computing boards form distributed computing nodes and perform front-end data analysis, i.e. they judge piglet-crushing events;
the server stores the judgment results of the edge-computing boards and is used to trace back the frequency and number of historical piglet squeals caused by crushing.
CN202110725058.2A 2021-06-29 2021-06-29 Statistical method for death and washout rate and survival rate of pressed piglets Pending CN113331135A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110725058.2A CN113331135A (en) 2021-06-29 2021-06-29 Statistical method for death and washout rate and survival rate of pressed piglets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110725058.2A CN113331135A (en) 2021-06-29 2021-06-29 Statistical method for death and washout rate and survival rate of pressed piglets

Publications (1)

Publication Number Publication Date
CN113331135A true CN113331135A (en) 2021-09-03

Family

ID=77481248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110725058.2A Pending CN113331135A (en) 2021-06-29 2021-06-29 Statistical method for death and washout rate and survival rate of pressed piglets

Country Status (1)

Country Link
CN (1) CN113331135A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008010269A1 (en) * 2006-07-19 2008-01-24 Panasonic Electric Works Co., Ltd. System for detecting position of mobile object
JP2008070339A (en) * 2006-09-15 2008-03-27 Univ Of Tokyo Sound source localization method and sound source localization device
CN103235287A (en) * 2013-04-17 2013-08-07 华北电力大学(保定) Sound source localization camera shooting tracking device
JP2017000062A (en) * 2015-06-09 2017-01-05 パナソニックIpマネジメント株式会社 Poultry house monitoring system
JP2017143832A (en) * 2016-02-18 2017-08-24 香川県 Animal audio signal extraction apparatus, animal physiological condition estimation method, animal audio signal extraction program, and animal physiological condition estimation program
CN108089154A (en) * 2017-11-29 2018-05-29 西北工业大学 Distributed acoustic source detection method and the sound-detection robot based on this method
CN110651728A (en) * 2019-11-05 2020-01-07 秒针信息技术有限公司 Piglet pressed detection method, device and system
CN111543348A (en) * 2020-05-14 2020-08-18 深聆科技(北京)有限公司 Sound positioning device and method for farm and cub monitoring method
CN111528108A (en) * 2020-06-08 2020-08-14 西藏新好科技有限公司 Building type pig farm and production group configuration method
CN112820275A (en) * 2021-01-15 2021-05-18 华中农业大学 Automatic monitoring method for analyzing abnormality of suckling piglets based on sound signals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵晓洋 (Zhao Xiaoyang): "Environmental assessment of livestock and poultry houses based on animal vocalization analysis", China Master's Theses Full-text Database, Engineering Science and Technology I *

Similar Documents

Publication Publication Date Title
Bao et al. Artificial intelligence in animal farming: A systematic literature review
EP2978305B1 (en) Automated monitoring of animal nutriment ingestion
Navon et al. Automatic recognition of jaw movements in free-ranging cattle, goats and sheep, using acoustic monitoring
CN108198562A (en) A kind of method and system for abnormal sound in real-time positioning identification animal house
Alqudah Towards classifying non-segmented heart sound records using instantaneous frequency based features
CN109258509A (en) A kind of live pig abnormal sound intelligent monitor system and method
Silva et al. Cough localization for the detection of respiratory diseases in pig houses
Keen et al. A comparison of similarity-based approaches in the classification of flight calls of four species of North American wood-warblers (Parulidae)
CN105091938A (en) Poultry health monitoring method and system
CN105336331A (en) Intelligent monitoring method and intelligent monitoring system for abnormal behaviors of pigs on basis of sound
CN108354315B (en) A kind of brush teeth quality detecting system and method based on the asymmetric sound field of double units
Arablouei et al. Animal behavior classification via deep learning on embedded systems
Bravo et al. Species-specific audio detection: a comparison of three template-based detection algorithms using random forests
CN106597235A (en) Partial discharge detection apparatus and method
CN103190904A (en) Electroencephalogram classification detection device based on lacuna characteristics
Tian et al. Real-time behavioral recognition in dairy cows based on geomagnetism and acceleration information
CN112617813A (en) Multi-sensor-based non-invasive fall detection method and system
CN109479750A (en) A kind of plum mountain pig heat monitoring method based on acoustic information
Simon et al. Acoustic traits of bat-pollinated flowers compared to flowers of other pollination syndromes and their echo-based classification using convolutional neural networks
CN109186752A (en) Underwater sound signal acquisition, transmission and detection system based on graphics processor
Bishop et al. Sound analysis and detection, and the potential for precision livestock farming-a sheep vocalization case study
Duan et al. Short-term feeding behaviour sound classification method for sheep using LSTM networks
Bi et al. Pervasive eating habits monitoring and recognition through a wearable acoustic sensor
Gage et al. Acoustic observations in agricultural landscapes
CN113331135A (en) Statistical method for death and washout rate and survival rate of pressed piglets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210903