CN111477236A - Piglet cry recognition method based on neural network, breeding monitoring method and system - Google Patents
Piglet cry recognition method based on neural network, breeding monitoring method and system
- Publication number
- CN111477236A (application number CN202010405989.XA)
- Authority
- CN
- China
- Prior art keywords
- piglet
- sound
- audio signal
- neural network
- distress call
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention provides a neural-network-based piglet cry recognition method and device, a breeding monitoring method, and a breeding monitoring system. The recognition method comprises the following steps: acquiring an audio signal in a target area; filtering the audio signal to obtain an audio signal with frequencies in the range of 4000 Hz-7000 Hz; performing feature extraction on the filtered audio signal to obtain sound feature parameters; and inputting the sound feature parameters into a neural network as input data, the neural network processing the input data and outputting a result indicating whether a piglet distress call is present. Because the audio signal is filtered to the 4000 Hz-7000 Hz band before feature extraction, and piglet distress calls have highly distinctive characteristics in this frequency band, interfering audio can be effectively removed and recognition accuracy improved.
Description
Technical Field
The invention relates to the technical field of livestock breeding, and in particular to a neural-network-based piglet cry recognition method, a piglet cry recognition device, a breeding monitoring method, and a breeding monitoring system.
Background
In the current pig breeding industry, piglets are frequently crushed by sows during lactation, causing economic losses of varying degrees to farmers. The main solution adopted at present is manual patrol of the pigsty; however, this approach incurs extra labor cost and makes 24-hour, round-the-clock supervision difficult.
When a sow crushes a piglet, the incident can be handled promptly if the piglet's distress call is recognized in time. However, if the audio signal collected on a pig farm is fed directly into an existing neural network for sound recognition, the recognition accuracy is very low and the misjudgment rate is high.
Disclosure of Invention
Based on this situation, the main object of the invention is to provide a neural-network-based piglet cry recognition method, a piglet cry recognition device, a breeding monitoring method, and a breeding monitoring system that recognize piglet distress calls with high accuracy.
To achieve the above object, in a first aspect the invention provides a neural-network-based piglet cry recognition method for identifying whether a piglet cry in a target area is a distress call. The neural network processes input data and outputs a result indicating whether a piglet distress call is present. The recognition method comprises the following steps:
S200, acquiring a first audio signal in the target area;
S300, filtering the first audio signal to obtain a second audio signal with frequencies in the range of 4000 Hz-7000 Hz;
S400, performing feature extraction on the filtered second audio signal to obtain sound feature parameters;
S500, inputting the sound feature parameters into the neural network as input data; the neural network processes the input data and outputs a result indicating whether a piglet distress call is present.
Preferably, the recognition method further comprises the steps of:
S100, training the neural network with sample data;
wherein the sample data comprises multiple groups of first sound feature parameters and multiple groups of second sound feature parameters; the first sound feature parameters are extracted from audio signals containing piglet distress calls, and the second sound feature parameters are extracted from audio signals containing piglet milk-competing calls.
Preferably, the audio signals containing piglet distress calls and the audio signals containing piglet milk-competing calls are both acquired in the target area.
To achieve the above object, in a second aspect the present invention provides a neural-network-based piglet cry recognition device for identifying whether a piglet cry in a target area is a distress call, comprising:
a microphone assembly for acquiring a first audio signal in the target area;
a filter for filtering the first audio signal to obtain a second audio signal with frequencies in the range of 4000 Hz-7000 Hz;
and a processor for performing feature extraction on the filtered second audio signal to obtain sound feature parameters and inputting the sound feature parameters into the neural network as input data; the neural network processes the input data and outputs a result indicating whether a piglet distress call is present.
To achieve the above object, in a third aspect the present invention provides a breeding monitoring method for preventing a sow from crushing a piglet, the breeding monitoring method comprising the following steps:
S10, performing sound recognition using the piglet cry recognition method described above;
S20, judging whether the output result indicates that a piglet distress call is present; if so, executing step S30, otherwise continuing sound recognition and judgment;
S30, performing sound source localization on the second audio signal containing the piglet distress call to obtain a sound source localization result, and acquiring posture information of the sows;
S40, determining, from the posture information and the sound source localization result, the sow that corresponds to the sound source localization result and whose posture information indicates a lying position;
S50, performing a stimulation operation on the determined sow.
Preferably, the breeding monitoring method further comprises, after step S50, the steps of:
S60, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing step S70;
S70, attaching a data label to the sound feature parameter corresponding to the second audio signal that was judged to contain a piglet distress call, the label indicating that no piglet distress call is contained, storing the sound feature parameter in a sample library, and returning to step S10;
and training the neural network with the sample library when the sound feature parameters in the sample library reach a predetermined amount.
Preferably, step S70 comprises the steps of:
S71, sending to a terminal a notification that the piglet distress call has not been eliminated;
S72, judging whether a signal confirming that the judgment was correct has been received from the terminal; if so, returning directly to step S10, otherwise executing step S73;
S73, attaching a data label to the sound feature parameter corresponding to the second audio signal that was judged to contain a piglet distress call, the label indicating that no piglet distress call is contained, storing the sound feature parameter in a sample library, and returning to step S10.
Preferably, the stimulation operation comprises a vibration operation and an electric shock operation, and step S50 comprises the steps of:
S51, executing the vibration operation;
S52, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing S53;
S53, judging whether the vibration operation has lasted for at least a first predetermined time; if so, executing S54, otherwise returning to S52;
S54, executing the electric shock operation.
Preferably, step S60 comprises the steps of:
S61, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing S62;
S62, judging whether the electric shock operation has lasted for at least a second predetermined time; if so, executing S63, otherwise returning to S61;
S63, stopping the electric shock operation and executing step S70.
To achieve the above object, in a fourth aspect the present invention provides a breeding monitoring system for preventing a sow from crushing a piglet, the system comprising a piglet cry recognition device, a sound source localization device, a control device, at least one posture detection device, and an execution device associated with each posture detection device;
each posture detection device is configured to detect the posture information of a sow;
each execution device is configured to execute a preset stimulation operation under the control of the control device;
the piglet cry recognition device is configured to perform sound recognition using the piglet cry recognition method described above;
the sound source localization device is configured to perform sound source localization on the second audio signal that the piglet cry recognition device has identified as containing a piglet distress call;
and the control device is configured to determine, from the posture information detected by the posture detection devices and the sound source localization result of the sound source localization device, the execution device that corresponds to the sound source localization result and is associated with a posture detection device whose posture information indicates a lying position as the target execution device, and to control the target execution device to execute the preset stimulation operation.
According to the neural-network-based piglet cry recognition method, the audio signal is filtered before feature extraction to obtain an audio signal with frequencies in the range of 4000 Hz-7000 Hz. Piglet distress calls have highly distinctive characteristics in this frequency band and show a regular variation in sound intensity, so interfering audio can be effectively removed and recognition accuracy greatly improved.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of a piglet cry recognition method provided by an embodiment of the invention;
fig. 2 is a block diagram of a piglet cry recognition device provided by the embodiment of the invention;
fig. 3 is a flowchart of a breeding monitoring method provided by an embodiment of the present invention;
fig. 4 is a block diagram of a breeding monitoring system provided by an embodiment of the present invention.
In the figures: 100, microphone assembly; 200, filter; 300, processor;
10, piglet cry recognition device; 20, sound source localization device; 30, control device; 40, posture detection device; 50, execution device.
Detailed Description
The present invention will be described below based on examples, but it is not limited to these examples. In the following detailed description, certain specific details are set forth; however, well-known methods, procedures, and components have not been described in detail so as not to obscure the nature of the invention.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Addressing the low accuracy of existing piglet cry recognition, the applicant found that because the pig farm environment is very noisy, the audio signals collected there contain a great deal of interfering audio, which severely degrades recognition accuracy. An embodiment of the invention therefore provides a neural-network-based piglet cry recognition method for identifying whether a piglet cry in a target area is a distress call. The neural network processes input data and outputs a result indicating whether a piglet distress call is present. It will be understood that the neural network may be an existing neural network for sound recognition, for example a BP neural network; the sound recognition process itself is conventional and is not repeated here. A result indicating whether a piglet distress call is present means that the output distinguishes the two cases: for example, an output of "0" may indicate that no piglet distress call is present and an output of "1" that one is present, or vice versa.
As shown in fig. 1, the identification method includes the steps of:
S200, acquiring a first audio signal in the target area;
S300, filtering the first audio signal to obtain a second audio signal with frequencies in the range of 4000 Hz-7000 Hz;
S400, performing feature extraction on the filtered second audio signal to obtain sound feature parameters;
S500, inputting the sound feature parameters into the neural network as input data; the neural network processes the input data and outputs a result indicating whether a piglet distress call is present.
According to this neural-network-based piglet cry recognition method, the audio signal is filtered before feature extraction to obtain an audio signal with frequencies in the range of 4000 Hz-7000 Hz. Piglet distress calls have highly distinctive characteristics in this frequency band and show a regular variation in sound intensity, so interfering audio can be effectively removed and recognition accuracy greatly improved. In addition, because filtering is performed before the sound feature parameters are extracted, the amount of computation is greatly reduced and the response speed is improved.
The target area in step S200 is the area in which piglet cry recognition is required; it may be, for example, an entire pig farm or a part of a pig farm.
In step S300, the second audio signal obtained by the filtering process may be a digital signal or an analog signal, as long as the interfering audio is filtered out.
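The disclosure does not specify how the filtering is implemented. The following is a minimal sketch of one possible digital realization of step S300, assuming a digitized signal sampled at 16 kHz and the SciPy library; the sampling rate, filter order, and choice of a Butterworth design are assumptions rather than details from the patent.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_4k_7k(signal: np.ndarray, sample_rate: int = 16000,
                   low_hz: float = 4000.0, high_hz: float = 7000.0,
                   order: int = 4) -> np.ndarray:
    """Band-pass the first audio signal to the 4000-7000 Hz band (step S300)."""
    nyquist = sample_rate / 2.0
    # Second-order-section Butterworth band-pass; any filter that isolates
    # the 4000-7000 Hz band would serve the same purpose.
    sos = butter(order, [low_hz / nyquist, high_hz / nyquist],
                 btype="bandpass", output="sos")
    return sosfiltfilt(sos, signal)
```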
In step S400, the sound feature parameters to be extracted depend on the neural network used; they may be, for example, the short-time average energy, the short-time zero-crossing rate, or Mel-frequency cepstral coefficients, all of which can be computed with existing techniques and are not described further here.
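For concreteness, a minimal sketch of how two of the named feature parameters, short-time average energy and short-time zero-crossing rate, could be computed from the filtered signal; the frame length and hop size (25 ms and 10 ms at 16 kHz) are assumptions, not values from the patent.

```python
import numpy as np

def frame_features(signal: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Return one (energy, zero-crossing-rate) pair per analysis frame (step S400)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))                        # short-time average energy
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)  # short-time zero-crossing rate
        feats.append([energy, zcr])
    return np.asarray(feats)
```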
The neural network used in the above recognition method may be trained with conventional sample data, such as samples containing piglet distress calls together with various other sounds. However, a network trained only on conventional sample data suffers from low recognition accuracy in practice. The applicant found that this is because piglet distress calls are very similar to piglet milk-competing calls, so such a network is easily confused by milk-competing calls and produces a high misjudgment rate. Therefore, in a preferred embodiment, the method further comprises the following step:
S100, training the neural network with sample data;
wherein the sample data comprises multiple groups of first sound feature parameters and multiple groups of second sound feature parameters; the first sound feature parameters are extracted from audio signals containing piglet distress calls, and the second sound feature parameters are extracted from audio signals containing piglet milk-competing calls.
Training the neural network specifically on sound feature parameters extracted from audio signals of piglet distress calls and piglet milk-competing calls can therefore greatly reduce its misjudgment rate.
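A minimal sketch of this two-class training step, using scikit-learn's MLPClassifier as a stand-in for the BP network mentioned above; the feature matrices, network size, and class counts below are placeholders, not values disclosed in the patent.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder feature matrices standing in for parameters extracted from
# recordings of piglet distress calls (label 1) and milk-competing calls (label 0).
X_distress = rng.normal(size=(70, 26))
X_compete = rng.normal(size=(30, 26))

X = np.vstack([X_distress, X_compete])
y = np.concatenate([np.ones(len(X_distress)), np.zeros(len(X_compete))])

# A small back-propagation-trained network (step S100).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))  # 1 = distress call present, 0 = not present
```

In practice the two classes would be the sound feature parameters described above, and their proportions could follow the 7:3 to 6:4 ratio given below.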
It will be understood that the timing and number of executions of step S100 are not limited: it may be executed before the first audio signal in the target area is acquired, after the sound feature parameters are extracted but before they are input to the neural network, or after one or more results have been output, and it may be executed once or multiple times.
Considering that the acoustic environment of a pig farm is very complex, the audio signals containing piglet distress calls and the audio signals containing piglet milk-competing calls used in step S100 are preferably both acquired in the target area, so that the training data reproduce the deployment conditions as closely as possible and the recognition accuracy is further improved.
In the sample data, the ratio of the number of first sound feature parameter samples to the number of second sound feature parameter samples is preferably between 7:3 and 6:4.
It will be understood that this training may follow training with conventional sample data, i.e., the neural network is first trained with conventional sample data and then trained a second time with the sample data described above, or the neural network may be trained directly with the sample data described above.
Further, as shown in fig. 2, the present application also provides a neural-network-based piglet cry recognition device for identifying whether a piglet cry in a target area is a distress call. The piglet cry recognition device comprises:
a microphone assembly 100 for acquiring a first audio signal in the target area;
a filter 200 for filtering the first audio signal to obtain a second audio signal with frequencies in the range of 4000 Hz-7000 Hz;
and a processor 300 for performing feature extraction on the filtered second audio signal to obtain sound feature parameters and inputting them into the neural network as input data; the neural network processes the input data and outputs a result indicating whether a piglet distress call is present.
Further, the present application also provides a breeding monitoring method for preventing a sow from crushing a piglet. As shown in fig. 3, the breeding monitoring method comprises the following steps:
S10, performing sound recognition using the piglet cry recognition method described above;
S20, judging whether the output result indicates that a piglet distress call is present; if so, executing step S30, otherwise continuing sound recognition and judgment;
S30, performing sound source localization on the second audio signal containing the piglet distress call to obtain a sound source localization result, and acquiring posture information of the sows;
S40, determining, from the posture information and the sound source localization result, the sow that corresponds to the sound source localization result and whose posture information indicates a lying position;
S50, performing a stimulation operation on the determined sow.
In the invention, the sound source localization result is an approximate source position of the audio signal containing the piglet distress call, from which the sows that may be involved in a crushing event can be determined. Because a crushing event normally occurs only when a sow is lying down, only sows in a lying posture are candidates. By combining the sound source localization result with the detected posture information, the precise location of the crushing event can be obtained, and the stimulation operation can then be applied to the corresponding sow to rescue the piglet. This enables automatic monitoring of the pig farm, reduces dependence on manual labor, allows a rapid response to crushing events, increases the success rate of piglet rescue, and improves the survival rate of piglets during lactation.
In addition, because the piglet cry recognition method described above is used to identify piglet distress calls, the recognition accuracy is high and the misjudgment rate is greatly reduced.
In step S30, sound source localization may be performed with a sound localization apparatus comprising a plurality of sound collection devices, such as microphones, arranged radially on the same circuit board. The audio signals collected by these devices are used for the sound recognition described above; when an audio signal is recognized as containing a piglet distress call, it is taken as the target sound and the following sound source localization steps are performed:
S1, determining the sound source coordinates of the target sound within the pig farm from the target signals corresponding to the target sound collected by the sound collection devices and from the spatial distribution of those devices;
S2, determining a pre-judgment area in which a crushing event may have occurred, the pre-judgment area being a circular region centered on the sound source coordinates with a predetermined radius;
S3, selecting the breeding pens that intersect the pre-judgment area as candidate pens;
S4, judging whether a sow is present in each candidate pen; if so, the pen is marked as a target pen, otherwise as a non-target pen. Every candidate pen is checked in this way, and the result may be that a crushing event is judged possible in any or all of them.
In use, the sound localization apparatus is installed in the pig farm, for example on a wall, and one such apparatus may be provided per pig farm. During operation, the sound collection devices continuously collect ambient sound. When a target sound indicating that a piglet is being crushed is recognized, the position of the piglet emitting the target sound is determined from the target signals collected by the sound collection devices and their spatial distribution, and the coordinates of that position, i.e., the sound source coordinates, are obtained. Considering the limited accuracy of sound source localization and the fact that piglets move around, and in order to minimize omissions, the method first performs coarse localization by defining a pre-judgment area centered on the sound source coordinates, thereby determining the pens in which a piglet crushing event may have occurred. It then judges precisely in which pen the crushing event occurred according to the actual situation of the animals in each pen (which may be pre-stored from housing records or collected in real time), so that breeding staff can be notified or the piglets in that pen rescued promptly by other equipment. This sound localization apparatus and method eliminate manual patrol, offer high judgment accuracy, improve the accuracy with which the crushing pen is identified, and allow piglets to be rescued more quickly and reliably. The radially arranged sound collection devices can pick up sound source information in all directions, avoiding the situation in which crushing sounds from some pens, especially distant ones, are difficult to detect. At the same time, combining coarse and fine localization makes it possible to determine more accurately the pen in which the crushing event occurred.
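A minimal sketch of the coarse localization of steps S2-S3, selecting the pens whose rectangular footprints intersect the circular pre-judgment area; the pen data structure and coordinate convention are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pen:
    pen_id: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

def pens_intersecting_circle(pens: List[Pen], cx: float, cy: float, radius: float) -> List[str]:
    """Return the IDs of pens intersecting the circular pre-judgment area
    centred on the estimated sound source coordinates (cx, cy)."""
    hits = []
    for pen in pens:
        # Closest point of the pen rectangle to the circle centre.
        nearest_x = min(max(cx, pen.x_min), pen.x_max)
        nearest_y = min(max(cy, pen.y_min), pen.y_max)
        if (nearest_x - cx) ** 2 + (nearest_y - cy) ** 2 <= radius ** 2:
            hits.append(pen.pen_id)
    return hits
```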
In step S30, the posture information of a sow may be detected by a posture detection device. The posture detection device may be mounted on the sow's body to monitor its current posture (such as standing or lying) in real time and determine whether the sow is lying down (a crushing event generally occurs only when the sow is lying down); for example, the posture detection device may comprise an acceleration sensor and a gyroscope, and the lying posture may include prone, lateral, and supine positions. Alternatively, the posture detection device need not be mounted on the sow: for example, an infrared detection device may be installed at a certain height so that its infrared beam is blocked when the sow is standing and unobstructed when the sow is lying down, from which the posture information of the sow can be obtained.
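As one illustration of how a body-mounted acceleration sensor could yield the standing/lying decision, a minimal sketch based on the tilt of the measured gravity vector; the axis convention and threshold are assumptions, not parameters from the patent.

```python
import math

def is_lying(ax: float, ay: float, az: float, lying_threshold_deg: float = 60.0) -> bool:
    """Classify the sow as lying when the sensor's assumed 'upright' axis (here z)
    deviates strongly from the gravity direction."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0.0:
        return False  # no valid reading
    tilt_deg = math.degrees(math.acos(max(-1.0, min(1.0, az / g))))
    return tilt_deg > lying_threshold_deg
```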
It will be understood that the order of the sound source localization operation and the acquisition of sow posture information in step S30 is not limited: the sows corresponding to the sound source localization result may be determined first and the lying sow identified among them, or the sows whose posture information indicates lying may be determined first and the one corresponding to the sound source localization result identified among them.
In step S50, the stimulation operation performed on the sow may include a vibration operation, an electric shock operation, and the like. In a preferred embodiment, step S50 comprises the steps of:
S51, executing the vibration operation;
S52, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing S53;
S53, judging whether the vibration operation has lasted for at least a first predetermined time; if so, executing S54, otherwise returning to S52; the first predetermined time may be set as required, for example 15 to 20 seconds;
S54, executing the electric shock operation.
In the above steps, the milder vibration operation is executed first, and the electric shock operation is applied to increase the stimulation intensity only when vibration fails to make the sow stand up from the lying posture, which reduces harm to the sow. Moreover, always vibrating before shocking establishes a conditioned reflex in the sow, so that over time it tends to stand up in response to vibration alone, further reducing harm to the sow and saving energy.
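A minimal sketch of the escalation logic of steps S51-S54, with the actuator and detection interfaces abstracted as callables; the polling interval and the 15-second first predetermined time are illustrative assumptions.

```python
import time
from typing import Callable

def stimulate_sow(distress_cleared: Callable[[], bool],
                  set_vibration: Callable[[bool], None],
                  set_shock: Callable[[bool], None],
                  first_period_s: float = 15.0,
                  poll_s: float = 1.0) -> str:
    """Vibrate first (S51); if the distress call persists for the first
    predetermined time (S53), escalate to the electric shock operation (S54)."""
    set_vibration(True)
    started = time.monotonic()
    try:
        while not distress_cleared():                       # S52
            if time.monotonic() - started >= first_period_s:
                set_shock(True)                             # S54: escalate
                return "escalated_to_shock"
            time.sleep(poll_s)
        return "cleared_by_vibration"
    finally:
        set_vibration(False)
```

The shock itself would then be bounded by the second predetermined time handled in steps S61-S63 below.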
In practice, misjudgments by the neural network are difficult to avoid entirely: even after a sow changes from lying to standing, a sound that was identified as a piglet distress call may persist and thus not be eliminated. To improve subsequent judgment accuracy, such misjudged cases are preferably recorded to form a sample library, and the neural network is retrained with this sample library, further improving its sound recognition accuracy and reducing the misjudgment rate.
Specifically, the breeding monitoring method further comprises, after step S50, the steps of:
S60, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing step S70;
S70, attaching a data label to the sound feature parameter corresponding to the second audio signal that was judged to contain a piglet distress call, the label indicating that no piglet distress call is contained (it may be recorded as "0" or "1", for example), storing the sound feature parameter in a sample library, and returning to step S10;
and when the sound feature parameters in the sample library reach a predetermined amount, training the neural network with the sample library so that its parameters are corrected and piglet distress calls are recognized more accurately.
When it is determined in step S60 that the piglet distress call identified in step S10 has not been eliminated, the sound feature parameter corresponding to that audio signal could simply be labeled and stored in the sample library. However, the persistence of the distress call is not necessarily caused by a misjudgment of the neural network; it may have other causes. It is therefore preferable to determine first whether the neural network actually misjudged and, only if it did, to label the corresponding sound feature parameter and store it in the sample library; otherwise it is not stored. Specifically, step S70 comprises the following steps:
S71, sending to a terminal a notification that the piglet distress call has not been eliminated;
S72, judging whether a signal confirming that the judgment was correct has been received from the terminal; if so, returning directly to step S10, otherwise executing step S73;
S73, attaching a data label to the sound feature parameter corresponding to the second audio signal that was judged to contain a piglet distress call, the label indicating that no piglet distress call is contained, storing the sound feature parameter in a sample library, and returning to step S10.
The terminal may be a mobile phone, a tablet computer, or the like. When the terminal receives the notification that the piglet distress call has not been eliminated, the user can conveniently inspect the site and deal with it promptly, and can send through the terminal a signal indicating whether the judgment was correct, which determines whether the corresponding sound feature parameter is labeled and stored in the sample library.
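A minimal sketch of how the misjudgment samples confirmed through the terminal could be accumulated and used to trigger retraining (steps S70-S73); the class, label encoding, and threshold are assumptions for illustration.

```python
from typing import Callable, List, Sequence, Tuple

class MisjudgmentLibrary:
    """Stores sound feature parameters confirmed as misjudgments, labelled
    'no piglet distress call' (encoded here as 0), and triggers retraining
    once enough samples have accumulated."""

    def __init__(self, retrain_threshold: int = 200):
        self.samples: List[Tuple[Sequence[float], int]] = []
        self.retrain_threshold = retrain_threshold

    def add_misjudged(self, feature_vector: Sequence[float]) -> None:
        self.samples.append((feature_vector, 0))  # step S73: label and store

    def maybe_retrain(self, train_fn: Callable[[List[Tuple[Sequence[float], int]]], None]) -> None:
        if len(self.samples) >= self.retrain_threshold:
            train_fn(self.samples)  # retrain the neural network with the sample library
            self.samples.clear()
```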
Further, because an excessively long electric shock would harm the sow, the shock duration should not be too long: if the piglet distress call has still not been eliminated after the shock has lasted a certain time, the shock is stopped. Specifically, step S60 comprises the following steps:
S61, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing S62;
S62, judging whether the electric shock operation has lasted for at least a second predetermined time; if so, executing S63, otherwise returning to S61; the second predetermined time may be set as required, for example 5 to 10 seconds;
S63, stopping the electric shock operation and executing step S70.
Further, the present application also provides a breeding monitoring system for preventing a sow from crushing a piglet. As shown in fig. 4, the system comprises a piglet cry recognition device 10, a sound source localization device 20, a control device 30, at least one posture detection device 40, and an execution device 50 associated with each posture detection device 40;
each posture detection device 40 is configured to detect the posture information of a sow;
each execution device 50 is configured to execute a preset stimulation operation under the control of the control device 30;
For example, the execution device 50 may be mounted on the sow's body. The stimulation operation may include vibration and/or electric shock, which stimulate the sow to change from lying to standing so as to rescue the piglet. The stimulation operation may also include sound, which not only stimulates the sow but also alerts the user (such as a farmer) so that the user can quickly reach the scene.
The piglet cry recognition device 10 is configured to perform sound recognition using the piglet cry recognition method described above, and the sound source localization device 20 is configured to perform sound source localization on the second audio signal that the piglet cry recognition device 10 has identified as containing a piglet distress call.
The control device 30 is configured to determine, from the posture information detected by the posture detection devices 40 and the sound source localization result of the target sound, the execution device 50 that corresponds to the sound source localization result and is associated with a posture detection device 40 whose posture information indicates a lying position as the target execution device, and to control that target execution device 50 to execute the preset stimulation operation.
In one embodiment, each posture detection device 40 is associated with a breeding area (each breeding area represents the location of a sow; for example, each breeding area may be a breeding pen), and the sound source localization result of the target sound may include the breeding areas in which a crushing event may have occurred;
preferably, in one embodiment, a posture detection device 40 and an execution device 50 are integrated in a wearable device, and the posture detection device 40 and execution device 50 in the same wearable device are associated with each other.
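A minimal sketch of the control device's matching logic: among the breeding areas returned by coarse sound-source localization, only the execution devices whose associated posture detection device reports a lying sow are triggered. The dictionary shapes and the "lying" label are assumptions for illustration.

```python
from typing import Dict, List

def select_target_executors(candidate_areas: List[str],
                            area_to_posture_device: Dict[str, str],
                            postures: Dict[str, str],
                            device_to_executor: Dict[str, str]) -> List[str]:
    """Return the execution devices to trigger for a given localization result."""
    targets = []
    for area in candidate_areas:
        posture_device = area_to_posture_device.get(area)
        if posture_device is not None and postures.get(posture_device) == "lying":
            targets.append(device_to_executor[posture_device])
    return targets
```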
It will be appreciated by those skilled in the art that the above-described preferred embodiments may be freely combined and superimposed where no conflict arises.
It will be understood that the embodiments described above are illustrative only and not restrictive, and that various obvious and equivalent modifications and substitutions for details described herein may be made by those skilled in the art without departing from the basic principles of the invention.
Claims (10)
1. A neural-network-based piglet cry recognition method for identifying whether a piglet cry in a target area is a distress call, characterized in that the neural network processes input data and outputs a result indicating whether a piglet distress call is present, and the recognition method comprises the following steps:
S200, acquiring a first audio signal in the target area;
S300, filtering the first audio signal to obtain a second audio signal with frequencies in the range of 4000 Hz-7000 Hz;
S400, performing feature extraction on the filtered second audio signal to obtain sound feature parameters;
S500, inputting the sound feature parameters into the neural network as input data; the neural network processes the input data and outputs a result indicating whether a piglet distress call is present.
2. The neural-network-based piglet cry recognition method of claim 1, wherein said recognition method further comprises the steps of:
S100, training the neural network with sample data;
wherein the sample data comprises multiple groups of first sound feature parameters and multiple groups of second sound feature parameters; the first sound feature parameters are extracted from audio signals containing piglet distress calls, and the second sound feature parameters are extracted from audio signals containing piglet milk-competing calls.
3. The neural-network-based piglet cry recognition method according to claim 2, wherein the audio signals containing piglet distress calls and the audio signals containing piglet milk-competing calls are both acquired in the target area.
4. A neural-network-based piglet cry recognition device for identifying whether a piglet cry in a target area is a distress call, characterized in that the piglet cry recognition device comprises:
a microphone assembly for acquiring a first audio signal in the target area;
a filter for filtering the first audio signal to obtain a second audio signal with frequencies in the range of 4000 Hz-7000 Hz;
and a processor for performing feature extraction on the filtered second audio signal to obtain sound feature parameters and inputting the sound feature parameters into the neural network as input data; the neural network processes the input data and outputs a result indicating whether a piglet distress call is present.
5. A breeding monitoring method for preventing a sow from crushing a piglet, characterized by comprising the following steps:
S10, performing sound recognition using the piglet cry recognition method according to claim 1;
S20, judging whether the output result indicates that a piglet distress call is present; if so, executing step S30, otherwise continuing sound recognition and judgment;
S30, performing sound source localization on the second audio signal containing the piglet distress call to obtain a sound source localization result, and acquiring posture information of the sows;
S40, determining, from the posture information and the sound source localization result, the sow that corresponds to the sound source localization result and whose posture information indicates a lying position;
S50, performing a stimulation operation on the determined sow.
6. The breeding monitoring method as claimed in claim 5, wherein the breeding monitoring method further comprises, after step S50, the steps of:
S60, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing step S70;
S70, attaching a data label to the sound feature parameter corresponding to the second audio signal that was judged to contain a piglet distress call, the label indicating that no piglet distress call is contained, storing the sound feature parameter in a sample library, and returning to step S10;
and training the neural network with the sample library when the sound feature parameters in the sample library reach a predetermined amount.
7. The breeding monitoring method as claimed in claim 6, wherein step S70 comprises the steps of:
S71, sending to a terminal a notification that the piglet distress call has not been eliminated;
S72, judging whether a signal confirming that the judgment was correct has been received from the terminal; if so, returning directly to step S10, otherwise executing step S73;
S73, attaching a data label to the sound feature parameter corresponding to the second audio signal that was judged to contain a piglet distress call, the label indicating that no piglet distress call is contained, storing the sound feature parameter in a sample library, and returning to step S10.
8. The breeding monitoring method of claim 6 or 7, wherein the stimulation operation comprises a vibration operation and an electric shock operation, and step S50 comprises the steps of:
S51, executing the vibration operation;
S52, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing S53;
S53, judging whether the vibration operation has lasted for at least a first predetermined time; if so, executing S54, otherwise returning to S52;
S54, executing the electric shock operation.
9. The breeding monitoring method as claimed in claim 8, wherein step S60 comprises the steps of:
S61, judging whether the piglet distress call identified in step S10 has been eliminated; if so, returning to step S10, otherwise executing S62;
S62, judging whether the electric shock operation has lasted for at least a second predetermined time; if so, executing S63, otherwise returning to S61;
S63, stopping the electric shock operation and executing step S70.
10. A breeding monitoring system for preventing a sow from crushing a piglet, characterized in that the system comprises a piglet cry recognition device, a sound source localization device, a control device, at least one posture detection device, and an execution device associated with each posture detection device;
each posture detection device is configured to detect the posture information of a sow;
each execution device is configured to execute a preset stimulation operation under the control of the control device;
the piglet cry recognition device is configured to perform sound recognition using the piglet cry recognition method according to any one of claims 1 to 4;
the sound source localization device is configured to perform sound source localization on the second audio signal that the piglet cry recognition device has identified as containing a piglet distress call;
and the control device is configured to determine, from the posture information detected by the posture detection devices and the sound source localization result of the sound source localization device, the execution device that corresponds to the sound source localization result and is associated with a posture detection device whose posture information indicates a lying position as the target execution device, and to control the target execution device to execute the preset stimulation operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010405989.XA CN111477236A (en) | 2020-05-14 | 2020-05-14 | Piglet cry recognition method based on neural network, breeding monitoring method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010405989.XA CN111477236A (en) | 2020-05-14 | 2020-05-14 | Piglet cry recognition method based on neural network, breeding monitoring method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111477236A true CN111477236A (en) | 2020-07-31 |
Family
ID=71759908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010405989.XA Pending CN111477236A (en) | 2020-05-14 | 2020-05-14 | Piglet cry recognition method based on neural network, breeding monitoring method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111477236A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111951812A (en) * | 2020-08-26 | 2020-11-17 | 杭州情咖网络技术有限公司 | Animal emotion recognition method and device and electronic equipment |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE202012010238U1 (en) * | 2012-10-26 | 2014-01-29 | Big Dutchman Pig Equipment Gmbh | Arrangement for monitoring and controlling the keeping of sows and their piglets, farrowing box and actuator unit |
CN108052964A (en) * | 2017-12-05 | 2018-05-18 | 翔创科技(北京)有限公司 | Livestock condition detection method, computer program, storage medium and electronic equipment |
CN108198562A (en) * | 2018-02-05 | 2018-06-22 | 中国农业大学 | A kind of method and system for abnormal sound in real-time positioning identification animal house |
US20180228131A1 (en) * | 2015-09-29 | 2018-08-16 | Swinetech, Inc. | Warning system for animal farrowing operations |
CN109637549A (en) * | 2018-12-13 | 2019-04-16 | 北京小龙潜行科技有限公司 | A kind of pair of pig carries out the method, apparatus and detection system of sound detection |
CN110189756A (en) * | 2019-06-28 | 2019-08-30 | 北京派克盛宏电子科技有限公司 | It is a kind of for monitoring the method and system of live pig abnormal sound |
CN110580918A (en) * | 2019-09-03 | 2019-12-17 | 上海秒针网络科技有限公司 | Method and device for sending prompt information, storage medium and electronic device |
CN110598643A (en) * | 2019-09-16 | 2019-12-20 | 上海秒针网络科技有限公司 | Method and device for monitoring piglet compression |
CN110651728A (en) * | 2019-11-05 | 2020-01-07 | 秒针信息技术有限公司 | Piglet pressed detection method, device and system |
CN110782905A (en) * | 2019-11-05 | 2020-02-11 | 秒针信息技术有限公司 | Positioning method, device and system |
CN110940539A (en) * | 2019-12-03 | 2020-03-31 | 桂林理工大学 | Machine equipment fault diagnosis method based on artificial experience and voice recognition |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- TA01: Transfer of patent application right. Effective date of registration: 2023-08-31. Applicant after: Beijing Radium Scene Technology Co., Ltd., 1542, 12th Floor, Building 1, No. 62 Balizhuang Road, Haidian District, Beijing, 100142. Applicant before: Shenlin Technology (Beijing) Co., Ltd., 1-1404-495, 14/F, 87 Xisanhuan North Road, Haidian District, Beijing, 100039.
- WD01: Invention patent application deemed withdrawn after publication (application publication date: 2020-07-31)