CN113095559A - Hatching time prediction method, device, equipment and storage medium - Google Patents

Hatching time prediction method, device, equipment and storage medium

Info

Publication number
CN113095559A
CN113095559A CN202110362045.3A
Authority
CN
China
Prior art keywords
sound frequency
hatching
preset time
sound
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110362045.3A
Other languages
Chinese (zh)
Other versions
CN113095559B (en)
Inventor
苏睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Shuke Haiyi Information Technology Co Ltd
Original Assignee
Jingdong Shuke Haiyi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Shuke Haiyi Information Technology Co Ltd
Priority to CN202110362045.3A
Publication of CN113095559A
Application granted
Publication of CN113095559B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/02 Agriculture; Fishing; Mining
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use

Abstract

The application relates to a hatching time prediction method, apparatus, device and storage medium in the field of intelligent breeding. The hatching time prediction method comprises the following steps: acquiring sound data from inside the hatching apparatus within the current preset time period and the N-1 preset time periods before it, wherein N is greater than 1; extracting the sound frequency feature in the sound data of each preset time period to obtain N sound frequency features, wherein each sound frequency feature represents the number of chick calls inside the hatching apparatus during that preset time period; and obtaining the predicted hatching time according to the N sound frequency features. The method and apparatus are used to solve the problem that the hatching time cannot be predicted accurately enough to improve the hatching rate.

Description

Hatching time prediction method, device, equipment and storage medium
Technical Field
The application relates to the field of intelligent breeding, and in particular to a hatching time prediction method, apparatus, device and storage medium.
Background
The incubation period of chicken eggs is 21 days: the eggs spend the first 18 days in the incubation environment and then enter the hatching period. The hatching step poses a timing problem, and the choice of hatching time involves a trade-off: if the chicks are pulled too early, the resulting change in ambient temperature and other conditions makes it almost impossible for the eggs that have not yet hatched to do so; if they are pulled too late, more chicks hatch, but the chicks that have already broken their shells may die or be injured because of the high ambient temperature, high CO2 concentration and similar conditions.
At present, chicks are pulled at a fixed time node based only on past experience, and other factors cannot be taken into account. Because egg batches differ in breed and energy reserves, and because of factors such as season and outdoor weather, a fixed hatching time is inevitably too early or too late for some batches, so the hatching rate cannot reach its optimum. Since current prediction of the poultry hatching time remains at fixed-schedule pulling, the hatching rate is only about 83%, and that figure includes healthy chicks together with grade-B and grade-C chicks, so the true outcome is even worse than it appears.
Disclosure of Invention
The application provides a hatching time prediction method, apparatus, device and storage medium, which are used to solve the problem that the hatching time cannot be predicted accurately enough to improve the hatching rate.
In a first aspect, an embodiment of the present application provides a method for predicting a hatching time, including:
acquiring sound data inside the hatching device within a current preset time period and N-1 preset time periods before the current preset time period, wherein N is greater than 1;
extracting sound frequency features in the sound data in each preset time period to obtain N sound frequency features, wherein the sound frequency features are used for representing the number of times of calling in the hatching device in each preset time period;
and obtaining the hatching prediction time according to the N sound frequency characteristics.
Optionally, the extracting the sound frequency features in the sound data in each preset time period to obtain N sound frequency features includes:
generating an envelope of the sound data in each of the preset time periods;
and acquiring the number of peaks of each envelope, and taking the number of peaks as the N sound frequency features.
Optionally, the extracting the sound frequency features in the sound data in each preset time period to obtain N sound frequency features includes:
and extracting the sound frequency characteristics in the sound data in each preset time period by using a voice activity detection algorithm to obtain N sound frequency characteristics.
Optionally, the extracting the sound frequency features in the sound data in each preset time period to obtain N sound frequency features includes:
generating an envelope of the sound data in each of the preset time periods;
obtaining the number of peaks of each envelope, and taking the numbers of peaks as N first sound frequency sub-features;
extracting second sound frequency sub-features from the sound data in each preset time period by using a voice activity detection algorithm to obtain N second sound frequency sub-features;
and generating the N sound frequency features according to the N first sound frequency sub-features and the N second sound frequency sub-features.
Optionally, the obtaining the predicted hatching time according to the N sound frequency features includes:
fitting a first curve according to the N sound frequency characteristics and the corresponding moments of the N sound frequency characteristics;
and obtaining the predicted hatching time when the trend of the first curve turns flat.
Optionally, the obtaining the predicted hatching time according to the N sound frequency features includes:
inputting the sound frequency features in the current preset time period and the N-1 preset time periods before it into a pre-trained hatching time prediction model to obtain a first sound frequency prediction feature;
inputting the sound frequency features in the current preset time period and the N-2 preset time periods before it, together with the first sound frequency prediction feature, into the hatching time prediction model to obtain a second sound frequency prediction feature;
inputting the sound frequency features in the current preset time period and the N-i preset time periods before it, together with the i-1 sound frequency prediction features, into the hatching time prediction model to obtain an ith sound frequency prediction feature, wherein i is greater than 2 and i is less than N;
fitting a second curve according to the N sound frequency characteristics, the time corresponding to the N sound frequency characteristics, the i sound frequency prediction characteristics and the time corresponding to the i sound frequency prediction characteristics;
and obtaining the predicted hatching time when the trend of the second curve turns flat.
Optionally, the obtaining the predicted hatching time when the trend of the second curve turns flat includes:
calculating the difference value of the ith sound frequency prediction characteristic and the ith-1 sound frequency prediction characteristic;
and if the difference is smaller than a preset value, taking the time corresponding to the ith sound frequency prediction characteristic as the hatching prediction time.
In a second aspect, an embodiment of the present application provides a hatching time prediction apparatus, including:
an acquisition module, configured to acquire sound data inside the hatching device within a current preset time period and the N-1 preset time periods before it, wherein N is greater than 1;
the extracting module is used for extracting sound frequency features in the sound data in each preset time period to obtain N sound frequency features, wherein the sound frequency features are used for representing the number of times of calling in the hatching device in each preset time period;
and the processing module is used for obtaining the hatching prediction time according to the N sound frequency characteristics.
Optionally, the extraction module comprises:
a first generating unit configured to generate an envelope of the sound data for each of the preset time periods;
a first obtaining unit, configured to obtain a number of peaks of each envelope, and use the number of peaks as the N sound frequency features.
Optionally, the extracting module is configured to extract, by using a voice activity detection algorithm, the sound frequency features in the sound data in each preset time period to obtain N sound frequency features.
Optionally, the extraction module comprises:
a second generating unit configured to generate an envelope of the sound data for each of the preset time periods;
a second acquisition unit, configured to obtain the number of peaks of each envelope and take the numbers of peaks as N first sound frequency sub-features;
an extracting unit, configured to extract second sound frequency sub-features from the sound data in each preset time period by using a voice activity detection algorithm to obtain N second sound frequency sub-features;
a third generating unit, configured to generate the N sound frequency features according to the N first sound frequency sub-features and the N second sound frequency sub-features.
Optionally, the processing module includes:
the first fitting unit is used for fitting a first curve according to the N sound frequency characteristics and the time corresponding to the N sound frequency characteristics;
and a first processing unit, configured to obtain the predicted hatching time when the trend of the first curve turns flat.
Optionally, the processing module includes:
a second processing unit, configured to input the sound frequency features in the current preset time period and the N-1 preset time periods before it into a pre-trained hatching time prediction model to obtain a first sound frequency prediction feature;
the third processing unit is used for inputting the sound frequency characteristics and the first sound frequency prediction characteristics in the current preset time period and N-2 preset time periods before the current preset time period to the hatching time prediction model to obtain second sound frequency prediction characteristics;
a fourth processing unit, configured to input the sound frequency features and the i-1 sound frequency prediction features in the current preset time period and N-i preset time periods before the current preset time period to the hatching time prediction model, and obtain an ith sound frequency prediction feature, where i is greater than 2 and i is less than N;
a second fitting unit, configured to fit a second curve according to the N sound frequency features and their corresponding times, and the i sound frequency prediction features and their corresponding times;
and a fifth processing unit, configured to obtain the predicted hatching time when the trend of the second curve turns flat.
Optionally, the fifth processing unit comprises:
a difference calculating subunit, configured to calculate a difference between the ith sound frequency prediction characteristic and the (i-1) th sound frequency prediction characteristic;
and the processing subunit is configured to, if the difference is smaller than a preset value, use a time corresponding to the ith sound frequency prediction characteristic as the hatching prediction time.
In a third aspect, an embodiment of the present application provides an electronic device, including: the system comprises a processor, a memory and a communication bus, wherein the processor and the memory are communicated with each other through the communication bus;
the memory for storing a computer program;
the processor is configured to execute the program stored in the memory, and implement the hatching time prediction method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the method for predicting the hatching time is implemented according to the first aspect.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantage: the prior art fixes the hatching time according to experience, whereas the method provided by the embodiments of the application acquires sound data from inside the hatching apparatus within the current preset time period and the N-1 preset time periods before it (N greater than 1), extracts the sound frequency feature in the sound data of each preset time period to obtain N sound frequency features, each representing the number of chick calls inside the hatching apparatus during that period, and obtains the predicted hatching time according to the N sound frequency features, so the predicted hatching time follows the actual state of each batch instead of a fixed schedule.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic flow chart illustrating a method for predicting a hatching time in an embodiment of the present application;
FIG. 2 is a schematic illustration of an envelope of sound data generated in an embodiment of the present application;
FIG. 3 is a graphical illustration of unfiltered acoustic data in one embodiment of the present application;
FIG. 4 is a schematic diagram of a frequency domain waveform of sound data according to an embodiment of the present application;
FIG. 5 is a schematic illustration of band pass filtered sound data in accordance with an embodiment of the present application;
FIG. 6 is a graphical illustration of wiener filtered acoustic data in accordance with an embodiment of the present application;
FIG. 7 is a diagram of a second curve fit in one embodiment of the present application;
FIG. 8 is a schematic structural diagram of a hatching time prediction apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a method for predicting a hatching moment, which can be applied to a server, and certainly can also be applied to other electronic equipment, such as a terminal (a mobile phone, a tablet computer, etc.). In the embodiment of the present application, the method is described as being applied to a server.
In the embodiment of the present application, as shown in fig. 1, the method for predicting the hatching time mainly includes:
step 101, obtaining sound data inside the hatching apparatus within a current preset time period and N-1 preset time periods before the current preset time period, wherein N is greater than 1.
For example: the preset time period may be 1 second. The sound data inside the hatching apparatus in the current preset time period is the most recently collected 1 second of sound data, for example the 1 second at 3 h 10 min 20 s. With N = 10, obtaining the sound data inside the hatching apparatus in the current preset time period and the N-1 preset time periods before it means obtaining the ten 1-second segments of sound data at 3 h 10 min 11 s, 12 s, 13 s, 14 s, 15 s, 16 s, 17 s, 18 s, 19 s and 20 s.
Step 102, extracting sound frequency features in sound data in each preset time period to obtain N sound frequency features; wherein, the sound frequency characteristic is used for representing the number of times of calling in the hatching device in each preset time period.
In one embodiment, the sound frequency features are extracted by a variety of methods, including but not limited to the following methods:
Mode one
Generating an envelope of the sound data within each preset time period; and acquiring the number of peaks of each envelope, and taking the number of the peaks as N sound frequency features.
For example: the numbers of envelope peaks for the ten 1-second segments of sound data at 3 h 10 min 11 s, 12 s, 13 s, 14 s, 15 s, 16 s, 17 s, 18 s, 19 s and 20 s are 1, 3, 6, 8, 12, 14, 17, 20, 24 and 25 respectively, and these ten values are the 10 sound frequency features.
The envelope is the curve of the amplitude change over time. A common way to generate it is to take the Hilbert transform of the time-domain signal and combine it with the signal itself as the root of the sum of their squares, which yields the envelope of the time-domain signal.
One peak of the envelope represents one call, so the number of envelope peaks represents the number of calls.
In one embodiment, the envelope of the generated sound data is as shown in Fig. 2. The upper and lower plots of Fig. 2 are the waveforms of the left-channel and right-channel sound data respectively; the abscissa is time and the ordinate is sound intensity. Distinct peaks can be seen in Fig. 2, each peak representing one chick call.
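As a sketch of mode one (assuming Python with NumPy/SciPy; the function name and both thresholds are illustrative choices, not values from the patent), the envelope can be generated via the Hilbert transform and its peaks counted:

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def count_calls(sound, fs, min_height=0.1, min_gap_s=0.05):
    """Count calls in one window: Hilbert envelope, then peak counting.

    The envelope is sqrt(x^2 + H(x)^2), i.e. the magnitude of the
    analytic signal; each sufficiently tall, well-separated envelope
    peak is treated as one chick call.
    """
    envelope = np.abs(hilbert(sound))
    peaks, _ = find_peaks(envelope, height=min_height,
                          distance=int(min_gap_s * fs))
    return len(peaks)
```

`min_height` and `min_gap_s` would have to be tuned to the actual recordings; the patent itself only states that one envelope peak represents one call.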
Mode two
And extracting the sound frequency characteristics in the sound data in each preset time period by using a voice activity detection algorithm to obtain N sound frequency characteristics.
Voice Activity Detection (VAD) is an algorithm that detects, in the frequency domain, whether speech is present in a segment of sound data. The frequency range is divided into six sub-bands (80 Hz-250 Hz, 250 Hz-500 Hz, 500 Hz-1 kHz, 1 kHz-2 kHz, 2 kHz-3 kHz and 3 kHz-4 kHz) and the energy of each sub-band is computed. Following a clustering approach, a Gaussian mixture model uses these energies to estimate, for each sub-band, the probabilities that the signal is noise and that it is non-noise, and the log-likelihood ratio of the two probabilities is tested per sub-band; if no individual sub-band passes, the log-likelihood ratio of the whole band is also tested. Speech is judged present as soon as any one of these tests passes.
By using a voice activity detection algorithm, the number of calls in each preset time period can be extracted.
For example: using a voice activity detection algorithm, the sound frequency features extracted from the ten 1-second segments of sound data at 3 h 10 min 11 s, 12 s, 13 s, 14 s, 15 s, 16 s, 17 s, 18 s, 19 s and 20 s are 1, 4, 6, 9, 11, 14, 18, 20, 25 and 26 respectively, and these ten values are taken as the 10 sound frequency features.
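A minimal stand-in for the detector described above (pure NumPy; the real WebRTC-style VAD models each sub-band with a Gaussian mixture, which is omitted here — frames are simply flagged when any sub-band's energy rises well above a median noise floor, and voiced onsets are counted as calls; the frame length and threshold are illustrative assumptions):

```python
import numpy as np

BANDS = [(80, 250), (250, 500), (500, 1000),
         (1000, 2000), (2000, 3000), (3000, 4000)]

def vad_call_count(sound, fs, frame_ms=20, thresh_db=9.0):
    """Energy-based VAD sketch: split into frames, compute the six
    sub-band energies, flag frames where any band is well above the
    median noise floor, and count rising edges as call onsets."""
    n = int(fs * frame_ms / 1000)
    frames = sound[: len(sound) // n * n].reshape(-1, n)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band_e = np.stack([spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                       for lo, hi in BANDS], axis=1)
    noise = np.median(band_e, axis=0) + 1e-12    # crude noise-floor estimate
    ratio_db = 10 * np.log10(band_e / noise + 1e-12)
    voiced = (ratio_db > thresh_db).any(axis=1)  # any band clearly above floor
    return int(np.count_nonzero(voiced[1:] & ~voiced[:-1]) + voiced[0])
```

A production system would use the GMM-based likelihood-ratio test the text describes rather than a fixed decibel threshold.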
Mode three
Generating an envelope of the sound data within each preset time period; obtaining the number of peaks of each envelope, and taking the numbers of peaks as N first sound frequency sub-features; extracting second sound frequency sub-features from the sound data in each preset time period by using a voice activity detection algorithm to obtain N second sound frequency sub-features; and generating N sound frequency features according to the N first sound frequency sub-features and the N second sound frequency sub-features.
Generating the N sound frequency features from both the first and second sound frequency sub-features makes the sound frequency features more accurate.
In a specific embodiment, the N sound frequency features may be generated from the N first and N second sound frequency sub-features by taking, for each moment, the average of the first and second sub-features at that moment and using the average as the sound frequency feature for that moment.
For example: the numbers of envelope peaks for the ten 1-second segments of sound data at 3 h 10 min 11 s through 20 s are 1, 3, 6, 8, 12, 14, 17, 20, 24 and 25, taken as the 10 first sound frequency sub-features; the voice-activity-detection counts extracted from the same ten segments are 1, 4, 6, 9, 11, 14, 18, 20, 25 and 26, taken as the 10 second sound frequency sub-features; averaging the first and second sub-features at each moment then gives the 10 sound frequency features 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5 and 25.5.
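The fusion in mode three reduces to an element-wise average of the two count series; using the numbers from the example above:

```python
import numpy as np

peak_counts = np.array([1, 3, 6, 8, 12, 14, 17, 20, 24, 25])  # mode one (envelope peaks)
vad_counts = np.array([1, 4, 6, 9, 11, 14, 18, 20, 25, 26])   # mode two (VAD counts)

# average the two sub-features at each moment to get the fused features
features = (peak_counts + vad_counts) / 2
# features: 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5, 25.5
```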
In a specific embodiment, before the sound frequency features are extracted from the sound data of each preset time period, band-pass filtering and wiener filtering are applied to the sound data collected inside the hatching apparatus in the current preset time period and the N-1 preset time periods before it.
Sound data are collected continuously inside the hatching apparatus by a sound collection device, but the recordings often contain environmental noise from air conditioners, alarms and the like, and the raw, unfiltered sound data show no usable amplitude features in the time domain. In one embodiment, the unfiltered sound data are shown in Fig. 3. The upper and lower plots of Fig. 3 are the time-domain waveforms of the unfiltered left-channel and right-channel sound data; the abscissa is time and the ordinate is sound intensity. Fig. 3 shows a large number of spikes and no clear amplitude features, so the sound frequency features cannot be extracted accurately.
First, a Fourier transform converts the time-domain sound signal to the frequency domain, as shown in Fig. 4. The two curves in Fig. 4 are the frequency-domain waveforms of the unfiltered left-channel and right-channel sound data; the abscissa is frequency and the ordinate is sound intensity. Both curves show a clear peak in the 2500 Hz-7000 Hz range, so keeping only the sound data in that band filters out part of the background noise. Accordingly, band-pass filtering is applied to the sound data collected inside the hatching apparatus in the current preset time period and the N-1 preset time periods before it: the sound data are passed through a band-pass filter that retains the 2500 Hz-7000 Hz band and discards the other frequency ranges, improving the suppression of background noise. The band-pass-filtered sound data are shown in Fig. 5. The upper and lower plots of Fig. 5 are the time-domain waveforms of the band-pass-filtered left-channel and right-channel sound data; the abscissa is time and the ordinate is sound intensity. Compared with Fig. 3, many spikes have been removed and the amplitude features are clear, but some background noise with relatively stable amplitude remains.
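A sketch of the band-pass step (SciPy Butterworth filter; the filter order and the zero-phase `sosfiltfilt` pass are implementation choices not specified in the patent — only the 2500-7000 Hz band is):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_2500_7000(sound, fs, order=4):
    """Keep only the 2500-7000 Hz band where the chick calls concentrate.

    fs must exceed 14 kHz so that the 7000 Hz edge lies below Nyquist.
    """
    sos = butter(order, [2500.0, 7000.0], btype="bandpass", fs=fs,
                 output="sos")
    return sosfiltfilt(sos, sound)  # zero-phase: no time shift of the calls
```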
To remove this stable-amplitude background noise, wiener filtering is further applied to the band-pass-filtered sound data. The wiener filter combines several speech/noise classification features into one model through a likelihood-ratio function, forming a joint probability density over the features, which include the LRT (likelihood ratio test) feature together with spectral flatness and spectral difference. Because speech contains more harmonics than noise, speech shows peaks at the fundamental frequency and its harmonics, while the noise spectrum is more stable than the speech spectrum; wiener filtering the band-pass-filtered sound data therefore removes further background noise. The wiener-filtered sound data are shown in Fig. 6. The upper and lower plots of Fig. 6 are the time-domain waveforms of the wiener-filtered left-channel and right-channel sound data; the abscissa is time and the ordinate is sound intensity. Compared with Fig. 5, the background noise with relatively stable amplitude has been removed, the amplitude features are clear, and the sound frequency features can now be extracted accurately.
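SciPy's `wiener` is a simple local-statistics Wiener filter, not the LRT-based suppressor with spectral-flatness and spectral-difference features described above, but it illustrates the same idea of attenuating stationary background noise after band-pass filtering (the window size is an illustrative assumption):

```python
import numpy as np
from scipy.signal import wiener

def suppress_stationary_noise(sound, window=31):
    """Local Wiener filter: where the local variance is close to the
    estimated noise power the output collapses toward the local mean,
    which flattens stable-amplitude background noise while keeping
    high-energy bursts such as chick calls."""
    return wiener(sound, mysize=window)
```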
And 103, obtaining the hatching prediction time according to the N sound frequency characteristics.
In one embodiment, the predicted hatching time may be obtained in a variety of ways, including but not limited to the following:
Mode one
Fitting a first curve according to the N sound frequency features and their corresponding moments; when the trend of the first curve turns flat, the predicted hatching time is obtained.
For example: the sound frequency features corresponding to the ten moments 3 h 10 min 11 s through 20 s are 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5 and 25.5 respectively. Taking the moment as the abscissa and its sound frequency feature as the ordinate, a first curve is fitted through the ten points (11, 1), (12, 3.5), (13, 6), (14, 8.5), (15, 11.5), (16, 14), (17, 17.5), (18, 20), (19, 24.5) and (20, 25.5), giving a functional expression of the first curve; from this the target point at which the curve turns flat is obtained, and the abscissa of that target point is the predicted hatching time.
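A sketch of mode one's flattening test (NumPy polynomial fit; the polynomial degree and slope threshold are illustrative assumptions — the patent only requires a fitted curve and a point where its trend turns flat):

```python
import numpy as np

def flattening_time(times, feats, degree=3, slope_eps=0.5):
    """Fit a polynomial to (moment, call-count) points and return the
    first moment at which the fitted derivative drops below slope_eps,
    i.e. the curve turns flat; None if the curve is still rising over
    the whole observed span."""
    coeffs = np.polyfit(times, feats, degree)
    grid = np.linspace(times[0], times[-1], 200)
    slopes = np.polyval(np.polyder(coeffs), grid)
    flat = np.flatnonzero(slopes < slope_eps)
    return float(grid[flat[0]]) if flat.size else None
```

On the ten rising points of the example above such a fit would return None, meaning the hatching time has not yet been reached and monitoring continues.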
Mode two
Inputting the sound frequency features in the current preset time period and the N-1 preset time periods before it into a pre-trained hatching time prediction model to obtain a first sound frequency prediction feature; inputting the sound frequency features in the current preset time period and the N-2 preset time periods before it, together with the first sound frequency prediction feature, into the hatching time prediction model to obtain a second sound frequency prediction feature; inputting the sound frequency features in the current preset time period and the N-i preset time periods before it, together with the i-1 sound frequency prediction features, into the hatching time prediction model to obtain an ith sound frequency prediction feature, wherein i is greater than 2 and i is less than N; fitting a second curve according to the N sound frequency features and their corresponding times, and the i sound frequency prediction features and their corresponding times; and obtaining the predicted hatching time when the trend of the second curve turns flat.
The pre-trained hatching moment prediction model may be a Recurrent Neural Network (RNN).
For example: the preset time period may be 1 second, so the sound frequency feature in the current preset time period is the feature extracted from the 1-second window ending at 3 hours 10 minutes 20 seconds. With N being 10, the sound frequency features in the current preset time period and in the N-1 preset time periods before it correspond to the 10 moments from 3 hours 10 minutes 11 seconds to 3 hours 10 minutes 20 seconds, and are 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5 and 25.5 respectively.
The sound frequency features 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5 and 25.5 corresponding to the 10 moments from 3 hours 10 minutes 11 seconds to 20 seconds are input into the pre-trained hatching moment prediction model to obtain the first sound frequency prediction feature, which corresponds to 3 hours 10 minutes 21 seconds. The sound frequency features 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5 and 25.5 corresponding to the 9 moments from 3 hours 10 minutes 12 seconds to 20 seconds, together with that first prediction feature, are input into the model to obtain the second sound frequency prediction feature, corresponding to 3 hours 10 minutes 22 seconds. Continuing in this way, the sound frequency features 24.5 and 25.5 corresponding to the 2 moments 3 hours 10 minutes 19 seconds and 20 seconds, together with the 8 sound frequency prediction features corresponding to 3 hours 10 minutes 21 seconds through 28 seconds, are input into the model to obtain the ninth sound frequency prediction feature, corresponding to 3 hours 10 minutes 29 seconds.
Then, taking time as the abscissa and the corresponding sound frequency feature or sound frequency prediction feature as the ordinate, a second curve is fitted to the 10 sound frequency features 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5 and 25.5 for the moments from 3 hours 10 minutes 11 seconds to 20 seconds and the first to ninth sound frequency prediction features for the moments from 3 hours 10 minutes 21 seconds to 29 seconds, giving the function expression of the second curve; the target point at which the trend of the second curve turns gentle is then found, and the abscissa of that target point is the hatching prediction time.
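The rolling prediction loop above can be sketched as follows. The patent's model is a trained RNN that is not reproduced here, so a least-squares linear extrapolator stands in for it as a labeled assumption; everything else (the N-length window, feeding each prediction back as input) follows the described procedure.

```python
import numpy as np

# Observed sound frequency features for the 10 moments
# 3 h 10 min 11 s .. 20 s (from the example above).
history = [1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5, 25.5]
N = len(history)

def predict_next(window):
    # Hypothetical stand-in for the trained RNN: extrapolate one step
    # ahead with a least-squares line fitted over the window.
    x = np.arange(len(window))
    slope, intercept = np.polyfit(x, window, deg=1)
    return slope * len(window) + intercept

# Roll forward: each prediction is appended and fed back as input,
# mirroring the first/second/.../ith prediction steps above.
window = list(history)
predictions = []
for _ in range(N - 1):                 # produce the 1st..9th predictions
    nxt = predict_next(window[-N:])    # always the most recent N values
    predictions.append(nxt)
    window.append(nxt)
```

The real RNN would be trained so that the predicted sequence bends toward a plateau, which the linear stand-in cannot do; the sketch only shows the data flow of the recursion.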
In one embodiment, there are many ways to determine whether the second curve exhibits a gradual trend, including but not limited to the following listed ways:
Mode one
Calculating the difference between the ith sound frequency prediction feature and the (i-1)th sound frequency prediction feature; and if the difference is smaller than a preset value, taking the moment corresponding to the ith sound frequency prediction feature as the hatching prediction moment.
Mode two
Calculating the slope between the point of the ith sound frequency prediction feature and the point of the (i-1)th sound frequency prediction feature on the second curve; and if the slope is smaller than a preset slope, taking the moment corresponding to the ith sound frequency prediction feature as the hatching prediction moment.
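Both flatness checks can be sketched in a few lines. The preset value, preset slope and the predicted feature values used here are illustrative assumptions, not values from the patent.

```python
def flat_by_difference(preds, i, preset_value=0.5):
    """Mode one: the ith and (i-1)th sound frequency prediction
    features differ by less than a preset value (illustrative)."""
    return abs(preds[i] - preds[i - 1]) < preset_value

def flat_by_slope(times, preds, i, preset_slope=0.5):
    """Mode two: the slope between consecutive predicted points on
    the second curve is below a preset slope (illustrative)."""
    return (preds[i] - preds[i - 1]) / (times[i] - times[i - 1]) < preset_slope

# Hypothetical predicted features for seconds 21..25; with 1-second
# sampling the two checks coincide.  The first index that passes
# gives the hatching prediction moment.
times = [21, 22, 23, 24, 25]
preds = [26.0, 27.5, 28.3, 28.6, 28.7]
hatch_i = next(i for i in range(1, len(preds)) if flat_by_difference(preds, i))
hatch_moment = times[hatch_i]   # first moment where growth turns gentle
```

Here the difference first drops below the preset value between seconds 23 and 24, so second 24 would be reported as the hatching prediction moment.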
In a specific embodiment, the fitted second curve is shown in fig. 7: the dotted line is the fitted second curve, and the solid line is a curve drawn from the sound frequency features extracted from sound data actually collected inside the hatching apparatus; the abscissa is time and the ordinate is the number of calls. It can be observed that the inflection point at which the fitted second curve turns to a slow increase comes earlier than the corresponding inflection point of the solid line; that is, the fitted second curve captures the slow-growth inflection point in advance, so the hatching prediction time can be obtained accurately and the hatching rate improved.
In summary, in the embodiment of the present application, sound data inside the hatching apparatus is acquired within the current preset time period and the N-1 preset time periods before it, where N is greater than 1; the sound frequency feature in the sound data of each preset time period is extracted to obtain N sound frequency features, each representing the number of calls inside the hatching apparatus during that preset time period; and the hatching prediction time is obtained from the N sound frequency features. Because the sound frequency features are extracted from sound data actually collected inside the hatching apparatus, the hatching prediction time derived from them addresses the problem that the hatching time could not previously be predicted accurately, thereby improving the hatching rate.
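For illustration, the envelope-and-peak-count variant of the feature extraction (described in claim 2 below) might look as follows on synthetic audio. The smoothing window, threshold and test signal are all assumptions for the sketch, not values from the patent.

```python
import numpy as np

def call_count(samples, rate, threshold=0.2):
    # Hypothetical sound frequency feature extractor: rectify the
    # signal, smooth it into an envelope with a ~10 ms moving average,
    # and count above-threshold bursts as calls (parameters illustrative).
    win = rate // 100
    env = np.convolve(np.abs(samples), np.ones(win) / win, mode="same")
    above = env > threshold
    # Rising edges of the above-threshold mask approximate the peak count.
    return int(np.sum(~above[:-1] & above[1:]))

# One second of synthetic audio containing three short bursts ("calls").
rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
sig = np.zeros(rate)
for start in (0.1, 0.4, 0.7):
    mask = (t >= start) & (t < start + 0.05)
    sig[mask] = np.sin(2 * np.pi * 3000 * t[mask])

n_calls = call_count(sig, rate)   # sound frequency feature for this window
```

Running this once per preset time period would yield the per-period call counts that the prediction stage consumes.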
Based on the same concept, the embodiment of the present application provides a hatching time prediction apparatus, and the specific implementation of the apparatus may refer to the description of the method embodiment, and repeated details are not repeated, as shown in fig. 8, the apparatus mainly includes:
an obtaining module 801, configured to obtain sound data inside the hatching apparatus within a current preset time period and N-1 preset time periods before the current preset time period, where N is greater than 1;
an extracting module 802, configured to extract a sound frequency feature in the sound data in each preset time period to obtain N sound frequency features, where the sound frequency feature is used to indicate the number of times of calling in the hatching apparatus in each preset time period;
and the processing module 803 is configured to obtain the hatching prediction time according to the N sound frequency characteristics.
Based on the same concept, an embodiment of the present application further provides an electronic device, as shown in fig. 9, the electronic device mainly includes: a processor 901, a memory 902 and a communication bus 903, wherein the processor 901 and the memory 902 communicate with each other via the communication bus 903. The memory 902 stores a program executable by the processor 901, and the processor 901 executes the program stored in the memory 902, so as to implement the following steps:
acquiring sound data inside the hatching device within a current preset time period and N-1 preset time periods before the current preset time period, wherein N is greater than 1; extracting sound frequency characteristics in the sound data in each preset time period to obtain N sound frequency characteristics, wherein the sound frequency characteristics are used for representing the number of times of calling in the hatching device in each preset time period; and obtaining the hatching prediction time according to the N sound frequency characteristics.
The communication bus 903 mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 903 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
The Memory 902 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Alternatively, the memory may be at least one storage device located remotely from the processor 901.
The Processor 901 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc., and may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic devices, discrete gates or transistor logic devices, and discrete hardware components.
In a further embodiment of the present application, there is also provided a computer-readable storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute the hatching moment prediction method described in the above embodiment.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives), among others.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A hatching time prediction method is characterized by comprising the following steps:
acquiring sound data inside the hatching device within a current preset time period and N-1 preset time periods before the current preset time period, wherein N is greater than 1;
extracting sound frequency features in the sound data in each preset time period to obtain N sound frequency features, wherein the sound frequency features are used for representing the number of times of calling in the hatching device in each preset time period;
and obtaining the hatching prediction time according to the N sound frequency characteristics.
2. The method for predicting the hatching time according to claim 1, wherein the extracting the sound frequency features from the sound data in each of the preset time periods to obtain N sound frequency features comprises:
generating an envelope of the sound data in each of the preset time periods;
and acquiring the number of peaks of each envelope, and taking the number of peaks as the N sound frequency features.
3. The method for predicting the hatching time according to claim 1, wherein the extracting the sound frequency features from the sound data in each of the preset time periods to obtain N sound frequency features comprises:
and extracting the sound frequency characteristics in the sound data in each preset time period by using a voice activity detection algorithm to obtain N sound frequency characteristics.
4. The method for predicting the hatching time according to claim 1, wherein the extracting the sound frequency features from the sound data in each of the preset time periods to obtain N sound frequency features comprises:
generating an envelope of the sound data in each of the preset time periods;
obtaining the number of wave crests of each envelope, and taking the number of wave crests as N first sound frequency sub-features;
extracting second sound frequency sub-features in the sound data in each preset time period by using a voice activity detection algorithm to obtain N second sound frequency sub-features;
and generating the N sound frequency features according to the N first sound frequency sub-features and the N second sound frequency sub-features.
5. The method for predicting the hatching time according to any one of claims 1 to 4, wherein the obtaining the predicted hatching time according to the N sound frequency characteristics comprises:
fitting a first curve according to the N sound frequency characteristics and the corresponding moments of the N sound frequency characteristics;
and when the trend of the first curve shows a gentle trend, obtaining the hatching prediction time.
6. The method for predicting the hatching time according to any one of claims 1 to 4, wherein the obtaining the predicted hatching time according to the N sound frequency characteristics comprises:
inputting the sound frequency characteristics in the current preset time period and N-1 preset time periods before the current preset time period into a pre-trained hatching moment prediction model to obtain a first sound frequency prediction characteristic;
inputting the sound frequency characteristics and the first sound frequency prediction characteristics in the current preset time period and N-2 preset time periods before the current preset time period to the hatching moment prediction model to obtain second sound frequency prediction characteristics;
inputting the sound frequency characteristics and the i-1 sound frequency prediction characteristics in the current preset time period and N-i preset time periods before the current preset time period into the hatching time prediction model to obtain an ith sound frequency prediction characteristic, wherein i is greater than 2 and i is less than N;
fitting a second curve according to the N sound frequency characteristics, the time corresponding to the N sound frequency characteristics, the i sound frequency prediction characteristics and the time corresponding to the i sound frequency prediction characteristics;
and when the trend of the second curve shows a gentle trend, obtaining the hatching prediction time.
7. The hatching time prediction method of claim 6, wherein when the trend of the second curve shows a gradual trend, obtaining the hatching prediction time comprises:
calculating the difference value of the ith sound frequency prediction characteristic and the ith-1 sound frequency prediction characteristic;
and if the difference is smaller than a preset value, taking the time corresponding to the ith sound frequency prediction characteristic as the hatching prediction time.
8. A hatching timing prediction apparatus comprising:
the system comprises an acquisition module, a processing module and a control module, wherein the acquisition module is used for acquiring sound data inside the hatching device within a current preset time period and N-1 preset time periods before the current preset time period, and N is greater than 1;
the extracting module is used for extracting sound frequency features in the sound data in each preset time period to obtain N sound frequency features, wherein the sound frequency features are used for representing the number of times of calling in the hatching device in each preset time period;
and the processing module is used for obtaining the hatching prediction time according to the N sound frequency characteristics.
9. An electronic device, comprising: the system comprises a processor, a memory and a communication bus, wherein the processor and the memory are communicated with each other through the communication bus;
the memory for storing a computer program;
the processor is used for executing the program stored in the memory to realize the hatching moment prediction method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method for predicting the hatching moment according to any one of claims 1 to 7.
CN202110362045.3A 2021-04-02 2021-04-02 Method, device, equipment and storage medium for predicting hatching time Active CN113095559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110362045.3A CN113095559B (en) 2021-04-02 2021-04-02 Method, device, equipment and storage medium for predicting hatching time

Publications (2)

Publication Number Publication Date
CN113095559A 2021-07-09
CN113095559B 2024-04-09

Family

ID=76673579


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005311534A (en) * 2004-04-19 2005-11-04 Ntt Docomo Inc Server, information communication terminal, and alarm system
JP2014187625A (en) * 2013-03-25 2014-10-02 Pioneer Electronic Corp Audio signal processing device, acoustic device, method of controlling audio signal processing device, and program
CN107578344A (en) * 2017-07-28 2018-01-12 深圳市盛路物联通讯技术有限公司 A kind of monitoring method of biological information, and monitoring device
KR20180038833A (en) * 2016-10-07 2018-04-17 건국대학교 글로컬산학협력단 Method of estimating environment of layer chicken based on chickens sound and apparatus for the same
WO2018150616A1 (en) * 2017-02-15 2018-08-23 日本電信電話株式会社 Abnormal sound detection device, abnormality degree calculation device, abnormal sound generation device, abnormal sound detection learning device, abnormal signal detection device, abnormal signal detection learning device, and methods and programs therefor
US10062378B1 (en) * 2017-02-24 2018-08-28 International Business Machines Corporation Sound identification utilizing periodic indications
WO2019019667A1 (en) * 2017-07-28 2019-01-31 深圳光启合众科技有限公司 Speech processing method and apparatus, storage medium and processor
CN110111815A (en) * 2019-04-16 2019-08-09 平安科技(深圳)有限公司 Animal anomaly sound monitoring method and device, storage medium, electronic equipment
CN110738351A (en) * 2019-09-10 2020-01-31 北京海益同展信息科技有限公司 intelligent monitoring device, system and control method
CN110955286A (en) * 2019-10-18 2020-04-03 北京海益同展信息科技有限公司 Poultry egg monitoring method and device
CN111583962A (en) * 2020-05-12 2020-08-25 南京农业大学 Sheep rumination behavior monitoring method based on acoustic analysis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

GR01 Patent grant