CN101976564A - Method for identifying insect voice - Google Patents

Method for identifying insect voice

Info

Publication number
CN101976564A
Authority
CN
China
Prior art keywords
insect
sound
sample
voice signal
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201010515848XA
Other languages
Chinese (zh)
Inventor
王鸿斌
张真
罗茜
赵丽稳
张培毅
孔祥波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Research Institute of Forest Ecology Environment and Protection of Chinese Academy of Forestry
Original Assignee
Research Institute of Forest Ecology Environment and Protection of Chinese Academy of Forestry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Research Institute of Forest Ecology Environment and Protection of Chinese Academy of Forestry filed Critical Research Institute of Forest Ecology Environment and Protection of Chinese Academy of Forestry
Priority to CN201010515848XA priority Critical patent/CN101976564A/en
Publication of CN101976564A publication Critical patent/CN101976564A/en
Pending legal-status Critical Current

Landscapes

  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

The invention discloses a method for identifying insect voice. The method comprises the following steps: denoising an acquired insect sound signal and cutting the denoised signal into a plurality of sound clips, each clip containing one pulse train; performing endpoint detection on each sound clip, each clip after endpoint detection serving as one sample; framing the voiced segment of each sample so that it has a preset number of frames; extracting feature parameters from each frame of each sample and performing time normalization on the extracted feature parameters to obtain identification parameters; training a BP artificial neural network with the identification parameters of some of the samples so that the network learns and memorizes these identification parameters; and inputting the identification parameters of the remaining samples into the trained BP artificial neural network to identify the insect species corresponding to each of those samples. By processing and analyzing insect sounds, the method of the invention can accurately identify insect species.

Description

Insect sound identification method
Technical field
The present invention relates to a sound identification method, and in particular to a method for identifying insect sounds.
Background art
Insects were the earliest species to use acoustic signals for communication within and between species; of the 34 orders of Insecta, 16 contain sound-producing insects. Acoustic signals link an insect's life activities to its external environment and play important roles in calling between individuals within a species, mate seeking, aggression, and warning. Besides the sounds produced by specialized sound-producing organs, insects also produce sounds while gnawing, flying, and performing other activities. The sounds produced by most insects are species-specific; an insect's acoustic characteristics can therefore serve as a basis for identifying its species.
Before insect species can be identified, the insects' sound signals must first be collected. With the rapid development of electronic technology, real-time, high-resolution recording of insect sounds has become possible. For example, the Edirol R24 digital recorder (96 kHz, 24 bit) is inexpensive and records insect sound signals well. As another example, the latest Genex GX9048 is among the most advanced digital audio recorders in the world, providing 48-track PCM recording and playback at 24 bit/192 kHz as well as 48-track DSD recording and playback.
At present, animal sound analysis is carried out with statistical methods such as multivariate analysis of variance, discriminant analysis, and principal component analysis (PCA). With the development of modern technology, it has become feasible to use these statistical methods to build automatic classification systems that can discriminate unknown sounds and measure intraspecific and interspecific variation in sound production. Automatic identification is an important technology developed on the basis of such automatic classification systems: it can model and integrate temporal changes in sound-production patterns and, compared with traditional full-spectrogram measurements, makes better use of time-domain information. Automatic classification systems for animal sounds are mainly based on artificial neural networks (ANN), hidden Markov models (HMM), and Gaussian mixture models (GMM). However, pattern-recognition and machine-learning techniques for insects are still at an early stage, and current automatic classification systems for insect sounds are mainly based on artificial neural networks, especially the BP (back-propagation) artificial neural network.
An artificial neural network (ANN) is an algorithmic mathematical model that performs distributed parallel information processing by imitating the behavioral characteristics of animal neural networks. Depending on the complexity of the system, such a network processes information by adjusting the interconnections among a large number of internal nodes. An artificial neural network has self-learning and adaptive abilities: given a batch of mutually corresponding input-output data in advance, it can analyze and capture the underlying relationship between them and then use that relationship to compute outputs for new input data; this process of learning and analysis is called "training". Artificial neural networks have been successfully applied to the automatic classification and identification of animal sounds. For example, researchers have used artificial neural networks to test 25 species of British Orthoptera and 10 species of Japanese birds, obtaining recognition accuracies of 99% and 100%, respectively.
At present, the acoustic features used for classifying and identifying animal sounds mainly include linear predictive coding (LPC) coefficients, LPC cepstral coefficients (LPCC), Mel-frequency cepstral coefficients (MFCC), and Greenwood function cepstral coefficients (GFCC). In human speech recognition, cepstral coefficients are the most widely used, because stable cepstral coefficients yield good recognition performance and are easy to extract. In the classification and identification of animal sounds, cepstral coefficients, especially MFCC, are likewise the most commonly used. MFCC combines the auditory perception properties of the human ear with the mechanism of speech production and has therefore been widely applied in speech recognition systems.
However, in the field of animal sound classification and identification there is at present neither a consistent pattern-matching method nor a unified scheme for selecting acoustic features. In the automatic identification of insect sounds in particular, the sounds that have been collected, processed, and classified so far come mainly from loud, easily recorded singing insects such as crickets and cicadas, while insects whose sounds are faint and difficult to record have not been covered. Moreover, the acoustic features used in existing ANN-based automatic identification of insect sounds have all been LPCC, LPC, or even spectral peak frequencies, and the recognition success rates achieved with these features are not very high.
Summary of the invention
The object of the present invention is to provide an insect sound identification method that can determine the species of an insect from its collected sound.
To achieve the above object, the present invention adopts the following technical solution:
A method for identifying insect sounds, characterized in that it comprises the following steps:
Step 1: denoising the collected sound signal of the insect;
Step 2: cutting the denoised sound signal of the insect into a plurality of sound clips, each sound clip containing one pulse train;
Step 3: performing endpoint detection on each sound clip to detect the voiced segment and the silent segments in the clip; each sound clip after endpoint detection serves as one sample;
Step 4: framing the voiced segment of each sample so that the voiced segment of each sample has a preset number of frames; extracting feature parameters from each frame of each sample, and performing time normalization on the extracted feature parameters to obtain identification parameters;
Step 5: training a BP artificial neural network with the identification parameters of some of the samples, so that the BP artificial neural network learns and memorizes these identification parameters;
Step 6: inputting the identification parameters of the remaining samples into the trained BP artificial neural network to identify the insect species corresponding to each of the remaining samples.
Before Step 1, the method comprises the following sound-signal acquisition step: an insect whose sound is to be collected is fixed inside a soundproof box, with the sound-producing organ of the insect at a set distance from a sensor arranged in the box; the sensor collects the sound signal of the insect and sends the sound signal to a recording device for storage.
The advantage of the present invention is that the method can accurately identify the species of an insect by processing and analyzing its sound; it is highly practical, achieves a high recognition success rate, and provides entomologists and other workers with a reliable basis for distinguishing insect species. The present invention is applicable not only to species identification of loud, easily recorded singing insects such as crickets and cicadas, but also to species identification of coleopteran insects, such as bark beetles, that produce only faint sounds.
Description of drawings
Fig. 1 is a flow chart of the insect sound identification method of the present invention;
Fig. 2 is a segment of the sound signal collected from the cross-gallery shoot bark beetle;
Fig. 3 is a segment of the sound signal collected from the Yunnan shoot bark beetle;
Fig. 4 is a segment of the sound signal collected from the undercoat shoot bark beetle;
Fig. 5 is one sound clip obtained by cutting the sound signal collected from the cross-gallery shoot bark beetle;
Fig. 6 is one sound clip obtained by cutting the sound signal collected from the Yunnan shoot bark beetle;
Fig. 7 is one sound clip obtained by cutting the sound signal collected from the undercoat shoot bark beetle.
Embodiment
The present invention is described below with reference to the accompanying drawings.
The sound signals produced by insects are non-stationary. Each sound-production event of an insect can produce several pulse groups, each pulse group contains several pulse trains, and each pulse train is composed of a varying number of single pulses.
As shown in Fig. 1, the insect sound identification method of the present invention comprises the following steps:
Step 1: denoising the collected sound signal of the insect (the denoising operation may be performed with Adobe Audition 2.0);
Step 2: cutting the denoised sound signal of the insect into a plurality of sound clips, each sound clip containing one, and only one, pulse train;
Step 3: performing endpoint detection on each sound clip (a sound clip consists of a voiced segment and silent segments) to detect the voiced segment (the region where the sound signal is non-zero and single pulses are present) and the silent segments (the regions where the sound signal is zero and no single pulses are present), which facilitates the subsequent framing operation; each sound clip after endpoint detection serves as one sample (a minimal sketch of this step is given after Step 6);
Step 4: framing the voiced segment of each sample so that the voiced segment of each sample has a preset number of frames; extracting feature parameters from each frame of each sample, and performing time normalization on the extracted feature parameters to obtain identification parameters;
Step 5: training a BP artificial neural network with the identification parameters of some of the samples (called the training samples), so that the BP artificial neural network learns and memorizes these identification parameters, that is, memorizes the species features corresponding to the training samples;
Step 6: inputting the identification parameters of the remaining samples (called the recognition samples) into the trained BP artificial neural network to identify the insect species corresponding to each of these remaining samples.
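As an illustration of Steps 2 and 3, the following minimal sketch trims one manually cut sound clip to its voiced segment using short-time energy. The numpy/scipy tooling, the energy threshold, and the file name are assumptions made only for illustration; the patent itself does not prescribe a particular endpoint-detection algorithm.

import numpy as np
from scipy.io import wavfile

def detect_endpoints(clip, frame_len=256, hop=128, rel_threshold=0.02):
    # Short-time energy per frame; frames above a fraction of the peak energy
    # are treated as voiced (containing single pulses), the rest as silent.
    energies = np.array([
        np.sum(clip[i:i + frame_len].astype(float) ** 2)
        for i in range(0, max(len(clip) - frame_len, 1), hop)
    ])
    voiced = np.where(energies > rel_threshold * energies.max())[0]
    if voiced.size == 0:
        return 0, len(clip)
    start = voiced[0] * hop
    end = min(voiced[-1] * hop + frame_len, len(clip))
    return start, end

# Hypothetical usage: load a denoised recording and trim one pulse-train clip.
rate, signal = wavfile.read("denoised_recording.wav")   # hypothetical file name
clip = signal[:int(0.2 * rate)]                          # illustrative clip bounds
start, end = detect_endpoints(clip)
sample = clip[start:end]                                 # the trimmed clip is one sample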
Before insect species can be identified from their sounds, the sound signals must first be collected. Therefore, before Step 1 the method also comprises the following sound-signal acquisition step: an insect whose sound is to be collected is fixed inside a soundproof box that reduces noise interference, so that it is under stress and produces sound but suffers no physical injury; the sound-producing organ of the insect is kept at a set distance, generally about 0.5 cm, from a sensor arranged in the box; during the recording period, the sensor collects the sound signal of the insect and sends it to a recording device for storage. The recording device may be an Edirol R-4 digital audio recorder used together with an SM4001 sensor, with a sampling frequency of 96 kHz, 16-bit resolution, a single (mono) channel, and the recording level set to maximum.
The method of the invention described above may be applied to a single insect or to multiple insects; if there are multiple insects, they may be of the same species or of different species. The method is applicable to all sound-producing insects, for example coleopteran insects whose sounds are faint; the sound signal produced by such a coleopteran insect is a distress sound.
In Step 4, after the framing operation the voiced segments of different samples may have the same or different numbers of frames; within each sample having the preset number of frames, adjacent frames overlap by a set region.
In Step 4, the step of extracting feature parameters from each frame of each sample, performing time normalization on the extracted feature parameters, and obtaining identification parameters may be as follows: extract the 12-dimensional MFCC feature parameters and the 12-dimensional first-order difference ΔMFCC feature parameters of each frame in a sample; then time-normalize, to a set number, the 12-dimensional MFCC feature parameters and 12-dimensional ΔMFCC feature parameters of those frames whose ΔMFCC feature parameters are non-zero. In other words, the time-normalized 12-dimensional MFCC feature parameters and 12-dimensional ΔMFCC feature parameters are the identification parameters. It should be noted that the time-normalized 12-dimensional MFCC feature parameters are computed by the time-normalization algorithm and are not the 12-dimensional MFCC feature parameters previously computed for each frame of the sample; likewise, the time-normalized 12-dimensional ΔMFCC feature parameters are computed by the time-normalization algorithm and are not the 12-dimensional ΔMFCC feature parameters previously computed for each frame of the sample.
In Step 4, the step of extracting feature parameters from each frame of each sample, performing time normalization on the extracted feature parameters, and obtaining identification parameters may also be as follows: extract the 12-dimensional MFCC feature parameters of each frame in a sample, and time-normalize all the extracted 12-dimensional MFCC feature parameters to a set number. In other words, the time-normalized 12-dimensional MFCC feature parameters are the identification parameters. It should be noted that the time-normalized 12-dimensional MFCC feature parameters are computed by the time-normalization algorithm and are not the 12-dimensional MFCC feature parameters previously computed for each frame of the sample.
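A minimal sketch of this feature-extraction step is given below. It assumes librosa for the MFCC computation and approximates the time-normalization algorithm by averaging the retained frames into a fixed number of groups; both choices are illustrative assumptions, since the patent relies on a separately published time-normalization algorithm and does not name a software library.

import numpy as np
import librosa

def identification_parameters(sample, sr, n_keep=4):
    # 12-dimensional MFCC and first-order difference dMFCC for every frame
    # (256-point frames with a 128-point hop, matching the example further below).
    mfcc = librosa.feature.mfcc(y=sample.astype(float), sr=sr,
                                n_mfcc=12, n_fft=256, hop_length=128)
    d_mfcc = librosa.feature.delta(mfcc, width=3)
    keep = np.any(d_mfcc != 0, axis=0)                     # frames with non-zero dMFCC
    feats = np.vstack([mfcc[:, keep], d_mfcc[:, keep]])    # 24 x n_frames
    # Stand-in for time normalization: average the retained frames into n_keep
    # groups (assumes the voiced segment yields at least n_keep frames).
    groups = np.array_split(np.arange(feats.shape[1]), n_keep)
    normalized = np.column_stack([feats[:, g].mean(axis=1) for g in groups])
    return normalized.flatten()                            # n_keep * 24 values (96 when n_keep = 4)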
Example:
The following example identifies the distress sounds of three species of pine shoot bark beetle: the cross-gallery shoot bark beetle, the Yunnan shoot bark beetle, and the undercoat shoot bark beetle. For convenience of description, these three species are referred to below as insect A, insect B, and insect C, respectively.
First, a segment of sound signal (distress sound) is collected from each of insect A, insect B, and insect C in turn; the collected sound signals of insects A, B, and C are shown in Fig. 2, Fig. 3, and Fig. 4, respectively. Each of these sound signals contains multiple pulse trains.
Then, the sound signals of insects A, B, and C shown in Figs. 2 to 4 are denoised, and each denoised sound signal is cut into a plurality of sound clips. The sound signal of insect A is cut into 5 sound clips; Fig. 5 shows one of the sound clips obtained from the sound signal of insect A, and it contains one pulse train. The sound signals of insects B and C are cut into 6 and 5 sound clips, respectively; Fig. 6 and Fig. 7 each show one of the sound clips obtained from the sound signals of insects B and C.
Then, endpoint detection is performed on each sound clip obtained by cutting the sound signals of insects A, B, and C, detecting the voiced segment and the leading and trailing silent segments. As shown in Figs. 5 to 7, the middle region containing single pulses is the voiced segment, and the leading and trailing regions without single pulses are the silent segments. Each sound clip after endpoint detection is a sample used for the species identification that follows; there are therefore 16 samples in total in this example. In practice, the number of samples depends on the insects to be identified and is generally on the order of several hundred, so as to guarantee the recognition success rate.
Then, the voiced segment of each sample is framed so that it has a preset number of frames. The voiced segments of the 16 samples may have identical or different numbers of frames. For example, the voiced segments of all 16 samples may be framed according to the following rule: 256 points constitute one frame, and adjacent frames overlap by 128 points.
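A minimal numpy sketch of this framing rule (256 samples per frame, 128 samples of overlap between adjacent frames) follows; it is an illustration, not the patent's own code.

import numpy as np

def frame_signal(voiced, frame_len=256, hop=128):
    # Split the voiced segment of a sample into overlapping frames.
    if len(voiced) < frame_len:
        return np.empty((0, frame_len))
    n_frames = 1 + (len(voiced) - frame_len) // hop
    return np.stack([voiced[i * hop: i * hop + frame_len] for i in range(n_frames)])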
After framing, the 12-dimensional MFCC feature parameters and the 12-dimensional first-order difference ΔMFCC feature parameters are extracted from each frame of each sample. Then, the feature parameters extracted from each sample are time-normalized by the time-normalization algorithm. The specific operation for each sample is as follows: the 12-dimensional MFCC feature parameters and the 12-dimensional ΔMFCC feature parameters of those frames whose ΔMFCC feature parameters are non-zero are time-normalized to 4 (the set number); these 4 sets of 12-dimensional MFCC feature parameters and 4 sets of 12-dimensional ΔMFCC feature parameters are not the 12-dimensional MFCC and ΔMFCC feature parameters computed for each frame of the sample before time normalization. Each sample therefore has 4 × 24 identification parameters (if only 12-dimensional MFCC feature parameters are extracted, each sample finally has 4 × 12 identification parameters); one set of 12-dimensional MFCC feature parameters and its corresponding set of 12-dimensional ΔMFCC feature parameters form one group of identification parameters.
Then, a BP artificial neural network is trained with the identification parameters of some of the samples (the training samples) of insects A, B, and C (each sample has 96 identification parameters), so that the BP artificial neural network learns and memorizes these identification parameters, i.e. memorizes the sound features of insects A, B, and C.
Finally, the identification parameters of the remaining samples (the recognition samples) of insects A, B, and C are input into the trained BP artificial neural network, and the insect species corresponding to each input sample is identified.
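The sketch below illustrates these two steps with scikit-learn's MLPClassifier standing in for the BP artificial neural network; the hidden-layer size, solver, iteration count, and file names are assumptions made only for illustration.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical arrays prepared by the preceding steps: X_train and X_test hold one
# 96-value identification-parameter vector per sample, y_train holds the known
# species label of each training sample.
X_train = np.load("train_params.npy")
y_train = np.load("train_labels.npy")
X_test = np.load("test_params.npy")

bp_net = MLPClassifier(hidden_layer_sizes=(24,), solver="sgd",
                       max_iter=20000, random_state=0)
bp_net.fit(X_train, y_train)                  # Step 5: train on the training samples
predicted_species = bp_net.predict(X_test)    # Step 6: classify the recognition samples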
As can be seen from the above, although the cross-gallery, Yunnan, and undercoat shoot bark beetles are very similar in morphology and difficult to tell apart, the acoustic features of their distress sounds differ; therefore, by analyzing and processing their distress sound signals with the method of the present invention, their species can be identified, and the recognition success rate is very high.
In an actual experiment, 100 training samples were taken for each of insects A, B, and C, the number of training iterations was 20,000, and 54, 95, and 54 recognition samples were used, respectively. The recognition success rate reached 75.5% for insect A, 94.7% for insect B, and 94.4% for insect C; every rate is above 75%, the average is above 88%, and this level of accuracy meets the needs of workers who identify insects.
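For reference, the quoted average is the unweighted mean of the three per-species rates: (75.5% + 94.7% + 94.4%) / 3 ≈ 88.2%, which is indeed above 88%.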
In this application, the algorithm of the BP artificial neural network, the time-normalization algorithm, and the computation of the 12-dimensional MFCC and 12-dimensional first-order difference ΔMFCC feature parameters of a frame are all techniques well known in the art and are not described in detail here. The number of input nodes, number of output nodes, and other parameters of the BP artificial neural network are set according to the requirements of each species-identification task. For the time-normalization algorithm, reference may be made to the description in "Initialization method for GMM speaker recognition models based on time-normalization networks" (Shen Chen, Zhang Ming, et al.) in the electronic information science and technology literature database.
The method of the present invention can accurately identify the species of an insect by processing and analyzing its sound; it is highly practical, achieves a high recognition success rate, and provides entomologists and other workers with a reliable basis for distinguishing insect species. The present invention is applicable not only to species identification of loud, easily recorded singing insects such as crickets and cicadas, but also to species identification of coleopteran insects, such as bark beetles, that produce only faint sounds.
The above are preferred embodiments of the present invention and the technical principles applied therein. Any obvious changes made by a person skilled in the art on the basis of the technical solution of the present invention, such as equivalent transformations and simple substitutions, without departing from the spirit and scope of the invention, fall within the protection scope of the present invention.

Claims (7)

1. A method for identifying insect sounds, characterized in that it comprises the steps of:
Step 1: denoising the collected sound signal of the insect;
Step 2: cutting the denoised sound signal of the insect into a plurality of sound clips, each sound clip containing one pulse train;
Step 3: performing endpoint detection on each sound clip to detect the voiced segment and the silent segments in the clip, each sound clip after endpoint detection serving as one sample;
Step 4: framing the voiced segment of each sample so that it has a preset number of frames, extracting feature parameters from each frame of each sample, and performing time normalization on the extracted feature parameters to obtain identification parameters;
Step 5: training a BP artificial neural network with the identification parameters of some of the samples, so that the BP artificial neural network learns and memorizes these identification parameters;
Step 6: inputting the identification parameters of the remaining samples into the trained BP artificial neural network to identify the insect species corresponding to each of the remaining samples.
2. The insect sound identification method according to claim 1, characterized in that:
before said Step 1, the method comprises the following sound-signal acquisition step:
an insect whose sound is to be collected is fixed inside a soundproof box, with the sound-producing organ of the insect at a set distance from a sensor arranged in the box; the sensor collects the sound signal of the insect and sends the sound signal to a recording device for storage.
3. The insect sound identification method according to claim 1, characterized in that:
in said Step 4, the numbers of frames of the voiced segments of the samples are identical or different; within each sample having the preset number of frames, adjacent frames overlap by a set region.
4. The insect sound identification method according to claim 3, characterized in that:
in said Step 4, the step of extracting feature parameters from each frame of each sample, performing time normalization on the extracted feature parameters, and obtaining identification parameters is specifically:
extracting the 12-dimensional MFCC feature parameters and the 12-dimensional first-order difference ΔMFCC feature parameters of each frame in the sample; and time-normalizing, to a set number, the 12-dimensional MFCC feature parameters and 12-dimensional ΔMFCC feature parameters of the frames whose 12-dimensional ΔMFCC feature parameters are non-zero.
5. The insect sound identification method according to claim 3, characterized in that:
in said Step 4, the step of extracting feature parameters from each frame of each sample, performing time normalization on the extracted feature parameters, and obtaining identification parameters is specifically:
extracting the 12-dimensional MFCC feature parameters of each frame in the sample; and time-normalizing all the extracted 12-dimensional MFCC feature parameters to a set number.
6. The insect sound identification method according to any one of claims 1 to 5, characterized in that:
said insect is a single insect; or said insects are multiple, and the species of the multiple insects are identical or different.
7. The insect sound identification method according to claim 6, characterized in that:
said insect is a coleopteran insect, and said sound signal is a distress sound of the coleopteran insect.
CN201010515848XA 2010-10-15 2010-10-15 Method for identifying insect voice Pending CN101976564A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010515848XA CN101976564A (en) 2010-10-15 2010-10-15 Method for identifying insect voice

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010515848XA CN101976564A (en) 2010-10-15 2010-10-15 Method for identifying insect voice

Publications (1)

Publication Number Publication Date
CN101976564A true CN101976564A (en) 2011-02-16

Family

ID=43576444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010515848XA Pending CN101976564A (en) 2010-10-15 2010-10-15 Method for identifying insect voice

Country Status (1)

Country Link
CN (1) CN101976564A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103117061A (en) * 2013-02-05 2013-05-22 广东欧珀移动通信有限公司 Method and device for identifying animals based on voice
CN103905559A (en) * 2014-04-14 2014-07-02 重庆工商职业学院 Valuable and rare bird population distribution detection system based on birdcall voiceprint characteristics
CN103985385A (en) * 2014-05-30 2014-08-13 安庆师范学院 Method for identifying Batrachia individual information based on spectral features
CN104392722A (en) * 2014-11-28 2015-03-04 电子科技大学 Sound-based biological population identification method and system
CN104835498A (en) * 2015-05-25 2015-08-12 重庆大学 Voiceprint identification method based on multi-type combination characteristic parameters
CN105139852A (en) * 2015-07-30 2015-12-09 浙江图维电力科技有限公司 Engineering machinery recognition method and recognition device based on improved MFCC (Mel Frequency Cepstrum Coefficient) sound features
CN105973446A (en) * 2016-05-10 2016-09-28 中国科学院电工研究所 Detector for vibration signal of insect
CN106094008A (en) * 2016-05-20 2016-11-09 渭南师范学院 A kind of grain storage pest sound detection identification system
CN106094614A (en) * 2016-06-06 2016-11-09 河南工程学院 A kind of grain information monitoring remote monitoring system based on the Internet
CN106847293A (en) * 2017-01-19 2017-06-13 内蒙古农业大学 Facility cultivation sheep stress behavior acoustical signal monitoring method
CN107194229A (en) * 2017-05-22 2017-09-22 商洛学院 A kind of computer user's personal identification method
CN107292314A (en) * 2016-03-30 2017-10-24 浙江工商大学 A kind of lepidopterous insects species automatic identification method based on CNN
CN107369444A (en) * 2016-05-11 2017-11-21 中国科学院声学研究所 A kind of underwater manoeuvre Small object recognition methods based on MFCC and artificial neural network
CN107393542A (en) * 2017-06-28 2017-11-24 北京林业大学 A kind of birds species identification method based on binary channels neutral net
WO2018032946A1 (en) * 2016-08-19 2018-02-22 中兴通讯股份有限公司 Method, device, and system for maintaining animal database
WO2019079972A1 (en) * 2017-10-24 2019-05-02 深圳和而泰智能控制股份有限公司 Specific sound recognition method and apparatus, and storage medium
CN110120224A (en) * 2019-05-10 2019-08-13 平安科技(深圳)有限公司 Construction method, device, computer equipment and the storage medium of bird sound identification model
CN111699368A (en) * 2019-05-22 2020-09-22 深圳市大疆创新科技有限公司 Strike detection method, device, movable platform and computer readable storage medium
CN113317285A (en) * 2021-07-08 2021-08-31 湖州师范学院 Bark beetle sex identification instrument based on difference of female and male wing friction sounds of bark beetle
CN113506579A (en) * 2021-05-27 2021-10-15 华南师范大学 Insect pest recognition method based on artificial intelligence and sound and robot
CN115910077A (en) * 2022-11-15 2023-04-04 生态环境部南京环境科学研究所 Insect identification method based on deep learning of sound

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Nie Xiaoying, "Research on feature extraction of Drosophila sounds and artificial neural network classification", master's thesis (Shaanxi), China Master's Theses Full-text Database, Information Science and Technology, 15 January 2008, I140-39; relevant to claims 1-7 *
Zhu Leqing et al., "Automatic identification of insect sounds based on Mel cepstral coefficients and vector quantization", Acta Entomologica Sinica, vol. 53, no. 8, 30 August 2010, pp. 901-907; relevant to claims 1-7 *
Zhao Liwen et al., "Advances in research on insect acoustic signals and their applications", Plant Protection, vol. 34, no. 4, 30 August 2008, pp. 5-12; relevant to claims 1-7 *
Han Ping, "Pattern recognition of the sounds of stored-grain insect pests", Computer Engineering, vol. 29, no. 22, 31 December 2003, pp. 151-152 and 154; relevant to claims 1-7 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103117061A (en) * 2013-02-05 2013-05-22 广东欧珀移动通信有限公司 Method and device for identifying animals based on voice
CN103117061B (en) * 2013-02-05 2016-01-20 广东欧珀移动通信有限公司 A kind of voice-based animals recognition method and device
CN103905559A (en) * 2014-04-14 2014-07-02 重庆工商职业学院 Valuable and rare bird population distribution detection system based on birdcall voiceprint characteristics
CN103985385A (en) * 2014-05-30 2014-08-13 安庆师范学院 Method for identifying Batrachia individual information based on spectral features
CN104392722A (en) * 2014-11-28 2015-03-04 电子科技大学 Sound-based biological population identification method and system
CN104835498A (en) * 2015-05-25 2015-08-12 重庆大学 Voiceprint identification method based on multi-type combination characteristic parameters
CN105139852A (en) * 2015-07-30 2015-12-09 浙江图维电力科技有限公司 Engineering machinery recognition method and recognition device based on improved MFCC (Mel Frequency Cepstrum Coefficient) sound features
CN107292314A (en) * 2016-03-30 2017-10-24 浙江工商大学 A kind of lepidopterous insects species automatic identification method based on CNN
CN105973446A (en) * 2016-05-10 2016-09-28 中国科学院电工研究所 Detector for vibration signal of insect
CN105973446B (en) * 2016-05-10 2019-01-15 中国科学院电工研究所 A kind of detection device of insect vibration signal
CN107369444A (en) * 2016-05-11 2017-11-21 中国科学院声学研究所 A kind of underwater manoeuvre Small object recognition methods based on MFCC and artificial neural network
CN106094008A (en) * 2016-05-20 2016-11-09 渭南师范学院 A kind of grain storage pest sound detection identification system
CN106094614B (en) * 2016-06-06 2018-10-19 河南工程学院 A kind of grain information monitoring remote monitoring system Internet-based
CN106094614A (en) * 2016-06-06 2016-11-09 河南工程学院 A kind of grain information monitoring remote monitoring system based on the Internet
WO2018032946A1 (en) * 2016-08-19 2018-02-22 中兴通讯股份有限公司 Method, device, and system for maintaining animal database
CN106847293A (en) * 2017-01-19 2017-06-13 内蒙古农业大学 Facility cultivation sheep stress behavior acoustical signal monitoring method
CN107194229A (en) * 2017-05-22 2017-09-22 商洛学院 A kind of computer user's personal identification method
CN107393542B (en) * 2017-06-28 2020-05-19 北京林业大学 Bird species identification method based on two-channel neural network
CN107393542A (en) * 2017-06-28 2017-11-24 北京林业大学 A kind of birds species identification method based on binary channels neutral net
WO2019079972A1 (en) * 2017-10-24 2019-05-02 深圳和而泰智能控制股份有限公司 Specific sound recognition method and apparatus, and storage medium
CN110120224A (en) * 2019-05-10 2019-08-13 平安科技(深圳)有限公司 Construction method, device, computer equipment and the storage medium of bird sound identification model
CN110120224B (en) * 2019-05-10 2023-01-20 平安科技(深圳)有限公司 Method and device for constructing bird sound recognition model, computer equipment and storage medium
CN111699368A (en) * 2019-05-22 2020-09-22 深圳市大疆创新科技有限公司 Strike detection method, device, movable platform and computer readable storage medium
CN113506579A (en) * 2021-05-27 2021-10-15 华南师范大学 Insect pest recognition method based on artificial intelligence and sound and robot
CN113506579B (en) * 2021-05-27 2024-01-23 华南师范大学 Insect pest identification method and robot based on artificial intelligence and voice
CN113317285A (en) * 2021-07-08 2021-08-31 湖州师范学院 Bark beetle sex identification instrument based on difference of female and male wing friction sounds of bark beetle
CN115910077A (en) * 2022-11-15 2023-04-04 生态环境部南京环境科学研究所 Insect identification method based on deep learning of sound

Similar Documents

Publication Publication Date Title
CN101976564A (en) Method for identifying insect voice
Agrawal et al. Novel TEO-based Gammatone features for environmental sound classification
CN102163427B (en) Method for detecting audio exceptional event based on environmental model
CN108922541B (en) Multi-dimensional characteristic parameter voiceprint recognition method based on DTW and GMM models
Lee et al. Automatic recognition of animal vocalizations using averaged MFCC and linear discriminant analysis
Kumar et al. Design of an automatic speaker recognition system using MFCC, vector quantization and LBG algorithm
Brandes Feature vector selection and use with hidden Markov models to identify frequency-modulated bioacoustic signals amidst noise
Venter et al. Automatic detection of African elephant (Loxodonta africana) infrasonic vocalisations from recordings
CN103280220A (en) Real-time recognition method for baby cry
CN105448291A (en) Parkinsonism detection method and detection system based on voice
CN112750442B (en) Crested mill population ecological system monitoring system with wavelet transformation and method thereof
Dufour et al. First automatic passive acoustic tool for monitoring two species of procellarides (Pterodroma baraui and Puffinus bailloni) on Reunion Island, Indian Ocean
Ting Yuan et al. Frog sound identification system for frog species recognition
Kharamat et al. Durian ripeness classification from the knocking sounds using convolutional neural network
CN111489763A (en) Adaptive method for speaker recognition in complex environment based on GMM model
Fezari et al. Acoustic analysis for detection of voice disorders using adaptive features and classifiers
Kuo Feature extraction and recognition of infant cries
Wiśniewski et al. Automatic detection of disorders in a continuous speech with the hidden Markov models approach
Permana et al. Implementation of constant-Q transform (CQT) and mel spectrogram to converting bird’s sound
Kamble et al. Emotion recognition for instantaneous Marathi spoken words
Zhang et al. A novel insect sound recognition algorithm based on MFCC and CNN
Zambon et al. Real-time urban traffic noise maps: the influence of Anomalous Noise Events in Milan Pilot area of DYNAMAP
CN111862991A (en) Method and system for identifying baby crying
Hrabina Analysis of linear predictive coefficients for gunshot detection based on neural networks
Cai et al. The best input feature when using convolutional neural network for cough recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110216