CN108157219A - Pet bark-stopping apparatus and method based on a convolutional neural network - Google Patents
- Publication number: CN108157219A (application number CN201711407047.XA)
- Authority: CN (China)
- Prior art keywords: pet, convolutional neural networks, barking, sound
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K15/00—Devices for taming animals, e.g. nose-rings or hobbles; Devices for overturning animals in general; Training or exercising equipment; Covering boxes
- A01K15/02—Training or exercising equipment, e.g. mazes or labyrinths for animals; Electric shock devices; Toys specially adapted for animals
- A01K15/021—Electronic training devices specially adapted for dogs or cats
- A01K15/022—Anti-barking devices
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The present invention provides a convolutional-neural-network-based method for stopping pet barking, comprising the following steps: S1, preparing training samples by selecting several segments of pet barking sounds as training data for the model; S2, preprocessing the raw pet barking sounds; S3, computing the spectrogram; S4, feeding the spectrogram into the convolutional neural network; S5, training the model; S6, identifying the pet; S7, triggering and playing the bark-stopping sound. The present invention also provides a convolutional-neural-network-based pet bark-stopping device. The beneficial effects of the present invention are: applying a convolutional neural network to pet bark stopping improves the flexibility and interference resistance of bark stopping without harming the pet, and additionally enables pet identification.
Description
Technical Field
The present invention relates to bark-stopping methods, and in particular to a pet bark-stopping device and method based on a convolutional neural network.
Background Art
Traditional bark-stopping methods include, besides surgically cutting the pet's (e.g. a dog's) vocal cords or fitting the pet with a muzzle, the use of simple electronic anti-bark devices of the vibration, ultrasonic, or electric-shock type.
The disadvantages of the traditional methods are as follows:
(1) They are inflexible and can easily harm the pet.
(2) They cannot identify the pet that triggered the barking.
Summary of the Invention
To solve the problems in the prior art, the present invention provides a pet bark-stopping device and method based on a convolutional neural network.
The present invention provides a convolutional-neural-network-based pet bark-stopping device comprising a microphone, an operational amplifier, an embedded processor, a memory, a power amplifier, and a speaker, wherein the output terminal of the microphone is connected to the input terminal of the operational amplifier, the output terminal of the operational amplifier is connected to the input terminal of the embedded processor, the embedded processor is connected to the memory, the output terminal of the embedded processor is connected to the input terminal of the power amplifier, and the output terminal of the power amplifier is connected to the input terminal of the speaker.
The present invention also provides a convolutional-neural-network-based method for stopping pet barking, comprising the following steps:
S1. Prepare training samples: select several segments of pet barking sounds as training data for the model.
S2. Preprocessing: preprocess the raw pet barking sounds.
S3. Compute the spectrogram.
S4. Feed the spectrogram into the convolutional neural network.
S5. Train the model.
S6. Identify the pet.
S7. Trigger and play the bark-stopping sound.
As a further improvement of the present invention, the preprocessing in step S2 includes pre-emphasis, framing with windowing, and bark-sound endpoint detection.
As a further improvement of the present invention, the convolutional neural network in step S4 includes a convolutional layer, a downsampling layer, and a fully connected layer. The convolutional layer, as the first layer of the network, convolves the two-dimensional spectrogram signal directly; the convolution kernel filter is a 5x5 template, and the outputs produced by the different kernel filters constitute the feature maps. Each kernel filter shares the same parameters, including the same weight matrix and bias term. The convolutional-layer mathematical model used is:
y = f(x * k + b)
where x is the input signal, k is the convolution kernel, * denotes the convolution operation, b is the bias term, f is the sigmoid function, and y is the output feature map.
The downsampling layer is placed after the convolutional layer; the downsampling filter is a 2x2 template whose sampling strategy takes the maximum of the 4 corresponding pixels, and the fully connected layer passes the scores to a classifier.
As a further improvement of the present invention, in step S5 the model is trained on a PC, adjusting the parameters through forward and backward propagation until the trained model is optimal. So that the model trained on the PC can also be deployed well on an embedded mobile terminal with relatively scarce computing resources, the weights are quantized to slim down the model, and an Android-compatible APK model file is generated.
As a further improvement of the present invention, in step S6 the trained APK model file is deployed in the convolutional-neural-network-based embedded Android bark-stopping device of claim 1. The device's microphone captures the pet's barking signal, the sound spectrogram is extracted and used as the input to the convolutional neural network model, and a score probability is obtained. The score is compared with a set threshold: if it exceeds the threshold, the identity of the pet under detection is confirmed; otherwise it is not.
As a further improvement of the present invention, in step S7, after the pet's identity has been confirmed, it is further checked whether the amplitude of the pet's barking exceeds a set threshold; if so, the bark-stopping sound stored in the memory is played through the speaker.
The beneficial effects of the present invention are: applying a convolutional neural network to pet bark stopping improves the flexibility and interference resistance of bark stopping without harming the pet, and additionally enables pet identification.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of a convolutional-neural-network-based pet bark-stopping device according to the present invention.
Fig. 2 is a schematic flowchart of a convolutional-neural-network-based method for stopping pet barking according to the present invention.
Detailed Description of Embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, a convolutional-neural-network-based pet bark-stopping device comprises a microphone 101, an operational amplifier 102, an embedded processor 103, a memory 104, a power amplifier 105, and a speaker 106, wherein the output of the microphone 101 is connected to the input of the operational amplifier 102, the output of the operational amplifier 102 is connected to the input of the embedded processor 103, the embedded processor 103 is connected to the memory 104, the output of the embedded processor 103 is connected to the input of the power amplifier 105, and the output of the power amplifier 105 is connected to the input of the speaker 106. The convolutional-neural-network-based pet bark-stopping device is an embedded Android mobile bark-stopping device.
As shown in Fig. 1, in the pet bark-stopping device provided by the present invention, the microphone 101 captures the pet's barking, which after system preprocessing is output to the operational amplifier 102. After amplification by the operational amplifier 102, the barking signal is fed into the recognition model on the embedded processor 103. Once the model recognizes the correct pet identity, the owner's bark-stopping voice, pre-recorded in the memory 104, is played through the power amplifier 105 and the speaker 106; the pet stops barking upon hearing the owner's voice, thereby achieving the purpose of bark stopping.
The convolutional neural network is currently the most studied and most successfully applied model in deep learning; it is widely used in images, speech, video, and many other fields, and has made major contributions to artificial intelligence. The present invention breaks away from traditional technical means and applies a convolutional neural network to bark-stopping sound recognition. Divided by function, the method mainly comprises training the pet-barking model, pet identification, and triggering of the bark-stopping sound (generally a stop-barking voice pre-recorded by the pet's owner). Model training and model recognition are performed on a computer and on the embedded Android mobile bark-stopping device, respectively.
As shown in Fig. 2, a convolutional-neural-network-based method for stopping pet barking comprises the following steps:
1. Training the pet-barking model
(1) Prepare training samples
Select 20 segments of pet (e.g. pet dog) barking as model training data; each segment is roughly 30 seconds long.
(2) Preprocessing
To extract the useful barking signal and reduce the effect of environmental noise, the signal must be preprocessed. The preprocessing methods used in this scheme include pre-emphasis, framing with windowing, and bark-sound endpoint detection.
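The preprocessing chain described above (pre-emphasis, framing with windowing, endpoint detection) can be sketched as follows. The pre-emphasis coefficient, frame length, hop size, and energy threshold are illustrative assumptions; the patent does not specify concrete values.

```python
import numpy as np

def preprocess(signal, alpha=0.97, frame_len=400, hop=160, energy_thresh=1e-4):
    """Pre-emphasis, framing with a Hamming window, and a simple
    short-time-energy endpoint detector. All parameter values are
    illustrative; the patent does not give them."""
    # Pre-emphasis: y[n] = x[n] - alpha * x[n-1] boosts high frequencies.
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Split into overlapping frames and apply a Hamming window to each.
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    window = np.hamming(frame_len)
    frames = np.stack([emphasized[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Endpoint detection: keep frames whose mean energy exceeds the threshold.
    energy = (frames ** 2).mean(axis=1)
    voiced = frames[energy > energy_thresh]
    return frames, voiced
```

A real deployment would tune the energy threshold to the microphone gain and ambient noise floor, but the structure of the pipeline stays the same.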
(3) Compute the spectrogram
Given the information carried by pet barking sounds, the spectrogram is used here as the input to the convolutional neural network. The spectrogram contains a large amount of information about the characteristics of the barking sound, combining the advantages of the frequency spectrum and the time-domain waveform.
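Such a spectrogram can be computed with a short-time Fourier transform over windowed frames. A minimal sketch, with frame and hop sizes assumed (a 16 kHz sample rate and 25 ms frames are typical but not stated in the patent):

```python
import numpy as np

def spectrogram(signal, frame_len=400, hop=160):
    """Log-magnitude spectrogram via a short-time Fourier transform.
    Rows are time frames, columns are frequency bins; frame_len=400
    and hop=160 assume a 16 kHz sample rate (an illustrative choice)."""
    window = np.hamming(frame_len)
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    spec = np.stack([
        np.abs(np.fft.rfft(signal[i * hop : i * hop + frame_len] * window))
        for i in range(n_frames)
    ])
    # Log compression keeps the large dynamic range of audio manageable
    # before the matrix is fed to the CNN as a 2-D "image".
    return np.log(spec + 1e-10)
```

The resulting 2-D matrix is exactly the kind of input the convolutional layer below operates on.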
(4) Convolutional neural network
A typical convolutional neural network structure is adopted here, consisting of a convolutional layer, a downsampling layer, and a fully connected layer. The convolutional layer, as the first layer of the network, convolves the two-dimensional spectrogram signal directly. The convolution kernel filter is a 5x5 template, and the outputs obtained from the different convolution kernels form the feature maps. Each kernel filter shares the same parameters, including the same weight matrix and bias term. The convolutional-layer mathematical model used here is:
y = f(x * k + b)
where x is the input signal, k is the convolution kernel, * denotes the convolution operation, b is the bias term, f is the sigmoid function, and y is the output feature map.
To increase the robustness of the system and reduce computational complexity, the input is downsampled; the downsampling layer is placed after the convolutional layer. The downsampling filter is a 2x2 template, and the sampling strategy takes the maximum of the 4 corresponding pixels. The fully connected layer passes the scores to a classifier (e.g. a softmax classifier).
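The convolution-plus-pooling stage described above can be sketched in plain NumPy. The sizes follow the text (one 5x5 kernel with shared weights and bias, sigmoid activation, 2x2 max pooling); note that, as in most CNN libraries, the "convolution" is computed as a sliding cross-correlation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_valid(x, k, b):
    """y = f(x * k + b): valid 2-D convolution of the spectrogram x with
    one 5x5 kernel k sharing the same weights and bias b everywhere,
    followed by a sigmoid activation."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k) + b
    return sigmoid(out)

def maxpool2x2(x):
    """2x2 max pooling: keep the maximum of each 4-pixel block."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

In practice a layer holds many such kernels, each producing one feature map, and the pooled maps are flattened into the fully connected layer.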
(5) Train the model
The model is trained on a PC, adjusting the parameters through forward and backward propagation until the trained model is optimal. So that the model trained on the PC can also be deployed well on an embedded mobile terminal with relatively scarce computing resources, the weights are quantized to slim down the model, and an Android-compatible APK model file is generated.
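The patent does not specify the weight-quantization scheme, but a common choice for this kind of model slimming is symmetric 8-bit quantization: each float32 tensor is stored as int8 values plus one float scale, shrinking the stored weights roughly 4x. A purely illustrative sketch:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit quantization: map float weights to int8 with one
    per-tensor scale. An illustrative scheme; the patent does not name one."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the embedded side."""
    return q.astype(np.float32) * scale
```

The reconstruction error is bounded by half a quantization step, which is usually small enough that the classifier's scores barely move.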
2. Pet identification
The trained APK model is deployed in the embedded Android mobile bark-stopping device. The microphone 101 of the device captures the pet's barking signal and the spectrogram features are extracted; the spectrogram is fed into the convolutional neural network model to obtain a score probability. If this value exceeds the set threshold, the pet under detection is confirmed; otherwise it is not.
3. Triggering the bark-stopping sound
After the pet's identity has been confirmed, it is further checked whether the amplitude of the pet's barking exceeds a set threshold; if so, the bark-stopping sound stored in the memory 104 is played through the speaker 106.
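The two-stage trigger (identity score first, then bark amplitude) might be sketched as follows; both threshold values are illustrative assumptions, not numbers from the patent:

```python
import numpy as np

def bark_trigger(score, frame, score_thresh=0.8, amp_thresh=0.3):
    """Stage 1: the CNN score must confirm the pet's identity.
    Stage 2: the bark's peak amplitude must also exceed its threshold
    before the stored owner's voice is played through the speaker.
    Both thresholds are illustrative values."""
    if score <= score_thresh:
        return "unconfirmed"       # identity not recognised -> do nothing
    if np.abs(frame).max() <= amp_thresh:
        return "confirmed_quiet"   # identity OK, bark too quiet to act on
    return "play_stop_sound"       # trigger playback of the stored voice
```

Separating the two checks keeps the deterrent from firing on loud but unrecognised sounds (other dogs, traffic) and on quiet vocalisations from the recognised pet.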
The convolutional-neural-network-based pet bark-stopping device and method provided by the present invention originally apply a convolutional neural network to pet bark stopping, improving the flexibility and interference resistance of bark stopping without harming the pet; in addition, the present invention can identify the pet.
The convolutional-neural-network-based pet bark-stopping device and method provided by the present invention have the following features:
1. Human speaker-identification technology is applied to pet-bark identification.
2. A convolutional neural network serves as the model for both training and recognition.
3. Spectrogram features of the pet's barking are extracted as the input to the convolutional neural network.
The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention should not be considered limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions may be made without departing from the concept of the present invention, all of which shall be deemed to fall within the protection scope of the present invention.
Claims (7)
- 1. A convolutional-neural-network-based pet bark-stopping device, characterized by comprising a microphone, an operational amplifier, an embedded processor, a memory, a power amplifier, and a speaker, wherein the output terminal of the microphone is connected to the input terminal of the operational amplifier, the output terminal of the operational amplifier is connected to the input terminal of the embedded processor, the embedded processor is connected to the memory, the output terminal of the embedded processor is connected to the input terminal of the power amplifier, and the output terminal of the power amplifier is connected to the input terminal of the speaker.
- 2. A convolutional-neural-network-based method for stopping pet barking, characterized by comprising the following steps: S1, preparing training samples by selecting several segments of pet barking sounds as training data for the model; S2, preprocessing the raw pet barking sounds; S3, computing the spectrogram; S4, feeding the spectrogram into the convolutional neural network; S5, training the model; S6, identifying the pet; S7, triggering and playing the bark-stopping sound.
- 3. The method of claim 2, characterized in that the preprocessing in step S2 includes pre-emphasis, framing with windowing, and bark-sound endpoint detection.
- 4. The method of claim 2, characterized in that the convolutional neural network in step S4 includes a convolutional layer, a downsampling layer, and a fully connected layer; the convolutional layer, as the first layer of the network, convolves the two-dimensional spectrogram signal directly; the convolution kernel filter is a 5x5 template, and the results produced by the different kernel filters constitute the feature maps; each kernel filter shares the same parameters, including the same weight matrix and bias term, with the convolutional-layer mathematical model y = f(x * k + b), where x is the input signal, k is the convolution kernel, * is the convolution operation, b is the bias term, f is the sigmoid function, and y is the output feature map; the downsampling layer is placed after the convolutional layer, the downsampling filter is a 2x2 template whose sampling strategy takes the maximum of the 4 corresponding pixels, and the fully connected layer passes the scores to a classifier.
- 5. The method of claim 2, characterized in that in step S5, after model training and optimization, an Android-compatible APK model file is generated.
- 6. The method of claim 5, characterized in that in step S6, the trained APK model file is deployed in the convolutional-neural-network-based pet bark-stopping device of claim 1; the device's microphone captures the pet's barking signal, the spectrogram features are extracted, and the spectrogram is used as the input to the convolutional neural network model to obtain a score probability; if the score exceeds the threshold, the identity of the pet under detection is confirmed; otherwise it is not.
- 7. The method of claim 6, characterized in that in step S7, after the pet's identity has been determined, it is further checked whether the amplitude of the pet's barking exceeds the set threshold; if so, the bark-stopping sound stored in the memory is played back through the speaker.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711407047.XA CN108157219A (en) | 2017-12-22 | 2017-12-22 | A kind of pet based on convolutional neural networks stops apparatus and method of barking |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711407047.XA CN108157219A (en) | 2017-12-22 | 2017-12-22 | A kind of pet based on convolutional neural networks stops apparatus and method of barking |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108157219A true CN108157219A (en) | 2018-06-15 |
Family
ID=62523500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711407047.XA Pending CN108157219A (en) | 2017-12-22 | 2017-12-22 | A kind of pet based on convolutional neural networks stops apparatus and method of barking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108157219A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5927233A (en) * | 1998-03-10 | 1999-07-27 | Radio Systems Corporation | Bark control system for pet dogs |
CN102231117A (en) * | 2011-07-08 | 2011-11-02 | 盛乐信息技术(上海)有限公司 | Software installment method and system for embedded platform |
CN102499106A (en) * | 2011-09-29 | 2012-06-20 | 鲁东大学 | Bark stop device with voice recognition function |
CN104794527A (en) * | 2014-01-20 | 2015-07-22 | 富士通株式会社 | Method and equipment for constructing classification model based on convolutional neural network |
CN106454634A (en) * | 2016-11-18 | 2017-02-22 | 深圳市航天华拓科技有限公司 | Environment sound detection-based barking stopping apparatus and barking stopping method |
CN106782501A (en) * | 2016-12-28 | 2017-05-31 | 百度在线网络技术(北京)有限公司 | Speech Feature Extraction and device based on artificial intelligence |
CN106782504A (en) * | 2016-12-29 | 2017-05-31 | 百度在线网络技术(北京)有限公司 | Audio recognition method and device |
CN106821337A (en) * | 2017-04-13 | 2017-06-13 | 南京理工大学 | A kind of sound of snoring source title method for having a supervision |
- 2017-12-22: CN application CN201711407047.XA filed; published as CN108157219A (status: Pending)
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110322894A (en) * | 2019-06-27 | 2019-10-11 | 电子科技大学 | A kind of waveform diagram generation and giant panda detection method based on sound |
CN110322894B (en) * | 2019-06-27 | 2022-02-11 | 电子科技大学 | Sound-based oscillogram generation and panda detection method |
CN111866192A (en) * | 2020-09-24 | 2020-10-30 | 汉桑(南京)科技有限公司 | Pet interaction method, system and device based on pet ball and storage medium |
CN115104548A (en) * | 2022-07-11 | 2022-09-27 | 深圳市前海远为科技有限公司 | Pet behavior adjustment and human-pet interaction method and device based on multimedia information technology |
CN115104548B (en) * | 2022-07-11 | 2022-12-27 | 深圳市前海远为科技有限公司 | Pet behavior adjustment and human-pet interaction method and device based on multimedia information technology |
CN118435880A (en) * | 2024-05-06 | 2024-08-06 | 深圳市安牛智能创新有限公司 | Dynamic adaptive virtual reality dog training method and system, storage medium |
CN118435880B (en) * | 2024-05-06 | 2024-11-26 | 深圳市安牛智能创新有限公司 | Dynamic adaptive virtual reality dog training method and system, storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110782878B (en) | Attention mechanism-based multi-scale audio scene recognition method | |
CN1478269B (en) | Device and method for judging dog's feeling from cry vocal character analysis | |
US10692480B2 (en) | System and method of reading environment sound enhancement based on image processing and semantic analysis | |
Cai et al. | Sensor network for the monitoring of ecosystem: Bird species recognition | |
CN108157219A (en) | A kind of pet based on convolutional neural networks stops apparatus and method of barking | |
CN105118498B (en) | The training method and device of phonetic synthesis model | |
CN106782504B (en) | Audio recognition method and device | |
CN107492382B (en) | Voiceprint information extraction method and device based on neural network | |
CN109754812A (en) | A voiceprint authentication method for anti-recording attack detection based on convolutional neural network | |
KR102605736B1 (en) | Method and apparatus of sound event detecting robust for frequency change | |
US10178228B2 (en) | Method and apparatus for classifying telephone dialing test audio based on artificial intelligence | |
CN110047510A (en) | Audio identification methods, device, computer equipment and storage medium | |
CN107785029A (en) | Target voice detection method and device | |
CN107609462A (en) | Measurement information generation to be checked and biopsy method, device, equipment and storage medium | |
Aravind et al. | Audio spoofing verification using deep convolutional neural networks by transfer learning | |
CN110047512A (en) | A kind of ambient sound classification method, system and relevant apparatus | |
CN106778559A (en) | The method and device of In vivo detection | |
JP2024547129A (en) | Model training and tone conversion method, apparatus, device and medium | |
Padovese et al. | Data augmentation for the classification of North Atlantic right whales upcalls | |
KR102508550B1 (en) | Apparatus and method for detecting music section | |
CN107369451A (en) | A kind of birds sound identification method of the phenology research of auxiliary avian reproduction phase | |
Zhang et al. | A novel insect sound recognition algorithm based on MFCC and CNN | |
KR20210131067A (en) | Method and appratus for training acoustic scene recognition model and method and appratus for reconition of acoustic scene using acoustic scene recognition model | |
CN106875944A (en) | A kind of system of Voice command home intelligent terminal | |
Ansar et al. | An EfficientNet-based ensemble for bird-call recognition with enhanced noise reduction |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180615