WO2020006935A1 - Animal voiceprint feature extraction method, device, and computer-readable storage medium - Google Patents

Animal voiceprint feature extraction method, device, and computer-readable storage medium

Info

Publication number
WO2020006935A1
WO2020006935A1, PCT/CN2018/111658, CN2018111658W
Authority
WO
WIPO (PCT)
Prior art keywords
animal
voiceprint
voice
animal voice
feature vector
Prior art date
Application number
PCT/CN2018/111658
Other languages
English (en)
French (fr)
Inventor
王健宗
蔡元哲
程宁
肖京
Original Assignee
Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Publication of WO2020006935A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/02 Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G10L17/04 Training, enrolment or model building
    • G10L17/18 Artificial neural networks; Connectionist approaches
    • G10L17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices

Definitions

  • The present application relates to the technical field of animal identification, and in particular to an animal voiceprint feature extraction method and device and a non-volatile computer-readable storage medium.
  • Animal voiceprint features can carry animal identity information and can therefore be used to determine an animal's identity.
  • Voiceprint recognition is a type of biometric recognition: different species and different individuals have unique voiceprint information. Humans can tell which kind of animal made a sound, but it is difficult for the human ear to directly distinguish the sounds made by different individuals of the same species.
  • In the related art, technicians process collected animal voice data into a voice database and establish an animal voiceprint feature database, which is used to record animal identity information for later identification.
  • The animal voice data is labeled to associate it with an animal identity, so that when identity information needs to be verified, the animal's identity is determined by comparing the voice data to be identified with the animal voice data in the animal voiceprint feature database.
  • the existing animal voiceprint feature extraction method usually converts the animal sound signal into a spectrogram.
  • the spectrogram is a graphical representation of the sound signal.
  • the amplitude of the sound at each frequency point is distinguished by color.
  • Animal voiceprint features are then obtained from the spectrogram through various processing methods.
  • However, the accuracy of extracting animal voiceprint features by analyzing the spectrogram is low, which limits the accuracy of animal voiceprint feature extraction; moreover, when the environment is noisy, the mixing of various sounds further degrades the voiceprint extraction.
  • The embodiments of the present application provide an animal voiceprint feature extraction method and device and a non-volatile computer-readable storage medium, which solve the problem in the related art that animal voiceprint features cannot be extracted accurately.
  • an animal voiceprint feature extraction method includes:
  • acquiring animal voice data; extracting an animal voice feature vector from the animal voice data; and inputting the animal voice feature vector into a convolutional neural network model for training, to obtain an animal voiceprint feature for identifying an animal identity.
  • an animal voiceprint feature extraction device includes:
  • an acquisition unit, configured to acquire animal voice data;
  • an extraction unit, configured to extract an animal voice feature vector from the animal voice data; and
  • a training unit, configured to input the animal voice feature vector into a convolutional neural network model for training, to obtain an animal voiceprint feature for identifying an animal identity.
  • a non-volatile computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the following steps:
  • acquiring animal voice data; extracting an animal voice feature vector from the animal voice data; and inputting the animal voice feature vector into a convolutional neural network model for training, to obtain an animal voiceprint feature for identifying an animal identity.
  • a computer device including a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor.
  • When the processor executes the computer-readable instructions, the following steps are implemented:
  • acquiring animal voice data; extracting an animal voice feature vector from the animal voice data; and inputting the animal voice feature vector into a convolutional neural network model for training, to obtain an animal voiceprint feature for identifying an animal identity.
  • In the embodiments of the present application, animal voice feature vectors are extracted from animal voice data. Because animal voice feature vectors are simple to compute and discriminate well, they are input into a convolutional neural network model for training, from which the animal voiceprint features are then extracted. Compared with the prior-art approach of extracting animal voiceprint features from spectrograms, the embodiments of the present application repeatedly train the animal voice feature vectors through a convolutional neural network model, thereby extracting the animal voiceprint features accurately and improving the effect of animal identification.
  • FIG. 1 is a flowchart of an animal voiceprint feature extraction method according to an embodiment of the present application.
  • FIG. 2 is a flowchart of another animal voiceprint feature extraction method according to an embodiment of the present application.
  • FIG. 3 is a structural block diagram of an animal voiceprint feature extraction device according to an embodiment of the present application.
  • FIG. 4 is a structural block diagram of another animal voiceprint feature extraction device according to an embodiment of the present application.
  • FIG. 5 is a block diagram of an animal voiceprint feature extraction device 400 according to an embodiment of the present application.
  • FIG. 1 is a flowchart of an animal voiceprint feature extraction method according to an embodiment of the present application. As shown in FIG. 1, the process includes the following steps:
  • Step S101: Acquire animal voice data.
  • The animal voice data is a recording of the sounds an animal makes, i.e., the sound signals unique to animal communication.
  • Animal sounds carry information: for example, bees transmit information through the sound of their wings, dolphins can produce melodic notes much as humans do, and pig calls can also convey a great deal of information, such as the health and the identity of the pig.
  • the animal voice data is audio data collected from an animal.
  • The animal voice data may be obtained by mounting a collection device on the animal or by installing a collection device where the animal lives; the embodiments of the present application do not limit the collection method.
  • a wearable sensor is usually installed on the neck of the animal to obtain animal voice data.
  • Step S102: Extract an animal voice feature vector from the animal voice data.
  • Animal voice feature vectors can be divided into two categories according to the stability of their parameters.
  • The first category reflects the animal's inherent physiology (such as its vocal tract structure) and is represented mainly in the spectral structure of the voice.
  • It includes spectral envelope information reflecting vocal tract resonance, and fine spectral structure reflecting properties of the sound source, such as vocal cord vibration.
  • Representative feature parameters are the pitch (fundamental frequency) and the formants; such features are not easy to imitate but are easily affected by health status. The second category reflects the animal's vocal tract movements, that is, its manner and habits of vocalization, and is represented mainly in how the spectral structure of the voice changes over time.
  • Representative feature parameters include the cepstral coefficients together with the dynamic characteristics of the feature parameters; such features are relatively stable and easy to imitate.
  • The animal voice feature vector contains the unique voice information in the animal voice data; extracting it is the preparation stage for the subsequent animal voiceprint feature extraction.
  • The animal voice feature vector is obtained from the animal voice data by extracting the information useful for animal identification and removing irrelevant, redundant information.
  • Step S103: Input the animal voice feature vector into a convolutional neural network model for training, and obtain an animal voiceprint feature for identifying an animal identity.
  • The convolutional neural network model here is a network structure that can extract animal voiceprint features by repeatedly training on animal voice feature vectors; the network structure learns the correct input-output relationship from the animal voice feature vectors.
  • A specific convolutional neural network model can be built from convolutional layers, fully connected layers, and pooling layers.
  • The convolutional layers act as the hidden layers of the network and can be stacked to extract deeper-layered animal voiceprint features. To reduce the number of parameters and the amount of computation, pooling layers are usually inserted at intervals between successive convolutional layers. The fully connected layers are similar to the convolutional layers in that their neurons connect to the output of a local region of the previous layer; to keep the number of output feature vectors manageable, two fully connected layers can be used to integrate the feature vectors produced after the animal voice feature vector has passed through several convolutional layers.
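The convolutional, pooling, and fully connected structure described above can be sketched in PyTorch. The layer counts, channel sizes, and the 415-dimensional input (5 spliced frames of 83-dimensional features, as mentioned elsewhere in this application) are illustrative assumptions, not the exact architecture claimed:

```python
import torch
import torch.nn as nn

class VoiceprintCNN(nn.Module):
    """Illustrative conv -> pool -> fully connected stack; all sizes are assumptions."""
    def __init__(self, in_dim=415, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),   # extract local voiceprint features
            nn.ReLU(),
            nn.MaxPool1d(2),                               # pooling reduces parameters/computation
            nn.Conv1d(16, 32, kernel_size=5, padding=2),   # deeper-layered features
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.fc = nn.Sequential(                           # two fully connected layers
            nn.Linear(32 * (in_dim // 4), 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),                     # voiceprint embedding
        )

    def forward(self, x):                                  # x: (batch, in_dim)
        h = self.conv(x.unsqueeze(1))                      # add a channel dimension
        return self.fc(h.flatten(1))

model = VoiceprintCNN()
out = model(torch.randn(2, 415))
print(out.shape)  # torch.Size([2, 128])
```

In practice such an embedding would be trained with an identity-classification or metric-learning loss; the forward pass above only illustrates the layer structure.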
  • Animal voiceprint information is the unique sound characteristic that can identify an animal: a sound-wave spectrum graphic, displayed by electroacoustic instruments, that carries vocal information. Although the physiological structure of animals' vocal organs is broadly the same, the size and shape of the organs each animal actually uses when producing sound differ greatly, and these differing vocal tract characteristics make the voiceprint unique and stable over the long term.
  • In the embodiments of the present application, animal voice feature vectors are extracted from animal voice data. Because animal voice feature vectors are simple to compute and discriminate well, they are input into a convolutional neural network model for training, from which the animal voiceprint features are then extracted. Compared with the prior-art approach of extracting animal voiceprint features from spectrograms, the embodiments of the present application repeatedly train the animal voice feature vectors through a convolutional neural network model, thereby extracting the animal voiceprint features accurately and improving the effect of animal identification.
  • FIG. 2 is a flowchart of another animal voiceprint feature extraction method according to an embodiment of the present application. As shown in FIG. 2, the method includes the following steps:
  • Step S201 Acquire animal voice data.
  • The specific manner of acquiring animal voice data here is the same as in step S101 and is not repeated here.
  • The length of the voice-data acquisition is controlled by setting a preset time period, which facilitates the subsequent processing of the animal voice data.
  • Step S202: Pre-process the animal voice data to obtain processed animal voice data.
  • The preprocessing may include sampling and quantization, pre-emphasis, framing, and windowing.
  • Sampling converts the animal speech sequence, which is continuous in both time and amplitude, into a signal that is discrete in time but still continuous in amplitude; quantization then represents the amplitude with prescribed discrete values.
  • Pre-emphasis compensates for the high-frequency components of the animal speech sequence that are suppressed by the sound-production system (the effects of the vocal cords and lips), highlighting the high-frequency formants.
  • Framing divides the animal voice data into frames: a set of N sampling points is grouped into one observation unit, i.e., one frame.
  • Windowing suppresses discontinuities at the two ends of each frame, avoiding artifacts from the adjacent frames in subsequent analysis.
  • Step S203: Frame the processed animal voice data at a preset time interval to obtain a multi-frame animal voice sequence.
  • Animal voice data is not a stationary audio signal. After framing, a multi-frame animal voice sequence is obtained, and each frame of the sequence can then be treated as a stationary audio signal, which facilitates subsequent processing of the animal speech sequences.
  • The preset time interval is usually set to 200-400 ms; of course, it is not limited to this range and is determined by the actual situation.
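The preprocessing and framing steps above (pre-emphasis, framing, windowing) can be sketched with NumPy. The frame length, frame shift, and pre-emphasis coefficient below are typical values assumed for illustration, not values specified by this application:

```python
import numpy as np

def preprocess(signal, sample_rate, frame_ms=25, shift_ms=10, alpha=0.97):
    """Pre-emphasis, framing, and Hamming windowing of a 1-D audio signal."""
    # Pre-emphasis: boost high frequencies suppressed during sound production
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    frame_len = int(sample_rate * frame_ms / 1000)
    frame_shift = int(sample_rate * shift_ms / 1000)
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // frame_shift)
    # Framing: split into short, quasi-stationary observation units
    frames = np.stack([emphasized[i * frame_shift : i * frame_shift + frame_len]
                       for i in range(n_frames)])
    # Windowing: taper frame edges to avoid discontinuities between adjacent frames
    return frames * np.hamming(frame_len)

frames = preprocess(np.random.randn(16000), 16000)   # 1 s of audio at 16 kHz
print(frames.shape)  # (98, 400)
```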
  • Step S204: Extract an animal voice feature vector from the animal voice data.
  • The animal voice feature vector consists of basic features that reflect individual animal information. These basic features must distinguish different vocalizing animals accurately and effectively, and they should remain stable for the same individual.
  • Different animal voice feature vectors have different feature parameters, and different feature parameters have different physical meanings.
  • The pitch (fundamental frequency) and the formants are characteristic features of an animal's voice.
  • Linear predictive coding (LPC), based on an all-pole model, can accurately reflect the spectral amplitude of an animal speech sequence, while the cepstral coefficients reflect the resonance characteristics of the animal's vocal tract, separating the smaller peak information from the more important vocal-tract shape information.
  • Different types of animal voice feature vectors have different extraction methods.
  • When the animal voice feature vector is used to reflect how the spectral structure of the animal voice changes over time, it can be extracted from the animal voice data as follows. First, a Fourier transform is applied to each frame of the animal speech sequence to obtain its spectrum, and the squared magnitude of the spectrum gives the power spectrum of the animal speech sequence. The power spectrum is then passed through a preset filter to obtain the logarithmic energies of the animal speech sequence. Finally, a discrete cosine transform is applied to the logarithmic energies to obtain the animal voice feature vector.
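The pipeline just described (Fourier transform, power spectrum, filtering, logarithmic energy, discrete cosine transform) can be sketched as follows. The mel filter-bank construction and the FFT and filter sizes are illustrative assumptions; the application itself only specifies "a preset filter":

```python
import numpy as np
from scipy.fftpack import dct

def mfcc_from_frames(frames, sample_rate=16000, n_fft=512, n_filters=26, n_ceps=13):
    """Per-frame cepstral features: power spectrum -> mel filter bank -> log energy -> DCT."""
    # Spectrum via FFT; squared magnitude gives the power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular filters spaced evenly on the mel scale (an assumed filter design)
    low, high = 0.0, 2595 * np.log10(1 + sample_rate / 2 / 700)
    mel_pts = np.linspace(low, high, n_filters + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope
    # Logarithmic filter-bank energies, then DCT yields the cepstral coefficients
    log_energy = np.log(power @ fbank.T + 1e-10)
    return dct(log_energy, type=2, axis=1, norm='ortho')[:, :n_ceps]

feats = mfcc_from_frames(np.random.randn(98, 400))
print(feats.shape)  # (98, 13)
```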
  • When the animal voice feature vector is used to reflect the characteristic information of the animal voice in the spectral structure itself, it can be extracted from the animal voice data as follows. First, time-domain and frequency-domain analyses are performed on each frame of the animal voice sequence; then the time-domain and frequency-domain feature parameters of each frame are calculated; finally, the animal voice feature vector is obtained from those time-domain and frequency-domain feature parameters.
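A minimal sketch of per-frame time-domain and frequency-domain feature parameters. The particular parameters chosen here (short-time energy, zero-crossing rate, spectral centroid) are illustrative assumptions, since the application does not name specific parameters:

```python
import numpy as np

def frame_features(frames, sample_rate=16000):
    """Per-frame time-domain (energy, zero-crossing rate) and
    frequency-domain (spectral centroid) parameters."""
    energy = np.sum(frames ** 2, axis=1)                                  # time domain: short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)   # time domain: zero-crossing rate
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1 / sample_rate)
    centroid = (spectrum @ freqs) / (np.sum(spectrum, axis=1) + 1e-10)    # frequency domain: spectral centroid
    return np.column_stack([energy, zcr, centroid])                       # one feature vector per frame

feats = frame_features(np.random.randn(98, 400))
print(feats.shape)  # (98, 3)
```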
  • Mel-frequency cepstral coefficient (MFCC) features are among the most widely used voice features at present. They are simple to compute, discriminate well, and to some extent simulate the processing characteristics of the animal ear, giving good recognition performance.
  • This application therefore selects MFCC features as the animal voice feature vector extracted from the animal voice data. Since MFCC features also reflect how the spectral structure of the animal voice changes over time, their extraction follows the steps described above and is not repeated here.
  • Step S205: Input the animal voice feature vector into a convolutional neural network model for training, and obtain an animal voiceprint feature for identifying the animal identity.
  • the convolutional neural network model is a multi-layered network model.
  • The convolutional layers of the convolutional neural network model extract local voiceprint information from the animal voice feature vector; these layers correspond to the hidden layers of the network, and the local voiceprint information reflects local characteristics of the animal voiceprint. Multiple convolutional layers can be stacked, with the extracted local voiceprint information fed into the next convolutional layer to extract deeper local voiceprint information. The extracted local voiceprint information is then connected through the fully connected layers of the model to obtain multidimensional local voiceprint information. Finally, to reduce the output size and avoid overfitting, the multidimensional local voiceprint information is reduced in dimension by the pooling layer of the model, yielding the animal voiceprint features.
  • The animal voice feature vectors input to the convolutional layers of the convolutional neural network model are spliced across frames to strengthen the feature relationships between preceding and following frames. For example, at the first convolutional layer, the animal voice feature vectors of 5 neighbouring frames are spliced together; at the second convolutional layer, 9 neighbouring frames are spliced.
  • For example, the number of layers in the neural network model is set to 15, with the first 11 layers alternating between convolutional and fully connected layers (the odd-numbered layers being convolutional).
  • The spliced animal voice feature vectors of 5 neighbouring frames serve as the input parameters of the first layer of the neural network model: with 83-dimensional animal voice feature vectors per frame, a 415-dimensional animal voice feature vector is obtained as the input parameter, local voiceprint information is extracted from it, and the extracted local voiceprint information is output.
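The frame splicing described above (5 frames of 83-dimensional features giving a 415-dimensional input) can be sketched as follows; the symmetric two-frames-each-side context window and the edge padding by repetition are assumptions:

```python
import numpy as np

def splice(features, context=2):
    """Stack each frame with `context` frames on either side
    (5 frames total for context=2)."""
    n, d = features.shape
    # Pad the edges by repeating the first/last frame so every frame has full context
    padded = np.concatenate([np.repeat(features[:1], context, 0),
                             features,
                             np.repeat(features[-1:], context, 0)])
    return np.stack([padded[i:i + 2 * context + 1].reshape(-1) for i in range(n)])

feats = np.random.randn(100, 83)   # 83-dimensional feature vector per frame
spliced = splice(feats)            # 5 x 83 = 415-dimensional input vectors
print(spliced.shape)  # (100, 415)
```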
  • The second layer is a fully connected layer that connects the extracted local voiceprint information.
  • The third layer is a convolutional layer; similarly, the local voiceprint information output by the second layer serves as the input parameter of the third layer, from which local voiceprint information is extracted and output, and so on until the 11th layer (a convolutional layer) outputs its local voiceprint information.
  • The 12th layer is a pooling layer, which integrates the local voiceprint information output by the first 11 layers and computes its mean and variance.
  • The 13th to 15th layers are fully connected layers, which reduce the dimensionality of the integrated voiceprint features and output a one-dimensional animal voiceprint feature, completing the extraction of the animal voiceprint features.
  • The animal voiceprint features can then be refined so that the output retains, to the greatest extent possible, the feature vectors that best represent the animal's identity, improving the accuracy of the animal voiceprint features.
  • Step S206: Establish an animal voiceprint feature database based on the extracted animal voiceprint features, where the animal voiceprint features carry unique animal identity information.
  • animal voiceprint features are equivalent to animal-specific identification information
  • different animal voiceprint features carry animal identity information.
  • Animal voiceprint samples are created from the extracted animal voiceprint features to establish the animal voiceprint feature database.
  • The voiceprint feature database is equivalent to a database storing the voiceprint features of different animals; each animal's voiceprint information carries an identity marker, such as a number or an alphanumeric code, which is not limited here.
  • The voiceprint feature database can be classified and sorted in advance, for example by animal species, by the animals' region, or by their age, so that animal identification samples can be screened quickly and identification time saved.
  • Step S207: When an animal identity verification request is received, compare the animal voiceprint features of the animal to be identified with the animal voiceprint features in the animal voiceprint feature database to determine the animal identity information.
  • The voiceprint features of the animal whose identity is to be verified are extracted through steps S201 to S205 above, and are then compared one by one with the animal voiceprint features in the animal voiceprint feature database to determine the animal's identity information.
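The one-by-one comparison against the voiceprint feature database can be sketched with cosine similarity; the similarity measure, the threshold value, and the database layout are illustrative assumptions, since the application does not specify a comparison metric:

```python
import numpy as np

def identify(query, database, threshold=0.8):
    """Compare a query voiceprint against enrolled voiceprints one by one
    using cosine similarity; return the best match above the threshold."""
    names = list(database)
    vecs = np.stack([database[n] for n in names])
    sims = vecs @ query / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(query))
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return names[best], float(sims[best])   # identity determined
    return None, float(sims[best])              # no enrolled animal matched

# Toy database: identity marker -> enrolled voiceprint feature vector
db = {"pig_001": np.array([0.9, 0.1, 0.2]), "pig_002": np.array([0.1, 0.8, 0.3])}
name, score = identify(np.array([0.88, 0.12, 0.2]), db)
print(name)  # pig_001
```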
  • In the embodiments of the present application, animal voice feature vectors are extracted from animal voice data. Because animal voice feature vectors are simple to compute and discriminate well, they are input into a convolutional neural network model for training, from which the animal voiceprint features are then extracted. Compared with the prior-art approach of extracting animal voiceprint features from spectrograms, the embodiments of the present application repeatedly train the animal voice feature vectors through a convolutional neural network model, thereby extracting the animal voiceprint features accurately and improving the effect of animal identification.
  • FIG. 3 is a structural block diagram of an animal voiceprint feature extraction device according to an embodiment of the present application.
  • the apparatus includes an acquisition unit 31, an extraction unit 32, and a training unit 33.
  • the obtaining unit 31 may be used to obtain animal voice data
  • An extraction unit 32 which may be used to extract an animal voice feature vector from the animal voice data
  • the training unit 33 may be used to input animal voice feature vectors into a convolutional neural network model for training, and obtain animal voiceprint features for identifying animal identities.
  • In the embodiments of the present application, animal voice feature vectors are extracted from animal voice data. Because animal voice feature vectors are simple to compute and discriminate well, they are input into a convolutional neural network model for training, from which the animal voiceprint features are then extracted. Compared with the prior-art approach of extracting animal voiceprint features from spectrograms, the embodiments of the present application repeatedly train the animal voice feature vectors through a convolutional neural network model, thereby extracting the animal voiceprint features accurately and improving the effect of animal identification.
  • FIG. 4 is a schematic structural diagram of another animal voiceprint feature extraction device according to an embodiment of the present application. As shown in FIG. 4, the device further includes:
  • the pre-processing unit 34 may be configured to pre-process the animal voice data after obtaining the animal voice data to obtain the processed animal voice data;
  • the frame framing unit 35 may be configured to perform framing operations on the processed animal voice data according to a preset time interval to obtain a multi-frame animal voice sequence;
  • The establishing unit 36 may be used to establish, after the animal voice feature vectors have been input into the convolutional neural network model for training and the animal voiceprint features for identifying animal identities have been obtained, an animal voiceprint feature database based on the extracted animal voiceprint features, where the animal voiceprint features carry unique animal identity information;
  • the comparison unit 37 may be configured to compare an animal voiceprint feature of an animal to be identified with an animal voiceprint feature in an animal voiceprint feature database when an animal identity verification request is received, to determine animal identity information.
  • the extraction unit 32 includes:
  • the first extraction module 321 may be configured to perform a Fourier transform on the animal voice sequence of each frame to obtain a frequency spectrum of the animal voice sequence of each frame, and modulo square the frequency spectrum of the animal voice sequence of each frame to obtain a power spectrum of the animal voice sequence;
  • the second extraction module 322 may be configured to filter the power spectrum of the animal voice sequence through a preset filter to obtain the logarithmic energy of the animal voice sequence;
  • the third extraction module 323 may be configured to perform discrete cosine transform on the logarithmic energy of the animal speech sequence to obtain an animal speech feature vector.
  • the extraction unit 32 includes:
  • the fourth extraction module 324 may be used to perform time domain analysis and frequency domain analysis on the animal voice sequence of each frame, and calculate the time domain characteristic parameters and frequency domain feature parameters of the animal voice sequence of each frame;
  • the fifth extraction module 325 may be configured to obtain an animal voice feature vector according to the time domain feature parameters and the frequency domain feature parameters.
  • the training unit 33 includes:
  • a sixth extraction module 331 may be used to extract the local voiceprint information of the animal speech feature vector through the convolution layer of the convolutional neural network model;
  • connection module 332 can be used to connect the extracted local voiceprint information through the fully connected layer of the convolutional neural network model to obtain multidimensional local voiceprint information
  • The dimensionality reduction module 333 can be used to reduce the dimensionality of the multidimensional local voiceprint information through the pooling layer of the convolutional neural network model to obtain the animal voiceprint features.
  • training unit 33 further includes:
  • The stitching module 334 can be used to perform frame stitching on the animal voice feature vectors input to the convolutional layer of the convolutional neural network model, before the local voiceprint information of the animal voice feature vector is extracted through that convolutional layer.
  • FIG. 5 is a block diagram of an animal voiceprint feature extraction device 400 according to an embodiment of the present application.
  • the device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
  • the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an I / O (Input / Output) interface 412, A sensor component 414, and a communication component 416.
  • the processing component 402 generally controls the overall operations of the device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 402 may include one or more processors 420 to execute instructions to complete all or part of the steps of the method described above.
  • the processing component 402 may include one or more modules to facilitate the interaction between the processing component 402 and other components.
  • the processing component 402 may include a multimedia module to facilitate the interaction between the multimedia component 408 and the processing component 402.
  • the memory 404 is configured to store various types of data to support operation at the device 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 404 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as SRAM (Static Random Access Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), ROM (Read-Only Memory), magnetic memory, flash memory, or magnetic or optical disks.
  • the power component 406 provides power to various components of the device 400.
  • the power component 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 400.
  • the multimedia component 408 includes a screen that provides an output interface between the device 400 and a user.
  • the screen may include an LCD (Liquid Crystal Display) and a TP (Touch Panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera may have a fixed optical lens system or focal-length and optical-zoom capability.
  • the audio component 410 is configured to output and / or input audio signals.
  • the audio component 410 includes a MIC (Microphone); when the device 400 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 404 or transmitted via the communication component 416.
  • the audio component 410 further includes a speaker for outputting an audio signal.
  • the I / O interface 412 provides an interface between the processing component 402 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, a button, or the like. These buttons can include, but are not limited to: a home button, a volume button, a start button, and a lock button.
  • the sensor component 414 includes one or more sensors for providing status assessment of various aspects of the device 400.
  • the sensor component 414 can detect the on / off state of the device 400 and the relative positioning of the components, such as the display and keypad of the device 400.
  • the sensor component 414 can also detect a change in the position of the device 400 or of a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and temperature changes of the device 400.
  • the sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 414 may further include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or a CCD (Charge-coupled Device) image sensor, for use in imaging applications.
  • the sensor component 414 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices.
  • the device 400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 416 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 416 further includes an NFC (Near Field Communication) module to facilitate short-range communication.
  • the NFC module may be implemented based on RFID (Radio Frequency Identification) technology, IrDA (Infrared Data Association) technology, UWB (Ultra-Wideband) technology, BT (Bluetooth) technology, and other technologies.
  • the device 400 may be implemented by one or more ASICs (Application-Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field-Programmable Gate Arrays), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above animal voiceprint feature extraction method.
  • a non-transitory, non-volatile computer-readable storage medium including instructions is also provided; the instructions may be executed by the processor 420 of the device 400 to complete the above method.
  • the non-transitory, non-volatile computer-readable storage medium may be a ROM, RAM (Random Access Memory), CD-ROM (Compact Disc Read-Only Memory), magnetic tape, floppy disk, optical data storage device, or the like.
  • when the instructions in the non-volatile readable storage medium are executed by a processor of an animal voiceprint feature extraction apparatus, the apparatus is enabled to perform the above animal voiceprint feature extraction method.
  • the modules or steps of the present application may be implemented by general-purpose computing devices; they may be concentrated on a single computing device or distributed across a network formed by multiple computing devices
  • they may be implemented with computer-readable instructions of a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from the one here, or they may be made into individual integrated-circuit modules, or multiple modules or steps among them may be made into a single integrated-circuit module.
  • this application is not limited to any particular combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)
  • Toys (AREA)

Abstract

An animal voiceprint feature extraction method and apparatus, and a non-volatile computer-readable storage medium, relating to the technical field of animal identity recognition, capable of accurately extracting animal voiceprint features and thereby improving animal identification performance. The method comprises: acquiring animal voice data (S101); extracting an animal voice feature vector from the animal voice data (S102); and inputting the animal voice feature vector into a convolutional neural network model for training, to obtain animal voiceprint features for identifying animal identity (S103).

Description

Animal voiceprint feature extraction method and apparatus, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 5, 2018, with application number 2018107292687 and entitled "Animal voiceprint feature extraction method, apparatus and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of animal identity recognition, and in particular to an animal voiceprint feature extraction method and apparatus, and a non-volatile computer-readable storage medium.
Background
In the latest animal identification systems, animal voiceprint features can be extracted to recognize animal identity information and thereby determine an animal's identity. Voiceprint recognition is a form of biometric recognition: different species and different individuals each carry unique voiceprint information. Humans can tell which kind of animal is calling from its sound alone, but it is very difficult for the human ear to directly distinguish the sounds made by different individuals of the same species.
Specifically, in the animal identification process, technicians conducting experimental tests process animal voice data, place it into a voice library, and build an animal voiceprint feature library that records the animals' identity information; voice data of animals whose identity has been confirmed is then labeled, so that their identity is established. When animal identity information needs to be verified, the voice data of the animal to be identified is compared against the voice data in the animal voiceprint feature library to recognize the animal's identity.
Existing animal voiceprint feature extraction methods usually convert the animal sound signal into a spectrogram, which is an image representation of the sound signal in which the amplitude at each frequency point is distinguished by color, and then derive the animal voiceprint features through various processing techniques. However, extracting animal voiceprint features by analyzing spectrograms has low accuracy, which degrades the accuracy of the extracted features; moreover, during extraction, environmental noise and mixtures of multiple sounds further impair the voiceprint extraction results.
Summary
Embodiments of this application provide an animal voiceprint feature extraction method and apparatus, and a non-volatile computer-readable storage medium, which solve the problem in the related art that animal voiceprint features cannot be extracted accurately.
According to a first aspect of the embodiments of this application, an animal voiceprint feature extraction method is provided, the method comprising:
acquiring animal voice data;
extracting an animal voice feature vector from the animal voice data;
inputting the animal voice feature vector into a convolutional neural network model for training, to obtain animal voiceprint features for identifying animal identity.
According to a second aspect of the embodiments of this application, an animal voiceprint feature extraction apparatus is provided, the apparatus comprising:
an acquisition unit, configured to acquire animal voice data;
an extraction unit, configured to extract an animal voice feature vector from the animal voice data;
a training unit, configured to input the animal voice feature vector into a convolutional neural network model for training, to obtain animal voiceprint features for identifying animal identity.
According to a third aspect of the embodiments of this application, a non-volatile computer-readable storage medium is provided, storing computer-readable instructions that, when executed by a processor, implement the following steps:
acquiring animal voice data;
extracting an animal voice feature vector from the animal voice data;
inputting the animal voice feature vector into a convolutional neural network model for training, to obtain animal voiceprint features for identifying animal identity.
According to a fourth aspect of the embodiments of this application, a computer device is provided, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the instructions:
acquiring animal voice data;
extracting an animal voice feature vector from the animal voice data;
inputting the animal voice feature vector into a convolutional neural network model for training, to obtain animal voiceprint features for identifying animal identity.
Through this application, an animal voice feature vector is extracted from the animal voice data. Because animal voice feature vectors are computationally simple and highly discriminative, they are input into a convolutional neural network model for training so as to extract animal voiceprint features. Compared with the prior-art approach of extracting animal voiceprint features from spectrograms, the embodiments of this application adopt a more advanced voiceprint extraction technique: the convolutional neural network model is trained repeatedly on the animal voice feature vectors, so that the animal voiceprint features are extracted accurately and the animal identification performance is improved.
Brief Description of the Drawings
The drawings described here are provided for further understanding of this application and form a part of this application; the illustrative embodiments of this application and their descriptions serve to explain this application and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of an animal voiceprint feature extraction method according to an embodiment of this application;
Fig. 2 is a flowchart of another animal voiceprint feature extraction method according to an embodiment of this application;
Fig. 3 is a structural block diagram of an animal voiceprint feature extraction apparatus according to an embodiment of this application;
Fig. 4 is a structural block diagram of another animal voiceprint feature extraction apparatus according to an embodiment of this application;
Fig. 5 is a block diagram of an animal voiceprint feature extraction apparatus 400 according to an embodiment of this application.
Detailed Description
This application is described in detail below with reference to the drawings and in combination with the embodiments. It should be noted that, where there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another.
This embodiment provides an animal voiceprint feature extraction method. Fig. 1 is a flowchart of an animal voiceprint feature extraction method according to an embodiment of this application. As shown in Fig. 1, the process comprises the following steps:
Step S101: acquire animal voice data.
The animal voice data is data of the sounds animals make, the distinctive sounds with which animals communicate. For example, bees convey information through the sound of their wings, dolphins can produce melodic notes much as humans do, and pig calls also convey a great deal of information, such as indications of a pig's health or cues for identifying an individual pig.
In the embodiments of this application, the animal voice data is audio data collected from the animal. It may be acquired by mounting a collection device on the animal or by installing collection devices in the animals' living area; this is not limited in the embodiments of this application. To ensure that more accurate voice data is obtained, a wearable sensor is usually fitted around the animal's neck to acquire the animal voice data.
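As an illustrative sketch (not part of the original disclosure), the acquisition step ultimately reduces to reading the audio captured by the collection device into an array of samples. The file name, 16 kHz sampling rate, and 16-bit mono format below are assumptions made for the example; the sketch synthesizes a short tone first so that it is self-contained:

```python
import math
import struct
import wave

def write_demo_wav(path, rate=16000, seconds=0.1, freq=440.0):
    """Synthesize a short sine tone so the sketch is self-contained."""
    n = int(rate * seconds)
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit PCM
        w.setframerate(rate)
        w.writeframes(frames)

def read_voice_data(path):
    """Return (sample_rate, list of 16-bit PCM samples) for a mono WAV file."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        raw = w.readframes(w.getnframes())
    samples = list(struct.unpack("<%dh" % (len(raw) // 2), raw))
    return rate, samples

write_demo_wav("demo_animal_voice.wav")
rate, samples = read_voice_data("demo_animal_voice.wav")
print(rate, len(samples))
```

In practice the recording delivered by a wearable sensor would simply replace the synthesized file.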
Step S102: extract an animal voice feature vector from the animal voice data.
Generally, animal voice feature vectors can be divided into two broad categories according to the stability of their parameters. One category reflects the animal's inherent characteristics (such as vocal-tract structure) and is expressed mainly in the spectral structure of the voice; it includes spectral-envelope feature information reflecting vocal-tract resonance and fine spectral-structure feature information reflecting sound-source characteristics such as vocal-cord vibration. Representative characteristic parameters are the fundamental frequency and the formants; features of this kind are hard to imitate but easily affected by health condition. The other category reflects the animal's vocal-tract movement characteristics, i.e., manner and habits of vocalization, and is expressed mainly in how the spectral structure of the voice varies over time; representative characteristic parameters are the cepstral coefficients, which capture the dynamic behavior of the characteristic parameters. Features of this kind are relatively stable but easy to imitate.
In the embodiments of this application, the animal voice feature vector carries the distinctive voice information in the animal voice data and amounts to the preparation stage for the subsequent animal voiceprint feature extraction: by extracting the feature vector from the animal voice data, the information useful for animal identification is retained and the irrelevant redundant information is discarded.
Step S103: input the animal voice feature vector into a convolutional neural network model for training, to obtain animal voiceprint features for identifying animal identity.
In the embodiments of this application, the convolutional neural network model is a network structure that can extract animal voiceprint features by repeatedly training on animal voice feature vectors; the network can be trained on the animal voice feature vectors and yield the correct input-output relationship.
The structure of the convolutional neural network model can be realized with convolutional layers, fully connected layers, and pooling layers. The convolutional layers act as the hidden layers of the convolutional neural network and may form a multi-layer structure used to extract deeper animal voiceprint features. In convolutional neural network models, pooling layers are often interleaved between successive convolutional layers to reduce the number of parameters and the amount of computation. The fully connected layers here are similar to the convolutional layers in that the neurons of a convolutional layer connect to a local region of the previous layer's output; to keep the number of output feature dimensions from growing too large, two fully connected layers can be provided to integrate the feature vectors output after the animal voice feature vectors have been trained through several convolutional layers.
Animal voiceprint information is the only sound characteristic able to identify an individual animal; it is the spectrum of the sound waves carrying vocal information, as displayed by electro-acoustic instruments. Although the physiological construction of animals' vocal organs is broadly the same, the organs animals use in vocalization differ greatly in size and shape, and these differing vocal-tract characteristics determine the uniqueness of the voiceprint, which is a stable long-term characteristic signal.
Through this application, an animal voice feature vector is extracted from the animal voice data. Because animal voice feature vectors are computationally simple and highly discriminative, they are input into a convolutional neural network model for training so as to extract animal voiceprint features. Compared with the prior-art approach of extracting animal voiceprint features from spectrograms, the embodiments of this application adopt a more advanced voiceprint extraction technique: the convolutional neural network model is trained repeatedly on the animal voice feature vectors, so that the animal voiceprint features are extracted accurately and the animal identification performance is improved.
Fig. 2 is a flowchart of another animal voiceprint feature extraction method according to an embodiment of this application. As shown in Fig. 2, the method comprises the following steps:
Step S201: acquire animal voice data.
It should be noted that the animal voice data is acquired here in the same way as in step S101, which is not repeated.
In the embodiments of this application, considering the species and number of animals involved, recording voice data for too long would require a great deal of processing time for each species or each animal; controlling the length of voice collection with a preset time period facilitates the subsequent processing of the animal voice data.
Step S202: preprocess the animal voice data to obtain processed animal voice data.
In the embodiments of this application, preprocessing may include sampling and quantization, pre-emphasis, framing, and windowing. Sampling and quantization convert the animal voice sequence, continuous in both time and amplitude, into a discrete analog signal that is discrete in time yet continuous in amplitude, and represent the amplitude of the animal voice sequence with specified values. Pre-emphasis compensates for the effects of the vocal cords and lips during vocalization, restoring the high-frequency components suppressed by the vocal system and emphasizing the high-frequency formants. Framing divides the animal voice data into frames, usually taking a set of N samples as one observation unit, i.e., one frame. Windowing removes discontinuities at the two ends of each frame of the animal voice data, so that the analysis is not affected by the adjacent frames.
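The pre-emphasis, framing, and windowing operations above can be sketched as follows. This is a minimal NumPy illustration: the pre-emphasis coefficient 0.97, the 400-sample frame length, and the 160-sample hop are conventional assumptions, not values taken from the original text:

```python
import numpy as np

def preprocess(signal, frame_len=400, hop=160, alpha=0.97):
    """Pre-emphasize, split into frames, and apply a Hamming window."""
    # Pre-emphasis: boost the high frequencies suppressed by the vocal system.
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Framing: each observation unit is frame_len samples, advanced by hop.
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emphasized[idx]
    # Windowing: remove discontinuities at both ends of each frame.
    return frames * np.hamming(frame_len)

rng = np.random.default_rng(0)
frames = preprocess(rng.standard_normal(16000))  # 1 s of audio at 16 kHz
print(frames.shape)
```

Each row of the result is one windowed frame, ready for the framing and feature-extraction steps that follow.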
Step S203: frame the processed animal voice data according to a preset time interval, to obtain a multi-frame animal voice sequence.
In the embodiments of this application, animal voice data is not a stationary audio signal. Framing the processed animal voice data according to the preset time interval yields a multi-frame animal voice sequence, and each frame of the sequence can then be treated as a stationary audio signal, which facilitates the subsequent processing of the animal voice sequence.
For example, when framing the animal voice data, the preset time interval is usually set to 200–400 ms; this is not limited here and is determined by the actual situation.
Step S204: extract animal voice feature vectors from the animal voice data.
Animal voice feature vectors are basic features that can reflect information about the individual animal. These basic features must distinguish different vocalizing animals accurately and effectively, and for the same individual they should be stable.
Different animal voice feature vectors have different characteristic parameters, and different characteristic parameters have different physical meanings. For example, the fundamental frequency and the formants characterize the animal's inherent features; LPC, based on an all-pole model, can reflect the spectral amplitude of the animal voice sequence fairly accurately; and the cepstral coefficients reflect the resonance characteristics of the animal's vocal tract, separating the smaller peak information from the more important vocal-tract shape information.
In the embodiments of this application, different animal voice feature vectors are extracted in different ways. When the animal voice feature vector is used to reflect how the spectral structure of the animal voice varies over time, it can be extracted from the animal voice data as follows: first, apply a Fourier transform to each frame of the animal voice sequence to obtain its spectrum, and take the squared magnitude of each frame's spectrum to obtain the power spectrum of the animal voice sequence; then filter the power spectrum of the animal voice sequence with a preset filter to obtain the log energy of the animal voice sequence; finally, apply a discrete cosine transform to the log energy of the animal voice sequence to obtain the animal voice feature vector. When the animal voice feature vector is used to reflect the spectral structure of the animal voice itself, it can be extracted from the animal voice data as follows: first, perform time-domain analysis and frequency-domain analysis on each frame of the animal voice sequence; then compute each frame's time-domain and frequency-domain characteristic parameters; finally, obtain the animal voice feature vector from the time-domain and frequency-domain characteristic parameters.
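As a hedged sketch of the second extraction path, per-frame time-domain parameters and a frequency-domain parameter can be assembled into a small feature vector. The original text does not name the specific parameters, so short-time energy, zero-crossing rate, and spectral centroid are illustrative choices assumed here:

```python
import numpy as np

def frame_features(frames, rate=16000):
    """Per-frame time- and frequency-domain parameters for framed audio."""
    energy = np.sum(frames ** 2, axis=1)                                  # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)  # zero-crossing rate
    spec = np.abs(np.fft.rfft(frames, axis=1))                            # magnitude spectrum
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / rate)
    centroid = (spec * freqs).sum(axis=1) / np.maximum(spec.sum(axis=1), 1e-12)
    return np.column_stack([energy, zcr, centroid])                       # one vector per frame

frames = np.sin(2 * np.pi * 440 * np.arange(800).reshape(2, 400) / 16000)
feats = frame_features(frames)
print(feats.shape)
```

Each row combines the time-domain and frequency-domain parameters of one frame into a single animal voice feature vector.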
Generally, MFCC features are among the most widely used voice features today, with outstanding advantages such as computational simplicity and good discrimination; to some extent they model how an ear processes sound, giving good recognition performance. The embodiments of this application can therefore use Mel-frequency cepstral coefficients (MFCC features) as the animal voice feature vectors extracted from the animal voice data. Since MFCC features likewise reflect how the spectral structure of the animal voice varies over time, they are extracted as described in the steps above, which are not repeated here.
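The MFCC pipeline described above (Fourier transform → power spectrum → mel filter bank → log energy → discrete cosine transform) can be sketched with NumPy alone. The filter-bank size (26) and coefficient count (13) are common defaults assumed for illustration, not values from the original text:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_filterbank(n_filters, n_fft, rate):
    """Triangular filters spaced evenly on the mel scale (the 'preset filter')."""
    mels = np.linspace(0, hz_to_mel(rate / 2), n_filters + 2)
    hz = 700.0 * (10 ** (mels / 2595.0) - 1.0)
    bins = np.floor((n_fft + 1) * hz / rate).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mfcc(frames, rate=16000, n_filters=26, n_coeffs=13):
    n_fft = frames.shape[1]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / n_fft  # power spectrum
    log_energy = np.log(power @ mel_filterbank(n_filters, n_fft, rate).T + 1e-10)
    n = np.arange(n_filters)
    # DCT-II of the log filter-bank energies, keeping the first n_coeffs coefficients.
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return log_energy @ dct.T

frames = np.random.default_rng(1).standard_normal((98, 400))  # windowed frames
print(mfcc(frames).shape)
```

Each frame of the animal voice sequence thus yields one MFCC feature vector, matching the per-frame extraction the text describes.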
Step S205: input the animal voice feature vectors into a convolutional neural network model for training, to obtain animal voiceprint features for identifying animal identity.
In the embodiments of this application, the convolutional neural network model is a multi-layer network model. Its convolutional layers, which act as the hidden layers of the neural network model, extract local voiceprint information from the animal voice feature vectors; this local voiceprint information reflects local characteristics of the animal's voiceprint. Multiple convolutional layers can be provided, feeding the extracted local voiceprint information into another convolutional layer for further local voiceprint extraction, thereby extracting deeper local voiceprint information. A fully connected layer of the convolutional neural network model then connects the extracted local voiceprint information to obtain multi-dimensional local voiceprint information; to reduce the output size and the burden of fitting, a pooling layer of the convolutional neural network model subsequently reduces the dimensionality of the multi-dimensional local voiceprint information to yield the animal voiceprint features.
It should be noted that, to account for the dependency between two consecutive frames of voice feature vectors, frame splicing is applied to the animal voice feature vectors each time they are fed into a convolutional layer of the convolutional neural network model, increasing the coupled feature relationship between adjacent frames. For example, the first 5 frames of animal voice feature vectors are spliced when feeding the first convolutional layer, and the first 9 frames are spliced the second time the vectors are fed into a convolutional layer.
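Frame splicing, stacking each frame with its neighbors so the network sees inter-frame context, can be sketched as follows. Repeating the edge frames as padding is one common convention assumed here, since the original text does not specify how boundaries are handled:

```python
import numpy as np

def splice_frames(feats, context=2):
    """Concatenate each frame with `context` frames on each side.

    feats: (n_frames, dim) array; returns (n_frames, (2*context+1)*dim).
    """
    padded = np.pad(feats, ((context, context), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(feats)] for i in range(2 * context + 1)])

feats = np.arange(12.0).reshape(4, 3)   # 4 frames of 3-dim features
spliced = splice_frames(feats, context=2)
print(spliced.shape)
```

With 83-dimensional features and a 5-frame splice (context=2), each spliced vector has 5 × 83 = 415 dimensions, matching the layer-1 input in the worked example below in the text.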
For example, the neural network model is set to 15 layers, with the first 11 layers as convolutional layers. Layer 1 is a convolutional layer whose input parameters are the animal voice feature vectors obtained by splicing the 5 surrounding frames; with an 83-dimensional animal voice feature vector per frame, this gives a 415-dimensional animal voice feature vector as the input parameter. Layer 1 extracts local voiceprint information from the animal voice feature vectors and outputs it. Layer 2 is a fully connected layer that connects the extracted local voiceprint information. Layer 3 is a convolutional layer that, in the same way, takes the local voiceprint information output by layer 2 as its input parameters, extracts local voiceprint information from the animal voice feature vectors, and outputs it, and so on, until the 11th convolutional layer outputs its local voiceprint information. Layer 12 is a pooling layer that integrates the local voiceprint information output by the training of the first 11 layers, computing its mean and variance. Layers 13–15 are fully connected layers that reduce the dimensionality of the integrated voiceprint features and output a one-dimensional animal voiceprint feature, completing the extraction of the animal voiceprint feature.
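The integration performed by layer 12 in the example, computing the mean and variance of the local voiceprint information from the first 11 layers, corresponds to statistics pooling, which might look like this as a standalone sketch (the 98-frame, 256-dimensional input is an assumed illustrative size):

```python
import numpy as np

def stats_pooling(local_info):
    """Collapse per-frame local voiceprint vectors into one utterance-level
    vector by concatenating their mean and variance, as the pooling layer
    in the worked example does."""
    mean = local_info.mean(axis=0)
    var = local_info.var(axis=0)
    return np.concatenate([mean, var])

local_info = np.random.default_rng(2).standard_normal((98, 256))  # 98 frames, 256-dim
pooled = stats_pooling(local_info)
print(pooled.shape)
```

The pooled vector has a fixed length regardless of how many frames the recording contains, which is what lets the subsequent fully connected layers output a fixed-size animal voiceprint feature.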
Further, before animal identity is recognized, the extracted animal voiceprint features can be purified to obtain the voiceprint features that best represent the animal's identity, ensuring the accuracy of the animal voiceprint features.
For example, an LDA matrix is applied to all the 1024-dimensional animal voiceprint feature vectors, and the output retains the feature vectors that represent the animal voiceprint features to the greatest extent, improving the precision of the animal voiceprint features.
Step S206: build an animal voiceprint feature library from the extracted voiceprint features of different animals, the voiceprint features of different animals carrying unique animal identity information.
Since an animal's voiceprint features amount to identification information unique to that animal, and the voiceprint features of different animals carry animal identity information, the extracted voiceprint features of different animals are used as animal voiceprint samples to build an animal voiceprint feature library, which facilitates animal identification. The feature library amounts to a database storing the voiceprint features of different animals, and each animal voiceprint record carries animal identification information, such as a numeric or alphabetic code; this is not limited here.
It should be noted that, to facilitate subsequent animal identification, the voiceprint feature library can be sorted in advance, for example by animal species, by region, or by age, and invalid animal identity samples can be screened out before identification, saving animal identity verification time.
Step S207: when an animal identity verification request is received, compare the voiceprint features of the animal whose identity is to be identified with the voiceprint features in the animal voiceprint feature library, to determine the animal identity information.
Since the animal voiceprint feature library stores voiceprint samples of different animals, when an animal identity verification request is received, the voiceprint features of the animal to be identified are extracted through steps S201 to S205 above and compared one by one with the voiceprint features in the animal voiceprint feature library, thereby determining the animal identity information.
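The one-by-one comparison against the voiceprint feature library can be sketched as a cosine-similarity search. The scoring rule and the 0.7 acceptance threshold are assumptions for illustration, as the original text does not specify how the comparison is scored; the animal IDs are likewise hypothetical:

```python
import numpy as np

def identify(query, library, threshold=0.7):
    """Return the ID of the best-matching voiceprint in the library,
    or None if no entry scores above the threshold."""
    best_id, best_score = None, threshold
    q = query / np.linalg.norm(query)
    for animal_id, feat in library.items():
        score = float(q @ (feat / np.linalg.norm(feat)))  # cosine similarity
        if score > best_score:
            best_id, best_score = animal_id, score
    return best_id

rng = np.random.default_rng(3)
library = {"pig_001": rng.standard_normal(64), "pig_002": rng.standard_normal(64)}
query = library["pig_002"] + 0.05 * rng.standard_normal(64)  # slightly perturbed sample
print(identify(query, library))
```

In a deployed system the library values would be the CNN-extracted voiceprint features of enrolled animals, and the query would come from steps S201–S205.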
Through the embodiments of this application, an animal voice feature vector is extracted from the animal voice data. Because animal voice feature vectors are computationally simple and highly discriminative, they are input into a convolutional neural network model for training so as to extract animal voiceprint features. Compared with the prior-art approach of extracting animal voiceprint features from spectrograms, the embodiments of this application adopt a more advanced voiceprint extraction technique: the convolutional neural network model is trained repeatedly on the animal voice feature vectors, so that the animal voiceprint features are extracted accurately and the animal identification performance is improved.
Fig. 3 is a structural block diagram of an animal voiceprint feature extraction apparatus according to an embodiment of this application. Referring to Fig. 3, the apparatus comprises an acquisition unit 31, an extraction unit 32, and a training unit 33.
The acquisition unit 31 may be used to acquire animal voice data;
the extraction unit 32 may be used to extract animal voice feature vectors from the animal voice data;
the training unit 33 may be used to input the animal voice feature vectors into a convolutional neural network model for training, to obtain animal voiceprint features for identifying animal identity.
Through this application, an animal voice feature vector is extracted from the animal voice data. Because animal voice feature vectors are computationally simple and highly discriminative, they are input into a convolutional neural network model for training so as to extract animal voiceprint features. Compared with the prior-art approach of extracting animal voiceprint features from spectrograms, the embodiments of this application adopt a more advanced voiceprint extraction technique: the convolutional neural network model is trained repeatedly on the animal voice feature vectors, so that the animal voiceprint features are extracted accurately and the animal identification performance is improved.
As a further illustration of the animal voiceprint feature extraction apparatus shown in Fig. 3, Fig. 4 is a structural diagram of another animal voiceprint feature extraction apparatus according to an embodiment of this application. As shown in Fig. 4, the apparatus further comprises:
a preprocessing unit 34, which may be used to preprocess the animal voice data after it is acquired, to obtain processed animal voice data;
a framing unit 35, which may be used to frame the processed animal voice data according to a preset time interval, to obtain a multi-frame animal voice sequence;
an establishing unit 36, which may be used to, after the animal voice feature vectors are input into the convolutional neural network model for training to obtain the animal voiceprint features for identifying animal identity, build an animal voiceprint feature library from the extracted voiceprint features of different animals, the voiceprint features of different animals carrying unique animal identity information;
a comparison unit 37, which may be used to, when an animal identity verification request is received, compare the voiceprint features of the animal to be identified with the voiceprint features in the animal voiceprint feature library, to determine the animal identity information.
Further, when the animal voice feature vector is used to reflect how the spectral structure of the animal voice varies over time, the extraction unit 32 comprises:
a first extraction module 321, which may be used to perform a Fourier transform on each frame of the animal voice sequence to obtain its spectrum, and to take the squared magnitude of each frame's spectrum to obtain the power spectrum of the animal voice sequence;
a second extraction module 322, which may be used to filter the power spectrum of the animal voice sequence with a preset filter to obtain the log energy of the animal voice sequence;
a third extraction module 323, which may be used to perform a discrete cosine transform on the log energy of the animal voice sequence to obtain the animal voice feature vector.
Further, when the animal voice feature vector is used to reflect the spectral structure of the animal voice, the extraction unit 32 comprises:
a fourth extraction module 324, which may be used to perform time-domain and frequency-domain analysis on each frame of the animal voice sequence and compute each frame's time-domain and frequency-domain characteristic parameters;
a fifth extraction module 325, which may be used to obtain the animal voice feature vector from the time-domain and frequency-domain characteristic parameters.
Further, the training unit 33 comprises:
a sixth extraction module 331, which may be used to extract local voiceprint information of the animal voice feature vectors through the convolutional layers of the convolutional neural network model;
a connection module 332, which may be used to connect the extracted local voiceprint information through the fully connected layer of the convolutional neural network model, to obtain multi-dimensional local voiceprint information;
a dimensionality-reduction module 333, which may be used to reduce the dimensionality of the multi-dimensional local voiceprint information through the pooling layer of the convolutional neural network model, to obtain the animal voiceprint features.
Further, the training unit 33 further comprises:
a frame-splicing module 334, which may be used to perform frame splicing on the animal voice feature vectors each time they are input into a convolutional layer of the convolutional neural network model, before the local voiceprint information of the animal voice feature vectors is extracted through the convolutional layers of the convolutional neural network model.
Fig. 5 is a block diagram of an animal voiceprint feature extraction apparatus 400 according to an embodiment of this application. For example, it may be a computer device: the device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so on.
Referring to Fig. 5, the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an I/O (Input/Output) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions so as to complete all or part of the steps of the method described above. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and the other components. For example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operation of the device 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phone book data, messages, pictures, videos, and so on. The memory 404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as SRAM (Static Random Access Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), ROM (Read-Only Memory), magnetic memory, flash memory, or magnetic or optical disks.
The power component 406 provides power to the various components of the device 400. The power component 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 400.
The multimedia component 408 includes a screen providing an output interface between the device 400 and the user. In some embodiments, the screen may include an LCD (Liquid Crystal Display) and a TP (Touch Panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera may have a fixed optical lens system or focal-length and optical-zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a MIC (Microphone); when the device 400 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 404 or sent via the communication component 416. In some embodiments, the audio component 410 further includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the device 400. For example, the sensor component 414 can detect the on/off state of the device 400 and the relative positioning of components, for example the display and keypad of the device 400; the sensor component 414 can also detect a change in the position of the device 400 or of one of its components, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and temperature changes of the device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices. The device 400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 416 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 416 further includes an NFC (Near Field Communication) module to facilitate short-range communication. For example, the NFC module may be implemented based on RFID (Radio Frequency Identification) technology, IrDA (Infrared Data Association) technology, UWB (Ultra-Wideband) technology, BT (Bluetooth) technology, and other technologies.
In an exemplary embodiment, the device 400 may be implemented by one or more ASICs (Application-Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field-Programmable Gate Arrays), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above animal voiceprint feature extraction method.
In an exemplary embodiment, a non-transitory, non-volatile computer-readable storage medium including instructions is also provided, for example the memory 404 including instructions, which may be executed by the processor 420 of the device 400 to complete the above method. For example, the non-transitory, non-volatile computer-readable storage medium may be a ROM, RAM (Random Access Memory), CD-ROM (Compact Disc Read-Only Memory), magnetic tape, floppy disk, optical data storage device, or the like.
A non-transitory, non-volatile computer-readable storage medium: when the instructions in the non-volatile readable storage medium are executed by a processor of an animal voiceprint feature extraction apparatus, the apparatus is enabled to perform the above animal voiceprint feature extraction method.
Obviously, those skilled in the art should understand that the modules or steps of this application described above may be implemented by general-purpose computing devices; they may be concentrated on a single computing device or distributed across a network formed by multiple computing devices; optionally, they may be implemented with computer-readable instructions of a computing device, so that they may be stored in a storage device and executed by the computing device; and, in some cases, the steps shown or described may be performed in an order different from the one here, or they may be made into individual integrated-circuit modules, or multiple modules or steps among them may be made into a single integrated-circuit module. Thus, this application is not limited to any particular combination of hardware and software.
The above are merely preferred embodiments of this application and are not intended to limit this application; for those skilled in the art, this application may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within the scope of protection of this application.

Claims (20)

  1. An animal voiceprint feature extraction method, characterized in that the method comprises:
    acquiring animal voice data;
    extracting an animal voice feature vector from the animal voice data;
    inputting the animal voice feature vector into a convolutional neural network model for training, to obtain an animal voiceprint feature for identifying animal identity.
  2. The method according to claim 1, characterized in that, after the acquiring animal voice data, the method further comprises:
    preprocessing the animal voice data to obtain processed animal voice data;
    framing the processed animal voice data according to a preset time interval to obtain a multi-frame animal voice sequence;
    the extracting an animal voice feature vector from the animal voice data comprises:
    extracting, from the multi-frame animal voice sequence, a plurality of animal voice feature vectors in one-to-one correspondence with the multi-frame animal voice sequence.
  3. The method according to claim 2, characterized in that, when the animal voice feature vector is used to reflect feature information of how the spectral structure of the animal voice varies over time, the extracting an animal voice feature vector from the animal voice data comprises:
    performing a Fourier transform on each frame of the animal voice sequence to obtain the spectrum of each frame, and taking the squared magnitude of the spectrum of each frame to obtain the power spectrum of the animal voice sequence;
    filtering the power spectrum of the animal voice sequence with a preset filter to obtain the log energy of the animal voice sequence;
    performing a discrete cosine transform on the log energy of the animal voice sequence to obtain the animal voice feature vector.
  4. The method according to claim 2, characterized in that, when the animal voice feature vector is used to reflect feature information of the spectral structure of the animal voice, the extracting an animal voice feature vector from the animal voice data comprises:
    performing time-domain analysis and frequency-domain analysis on each frame of the animal voice sequence, and computing the time-domain and frequency-domain characteristic parameters of each frame;
    obtaining the animal voice feature vector from the time-domain and frequency-domain characteristic parameters.
  5. The method according to claim 2, characterized in that the convolutional neural network is a multi-layer network model, and the inputting the animal voice feature vector into the convolutional neural network for training, to obtain an animal voiceprint feature for identifying animal identity, comprises:
    extracting local voiceprint information of the animal voice feature vector through a convolutional layer of the convolutional neural network model;
    connecting the extracted local voiceprint information through a fully connected layer of the convolutional neural network model, to obtain multi-dimensional local voiceprint information;
    performing dimensionality reduction on the multi-dimensional local voiceprint information through a pooling layer of the convolutional neural network model, to obtain the animal voiceprint feature.
  6. The method according to claim 5, characterized in that, before the extracting local voiceprint information of the animal voice feature vector through the convolutional layer of the convolutional neural network model, the method further comprises:
    performing frame splicing on the animal voice feature vectors each time they are input into the convolutional layer of the convolutional neural network model.
  7. The method according to any one of claims 1-6, characterized in that, after the inputting the animal voice feature vector into the convolutional neural network model for training, to obtain an animal voiceprint feature for identifying animal identity, the method further comprises:
    building an animal voiceprint feature library from the extracted voiceprint features of different animals, the voiceprint features of different animals carrying unique animal identity information;
    when an animal identity verification request is received, comparing the voiceprint feature of the animal to be identified with the voiceprint features in the animal voiceprint feature library, to determine the animal identity information.
  8. An animal voiceprint feature extraction apparatus, characterized in that the apparatus comprises:
    an acquisition unit, configured to acquire animal voice data;
    an extraction unit, configured to extract an animal voice feature vector from the animal voice data;
    a training unit, configured to input the animal voice feature vector into a convolutional neural network model for training, to obtain an animal voiceprint feature for identifying animal identity.
  9. The apparatus according to claim 8, characterized in that the apparatus further comprises:
    a preprocessing unit, configured to preprocess the animal voice data after the animal voice data is acquired, to obtain processed animal voice data;
    a framing unit, configured to frame the processed animal voice data according to a preset time interval, to obtain a multi-frame animal voice sequence.
  10. The apparatus according to claim 9, characterized in that, when the animal voice feature vector is used to reflect feature information of how the spectral structure of the animal voice varies over time, the extraction unit comprises:
    a first extraction module, configured to perform a Fourier transform on each frame of the animal voice sequence to obtain the spectrum of each frame of the animal voice signal, and to take the squared magnitude of the spectrum of each frame to obtain the power spectrum of the animal voice signal;
    a second extraction module, configured to filter the power spectrum of the animal voice signal with a preset filter to obtain the log energy of the animal voice signal;
    a third extraction module, configured to perform a discrete cosine transform on the log energy of the animal voice signal to obtain the animal voice feature vector.
  11. The apparatus according to claim 9, characterized in that, when the animal voice feature vector is used to reflect feature information of the spectral structure of the animal voice, the extraction unit comprises:
    a fourth extraction module, configured to perform time-domain analysis and frequency-domain analysis on each frame of the animal voice sequence, and to compute the time-domain and frequency-domain characteristic parameters of each frame;
    a fifth extraction module, configured to obtain the animal voice feature vector from the time-domain and frequency-domain characteristic parameters.
  12. The apparatus according to claim 8, characterized in that the training unit comprises:
    a sixth extraction module, configured to extract local voiceprint information of the animal voice feature vector through a convolutional layer of the convolutional neural network model;
    a connection module, configured to connect the extracted local voiceprint information through a fully connected layer of the convolutional neural network model, to obtain multi-dimensional local voiceprint information;
    a dimensionality-reduction module, configured to perform dimensionality reduction on the multi-dimensional local voiceprint information through a pooling layer of the convolutional neural network model, to obtain the animal voiceprint feature.
  13. The apparatus according to claim 12, characterized in that the training unit further comprises:
    a frame-splicing module, configured to perform frame splicing on the animal voice feature vectors each time they are input into the convolutional layer of the convolutional neural network model, before the local voiceprint information of the animal voice feature vector is extracted through the convolutional layer of the convolutional neural network model.
  14. The apparatus according to any one of claims 8-13, characterized in that the apparatus further comprises:
    an establishing unit, configured to, after the animal voice feature vector is input into the convolutional neural network model for training to obtain the animal voiceprint feature for identifying animal identity, build an animal voiceprint database from the extracted voiceprint features of different animals as animal voiceprint samples, the voiceprint features of different animals carrying unique animal identity information;
    a comparison unit, configured to, when an animal identity verification request is received, determine the animal identity information by comparing the voiceprint feature of the animal to be identified with the voiceprint features in the animal voiceprint feature library.
  15. A non-volatile computer-readable storage medium storing computer-readable instructions, characterized in that the computer-readable instructions, when executed by a processor, implement an animal voiceprint feature extraction method comprising:
    acquiring animal voice data;
    extracting an animal voice feature vector from the animal voice data;
    inputting the animal voice feature vector into a convolutional neural network model for training, to obtain an animal voiceprint feature for identifying animal identity.
  16. The non-volatile computer-readable storage medium according to claim 15, characterized in that, when the computer-readable instructions are executed by a processor, after the acquiring animal voice data, the method further comprises:
    preprocessing the animal voice data to obtain processed animal voice data; framing the processed animal voice data according to a preset time interval to obtain a multi-frame animal voice sequence;
    the extracting an animal voice feature vector from the animal voice data comprises: extracting, from the multi-frame animal voice sequence, a plurality of animal voice feature vectors in one-to-one correspondence with the multi-frame animal voice sequence.
  17. The non-volatile computer-readable storage medium according to claim 16, characterized in that, when the computer-readable instructions are executed by a processor and the animal voice feature vector is used to reflect feature information of how the spectral structure of the animal voice varies over time, the extracting an animal voice feature vector from the animal voice data comprises:
    performing a Fourier transform on each frame of the animal voice sequence to obtain the spectrum of each frame, and taking the squared magnitude of the spectrum of each frame to obtain the power spectrum of the animal voice sequence;
    filtering the power spectrum of the animal voice sequence with a preset filter to obtain the log energy of the animal voice sequence;
    performing a discrete cosine transform on the log energy of the animal voice sequence to obtain the animal voice feature vector.
  18. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor, when executing the computer-readable instructions, implements an animal voiceprint feature extraction method comprising:
    acquiring animal voice data;
    extracting an animal voice feature vector from the animal voice data;
    inputting the animal voice feature vector into a convolutional neural network model for training, to obtain an animal voiceprint feature for identifying animal identity.
  19. The computer device according to claim 18, characterized in that, when the processor executes the computer-readable instructions, after the acquiring animal voice data, the method further comprises:
    preprocessing the animal voice data to obtain processed animal voice data;
    framing the processed animal voice data according to a preset time interval to obtain a multi-frame animal voice sequence;
    the extracting an animal voice feature vector from the animal voice data comprises:
    extracting, from the multi-frame animal voice sequence, a plurality of animal voice feature vectors in one-to-one correspondence with the multi-frame animal voice sequence.
  20. The computer device according to claim 19, characterized in that, when the processor executes the computer-readable instructions and the animal voice feature vector is used to reflect feature information of how the spectral structure of the animal voice varies over time, the extracting an animal voice feature vector from the animal voice data comprises:
    performing a Fourier transform on each frame of the animal voice sequence to obtain the spectrum of each frame, and taking the squared magnitude of the spectrum of each frame to obtain the power spectrum of the animal voice sequence;
    filtering the power spectrum of the animal voice sequence with a preset filter to obtain the log energy of the animal voice sequence; performing a discrete cosine transform on the log energy of the animal voice sequence to obtain the animal voice feature vector.
PCT/CN2018/111658 2018-07-05 2018-10-24 Animal voiceprint feature extraction method, apparatus and computer-readable storage medium WO2020006935A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810729268.7 2018-07-05
CN201810729268.7A CN108899037B (zh) 2018-07-05 2018-07-05 动物声纹特征提取方法、装置及电子设备

Publications (1)

Publication Number Publication Date
WO2020006935A1 true WO2020006935A1 (zh) 2020-01-09

Family

ID=64347705

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111658 WO2020006935A1 (zh) 2018-07-05 2018-10-24 动物声纹特征提取方法、装置及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108899037B (zh)
WO (1) WO2020006935A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750441A (zh) * 2021-04-02 2021-05-04 北京远鉴信息技术有限公司 一种声纹的识别方法、装置、电子设备及存储介质
CN113035203A (zh) * 2021-03-26 2021-06-25 合肥美菱物联科技有限公司 一种动态变换语音应答风格的控制方法
CN114049899A (zh) * 2021-11-23 2022-02-15 中国林业科学研究院资源信息研究所 一种声音识别方法、装置、电子设备及存储介质
CN116612769A (zh) * 2023-07-21 2023-08-18 志成信科(北京)科技有限公司 一种野生动物声音识别方法和装置
CN118173107A (zh) * 2024-05-15 2024-06-11 百鸟数据科技(北京)有限责任公司 基于多模态深度特征层级融合的鸟类声音质量分析方法

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887526B (zh) * 2019-01-04 2023-10-17 平安科技(深圳)有限公司 对母羊的生理状态检测方法、装置、设备及存储介质
CN110120224B (zh) * 2019-05-10 2023-01-20 平安科技(深圳)有限公司 鸟声识别模型的构建方法、装置、计算机设备及存储介质
CN110189757A (zh) * 2019-06-27 2019-08-30 电子科技大学 一种大熊猫个体识别方法、设备及计算机可读存储介质
CN110459225B (zh) * 2019-08-14 2022-03-22 南京邮电大学 一种基于cnn融合特征的说话人辨认系统
CN110517698B (zh) * 2019-09-05 2022-02-01 科大讯飞股份有限公司 一种声纹模型的确定方法、装置、设备及存储介质
CN110570871A (zh) * 2019-09-20 2019-12-13 平安科技(深圳)有限公司 一种基于TristouNet的声纹识别方法、装置及设备
CN110704646A (zh) * 2019-10-16 2020-01-17 支付宝(杭州)信息技术有限公司 一种豢养物档案建立方法及装置
CN111524525B (zh) * 2020-04-28 2023-06-16 平安科技(深圳)有限公司 原始语音的声纹识别方法、装置、设备及存储介质
CN111833884A (zh) * 2020-05-27 2020-10-27 北京三快在线科技有限公司 一种声纹特征提取方法、装置、电子设备及存储介质
CN111816166A (zh) * 2020-07-17 2020-10-23 字节跳动有限公司 声音识别方法、装置以及存储指令的计算机可读存储介质
CN114333767A (zh) * 2020-09-29 2022-04-12 华为技术有限公司 发声者语音抽取方法、装置、存储介质及电子设备
CN112259106B (zh) * 2020-10-20 2024-06-11 网易(杭州)网络有限公司 声纹识别方法、装置、存储介质及计算机设备
CN112420023B (zh) * 2020-11-26 2022-03-25 杭州音度人工智能有限公司 一种音乐侵权检测方法
CN112786059A (zh) * 2021-03-11 2021-05-11 合肥市清大创新研究院有限公司 一种基于人工智能的声纹特征提取方法及装置
CN113112183B (zh) * 2021-05-06 2024-03-19 国家市场监督管理总局信息中心 一种出入境危险货物风险评估的方法、系统和可读存储介质
CN113793615B (zh) * 2021-09-15 2024-02-27 北京百度网讯科技有限公司 说话人识别方法、模型训练方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106340309A (zh) * 2016-08-23 2017-01-18 南京大空翼信息技术有限公司 一种基于深度学习的狗叫情感识别方法及装置
CN106847293A (zh) * 2017-01-19 2017-06-13 内蒙古农业大学 设施养殖羊应激行为的声信号监测方法
US20180101748A1 (en) * 2016-10-10 2018-04-12 Gyrfalcon Technology Inc. Hierarchical Category Classification Scheme Using Multiple Sets of Fully-Connected Networks With A CNN Based Integrated Circuit As Feature Extractor
CN108052964A (zh) * 2017-12-05 2018-05-18 翔创科技(北京)有限公司 牲畜状态检测方法、计算机程序、存储介质及电子设备
CN108198562A (zh) * 2018-02-05 2018-06-22 中国农业大学 一种用于实时定位辨识动物舍内异常声音的方法及系统

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800316B (zh) * 2012-08-30 2014-04-30 重庆大学 基于神经网络的声纹识别系统的最优码本设计方法
CN104008751A (zh) * 2014-06-18 2014-08-27 周婷婷 一种基于bp神经网络的说话人识别方法
CN104835498B (zh) * 2015-05-25 2018-12-18 重庆大学 基于多类型组合特征参数的声纹识别方法
CN107610707B (zh) * 2016-12-15 2018-08-31 平安科技(深圳)有限公司 一种声纹识别方法及装置
CN106683680B (zh) * 2017-03-10 2022-03-25 百度在线网络技术(北京)有限公司 说话人识别方法及装置、计算机设备及计算机可读介质
CN106952649A (zh) * 2017-05-14 2017-07-14 北京工业大学 基于卷积神经网络和频谱图的说话人识别方法
CN107393526B (zh) * 2017-07-19 2024-01-02 腾讯科技(深圳)有限公司 语音静音检测方法、装置、计算机设备和存储介质
CN107464568B (zh) * 2017-09-25 2020-06-30 四川长虹电器股份有限公司 基于三维卷积神经网络文本无关的说话人识别方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106340309A (zh) * 2016-08-23 2017-01-18 南京大空翼信息技术有限公司 一种基于深度学习的狗叫情感识别方法及装置
US20180101748A1 (en) * 2016-10-10 2018-04-12 Gyrfalcon Technology Inc. Hierarchical Category Classification Scheme Using Multiple Sets of Fully-Connected Networks With A CNN Based Integrated Circuit As Feature Extractor
CN106847293A (zh) * 2017-01-19 2017-06-13 内蒙古农业大学 设施养殖羊应激行为的声信号监测方法
CN108052964A (zh) * 2017-12-05 2018-05-18 翔创科技(北京)有限公司 牲畜状态检测方法、计算机程序、存储介质及电子设备
CN108198562A (zh) * 2018-02-05 2018-06-22 中国农业大学 一种用于实时定位辨识动物舍内异常声音的方法及系统

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113035203A (zh) * 2021-03-26 2021-06-25 合肥美菱物联科技有限公司 一种动态变换语音应答风格的控制方法
CN112750441A (zh) * 2021-04-02 2021-05-04 北京远鉴信息技术有限公司 一种声纹的识别方法、装置、电子设备及存储介质
CN112750441B (zh) * 2021-04-02 2021-07-23 北京远鉴信息技术有限公司 一种声纹的识别方法、装置、电子设备及存储介质
CN114049899A (zh) * 2021-11-23 2022-02-15 中国林业科学研究院资源信息研究所 一种声音识别方法、装置、电子设备及存储介质
CN116612769A (zh) * 2023-07-21 2023-08-18 志成信科(北京)科技有限公司 一种野生动物声音识别方法和装置
CN116612769B (zh) * 2023-07-21 2023-09-12 志成信科(北京)科技有限公司 一种野生动物声音识别方法和装置
CN118173107A (zh) * 2024-05-15 2024-06-11 百鸟数据科技(北京)有限责任公司 基于多模态深度特征层级融合的鸟类声音质量分析方法

Also Published As

Publication number Publication date
CN108899037B (zh) 2024-01-26
CN108899037A (zh) 2018-11-27

Similar Documents

Publication Publication Date Title
WO2020006935A1 (zh) 动物声纹特征提取方法、装置及计算机可读存储介质
Czyzewski et al. An audio-visual corpus for multimodal automatic speech recognition
US10997764B2 (en) Method and apparatus for generating animation
CN107799126B (zh) 基于有监督机器学习的语音端点检测方法及装置
US10056073B2 (en) Method and apparatus to synthesize voice based on facial structures
US9672829B2 (en) Extracting and displaying key points of a video conference
US8589167B2 (en) Speaker liveness detection
CN111583944A (zh) 变声方法及装置
CN110808063A (zh) 一种语音处理方法、装置和用于处理语音的装置
CN111508511A (zh) 实时变声方法及装置
WO2019119279A1 (en) Method and apparatus for emotion recognition from speech
CN114121006A (zh) 虚拟角色的形象输出方法、装置、设备以及存储介质
CN110400565A (zh) 说话人识别方法、系统及计算机可读存储介质
CN110765868A (zh) 唇读模型的生成方法、装置、设备及存储介质
CN113223542A (zh) 音频的转换方法、装置、存储介质及电子设备
CN113921026A (zh) 语音增强方法和装置
CN109754816B (zh) 一种语音数据处理的方法及装置
CN109309790A (zh) 一种会议幻灯片智能记录方法及系统
CN112820300A (zh) 音频处理方法及装置、终端、存储介质
Nirjon et al. sMFCC: exploiting sparseness in speech for fast acoustic feature extraction on mobile devices--a feasibility study
JP2021117371A (ja) 情報処理装置、情報処理方法および情報処理プログラム
JP2008146268A (ja) 映像を用いた発音の推定方法
CN109102813B (zh) 声纹识别方法、装置、电子设备和存储介质
CN209692906U (zh) 一种会议幻灯片智能记录系统
CN114492579A (zh) 情绪识别方法、摄像装置、情绪识别装置及存储装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18925185

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18925185

Country of ref document: EP

Kind code of ref document: A1