WO2024012284A1 - Audio recognition method and apparatus, electronic device, and computer program product - Google Patents

Audio recognition method and apparatus, electronic device, and computer program product

Info

Publication number
WO2024012284A1
WO2024012284A1 (PCT/CN2023/105121)
Authority
WO
WIPO (PCT)
Prior art keywords
feature map
audio data
level
feature
audio
Prior art date
Application number
PCT/CN2023/105121
Other languages
English (en)
French (fr)
Inventor
杜行健
梁会东
朱碧磊
马泽君
Original Assignee
北京有竹居网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京有竹居网络技术有限公司
Publication of WO2024012284A1


Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/03 — Speech or voice analysis techniques characterised by the type of extracted parameters
    • G10L25/27 — Speech or voice analysis techniques characterised by the analysis technique
    • G10L25/30 — Speech or voice analysis techniques characterised by the analysis technique using neural networks

Definitions

  • Embodiments of the present disclosure relate to the field of data processing and, more specifically, to audio recognition methods, apparatuses, electronic devices, and computer program products.
  • Audio recognition technology based on deep learning has a wide range of application scenarios in many fields.
  • Current deep-learning-based audio recognition technology usually relies on convolution operations for feature extraction.
  • The extracted features carry rich high-level semantic information, but other information is lost in the process.
  • Embodiments of the present disclosure provide audio recognition solutions.
  • In a first aspect, an audio recognition method is provided. The method may include obtaining a target feature map of audio data based on a multi-level feature map of the audio data.
  • The method may further include determining a feature representation of the audio data based on the target feature map. Additionally, the method may further include determining a recognition result of the audio data based at least on the feature representation.
  • In a second aspect, an audio recognition apparatus is provided. The apparatus may include: a target feature map acquisition module configured to obtain a target feature map of audio data based on a multi-level feature map of the audio data; a feature representation determination module configured to determine a feature representation of the audio data based on the target feature map; and a recognition result determination module configured to determine a recognition result of the audio data based at least on the feature representation.
  • In a third aspect, an electronic device is provided, including: a processor; and a memory coupled to the processor, the memory having instructions stored therein that, when executed by the processor, cause the electronic device to perform actions including: obtaining a target feature map of audio data based on a multi-level feature map of the audio data; determining a feature representation of the audio data based on the target feature map; and determining a recognition result of the audio data based at least on the feature representation.
  • In a fourth aspect, a computer program product is provided, tangibly stored on a computer-readable medium and including machine-executable instructions that, when executed, cause a machine to perform any step of the method according to the first aspect.
  • FIG. 1 illustrates a schematic diagram of an example environment in which various embodiments of the present disclosure can be implemented;
  • FIG. 2 illustrates a schematic diagram of a detailed example environment for training and applying models in accordance with embodiments of the present disclosure;
  • FIG. 3 illustrates a flowchart of a process for audio recognition according to an embodiment of the present disclosure;
  • FIG. 4 illustrates a schematic diagram of an example environment for determining feature representations in accordance with an embodiment of the present disclosure;
  • FIG. 5 shows a schematic diagram of a feature map according to an embodiment of the present disclosure;
  • FIG. 6 shows a schematic diagram of a multi-level feature map according to an embodiment of the present disclosure;
  • FIG. 7 shows a schematic diagram of a model training architecture according to an embodiment of the present disclosure;
  • FIG. 8 shows a schematic diagram of an audio recognition apparatus according to an embodiment of the present disclosure; and
  • FIG. 9 shows a schematic block diagram of an example device that may be used to implement embodiments of the present disclosure.
  • In response to receiving an active request from a user, prompt information is sent to the user to clearly remind the user that the requested operation will require acquiring and using the user's personal information. The user can thus autonomously choose, based on the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that performs the operations of the technical solution of the present disclosure.
  • The prompt information may be sent to the user, for example, in the form of a pop-up window, in which the prompt information may be presented as text.
  • The pop-up window may also contain a selection control allowing the user to choose "agree" or "disagree" to provide personal information to the electronic device.
  • The term "data" may refer to real-time data to be recognized, for example, an audio segment extracted from a song, which can be recognized using a trained recognition model.
  • The term "data" may also refer to data containing annotation information, such as model training data.
  • The annotation information may be, for example, pre-annotated classification information.
  • The term "classification" generally refers to the recognition result of an audio segment.
  • For example, a recognition model can be used to determine whether a frame of an audio segment is a certain type of audio, such as a chorus.
  • The term "feature representation" generally refers to features extracted from data using at least part of a deep neural network.
  • To better perform the classification task of audio recognition, the training process of the traditional audio recognition model needs to be optimized.
  • In the training of a traditional audio recognition model, as the model deepens, the resolution of the extracted feature maps gradually decreases.
  • Although a feature map with reduced resolution carries higher-level semantic information, the sacrifice of resolution causes the feature map to lose precise position information.
  • The "position information" mentioned herein mainly refers to the position of an audio segment within a piece of audio, such as the start time and end time of the audio segment.
  • When extracting the target feature map used to determine the feature representation, this scheme uses not only the immediately preceding level's feature map but also the feature maps obtained at each level (or at multiple levels) of feature extraction, so that the finally obtained target feature map contains rich semantic information while also retaining high-resolution position information, enabling the above issues and/or other potential issues to be addressed.
  • In addition, subsequent embodiments of the present disclosure also provide a solution for enhancing the above-mentioned feature representation determined from the target feature map.
  • FIG. 1 illustrates a block diagram of an example system 100 for audio recognition in accordance with an embodiment of the present disclosure. It should be understood that the system 100 shown in FIG. 1 is only an example in which embodiments of the present disclosure can be implemented, and is not intended to limit the scope of the present disclosure. Embodiments of the present disclosure are equally applicable to other systems or architectures.
  • As shown in FIG. 1, system 100 may include a computing device 120.
  • Computing device 120 may be configured to receive audio data 110 and output a recognition result 130 related to the audio data 110.
  • In some embodiments, the audio data 110 is a spectrogram of time-domain audio data under a constant-Q transform (CQT) or another transform.
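A minimal sketch of producing such a constant-Q transform spectrogram input, assuming the librosa library; the file name, hop length, and bin count below are illustrative, not values from the patent:

```python
# Compute a CQT spectrogram from a time-domain audio file as the model input.
import librosa
import numpy as np

y, sr = librosa.load("song.wav", sr=22050)          # hypothetical input file
cqt = librosa.cqt(y, sr=sr, hop_length=512, n_bins=84)
spectrogram = librosa.amplitude_to_db(np.abs(cqt))  # shape: (n_bins, n_frames)
```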
  • Computing device 120 may obtain the audio data 110.
  • In some embodiments, the audio data 110 may be an audio segment to be recognized.
  • In other embodiments, the audio data 110 may include multiple training samples for training a deep neural network or machine learning model (also referred to as a target model). The audio data 110 may have corresponding annotation information, which can be generated by manual annotation, automatic annotation by a model, or other appropriate methods.
  • the target model may be designed to perform audio recognition tasks.
  • Examples of target models include, but are not limited to, various types of deep neural networks (DNN), convolutional neural networks (CNN), support vector machines (SVM), decision trees, random forest models, and so on.
  • the target model may also be referred to as the "recognition model.”
  • the terms “recognition model”, “neural network”, “learning model”, “learning network”, “model” and “network” are used interchangeably.
  • In some embodiments, computing device 120 may include, but is not limited to, a personal computer, a server computer, a handheld or laptop device, a mobile device (such as a mobile phone, a personal digital assistant (PDA), or a media player), a consumer electronics product, a minicomputer, a mainframe computer, a cloud computing resource, and so on.
  • In some embodiments, the recognition result 130 may be classification information determined from the audio data 110, for example, whether the audio data 110, as an audio segment of a song, belongs to the chorus class.
  • Alternatively or additionally, the recognition result 130 may be a prediction result that is revised or updated during model training (this result is compared with the annotated ground-truth result in a subsequent step to determine the loss function).
  • system 100 may also include additional devices and/or units not shown.
  • For example, the computing device 120 of the system 100 may further include a storage unit (not shown) for storing pre-input hyperparameters and the trained model.
  • Training and use of the model in computing device 120 is described below with reference to FIG. 2 .
  • FIG. 2 shows a schematic diagram of a detailed example environment 200 in accordance with embodiments of the present disclosure. Similar to FIG. 1, the example environment 200 may include a computing device 220, audio data 210 input to the computing device 220, and a recognition result 230 output from the computing device 220. The difference is that the example environment 200 may generally include a model training system 260 and a model application system 270. As examples, the model training system 260 and/or the model application system 270 may be implemented in the computing device 120 shown in FIG. 1 or the computing device 220 shown in FIG. 2. It should be understood that the structure and functionality of the example environment 200 are described for illustrative purposes only and are not intended to limit the scope of the subject matter described herein, which may be implemented in different structures and/or functions.
  • As mentioned above, the process of processing the input audio data 110 to determine a recognition result 230, such as the classification information of an audio segment, can be divided into two stages: a model training stage and a model application stage.
  • As an example, in the model training stage, the model training system 260 may utilize the training data set 250 to train a recognition model 240 for performing the corresponding function.
  • It should be understood that the training data set 250 may be a combination of multiple sample data (as input to the recognition model 240) and corresponding annotated supervision information (also referred to as "labels" or "ground-truth results").
  • In the model application stage, the model application system 270 may receive the trained recognition model 240.
  • Thus, the recognition model 240 loaded into the computing device 220 of the model application system 270 can determine the recognition result 230 based on the audio data 210.
  • recognition model 240 may be constructed as a learning network.
  • the learning network may include multiple networks, where each network may be a multi-layer neural network, which may be composed of a large number of neurons. Through the training process, the corresponding parameters of each neuron in the network can be determined. The parameters of the neurons in these networks are collectively referred to as the parameters of the recognition model 240 .
  • The training process of the recognition model 240 may be performed in an iterative manner until at least some of the parameters of the recognition model 240 converge or until a predetermined number of iterations is reached, thereby obtaining the final model parameters. A minimal sketch of such a loop follows.
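A minimal sketch of such an iterative training loop, assuming PyTorch; the model, data loader, and stopping thresholds are illustrative assumptions:

```python
# Iterate until the loss stops changing (convergence) or an iteration budget is hit.
import torch
import torch.nn.functional as F

def train(model, loader, max_iters=10_000, tol=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    prev_loss, iters = float("inf"), 0
    while iters < max_iters:
        for x, y in loader:
            loss = F.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            iters += 1
            if abs(prev_loss - loss.item()) < tol or iters >= max_iters:
                return model  # parameters converged or iteration budget reached
            prev_loss = loss.item()
    return model
```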
  • Figure 3 illustrates a flow diagram of a process 300 for audio recognition in accordance with an embodiment of the present disclosure.
  • process 300 may be implemented in computing device 120 in FIG. 1 and computing device 220 in FIG. 2 .
  • a process 300 of audio recognition according to an embodiment of the present disclosure is now described with reference to FIG. 3 .
  • the specific examples mentioned in the following description are illustrative and are not intended to limit the scope of the present disclosure.
  • At step 302, the computing device 120 may obtain a target feature map of the audio data 110 based on the multi-level feature map of the audio data 110. Thereafter, at step 304, the computing device 120 may determine a feature representation of the audio data 110 based on the target feature map.
  • FIG. 4 illustrates a schematic diagram of an example environment 400 for determining feature representations in accordance with an embodiment of the present disclosure.
  • As shown in FIG. 4, the example environment 400 includes audio data 410, a feature extraction network 420, and a feature representation 430.
  • It should be understood that the audio data 410 may be the audio data 110 or a segment of the audio data 110.
  • After the audio data 410 is input into the feature extraction network 420, the feature extraction network 420 performs a feature extraction operation on the audio data 410.
  • the feature extraction network 420 may be a deep neural network or a multi-layer feature extractor as shown in FIG. 4 .
  • As shown, the feature extraction network 420 may include at least a first-level extractor 421 and a second-level extractor 422. It should be understood that the feature extraction network 420 may also include further levels of extractors.
  • the computing device 120 may obtain the multi-level feature map of the audio data 410 using, for example, the feature extraction network 420 including at least the above-mentioned first-level extractor 421 and the second-level extractor 422.
  • As an example, the first-level extractor 421 and the second-level extractor 422 may be convolutional neural networks, so the first-level extractor 421 may perform a convolution operation on the audio data 410 to obtain the first-level feature map, and the second-level extractor 422 may perform a convolution operation on the first-level feature map to obtain the second-level feature map.
  • It should be noted that the convolution operation is essentially a downsampling process. Since each next-level feature map in the multi-level feature map is extracted from the feature map one level above it, the resolution of the second-level feature map is lower than the resolution of the first-level feature map. A sketch of such a two-level extractor follows.
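A minimal sketch of such a two-level extractor, assuming PyTorch; the channel counts and kernel sizes are illustrative, and each stride-2 convolution halves the resolution:

```python
# Two-level convolutional extractor: each level downsamples its input.
import torch
import torch.nn as nn

class TwoLevelExtractor(nn.Module):
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        # first-level extractor (cf. 421): spectrogram -> first-level feature map
        self.level1 = nn.Sequential(nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU())
        # second-level extractor (cf. 422): first-level -> second-level feature map
        self.level2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):                # x: (batch, 1, freq, time)
        f1 = self.level1(x)              # half resolution
        f2 = self.level2(f1)             # quarter resolution
        return f1, f2                    # the multi-level feature maps
```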
  • Thereafter, the computing device 120 can perform feature reconstruction based at least on the lower-level feature map and the upper-level feature map to determine the target feature map, and can then obtain the feature vector of the audio data 410 in an abstract space, that is, the feature representation 430.
  • In this way, feature reconstruction raises the resolution of the lower-level feature map to the resolution of the upper-level feature map, and because the reconstruction is based at least on both the lower-level and upper-level feature maps, the result contains both the rich semantic information extracted into the lower-level feature map and the high resolution of the upper-level feature map, making it easier to locate specific types of audio segments.
  • Figure 5 shows a schematic diagram of a feature map 510 according to an embodiment of the present disclosure.
  • The feature map 510 may be a feature data group determined based on the audio data 410, where A…I are specific values of the feature data.
  • As an example, the feature map 510 may be a 100×100 matrix.
  • After the first-level extractor 421 performs a convolution operation on the feature map 510, the feature map 510 is downsampled into, for example, a 50×50 matrix, and after the second-level extractor 422 further performs a convolution operation, the feature map 510 is downsampled into, for example, a 25×25 matrix.
  • For the above feature reconstruction process, the feature map 510 as a 25×25 matrix may be upsampled into, for example, a 50×50 matrix, and further upsampled into, for example, a 100×100 matrix. It should be understood that the feature reconstruction process does not end here; a toy illustration of these resolution changes follows.
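As a toy illustration of these resolution changes, assuming PyTorch, average pooling stands in for the stride-2 convolutions of the extractors and nearest-neighbor interpolation stands in for the upsampling:

```python
# 100x100 -> 50x50 -> 25x25 by downsampling, then back up during reconstruction.
import torch
import torch.nn.functional as F

fmap = torch.randn(1, 1, 100, 100)
down1 = F.avg_pool2d(fmap, 2)              # 50x50, stands in for the level-1 conv
down2 = F.avg_pool2d(down1, 2)             # 25x25, stands in for the level-2 conv
up1 = F.interpolate(down2, scale_factor=2) # upsampled back to 50x50
up2 = F.interpolate(up1, scale_factor=2)   # upsampled back to 100x100
print(down2.shape, up2.shape)              # (1, 1, 25, 25) and (1, 1, 100, 100)
```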
  • To describe the feature extraction and feature reconstruction process in more detail, the architecture for determining the target feature map is now described with reference to FIG. 6.
  • FIG. 6 shows a schematic diagram of a multi-level feature map 600 according to an embodiment of the present disclosure.
  • As shown in FIG. 6, the multi-level feature map 600 includes a first-level feature map 601, a second-level feature map 602, a third-level feature map 603, a feature map 604 generated based on the third-level feature map 603, a feature map 605 generated based on the feature map 604 and the second-level feature map 602, and a feature map 606 generated based on the feature map 605 and the first-level feature map 601.
  • In FIG. 6, the first-level feature map 601 may be extracted from the audio data 410 by the first-level extractor 421 shown in FIG. 4, the second-level feature map 602 may be extracted from the first-level feature map 601 by the second-level extractor 422 shown in FIG. 4, and the third-level feature map 603 may in turn be extracted from the second-level feature map 602. It should be understood that the multi-level feature map 600 shown in FIG. 6 can have more levels, and the number of levels is related to the network structure of the model.
  • When performing feature reconstruction, the computing device 120 can copy the values in the third-level feature map 603 directly into the feature map 604. Thereafter, the computing device 120 may upsample the feature map 604, that is, expand the feature map 604 into a backup feature map 605. In other words, the computing device 120 may copy the values of the upsampled feature map 604 into the feature map 605, apply an operation such as a mean (or another operation) between the values of the second-level feature map 602, which is at the same level as the feature map 605, and the values in the feature map 605, and store the computed results in the feature map 605.
  • Similarly, the computing device 120 may further upsample the feature map 605, that is, expand the feature map 605 into a backup feature map 606, apply an operation such as a mean (or another operation) between the values of the first-level feature map 601, which is at the same level as the feature map 606, and the values in the feature map 606, and store the computed results in the feature map 606.
  • The feature map 606 at this point is the target feature map. In this way, the target feature map contains both rich semantic information and high-resolution position information, thereby improving model performance. A sketch of this reconstruction walk follows.
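The following is a minimal sketch, assuming PyTorch, of the reconstruction walk just described for feature maps 603 → 604 → 605 → 606: copy the deepest map, then repeatedly upsample and merge with the same-level extracted map by an elementwise mean, one of the operations the text mentions. Equal channel counts across levels are assumed for simplicity.

```python
# Top-down feature reconstruction over a three-level feature pyramid.
import torch
import torch.nn.functional as F

def reconstruct_target(f1, f2, f3):
    """f1, f2, f3: first/second/third-level maps, e.g. 100/50/25 resolution."""
    m4 = f3                                          # values copied from 603 into 604
    m5 = F.interpolate(m4, size=f2.shape[-2:])       # expand 604 into backup map 605
    m5 = (m5 + f2) / 2                               # mean with same-level map 602
    m6 = F.interpolate(m5, size=f1.shape[-2:])       # expand 605 into backup map 606
    m6 = (m6 + f1) / 2                               # mean with same-level map 601
    return m6                                        # 606: the target feature map
```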
  • Returning to FIG. 3, at step 306, the computing device 120 may determine the recognition result 130 of the audio data 110 based at least on the feature representation.
  • In some embodiments, the audio data 110 is an audio segment of a song, and to determine the recognition result 130 of the audio data 110, the computing device 120 may determine whether the audio segment belongs to the chorus class. In this way, the chorus part of a song can be automatically identified. It should be understood that the present disclosure is not limited to identifying the chorus part of a song; it can also identify other parts of a song, such as verses, transitional phrases, and bridges, and can also identify classifiable parts of other audio data.
  • In this way, the feature data determined through the above embodiments contains richer information and, compared with traditional audio recognition models, has more accurate position information, thereby improving the performance of the model.
  • The above embodiments mainly involve the application of the recognition model 240.
  • The training process of the recognition model 240 is described in detail below.
  • During model training, the audio data 110 may be training data or a training data set.
  • After the model being trained determines the recognition result 130, the computing device 120 may further determine the loss function value of the trained recognition model based on the recognition result 130 and the pre-annotated ground-truth result of the training data, in order to update the parameters of the recognition model.
  • Figure 7 shows a schematic diagram of a model training architecture 700 according to an embodiment of the present disclosure.
  • As shown in FIG. 7, audio data 701 may be input to the extraction module 710 to determine a feature representation of the audio data 701. Thereafter, the determined feature representation is input to the prediction module 720 to determine a prediction result for the feature representation of the audio data 701.
  • The loss determination module 730 may then determine a loss function value 703 for the model based on the determined prediction result and the ground-truth label 702 of the audio data 701.
  • In some embodiments, to optimize (generalize) the performance of the model, the computing device 120 may perform data augmentation on the feature representation determined by the extraction module 710.
  • As an example, the computing device 120 may use the augmentation module 740 in FIG. 7 to determine the distribution of the feature representations corresponding to audio segments that belong to the chorus class or do not belong to the chorus class, and then determine feature representations sampled from this distribution as additional feature representations.
  • In some embodiments, the computing device 120 may sample a predetermined number of feature representations from the distribution as additional feature representations. The computing device 120 can then input the one feature representation determined by the extraction module 710 together with the multiple additional feature representations obtained through data augmentation into the fully connected layer of the recognition model to determine the recognition or prediction result. In this way, the present disclosure can augment more training data at the level of feature vectors, thereby increasing the amount and diversity of the training data. A sketch of this sampling step follows.
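As one illustration of this augmentation step, the sketch below, assuming PyTorch, samples a predetermined number of additional representations from a Gaussian centered on the extracted feature with a scaled class covariance. The Gaussian form and the λ scaling follow the hyperparameter discussion below and are assumptions about the procedure, not the patent's verbatim formula:

```python
# Feature-level augmentation: sample m extra representations around a feature.
import torch

def augment(feature, class_cov, lam=0.5, m=8):
    """feature: (d,) extracted representation; class_cov: (d, d) covariance
    of the feature's class (assumed positive definite)."""
    dist = torch.distributions.MultivariateNormal(feature, lam * class_cov)
    extra = dist.sample((m,))                                # m additional representations
    return torch.cat([feature.unsqueeze(0), extra], dim=0)   # (m + 1, d) FC-layer inputs
```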
  • λ is a hyperparameter of the model and can be set, for example, such that λ > 0.
  • It should be understood that when the number of sampled feature representations is large, the computational cost of model training increases significantly. For this reason, the computing device 120 may determine an upper limit of the loss function of the recognition model by setting the number of sampled feature representations to positive infinity, thereby determining the loss function value.
  • Specifically, if the size of the data set is N and the number of sampled feature representations is M, the number of samples of the augmented training data is N × (M + 1).
  • In some embodiments, a cross-entropy loss function may be used to train the model.
  • For the fully connected layer, the weight corresponding to class c can be expressed as w_c, and the corresponding bias b can be expressed as b_c.
  • In this way, the loss function can be determined without spending the large computational resources required by formula (1), so that the loss function value can be obtained quickly, thereby optimizing model training. A sketch of the resulting closed-form loss follows.
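The closed-form bound described above can be computed directly from the classifier weights and per-class covariances. The sketch below, assuming PyTorch, follows the standard implicit-augmentation derivation (Jensen's inequality plus the Gaussian moment-generating function); the adjusted-logit form is an assumption about the patent's exact formula, and all names are illustrative:

```python
# Upper bound of the cross-entropy loss when the number of sampled
# feature representations goes to positive infinity.
import torch
import torch.nn.functional as F

def upper_bound_loss(a, y, W, b, covs, lam):
    """a: (N, d) features; y: (N,) labels; W: (C, d) FC weights;
    b: (C,) biases; covs: (C, d, d) per-class covariance matrices."""
    logits = a @ W.t() + b                       # standard logits w_c^T a_i + b_c
    delta = W.unsqueeze(0) - W[y].unsqueeze(1)   # (N, C, d): w_c - w_{y_i}
    sigma = covs[y]                              # (N, d, d): covariance of class y_i
    quad = torch.einsum("ncd,nde,nce->nc", delta, sigma, delta)
    return F.cross_entropy(logits + 0.5 * lam * quad, y)
```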
  • FIG. 8 shows a schematic diagram of an audio recognition device 800 according to an embodiment of the present disclosure.
  • the audio recognition device 800 may at least include a target feature map acquisition module 802, a feature representation determination module 804, and a recognition result determination module 806.
  • The target feature map acquisition module 802 may acquire the target feature map of the audio data based on the multi-level feature map of the audio data.
  • The feature representation determination module 804 may further determine the feature representation of the audio data based on the acquired target feature map.
  • In addition, the recognition result determination module 806 may further determine the recognition result of the audio data based at least on the determined feature representation.
  • In some embodiments, the target feature map acquisition module 802 may include a multi-level feature map acquisition sub-module for acquiring the multi-level feature map of the audio data. It should be understood that each next-level feature map in the multi-level feature map is extracted from the feature map one level above it.
  • The multi-level feature map acquisition sub-module may include a first-level extractor, a second-level extractor, and so on.
  • The first-level extractor may perform a convolution operation on the audio data to obtain the first-level feature map, and the second-level extractor may perform a convolution operation on the first-level feature map to obtain the second-level feature map.
  • In addition, the target feature map acquisition module 802 may also include a target feature map determination sub-module for performing feature reconstruction based at least on the lower-level feature map and the upper-level feature map to determine the target feature map.
  • In some embodiments, when performing feature reconstruction, the target feature map determination sub-module can expand the second-level feature map into a first-level backup feature map and determine the target feature map based on the first-level backup feature map and the first-level feature map.
  • In some embodiments, the audio data may be training data, and the audio recognition apparatus 800 may further include a loss function value determination sub-module for determining, based on the recognition result and the pre-annotated ground-truth result of the training data, the loss function value of the trained recognition model in order to update the parameters of the recognition model.
  • In some embodiments, the audio recognition apparatus 800 may further include: a distribution determination module for determining the distribution of feature representations corresponding to audio segments that belong to the chorus class or do not belong to the chorus class; and an additional feature representation determination module for determining feature representations sampled from this distribution as additional feature representations.
  • In some embodiments, the additional feature representation determination module may be configured to sample a predetermined number of feature representations from the distribution as the additional feature representations.
  • In some embodiments, the loss function value determination sub-module may be configured to determine an upper limit of the loss function of the recognition model by setting the predetermined number to positive infinity, in order to determine the loss function value.
  • In some embodiments, the recognition result determination module 806 may be configured to input the feature representation and the additional feature representations into a fully connected layer of the recognition model to determine the recognition result.
  • In some embodiments, the audio data is an audio segment of a song, and the recognition result determination module 806 may include a classification module for determining whether the audio segment belongs to the chorus class or does not belong to the chorus class.
  • FIG. 9 shows a schematic block diagram of an example device 900 that may be used to implement embodiments of the present disclosure.
  • computing device 120 as shown in FIG. 1 may be implemented by device 900.
  • The device 900 includes a central processing unit (CPU) 901 that can operate according to computer program instructions stored in a read-only memory (ROM) 902 or loaded from a storage unit 908 into a random access memory (RAM) 903.
  • The RAM 903 can also store various programs and data required for the operation of the device 900.
  • the CPU 901, ROM 902, and RAM 903 are connected to each other through a bus 904.
  • An input/output (I/O) interface 905 is also connected to bus 904.
  • Multiple components in the device 900 are connected to the I/O interface 905, including: an input unit 906, such as a keyboard or a mouse; an output unit 907, such as various types of displays and speakers; a storage unit 908, such as a magnetic disk or an optical disk; and a communication unit 909, such as a network card, a modem, or a wireless communication transceiver.
  • The communication unit 909 allows the device 900 to exchange information/data with other devices through computer networks such as the Internet and/or various telecommunication networks.
  • The central processing unit 901 may be implemented by one or more processing circuits and may be configured to perform the various processes described above, such as the process 300.
  • For example, in some embodiments, the process 300 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908.
  • In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909.
  • When the computer program is loaded into the RAM 903 and executed by the CPU 901, one or more steps of the process 300 described above may be performed.
  • the performance of the trained model can be significantly improved.
  • As an example, multiple test data sets were used to evaluate the performance of the trained model against several traditional models.
  • On the first test set, the AUC (area under the curve) scores are: 0.526 for the CNMF (convolutional non-negative matrix factorization) model, 0.533 for the SCluster model, 0.804 for the Highlighter model, 0.819 for the Multi2021 model, 0.842 for the DeepChorus model, and 0.906 for the trained model of the present disclosure.
  • On the second test set: CNMF 0.543, SCluster 0.545, Highlighter 0.703, Multi2021 0.675, DeepChorus 0.780, and the trained model of the present disclosure 0.887.
  • On the third test set: CNMF 0.478, SCluster 0.551, Highlighter 0.671, Multi2021 0.633, DeepChorus 0.765, and the trained model of the present disclosure 0.831.
  • On the fourth test set: CNMF 0.488, SCluster 0.568, Highlighter 0.553, DeepChorus 0.811, and the trained model of the present disclosure 0.872.
  • In addition, the F-score of the model of the present disclosure is also higher than that of the traditional models. It can be seen that an audio recognition model trained according to the embodiments of the present disclosure has significantly improved performance compared with traditional models.
  • the present disclosure may be a system, method, and/or computer program product.
  • a computer program product may include a computer-readable storage medium having thereon computer-readable program instructions for performing various aspects of the present disclosure.
  • Computer-readable storage media may be tangible devices that can retain and store instructions for use by an instruction execution device.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card with instructions stored on it, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to various computing/processing devices, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
  • Computer program instructions for performing operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or instructions written in one or more programming languages.
  • The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, produce an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other equipment to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operating steps to be performed on the computer, other programmable apparatus, or other equipment to produce a computer-implemented process, so that the instructions executed on the computer, other programmable apparatus, or other equipment implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions that contains one or more executable instructions for implementing the specified logical functions.
  • In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.
  • Example 1 An audio recognition method, including: obtaining a target feature map of audio data based on a multi-level feature map of the audio data; determining a feature representation of the audio data based on the target feature map; and determining a recognition result of the audio data based at least on the feature representation.
  • Example 2 The method according to Example 1, wherein obtaining the target feature map includes: obtaining the multi-level feature map of the audio data, where a next-level feature map in the multi-level feature map is extracted from the feature map one level above it; and performing feature reconstruction based at least on the lower-level feature map and the upper-level feature map to determine the target feature map.
  • Example 3 The method according to Example 2, wherein the multi-level feature map includes at least: a first-level feature map extracted from the audio data; and a second-level feature map extracted from the first-level feature map.
  • Example 4 The method according to Example 3, wherein the feature reconstruction at least includes: expanding the second-level feature map into a first-level backup feature map; and determining the target feature map based on the first-level backup feature map and the first-level feature map.
  • Example 5 The method according to Example 1, wherein the audio data is training data, and the method further includes: determining, based on the recognition result and a pre-annotated ground-truth result of the training data, the loss function value of the trained recognition model in order to update the parameters of the recognition model.
  • Example 6 The method according to Example 5, further including: determining a distribution of feature representations corresponding to audio segments that belong to the chorus class or do not belong to the chorus class; and determining feature representations sampled from the distribution as additional feature representations.
  • Example 7 The method according to Example 6, wherein determining the sampled feature representations as the additional feature representations includes sampling a predetermined number of feature representations from the distribution as the additional feature representations.
  • Example 8 The method according to Example 7, wherein determining the loss function value includes determining an upper limit of the loss function of the recognition model by setting the predetermined number to positive infinity, in order to determine the loss function value.
  • Example 9 The method according to Example 6, wherein determining the recognition result based at least on the feature representation includes inputting the feature representation and the additional feature representations into a fully connected layer of the recognition model to determine the recognition result.
  • Example 10 The method according to Example 1, wherein the audio data is an audio segment of a song, and determining the recognition result of the audio data includes: determining that the audio segment belongs to a chorus class; or determining that the audio segment does not belong to the chorus class.
  • Example 11 An audio recognition apparatus, including: a target feature map acquisition module configured to obtain a target feature map of audio data based on a multi-level feature map of the audio data; a feature representation determination module configured to determine a feature representation of the audio data based on the target feature map; and a recognition result determination module configured to determine a recognition result of the audio data based at least on the feature representation.
  • Example 12 The audio recognition apparatus according to Example 11, wherein the target feature map acquisition module includes: a multi-level feature map acquisition sub-module configured to acquire the multi-level feature map of the audio data, where a next-level feature map in the multi-level feature map is extracted from the feature map one level above it; and a target feature map determination sub-module configured to perform feature reconstruction based at least on the lower-level feature map and the upper-level feature map to determine the target feature map.
  • Example 13 The audio recognition apparatus according to Example 12, wherein the multi-level feature map includes at least: a first-level feature map extracted from the audio data; and a second-level feature map extracted from the first-level feature map.
  • Example 14 The audio recognition apparatus according to Example 13, wherein the target feature map acquisition module may be configured to: expand the second-level feature map into a first-level backup feature map during feature reconstruction; and determine the target feature map based on the first-level backup feature map and the first-level feature map.
  • Example 15 The audio recognition apparatus according to Example 11, wherein the audio data is training data, and the audio recognition apparatus further includes: a loss function value determination sub-module configured to determine, based on the recognition result and the pre-annotated ground-truth result of the training data, the loss function value of the trained recognition model in order to update the parameters of the recognition model.
  • Example 16 The audio recognition apparatus according to Example 15, further including: a distribution determination module configured to determine the distribution of feature representations corresponding to audio segments that belong to the chorus class or do not belong to the chorus class; and an additional feature representation determination module configured to determine feature representations sampled from the distribution as additional feature representations.
  • Example 17 The audio recognition apparatus according to Example 16, wherein the additional feature representation determination module is configured to sample a predetermined number of feature representations from the distribution as the additional feature representations.
  • Example 18 The audio recognition apparatus according to Example 17, wherein the loss function value determination sub-module is configured to determine an upper limit of the loss function of the recognition model by setting the predetermined number to positive infinity, in order to determine the loss function value.
  • Example 19 The audio recognition apparatus according to Example 16, wherein the recognition result determination module is configured to input the feature representation and the additional feature representations into a fully connected layer of the recognition model to determine the recognition result.
  • Example 20 The audio recognition apparatus according to Example 11, wherein the audio data is an audio segment of a song, and the recognition result determination module includes: a classification module configured to determine whether the audio segment belongs to the chorus class or does not belong to the chorus class.
  • Example 21 An electronic device, including: a processor; and a memory coupled to the processor, the memory having instructions stored therein that, when executed by the processor, cause the electronic device to perform actions including: obtaining a target feature map of audio data based on a multi-level feature map of the audio data; determining a feature representation of the audio data based on the target feature map; and determining a recognition result of the audio data based at least on the feature representation.
  • Example 22 The device according to Example 21, wherein obtaining the target feature map includes: obtaining the multi-level feature map of the audio data, where a next-level feature map in the multi-level feature map is extracted from the feature map one level above it; and performing feature reconstruction based at least on the lower-level feature map and the upper-level feature map to determine the target feature map.
  • Example 23 The device according to Example 22, wherein the multi-level feature map includes at least: a first-level feature map extracted from the audio data; and a second-level feature map extracted from the first-level feature map.
  • Example 24 The device according to Example 23, wherein the feature reconstruction at least includes: expanding the second-level feature map into a first-level backup feature map; and determining the target feature map based on the first-level backup feature map and the first-level feature map.
  • Example 25 The device according to Example 21, wherein the audio data is training data, and the actions further include: determining, based on the recognition result and the pre-annotated ground-truth result of the training data, the loss function value of the trained recognition model in order to update the parameters of the recognition model.
  • Example 26 The device according to Example 25, wherein the actions further include: determining a distribution of feature representations corresponding to audio segments that belong to the chorus class or do not belong to the chorus class; and determining feature representations sampled from the distribution as additional feature representations.
  • Example 27 The device according to Example 26, wherein determining the sampled feature representations as the additional feature representations includes sampling a predetermined number of feature representations from the distribution as the additional feature representations.
  • Example 28 The device according to Example 27, wherein determining the loss function value includes determining an upper limit of the loss function of the recognition model by setting the predetermined number to positive infinity, in order to determine the loss function value.
  • Example 29 The device according to Example 26, wherein determining the recognition result based at least on the feature representation includes inputting the feature representation and the additional feature representations into a fully connected layer of the recognition model to determine the recognition result.
  • Example 30 The device according to Example 21, wherein the audio data is an audio segment of a song, and determining the recognition result of the audio data includes: determining that the audio segment belongs to a chorus class; or determining that the audio segment does not belong to the chorus class.
  • Example 31 A computer program product tangibly stored on a computer-readable medium and including machine-executable instructions that, when executed, cause a machine to perform the method according to any one of Examples 1 to 10.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present disclosure provide an audio recognition method and apparatus, an electronic device, and a computer program product. The method may include obtaining a target feature map of audio data based on a multi-level feature map of the audio data. The method may further include determining a feature representation of the audio data based on the target feature map. In addition, the method may further include determining a recognition result of the audio data based at least on the feature representation. By implementing the technical solution of the present disclosure, the determined feature representation has high-resolution position information, thereby optimizing model performance and improving user experience.

Description

Audio recognition method and apparatus, electronic device, and computer program product
Cross-Reference to Related Application
This application claims priority to Chinese invention patent application No. 202210828275.9, entitled "Audio recognition method and apparatus, electronic device and computer program product" and filed on July 13, 2022, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present disclosure relate to the field of data processing and, more specifically, to audio recognition methods, apparatuses, electronic devices, and computer program products.
Background
Technology for intelligently recognizing audio data such as songs and human voices is key to research in many fields. As a result, deep-learning-based audio recognition technology has a wide range of application scenarios in many fields. For example, current deep-learning-based audio recognition technology usually relies on operations such as convolution for feature extraction; the extracted features carry rich high-level semantic information but also discard other information. An audio recognition technique whose extracted features contain more information is therefore urgently needed.
Summary
Embodiments of the present disclosure provide audio recognition solutions.
In a first aspect of the present disclosure, an audio recognition method is provided. The method may include obtaining a target feature map of audio data based on a multi-level feature map of the audio data. The method may further include determining a feature representation of the audio data based on the target feature map. In addition, the method may further include determining a recognition result of the audio data based at least on the feature representation.
In a second aspect of the present disclosure, an audio recognition apparatus is provided. The audio recognition apparatus may include: a target feature map acquisition module configured to obtain a target feature map of audio data based on a multi-level feature map of the audio data; a feature representation determination module configured to determine a feature representation of the audio data based on the target feature map; and a recognition result determination module configured to determine a recognition result of the audio data based at least on the feature representation.
In a third aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory coupled to the processor, the memory having instructions stored therein that, when executed by the processor, cause the electronic device to perform actions including: obtaining a target feature map of audio data based on a multi-level feature map of the audio data; determining a feature representation of the audio data based on the target feature map; and determining a recognition result of the audio data based at least on the feature representation.
In a fourth aspect of the present disclosure, a computer program product is provided. The computer program product is tangibly stored on a computer-readable medium and includes machine-executable instructions that, when executed, cause a machine to perform any step of the method according to the first aspect.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described in the Detailed Description below. This Summary is not intended to identify key or essential features of the present disclosure, nor is it intended to limit the scope of the present disclosure.
Brief Description of the Drawings
The above and other objectives, features, and advantages of the present disclosure will become more apparent from the following more detailed description of exemplary embodiments of the present disclosure taken in conjunction with the accompanying drawings, in which the same or similar reference numerals generally represent the same or similar components. In the drawings:
FIG. 1 shows a schematic diagram of an example environment in which various embodiments of the present disclosure can be implemented;
FIG. 2 shows a schematic diagram of a detailed example environment for training and applying a model according to embodiments of the present disclosure;
FIG. 3 shows a flowchart of a process for audio recognition according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of an example environment for determining a feature representation according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a feature map according to an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a multi-level feature map according to an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a model training architecture according to an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of an audio recognition apparatus according to an embodiment of the present disclosure; and
FIG. 9 shows a schematic block diagram of an example device that may be used to implement embodiments of the present disclosure.
Detailed Description
It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to clearly remind the user that the operation requested will require acquiring and using the user's personal information. The user can thus autonomously choose, based on the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in the form of a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may also carry a selection control allowing the user to choose "agree" or "disagree" to provide personal information to the electronic device.
It should be understood that the above process of notifying the user and obtaining user authorization is only illustrative and does not limit the implementation of the present disclosure; other methods that satisfy relevant laws and regulations may also be applied to the implementation of the present disclosure.
It should be understood that the data involved in this technical solution (including but not limited to the data itself and the acquisition or use of the data) should comply with the requirements of the corresponding laws, regulations, and relevant provisions.
The principles of the present disclosure will be described below with reference to several example embodiments shown in the accompanying drawings.
In the description of embodiments of the present disclosure, the term "include" and similar terms should be understood as open-ended inclusion, that is, "including but not limited to". The term "based on" should be understood as "based at least in part on". The terms "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and the like may refer to different or identical objects. Other explicit and implicit definitions may also be included below.
In embodiments of the present disclosure, the term "data" may refer to real-time data to be recognized, for example, an audio segment extracted from a song, which can be recognized using a trained recognition model. In addition, the term "data" may also refer to data containing annotation information, such as model training data. The annotation information may be, for example, pre-annotated classification information. The term "classification" generally refers to the recognition result of an audio segment; for example, a recognition model can be used to determine whether a frame of an audio segment is a certain type of audio, such as a chorus. The term "feature representation" generally refers to features extracted from data using at least part of a deep neural network.
As described above, with the continuous development of computer technology, deep neural networks are widely used in all aspects of people's lives. To better perform the classification task of audio recognition, the training process of traditional audio recognition models needs to be optimized. In the training of a traditional audio recognition model, as the model deepens, the resolution of the extracted feature maps gradually decreases. Although a feature map with reduced resolution carries higher-level semantic information, the sacrifice of resolution causes the feature map to lose precise position information. It should be understood that the "position information" mentioned herein mainly refers to the position of a frame of an audio segment within a piece of audio, for example, the start time and end time of that frame.
According to embodiments of the present disclosure, a solution for audio recognition is proposed. When extracting the target feature map used to determine the feature representation, this solution uses not only the immediately preceding level's feature map but also the feature maps obtained at each level (or at multiple levels) of feature extraction, so that the finally obtained target feature map contains rich semantic information while also having high-resolution position information, thereby addressing the above problems and/or other potential problems.
In addition, during model training, the amount and diversity of training data directly determine the performance of the model. For audio recognition training data, an insufficient sample size and/or diversity adversely affects the training of the audio recognition model. To this end, subsequent embodiments of the present disclosure also provide a solution for enhancing the above-mentioned feature representation determined from the target feature map.
Embodiments of the present disclosure will be described in detail below with reference to example scenarios. It should be understood that this is for illustration purposes only and is not intended to limit the scope of the present disclosure in any way.
FIG. 1 shows a block diagram of an example system 100 for audio recognition according to an embodiment of the present disclosure. It should be understood that the system 100 shown in FIG. 1 is only one example in which embodiments of the present disclosure can be implemented and is not intended to limit the scope of the present disclosure. Embodiments of the present disclosure are equally applicable to other systems or architectures.
As shown in FIG. 1, the system 100 may include a computing device 120. The computing device 120 may be configured to receive audio data 110 and to output a recognition result 130 related to the audio data 110. In some embodiments, the audio data 110 is a spectrogram of time-domain audio data obtained through a constant-Q transform or another transform.
In some embodiments, the computing device 120 may obtain the audio data 110. In some embodiments, the audio data 110 may be an audio segment to be recognized. In other embodiments, the audio data 110 may include multiple training samples for training a deep neural network or machine learning model (also referred to as a target model). The audio data 110 may have corresponding annotation information, which may be generated by manual annotation, automatic annotation by a model, or other appropriate methods.
In the present disclosure, the target model may be designed to perform audio recognition tasks. Examples of the target model include, but are not limited to, various types of deep neural networks (DNN), convolutional neural networks (CNN), support vector machines (SVM), decision trees, random forest models, and so on. In implementations of the present disclosure, the target model may also be referred to as a "recognition model". Hereinafter, the terms "recognition model", "neural network", "learning model", "learning network", "model", and "network" are used interchangeably.
In some embodiments, the computing device 120 may include, but is not limited to, a personal computer, a server computer, a handheld or laptop device, a mobile device (such as a mobile phone, a personal digital assistant (PDA), or a media player), a consumer electronics product, a minicomputer, a mainframe computer, a cloud computing resource, and so on.
In some embodiments, the recognition result 130 may be classification information determined from the audio data 110, for example, whether the audio data 110, as an audio segment of a song, belongs to the chorus class. Alternatively or additionally, the recognition result 130 may be a prediction result that is revised or updated during model training (this result is compared with the annotated ground-truth result in a subsequent step in order to determine the loss function).
It should be understood that the devices included in the system 100 and/or the units in those devices are merely exemplary and are not intended to limit the scope of the present disclosure. It should be understood that the system 100 may also include additional devices and/or units not shown. For example, in some embodiments, the computing device 120 of the system 100 may further include a storage unit (not shown) for storing pre-input hyperparameters and the trained model.
The training and use of the model in the computing device 120 are described below with reference to FIG. 2.
FIG. 2 shows a schematic diagram of a detailed example environment 200 according to embodiments of the present disclosure. Similar to FIG. 1, the example environment 200 may include a computing device 220, audio data 210 input to the computing device 220, and a recognition result 230 output from the computing device 220. The difference is that the example environment 200 may generally include a model training system 260 and a model application system 270. As an example, the model training system 260 and/or the model application system 270 may be implemented in the computing device 120 shown in FIG. 1 or the computing device 220 shown in FIG. 2. It should be understood that the structure and functionality of the example environment 200 are described for illustrative purposes only and are not intended to limit the scope of the subject matter described herein, which may be implemented in different structures and/or functions.
As mentioned above, the process of processing the input audio data 110 to determine a recognition result 230, such as the classification information of an audio segment, can be divided into two stages: a model training stage and a model application stage. As an example, in the model training stage, the model training system 260 may utilize a training data set 250 to train a recognition model 240 for performing the corresponding function. It should be understood that the training data set 250 may be a combination of multiple sample data (as input to the recognition model 240) and corresponding annotated supervision information (also referred to as "labels" or "ground-truth results"). In the model application stage, the model application system 270 may receive the trained recognition model 240, so that the recognition model 240 loaded into the computing device 220 of the model application system 270 can determine the recognition result 230 based on the audio data 210.
In other embodiments, the recognition model 240 may be constructed as a learning network. In some embodiments, the learning network may include multiple networks, where each network may be a multi-layer neural network composed of a large number of neurons. Through the training process, the corresponding parameters of the neurons in each network can be determined. The parameters of the neurons in these networks are collectively referred to as the parameters of the recognition model 240.
The training process of the recognition model 240 may be performed in an iterative manner until at least some of the parameters of the recognition model 240 converge or until a predetermined number of iterations is reached, thereby obtaining the final model parameters.
The technical solution described above is only an example and does not limit the present disclosure. It should be understood that the networks may also be arranged in other manners and connection relationships. To explain the principle of the above solution more clearly, the process of determining the recognition result 130 from the audio data 110 is described in more detail below with reference to FIG. 3.
图3示出了根据本公开的实施例的用于音频识别的过程300的流程图。在某些实施例中,过程300可以在图1中的计算设备120和图2中的计算设备220中实现。现参照图3描述根据本公开实施例的音频识别的过程300。为了便于理解,在下文描述中提及的具体实例均是示例性的,并不用于限定本公开的保护范围。
在步骤302,计算设备120可以基于音频数据110的多级特征图,获取音频数据110的目标特征图。之后,在步骤304,计算设备120可以基于目标特征图,确定音频数据110的特征表示。
为了清楚地描述本公开提及的“特征表示”的确定过程,现参照图4描述特征提取的过程。图4示出了根据本公开的实施例的确定特征表示的示例环境400的示意图。
如图4所示,示例环境400中包含音频数据410、特征提取网络 420以及特征表示430。应理解,音频数据410可以是音频数据110或者音频数据110的一个片段。音频数据410被输入特征提取网络420后,特征提取网络420会对音频数据410执行特征提取运算。作为示例,特征提取网络420可以是如图4所示的深度神经网络或者多层的特征提取器。如图所示,特征提取网络420可以至少包含第一级提取器421和第二级提取器422。应理解,特征提取网络420还可以包含更多级的提取器。
To obtain the target feature map, the computing device 120 may obtain the multi-level feature map of the audio data 410 using, for example, the feature extraction network 420 including at least the first-level extractor 421 and the second-level extractor 422 described above. As an example, the first-level extractor 421 and the second-level extractor 422 may be convolutional neural networks, so that the first-level extractor 421 may perform a convolution operation on the audio data 410 to obtain a first-level feature map, and the second-level extractor 422 may perform a convolution operation on the first-level feature map to obtain a second-level feature map.
It should be noted that the convolution operation is essentially a downsampling process. Since each next-level feature map in the multi-level feature map is extracted from the previous-level feature map, the resolution of the second-level feature map is lower than that of the first-level feature map.
Then, the computing device 120 may perform feature reconstruction based at least on the next-level feature map and the previous-level feature map to determine the target feature map, and may thereby obtain a feature vector of the audio data 410 in an abstract space, i.e., the feature representation 430. In this way, feature reconstruction raises the resolution of the next-level feature map to that of the previous-level feature map, and, because the reconstruction is based at least on both feature maps, the result contains both the rich semantic information extracted in the next-level feature map and the high resolution of the previous-level feature map, making it easier to locate the position of an audio segment of a particular type.
To clearly describe the "feature map" mentioned in the present disclosure, an example form of a feature map is now described with reference to FIG. 5. FIG. 5 shows a schematic diagram of a feature map 510 according to an embodiment of the present disclosure. As shown in FIG. 5, the feature map 510 may be a group of feature data determined based on the audio data 410, in which A…I are specific values of the feature data. As an example, the feature map 510 may be a 100×100 matrix. After the feature map 510 undergoes a convolution operation by the first-level extractor 421, it is downsampled to, for example, a 50×50 matrix, and after a further convolution operation by the second-level extractor 422, it is downsampled to, for example, a 25×25 matrix. In the above feature reconstruction process, the feature map 510 as a 25×25 matrix may be upsampled to, for example, a 50×50 matrix, and then upsampled to, for example, a 100×100 matrix. It should be understood that the feature reconstruction process does not stop here; the resolution bookkeeping just described is traced in the sketch below, and the architecture for determining the target feature map is then described in more detail with reference to FIG. 6.
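The following sketch traces these resolutions, assuming stride-2 convolutions for the extractors and bilinear interpolation for upsampling; the kernel sizes, channel counts, and the use of PyTorch are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from torch import nn

x = torch.randn(1, 1, 100, 100)            # input "feature map", 100x100

down1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
down2 = nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1)

f1 = down1(x)                              # -> (1, 16, 50, 50)
f2 = down2(f1)                             # -> (1, 32, 25, 25)

# Upsample back along the reconstruction path.
u1 = F.interpolate(f2, size=(50, 50), mode="bilinear", align_corners=False)
u2 = F.interpolate(u1, size=(100, 100), mode="bilinear", align_corners=False)
print(f1.shape, f2.shape, u1.shape, u2.shape)
```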
FIG. 6 shows a schematic diagram of a multi-level feature map 600 according to an embodiment of the present disclosure. As shown in FIG. 6, the multi-level feature map 600 includes a first-level feature map 601, a second-level feature map 602, a third-level feature map 603, a feature map 604 generated based on the third-level feature map 603, a feature map 605 generated based on the feature map 604 and the second-level feature map 602, and a feature map 606 generated based on the feature map 605 and the first-level feature map 601.
In FIG. 6, the first-level feature map 601 may be extracted from the audio data 410 by the first-level extractor 421 shown in FIG. 4, the second-level feature map 602 may be extracted from the first-level feature map 601 by the second-level extractor 422 shown in FIG. 4, and the third-level feature map 603 may in turn be extracted from the second-level feature map 602. It should be understood that the multi-level feature map 600 shown in FIG. 6 may have more levels, and that the number of levels is related to the network structure of the model.
Thus, when performing feature reconstruction, the computing device 120 may copy the values in the third-level feature map 603 directly into the feature map 604. Then, the computing device 120 may upsample the feature map 604, i.e., expand the feature map 604 into the auxiliary feature map 605. In other words, the computing device 120 may copy the values of the upsampled feature map 604 into the feature map 605, perform an operation such as averaging between the values in the second-level feature map 602 (which is at the same level as the feature map 605) and the values in the feature map 605, and store the computed result in the feature map 605. Similarly, the computing device 120 may further upsample the feature map 605, i.e., expand the feature map 605 into the auxiliary feature map 606, perform an operation such as averaging between the values in the first-level feature map 601 (which is at the same level as the feature map 606) and the values in the feature map 606, and store the computed result in the feature map 606. The feature map 606 at this point is the target feature map. In this way, the target feature map contains both rich semantic information and high-resolution position information, thereby optimizing model performance.
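A minimal sketch of this top-down reconstruction, assuming for illustration that all levels share one channel count and that the merge operation is element-wise averaging (the disclosure also permits operations other than averaging):

```python
import torch
import torch.nn.functional as F

def reconstruct_target(f1, f2, f3):
    """Top-down merge: f3 is the deepest (lowest-resolution) map.

    f1, f2, f3: tensors of shape (N, C, H, W) with decreasing H, W.
    Returns a target map at the resolution of f1.
    """
    m = f3                                              # copy of level 3
    m = F.interpolate(m, size=f2.shape[-2:], mode="bilinear",
                      align_corners=False)
    m = (m + f2) / 2                                    # merge with level 2
    m = F.interpolate(m, size=f1.shape[-2:], mode="bilinear",
                      align_corners=False)
    m = (m + f1) / 2                                    # merge with level 1
    return m                                            # target feature map

f1 = torch.randn(1, 32, 100, 100)
f2 = torch.randn(1, 32, 50, 50)
f3 = torch.randn(1, 32, 25, 25)
target = reconstruct_target(f1, f2, f3)
print(target.shape)  # torch.Size([1, 32, 100, 100])
```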
Returning to FIG. 3, at step 306, the computing device 120 may determine the recognition result 130 of the audio data 110 based at least on the feature representation.
In some embodiments, the audio data 110 is an audio segment of a song, and to determine the recognition result 130 of the audio data 110, the computing device 120 may determine whether the audio segment belongs to the chorus class. In this way, the chorus part of a song can be recognized automatically. It should be understood that the present disclosure is not limited to recognizing the chorus part of a song; other parts of a song, such as verses, transitions, and bridges, may also be recognized, as may other classifiable parts of other audio data.
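As an illustrative usage sketch (not prescribed by the disclosure), per-frame chorus probabilities can be converted into chorus segments with start and end times by thresholding and merging contiguous frames; the function name, the 0.5 threshold, and the hop duration are assumptions:

```python
def chorus_segments(frame_probs, hop_seconds=0.5, threshold=0.5):
    """Convert per-frame chorus probabilities into (start, end) segments."""
    segments = []
    start = None
    for i, p in enumerate(frame_probs):
        if p >= threshold and start is None:
            start = i
        elif p < threshold and start is not None:
            segments.append((start * hop_seconds, i * hop_seconds))
            start = None
    if start is not None:
        segments.append((start * hop_seconds, len(frame_probs) * hop_seconds))
    return segments

print(chorus_segments([0.1, 0.2, 0.8, 0.9, 0.7, 0.2, 0.1, 0.9, 0.95]))
# [(1.0, 2.5), (3.5, 4.5)]
```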
In this way, the feature data determined by the above embodiments contains richer information and, compared with conventional audio recognition models, more precise position information, thereby improving model performance.
The above embodiments mainly relate to the application of the recognition model 240; the training process of the recognition model 240 is described in detail below. During model training, the audio data 110 may be training data or a training data set, and after the model being trained has determined the recognition result 130, the computing device 120 may further determine a loss function value of the recognition model being trained based on the recognition result 130 and the pre-annotated ground-truth result of the training data, so as to update the parameters of the recognition model.
To determine the loss function value of the model, the computing device 120 needs to compare the ground-truth label with the recognition result generated at run time. FIG. 7 shows a schematic diagram of a model training architecture 700 according to an embodiment of the present disclosure.
As shown in FIG. 7, audio data 701 may be input to an extraction module 710 to determine a feature representation of the audio data 701. The determined feature representation is then input to a prediction module 720 to determine a prediction result for the feature representation of the audio data 701. Thereby, a loss determination module 730 may determine a loss function value 703 of the model based on the determined result and the ground-truth label 702 of the audio data 701.
In some embodiments, to optimize (generalize) model performance, the computing device 120 may perform data augmentation on the feature representation determined by the extraction module 710. As an example, the computing device 120 may use the augmentation module 740 in FIG. 7 to determine the distribution of the feature representations corresponding to audio segments that belong, or do not belong, to the chorus class, and then determine sampled feature representations from that distribution as additional feature representations.
In some embodiments, to determine the sampled feature representations as additional feature representations, the computing device 120 may sample a predetermined number of feature representations from the distribution as the additional feature representations. Thereby, the computing device 120 may input one feature representation determined by the extraction module 710, together with the plurality of additional feature representations obtained by data augmentation, into the fully-connected layer of the recognition model to determine the recognition result or prediction result. In this way, the present disclosure can augment the training data at the level of feature vectors, thereby increasing the quantity and diversity of the training data.
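A minimal sketch of this feature-level augmentation, assuming a Gaussian class-conditional distribution (formalized in formula (1) below) and illustrative dimensions; the helper name augment and the covariance used here are hypothetical:

```python
import torch
from torch.distributions import MultivariateNormal

def augment(feature, class_cov, lam=0.5, num_samples=8):
    """Sample additional feature representations around `feature`.

    feature:    (D,) feature representation from the extractor.
    class_cov:  (D, D) covariance matrix of the annotated class.
    Returns (num_samples, D) additional representations.
    """
    dist = MultivariateNormal(loc=feature, covariance_matrix=lam * class_cov)
    return dist.sample((num_samples,))

d = 16
feature = torch.randn(d)
cov = torch.eye(d) * 0.1                     # illustrative covariance
extra = augment(feature, cov, lam=0.5, num_samples=8)
# Original plus augmented representations go through the same FC layer.
logits = torch.nn.Linear(d, 2)(torch.cat([feature[None], extra], dim=0))
print(logits.shape)  # torch.Size([9, 2]): original + 8 augmented
```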
It should be understood that the feature representations obtained by data augmentation may be generated based on the following formula (1):
$$\tilde{a}_i \sim \mathcal{N}\left(a_i,\; \lambda \Sigma_{y_i}\right) \tag{1}$$
where $a_i$ is the feature representation of the $i$-th frame (the $i$-th row of the feature representations determined by the extraction module 710), $y_i$ denotes the annotated class of the $i$-th frame (such as chorus), $\Sigma_{y_i}$ denotes the covariance matrix of class $y_i$, and $\lambda$ is a hyperparameter of the model, which may, for example, be set such that $\lambda > 0$.
It should be understood that when the number of sampled feature representations is large, the computational cost of model training increases significantly. To this end, the computing device 120 may determine an upper bound of the loss function of the recognition model by setting the number of sampled feature representations to positive infinity, and thereby determine the loss function value.
Specifically, let the size of the data set be $N$ and the number of sampled feature representations be $M$; the number of samples in the augmented training data is then $N \times (M+1)$. In some embodiments, a cross-entropy loss function may be used to train the model. For the fully-connected layer, the weight corresponding to class $c$ may be denoted $w_c$, and the corresponding bias denoted $b_c$. When $M$ is positive infinity:
$$\mathcal{L}_{\infty} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}_{\tilde{a}_i}\left[-\log \frac{e^{w_{y_i}^{\top}\tilde{a}_i + b_{y_i}}}{\sum_{c=1}^{C} e^{w_c^{\top}\tilde{a}_i + b_c}}\right] \tag{2}$$
Formula (2) is equivalent to the following loss function formula:
$$\mathcal{L}_{\infty} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}_{\tilde{a}_i}\left[\log \sum_{c=1}^{C} e^{v_{c,i}^{\top}\tilde{a}_i + \Delta b_{c,i}}\right] \tag{3}$$
where $C$ is the total number of classes and
$$v_{c,i} = w_c - w_{y_i}, \qquad \Delta b_{c,i} = b_c - b_{y_i} \tag{4}$$
With the aid of Jensen's inequality $\mathbb{E}[\log X] \le \log \mathbb{E}[X]$, an upper bound of the loss function can be derived, i.e., the following formula (5):
$$\mathcal{L}_{\infty} \le \frac{1}{N}\sum_{i=1}^{N} \log \sum_{c=1}^{C} \mathbb{E}_{\tilde{a}_i}\left[e^{v_{c,i}^{\top}\tilde{a}_i + \Delta b_{c,i}}\right] \tag{5}$$
Finally, using the moment-generating function of the Gaussian distribution of $\tilde{a}_i$, the upper bound of the loss function can be derived as the following formula (6):
$$\overline{\mathcal{L}} = \frac{1}{N}\sum_{i=1}^{N} \log \sum_{c=1}^{C} e^{z_{c,i}} \tag{6}$$
where
$$z_{c,i} = v_{c,i}^{\top} a_i + \Delta b_{c,i} + \frac{\lambda}{2}\, v_{c,i}^{\top} \Sigma_{y_i} v_{c,i}$$
In this way, the loss function can be determined without expending the greater computational resources required by sampling according to formula (1), so that the loss function value is obtained quickly, thereby optimizing model training.
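A minimal sketch of this closed-form upper bound, written against the reconstructed formulas (2) through (6) above; the per-class covariance tensor and all names here are illustrative assumptions:

```python
import torch

def upper_bound_loss(features, labels, weight, bias, class_covs, lam=0.5):
    """Closed-form upper bound of the augmented cross-entropy loss.

    features:   (N, D) feature representations a_i.
    labels:     (N,)   class indices y_i.
    weight:     (C, D) fully-connected weights w_c.
    bias:       (C,)   fully-connected biases b_c.
    class_covs: (C, D, D) per-class covariance matrices.
    """
    w_y = weight[labels]                         # (N, D)
    b_y = bias[labels]                           # (N,)
    v = weight.unsqueeze(0) - w_y.unsqueeze(1)   # (N, C, D): w_c - w_{y_i}
    db = bias.unsqueeze(0) - b_y.unsqueeze(1)    # (N, C):    b_c - b_{y_i}
    sigma = class_covs[labels]                   # (N, D, D)
    # Quadratic term (lambda/2) * v^T Sigma v for every pair (i, c).
    quad = torch.einsum("ncd,nde,nce->nc", v, sigma, v)
    logits = torch.einsum("ncd,nd->nc", v, features) + db + 0.5 * lam * quad
    return torch.logsumexp(logits, dim=1).mean()

N, D, C = 4, 16, 2
loss = upper_bound_loss(torch.randn(N, D), torch.randint(0, C, (N,)),
                        torch.randn(C, D), torch.randn(C),
                        torch.eye(D).expand(C, D, D) * 0.1)
print(loss)
```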
The present disclosure further provides an audio recognition apparatus. Specifically, FIG. 8 shows a schematic diagram of an audio recognition apparatus 800 according to an embodiment of the present disclosure. As shown in FIG. 8, the audio recognition apparatus 800 may include at least a target feature map acquisition module 802, a feature representation determination module 804, and a recognition result determination module 806. The target feature map acquisition module 802 may obtain a target feature map of audio data based on a multi-level feature map of the audio data. The feature representation determination module 804 may further determine a feature representation of the audio data based on the obtained target feature map. In addition, the recognition result determination module 806 may further determine a recognition result of the audio data based at least on the determined feature representation.
In some embodiments, the target feature map acquisition module 802 may include a multi-level feature map acquisition sub-module for obtaining the multi-level feature map of the audio data. It should be understood that each next-level feature map in the multi-level feature map is extracted from the previous-level feature map. The multi-level feature map acquisition sub-module may include a first-level extractor, a second-level extractor, and so on. The first-level extractor may perform a convolution operation on the audio data to obtain a first-level feature map, and the second-level extractor may perform a convolution operation on the first-level feature map to obtain a second-level feature map. In addition, the target feature map acquisition module 802 may further include a target feature map determination sub-module for performing feature reconstruction based at least on the next-level feature map and the previous-level feature map to determine the target feature map.
In some embodiments, when performing feature reconstruction, the target feature map determination sub-module may expand the second-level feature map into a first-level auxiliary feature map, and determine the target feature map based on the first-level auxiliary feature map and the first-level feature map.
In some embodiments, the audio data may be training data, and the audio recognition apparatus 800 may further include: a loss function value determination sub-module configured to determine, based on the recognition result and the pre-annotated ground-truth result of the training data, a loss function value of the recognition model being trained, so as to update the parameters of the recognition model.
In some embodiments, the audio recognition apparatus 800 may further include: a distribution determination module configured to determine the distribution of the feature representations corresponding to audio segments belonging or not belonging to the chorus class; and an additional feature representation determination module configured to determine sampled feature representations from the distribution as additional feature representations.
In some embodiments, the additional feature representation determination module may be configured to sample a predetermined number of feature representations from the distribution as the additional feature representations.
In some embodiments, the loss function value determination sub-module may be configured to determine an upper bound of the loss function of the recognition model by setting the predetermined number to positive infinity, so as to determine the loss function value.
In some embodiments, the recognition result determination module 806 may be configured to input the feature representation and the additional feature representations into the fully-connected layer of the recognition model to determine the recognition result.
In some embodiments, the audio data is an audio segment of a song, and the recognition result determination module 806 may include: a classification module configured to determine whether the audio segment belongs or does not belong to the chorus class.
FIG. 9 shows a schematic block diagram of an example device 900 that may be used to implement embodiments of the present disclosure. For example, the computing device 120 shown in FIG. 1 may be implemented by the device 900. As shown, the device 900 includes a central processing unit (CPU) 901, which may perform various appropriate actions and processing according to computer program instructions stored in a read-only memory (ROM) 902 or loaded from a storage unit 908 into a random access memory (RAM) 903. The RAM 903 may also store various programs and data required for the operation of the device 900. The CPU 901, the ROM 902, and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A plurality of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard or a mouse; an output unit 907 such as various types of displays and speakers; a storage unit 908 such as a magnetic disk or an optical disc; and a communication unit 909 such as a network card, a modem, or a wireless communication transceiver. The communication unit 909 allows the device 900 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The processing unit 901 may be implemented by one or more processing circuits. The processing unit 901 may be configured to perform the various processes and processing described above, such as the process 300. For example, in some embodiments, the process 300 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the CPU 901, one or more steps of the process 300 described above may be performed.
Detailed Description of Effects
By implementing the above embodiments, the performance of the trained model can be significantly improved. To verify model performance, a variety of test data sets were used to evaluate the trained model and to compare it with a variety of conventional models:
On the RWC (Real World Computing) data set, the CNMF (convolutive non-negative matrix factorization) model achieved an AUC (area under the curve) score of 0.526, the SCluster model 0.533, the Highlighter model 0.804, the Multi2021 model 0.819, and the DeepChorus model 0.842, while the trained model of the present disclosure achieved an AUC score of 0.906.
On the SP (salami-pop) data set, the CNMF model achieved an AUC score of 0.543, the SCluster model 0.545, the Highlighter model 0.703, the Multi2021 model 0.675, and the DeepChorus model 0.780, while the trained model of the present disclosure achieved an AUC score of 0.887.
On the SL (salami-live) data set, the CNMF model achieved an AUC score of 0.478, the SCluster model 0.551, the Highlighter model 0.671, the Multi2021 model 0.633, and the DeepChorus model 0.765, while the trained model of the present disclosure achieved an AUC score of 0.831.
On the DC (Di-Chorus) data set, the CNMF model achieved an AUC score of 0.488, the SCluster model 0.568, the Highlighter model 0.553, and the DeepChorus model 0.811, while the trained model of the present disclosure achieved an AUC score of 0.872.
In addition, other experiments showed that the F-score of the model of the present disclosure is also higher than that of conventional models. It can thus be seen that the audio recognition model trained according to embodiments of the present disclosure has significantly improved performance compared with conventional models.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for performing various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to respective computing/processing devices, or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a particular manner, such that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions that implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices, so that a series of operational steps are performed on the computer, other programmable data processing apparatus, or other devices to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other devices implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.
According to one or more embodiments of the present disclosure, Example 1. An audio recognition method, comprising: obtaining a target feature map of audio data based on a multi-level feature map of the audio data; determining a feature representation of the audio data based on the target feature map; and determining a recognition result of the audio data based at least on the feature representation.
Example 2. The method according to Example 1, wherein obtaining the target feature map comprises: obtaining the multi-level feature map of the audio data, wherein each next-level feature map in the multi-level feature map is extracted from a previous-level feature map; and performing feature reconstruction based at least on the next-level feature map and the previous-level feature map to determine the target feature map.
Example 3. The method according to Example 2, wherein the multi-level feature map comprises at least: a first-level feature map extracted from the audio data; and a second-level feature map extracted from the first-level feature map.
Example 4. The method according to Example 3, wherein the feature reconstruction comprises at least: expanding the second-level feature map into a first-level auxiliary feature map; and determining the target feature map based on the first-level auxiliary feature map and the first-level feature map.
Example 5. The method according to Example 1, wherein the audio data is training data, and the method further comprises: determining, based on the recognition result and a pre-annotated ground-truth result of the training data, a loss function value of the recognition model being trained, so as to update parameters of the recognition model.
Example 6. The method according to Example 5, further comprising: determining a distribution of feature representations corresponding to audio segments belonging or not belonging to a chorus class; and determining sampled feature representations from the distribution as additional feature representations.
Example 7. The method according to Example 6, wherein determining the sampled feature representations as the additional feature representations comprises: sampling a predetermined number of feature representations from the distribution as the additional feature representations.
Example 8. The method according to Example 7, wherein determining the loss function value comprises: determining an upper bound of a loss function of the recognition model by setting the predetermined number to positive infinity, so as to determine the loss function value.
Example 9. The method according to Example 6, wherein determining the recognition result based at least on the feature representation comprises: inputting the feature representation and the additional feature representations into a fully-connected layer of the recognition model to determine the recognition result.
Example 10. The method according to Example 1, wherein the audio data is an audio segment of a song, and determining the recognition result of the audio data comprises: determining that the audio segment belongs to a chorus class; or determining that the audio segment does not belong to the chorus class.
According to one or more embodiments of the present disclosure, Example 11. An audio recognition apparatus, comprising: a target feature map acquisition module configured to obtain a target feature map of audio data based on a multi-level feature map of the audio data; a feature representation determination module configured to determine a feature representation of the audio data based on the target feature map; and a recognition result determination module configured to determine a recognition result of the audio data based at least on the feature representation.
Example 12. The audio recognition apparatus according to Example 11, wherein the target feature map acquisition module comprises: a multi-level feature map acquisition sub-module configured to obtain the multi-level feature map of the audio data, wherein each next-level feature map in the multi-level feature map is extracted from a previous-level feature map; and a target feature map determination sub-module configured to perform feature reconstruction based at least on the next-level feature map and the previous-level feature map to determine the target feature map.
Example 13. The audio recognition apparatus according to Example 12, wherein the multi-level feature map comprises at least: a first-level feature map extracted from the audio data; and a second-level feature map extracted from the first-level feature map.
Example 14. The audio recognition apparatus according to Example 13, wherein, during the feature reconstruction, the target feature map acquisition module may be configured to: expand the second-level feature map into a first-level auxiliary feature map; and determine the target feature map based on the first-level auxiliary feature map and the first-level feature map.
Example 15. The audio recognition apparatus according to Example 11, wherein the audio data is training data, and the audio recognition apparatus further comprises: a loss function value determination sub-module configured to determine, based on the recognition result and a pre-annotated ground-truth result of the training data, a loss function value of the recognition model being trained, so as to update parameters of the recognition model.
Example 16. The audio recognition apparatus according to Example 15, further comprising: a distribution determination module configured to determine a distribution of feature representations corresponding to audio segments belonging or not belonging to a chorus class; and an additional feature representation determination module configured to determine sampled feature representations from the distribution as additional feature representations.
Example 17. The audio recognition apparatus according to Example 16, wherein the additional feature representation determination module is configured to sample a predetermined number of feature representations from the distribution as the additional feature representations.
Example 18. The audio recognition apparatus according to Example 17, wherein the loss function value determination sub-module is configured to determine an upper bound of a loss function of the recognition model by setting the predetermined number to positive infinity, so as to determine the loss function value.
Example 19. The audio recognition apparatus according to Example 16, wherein the recognition result determination module is configured to input the feature representation and the additional feature representations into a fully-connected layer of the recognition model to determine the recognition result.
Example 20. The audio recognition apparatus according to Example 11, wherein the audio data is an audio segment of a song, and the recognition result determination module comprises: a classification module configured to determine whether the audio segment belongs or does not belong to a chorus class.
According to one or more embodiments of the present disclosure, Example 21. An electronic device, comprising: a processor; and a memory coupled to the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions comprising: obtaining a target feature map of audio data based on a multi-level feature map of the audio data; determining a feature representation of the audio data based on the target feature map; and determining a recognition result of the audio data based at least on the feature representation.
Example 22. The device according to Example 21, wherein obtaining the target feature map comprises: obtaining the multi-level feature map of the audio data, wherein each next-level feature map in the multi-level feature map is extracted from a previous-level feature map; and performing feature reconstruction based at least on the next-level feature map and the previous-level feature map to determine the target feature map.
Example 23. The device according to Example 22, wherein the multi-level feature map comprises at least: a first-level feature map extracted from the audio data; and a second-level feature map extracted from the first-level feature map.
Example 24. The device according to Example 23, wherein the feature reconstruction comprises at least: expanding the second-level feature map into a first-level auxiliary feature map; and determining the target feature map based on the first-level auxiliary feature map and the first-level feature map.
Example 25. The device according to Example 21, wherein the audio data is training data, and the actions further comprise: determining, based on the recognition result and a pre-annotated ground-truth result of the training data, a loss function value of the recognition model being trained, so as to update parameters of the recognition model.
Example 26. The device according to Example 25, wherein the actions further comprise: determining a distribution of feature representations corresponding to audio segments belonging or not belonging to a chorus class; and determining sampled feature representations from the distribution as additional feature representations.
Example 27. The device according to Example 26, wherein determining the sampled feature representations as the additional feature representations comprises: sampling a predetermined number of feature representations from the distribution as the additional feature representations.
Example 28. The device according to Example 27, wherein determining the loss function value comprises: determining an upper bound of a loss function of the recognition model by setting the predetermined number to positive infinity, so as to determine the loss function value.
Example 29. The device according to Example 26, wherein determining the recognition result based at least on the feature representation comprises: inputting the feature representation and the additional feature representations into a fully-connected layer of the recognition model to determine the recognition result.
Example 30. The device according to Example 21, wherein the audio data is an audio segment of a song, and determining the recognition result of the audio data comprises: determining that the audio segment belongs to a chorus class; or determining that the audio segment does not belong to the chorus class.
According to one or more embodiments of the present disclosure, Example 31. A computer program product tangibly stored on a computer-readable medium and comprising machine-executable instructions which, when executed, cause a machine to perform the method according to any one of Examples 1 to 10.
Embodiments of the present disclosure have been described above; the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

  1. An audio recognition method, comprising:
    obtaining a target feature map of audio data based on a multi-level feature map of the audio data;
    determining a feature representation of the audio data based on the target feature map; and
    determining a recognition result of the audio data based at least on the feature representation.
  2. The method according to claim 1, wherein obtaining the target feature map comprises:
    obtaining the multi-level feature map of the audio data, wherein each next-level feature map in the multi-level feature map is extracted from a previous-level feature map; and
    performing feature reconstruction based at least on the next-level feature map and the previous-level feature map to determine the target feature map.
  3. The method according to claim 2, wherein the multi-level feature map comprises at least:
    a first-level feature map extracted from the audio data; and
    a second-level feature map extracted from the first-level feature map.
  4. The method according to claim 3, wherein the feature reconstruction comprises at least:
    expanding the second-level feature map into a first-level auxiliary feature map; and
    determining the target feature map based on the first-level auxiliary feature map and the first-level feature map.
  5. The method according to claim 1, wherein the audio data is training data, and the method further comprises:
    determining, based on the recognition result and a pre-annotated ground-truth result of the training data, a loss function value of the recognition model being trained, so as to update parameters of the recognition model.
  6. The method according to claim 5, further comprising:
    determining a distribution of feature representations corresponding to audio segments belonging or not belonging to a chorus class; and
    determining sampled feature representations from the distribution as additional feature representations.
  7. The method according to claim 6, wherein determining the sampled feature representations as the additional feature representations comprises:
    sampling a predetermined number of feature representations from the distribution as the additional feature representations.
  8. The method according to claim 7, wherein determining the loss function value comprises:
    determining an upper bound of a loss function of the recognition model by setting the predetermined number to positive infinity, so as to determine the loss function value.
  9. The method according to claim 6, wherein determining the recognition result based at least on the feature representation comprises:
    inputting the feature representation and the additional feature representations into a fully-connected layer of the recognition model to determine the recognition result.
  10. The method according to claim 1, wherein the audio data is an audio segment of a song, and determining the recognition result of the audio data comprises:
    determining that the audio segment belongs to a chorus class; or
    determining that the audio segment does not belong to the chorus class.
  11. An audio recognition apparatus, comprising:
    a target feature map acquisition module configured to obtain a target feature map of audio data based on a multi-level feature map of the audio data;
    a feature representation determination module configured to determine a feature representation of the audio data based on the target feature map; and
    a recognition result determination module configured to determine a recognition result of the audio data based at least on the feature representation.
  12. An electronic device, comprising:
    a processor; and
    a memory coupled to the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions comprising:
    obtaining a target feature map of audio data based on a multi-level feature map of the audio data;
    determining a feature representation of the audio data based on the target feature map; and
    determining a recognition result of the audio data based at least on the feature representation.
  13. A computer program product tangibly stored on a computer-readable medium and comprising machine-executable instructions which, when executed, cause a machine to perform the method according to any one of claims 1 to 10.