WO2019205383A1 - Electronic device, deep learning-based music performance style recognition method, and storage medium - Google Patents

Electronic device, deep learning-based music performance style recognition method, and storage medium Download PDF

Info

Publication number
WO2019205383A1
WO2019205383A1 (PCT/CN2018/102219, CN2018102219W)
Authority
WO
WIPO (PCT)
Prior art keywords
music
layer
model
musical
intensity
Prior art date
Application number
PCT/CN2018/102219
Other languages
English (en)
French (fr)
Inventor
刘奡智
王健宗
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019205383A1

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/071 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for rhythm pattern analysis or rhythm style recognition

Definitions

  • The present application relates to the field of deep learning, and in particular to an electronic device, a deep learning-based music performance style recognition method, and a storage medium.
  • Music dynamics markings are the composer's indications of how loudly or softly the notes of a work should be played. They serve as cues for performance dynamics: through these cues, a performer can grasp the overall dynamics, structure, and logic of a piece on a more informed basis. Dynamics markings therefore play a very important role in performing music well.
  • In practical applications, however, dynamics markings are usually very general, and the subtle, complex dynamic relationships that actually exist in a piece cannot be marked accurately. This makes it difficult for a performer to render the style of a piece faithfully, and it hampers the learning efficiency and results of beginners.
  • In view of this, the present application provides an electronic device, a deep learning-based music performance style recognition method, and a storage medium that can quickly and accurately mark the music dynamics in a score, enabling a performer to play a piece in its intended style and improving the learning efficiency and results of beginners.
  • First, the present application provides an electronic device comprising a memory and a processor connected to the memory, the processor being configured to execute a deep learning-based music performance style recognition program stored in the memory. When executed by the processor, the program implements the following steps: acquiring the musical score corresponding to the piece of music to be performed; annotating the acquired score with music dynamics according to a pre-trained music dynamics annotation model, so as to mark the music dynamics in the score; and determining the performance style of the piece according to the marked dynamics.
  • In addition, the present application provides a deep learning-based music performance style recognition method comprising the same steps: acquiring the score corresponding to the piece to be performed; annotating the acquired score with music dynamics according to the pre-trained music dynamics annotation model, so as to mark the dynamics in the score; and determining the performance style of the piece according to the marked dynamics.
  • The present application further provides a computer-readable storage medium storing a deep learning-based music performance style recognition program, executable by at least one processor so as to cause the at least one processor to perform the steps above.
  • Compared with the prior art, the method acquires the score corresponding to the piece to be performed, annotates the acquired score with music dynamics according to the pre-trained music dynamics annotation model so as to mark the dynamics in the score, and determines the performance style of the piece according to the marked dynamics. It can improve the learning efficiency and results of beginners, and it is simple, flexible, and practical.
  • FIG. 1 is a schematic diagram of an optional hardware architecture of the electronic device proposed by the present application.
  • FIG. 2 is a schematic diagram of the program modules of the deep learning-based music performance style recognition program in an embodiment of the electronic device of the present application.
  • FIG. 3 is a flow chart of a preferred embodiment of the deep learning-based music performance style recognition method of the present application.
  • The electronic device 10 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 communicably connected to one another through a communication bus 14. It should be noted that FIG. 1 shows only the electronic device 10 with components 11-14, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead.
  • The memory 11 includes at least one type of computer-readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, and optical discs.
  • In some embodiments, the memory 11 may be an internal storage unit of the electronic device 10, such as a hard disk or internal memory of the electronic device 10.
  • In other embodiments, the memory 11 may be an external storage device of the electronic device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 10.
  • Of course, the memory 11 may also include both an internal storage unit of the electronic device 10 and an external storage device.
  • In this embodiment, the memory 11 is generally used to store the operating system and the various application software installed on the electronic device 10, such as the deep learning-based music performance style recognition program. In addition, the memory 11 can be used to temporarily store various types of data that have been output or are to be output.
  • The processor 12 may in some embodiments be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data-processing chip.
  • The processor 12 is typically used to control the overall operation of the electronic device 10.
  • In this embodiment, the processor 12 is configured to run the program code or process the data stored in the memory 11, for example to run the deep learning-based music performance style recognition program.
  • The network interface 13 may include a wireless network interface or a wired network interface, and is typically used to establish communication connections between the electronic device 10 and other electronic devices.
  • The communication bus 14 is used to implement the communication connections among the components 11-13.
  • FIG. 1 shows only the electronic device 10 with the components 11-14 and the deep learning-based music performance style recognition program, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead.
  • Optionally, the electronic device 10 may further include a user interface (not shown in FIG. 1), which may include a display and an input unit such as a keyboard; the user interface may also include a standard wired interface, a wireless interface, and the like.
  • The display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED touch display, or the like. The display may also be referred to as a display screen or display unit, and is used to display the information processed in the electronic device 10 and to display a visualized user interface.
  • Generally, a musical score records music by marking the duration and pitch of the notes in a piece with a large set of special symbols, along with other means. Music, however, is an art form rich in emotional expression, and a score by itself only helps people mechanically record and "read" music.
  • To better bring out the expressiveness of the music, markings must be added to the score. These may be notes made by the composer on the score or markings made by the performer; in either case, their purpose is to express or understand the deeper meaning of the music.
  • Such markings are commonly referred to as music dynamics.
  • Music dynamics are the degrees of loudness of the notes in a piece.
  • Dynamics markings may indicate, for example, a fixed level of loudness, a gradually changing loudness, or a changed loudness. Fixed-level dynamics include: strong, moderately strong, extremely strong, weak, moderately weak, and extremely weak.
  • Gradually changing dynamics include: strengthening, gradually strengthening, weakening, and gradually weakening. Changed dynamics include: stronger, less strong, and the accenting of individual notes.
  • The dynamics marked in a piece can convey the emotional tone a work requires, and even its musical theme, mood, and effect.
  • For example, the overall dynamics of a lullaby are certainly soft, while the overall dynamics of a work depicting heroic battle are certainly strong, and so on.
  • Some works heighten their drama by increasing dynamic contrast, making the music vivid and expressive.
  • Sometimes a piece is written in several voices. If the composer wants to highlight one voice, loud dynamics can be marked on that voice and soft dynamics on the others, so that the principal and subordinate voices stand out clearly. There are many more such cases. In short, dynamics markings have a great effect on musical expression; for example, a performer can determine the performance style of the corresponding piece from the marked dynamics.
  • In this embodiment, the score corresponding to the piece to be performed is first acquired; the acquired score is then annotated with dynamics according to the pre-trained music dynamics annotation model, so as to mark the dynamics in the score corresponding to the piece; finally, the performance style of the piece is determined according to the marked dynamics.
  • In an embodiment of the present application, the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network (BRNN) model.
  • The bidirectional recurrent neural network model comprises the neuron nodes of a bidirectional recurrent neural network and a fully connected layer.
  • The neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer. The forward layer and the feedback layer together constitute the hidden layer.
  • The hidden layer has two output channels: one is connected to the output layer, and the other is connected back to the hidden layer's input channel, so that information is retained over time and the next state can be inferred from the previous state.
  • In one embodiment, the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
  • The predetermined music dynamics annotation model involves a model training process and a model testing process.
  • The model training process includes the following steps: E. acquiring a preset number of dynamics-annotated scores (for example, MIDI files) from a predetermined data source to form a preset number of samples; F. dividing the samples into a training subset of a first proportion and a test subset of a second proportion; G. training the music dynamics annotation model on the samples in the training subset to obtain a trained model; H. testing the trained model on the samples in the test subset; if the test passes, training ends, or, if it fails, the number of samples in the test subset is increased and steps E, F, G, and H are executed again.
  • For example, suppose a sample in the training subset corresponds to the numbered-notation (jianpu) score "1234", and the music dynamics sequence that the trained model is expected to produce is assumed to be "12334".
  • Note that real annotated dynamics sequences are not represented this way; the sequence is assumed here only to illustrate the training process.
  • The hypothetical dynamics sequence actually derives from the following four independent training cases: 1. the digit 2 may appear only after the digit 1 has appeared; 2. the digit 3 should appear after "12" has appeared; 3. the digit 3 may likewise appear after "123" has appeared; 4. the digit 4 should appear after "1233" has appeared.
  • During training, 1-of-k encoding is used (by character index: 1 means the input digit is at that index position, 0 means it is not) to encode each digit into a vector. One digit is then fed into the music dynamics annotation model at a time, and the model outputs a 4-dimensional vector (one dimension per digit), which is taken in turn as the confidence the model assigns to each candidate for the next digit in the dynamics sequence.
  • In this embodiment, a three-layer LSTM bidirectional recurrent neural network model is selected; the input and output layers are 4-dimensional (i.e., 4 units) and the hidden layer is 3-dimensional.
  • On the first training pass, the LSTM bidirectional recurrent neural network model reads the digit "1" and sets the confidence that "1" appears next to 1.0, the confidence for "2" to 2.2, the confidence for "3" to -3.0, and the confidence for "4" to 4.1. Because the digit that actually appears next in the training data is "2", training increases the confidence of this digit and decreases the confidence of the others.
  • The usual practice is to use a cross-entropy loss function. This is equivalent to applying a Softmax classifier to each output vector and treating the index of the digit that appears next as the correct class; the model is trained with mini-batch stochastic gradient descent, finally producing the trained music dynamics annotation model.
  • The model testing process includes the following steps:
  • The trained model annotates each sample in the test subset with music dynamics, yielding dynamics-annotated samples.
  • Each dynamics-annotated sample is compared with the pre-stored standard music dynamics of that sample. If the number of annotated samples whose error rate exceeds the preset error threshold is greater than the preset sample-count threshold, the test of the music dynamics annotation model is determined to fail; or, if the number of annotated samples whose error rate exceeds the preset error threshold is less than or equal to the preset sample-count threshold, the test of the model is determined to pass.
  • The electronic device proposed by the present application acquires the score corresponding to the piece to be performed, annotates the acquired score with music dynamics according to the pre-trained music dynamics annotation model so as to mark the dynamics in the score, and determines the performance style of the piece according to the marked dynamics. It can improve the learning efficiency and results of beginners, and the method is simple, flexible, and practical.
  • The deep learning-based music performance style recognition program of the present application may be described in terms of program modules grouped by the functions implemented by its parts.
  • FIG. 2 is a schematic diagram of the program modules of the deep learning-based music performance style recognition program in an embodiment of the electronic device of the present application.
  • According to the functions implemented by its parts, the deep learning-based music performance style recognition program may be divided into an acquisition module 201, an analysis module 202, and a recognition module 203.
  • A program module as referred to in the present application is a series of computer program instruction segments capable of performing a specific function; modules are better suited than the program as a whole for describing how the deep learning-based music performance style recognition program executes in the electronic device 10.
  • The functions or operational steps implemented by the modules 201-203 are similar to those described above and are not detailed again here. By way of example:
  • the acquisition module 201 is configured to acquire the musical score corresponding to the piece of music to be performed;
  • the analysis module 202 is configured to annotate the acquired score with music dynamics according to the pre-trained music dynamics annotation model, so as to mark the dynamics in the score;
  • the recognition module 203 is configured to determine the performance style of the piece according to the marked dynamics.
  • The present application also provides a deep learning-based music performance style recognition method.
  • The deep learning-based music performance style recognition method includes the following steps:
  • Step S301: acquiring the musical score corresponding to the piece of music to be performed;
  • Step S302: annotating the acquired score with music dynamics according to the pre-trained music dynamics annotation model, so as to mark the dynamics in the score;
  • Step S303: determining the performance style of the piece according to the marked dynamics.
  • Generally, a musical score records music by marking the duration and pitch of the notes in a piece with a large set of special symbols, along with other means. Music, however, is an art form rich in emotional expression, and a score by itself only helps people mechanically record and "read" music.
  • To better bring out the expressiveness of the music, markings must be added to the score. These may be notes made by the composer on the score or markings made by the performer; in either case, their purpose is to express or understand the deeper meaning of the music.
  • Such markings are commonly referred to as music dynamics.
  • Music dynamics are the degrees of loudness of the notes in a piece.
  • Dynamics markings may indicate, for example, a fixed level of loudness, a gradually changing loudness, or a changed loudness. Fixed-level dynamics include: strong, moderately strong, extremely strong, weak, moderately weak, and extremely weak.
  • Gradually changing dynamics include: strengthening, gradually strengthening, weakening, and gradually weakening. Changed dynamics include: stronger, less strong, and the accenting of individual notes.
  • The dynamics marked in a piece can convey the emotional tone a work requires, and even its musical theme, mood, and effect.
  • For example, the overall dynamics of a lullaby are certainly soft, while the overall dynamics of a work depicting heroic battle are certainly strong, and so on.
  • Some works heighten their drama by increasing dynamic contrast, making the music vivid and expressive.
  • Sometimes a piece is written in several voices. If the composer wants to highlight one voice, loud dynamics can be marked on that voice and soft dynamics on the others, so that the principal and subordinate voices stand out clearly. There are many more such cases. In short, dynamics markings have a great effect on musical expression; for example, a performer can determine the performance style of the corresponding piece from the marked dynamics.
  • In this embodiment, the score corresponding to the piece to be performed is first acquired; the acquired score is then annotated with dynamics according to the pre-trained music dynamics annotation model, so as to mark the dynamics in the score corresponding to the piece; finally, the performance style of the piece is determined according to the marked dynamics.
  • In an embodiment of the present application, the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network model.
  • The bidirectional recurrent neural network model comprises the neuron nodes of a bidirectional recurrent neural network and a fully connected layer.
  • The neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer.
  • The forward layer and the feedback layer together constitute the hidden layer, which has two output channels: one is connected to the output layer, and the other is connected back to the hidden layer's input channel, so that information is retained over time and the next state can be inferred from the previous state.
  • In one embodiment, the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
  • The predetermined music dynamics annotation model involves a model training process and a model testing process.
  • The model training process includes the following steps: E. acquiring a preset number of dynamics-annotated scores (for example, MIDI files) from a predetermined data source to form a preset number of samples; F. dividing the samples into a training subset of a first proportion and a test subset of a second proportion; G. training the music dynamics annotation model on the samples in the training subset to obtain a trained model; H. testing the trained model on the samples in the test subset; if the test passes, training ends, or, if it fails, the number of samples in the test subset is increased and steps E, F, G, and H are executed again.
  • For example, suppose a sample in the training subset corresponds to the numbered-notation (jianpu) score "1234", and the music dynamics sequence that the trained model is expected to produce is assumed to be "12334".
  • Note that real annotated dynamics sequences are not represented this way; the sequence is assumed here only to illustrate the training process.
  • The hypothetical dynamics sequence actually derives from the following four independent training cases: 1. the digit 2 may appear only after the digit 1 has appeared; 2. the digit 3 should appear after "12" has appeared; 3. the digit 3 may likewise appear after "123" has appeared; 4. the digit 4 should appear after "1233" has appeared.
  • During training, 1-of-k encoding is used (by character index: 1 means the input digit is at that index position, 0 means it is not) to encode each digit into a vector. One digit is then fed into the music dynamics annotation model at a time, and the model outputs a 4-dimensional vector (one dimension per digit), which is taken in turn as the confidence the model assigns to each candidate for the next digit in the dynamics sequence.
  • In this embodiment, a three-layer LSTM bidirectional recurrent neural network model is selected; the input and output layers are 4-dimensional (i.e., 4 units) and the hidden layer is 3-dimensional.
  • On the first training pass, the LSTM bidirectional recurrent neural network model reads the digit "1" and sets the confidence that "1" appears next to 1.0, the confidence for "2" to 2.2, the confidence for "3" to -3.0, and the confidence for "4" to 4.1. Because the digit that actually appears next in the training data is "2", training increases the confidence of this digit and decreases the confidence of the others.
  • The usual practice is to use a cross-entropy loss function. This is equivalent to applying a Softmax classifier to each output vector and treating the index of the digit that appears next as the correct class; the model is trained with mini-batch stochastic gradient descent, finally producing the trained music dynamics annotation model.
  • The model testing process includes the following steps:
  • The trained model annotates each sample in the test subset with music dynamics, yielding dynamics-annotated samples.
  • Each dynamics-annotated sample is compared with the pre-stored standard music dynamics of that sample. If the number of annotated samples whose error rate exceeds the preset error threshold is greater than the preset sample-count threshold, the test of the music dynamics annotation model is determined to fail; or, if the number of annotated samples whose error rate exceeds the preset error threshold is less than or equal to the preset sample-count threshold, the test of the model is determined to pass.
  • The deep learning-based music performance style recognition method proposed by the present application acquires the score corresponding to the piece to be performed, analyzes the acquired score according to the predetermined music dynamics annotation model so as to mark the dynamics in the score, and recognizes the performance style of the piece according to the marked dynamics. It can improve the learning efficiency and results of beginners, and the method is simple, flexible, and practical.
  • The present application also provides a computer-readable storage medium storing a deep learning-based music performance style recognition program which, when executed by a processor, implements the following operations:
  • acquiring the musical score corresponding to the piece of music to be performed;
  • analyzing the acquired score according to the predetermined music dynamics annotation model, so as to mark the music dynamics in the score;
  • recognizing the performance style of the piece according to the marked dynamics.
  • The specific embodiments of the computer-readable storage medium of the present application are substantially the same as the embodiments of the electronic device and of the deep learning-based music performance style recognition method above, and are not repeated here.
  • The methods of the foregoing embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods described in the various embodiments of the present application.

Abstract

An electronic device, a deep learning-based music performance style recognition method, and a storage medium. The method comprises: acquiring the musical score corresponding to a piece of music to be performed (S301); annotating the acquired score with music dynamics according to a pre-trained music dynamics annotation model, so as to mark the music dynamics in the score (S302); and determining the performance style of the piece according to the marked dynamics (S303). The method is simple, flexible, and practical, and can improve the learning efficiency and results of beginning performers.

Description

Electronic device, deep learning-based music performance style recognition method, and storage medium
This application claims priority to Chinese patent application No. 2018104032086, filed with the Chinese Patent Office on April 28, 2018 and entitled "Electronic device, deep learning-based music performance style recognition method and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of deep learning, and in particular to an electronic device, a deep learning-based music performance style recognition method, and a storage medium.
Background
Music dynamics markings are the composer's indications of how loudly or softly the notes of a work should be played, and serve as cues for performance dynamics. Through these cues, a performer can grasp the overall dynamics, structure, and logic of a piece on a more informed basis; dynamics markings therefore play a very important role in performing a piece well. In practical applications, however, dynamics markings are usually very general, and the subtle, complex dynamic relationships that actually exist in a piece cannot be marked accurately. As a result, it is difficult for performers to render the style of a piece faithfully, and the learning efficiency and results of beginners suffer.
Summary
In view of this, the present application proposes an electronic device, a deep learning-based music performance style recognition method, and a storage medium that can quickly and accurately mark the music dynamics in a score, enabling performers to play a piece in its intended style and improving the learning efficiency and results of beginners.
First, to achieve the above objective, the present application proposes an electronic device comprising a memory and a processor connected to the memory, the processor being configured to execute a deep learning-based music performance style recognition program stored in the memory, the program implementing the following steps when executed by the processor:
acquiring the musical score corresponding to the piece of music to be performed;
annotating the acquired score with music dynamics according to a pre-trained music dynamics annotation model, so as to mark the music dynamics in the score;
determining the performance style of the piece according to the marked music dynamics.
In addition, to achieve the above objective, the present application further provides a deep learning-based music performance style recognition method comprising the following steps:
acquiring the musical score corresponding to the piece of music to be performed;
annotating the acquired score with music dynamics according to a pre-trained music dynamics annotation model, so as to mark the music dynamics in the score;
determining the performance style of the piece according to the marked music dynamics.
In addition, to achieve the above objective, the present application further provides a computer-readable storage medium storing a deep learning-based music performance style recognition program executable by at least one processor, so as to cause the at least one processor to perform the following steps:
acquiring the musical score corresponding to the piece of music to be performed;
annotating the acquired score with music dynamics according to a pre-trained music dynamics annotation model, so as to mark the music dynamics in the score;
determining the performance style of the piece according to the marked music dynamics.
Compared with the prior art, the electronic device, deep learning-based music performance style recognition method, and storage medium proposed by the present application acquire the score corresponding to the piece to be performed, annotate the acquired score with music dynamics according to the pre-trained music dynamics annotation model so as to mark the dynamics in the score, and determine the performance style of the piece according to the marked dynamics. This can improve the learning efficiency and results of beginners, and the method is simple, flexible, and practical.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an optional hardware architecture of the electronic device proposed by the present application;
FIG. 2 is a schematic diagram of the program modules of the deep learning-based music performance style recognition program in an embodiment of the electronic device of the present application;
FIG. 3 is a flow chart of a preferred embodiment of the deep learning-based music performance style recognition method of the present application.
The realization of the objectives, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application and not to limit it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
It should be noted that descriptions involving "first", "second", and the like in the present application are for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with one another, provided that the combination can be realized by those of ordinary skill in the art; where a combination of technical solutions is contradictory or unrealizable, the combination should be deemed not to exist and falls outside the scope of protection claimed by the present application.
Referring to FIG. 1, a schematic diagram of an optional hardware architecture of the electronic device proposed by the present application: in this embodiment, the electronic device 10 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 communicably connected to one another through a communication bus 14. It should be noted that FIG. 1 shows only the electronic device 10 with components 11-14, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead.
The memory 11 includes at least one type of computer-readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical discs, and the like. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 10, such as its hard disk or internal memory. In other embodiments, the memory 11 may be an external storage device of the electronic device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 10. Of course, the memory 11 may also include both an internal storage unit of the electronic device 10 and an external storage device. In this embodiment, the memory 11 is generally used to store the operating system and the various application software installed on the electronic device 10, such as the deep learning-based music performance style recognition program. In addition, the memory 11 may be used to temporarily store various types of data that have been output or are to be output.
The processor 12 may in some embodiments be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data-processing chip. The processor 12 is generally used to control the overall operation of the electronic device 10. In this embodiment, the processor 12 is configured to run the program code or process the data stored in the memory 11, for example to run the deep learning-based music performance style recognition program.
The network interface 13 may include a wireless network interface or a wired network interface, and is generally used to establish communication connections between the electronic device 10 and other electronic devices.
The communication bus 14 is used to implement the communication connections among the components 11-13.
FIG. 1 shows only the electronic device 10 with the components 11-14 and the deep learning-based music performance style recognition program, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead.
Optionally, the electronic device 10 may further include a user interface (not shown in FIG. 1), which may include a display and an input unit such as a keyboard; the user interface may also include a standard wired interface, a wireless interface, and the like.
Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED touch display, or the like. The display may also be referred to as a display screen or display unit, and is used to display the information processed in the electronic device 10 and to display a visualized user interface.
In one embodiment, when the deep learning-based music performance style recognition program stored in the memory 11 is executed by the processor 12, the following operations are implemented:
acquiring the musical score corresponding to the piece of music to be performed;
annotating the acquired score with music dynamics according to the pre-trained music dynamics annotation model, so as to mark the music dynamics in the score;
determining the performance style of the piece according to the marked music dynamics.
Generally, a musical score records music by marking the duration and pitch of the notes in a piece with a large set of special symbols, along with other means. Music, however, is an art form rich in emotional expression, and a score by itself only helps people mechanically record and "read" music. To better bring out the expressiveness of the music, markings must be added: these may be notes made by the composer on the score or markings made by the performer, and in either case their purpose is to express or understand the deeper meaning of the music. Such markings are commonly referred to as music dynamics.
Music dynamics are the degrees of loudness of the notes in a piece. Dynamics markings may indicate, for example, a fixed level of loudness, a gradually changing loudness, or a changed loudness. Fixed-level dynamics include: strong, moderately strong, extremely strong, weak, moderately weak, and extremely weak. Gradually changing dynamics include: strengthening, gradually strengthening, weakening, and gradually weakening. Changed dynamics include: stronger, less strong, and the accenting of individual notes.
Generally, the dynamics marked in a piece can convey the emotional tone a work requires, and even its musical theme, mood, and effect. For example, the overall dynamics of a lullaby are certainly soft, while the overall dynamics of a work depicting heroic battle are certainly strong, and so on. In addition, some works heighten their drama by increasing dynamic contrast, making the music vivid and expressive. Sometimes a piece has several voices; if the composer wants to highlight one voice, loud dynamics markings can be placed on that voice and soft markings on the others, so that the principal and subordinate voices stand out clearly. There are many more such cases. In short, dynamics markings have a great effect on musical expression; for example, a performer can determine the performance style of the corresponding piece from the marked dynamics.
In this embodiment, the score corresponding to the piece to be performed is first acquired; the acquired score is then annotated with dynamics according to the pre-trained music dynamics annotation model, so as to mark the dynamics in the score corresponding to the piece; finally, the performance style of the piece is determined according to the marked dynamics.
In an embodiment of the present application, the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network model comprising the neuron nodes of a bidirectional recurrent neural network and a fully connected layer. The neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer. The forward layer and the feedback layer together constitute the hidden layer, which has two output channels: one is connected to the output layer, and the other is connected back to the hidden layer's input channel, so that information is retained over time and the next state can be inferred from the previous state.
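To make the architecture concrete, here is a minimal sketch in Python, assuming PyTorch; the class name, the label set, and the reading of the forward/feedback layers as the two directions of a bidirectional LSTM feeding a fully connected layer are illustrative assumptions, not part of the application.

```python
import torch
import torch.nn as nn

class DynamicsAnnotator(nn.Module):
    """Sketch of a bidirectional LSTM tagger: for each score symbol,
    predict a dynamics label (hypothetical 4-class label set)."""
    def __init__(self, vocab_size=4, hidden_size=3, num_labels=4, num_layers=3):
        super().__init__()
        # One-hot-sized input, matching the 1-of-k encoding described below.
        self.lstm = nn.LSTM(
            input_size=vocab_size,
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=True,   # forward layer + feedback (backward) layer
            batch_first=True,
        )
        # Fully connected layer maps both directions' states to label scores.
        self.fc = nn.Linear(2 * hidden_size, num_labels)

    def forward(self, x):          # x: (batch, seq_len, vocab_size)
        out, _ = self.lstm(x)      # out: (batch, seq_len, 2 * hidden_size)
        return self.fc(out)        # per-step confidence scores (logits)
```

With vocab_size=4 and hidden_size=3 this mirrors the 4-unit input/output layers and 3-dimensional hidden layer described below; the two LSTM directions play the roles of the forward and feedback layers, and their concatenated states form the hidden layer's channel to the output.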
In one embodiment, the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
In this embodiment, the predetermined music dynamics annotation model involves a model training process and a model testing process. The model training process includes the following steps (a sketch of the overall loop follows the list):
E. acquiring a preset number of dynamics-annotated scores (for example, MIDI files) from a predetermined data source, to form a preset number of samples;
F. dividing the samples into a training subset of a first proportion and a test subset of a second proportion;
G. training the music dynamics annotation model on the samples in the training subset, to obtain a trained music dynamics annotation model;
H. testing the trained music dynamics annotation model on the samples in the test subset; if the test passes, training ends, or, if the test fails, the number of samples in the test subset is increased and steps E, F, G, and H are executed again.
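As a rough illustration of how steps E-H could be orchestrated, consider the following sketch; `fit_model`, `evaluate`, and `acquire_more_samples` are hypothetical helpers (`evaluate` is sketched further below), and the 80/20 split ratio is an assumption.

```python
import random

def train_until_pass(samples, train_ratio=0.8):
    """Sketch of steps E-H: split the samples, train, test,
    and retry with more samples if the test fails."""
    # `samples` plays the role of step E's annotated scores
    # (e.g. dynamics-annotated scores parsed from MIDI files).
    while True:
        random.shuffle(samples)
        cut = int(len(samples) * train_ratio)
        train_set, test_set = samples[:cut], samples[cut:]  # step F
        model = fit_model(train_set)                        # step G
        if evaluate(model, test_set):                       # step H: pass -> done
            return model
        samples = samples + acquire_more_samples()          # enlarge pool, redo E-H
```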
For example, in this embodiment, suppose a sample in the training subset corresponds to the numbered-notation (jianpu) score "1234", and the music dynamics sequence that the trained model is expected to produce is assumed to be "12334". It should be noted that real annotated dynamics sequences are not represented this way; the sequence is assumed here only to illustrate the training process of the music dynamics annotation model vividly. The hypothetical dynamics sequence actually derives from the following four independent training cases:
1. the digit 2 may appear only after the digit 1 has appeared;
2. the digit 3 should appear after "12" has appeared;
3. the digit 3 may likewise appear after "123" has appeared;
4. the digit 4 should appear after "1233" has appeared.
During training, 1-of-k encoding is used (by character index: 1 means the input digit is at that index position, 0 means it is not) to encode each digit into a vector; one digit is then fed into the music dynamics annotation model at a time, and the model outputs a 4-dimensional vector (one dimension per digit), which is taken in turn as the confidence the model assigns to each candidate for the next digit in the dynamics sequence. In this embodiment, a three-layer LSTM bidirectional recurrent neural network model is selected, with 4-dimensional input and output layers (i.e., 4 units) and a 3-dimensional hidden layer. It should be noted that on the first training pass, the LSTM bidirectional recurrent neural network model reads the digit "1" and sets the confidence that "1" appears next to 1.0, the confidence for "2" to 2.2, the confidence for "3" to -3.0, and the confidence for "4" to 4.1. Because the digit that actually appears next in the training data is "2", training raises the confidence of this digit and lowers the confidence of the others. The usual practice is to use a cross-entropy loss function; in this embodiment this amounts to applying a Softmax classifier to each output vector, treating the index of the digit that actually appears next as the correct class, and training with mini-batch stochastic gradient descent, finally producing the trained music dynamics annotation model.
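A minimal sketch of the 1-of-k encoding and a single training update on the toy sequence, again assuming PyTorch and reusing the `DynamicsAnnotator` sketched earlier; the learning rate and the choice of plain SGD are illustrative.

```python
import torch
import torch.nn as nn

# 1-of-k encode the prefixes of the hypothetical sequence "12334":
# after "1" comes 2, after "12" comes 3, after "123" comes 3, after "1233" comes 4.
inputs = [1, 2, 3, 3]                                    # digits fed one at a time
targets = [2, 3, 3, 4]                                   # expected next digits
x = torch.eye(4)[[d - 1 for d in inputs]].unsqueeze(0)   # (1, 4, 4) one-hot vectors
y = torch.tensor([[d - 1 for d in targets]])             # class indices

model = DynamicsAnnotator()                  # sketched above
criterion = nn.CrossEntropyLoss()            # Softmax classifier + log loss per step
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # mini-batch SGD

logits = model(x)                            # (1, 4, 4) confidences for next digit
loss = criterion(logits.view(-1, 4), y.view(-1))
optimizer.zero_grad()
loss.backward()                              # raises the true next digit's confidence
optimizer.step()                             # and lowers the others
```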
Further, in this embodiment, the model testing process includes the following steps (sketched in code after the list):
the trained music dynamics annotation model annotates each sample in the test subset with music dynamics, to obtain dynamics-annotated samples;
each dynamics-annotated sample is compared with the pre-stored standard music dynamics of that sample;
if the number of dynamics-annotated samples whose error rate, compared with the pre-stored standard dynamics, exceeds the preset error threshold is greater than the preset sample-count threshold, the test of the music dynamics annotation model is determined to fail; or, if the number of dynamics-annotated samples whose error rate exceeds the preset error threshold is less than or equal to the preset sample-count threshold, the test of the music dynamics annotation model is determined to pass.
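The pass/fail rule could be coded as follows; `annotate` is a hypothetical helper that runs the trained model over one score, and both threshold values are illustrative.

```python
def evaluate(model, test_set, error_threshold=0.1, max_bad_samples=5):
    """Sketch of the testing process: a sample 'fails' when its annotation
    error rate exceeds the error threshold, and the model passes when the
    number of failing samples stays within the sample-count threshold."""
    bad = 0
    for score, standard_dynamics in test_set:
        predicted = annotate(model, score)    # hypothetical annotation helper
        errors = sum(p != s for p, s in zip(predicted, standard_dynamics))
        if errors / len(standard_dynamics) > error_threshold:
            bad += 1
    return bad <= max_bad_samples             # True: the model passes the test
```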
As can be seen from the above embodiments, the electronic device proposed by the present application acquires the score corresponding to the piece to be performed, annotates the acquired score with music dynamics according to the pre-trained music dynamics annotation model so as to mark the dynamics in the score, and determines the performance style of the piece according to the marked dynamics. This can improve the learning efficiency and results of beginners, and the method is simple, flexible, and practical.
It should further be noted that the deep learning-based music performance style recognition program of the present application may be described in terms of program modules grouped by the functions implemented by its parts. Referring to FIG. 2, a schematic diagram of the program modules of the deep learning-based music performance style recognition program in an embodiment of the electronic device of the present application: in this embodiment, the program may be divided, according to the functions implemented by its parts, into an acquisition module 201, an analysis module 202, and a recognition module 203. As is clear from the description above, a program module in the present application is a series of computer program instruction segments capable of performing a specific function, better suited than the program as a whole for describing the execution of the deep learning-based music performance style recognition program in the electronic device 10. The functions or operational steps implemented by the modules 201-203 are similar to those above and are not detailed again; by way of example (a minimal sketch of the three modules follows the list):
the acquisition module 201 is configured to acquire the musical score corresponding to the piece of music to be performed;
the analysis module 202 is configured to annotate the acquired score with music dynamics according to the pre-trained music dynamics annotation model, so as to mark the dynamics in the score;
the recognition module 203 is configured to determine the performance style of the piece according to the marked dynamics.
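The three modules might be tied together as a single pipeline, as in this sketch; `load_score`, `annotate`, and `infer_style` are hypothetical helpers standing in for the acquisition, analysis, and recognition logic.

```python
class StyleRecognizer:
    """Sketch of modules 201-203 chained into one pipeline."""
    def __init__(self, model):
        self.model = model                     # pre-trained dynamics annotation model

    def acquire(self, piece_id):               # module 201
        return load_score(piece_id)            # hypothetical score loader

    def analyze(self, score):                  # module 202
        return annotate(self.model, score)     # marks dynamics in the score

    def recognize(self, dynamics):             # module 203
        return infer_style(dynamics)           # hypothetical dynamics-to-style rule

    def run(self, piece_id):
        score = self.acquire(piece_id)
        dynamics = self.analyze(score)
        return self.recognize(dynamics)
```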
In addition, the present application also proposes a deep learning-based music performance style recognition method. Referring to FIG. 3, the deep learning-based music performance style recognition method includes the following steps:
Step S301: acquiring the musical score corresponding to the piece of music to be performed;
Step S302: annotating the acquired score with music dynamics according to the pre-trained music dynamics annotation model, so as to mark the music dynamics in the score;
Step S303: determining the performance style of the piece according to the marked music dynamics.
Generally, a musical score records music by marking the duration and pitch of the notes in a piece with a large set of special symbols, along with other means. Music, however, is an art form rich in emotional expression, and a score by itself only helps people mechanically record and "read" music. To better bring out the expressiveness of the music, markings must be added: these may be notes made by the composer on the score or markings made by the performer, and in either case their purpose is to express or understand the deeper meaning of the music. Such markings are commonly referred to as music dynamics.
Music dynamics are the degrees of loudness of the notes in a piece. Dynamics markings may indicate, for example, a fixed level of loudness, a gradually changing loudness, or a changed loudness. Fixed-level dynamics include: strong, moderately strong, extremely strong, weak, moderately weak, and extremely weak. Gradually changing dynamics include: strengthening, gradually strengthening, weakening, and gradually weakening. Changed dynamics include: stronger, less strong, and the accenting of individual notes.
Generally, the dynamics marked in a piece can convey the emotional tone a work requires, and even its musical theme, mood, and effect. For example, the overall dynamics of a lullaby are certainly soft, while the overall dynamics of a work depicting heroic battle are certainly strong, and so on. In addition, some works heighten their drama by increasing dynamic contrast, making the music vivid and expressive. Sometimes a piece has several voices; if the composer wants to highlight one voice, loud dynamics markings can be placed on that voice and soft markings on the others, so that the principal and subordinate voices stand out clearly. There are many more such cases. In short, dynamics markings have a great effect on musical expression; for example, a performer can determine the performance style of the corresponding piece from the marked dynamics.
In this embodiment, the score corresponding to the piece to be performed is first acquired; the acquired score is then annotated with dynamics according to the pre-trained music dynamics annotation model, so as to mark the dynamics in the score corresponding to the piece; finally, the performance style of the piece is determined according to the marked dynamics.
In an embodiment of the present application, the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network model comprising the neuron nodes of a bidirectional recurrent neural network and a fully connected layer. The neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer. The forward layer and the feedback layer together constitute the hidden layer, which has two output channels: one is connected to the output layer, and the other is connected back to the hidden layer's input channel, so that information is retained over time and the next state can be inferred from the previous state.
In one embodiment, the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
In this embodiment, the predetermined music dynamics annotation model involves a model training process and a model testing process. The model training process includes the following steps:
E. acquiring a preset number of dynamics-annotated scores (for example, MIDI files) from a predetermined data source, to form a preset number of samples;
F. dividing the samples into a training subset of a first proportion and a test subset of a second proportion;
G. training the music dynamics annotation model on the samples in the training subset, to obtain a trained music dynamics annotation model;
H. testing the trained music dynamics annotation model on the samples in the test subset; if the test passes, training ends, or, if the test fails, the number of samples in the test subset is increased and steps E, F, G, and H are executed again.
For example, in this embodiment, suppose a sample in the training subset corresponds to the numbered-notation (jianpu) score "1234", and the music dynamics sequence that the trained model is expected to produce is assumed to be "12334". It should be noted that real annotated dynamics sequences are not represented this way; the sequence is assumed here only to illustrate the training process of the music dynamics annotation model vividly. The hypothetical dynamics sequence actually derives from the following four independent training cases:
1. the digit 2 may appear only after the digit 1 has appeared;
2. the digit 3 should appear after "12" has appeared;
3. the digit 3 may likewise appear after "123" has appeared;
4. the digit 4 should appear after "1233" has appeared.
During training, 1-of-k encoding is used (by character index: 1 means the input digit is at that index position, 0 means it is not) to encode each digit into a vector; one digit is then fed into the music dynamics annotation model at a time, and the model outputs a 4-dimensional vector (one dimension per digit), which is taken in turn as the confidence the model assigns to each candidate for the next digit in the dynamics sequence. In this embodiment, a three-layer LSTM bidirectional recurrent neural network model is selected, with 4-dimensional input and output layers (i.e., 4 units) and a 3-dimensional hidden layer. It should be noted that on the first training pass, the LSTM bidirectional recurrent neural network model reads the digit "1" and sets the confidence that "1" appears next to 1.0, the confidence for "2" to 2.2, the confidence for "3" to -3.0, and the confidence for "4" to 4.1. Because the digit that actually appears next in the training data is "2", training raises the confidence of this digit and lowers the confidence of the others. The usual practice is to use a cross-entropy loss function; in this embodiment this amounts to applying a Softmax classifier to each output vector, treating the index of the digit that actually appears next as the correct class, and training with mini-batch stochastic gradient descent, finally producing the trained music dynamics annotation model.
Further, in this embodiment, the model testing process includes the following steps:
the trained music dynamics annotation model annotates each sample in the test subset with music dynamics, to obtain dynamics-annotated samples;
each dynamics-annotated sample is compared with the pre-stored standard music dynamics of that sample;
if the number of dynamics-annotated samples whose error rate, compared with the pre-stored standard dynamics, exceeds the preset error threshold is greater than the preset sample-count threshold, the test of the music dynamics annotation model is determined to fail; or, if the number of dynamics-annotated samples whose error rate exceeds the preset error threshold is less than or equal to the preset sample-count threshold, the test of the music dynamics annotation model is determined to pass.
As can be seen from the above embodiments, the deep learning-based music performance style recognition method proposed by the present application acquires the score corresponding to the piece to be performed, analyzes the acquired score according to the predetermined music dynamics annotation model so as to mark the dynamics in the score, and recognizes the performance style of the piece according to the marked dynamics. This can improve the learning efficiency and results of beginners, and the method is simple, flexible, and practical.
In addition, the present application also proposes a computer-readable storage medium storing a deep learning-based music performance style recognition program which, when executed by a processor, implements the following operations:
acquiring the musical score corresponding to the piece of music to be performed;
analyzing the acquired score according to the predetermined music dynamics annotation model, so as to mark the music dynamics in the score;
recognizing the performance style of the piece according to the marked dynamics.
The specific embodiments of the computer-readable storage medium of the present application are substantially the same as the embodiments of the electronic device and of the deep learning-based music performance style recognition method above, and are not repeated here.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
Through the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods described in the various embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

  1. An electronic device, characterized in that the electronic device comprises a memory and a processor connected to the memory, the processor being configured to execute a deep learning-based music performance style recognition program stored in the memory, the program implementing the following steps when executed by the processor:
    acquiring the musical score corresponding to the piece of music to be performed;
    annotating the acquired score with music dynamics according to a pre-trained music dynamics annotation model, so as to mark the music dynamics in the score;
    determining the performance style of the piece according to the marked music dynamics.
  2. The electronic device of claim 1, characterized in that the predetermined music dynamics annotation model involves a model training process and a model testing process, the model training process comprising the following steps:
    E. acquiring a preset number of dynamics-annotated scores from a predetermined data source, to form a preset number of samples;
    F. dividing the samples into a training subset of a first proportion and a test subset of a second proportion;
    G. training the music dynamics annotation model on the samples in the training subset, to obtain a trained music dynamics annotation model;
    H. testing the trained music dynamics annotation model on the samples in the test subset; if the test passes, training ends, or, if the test fails, increasing the number of samples in the test subset and executing steps E, F, G, and H again.
  3. The electronic device of claim 2, characterized in that the model testing process comprises the following steps:
    annotating the samples in the test subset with music dynamics using the trained music dynamics annotation model, to obtain dynamics-annotated samples;
    comparing each dynamics-annotated sample with the pre-stored standard music dynamics of that sample;
    if the number of dynamics-annotated samples whose error rate, compared with the pre-stored standard music dynamics, exceeds the preset error threshold is greater than the preset sample-count threshold, determining that the test of the music dynamics annotation model fails;
    or, if the number of dynamics-annotated samples whose error rate exceeds the preset error threshold is less than or equal to the preset sample-count threshold, determining that the test of the music dynamics annotation model passes.
  4. The electronic device of claim 1, characterized in that the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network model, the bidirectional recurrent neural network model comprising the neuron nodes of a bidirectional recurrent neural network and a fully connected layer;
    the neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer; the forward layer and the feedback layer together constitute a hidden layer having two output channels, one connected to the output layer and the other connected to the input channel of the hidden layer; the output layer is connected to the fully connected layer, which outputs the score annotated with music dynamics.
  5. The electronic device of claim 1, characterized in that the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
  6. The electronic device of claim 2, characterized in that the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
  7. The electronic device of claim 3, characterized in that the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
  8. The electronic device of claim 4, characterized in that the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
  9. A deep learning-based music performance style recognition method, characterized in that the method comprises the following steps:
    acquiring the musical score corresponding to the piece of music to be performed;
    annotating the acquired score with music dynamics according to a pre-trained music dynamics annotation model, so as to mark the music dynamics in the score;
    determining the performance style of the piece according to the marked music dynamics.
  10. The deep learning-based music performance style recognition method of claim 9, characterized in that the predetermined music dynamics annotation model involves a model training process and a model testing process, the model training process comprising the following steps:
    E. acquiring a preset number of dynamics-annotated scores from a predetermined data source, to form a preset number of samples;
    F. dividing the samples into a training subset of a first proportion and a test subset of a second proportion;
    G. training the music dynamics annotation model on the samples in the training subset, to obtain a trained music dynamics annotation model;
    H. testing the trained music dynamics annotation model on the samples in the test subset; if the test passes, training ends, or, if the test fails, increasing the number of samples in the test subset and executing steps E, F, G, and H again.
  11. The deep learning-based music performance style recognition method of claim 9, characterized in that the model testing process comprises the following steps:
    annotating the samples in the test subset with music dynamics using the trained music dynamics annotation model, to obtain dynamics-annotated samples;
    comparing each dynamics-annotated sample with the pre-stored standard music dynamics of that sample;
    if the number of dynamics-annotated samples whose error rate, compared with the pre-stored standard music dynamics, exceeds the preset error threshold is greater than the preset sample-count threshold, determining that the test of the music dynamics annotation model fails;
    or, if the number of dynamics-annotated samples whose error rate exceeds the preset error threshold is less than or equal to the preset sample-count threshold, determining that the test of the music dynamics annotation model passes.
  12. The deep learning-based music performance style recognition method of claim 10, characterized in that the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network model, the bidirectional recurrent neural network model comprising the neuron nodes of a bidirectional recurrent neural network and a fully connected layer;
    the neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer; the forward layer and the feedback layer together constitute a hidden layer having two output channels, one connected to the output layer and the other connected to the input channel of the hidden layer; the output layer is connected to the fully connected layer, which outputs the score annotated with music dynamics.
  13. The deep learning-based music performance style recognition method of claim 11, characterized in that the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network model, the bidirectional recurrent neural network model comprising the neuron nodes of a bidirectional recurrent neural network and a fully connected layer;
    the neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer; the forward layer and the feedback layer together constitute a hidden layer having two output channels, one connected to the output layer and the other connected to the input channel of the hidden layer; the output layer is connected to the fully connected layer, which outputs the score annotated with music dynamics.
  14. The deep learning-based music performance style recognition method of claim 13, characterized in that the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
  15. A computer-readable storage medium, storing a deep learning-based music performance style recognition program executable by at least one processor so as to cause the at least one processor to perform the following steps:
    acquiring the musical score corresponding to the piece of music to be performed;
    annotating the acquired score with music dynamics according to a pre-trained music dynamics annotation model, so as to mark the music dynamics in the score;
    determining the performance style of the piece according to the marked music dynamics.
  16. The computer-readable storage medium of claim 15, characterized in that the predetermined music dynamics annotation model involves a model training process and a model testing process, the model training process comprising the following steps:
    E. acquiring a preset number of dynamics-annotated scores from a predetermined data source, to form a preset number of samples;
    F. dividing the samples into a training subset of a first proportion and a test subset of a second proportion;
    G. training the music dynamics annotation model on the samples in the training subset, to obtain a trained music dynamics annotation model;
    H. testing the trained music dynamics annotation model on the samples in the test subset; if the test passes, training ends, or, if the test fails, increasing the number of samples in the test subset and executing steps E, F, G, and H again.
  17. The computer-readable storage medium of claim 15, characterized in that the model testing process comprises the following steps:
    annotating the samples in the test subset with music dynamics using the trained music dynamics annotation model, to obtain dynamics-annotated samples;
    comparing each dynamics-annotated sample with the pre-stored standard music dynamics of that sample;
    if the number of dynamics-annotated samples whose error rate, compared with the pre-stored standard music dynamics, exceeds the preset error threshold is greater than the preset sample-count threshold, determining that the test of the music dynamics annotation model fails;
    or, if the number of dynamics-annotated samples whose error rate exceeds the preset error threshold is less than or equal to the preset sample-count threshold, determining that the test of the music dynamics annotation model passes.
  18. The computer-readable storage medium of claim 16, characterized in that the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network model, the bidirectional recurrent neural network model comprising the neuron nodes of a bidirectional recurrent neural network and a fully connected layer;
    the neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer; the forward layer and the feedback layer together constitute a hidden layer having two output channels, one connected to the output layer and the other connected to the input channel of the hidden layer; the output layer is connected to the fully connected layer, which outputs the score annotated with music dynamics.
  19. The computer-readable storage medium of claim 17, characterized in that the predetermined music dynamics annotation model is a pre-trained bidirectional recurrent neural network model, the bidirectional recurrent neural network model comprising the neuron nodes of a bidirectional recurrent neural network and a fully connected layer;
    the neuron nodes of the bidirectional recurrent neural network form four layers: an input layer, a forward layer, a feedback layer, and an output layer; the forward layer and the feedback layer together constitute a hidden layer having two output channels, one connected to the output layer and the other connected to the input channel of the hidden layer; the output layer is connected to the fully connected layer, which outputs the score annotated with music dynamics.
  20. The computer-readable storage medium of claim 15, characterized in that the bidirectional recurrent neural network model is an LSTM (long short-term memory) bidirectional recurrent neural network model.
PCT/CN2018/102219 2018-04-28 2018-08-24 Electronic device, deep learning-based music performance style recognition method, and storage medium WO2019205383A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810403208.6 2018-04-28
CN201810403208.6A CN108766463B (zh) 2018-04-28 Electronic device, deep learning-based music performance style recognition method, and storage medium

Publications (1)

Publication Number Publication Date
WO2019205383A1 true WO2019205383A1 (zh) 2019-10-31

Family

ID=64008762

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/102219 WO2019205383A1 (zh) 2018-04-28 2018-08-24 Electronic device, deep learning-based music performance style recognition method, and storage medium

Country Status (2)

Country Link
CN (1) CN108766463B (zh)
WO (1) WO2019205383A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669796A (zh) * 2020-12-29 2021-04-16 西交利物浦大学 Artificial-intelligence-based method and apparatus for converting music into a musical score
CN114925742A (zh) * 2022-03-24 2022-08-19 华南理工大学 Auxiliary-task-based symbolic music emotion classification system and method
CN114925742B (zh) * 2022-03-24 2024-05-14 华南理工大学 Auxiliary-task-based symbolic music emotion classification system and method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109817192A (zh) * 2019-01-21 2019-05-28 深圳蜜蜂云科技有限公司 Intelligent practice-accompaniment method
CN111554255B (zh) * 2020-04-21 2023-02-14 华南理工大学 Automatic MIDI performance style conversion system based on recurrent neural networks
WO2022143679A1 (zh) * 2020-12-28 2022-07-07 新加坡鱼尾狮音乐教育品牌有限公司 Score analysis and annotation method and apparatus, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080314231A1 (en) * 2007-06-20 2008-12-25 Mixed In Key, Llc System and method for predicting musical keys from an audio source representing a musical composition
CN104978884A (zh) * 2015-07-18 2015-10-14 呼和浩特职业学院 Teaching system for music theory, sight-singing, and ear-training courses for preschool-education students
CN106297755A (zh) * 2016-09-28 2017-01-04 北京邮电大学 Electronic device and recognition method for musical score image recognition
CN106446952A (zh) * 2016-09-28 2017-02-22 北京邮电大学 Musical score image recognition method and apparatus
CN106529576A (zh) * 2016-10-20 2017-03-22 天津大学 Piano score difficulty recognition algorithm based on metric-learning-improved support vector machines
CN106548212A (zh) * 2016-11-25 2017-03-29 中国传媒大学 Doubly weighted kNN music genre classification method
WO2017072754A2 (en) * 2015-10-25 2017-05-04 Koren Morel A system and method for computer-assisted instruction of a music language

Also Published As

Publication number Publication date
CN108766463B (zh) 2019-05-10
CN108766463A (zh) 2018-11-06

Similar Documents

Publication Publication Date Title
WO2019205383A1 (zh) Electronic device, deep learning-based music performance style recognition method, and storage medium
CN107220235B (zh) Artificial-intelligence-based speech recognition error correction method and apparatus, and storage medium
CN106502896B (zh) Method and apparatus for generating function test code
TWI661319B (zh) Apparatus and method for generating control instructions from text, and computer program product thereof
US11531693B2 (en) Information processing apparatus, method and non-transitory computer readable medium
CN111738016A (zh) Multi-intent recognition method and related device
US9299264B2 (en) Sound assessment and remediation
WO2019196301A1 (zh) Electronic device, deep learning-based musical score recognition method and system, and storage medium
CN112002323A (zh) Speech data processing method and apparatus, computer device, and storage medium
CN114091568B (zh) Character- and word-level dual-granularity adversarial defense system and method for text classification models
US20220083742A1 (en) Man-machine dialogue method and system, computer device and medium
CN111539207B (zh) Text recognition method, text recognition apparatus, storage medium, and electronic device
CN107844531B (zh) Answer output method and apparatus, and computer device
CN111477200A (zh) Musical score file generation method and apparatus, computer device, and storage medium
CN111325031B (zh) Résumé parsing method and apparatus
CN113707111B (zh) Method and computer program for processing score data displayed on multiple lines into playback data
WO2021190660A1 (zh) Music chord recognition method and apparatus, electronic device, and storage medium
CN112732910B (zh) Cross-task text emotional state evaluation method, system, apparatus, and medium
Rajagopalan A user-friendly tool for metrical analysis of Sanskrit verse
JP4840051B2 (ja) Speech learning support device and speech learning support program
WO2015032303A1 (zh) Online handwriting authentication and template expansion method based on character radicals
CN115099222A (zh) Punctuation misuse detection and correction method, apparatus, device, and storage medium
Liu Make Python Talk: Build Apps with Voice Control and Speech Recognition
US20180046604A1 (en) Annotating chemical reactions
Opgen-Rhein et al. Requirements for Author Verification in Electronic Computer Science Exams.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18915914

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 28.01.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18915914

Country of ref document: EP

Kind code of ref document: A1