EP0367191B1 - Method and device for automatic music transcription - Google Patents

Method and device for automatic music transcription

Info

Publication number
EP0367191B1
Authority
EP
European Patent Office
Prior art keywords
information
acoustic signals
pitch
cpu
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP89120118A
Other languages
German (de)
English (en)
Other versions
EP0367191A3 (en)
EP0367191A2 (fr)
Inventor
Yoshinari Utsumi
Shichiro Tsuruta
Hiromi Fujii
Masaki Fujimoto (c/o NEC Scientific Information)
Masanori Mizuno (c/o NEC Scientific Information)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Home Electronics Ltd
NEC Corp
Original Assignee
NEC Home Electronics Ltd
NEC Corp
Nippon Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Home Electronics Ltd, NEC Corp and Nippon Electric Co Ltd
Publication of EP0367191A2
Publication of EP0367191A3
Application granted
Publication of EP0367191B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10G REPRESENTATION OF MUSIC; RECORDING MUSIC IN NOTATION FORM; ACCESSORIES FOR MUSIC OR MUSICAL INSTRUMENTS NOT OTHERWISE PROVIDED FOR, e.g. SUPPORTS
    • G10G1/00 Means for the representation of music
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/40 Rhythm
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0008 Associated control or indicating means
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/061 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/071 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for rhythm pattern analysis or rhythm style recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/081 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/086 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for transcription of raw audio or music data to a displayed or printed staff representation or to displayable MIDI-like note-oriented data, e.g. in pianoroll format

Definitions

  • This invention relates in general to an automatic music transcription method and system.
  • The invention is in the field of automatic music transcription and refers to an arrangement (method and apparatus) for preparing musical score data from acoustic signals.
  • Such acoustic signals may include vocal sounds, humming voices, and musical instrument sounds.
  • An automatic music transcription system transforms acoustic signals such as vocals, hummed voices, and musical instrument sounds into musical score data. To do so, the system must be able to detect from the acoustic signals basic items of information such as sound lengths, musical intervals, keys, time signatures, and tempos.
  • Acoustic signals, however, consist of continuous repetitions of fundamental waveforms, and the basic items of information needed to establish musical score data cannot be obtained from them directly.
  • European patent application no. 0 113 257 discloses a musical note display device in which musical signals are A/D converted and fast Fourier transform processed to determine pitch and power spectrum information. Such information is then correlated with a musical staff for display.
  • European patent application 0 142 935 discloses a voice recognition interval scoring system in which musical pitch is extracted from an acoustic signal and stored as data in a memory. Sound data thus stored may be used to operate a sound generator to produce a tone of corresponding pitch.
  • Acoustic signals produced by a user who performs or sings while keeping the tempo and the time for himself contain fluctuations in power and pitch. Because of these fluctuations, segmentation has proved difficult even when the power information and the pitch information are used.
  • Segmentation is an important element in the compilation of musical score data, and a lower degree of accuracy in segmentation results in considerably less accurate musical score data. It is therefore desirable to improve the accuracy of segmentation.
  • The present invention provides an automatic music transcription arrangement (apparatus and method) which is easier to use than known systems and which provides more accurate segmentation than can be obtained from them.
  • According to a first aspect, there is provided an arrangement for capturing acoustic signals and storing them in memory while reporting input auxiliary rhythm information, including at least tempo information, by an auditory and/or a visual process. The arrangement is incorporated in an automatic music transcription system which converts the acoustic signals into musical score data by a set of processes including at least: capturing the acoustic signals by means of an acoustic signal input means and storing them in memory; extracting from the stored acoustic signals the pitch information, which represents the repetitive cycles of their waveforms and hence their sound pitch, together with the power information; a segmentation process, which divides the acoustic signals into sections each of which can be regarded as representing a single musical interval level, on the basis of the pitch information and/or the power information; and a musical interval identifying process, which identifies the musical interval of each of the segments so obtained.
  • The system is thus designed to give users the input auxiliary information by an auditory and/or a visual process, so that they can generate the acoustic signals easily when those signals are captured and stored in memory for the music transcription process.
  • According to a second aspect, there is provided an automatic music transcription system similar to the first aspect, but in which the input auxiliary rhythm information is also stored in memory, on the same time axis, when the acoustic signals are captured and stored, and in which the segmentation process is divided into a first process for dividing the acoustic signals into sections which can be regarded as representing the same musical interval level on the basis of the input auxiliary rhythm information stored in memory, a second process for dividing the acoustic signals into sections which can be regarded as representing the same musical interval level on the basis of the pitch information and/or the power information, and a third process for making adjustments to the sections obtained by the first and second processes.
  • Because the system utilizes the input auxiliary rhythm information in this way, the accuracy of the segmentation process can be improved.
  • In other words, the system stores the input auxiliary rhythm information in memory at the same time as the acoustic signals are captured and stored. It then performs segmentation on the basis of this input auxiliary rhythm information, performs segmentation also on the basis of the pitch information and the power information, and makes adjustments to the results of the two segmentations.
  • According to a third aspect, there is provided a system including an input auxiliary rhythm reporting means whereby the input auxiliary rhythm information, including at least the tempo information, is reported by an auditory and/or a visual process when the acoustic signals are captured and stored in memory. The system is incorporated in an automatic music transcription system for converting the acoustic signals into musical score data and is provided at least with: means for capturing and taking the acoustic signals into the system; means for storing the acoustic signals so taken in; pitch and power extracting means, which extracts from the stored acoustic signals the pitch information, representing the repetitive cycle of their waveforms and their pitch level, together with the power information; segmentation means for dividing the acoustic signals into sections which can be regarded as representing the same musical interval level, as determined on the basis of the pitch information and the power information; and musical interval identifying means.
  • This system is designed so that its input auxiliary rhythm reporting means reports the input auxiliary rhythm information by an auditory and/or a visual process when the acoustic signals are captured and stored in memory.
  • According to a fourth aspect, there is provided a system having a memory means designed to store the input auxiliary rhythm information as well, on the same time axis, when the acoustic signals are captured and stored in memory, and provided with a segmenting means including a first segmenting section for dividing the acoustic signals into sections each of which can be regarded as forming one and the same musical interval level, as determined on the basis of the input auxiliary rhythm information stored in memory; a second segmenting section for dividing the acoustic signals into sections each of which can be regarded as forming one and the same musical interval level, as determined on the basis of the pitch information and the power information; and a third segmenting section for making adjustments to the sections divided by the first and second segmenting sections.
  • The memory means, which stores the acoustic signals, thus also keeps the input auxiliary rhythm information in memory on the same time axis as reported when the captured acoustic signals are stored. The first segmenting section performs its segmentation on the basis of this input auxiliary rhythm information, and the third segmenting section makes adjustments to the results of this segmentation and of the segmentation performed by the second segmenting section on the basis of the pitch information and the power information.
  • The accuracy of segmentation can thereby be improved.
  • FIGURE 2 is a block diagram of an automatic music transcription system incorporating the present invention.
  • A Central Processing Unit (CPU) 1 performs overall control of the entire system.
  • CPU 1 executes an acoustic signal input program shown in the FIGURE 1 flow chart and a music transcription processing program shown in the FIGURE 3 flow chart.
  • The acoustic signal input and music transcription processing programs are stored in a main storage device 3 connected to CPU 1 via a bus 2.
  • Also connected to bus 2 are a keyboard 4, which serves as an input device; a display unit 5, which serves as an output device; an auxiliary memory device 6 for use as working memory; and an analog/digital (A/D) converter 7.
  • An acoustic signal input device 8, which may comprise a microphone or the like, provides input to the A/D converter 7.
  • Acoustic signal input device 8 captures the acoustic signals of vocal songs, humming voices, or similar sound signals generated by musical instruments, transforms them into electrical signals, and outputs the electrical signals to the A/D converter 7.
  • Under control of CPU 1, speaker 10 generates, when necessary, the input auxiliary rhythm sounds representing the predetermined time and tempo.
  • CPU 1 operates in accordance with the acoustic signal input program flow charted in FIGURE 1, which is stored in main storage device 3, to input acoustic signals into the system. When a command to input the acoustic signals, together with the specified time and tempo, has been entered on the keyboard 4, the input acoustic signals are stored in orderly sequence in the auxiliary storage device 6. The system also temporarily stores the input auxiliary rhythm information in the auxiliary memory device 6.
  • Upon completion of the input of acoustic signals into the system, CPU 1 executes the music transcription processing program (flow charted in FIGURE 3) stored in the main storage device 3, thereby converting the input acoustic signals into musical score data and outputting such data to display unit 5 as required.
  • FIGURE 1 is a flow chart of the process for inputting acoustic signals.
  • When the CPU 1 receives a command by way of keyboard 4 to operate in its input mode, it starts executing the program flow charted in FIGURE 1. It first displays on the display unit 5 a prompt for the user to input time information and receives that information from the user via keyboard 4; display unit 5 then prompts the user to input tempo information, which is likewise received in response (Steps SP 1 and SP 2). Thereafter, the CPU 1 carries out arithmetic operations to determine the cycle and intensity of the input auxiliary rhythm sound on the basis of the time information and the tempo information, and then stands by for an input start command from keyboard 4 (Steps SP 3 and SP 4).
  • When an input start command is given by the user, the CPU 1 causes an input auxiliary rhythm sound to be generated from the speaker 10 and thereafter determines whether or not the rhythm sound so generated indicates the beginning of a measure.
  • If it does, the CPU 1 stores a corresponding mark in the auxiliary storage device 6 and then receives into the system the acoustic signals, as digital data processed through the acoustic signal input device 8 and the A/D converter 7; if it does not indicate the beginning of a measure, the CPU 1 immediately inputs the acoustic signals (Steps SP 5 through SP 8). Thereafter, the CPU 1 stores the acoustic signals so input in the auxiliary storage device 6 (Step SP 9).
  • The CPU 1 then determines whether or not a command to finish the input operation has been given by way of the keyboard 4. If a finish command has been given, the CPU 1 stops its series of operations. If no finish command has been given, the CPU 1 further determines whether or not it is time to generate the next input auxiliary rhythm sound (Steps SP 10 and SP 11). If it is not, the CPU 1 returns to Step SP 8 and continues taking acoustic signals into the system; if it is, the CPU 1 returns to Step SP 5 and moves on to the generation of the next input auxiliary rhythm sound.
  • In this manner, the system takes in the acoustic signals generated by the user while generating the input auxiliary rhythm sound, and stores the signals in orderly sequence, together with marks indicating the beginning of each measure, in the auxiliary storage device 6.
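Purely as an illustration and not part of the patent, the following Python sketch mimics this input loop (Steps SP 5 through SP 11) under simplifying assumptions: capture_block and play_click are hypothetical stand-ins for the acoustic signal input device 8 with the A/D converter 7 and for the speaker 10, and the captured samples and measure marks are kept in ordinary Python lists rather than in the auxiliary storage device 6.

```python
import numpy as np

SAMPLE_RATE = 8000   # assumed A/D sampling rate
BLOCK = 256          # samples captured per loop pass

def capture_block(n):          # hypothetical stand-in for input device 8 + A/D converter 7
    return np.zeros(n)         # a real system would return digitized microphone samples

def play_click(accented):      # hypothetical stand-in for the speaker 10
    pass

def record_with_clicks(beats_per_measure=4, tempo_bpm=120, n_measures=2):
    """Capture audio while emitting auxiliary rhythm clicks; return the samples
    together with the sample positions marking the beginning of each measure."""
    beat_period = int(round(SAMPLE_RATE * 60.0 / tempo_bpm))  # SP 3: cycle of the rhythm sound
    total = beat_period * beats_per_measure * n_measures
    samples, measure_marks = [], []
    next_click, beat_no, pos = 0, 0, 0
    while pos < total:                                  # SP 10: loop until the finish condition
        if pos >= next_click:                           # SP 11: time for the next rhythm sound?
            at_measure_start = (beat_no % beats_per_measure == 0)
            play_click(accented=at_measure_start)       # SP 5: emit the rhythm sound
            if at_measure_start:
                measure_marks.append(pos)               # SP 6-SP 7: store a measure-beginning mark
            next_click += beat_period
            beat_no += 1
        samples.append(capture_block(BLOCK))            # SP 8: take acoustic signal into the system
        pos += BLOCK                                    # SP 9: (here: keep it in a list)
    return np.concatenate(samples), measure_marks
```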
  • The feature of generating the input auxiliary rhythm sound thus makes it easy for a user to input the acoustic signals.
  • FIGURE 3 is a flow chart of the automatic music transcription process. This process does not occur until after the input of acoustic signals.
  • The CPU 1 extracts the pitch information of the acoustic signals for each analytical cycle by autocorrelation analysis of the acoustic signals, and extracts the power information for each analytical cycle by computing the square sum of the signals. CPU 1 then performs various pre-treatment processes, such as noise elimination and smoothing (Steps SP 21 and SP 22).
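By way of illustration only, the following sketch computes per-frame power as a square sum and estimates pitch from the peak of the autocorrelation, as described above; the frame length, sampling rate, pitch search range and the power floor used to declare a frame unvoiced are assumptions, not values taken from the patent.

```python
import numpy as np

def analyze_frames(signal, sample_rate=8000, frame_len=256,
                   f_min=80.0, f_max=800.0, power_floor=1e-4):
    """Return (pitches, powers): one pitch estimate (Hz or None) and one
    square-sum power value per analytical cycle (frame)."""
    lag_min = int(sample_rate / f_max)      # shortest waveform period searched
    lag_max = int(sample_rate / f_min)      # longest waveform period searched
    pitches, powers = [], []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        power = float(np.sum(frame ** 2))   # power information: square sum of the samples
        powers.append(power)
        if power < power_floor:             # very quiet frame: treat as unvoiced
            pitches.append(None)
            continue
        # autocorrelation of the frame; the lag of its highest peak within the
        # admissible range is taken as the repetitive cycle of the waveform
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        pitches.append(sample_rate / lag)   # pitch information in Hz
    return pitches, powers
```

A real implementation would precede and follow this with the pre-treatments mentioned above (noise elimination and smoothing).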
  • Next, the CPU 1 divides the input acoustic signals into predetermined sections on the basis of the marks placed at the beginning of each measure, as stored in the auxiliary storage device 6. It then reviews these sections on the basis of the changes in power, thereby subdividing them into segments which can each be regarded as representing the same sound (Steps SP 23 and SP 24).
  • Next, the CPU 1 performs a tuning process (Step SP 25).
  • In this process, CPU 1 calculates, from the distribution of the pitch information, the amount by which the musical interval axis of the acoustic signal deviates from the absolute musical interval axis, and shifts the obtained pitch information in accordance with that amount of deviation.
  • In other words, the CPU 1 modifies the pitch information so that the difference between the musical interval axis of the singer or instrument that generated the acoustic signal and the absolute musical interval axis becomes smaller.
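A minimal sketch of this tuning idea follows, assuming the shift is taken as the circular mean deviation of the voiced frames from the nearest semitone; the patent only states that the shift is derived from the distribution of the pitch information, so that choice of statistic is an assumption.

```python
import numpy as np

def hz_to_semitones(f_hz, ref_hz=440.0):
    return 12.0 * np.log2(np.asarray(f_hz, dtype=float) / ref_hz)

def tune(pitches_hz):
    """pitches_hz: per-frame pitch estimates in Hz, None for unvoiced frames."""
    voiced = [p for p in pitches_hz if p is not None]
    if not voiced:
        return list(pitches_hz)
    frac = hz_to_semitones(voiced)
    frac = frac - np.round(frac)                       # deviation from the nearest semitone
    # circular mean so deviations near +0.5 and -0.5 semitone do not cancel out
    offset = np.angle(np.mean(np.exp(2j * np.pi * frac))) / (2.0 * np.pi)
    factor = 2.0 ** (-offset / 12.0)                   # shift the whole pitch axis back
    return [None if p is None else p * factor for p in pitches_hz]
```

After this shift, each frame's pitch lies closer to a note of the absolute scale, which is what the interval identification of Steps SP 26 and SP 27 relies on.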
  • The CPU 1 then identifies the musical interval of each segment obtained by the above segmentation as the interval on the absolute musical interval axis to which the pitch information of that segment is closest, and executes the segmentation again on the basis of whether or not consecutive identified segments have identical musical intervals (Steps SP 26 and SP 27).
  • Next, the CPU 1 computes the product sum of the frequency of occurrence of each musical interval, obtained by totalling the tuned pitch information, and a prescribed weighting coefficient determined in correspondence to each candidate key; on the basis of the maximum of these product sums it determines the key of the input piece of music, for example C major or A minor. It then reviews the pitch information for the prescribed musical intervals of the scale of the determined key in greater detail and, where necessary, corrects the identified musical intervals (Steps SP 28 and SP 29).
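The following sketch illustrates the weighted product-sum idea over a pitch-class histogram. The simple 1/0 diatonic weights and the MIDI-note input are assumptions made for the example; the patent only requires prescribed weighting coefficients determined in correspondence to each key.

```python
import numpy as np

MAJOR_WEIGHTS = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1], float)  # scale tones of C major
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(midi_notes):
    """Pick the key whose weighting profile gives the largest product sum
    with the histogram of observed pitch classes."""
    hist = np.zeros(12)
    for n in midi_notes:
        hist[n % 12] += 1                  # frequency of occurrence of each interval
    scores = {NOTE_NAMES[t] + " major": float(np.dot(hist, np.roll(MAJOR_WEIGHTS, t)))
              for t in range(12)}
    return max(scores, key=scores.get)     # the relative minor (e.g. A minor for C major)
                                           # shares the same scale tones

# estimate_key([60, 62, 64, 65, 67, 69, 71, 72]) -> "C major"
```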
  • Thereafter, the CPU 1 carries out a final segmentation by reviewing the segmentation results on the basis of whether consecutive segments have the same finally determined musical interval and whether there is any change in power between consecutive segments (Step SP 30).
  • After the musical intervals and the segments (i.e. the sound lengths) have been determined in this manner, the CPU 1 produces the finalized musical score data, taking into account the time and tempo information which was input when the input of the acoustic signals was started (Step SP 31).
  • FIGURE 4 is a flow chart of the segmentation process based on the measure information and the power information, and FIGURE 5 is a flow chart showing that process in greater detail.
  • FIGURE 4 and FIGURE 5 relate to the segmentation process (Steps SP 23 and SP 24 in FIGURE 3) based on the measure information and the power information of the acoustic signals; FIGURE 4 illustrates the process at the functional level, while FIGURE 5 illustrates the details of what is shown in FIGURE 4.
  • The acoustic signal is squared at each sampling point within the analytical cycle, and the sum of those squared values is used as the power information of the acoustic signal for that analytical cycle.
  • First, the CPU 1 takes the marks for the beginning of each measure stored in the auxiliary storage device 6, divides each measure into four equal portions, and puts a mark indicating the beginning of a beat at the start of each portion (Step SP 40). If triple rather than quadruple time has been selected, each measure is divided into three equal portions instead. Next, the CPU 1 divides each beat so obtained into four equal portions and puts a mark for the beginning of a semiquaver (sixteenth note) at the start of each portion (Step SP 41). In this manner, each measure of the acoustic signals is divided into sixteen portions on the basis of the measure information (twelve portions when triple time has been selected). Thereafter, the CPU 1 reviews these divided portions on the basis of the power information.
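A minimal sketch of this subdivision (Steps SP 40 and SP 41), assuming the measure-beginning marks are available as a sorted list of analytical-point indices; subdivide is a hypothetical helper name.

```python
def subdivide(measure_marks, last_point, beats_per_measure=4, subdivisions_per_beat=4):
    """Place beat-beginning and semiquaver-beginning marks by equal division of
    each measure (SP 40) and of each beat (SP 41)."""
    beat_marks, semiquaver_marks = [], []
    bounds = list(measure_marks) + [last_point]          # treat the end of data as a final boundary
    for start, end in zip(bounds[:-1], bounds[1:]):
        beat_len = (end - start) / beats_per_measure     # SP 40: equal division of the measure
        for b in range(beats_per_measure):
            beat_start = start + b * beat_len
            beat_marks.append(int(round(beat_start)))
            for s in range(subdivisions_per_beat):       # SP 41: equal division of each beat
                semiquaver_marks.append(int(round(beat_start + s * beat_len / subdivisions_per_beat)))
    return beat_marks, semiquaver_marks

# For triple time, call subdivide(marks, last_point, beats_per_measure=3).
```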
  • The power information is reflected in the segmentation process because users tend to produce an intensification of power when they change the pitch of the sound, i.e. when they make a transition to the next sound.
  • CPU 1 then extracts the points at which the power information rises, putting a mark indicating a rise point at each such place; thereafter it takes the semiquaver-beginning mark located closest to each rise point and moves it, placing a semiquaver-beginning mark at the rise point itself (Steps SP 42 and SP 43).
  • Next, the CPU 1 counts the number of pieces of pitch information in each semiquaver section and puts a mark indicating the beginning of a rest at the start of each section in which that number is smaller than a threshold value (Step SP 44). Finally, the CPU 1 places a mark indicating the beginning of a segment at every point bearing a mark for the beginning of a measure, a rise point, or the beginning of a rest (Step SP 45). A segment-beginning mark is placed at the beginning of each measure because a single sound may extend over two measures, in which case it is the practice to show separate musical notes in the respective measures on the score.
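A hedged sketch of the rise-point extraction and of the snapping of semiquaver marks onto rise points (Steps SP 42 and SP 43) follows. The rise measure used here, a relative frame-to-frame power increase compared against a threshold, is only an assumption standing in for the patent's rise extraction function d(i), whose equation (1) is not reproduced in this text.

```python
def rise_points(powers, threshold=0.5, eps=1e-9):
    """Mark analytical points where the power starts to climb sharply."""
    marks, rising = [], False
    for i in range(1, len(powers)):
        d = (powers[i] - powers[i - 1]) / (powers[i - 1] + eps)  # assumed stand-in for d(i)
        if not rising and d >= threshold:
            marks.append(i)        # SP 77: a rise begins here
            rising = True
        elif rising and d < threshold:
            rising = False         # SP 78-SP 81: wait for the next rise
    return marks

def snap_to_rises(semiquaver_marks, rises):
    """Move the semiquaver-beginning mark closest to each rise point onto it."""
    marks = set(semiquaver_marks)
    for r in rises:
        if r in marks:
            continue                                        # SP 98-SP 99: already aligned
        # ties are resolved toward the preceding mark, as in Steps SP 109-SP 113
        nearest = min(marks, key=lambda m: (abs(m - r), m))
        marks.discard(nearest)
        marks.add(r)                                        # SP 100: mark the rise point itself
    return sorted(marks)
```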
  • In this manner, the system obtains a plurality of segments by division based on the measure information and the power information. Even if some of the segments obtained by this segmentation turn out to be inadequate, they will be rectified into proper segments by the segmentation executed at the subsequent steps mentioned above (Steps SP 27 and SP 30 in FIGURE 3).
  • In the detailed process of FIGURE 5, the CPU 1 first clears to zero the parameter i indicating the analytical cycle (because the analytical cycle is very short, it is hereafter called an analytical point) and then, after ascertaining that the analytical point data to be processed (which include pitch information and power information) have not yet been exhausted, judges whether or not a mark indicating the beginning of a measure is placed on that analytical point (Steps SP 50 through SP 52).
  • If no such mark is placed, the CPU 1 increments the parameter i for the analytical point and returns to Step SP 51 (Step SP 53); if such a mark is placed, the CPU 1 proceeds to Step SP 54 and the subsequent steps. In this manner, the CPU 1 finds the mark indicating the beginning of the first measure.
  • Next, the CPU 1 sets the parameter j to i + 1 and, after ascertaining that the analytical point data to be processed have not been exhausted, judges whether a mark indicating the beginning of a measure is placed on that analytical point (Steps SP 54 through SP 56). If no such mark is placed, the CPU 1 increments the parameter j and returns to Step SP 55; if such a mark is placed, the CPU 1 proceeds to Step SP 58 and the subsequent steps (Step SP 57).
  • At this stage, the parameter i indicates the analytical point at the former of two consecutive measure-beginning marks, while the parameter j indicates the analytical point at the latter of the two.
  • The CPU 1 then divides the section from analytical point i to analytical point j-1 into four equal portions (three equal portions in the case of triple time) and puts a beat-beginning mark at the start of each portion; it then sets the parameter i, which indicates the analytical point at the former measure-beginning mark, to j and returns to Step SP 54 to search for the analytical point bearing the next measure-beginning mark (Steps SP 58 and SP 59).
  • By repeating the loop of Steps SP 54 through SP 59, beat-beginning marks are placed one by one in orderly sequence in the individual measure sections, until the data of the final analytical point are taken out and an affirmative result is obtained at Step SP 55.
  • At that time, the CPU 1 places a beat-beginning mark at the analytical point indicated by the parameter i, thereby completing the series of processes for placing the beat-beginning marks, and then proceeds to Step SP 61 and the subsequent steps for placing the marks indicating the beginning of each semiquaver (Step SP 60).
  • If the CPU 1 obtains an affirmative result at Step SP 51 because it reaches the final data without finding any mark indicating the beginning of the initial measure, it proceeds to the processes for placing the semiquaver-beginning marks without placing any marks on such sections.
  • The portion of the process comprising Steps SP 50 through SP 60 corresponds to Step SP 40 in FIGURE 4.
  • The processes corresponding to Step SP 41 in FIGURE 4, in which the semiquaver-beginning marks are placed by finding each pair of consecutive beat-beginning marks and dividing the section between them into four equal portions, are almost identical to the processes of Steps SP 50 through SP 60; a detailed discussion is therefore omitted (Steps SP 61 through SP 71).
  • Next, the CPU 1 clears to zero the parameter i for the analytical point and, after ascertaining that the analytical point data to be processed have not yet been exhausted, computes the function d(i) for extracting a rise in the power information at that analytical point (Steps SP 72 through SP 74).
  • The CPU 1 then judges whether or not the value of the rise extraction function d(i) so obtained is smaller than the threshold value θd, and, if it is smaller, increments the parameter i for the analytical point and returns to Step SP 73 (Steps SP 75 and SP 76).
  • If d(i) is not smaller than the threshold value, the CPU 1 places a mark indicating a rise point at that analytical point (Step SP 77).
  • Next, the CPU 1 ascertains that the processing of all the analytical points has not yet been completed, computes the rise extraction function d(i), and judges whether or not it is smaller than the threshold value θd (Steps SP 78 through SP 80). If d(i) is not smaller than the threshold value, the CPU 1 increments the parameter i and returns to Step SP 78 (Step SP 81).
  • The process of Steps SP 78 through SP 81 thus finds the analytical point at which the rise extraction function d(i) falls below the threshold value θd after having once exceeded it. Since a further rise may occur after the analytical point thus obtained, the CPU 1 returns to Step SP 73 and resumes the rise-point extraction once it has found an analytical point where d(i) is smaller than the threshold value, i.e. once it obtains an affirmative result at Step SP 80.
  • Eventually the CPU 1 detects, at Step SP 73 or SP 78, that all the analytical points have been processed, and it then proceeds, at Step SP 82 and the subsequent steps, to review the rise points on the basis of the distance between adjacent rise points.
  • To do so, the CPU 1 clears to zero the parameter i for the analytical point and, after ascertaining that the analytical point data have not yet been exhausted, judges whether or not a mark indicating a rise point is placed on the analytical point (Steps SP 82 through SP 84). If the point is not a rise point, the CPU 1 increments the parameter i and returns to Step SP 83 (Step SP 85). Upon detecting a rise point through repetition of this process, the CPU 1 sets the length parameter L to the initial value "1" in order to measure the distance from that rise point to the next one (Step SP 86).
  • Next, the CPU 1 increments the analytical point parameter i and, after ascertaining that the analytical point data have not yet been exhausted, judges whether or not a mark indicating a rise point is placed on that analytical point (Steps SP 87 through SP 89). If the analytical point is not a rise point, the CPU 1 increments the length parameter L as well as the analytical point parameter i and returns to Step SP 88 (Steps SP 90 and SP 91).
  • When the next rise point is found in this manner, the length parameter L corresponds to the distance between the rise point currently being processed and the immediately preceding rise point, i.e. to the length between the preceding and following rise points.
  • The CPU 1 then judges whether or not this parameter L is smaller than the threshold value θL. If it is at or above the threshold value θL, the CPU 1 returns to Step SP 83 without eliminating any rise-point mark; if it is smaller than θL, the CPU 1 removes the former rise-point mark and then returns to Step SP 83 (Steps SP 92 and SP 93).
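A small sketch of this review, assuming the rise points are held as sorted analytical-point indices and using an arbitrary placeholder value for the threshold θL:

```python
def prune_close_rises(rises, min_gap=8):
    """Drop the earlier of two rise points whose distance is below θL (min_gap)."""
    kept = []
    for r in sorted(rises):
        if kept and r - kept[-1] < min_gap:
            kept[-1] = r          # distance below the threshold: discard the former rise point
        else:
            kept.append(r)
    return kept

# prune_close_rises([10, 12, 40]) -> [12, 40]; the rise point at 10 is removed.
```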
  • In either case, unless the analytical point data have been exhausted, the CPU 1 immediately obtains an affirmative result at Step SP 84, proceeds to the processing at the subsequent steps beginning with Step SP 86, and moves on to search for the mark following the one just found.
  • In this way the CPU 1 reviews the distances between all the rise points, and when it eventually obtains an affirmative result at Step SP 83 or Step SP 88, the series of processes for extracting the rise points in the power information is complete.
  • The process of Steps SP 72 through SP 93 corresponds to Step SP 42 shown in FIGURE 4.
  • When the process for extracting the rise points in the power information has been completed in this way, the CPU 1 first clears to zero the parameter i for the analytical point and then, after ascertaining that the data to be processed are not yet exhausted, judges whether or not a mark indicating a rise point in the power information is placed at that analytical point (Steps SP 94 through SP 96). If no such mark is placed, the CPU 1 increments the parameter i and returns to Step SP 95 (Step SP 97). When the CPU 1 finds a rise point in this manner, it judges whether or not a mark indicating the beginning of a semiquaver is placed on that analytical point i (Step SP 98).
  • If such a mark is already placed there, the CPU 1 increments the parameter i and returns to Step SP 95, proceeding to search for the next rise point, because no matching of that rise point with a semiquaver-beginning point is necessary (Step SP 99).
  • After each matching operation as well, the CPU 1 returns to the above-mentioned Step SP 95 and proceeds from there to search for the next rise point.
  • If no semiquaver-beginning mark is placed at the rise point, the CPU 1 puts a mark indicating the beginning of a semiquaver at the rise point, and then sets the parameter j to its initial value "1" in order to find the analytical point preceding the rise point that bears a semiquaver-beginning mark (Steps SP 100 and SP 101).
  • The CPU 1 then judges whether or not a semiquaver-beginning mark is placed on the analytical point i-j. If no such mark is placed there, the CPU 1 increments the parameter j and returns to Step SP 102 (Steps SP 102 through SP 104).
  • In this manner the CPU 1 finds the analytical point i-j that is closest to the rise point on its preceding side and bears a semiquaver-beginning mark, obtaining an affirmative result at Step SP 103.
  • The CPU 1 then sets the parameter k, which is used to find the analytical point bearing a semiquaver-beginning mark on the side following the rise point, to the initial value "1" (Step SP 105). Thereafter, the CPU 1 ascertains that the analytical point i+k does not exceed the final analytical point, i.e. that data are present there, and judges whether or not a semiquaver-beginning mark is placed on the analytical point i+k. If no such mark is placed, the CPU 1 increments the parameter k and returns to Step SP 106 (Steps SP 106 through SP 108).
  • In this manner the CPU 1 finds the analytical point i+k that is closest to the rise point on its following side and bears a semiquaver-beginning mark, obtaining an affirmative result at Step SP 107.
  • The CPU 1 then compares the two parameters j and k to judge which of the two analytical points is closer to the rise point. If the analytical point i-j on the preceding side is closer to the rise point (including the case where the two are equally close), the CPU 1 removes the semiquaver-beginning mark from the analytical point i-j, then increments the parameter i and proceeds to search for the next rise point.
  • Otherwise, the CPU 1 removes the semiquaver-beginning mark from the analytical point i+k, then increments the parameter i and proceeds to search for the next rise point (Steps SP 109 through SP 113).
  • In this way the CPU 1 places a semiquaver-beginning mark on every rise point while removing the semiquaver-beginning mark from the existing point closest to that rise point. When this process has been completed for all the analytical points, the matching of the rise points with the semiquaver-beginning points ends at Step SP 95. The process of Steps SP 94 through SP 113 corresponds to Step SP 43 of FIGURE 4.
  • Next, the CPU 1 clears to zero the parameter i for the analytical point and then, after ascertaining that the data to be processed are not yet exhausted, judges whether or not a semiquaver-beginning mark is placed on that analytical point (Steps SP 114 through SP 116). If no such mark is placed, the CPU 1 increments the parameter i and returns to Step SP 115 (Step SP 117).
  • When such a mark is found, the CPU 1 sets the parameter j, used for finding the next semiquaver-beginning mark, to i+1 and, after ascertaining that the analytical point data have not yet been exhausted, judges whether or not a semiquaver-beginning mark is placed on the analytical point j (Steps SP 118 through SP 120). If no such mark is placed, the CPU 1 increments the parameter j and returns to Step SP 119 (Step SP 121).
  • When the next semiquaver-beginning mark is found, the CPU 1 clears to zero the parameter n, which counts the analytical points having pitch, and sets the scanning parameter k for the processing of this section to i (Steps SP 122 and SP 123).
  • The CPU 1 then judges whether or not pitch information is present at the analytical point k, i.e. whether or not the analytical point k contains a voiced sound (Steps SP 124 and SP 125).
  • If pitch information is present, the CPU 1 increments the count parameter n, then also increments the parameter k and returns to Step SP 124.
  • If no pitch information is present, the CPU 1 immediately increments the parameter k and returns to Step SP 124 (Steps SP 125 and SP 126). Repetition of this process eventually produces an affirmative result at Step SP 124.
  • The parameter k thus varies within the range from i to j-1, and, when an affirmative result is obtained at Step SP 124, the count parameter n indicates the number of analytical points at which pitch information is present between the analytical point i and the analytical point j-1, i.e. between the preceding and following semiquaver-beginning marks.
  • The CPU 1 then judges whether or not the value of the count parameter n is larger than the prescribed threshold value θn. If it is smaller than θn, the CPU 1 puts a mark for the beginning of a rest at the analytical point i, the first analytical point of the counted section, where a semiquaver-beginning mark is placed; it then sets the parameter i to j and returns to Step SP 118.
  • Otherwise, the CPU 1 immediately sets the parameter i to j, returns to Step SP 118, and proceeds to search for the next analytical point bearing a semiquaver-beginning mark (Steps SP 128 through SP 130).
  • In this way, a rest-beginning mark is placed, one after another in orderly sequence, at the first analytical point of every section between consecutive semiquaver-beginning marks that contains too few analytical points with pitch information. Eventually an affirmative result is obtained at Step SP 115 or SP 119, and the series of processes for placing rest-beginning marks comes to an end.
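A compact sketch of this rest-marking pass (Steps SP 114 through SP 130), assuming per-point pitch values with None for unvoiced points and an arbitrary placeholder for the threshold θn:

```python
def rest_marks(semiquaver_marks, pitches, theta_n=2):
    """Mark the start of every semiquaver section that contains fewer than
    theta_n analytical points with pitch information as the beginning of a rest."""
    rests = []
    marks = sorted(semiquaver_marks)
    for start, end in zip(marks[:-1], marks[1:]):
        voiced = sum(1 for p in pitches[start:end] if p is not None)  # SP 122-SP 127
        if voiced < theta_n:                                          # SP 128: below θn
            rests.append(start)                                       # SP 129: a rest begins here
    return rests
```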
  • The process of Steps SP 114 through SP 130 corresponds to Step SP 44 of FIGURE 4.
  • Upon completion of the process of placing the rest-beginning marks, the CPU 1 clears to zero the analytical point parameter i and, after ascertaining that the analytical point data to be processed have not yet been exhausted, judges whether or not a measure-beginning mark is placed on that analytical point (Steps SP 131 through SP 133). If no measure-beginning mark is placed, the CPU 1 further judges whether or not a mark indicating a rise point in the power information is placed there (Step SP 134). If no rise-point mark is placed, the CPU 1 further judges whether or not a rest-beginning mark is placed there (Step SP 135). If no rest-beginning mark is placed either, the CPU 1 increments the parameter i and returns to Step SP 132 to check for a mark on the next analytical point (Step SP 136).
  • If one of these marks is present, the CPU 1 puts a mark on the analytical point to indicate the beginning of a segment, then increments the parameter i and returns to Step SP 132 to ascertain whether one of the prescribed marks is attached to the next analytical point (Steps SP 137 and SP 138).
  • In this way the CPU 1 places segment-beginning marks one by one on those analytical points which bear a measure-beginning mark, a rise-point mark, or a rest-beginning mark; the process eventually reaches the final data, an affirmative result is obtained at Step SP 132, and the series of processes for placing the segment-beginning marks is finished.
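Functionally, the result of this last pass is simply the union of the three kinds of marks; a one-line sketch under that reading:

```python
def segment_marks(measure_marks, rise_marks, rest_marks):
    """Steps SP 131-SP 138: a segment begins wherever a measure-beginning mark,
    a rise-point mark or a rest-beginning mark is placed."""
    return sorted(set(measure_marks) | set(rise_marks) | set(rest_marks))
```

These boundaries are then refined by the later steps (SP 25 through SP 30) using the pitch information and the identified musical intervals.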
  • The process of Steps SP 131 through SP 138 corresponds to Step SP 45 of FIGURE 4.
  • The CPU 1 thus finishes the segmentation based on the measure information and the power information and proceeds to the tuning process described above.
  • FIGURE 6 presents the changes in the pitch information PIT, the power information POW, and the rise extraction function d(i) over a one-measure section.
  • The "dual circle" mark represents the beginning of a measure;
  • the "white star" mark represents a rise point;
  • the "circle" mark indicates the beginning of a beat;
  • the "X" mark indicates the beginning of a semiquaver before the matching with a rise point is executed; and
  • the "triangle" mark shows the beginning of a rest. In this example of a one-measure section, the segment-beginning marks are therefore placed at the positions shown by the "black circle" marks as a result of the series of segmentation processes described above.
  • As described above, the system generates input auxiliary rhythm sounds to help users input acoustic signals. This makes the input simple and easy and allows the signals to be input with rhythmic accuracy, which in turn makes segmentation easier and improves the precision of the produced musical score data.
  • Furthermore, the system records the information on the input auxiliary rhythm sounds generated at input time on the same time axis as the acoustic signals, so that this information can be used for segmenting those signals.
  • This feature enhances the accuracy of segmentation, which in turn improves the precision of the musical score data produced.
  • The preferred embodiment employs the square sum of the acoustic signal as the power information, but another parameter may also be used.
  • For example, the square root of the square sum may be used.
  • Likewise, the rise extraction function has been obtained in the manner expressed in equation (1), but another function may be employed; it is acceptable, for example, to extract the rise in the power information using only the numerator of equation (1).
  • The system described above removes the mark of the preceding rise point when the distance between two consecutive rise points is short, but it is also acceptable to remove the mark of the following rise point instead.
  • In the embodiment, the system generates the input auxiliary rhythm sounds to let users input the acoustic signals with ease.
  • The rhythm information assisting the user with the input procedure may, however, also be provided in visual form.
  • The sounds of a metronome or rhythmic accompanying sounds could be used as the input auxiliary sounds.
  • The embodiment makes use of the measure-beginning information from the input auxiliary rhythm information for performing the segmentation process.
  • The beat-beginning information from the input auxiliary rhythm information may equally well be used for performing the segmentation process.
  • The preferred embodiment uses display unit 5 to output the musical score data, but a character printing device can be used in its place.
  • In the preferred embodiment, CPU 1 executes all the processes in accordance with the programs stored in the main storage device 3; however, some or all of the processes can be executed by a hardware system or sub-system.
  • For example, as illustrated in FIGURE 7, where identical reference numbers denote the parts corresponding to those shown in FIGURE 2, the acoustic signals from the acoustic signal input device 8 can be amplified by an amplifying circuit 11, passed through a pre-filter 12, and fed into an A/D converter 13, where they are converted into digital signals.
  • The acoustic signals thus converted into digital form are then subjected to autocorrelation analysis by a signal-processing processor 14, which thereby extracts the pitch information, and which may also extract the power information by computing the square sum of the signals; the pitch information and the power information can then be supplied to the CPU 1 for processing in software.
  • As the signal-processing processor 14 used in such a hardware construction (elements 11 through 14), it is possible to use a processor which is capable of real-time processing of the signals and which is provided with signals for establishing an interface with the host computer (for example, the µPD7720 made by Nippon Electric Corporation).
  • The preferred embodiment performs the initial segmentation on the basis of the input auxiliary rhythm information and the power information, but the system can also be designed to perform it on the basis of the input auxiliary rhythm information and the pitch information, or on the basis of the input auxiliary rhythm information together with both the power information and the pitch information.
  • As described above, the system according to this invention provides the user with input auxiliary rhythm information while the user inputs acoustic signals. This makes the input easier and simpler and allows the intended acoustic signals to be input with rhythmic accuracy, so that the segmentation of those signals becomes easier and the precision of the prepared musical score data is positively improved.
  • The system is also designed to record the input auxiliary rhythm information provided to the user on the same time axis as the acoustic signals, so that the recorded information is available for the segmentation process.
  • This feature makes it possible to perform accurate segmentation, thereby enhancing the precision of the musical score data generated by the system.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Auxiliary Devices For Music (AREA)

Claims (12)

  1. Method for automatically transcribing music, comprising the steps of:
    - inputting rhythm information (8; SP1, SP2);
    - receiving acoustic signals (8; SP5-SP8);
    - simultaneously with the receiving step, providing input auxiliary rhythm information comprising at least tempo information (1,10; SP4,SP5);
    - storing the acoustic signals in a memory (3,6; SP9);
    - extracting, from said acoustic signals stored in the memory, pitch information, which represents the repetitive cycles of their waveforms and the pitch of their sound, as well as acoustic power information derived from the input amplitude of the acoustic signals (1; SP21, SP22);
    - segmenting the acoustic signals on the basis of the pitch information and/or the power information, the segmentation process comprising dividing the acoustic signals into sections each of which can be regarded as forming a single relative musical pitch (1; SP23,SP24);
    - identifying the pitch of each of the segments by comparison with an absolute musical pitch axis (1; SP25-SP27); and
    - displaying/transmitting the results of the preceding steps (5; SP31).
  2. Automatic music transcription method according to claim 1, wherein the providing step comprises the step of providing an audio signal (5; SP10,SP11).
  3. Automatic music transcription method according to claim 1, wherein the providing step comprises the step of providing a video signal (5; SP10,SP11).
  4. Automatic music transcription method according to claim 1, wherein the providing step comprises the step of providing both audio and video signals (5; SP10,SP11).
  5. Automatic music transcription method according to any one of the preceding claims, further comprising the step of storing the auxiliary rhythms in the memory on the same time base as that of the acoustic signals (6; SP5-SP8) at the time when the aforementioned acoustic signals are received and stored.
  6. Automatic music transcription method according to any one of the preceding claims, wherein the segmentation step comprises the steps of:
    - first segmentation, on the basis of the input auxiliary rhythm information stored in the memory, of the acoustic signals into sections each of which can be regarded as forming one single relative musical pitch (SP23, SP24);
    - second segmentation, on the basis of the pitch information and the acoustic power information, of the acoustic signals into sections each of which can be regarded as forming one single relative musical pitch (SP25); and
    - third segmentation making corrections to those sections as divided into segments by the first and second steps (SP26,SP27).
  7. Automatic music transcription device, comprising:
    - means for inputting rhythm information (8; SP1,SP2);
    - means for receiving acoustic signals to be transcribed (8);
    - means for providing auxiliary rhythm information comprising tempo information at the time when the acoustic signals are received (1,10);
    - a memory (3,6);
    - means for processing the acoustic signals together with the rhythm information and storing them in the memory (1);
    - pitch and power extraction means for extracting, from the acoustic signals stored in the memory, pitch information, which represents a repetitive cycle of the waveforms of the acoustic signals and the relative musical pitch of those signals, as well as acoustic power information derived from the input amplitude of the acoustic signals (7);
    - segmentation means for dividing the acoustic signals into sections each of which can be regarded as forming a relative musical pitch, as determined on the basis of the pitch information and/or the acoustic power information (1); and
    - musical interval identification means for identifying the relative musical pitch of the aforementioned acoustic signals with reference to an absolute musical pitch axis.
  8. Automatic music transcription device according to claim 7, further comprising means for providing the input auxiliary rhythm information in an auditory form (9,10).
  9. Automatic music transcription device according to claim 7, further comprising means for providing the input auxiliary rhythm information in a visual form (5).
  10. Automatic music transcription device according to claim 7, further comprising means for providing the input auxiliary rhythm information in both auditory and visual form (5,9,10).
  11. Automatic music transcription device according to any one of claims 7 to 10, wherein said processing and storing means comprise means for storing the auxiliary rhythm information together with the acoustic signals in the memory on the same time base, at the time when the aforementioned acoustic signals are received and stored in the memory (3,6).
  12. Automatic music transcription device according to claim 11, wherein the segmentation means comprise:
    - a first segmentation section for segmenting, on the basis of the input auxiliary rhythm information stored in the memory, the acoustic signals into sections each of which can be regarded as forming one single relative musical pitch (SP23,SP24);
    - a second segmentation section for segmenting, on the basis of the pitch information and the acoustic power information, the acoustic signals into sections each of which can be regarded as forming one single relative musical pitch; and
    - a third segmentation section for making corrections to those sections as divided into segments by the first and second segmentation sections (SP26,SP27).
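
For illustration only, the Python sketch below loosely mirrors the processing steps recited in claims 1 and 6: per-frame pitch extraction from the repetitive waveform cycle (here estimated by autocorrelation), power extraction from the input amplitude, segmentation into sections that can each be taken as a single pitch, and identification of each section's pitch against an absolute pitch axis (here the equal-tempered semitone grid referenced to A4 = 440 Hz). All function names, parameters, and thresholds are assumptions of this sketch, not elements of the patent.

    # Hypothetical sketch of the claimed pipeline; not the patented implementation.
    import numpy as np

    def extract_pitch_and_power(signal, sample_rate, frame_size=1024, hop=512):
        """Per-frame pitch (Hz) from the dominant autocorrelation lag, and power from amplitude."""
        signal = np.asarray(signal, dtype=float)
        pitches, powers = [], []
        for start in range(0, len(signal) - frame_size + 1, hop):
            frame = signal[start:start + frame_size]
            powers.append(float(np.mean(frame ** 2)))
            # The lag of the strongest self-similarity approximates the repetitive waveform cycle.
            ac = np.correlate(frame, frame, mode="full")[frame_size - 1:]
            min_lag = max(1, sample_rate // 1000)       # ignore lags corresponding to pitches above ~1 kHz
            lag = min_lag + int(np.argmax(ac[min_lag:]))
            pitches.append(sample_rate / lag if ac[lag] > 0 else 0.0)
        return np.array(pitches), np.array(powers)

    def segment(pitches, powers, power_floor=1e-4, pitch_jump=0.03):
        """Candidate section boundaries where the power rises from silence or the pitch jumps."""
        boundaries = [0]
        for i in range(1, len(pitches)):
            onset = powers[i - 1] < power_floor <= powers[i]
            jump = pitches[i - 1] > 0 and abs(pitches[i] - pitches[i - 1]) > pitch_jump * pitches[i - 1]
            if onset or jump:
                boundaries.append(i)
        return boundaries

    def identify_pitch(pitch_hz, a4=440.0):
        """Quantise a pitch in Hz to the nearest semitone on the absolute pitch axis (MIDI number)."""
        if pitch_hz <= 0:
            return None
        return int(round(69 + 12 * np.log2(pitch_hz / a4)))

A further correction pass, in the spirit of the third segmentation of claims 6 and 12, could then reconcile these candidate boundaries with the beat grid sketched earlier, using the auxiliary rhythm information stored on the same time base as the acoustic signal.
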
EP89120118A 1988-10-31 1989-10-30 Méthode et dispositif de transcription musicale automatique Expired - Lifetime EP0367191B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP275740/88 1988-10-31
JP63275740A JP3047068B2 (ja) 1988-10-31 1988-10-31 自動採譜方法及び装置

Publications (3)

Publication Number Publication Date
EP0367191A2 EP0367191A2 (fr) 1990-05-09
EP0367191A3 EP0367191A3 (en) 1990-07-25
EP0367191B1 true EP0367191B1 (fr) 1993-12-29

Family

ID=17559732

Family Applications (1)

Application Number Title Priority Date Filing Date
EP89120118A Expired - Lifetime EP0367191B1 (fr) 1988-10-31 1989-10-30 Méthode et dispositif de transcription musicale automatique

Country Status (6)

Country Link
EP (1) EP0367191B1 (fr)
JP (1) JP3047068B2 (fr)
KR (1) KR920007206B1 (fr)
AU (1) AU631573B2 (fr)
CA (1) CA2001923A1 (fr)
DE (1) DE68911858T2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1729506B (zh) * 2002-12-20 2010-05-26 安布克斯英国有限公司 音频信号分析方法和设备

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0645757B1 (fr) * 1993-09-23 2000-04-05 Xerox Corporation Filtre sémantique de cooccurrence pour la reconnaissance de la parole et pour utilisations dans la transcription de signaux
US7386357B2 (en) * 2002-09-30 2008-06-10 Hewlett-Packard Development Company, L.P. System and method for generating an audio thumbnail of an audio track
US8208643B2 (en) * 2007-06-29 2012-06-26 Tong Zhang Generating music thumbnails and identifying related song structure
CN109979483B (zh) * 2019-03-29 2020-11-03 广州市百果园信息技术有限公司 音频信号的旋律检测方法、装置以及电子设备

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2279290A1 (fr) * 1974-07-15 1976-02-13 Anvar Procede et dispositif pour realiser sur un ecran de television ou equivalent, l'affichage d'ideogrammes ou autres signes, et notamment de notations musicales
JPS5924895A (ja) * 1982-08-03 1984-02-08 ヤマハ株式会社 楽譜表示装置における楽音情報の処理方法
DE3377951D1 (en) * 1982-12-30 1988-10-13 Victor Company Of Japan Musical note display device
JPS6090376A (ja) * 1983-10-24 1985-05-21 セイコーインスツルメンツ株式会社 音声認識式音程学習装置
JPS6090396A (ja) * 1983-10-24 1985-05-21 セイコーインスツルメンツ株式会社 音声認識式音程採譜装置
US4771671A (en) * 1987-01-08 1988-09-20 Breakaway Technologies, Inc. Entertainment and creative expression device for easily playing along to background music
DE68907616T2 (de) * 1988-02-29 1994-03-03 Nippon Denki Home Electronics Verfahren und Gerät zur Musiktranskribierung.

Also Published As

Publication number Publication date
AU631573B2 (en) 1992-12-03
JP3047068B2 (ja) 2000-05-29
KR900006908A (ko) 1990-05-09
AU4389489A (en) 1990-05-03
DE68911858D1 (de) 1994-02-10
CA2001923A1 (fr) 1990-04-30
EP0367191A3 (en) 1990-07-25
JPH02120893A (ja) 1990-05-08
EP0367191A2 (fr) 1990-05-09
KR920007206B1 (ko) 1992-08-27
DE68911858T2 (de) 1994-05-26

Similar Documents

Publication Publication Date Title
Durrieu et al. Source/filter model for unsupervised main melody extraction from polyphonic audio signals
US5038658A (en) Method for automatically transcribing music and apparatus therefore
Dixon On the computer recognition of solo piano music
Ryynänen et al. Transcription of the Singing Melody in Polyphonic Music.
CN101165773B (zh) 信号处理设备及方法
EP2688063B1 (fr) Analyse de séquence de notes
CN109979488B (zh) 基于重音分析的人声转乐谱系统
US9378719B2 (en) Technique for analyzing rhythm structure of music audio data
JP2002116754A (ja) テンポ抽出装置、テンポ抽出方法、テンポ抽出プログラム及び記録媒体
EP0367191B1 (fr) Méthode et dispositif de transcription musicale automatique
US6365819B2 (en) Electronic musical instrument performance position retrieval system
EP0331107B1 (fr) Procédé et dispositif pour la transcription de musique
CN113823270A (zh) 节奏评分的确定方法、介质、装置和计算设备
JP2604414B2 (ja) 自動採譜方法及び装置
JP2653456B2 (ja) 自動採譜方法及び装置
JPH01219627A (ja) 自動採譜方法及び装置
JP2604401B2 (ja) 自動採譜方法及び装置
JP2008015213A (ja) ビブラート検出方法、歌唱訓練プログラム及びカラオケ装置
JP2604405B2 (ja) 自動採譜方法及び装置
JP2604400B2 (ja) ピッチ抽出方法及び抽出装置
JP2604413B2 (ja) 自動採譜方法及び装置
JPH0934350A (ja) 歌唱トレーニング装置
JP2614631B2 (ja) 自動採譜方法及び装置
JP2604407B2 (ja) 自動採譜方法及び装置
Bapat et al. Pitch tracking of voice in tabla background by the two-way mismatch method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB

17P Request for examination filed

Effective date: 19901109

17Q First examination report despatched

Effective date: 19920302

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

REF Corresponds to:

Ref document number: 68911858

Country of ref document: DE

Date of ref document: 19940210

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 19951023

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 19951031

Year of fee payment: 7

Ref country code: DE

Payment date: 19951031

Year of fee payment: 7

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Effective date: 19961030

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 19961030

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Effective date: 19970630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Effective date: 19970701

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST