CN110998708A - Differential presentation device, differential presentation method, and differential presentation program


Info

Publication number
CN110998708A
Authority
CN
China
Prior art keywords
pitch
user
chord
name
differential
Legal status
Pending
Application number
CN201880050238.1A
Other languages
Chinese (zh)
Inventor
松本秀一
Current Assignee
Yamaha Corp
Original Assignee
Yamaha Corp
Application filed by Yamaha Corp
Publication of CN110998708A


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10G: REPRESENTATION OF MUSIC; RECORDING MUSIC IN NOTATION FORM; ACCESSORIES FOR MUSIC OR MUSICAL INSTRUMENTS NOT OTHERWISE PROVIDED FOR, e.g. SUPPORTS
    • G10G7/00: Other auxiliary devices or accessories, e.g. conductors' batons or separate holders for resin or strings
    • G10G7/02: Tuning forks or like devices

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Auxiliary Devices For Music (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

A reference pitch determining unit acquires a note name to be produced by a user as a target note name, and determines the pitch corresponding to the acquired target note name as a reference pitch based on tuning information. The tuning information indicates a correspondence between note names and pitches in a predetermined temperament. The predetermined temperament is, for example, an unequal temperament. An acquisition unit acquires the pitch of a sound produced by the user as a user pitch. A presentation unit presents difference information indicating the difference between the determined reference pitch and the acquired user pitch.

Description

Differential presentation device, differential presentation method, and differential presentation program
Technical Field
The present invention relates to a difference presentation device, a difference presentation method, and a difference presentation program for presenting a difference between a pitch of a sound emitted by a user and a reference pitch.
Background
In an ensemble performance, a player sometimes wishes to play at pitches tuned to the accompaniment tones. By tuning appropriately, sonorities in a desired temperament can be realized. Teaching devices for improving the skill of playing at a desired pitch have therefore been proposed.
Patent document 1 describes a harmony trainer having a meter that visually indicates the tuning state and beat phenomena in a tuner mode and a sound mode. With this harmony trainer, the player can produce a beautiful chord in the sound mode and recognize it aurally and visually, and can confirm the pitch of the sound produced by the player's own instrument in the tuner mode.
Patent document 1: Japanese Laid-Open Patent Publication No. 60-50594
Disclosure of Invention
When playing an instrument such as a wind instrument, a user can produce tones in various temperaments. Doing so requires a high level of performance skill, so the user needs practice in order to produce pitches in a desired temperament. Likewise, when the user sings, producing pitches in a desired temperament also requires practice. In such practice, it is desirable that the user can easily recognize the difference between the pitch of the performance tone or singing tone produced by the user and the pitch in the desired temperament.
An object of the present invention is to provide a difference presentation device, a difference presentation method, and a difference presentation program that allow a user to easily recognize the difference between the pitch of a sound produced by the user and a desired pitch.
According to one aspect of the present invention, a difference presentation device includes: a reference pitch determining unit that acquires a note name to be produced by a user as a target note name and determines the pitch corresponding to the acquired target note name as a reference pitch, based on tuning information indicating a correspondence between note names and pitches in a predetermined temperament; an acquisition unit that acquires the pitch of a sound produced by the user as a user pitch; and a presentation unit that presents difference information indicating the difference between the determined reference pitch and the acquired user pitch.
The predetermined temperament may be an unequal temperament. The unequal temperament may be just intonation, Pythagorean temperament, or meantone temperament. The presentation unit may display, as the difference information, a difference presentation image including a reference index corresponding to the determined reference pitch and a user pitch index corresponding to the user pitch, and may adjust the positional relationship between the reference index and the user pitch index on the difference presentation image such that the distance between them increases as the difference increases.
The reference pitch determining unit may acquire, as the target note name, the note name at the current position in the music from music data including a sequence of note names to be produced by the user in the music. The tuning information may represent the pitch corresponding to each note name as a pitch having an unequal-temperament relationship with respect to a base pitch, which is a pitch in equal temperament. The difference presentation device may further include a base pitch determining unit that determines the base pitch; the tuning information may represent the pitch corresponding to each note name as a pitch relative to the base pitch, and the reference pitch determining unit may determine the reference pitch based on the determined base pitch and the tuning information. The base pitch determining unit may determine the equal-temperament pitch of the tonic of the key of the music as the base pitch. The music data may further include a chord sequence corresponding to the note name sequence in the music, and the base pitch determining unit may acquire the chord at the current position in the music from the music data and determine the equal-temperament pitch of the root of the acquired chord as the base pitch.
The difference presentation device may further include: a chord waveform acquisition unit that acquires a chord waveform representing a chord pitch group including a plurality of pitches; and a chord specifying unit that specifies the chord corresponding to the chord pitch group based on the acquired chord waveform, and the reference pitch determining unit may acquire one of the plurality of note names constituting the specified chord as the target note name. The reference pitch determining unit may determine the target note name based on the acquired user pitch.
The tuning information may represent the pitch corresponding to each note name as a pitch having an unequal-temperament relationship with respect to a base pitch, which is a pitch in equal temperament. The difference presentation device may further include a base pitch determining unit that determines the base pitch; the tuning information may represent the pitch corresponding to each note name as a pitch relative to the base pitch, and the reference pitch determining unit may determine the reference pitch based on the determined base pitch and the tuning information. The base pitch determining unit may determine the base pitch based on the acquired chord waveform.
The difference presentation device may further include an instruction receiving unit that receives an instruction from the user to acquire a chord waveform, and the chord waveform acquisition unit may acquire, as the chord waveform, a waveform representing the sound being input at the time the instruction is received. The base pitch determining unit may acquire a plurality of pitches arranged in time series, specify the key based on the acquired pitches, and determine the base pitch based on the specified key.
According to another aspect of the present invention, a difference presentation method includes the steps of: acquiring a note name to be produced by a user as a target note name, and determining the pitch corresponding to the acquired target note name as a reference pitch based on tuning information indicating a correspondence between note names and pitches in a predetermined temperament; acquiring the pitch of a sound produced by the user as a user pitch; and presenting difference information indicating the difference between the determined reference pitch and the acquired user pitch.
According to yet another aspect of the present invention, a difference presentation program causes a computer to execute the steps of: acquiring a note name to be produced by a user as a target note name, and determining the pitch corresponding to the acquired target note name as a reference pitch based on tuning information indicating a correspondence between note names and pitches in a predetermined temperament; acquiring the pitch of a sound produced by the user as a user pitch; and presenting difference information indicating the difference between the determined reference pitch and the acquired user pitch.
ADVANTAGEOUS EFFECTS OF INVENTION
According to the present invention, the user can easily recognize the difference between the pitch of a produced tone and a desired pitch.
Drawings
Fig. 1 is a block diagram showing a configuration of a difference presentation device according to embodiment 1 of the present invention.
Fig. 2 is a block diagram showing a functional configuration of the difference presentation device of fig. 1.
Fig. 3 is a diagram showing an example of a differential presentation image.
Fig. 4 is a diagram showing an example of tuning information.
Fig. 5 is a diagram showing the relationship between the pitches of a scale and the pitch difference values.
Fig. 6 is a diagram for explaining a specific example of a method of determining a reference pitch based on the tuning information.
Fig. 7 is a diagram showing the reference pitch of the reference sound and the pitches of the tuned accompaniment tones in the chord "C" of Fig. 6.
Fig. 8 is a flowchart showing an example of the difference presentation process implemented by each functional unit of fig. 2.
Fig. 9 is a flowchart showing an example of the difference presentation process implemented by each functional unit of fig. 2.
Fig. 10 is a block diagram showing a functional configuration of the difference presentation device according to embodiment 2 of the present invention.
Fig. 11 is a diagram showing an example of a difference presentation image according to embodiment 2.
Fig. 12 is a diagram for explaining a specific example of processing using chord waveforms.
Fig. 13 is a flowchart showing an example of the chord determination process realized by each functional unit of fig. 10.
Fig. 14 is a flowchart of the user pitch adjustment process implemented by the functional units of fig. 10.
Detailed Description
The difference presentation device, the difference presentation method, and the difference presentation program according to the embodiments of the present invention will be described in detail below with reference to the drawings.
<A> Embodiment 1
[1] Configuration of the difference presentation device
Fig. 1 is a block diagram showing the configuration of the difference presentation device according to embodiment 1 of the present invention. As shown in Fig. 1, the difference presentation device 100 includes a sound input unit 1, an operation unit 4, and a display unit 6. The sound input unit 1, the operation unit 4, and the display unit 6 are connected to a bus 19. The sound input unit 1 includes a microphone, an amplifier, an analog-to-digital (A/D) conversion circuit, and the like, and inputs a sound produced by the user (hereinafter referred to as a user-produced sound) as audio data to a CPU (central processing unit) 11 described later. The user-produced sound includes a singing sound of the user and a performance sound of an instrument played by the user. The instrument may be, for example, a wind instrument or a string instrument, but is not limited to these; the difference presentation device 100 according to the present embodiment can be applied to the playing of various instruments.
The operation unit 4 includes a switch for performing an on-off operation, a rotary encoder for performing a rotary operation, a linear encoder for performing a slide operation, and the like, and is used for starting playback of a song, stopping playback of a song, adjusting a volume, turning on and off a power supply, and performing various settings. The display unit 6 includes, for example, a liquid crystal display, and displays various information related to musical performance, settings, and the like. At least a part of the operation unit 4 and the display unit 6 may be constituted by a touch panel display.
The difference presentation device 100 further includes a RAM (random access memory) 9, a ROM (read only memory) 10, a CPU 11, a timer 12, and a storage device 13. The RAM 9, the ROM 10, the CPU 11, and the storage device 13 are connected to the bus 19, and the timer 12 is connected to the CPU 11. An external device such as an external storage device 15 can be connected to the bus 19 via a communication I/F (interface) 14. The RAM 9, the ROM 10, the CPU 11, and the timer 12 constitute a computer PC.
The RAM 9 is constituted by, for example, a volatile memory, is used as a work area of the CPU 11, and temporarily stores various data. The ROM 10 is constituted by, for example, a nonvolatile memory, and stores computer programs such as a control program and the difference presentation program. The CPU 11 performs the difference presentation processing described later by executing the difference presentation program stored in the ROM 10 on the RAM 9. The timer 12 gives time information such as the current time to the CPU 11.
The storage device 13 includes a storage medium such as a hard disk, an optical disk, a magnetic disk, or a memory card, and stores one or more pieces of music data. The music data is, for example, minus-one music data (accompaniment data with the user's part omitted) used for singing or playing, and includes reference sound data, accompaniment sound data, and score data.
Hereinafter, a sound to be produced by the user in the music is referred to as a reference sound. The reference sound data is data specifying the sequence of note names of the reference sound in the music, and is, for example, MIDI (Musical Instrument Digital Interface) data. The note names in the note name sequence may be represented by characters such as 'C', 'D', 'E', 'ハ', 'ニ', or 'ホ' (where 'ハ', 'ニ', and 'ホ' are Japanese note names), or may be represented by symbols, numerical values, or the like. The accompaniment sound data is data representing accompaniment sounds and includes a chord sequence corresponding to the note name sequence specified by the reference sound data. The accompaniment sound data may be audio data or MIDI data.
Note that a chord in the chord sequence represents a combination of two or more tones, and includes both a combination of two or more tones having mutually different pitches (a polyphonic chord) and a combination of two or more tones at the same pitch or at pitches n octaves apart, i.e., at an interval of 1 + 7 × n degrees, where n is a positive integer (a unison).
The storage device 13 may store the above-described difference presentation program instead of the ROM 10. The storage device 13 also stores tuning information indicating a correspondence between note names and pitches in a predetermined temperament. Specifically, the tuning information represents the pitch corresponding to each note name as a pitch relative to a base pitch. The predetermined temperament may be an equal temperament or an unequal temperament. In the present embodiment, the predetermined temperament is an unequal temperament. Therefore, the tuning information expresses the pitch corresponding to each note name as a pitch having an unequal-temperament relationship with respect to the base pitch.
Examples of unequal temperaments include just intonation, Pythagorean temperament, and meantone temperament. Just intonation is a temperament in which the frequency ratios between tones are expressed as simple integer ratios. Pythagorean temperament is built from pitches related by a frequency ratio of 3:2. Meantone temperament makes the perfect fifth slightly narrower than a pure fifth in order to preserve the purity of the major third.
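As a rough numerical illustration (not part of the embodiment), the Python sketch below computes how far a few of these intervals deviate from equal temperament, using the standard ratios: a pure fifth of 3:2, a pure major third of 5:4, and the quarter-comma meantone fifth of 5^(1/4).

    import math

    def cents(ratio: float) -> float:
        """Size of a frequency ratio in cents (1 octave = 1200 cents)."""
        return 1200.0 * math.log2(ratio)

    # Deviation of some just (pure) intervals from equal temperament.
    just_major_third = cents(5 / 4) - 400.0      # about -13.7 cents
    just_perfect_fifth = cents(3 / 2) - 700.0    # about +2.0 cents

    # Pythagorean temperament: intervals stacked from pure fifths (3:2).
    pythagorean_major_third = cents((3 / 2) ** 4 / 4) - 400.0   # about +7.8 cents

    # Quarter-comma meantone: four fifths give a pure major third two octaves up,
    # so each fifth is about 3.4 cents narrower than the equal-temperament fifth
    # (about 5.4 cents narrower than a pure fifth).
    meantone_fifth = cents(5 ** 0.25) - 700.0

    print(round(just_major_third, 1), round(just_perfect_fifth, 1),
          round(pythagorean_major_third, 1), round(meantone_fifth, 1))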
The external storage device 15 may include a storage medium such as a hard disk, an optical disk, a magnetic disk, or a memory card, and may store at least one of the music data, the tuning information, and the difference presentation program, as with the storage device 13.
The difference presentation program in the present embodiment may be provided stored in a recording medium readable by the computer PC and installed in the ROM 10 or the storage device 13. When the communication I/F 14 is connected to a communication network, a difference presentation program delivered from a server connected to the communication network may be installed in the ROM 10 or the storage device 13. Similarly, the music data or the tuning information may be acquired from a storage medium or from a server connected to a communication network. Note that the music data need not be stored in advance in a storage medium such as the storage device 13 or the external storage device 15, and may instead be acquired sequentially from a performance operation by the user on a MIDI keyboard or the like.
The difference presentation device 100 further includes a sound output unit 8. The sound output unit 8 includes a sound source 16 and a sound system 18. The sound source 16 is connected to the bus 19, and the sound system 18 is connected to the sound source 16. The sound source 16 generates musical tone signals representing the accompaniment from the accompaniment sound data. The sound system 18 includes a digital-to-analog (D/A) conversion circuit, an amplifier, and a speaker. The sound system 18 converts the musical tone signal given from the sound source 16 into an analog sound signal, and generates the accompaniment sound based on the analog sound signal.
[2] Functional configuration of the difference presentation device
Fig. 2 is a block diagram showing the functional configuration of the difference presentation device 100 of Fig. 1. As shown in Fig. 2, the difference presentation device 100 includes a music data acquisition unit 51, a playback unit 52, a base pitch determining unit 53, a reference pitch determining unit 54, a tuning unit 55, a pitch acquisition unit 56, and a presentation unit 57. The functions of these units are realized by the CPU 11 executing the difference presentation program.
The music data acquisition unit 51 acquires music data from the storage device 13 in response to an operation of the operation unit 4. The playback unit 52 sequentially supplies the music data acquired by the music data acquisition unit 51 to the base pitch determining unit 53 and the reference pitch determining unit 54. The music is thereby played back. Here, the position in the music data that is supplied from the playback unit 52 to the base pitch determining unit 53 and the reference pitch determining unit 54 at the current time is referred to as the current position in the music.
The base pitch determining unit 53 includes a chord acquisition unit 53a and a reference determination unit 53b. The chord acquisition unit 53a acquires the chord at the current position in the music from the music data acquired by the music data acquisition unit 51. The reference determination unit 53b determines the equal-temperament pitch of the root of the chord acquired by the chord acquisition unit 53a as the base pitch.
The reference pitch determining unit 54 includes a note name acquisition unit 54a and a reference determination unit 54b. The note name acquisition unit 54a acquires the note name at the current position in the music as the current note name of the reference sound from the reference sound data in the music data acquired by the music data acquisition unit 51. The current note name is an example of the target note name. The note name acquisition unit 54a also acquires the note name at the current position in the music as the current note name of the accompaniment sound from the accompaniment sound data in the music data. Here, the user can select the type of unequal temperament used in the tuning information by operating the operation unit 4. When the user selects a type, the note name acquisition unit 54a sets the unequal temperament to the selected type. When the user does not select a type, the note name acquisition unit 54a sets the unequal temperament to just intonation. In the following description, the unequal temperament is just intonation.
The reference determination unit 54b sequentially determines, as the reference pitch, the pitch corresponding to the current note name of the reference sound acquired by the note name acquisition unit 54a, based on the unequal temperament set by the note name acquisition unit 54a, the tuning information stored in the storage device 13, and the base pitch determined by the reference determination unit 53b. Similarly, the reference determination unit 54b sequentially determines, as the accompaniment pitch, the pitch corresponding to the current note name of the accompaniment sound acquired by the note name acquisition unit 54a, based on the unequal temperament, the tuning information, and the base pitch.
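To make the arithmetic concrete, the following sketch (editor's illustration; the per-root dictionary and helper names are assumptions, not the embodiment's data format) derives a reference frequency from the equal-temperament frequency of the base pitch, the semitone distance of the target note name from the root, and the cent offset taken from tuning information such as that of Fig. 4.

    # Hypothetical excerpt of tuning information for root 'C':
    # cent offsets of just intonation relative to equal temperament.
    CENT_OFFSETS_ROOT_C = {"C": 0.0, "D": 3.9, "E": -13.7, "F": -2.0,
                           "G": 2.0, "A": -15.6, "B": -11.7}
    SEMITONES_FROM_C = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

    def reference_frequency(base_freq_hz: float, note_name: str) -> float:
        """Frequency of `note_name` tuned against an equal-temperament base pitch
        (the chord root) using the cent offsets of the tuning information."""
        semitones = SEMITONES_FROM_C[note_name]
        cents = CENT_OFFSETS_ROOT_C[note_name]
        return base_freq_hz * 2.0 ** ((semitones * 100.0 + cents) / 1200.0)

    # Example: root 'C' at its equal-temperament frequency of about 261.63 Hz;
    # the reference tone 'E' is lowered by 13.7 cents, a pure major third above 'C'.
    print(round(reference_frequency(261.63, "E"), 2))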
The tuning unit 55 tunes the accompaniment tones in the accompaniment sound data from the playback unit 52 based on the accompaniment pitch determined by the reference determination unit 54b, and outputs accompaniment sound data representing the tuned accompaniment tones to the sound output unit 8. The tuned accompaniment sound is thereby output from the sound output unit 8. When the user produces a sound along with the accompaniment sound output from the sound output unit 8, harmony is achieved. The user can input the user-produced sound to the sound input unit 1 while listening to the accompaniment sound emitted from the sound output unit 8.
The pitch acquisition unit 56 is an example of the acquisition unit, and sequentially acquires the pitch of the user-produced sound as the user pitch based on the audio data input from the sound input unit 1. The presentation unit 57 presents difference information indicating the difference between the reference pitch determined by the reference determination unit 54b and the user pitch acquired by the pitch acquisition unit 56. The difference information is presented, for example, by displaying an image (hereinafter referred to as a difference presentation image) on the screen of the display unit 6.
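The embodiment does not specify how the pitch acquisition unit 56 estimates the pitch from the audio data; an autocorrelation-based fundamental-frequency estimate, as sketched below, is one common possibility.

    import numpy as np

    def estimate_user_pitch(frame: np.ndarray, sample_rate: int,
                            fmin: float = 60.0, fmax: float = 1500.0) -> float:
        """Rough fundamental-frequency estimate of one audio frame via
        autocorrelation; one of many possible pitch detectors."""
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lag_min = int(sample_rate / fmax)
        lag_max = int(sample_rate / fmin)
        lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
        return sample_rate / lag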
Fig. 3 is a diagram showing an example of the difference presentation image. The difference presentation image DI of Fig. 3 is an image simulating an analog tuner used for tuning an instrument, and includes a needle portion 71, a plurality of scales SC, and a reference scale SCa. The reference scale SCa, located at the center of the plurality of scales SC, corresponds to the reference pitch determined by the reference pitch determining unit 54, and the position of the tip of the needle portion 71 corresponds to the user pitch acquired by the pitch acquisition unit 56. That is, the reference scale SCa is an example of the reference index, and the needle portion 71 is an example of the user pitch index.
The positional relationship between the reference scale SCa and the tip of the needle portion 71 on the difference presentation image DI is adjusted such that the distance between them increases as the difference between the reference pitch and the user pitch increases. For example, as shown in Fig. 3, when the tip of the needle portion 71 is to the left of the reference scale SCa, the user pitch is lower than the reference pitch. Conversely, when the tip of the needle portion 71 is to the right of the reference scale SCa, the user pitch is higher than the reference pitch. When the tip of the needle portion 71 coincides with the reference scale SCa, the user pitch matches the reference pitch.
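One plausible way to realize this behavior, assuming the difference is measured in cents and clipped to a fixed display range (the values below are chosen only for illustration), is:

    import math

    def cents_difference(user_freq_hz: float, reference_freq_hz: float) -> float:
        """Signed difference of the user pitch from the reference pitch, in cents."""
        return 1200.0 * math.log2(user_freq_hz / reference_freq_hz)

    def needle_offset(user_freq_hz: float, reference_freq_hz: float,
                      max_cents: float = 50.0, half_width_px: int = 100) -> int:
        """Horizontal offset of the needle tip from the reference scale SCa:
        negative (left) when the user pitch is flat, positive (right) when sharp."""
        diff = cents_difference(user_freq_hz, reference_freq_hz)
        diff = max(-max_cents, min(max_cents, diff))
        return round(diff / max_cents * half_width_px)

    # E.g. a user pitch 20 cents flat of the reference lands 40 px left of SCa.
    print(needle_offset(user_freq_hz=327.0 * 2 ** (-20 / 1200), reference_freq_hz=327.0))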
By viewing the difference presentation image DI, the user can visually recognize the difference between the user pitch and the reference pitch at the current time. The user can therefore easily adjust the user-produced sound so that it harmonizes with the accompaniment tones. The difference information may be presented by other means, such as light or sound, instead of the difference presentation image DI. For example, a difference indicator lamp that indicates the difference information by blinking may be provided. Alternatively, the sound output unit 8 of Fig. 1 may output a difference notification sound indicating the difference information.
[3] Tuning information
In this example, based on the tuning information, the pitch corresponding to each note name is determined so as to have a just-intonation relationship with a base pitch, which is a pitch in equal temperament. Hereinafter, a pitch having a just-intonation relationship with the base pitch is referred to as a just-intonation pitch.
Fig. 4 is a diagram showing an example of the tuning information. The tuning information of Fig. 4 indicates, for each note name, the difference between its equal-temperament pitch and its just-intonation pitch (hereinafter referred to as the pitch difference value). Each row of the tuning information in Fig. 4 gives the pitch difference values of the note names within one octave when one note name is taken as the root. The pitch difference values are expressed in cents. For example, when the root note name is 'C', the pitch difference values of the note names 'C#' and 'Bb' are '11.7' and '17.9', respectively. The pitch difference value of a tone that is n octaves away from the root is '0'. Using tuning information of this kind, a just-intonation pitch can be determined easily.
Fig. 5 is a diagram showing the relationship between the pitches of a scale and the pitch difference values. Fig. 5 schematically shows the equal-temperament pitch and the just-intonation pitch for each of the seven note names, degrees I through VII, of the scale constituting a major key. In the example of Fig. 5, the pitch of degree I, the tonic of the scale, is taken as the base pitch. In this case, the pitch difference values of degrees I, II, III, ..., VII are 0, 3.9, -13.7, and so on. As a characteristic of just intonation, in a major scale the just-intonation pitch of degree III is lower than its equal-temperament pitch (see Fig. 5), whereas in a minor scale the just-intonation pitch of degree III is higher than its equal-temperament pitch.
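The quoted difference values follow from the standard just-intonation ratios of the major scale; the short check below (editor's illustration, independent of the embodiment) reproduces them.

    import math

    # Standard just-intonation ratios of the major scale relative to degree I.
    JUST_RATIOS = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8]
    EQUAL_SEMITONES = [0, 2, 4, 5, 7, 9, 11]

    for degree, (ratio, semis) in enumerate(zip(JUST_RATIOS, EQUAL_SEMITONES), start=1):
        offset = 1200 * math.log2(ratio) - 100 * semis
        print(f"degree {degree}: {offset:+.1f} cents")   # I: +0.0, II: +3.9, III: -13.7, ...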
Fig. 6 is a diagram for explaining a specific example of a method of determining the reference pitch based on the tuning information. Fig. 6 shows a chord "C" and a chord "FonC". The chord "C" is a triad whose constituent tones are 'C', 'E', and 'G'. The chord "FonC" is an inversion of the chord "F" and is a triad whose constituent tones are 'C', 'F', and 'A'. Here, "○" (where ○ is a chord name) denotes a chord, and '△' (where △ is a note name) denotes a single tone.
A chord includes a reference sound and accompaniment sounds. The reference sound and the accompaniment sounds may be specified by the reference sound data and the accompaniment sound data of the music data, or may be determined based on a predetermined condition. For example, the tone carrying the melody of the music may be determined to be the reference sound and the other tones to be accompaniment sounds. In the example of Fig. 6, 'E' of the chord "C" is the reference sound, and 'C' and 'G' of the chord "C" are accompaniment sounds. Likewise, 'F' of the chord "FonC" is the reference sound, and 'C' and 'A' of the chord "FonC" are accompaniment sounds.
In the example of the chord "C" of Fig. 6, the pitch of the root 'C' is set to its equal-temperament pitch. Referring to the shaded cells in the tuning information of Fig. 4, the pitch difference value of the reference sound 'E' when 'C' is the root is -13.7 cents. The reference pitch of the reference sound 'E' is therefore determined based on this pitch difference value, and the reference sound 'E' is tuned to a just-intonation pitch. In this example, the reference pitch is a just-intonation pitch. The pitch of the other tone 'G' is determined in the same manner as the reference sound 'E', and 'G' is also tuned to a just-intonation pitch.
In the example of the chord "FonC" of Fig. 6, the pitch of the root 'F' is set to its equal-temperament pitch. Referring to the dot-patterned cells in the tuning information of Fig. 4, the pitch difference value of the reference sound 'F' when 'F' is the root is 0 cents. The reference pitch of the reference sound 'F' is therefore determined based on this pitch difference value. In this example, the reference pitch is an equal-temperament pitch. The pitches of the other tones 'C' and 'A' are determined in the same manner as the reference sound 'F', and 'C' and 'A' are tuned to just-intonation pitches.
In these examples, the root of the chord keeps its equal-temperament pitch, so even when a plurality of chords follow one another as the music progresses, the chord progression sounds natural. A chord may also consist of two tones. Furthermore, the start times or end times of the constituent tones of a chord may differ from one another. The start times and end times of the reference sound and the accompaniment sounds are specified, for example, by the music data.
Fig. 7 is a diagram showing the reference pitch of the reference sound and the pitches of the tuned accompaniment tones in the chord "C" of Fig. 6. In the example of Fig. 7, the start times of the accompaniment tone 'C', the accompaniment tone 'G', and the reference sound 'E' are t1, t2, and t3, respectively. In Fig. 7, the two-dot chain lines P1, P2, and P3 indicate the equal-temperament pitches of 'C', 'E', and 'G', respectively.
The reference determination unit 54b of Fig. 2 determines the pitches of 'C', 'E', and 'G' such that each has a just-intonation relationship with the equal-temperament pitch of the root 'C'. The pitch of the reference sound 'E' is the reference pitch. The tuning unit 55 of Fig. 2 tunes the accompaniment tones to the determined pitches, and the tuned accompaniment tones are output from the sound output unit 8 of Fig. 2.
In the example of Fig. 7, the tuned accompaniment tones having the pitches of 'C' and 'G' start to be output at times t1 and t2, respectively. While listening to these accompaniment tones, the user starts inputting a sound to the sound input unit 1 of Fig. 2 at time t3, for example by playing an instrument. Here, by viewing the difference presentation image DI, the user can easily recognize the difference between the user pitch and the reference pitch (the pitch indicated by the thick one-dot chain line in Fig. 7). If the user pitch matches the reference pitch, the user-produced 'E' and the accompaniment tones 'C' and 'G' are in a just-intonation relationship and harmonize with one another.
[4] Difference presentation processing
Next, a difference presentation process performed by the difference presentation method according to the present embodiment will be described. Fig. 8 and 9 are flowcharts showing an example of the difference presentation process implemented by each functional unit of fig. 2. The difference presentation processing in fig. 8 and 9 is performed by the CPU 11 in fig. 1 executing a difference presentation program stored in the ROM 10 or the storage device 13.
First, the music data acquisition unit 51 determines whether or not music data is specified based on the operation of the operation unit 4 by the user (step S1). When the music data is not specified, the music data acquisition unit 51 waits until the music data is specified. When the music data is designated, the music data acquiring unit 51 acquires the designated music data from the storage device 13 (step S2).
Next, the note name acquisition unit 54a determines whether or not a type of unequal temperament has been selected based on the operation of the operation unit 4 by the user (step S3). When a type has been selected, the note name acquisition unit 54a sets the unequal temperament to the selected type (step S4), and the process proceeds to step S6. When no type has been selected, the note name acquisition unit 54a sets the unequal temperament to just intonation (step S5), and the process proceeds to step S6.
In step S6, the playback unit 52 determines whether or not playback of the music has been instructed based on the operation of the operation unit 4 by the user (step S6). If playback has not been instructed, the playback unit 52 repeats steps S3 to S5 until playback is instructed. When playback of the music is instructed, the playback unit 52 sequentially supplies the music data acquired in step S2 to the base pitch determining unit 53 and the reference pitch determining unit 54, thereby playing back the music (step S7).
Next, the chord acquisition unit 53a acquires the chord at the current position in the music from the music data acquired in step S2 (step S8). The reference determination unit 53b determines the equal-temperament pitch of the root of the chord acquired in step S8 as the base pitch (step S9). Further, the note name acquisition unit 54a acquires the note names of the reference sound and the accompaniment sound at the current position in the music as the current note names from the music data acquired in step S2 (step S10).
Then, the reference determination unit 54b determines the pitch corresponding to the current note name of the reference sound acquired in step S10 as the reference pitch, based on the unequal temperament set in step S4 or S5, the base pitch determined in step S9, and the tuning information stored in the storage device 13 (step S11). Similarly, the reference determination unit 54b determines the pitch corresponding to the current note name of the accompaniment sound acquired in step S10 as the accompaniment pitch, based on the unequal temperament, the base pitch, and the tuning information (step S12).
Next, the tuning unit 55 tunes the accompaniment tones in the accompaniment sound data based on the accompaniment pitch determined in step S12 (step S13). The tuning unit 55 then outputs accompaniment sound data representing the tuned accompaniment tones to the sound output unit 8 (step S14). The tuned accompaniment sound is thereby output from the sound output unit 8. Note that accompaniment sounds that are not tuned (e.g., drum sounds) may also be output from the sound output unit 8 based on the music data.
The user can input a singing sound or a performance sound as the user-produced sound to the sound input unit 1 while listening to the accompaniment sound emitted from the sound output unit 8. The pitch acquisition unit 56 acquires the pitch of the user-produced sound from the sound input unit 1 as the user pitch (step S15). The presentation unit 57 displays, on the display unit 6, a difference presentation image DI indicating the difference between the reference pitch determined in step S11 and the user pitch acquired in step S15 (step S16).
Finally, the playback unit 52 determines whether or not the current position in the music has reached the end position (step S17). If the current position has not reached the end position, the playback unit 52 returns to step S8. Playback of the music thus continues, with steps S8 to S17 repeated, until the current position in the music reaches the end position. When the current position reaches the end position, the difference presentation processing ends. The operation unit 4 may include a stop button, and the difference presentation processing may also be ended when the user operates the stop button.
In the difference presentation processing described above, some of the steps may be executed at other times. For example, steps S3 to S5 may be executed before steps S1 and S2. Steps S8, S9, and S10 may be executed in a different order or in parallel, and likewise steps S11 and S12 may be executed in either order or in parallel.
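Compressed into a single loop, steps S7 to S17 advance through the music position by position; the sketch below is an editor's paraphrase with illustrative object and method names, not code from the embodiment.

    def difference_presentation_loop(song, tuner, audio_in, display):
        """Editor's paraphrase of steps S7-S17 (all objects and methods are
        illustrative): tune the accompaniment against the chord root and show
        the gap between the user pitch and the reference pitch."""
        for position in song.positions():                           # playback advances (S7)
            chord = song.chord_at(position)                         # S8
            base_pitch = tuner.equal_temperament_pitch(chord.root)  # S9: base pitch
            ref_name, acc_names = song.note_names_at(position)      # S10
            ref_pitch = tuner.tuned_pitch(ref_name, base_pitch)     # S11: reference pitch
            acc_pitches = [tuner.tuned_pitch(n, base_pitch) for n in acc_names]  # S12
            tuner.output_accompaniment(acc_names, acc_pitches)      # S13, S14
            user_pitch = audio_in.current_pitch()                   # S15
            display.show_difference(ref_pitch, user_pitch)          # S16
        # S17: the loop ends when the current position reaches the end of the music.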
[5] Effects
In the difference presentation device 100 according to embodiment 1, the reference pitch determining unit 54 acquires the note name at the current position in the music as the current note name from the music data, which includes the sequence of note names to be produced by the user in the music. Based on the tuning information indicating the correspondence between note names and pitches in the predetermined temperament, the reference pitch determining unit 54 determines the pitch corresponding to the acquired current note name as the reference pitch. The pitch acquisition unit 56 sequentially acquires the pitch of the sound produced by the user as the user pitch. The presentation unit 57 presents difference information indicating the difference between the determined reference pitch and the acquired user pitch.
With this configuration, by viewing the difference information presented by the presentation unit 57, the user can recognize the difference between the reference pitch and the user pitch while performing the music. The user can therefore recognize the difference between the pitch of the produced tone and the desired reference pitch.
The difference information is displayed as the difference presentation image DI, which contains a reference index corresponding to the reference pitch and a user pitch index corresponding to the user pitch. In the difference presentation image DI, the larger the difference, the larger the distance between the reference index and the user pitch index. The user can therefore easily recognize the difference between the reference pitch and the user pitch by observing the distance between the reference index and the user pitch index.
In the tuning information, the pitch corresponding to each note name is expressed as a pitch having an unequal-temperament relationship with respect to the base pitch. In this case, a reference pitch that harmonizes easily with the pitches of the other note names in the music can be determined. Furthermore, since the unequal temperament is just intonation, Pythagorean temperament, or meantone temperament, a reference pitch that produces a well-tuned chord can be determined.
In the tuning information, the pitch corresponding to each note name is represented as a pitch relative to the base pitch determined by the base pitch determining unit 53. The reference pitch determining unit 54 determines the reference pitch based on the base pitch determined by the base pitch determining unit 53 and the tuning information. In this case, the pitch corresponding to each note name can easily be determined as a pitch relative to the base pitch.
Note that the music data includes a chord sequence corresponding to the note name sequence in the music, and the base pitch determining unit 53 acquires the chord at the current position in the music from the music data. In this case, the equal-temperament pitch of the root of the acquired chord can easily be determined as the base pitch.
The music data further includes accompaniment sound data, and the difference presentation device 100 further includes the sound output unit 8, which outputs the accompaniment sound based on the accompaniment sound data at the current position in the music. In this case, the user can produce the user-produced sound in time with the accompaniment sound while listening to the accompaniment sound output from the sound output unit 8. In addition, the user can hear the accompaniment sound and the user-produced sound harmonizing with each other.
The difference presentation image DI includes the needle portion 71, the plurality of scales SC, and the reference scale SCa arranged as one of the plurality of scales SC. The reference scale SCa, as the reference index, corresponds to the reference pitch determined by the reference pitch determining unit 54, and the position of the tip of the needle portion 71, as the user pitch index, corresponds to the user pitch acquired by the pitch acquisition unit 56.
In this case, the difference presentation image DI is an image simulating an analog tuner, and the positional relationship between the reference scale SCa and the tip of the needle portion 71 on the difference presentation image DI is adjusted such that the larger the difference between the reference pitch and the user pitch, the larger the distance between the reference scale SCa and the tip of the needle portion 71. By observing this distance, the user can recognize the difference between the reference pitch and the user pitch all the more easily.
<B> Embodiment 2
A difference between the difference presentation device according to embodiment 2 of the present invention and the difference presentation device 100 according to embodiment 1 will be described.
[1] Functional configuration of the difference presentation device
Fig. 10 is a block diagram showing the functional configuration of a difference presentation device 100A according to embodiment 2 of the present invention. The difference presentation device 100A of Fig. 10 includes a chord waveform acquisition unit 61, an instruction receiving unit 62, a chord specifying unit 63, a base pitch determining unit 53A, a reference pitch determining unit 54A, a pitch acquisition unit 56A, and a presentation unit 57A, instead of the music data acquisition unit 51, the playback unit 52, the base pitch determining unit 53, the reference pitch determining unit 54, the pitch acquisition unit 56, and the presentation unit 57 of Fig. 2. These units are realized by the CPU 11 executing the difference presentation program.
The chord waveform acquisition unit 61 acquires a chord waveform based on the audio data input from the sound input unit 1. The chord waveform represents a chord pitch group including a plurality of pitches. The chord pitch group is a combination of the pitches of the plurality of note names constituting one chord. The user instructs acquisition of the chord waveform by operating the operation unit 4. The instruction receiving unit 62 receives the instruction to acquire the chord waveform from the user. For example, the operation unit 4 includes a chord waveform acquisition button, and when the user presses this button, the instruction receiving unit 62 receives the instruction to acquire the chord waveform.
Upon receiving the instruction to acquire the chord waveform, the chord waveform acquisition unit 61 acquires the chord waveform from the audio data being input at that time. For example, the user instructs acquisition of the chord waveform in the middle of an ensemble of a plurality of players. In this case, a chord waveform representing the chord pitch group being sounded at the instructed time is acquired. Instead of acquiring the chord waveform in response to an instruction from the user, audio data may be input continuously and a plurality of chord waveforms representing a plurality of chord pitch groups may be acquired sequentially. Alternatively, when the input of audio data stops, the chord waveform may be acquired from the most recently input audio data.
The chord specifying unit 63 specifies the chord (chord name) corresponding to the chord pitch group based on the acquired chord waveform. The chord (chord name) can be identified from the chord waveform by a known analysis technique.
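The "known analysis technique" is left open here; one widely used approach is to fold the spectrum of the chord waveform into a 12-bin chroma vector and match it against chord templates. The sketch below (editor's assumption, restricted to major triads for brevity, with A4 taken as 440 Hz) illustrates that idea and is not the embodiment's method.

    import numpy as np

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def chroma_vector(chord_waveform: np.ndarray, sample_rate: int) -> np.ndarray:
        """Fold the magnitude spectrum into 12 pitch classes (A4 = 440 Hz assumed)."""
        spectrum = np.abs(np.fft.rfft(chord_waveform))
        freqs = np.fft.rfftfreq(len(chord_waveform), d=1.0 / sample_rate)
        chroma = np.zeros(12)
        for f, mag in zip(freqs[1:], spectrum[1:]):       # skip the DC bin
            pitch_class = int(round(12 * np.log2(f / 440.0))) % 12   # 0 = A
            chroma[(pitch_class + 9) % 12] += mag                    # shift so 0 = C
        return chroma / (np.linalg.norm(chroma) + 1e-9)

    def identify_chord(chroma: np.ndarray) -> str:
        """Pick the major triad template that best matches the chroma vector."""
        best_name, best_score = "", -1.0
        for root in range(12):
            template = np.zeros(12)
            template[[root, (root + 4) % 12, (root + 7) % 12]] = 1.0  # root, 3rd, 5th
            score = float(chroma @ template)
            if score > best_score:
                best_name, best_score = NOTE_NAMES[root], score
        return best_name    # e.g. "C" for a C major chord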
The pitch acquisition unit 56A is an example of the acquisition unit, and acquires the pitch of the user-produced sound as the user pitch based on the audio data input from the sound input unit 1. For example, the operation unit 4 includes a user pitch acquisition button, and when the user presses this button, the pitch acquisition unit 56A acquires the user pitch. For example, the user presses the user pitch acquisition button and then plays the instrument alone. Audio data of the user-produced sound is thereby input, and the user pitch is acquired from that audio data. Alternatively, the frequency of the sound produced by the user may be detected from the vibration of the instrument using a piezoelectric element or the like, and the pitch corresponding to that frequency may be acquired as the user pitch. A plurality of players including the user may also perform together while audio data of the ensemble sound is acquired, with the user pitch acquired through the piezoelectric element. Note that the chord waveform acquisition unit 61 and the pitch acquisition unit 56A may both be connected to the sound input unit 1.
The base pitch determining unit 53A determines the base pitch based on the acquired chord waveform. For example, one pitch in the chord pitch group represented by the chord waveform is determined as the base pitch; for example, the pitch of the root of the chord is determined as the base pitch.
A predetermined pitch may also be used as the base pitch. For example, a pitch of 440 Hz or 442 Hz, which is commonly used as the pitch of 'A' in performances of classical music, may be used as the base pitch. The base pitch may also be designated by the user. Alternatively, one pitch of the chord pitch group may be determined as the base pitch and then corrected based on the other pitches of the chord pitch group.
A plurality of template waveforms may be prepared in advance for each chord, and the base pitch may be determined by comparing the acquired chord waveform with the plurality of template waveforms. For example, for one chord, four template waveforms referenced to 439 Hz, 440 Hz, 441 Hz, and 442 Hz are prepared. The acquired chord waveform is compared with each of the template waveforms, and the template waveform closest to the chord waveform is identified. The base pitch is then determined based on the reference frequency of the identified template waveform.
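As a simplified stand-in for comparing whole template waveforms, the sketch below scores a few candidate reference frequencies of 'A' against detected partial frequencies of the chord waveform and keeps the best-fitting one; this is only the editor's illustration of the idea, not the embodiment's comparison method.

    import math

    def best_reference_a(detected_freqs_hz, candidates=(439.0, 440.0, 441.0, 442.0)):
        """Choose the reference frequency of 'A' whose equal-temperament grid lies
        closest (in cents) to the detected partial frequencies of the chord waveform."""
        def misfit(a4: float) -> float:
            total = 0.0
            for f in detected_freqs_hz:
                semis = 12.0 * math.log2(f / a4)
                total += abs(semis - round(semis)) * 100.0   # cents to the nearest semitone
            return total
        return min(candidates, key=misfit)

    # Example: partials of an ensemble tuned to A = 442 Hz.
    print(best_reference_a([442.0 * 2 ** (-9 / 12), 442.0 * 2 ** (-5 / 12), 442.0]))  # -> 442.0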
The reference pitch determining unit 54A includes a note name acquisition unit 54c and a reference determination unit 54d. The note name acquisition unit 54c acquires, as the user note name, one note name to be produced by the user from among the plurality of note names constituting the chord specified by the chord specifying unit 63. The user note name is an example of the target note name. In this example, the note name acquisition unit 54c acquires the user note name based on the user pitch acquired by the pitch acquisition unit 56A. For example, among the note names constituting the chord specified by the chord specifying unit 63, the note name whose pitch is closest to the user pitch is acquired as the user note name. In this case, the pitches of the note names constituting the chord may each be determined based on the base pitch, and the note name whose pitch is closest to the user pitch may be identified by comparing each of those pitches with the user pitch.
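A minimal sketch of selecting the user note name as the chord tone closest to the user pitch (the frequencies and names below are illustrative, assuming the chord tones have already been tuned against the base pitch):

    import math

    def nearest_chord_tone(user_freq_hz: float, chord_tone_freqs: dict) -> str:
        """Return the note name of the chord tone whose (already tuned) frequency
        is closest, in cents, to the user pitch."""
        return min(chord_tone_freqs,
                   key=lambda name: abs(1200 * math.log2(user_freq_hz / chord_tone_freqs[name])))

    # Example: chord "C" tones tuned against a base 'C' of about 261.63 Hz.
    tones = {"C": 261.63, "E": 327.03, "G": 392.44}
    print(nearest_chord_tone(330.0, tones))   # -> "E"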
Like the note name acquisition unit 54a of Fig. 2, the note name acquisition unit 54c sets the unequal temperament to the type selected by the user. When the user does not select a type, the note name acquisition unit 54c sets the unequal temperament to just intonation.
The reference determination unit 54d determines, as the reference pitch, the pitch corresponding to the user note name acquired by the note name acquisition unit 54c, based on the tuning information stored in the storage device 13 and the base pitch determined by the base pitch determining unit 53A.
The presentation unit 57A presents difference information indicating the difference between the user pitch acquired by the pitch acquisition unit 56A and the reference pitch determined by the reference determination unit 54d. In this example, the presentation unit 57A presents the chord name specified by the chord specifying unit 63 and the user note name acquired by the note name acquisition unit 54c together with the difference information. For example, a difference presentation image including the difference information, the chord name, and the user note name is displayed on the screen of the display unit 6.
Fig. 11 is a diagram showing an example of the difference presentation image according to embodiment 2. The difference presentation image DIa of Fig. 11 includes the difference presentation image DI of Fig. 3, and additionally includes the target chord name CN and the user note name US. The user can recognize the difference between the pitch produced by the user (the user pitch) and the reference pitch from the positional relationship between the tip of the needle portion 71 and the reference scale SCa in the difference presentation image DI. Furthermore, by adjusting the produced pitch so that the tip of the needle portion 71 coincides with the reference scale SCa, the user can harmonize the produced pitch with the other pitches within the target chord.
In the difference presentation image DIa of Fig. 11, the difference between the user pitch and the reference pitch is indicated by the needle portion 71 and the reference scale SCa, but the difference may instead be indicated by signs such as "+" and "-" or by arrows such as "↑" and "↓". The user note name may be shown with Italian labels ("do", "re", ..., "si") or Japanese labels ("ハ", "ニ", ..., "ロ"). In addition, the degree name corresponding to the user note name may be displayed. In this case, the key may be specified in advance by the user, or may be obtained by analyzing the input audio data.
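For reference, the English, Italian, and Japanese labels mentioned here correspond one-to-one, so a small lookup such as the following (editor's illustration) is enough to switch the displayed label:

    # English, Italian (solfege), and Japanese note-name labels for the same pitches.
    NOTE_LABELS = {
        "C": ("do", "ハ"),
        "D": ("re", "ニ"),
        "E": ("mi", "ホ"),
        "F": ("fa", "ヘ"),
        "G": ("sol", "ト"),
        "A": ("la", "イ"),
        "B": ("si", "ロ"),
    }

    def display_label(note_name: str, style: str = "italian") -> str:
        italian, japanese = NOTE_LABELS[note_name]
        return italian if style == "italian" else japanese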
[2] Specific example of processing using chord waveform
Fig. 12 is a diagram for explaining a specific example of processing using a chord waveform. The horizontal axis in Fig. 12 indicates time. In the example of Fig. 12, a player α, a player β, and a player γ perform an ensemble, and audio data of the ensemble sound is input from the sound input unit 1 of Fig. 2. In this example, the player β is the user.
In the example of Fig. 12, during a period T1 the player α produces 'C', the player β produces 'E', and the player γ produces 'G'; 'C', 'E', and 'G' constitute the chord "C". Similarly, during a period T2 the players α to γ produce 'C', 'F', and 'A', which constitute the chord "FonC", and during a period T3 the players α to γ produce 'D', 'G', and 'B', which constitute the chord "GonD".
For example, suppose that during the period T1 the player β, as the user, feels that the sound the player β is producing does not harmonize with the sound produced by the player α. The player β then presses the chord waveform acquisition button of the operation unit 4 at a time t11 within the period T1 while continuing to play. The chord waveform acquisition unit 61 (Fig. 10) thereby acquires the chord waveform from the audio data of the ensemble input at time t11, and the chord specifying unit 63 (Fig. 10) specifies the chord "C" based on the acquired chord waveform.
After the ensemble ends, the player β presses the user pitch acquisition button of the operation unit 4 in order to confirm the appropriate pitch of 'E' in the period T1, and produces 'E' again. The pitch acquisition unit 56A (Fig. 10) thereby acquires the user pitch from the input audio data, and the note name acquisition unit 54c (Fig. 10) of the reference pitch determining unit 54A acquires 'E' as the user note name based on the acquired user pitch.
When the user (the player β) produces a wrong note, that is, when the pitch produced by the user differs significantly from every pitch in the chord pitch group represented by the acquired chord waveform, an error message may be displayed on the screen of the display unit 6, for example.
The base pitch determining unit 53A (Fig. 10) determines, as the base pitch, the pitch of 'C', which is the root of the chord, in the chord pitch group represented by the acquired chord waveform. The reference determination unit 54d (Fig. 10) of the reference pitch determining unit 54A determines the reference pitch for 'E', the user note name, based on the tuning information and the base pitch. The presentation unit 57A (Fig. 10) presents difference information indicating the difference between the user pitch and the reference pitch, for example on the screen of the display unit 6. Based on the presented difference information, the user can confirm the pitch of 'E' that the user should produce within the chord "C".
The sound output unit 8 of Fig. 1 may output, among the constituent tones of the chord specified from the acquired chord waveform, the tones of the note names other than the user note name. For example, if the acquired chord waveform corresponds to the chord "C" and the user note name is 'E', then 'C' and 'G' may be output. In this case, the pitches of 'C' and 'G' are adjusted based on the base pitch and the tuning information, in the same manner as the reference pitch. The user can thus recognize aurally whether the pitch the user produces harmonizes with the other pitches within the target chord. Alternatively, for reference, all the constituent tones of the specified chord, including the tone of the user note name, may be output at pitches corresponding to the predetermined temperament.
Acquisition of a chord waveform may also be instructed a plurality of times during the ensemble, with a chord waveform acquired for each instruction. For example, when acquisition of the chord waveform is instructed at the times t11 and t12 of fig. 12, a chord waveform corresponding to the chord "C" and a chord waveform corresponding to the chord "GonD" may be acquired. In this case, the user designates one of the plurality of chord waveforms acquired in this manner, and the same processing as described above is performed using the designated chord waveform. This enables the user to efficiently confirm and adjust the user pitches associated with a plurality of chords.
[3] Difference presentation processing
Next, difference presentation processing by the difference presentation method according to embodiment 2 will be described. In the present embodiment, the difference presentation processing includes chord determination processing and user pitch adjustment processing. Fig. 13 is a flowchart showing an example of the chord determination processing realized by the functional units of fig. 10, and fig. 14 is a flowchart showing an example of the user pitch adjustment processing realized by the functional units of fig. 10. The chord determination processing of fig. 13 and the user pitch adjustment processing of fig. 14 are performed by the CPU 11 of fig. 1 executing the difference presentation program stored in the ROM 10 or the storage device 13.
In the chord determination processing of fig. 13, the instruction reception unit 62 first determines whether or not acquisition of a chord waveform has been instructed (step S21). Until acquisition of a chord waveform is instructed, the instruction reception unit 62 repeats step S21. When acquisition of a chord waveform is instructed, the instruction reception unit 62 receives the instruction, and the chord waveform acquisition unit 61 determines whether or not audio data is input from the sound input unit 1 (step S22). When no audio data is input, the chord waveform acquisition unit 61 indicates that an error has occurred (step S23); for example, an error message is displayed on the screen of the display unit 6. The instruction reception unit 62 then returns to step S21.
When audio data is input, the chord waveform acquisition unit 61 determines whether or not the input audio data represents a plurality of pitches (step S24). When the input audio data represents only a single pitch, the chord waveform acquisition unit 61 proceeds to step S23 and indicates an error. On the other hand, when the input audio data represents a plurality of pitches, the chord waveform acquisition unit 61 acquires the waveform represented by the audio data as the chord waveform (step S25). Next, the chord determination unit 63 determines a chord name based on the acquired chord waveform (step S26). The chord determination processing thereby ends.
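The chord name determination of step S26 is not detailed above. As a rough sketch only, and not the algorithm of the embodiment, the pitches in the chord waveform could be reduced to pitch classes and matched against chord templates as follows; the template set, the naming scheme, and the helper functions are assumptions made for this example.

    import math

    NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

    # Intervals above the root in semitones (illustrative template set).
    CHORD_TEMPLATES = {'': (0, 4, 7),   # major triad
                       'm': (0, 3, 7)}  # minor triad

    def pitch_class(freq, a4=440.0):
        """Nearest equal-temperament pitch class (0 = C) of a frequency."""
        return round(69 + 12 * math.log2(freq / a4)) % 12

    def chord_name(freqs):
        """Return a chord name such as 'C' or 'Am', or None if no template matches."""
        classes = {pitch_class(f) for f in freqs}
        for root in range(12):
            for suffix, intervals in CHORD_TEMPLATES.items():
                if classes == {(root + i) % 12 for i in intervals}:
                    return NAMES[root] + suffix
        return None  # no match: this case would be treated as an error

    # The ensemble tones C4, E4 and G4 of period T1 map to the chord "C".
    print(chord_name([261.63, 329.63, 392.00]))  # C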
In the user pitch adjustment processing of fig. 14, the pitch name acquisition unit 54c determines whether or not a type of unequal temperament has been selected by the user's operation of the operation unit 4 (step S31). When a type of unequal temperament has been selected, the pitch name acquisition unit 54c determines the unequal temperament to be the selected type (step S32), and proceeds to step S34. When no type has been selected, the pitch name acquisition unit 54c determines the unequal temperament to be pure temperament (step S33), and proceeds to step S34. In step S34, the pitch acquisition unit 56A determines whether or not acquisition of the user pitch has been instructed. Until acquisition of the user pitch is instructed, the pitch name acquisition unit 54c repeats steps S31 to S34.
When acquisition of the user pitch is instructed, the presentation unit 57A presents the chord name determined in step S26 of the chord determination processing of fig. 13 (step S35). For example, the chord name CN of fig. 11 is displayed on the screen of the display unit 6. Next, the pitch acquisition unit 56A determines whether or not audio data is input from the sound input unit 1 (step S36). Until audio data is input, the pitch acquisition unit 56A repeats step S36.
When audio data is input, the pitch acquisition unit 56A determines whether or not the input audio data represents a single pitch (step S37). When the input audio data represents a plurality of pitches, the pitch acquisition unit 56A indicates that an error has occurred (step S38); for example, an error message is displayed on the screen of the display unit 6, as in step S23 of fig. 13. The pitch name acquisition unit 54c then returns to step S31. If no audio data is input for a certain period of time in step S36, the pitch name acquisition unit 54c may proceed to step S38 and indicate an error.
When the input audio data represents only a single pitch, the pitch acquisition unit 56A acquires the pitch represented by the input audio data as the user pitch (step S39). Next, the pitch name acquisition unit 54c specifies the user pitch name based on the user pitch acquired in step S39 (step S40). For example, the pitch name acquisition unit 54c determines, as the user pitch name, the pitch name corresponding to the pitch closest to the user pitch acquired in step S39, from among the plurality of pitch names constituting the chord name determined in step S26 of fig. 13. Next, the presentation unit 57A presents the specified user pitch name (step S41). For example, the user pitch name US of fig. 11 is displayed on the screen of the display unit 6.
Further, when the acquired user pitch differs significantly from the pitch of every one of the plurality of pitch names constituting the determined chord name (for example, when it differs by 100 cents or more), an error message may be displayed on the screen of the display unit 6. In this case, the pitch name corresponding to the pitch closest to the user pitch, among the plurality of pitch names constituting the determined chord name, may be displayed on the screen of the display unit 6 as a candidate for the correct pitch name.
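A simplified sketch of steps S40 and S41 together with the error check just described, given here only for illustration: among the constituents of the determined chord, the pitch name whose pitch is closest to the user pitch is selected, and an error is flagged when even the closest one is at least 100 cents away. The equal-temperament constituent frequencies, the 100-cent threshold, and the function names are assumptions for this example.

    import math

    # Assumed equal-temperament frequencies of the constituents of chord "C" in octave 4.
    CONSTITUENT_FREQS = {'C': 261.63, 'E': 329.63, 'G': 392.00}

    def cents(f1, f2):
        """Signed interval from f2 up to f1, in cents."""
        return 1200.0 * math.log2(f1 / f2)

    def user_pitch_name(user_freq, constituents=CONSTITUENT_FREQS):
        """Return (selected name, candidate); selected is None when the error check fails."""
        name, ref = min(constituents.items(), key=lambda kv: abs(cents(user_freq, kv[1])))
        if abs(cents(user_freq, ref)) >= 100.0:
            return None, name  # error: show `name` as a candidate for the correct pitch name
        return name, name

    print(user_pitch_name(334.0))  # ('E', 'E')  -> user pitch name is 'E'
    print(user_pitch_name(300.0))  # (None, 'E') -> error, 'E' displayed as a candidate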
Next, the reference tone pitch determination unit 53A determines a reference tone pitch based on the chord waveform acquired in step S25 (step S42). For example, the reference tone pitch determination unit 53A determines, as the reference tone pitch, the pitch of the root of the chord in the chord pitch group represented by the chord waveform acquired in step S25.
Next, the reference determination unit 54d determines, as the reference pitch, a pitch corresponding to the user pitch name specified in step S40, based on the tuning information stored in the storage device 13 and the reference tone pitch determined in step S42 (step S43). Next, the presentation unit 57A acquires difference information based on the user pitch acquired in step S39 and the reference pitch determined in step S43, and presents the acquired difference information (step S44). For example, the difference presentation image DI of fig. 11 is displayed on the screen of the display unit 6.
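Only by way of illustration, the difference information of step S44 could be expressed as a signed interval in cents between the user pitch and the reference pitch; the sketch below renders it as text, whereas the embodiment draws the difference presentation image DI. The frequencies and the textual format are assumptions for this example.

    import math

    def difference_cents(user_pitch, reference_pitch):
        """Positive when the user pitch is sharp of the reference pitch, negative when flat."""
        return 1200.0 * math.log2(user_pitch / reference_pitch)

    def render(user_pitch, reference_pitch):
        d = difference_cents(user_pitch, reference_pitch)
        if abs(d) < 0.5:
            return "in tune with the reference pitch"
        direction = "sharp" if d > 0 else "flat"
        return f"{abs(d):.1f} cents {direction} of the reference pitch"

    # An equal-tempered 'E' (329.63 Hz) against a just reference pitch of about 327.03 Hz.
    print(render(329.63, 327.03))  # 13.7 cents sharp of the reference pitch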
Next, the pitch acquisition unit 56A determines whether or not the end of the user pitch adjustment processing has been instructed (step S45). For example, the operation unit 4 includes a stop button, and the user instructs the end of the user pitch adjustment processing by pressing the stop button. If the end of the user pitch adjustment processing is not instructed, the pitch acquisition unit 56A returns to step S36. When the end of the user pitch adjustment processing is instructed, the acquisition of the user pitch and the presentation of each piece of information are stopped, and the user pitch adjustment processing ends.
[4] Effect
In the difference presentation device 100A according to embodiment 2, a chord waveform representing a chord pitch group is acquired, and a chord is determined based on the acquired chord waveform. One of the plurality of pitch names constituting the determined chord is acquired as the user pitch name, and a pitch corresponding to the acquired user pitch name is determined as the reference pitch based on tuning information indicating a correspondence between pitch names and pitches in a predetermined temperament. Difference information indicating the difference between the determined reference pitch and the user pitch is presented.
According to this configuration, the user can easily confirm the difference between the pitch he or she produces and the other pitches by visually checking the presented difference information. Further, while observing the difference information, the user can adjust the pitch of his or her own sound so that it harmonizes with the other pitches in the chord in question.
In the present embodiment, a reference tone pitch is determined based on the acquired chord waveform, and the reference pitch is determined based on the reference tone pitch. Thus, even when the other pitches within the chord vary, the user can adjust the pitch of his or her own sound so as to harmonize with those pitches.
In the present embodiment, when the user instructs acquisition of a chord waveform, the chord waveform is acquired from the audio data input at the instructed timing. The user can therefore instruct acquisition of a chord waveform at an arbitrary timing while performing an ensemble with other players. This allows the user to confirm the relationship between the user pitch and the other pitches for any chord in the ensemble.
< C > Other embodiments
In the embodiments described above, the tuning information indicating the correspondence between pitch names and pitches in a predetermined temperament is stored in a storage medium such as the storage device 13 in the form of a table, but it may be stored in another form. For example, the tuning information may be stored in the form of a calculation formula that associates pitch names with pitches in the predetermined temperament. In the embodiments described above, the predetermined temperament is an unequal temperament, but it may instead be equal temperament.
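As an illustration of the two storage forms mentioned above, the sketch below holds just-intonation tuning information as a table of ratios relative to the reference tone pitch and expresses equal temperament as a calculation formula. The ratio values, names, and base frequency are assumptions for this example.

    from fractions import Fraction

    # Table form: pitch name -> ratio to the reference tone pitch (just intonation, assumed values).
    JUST_TABLE = {
        'C': Fraction(1, 1), 'D': Fraction(9, 8), 'E': Fraction(5, 4),
        'F': Fraction(4, 3), 'G': Fraction(3, 2), 'A': Fraction(5, 3),
        'B': Fraction(15, 8),
    }

    # Formula form: equal temperament needs no table at all.
    def equal_ratio(semitones_above_base):
        return 2.0 ** (semitones_above_base / 12.0)

    def pitch_from_table(base_freq, name):
        return base_freq * float(JUST_TABLE[name])

    base = 261.63  # reference tone pitch assigned to 'C'
    print(round(pitch_from_table(base, 'E'), 2))  # 327.04 (just major third)
    print(round(base * equal_ratio(4), 2))        # 329.63 (equal-tempered 'E')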
In embodiment 1 described above, the equal-temperament pitch of the root of the chord at the current position in the song is determined as the reference tone pitch, but the reference tone pitch determination unit 53 may instead determine the equal-temperament pitch of the tonic of the tonality of the song as the reference tone pitch. In this case as well, the reference tone pitch can be determined easily. In this configuration, the reference tone pitch determination unit 53 includes a tonality acquisition unit in place of the chord acquisition unit 53a. The tonality acquisition unit acquires the tonality of the song from the song data acquired by the song data acquisition unit 51. The reference determination unit 53b determines the equal-temperament pitch of the tonic of the tonality acquired by the tonality acquisition unit as the reference tone pitch. For example, when the tonality of the song is C major, the tonic is 'C'. In this case, the equal-temperament pitch of 'C' may be determined as the reference tone pitch.
The reference pitch determination unit 54 (54A) may acquire a plurality of pitches arranged in time series, determine a tonality based on the acquired plurality of pitches, and determine the reference pitch based on the determined tonality. For example, audio data representing a performance by the user or ensemble sounds produced by a plurality of players is input from the sound input unit 1, and the tonality is determined by analyzing the audio data. Based on the determined tonality and the tuning information, the pitch of the target pitch name in the predetermined temperament is determined as the reference pitch. This allows the user to confirm pitches in the predetermined temperament corresponding to the tonality of the music being played.
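The tonality determination described above could, for instance, be approximated by matching a pitch-class histogram of the acquired pitches against major-scale templates, as sketched below. Practical key finding is considerably more elaborate; the scale template, note names, and test data are assumptions made only for this example.

    import math
    from collections import Counter

    NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
    MAJOR_SCALE = (0, 2, 4, 5, 7, 9, 11)

    def pitch_class(freq, a4=440.0):
        return round(69 + 12 * math.log2(freq / a4)) % 12

    def estimate_major_key(freqs):
        """Pick the major key whose scale covers the most of the observed pitch classes."""
        hist = Counter(pitch_class(f) for f in freqs)
        def coverage(tonic):
            return sum(hist[(tonic + step) % 12] for step in MAJOR_SCALE)
        best = max(range(12), key=coverage)
        return NAMES[best] + ' major'

    # A C-major scale played C4..B4 is identified as C major, whose tonic 'C'
    # would then yield the reference tone pitch.
    print(estimate_major_key([261.63, 293.66, 329.63, 349.23, 392.00, 440.00, 493.88]))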
In the embodiments described above, the difference presentation devices 100 and 100A include the sound input unit 1 and the sound output unit 8, but the sound input unit and the sound output unit may instead be provided as external devices. In addition, in embodiment 1 described above, since tuning of the accompaniment sound is not essential, the tuned accompaniment sound need not be output from the sound output unit 8. In that case, the difference presentation device 100 need not include the sound adjustment unit 55 and the sound output unit 8.
In the embodiments described above, pure temperament, Pythagorean temperament, or meantone temperament can be selected as the type of unequal temperament, but other temperaments such as the Werckmeister temperament or the Kirnberger temperament may also be selectable.
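For illustration only, tables for some of the selectable unequal temperaments could be generated from a chain of fifths instead of being stored directly: Pythagorean tuning stacks pure 3:2 fifths, while quarter-comma meantone stacks fifths narrowed to the fourth root of 5. This is a sketch under those assumptions; the Werckmeister and Kirnberger temperaments mix pure and tempered fifths and are not reproduced here.

    def temperament_from_fifth(fifth_ratio, steps=12):
        """Octave-folded ratios, keyed by pitch class, for a chain of fifths from the base note."""
        ratios = {}
        r = 1.0
        for k in range(steps):
            pc = (k * 7) % 12        # each fifth climbs seven semitones
            ratios[pc] = r
            r *= fifth_ratio
            while r >= 2.0:          # fold back into a single octave
                r /= 2.0
        return dict(sorted(ratios.items()))

    pythagorean = temperament_from_fifth(3.0 / 2.0)
    meantone = temperament_from_fifth(5.0 ** 0.25)

    print(round(pythagorean[4], 4))  # 1.2656 -> Pythagorean major third (81/64)
    print(round(meantone[4], 4))     # 1.25   -> pure major third of quarter-comma meantone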
In the above embodiment, the functional units in fig. 2 are realized by hardware such as the CPU 11 and software such as the difference presentation program, but these functional units may be realized by hardware such as an electronic circuit.
The difference presentation devices 100 and 100A may be applied to electronic musical instruments such as electronic keyboard instruments, and may also be applied to other electronic devices such as personal computers, smartphones, and tablet terminals.

Claims (18)

1. A differential presentation device comprising:
a reference pitch determination unit that acquires a pitch name to be uttered by a user as a target pitch name, and determines a pitch corresponding to the acquired target pitch name as a reference pitch based on tuning information indicating a correspondence between pitch names and pitches in a predetermined temperament;
an acquisition unit that acquires a pitch produced by the user as a user pitch; and
a presentation unit that presents difference information indicating a difference between the determined reference pitch and the acquired user pitch.
2. The differential presentation device according to claim 1, wherein
the predetermined temperament is an unequal temperament.
3. The differential presentation device according to claim 2, wherein
the unequal temperament includes pure temperament, Pythagorean temperament, or meantone temperament.
4. The differential presentation device according to any one of claims 1 to 3, wherein
the presentation unit displays, as the difference information, a difference presentation image including a reference index corresponding to the determined reference pitch and a user pitch index corresponding to the user pitch, and adjusts a positional relationship between the reference index and the user pitch index on the difference presentation image such that a distance between the reference index and the user pitch index becomes larger as the difference becomes larger.
5. The differential presentation device according to any one of claims 1 to 4, wherein
the reference pitch determination unit acquires, as the target pitch name, a pitch name at a current position in a song from song data including a sequence of pitch names to be uttered by the user in the song.
6. The differential presentation device according to claim 5, wherein
the tuning information represents the pitches corresponding to the respective pitch names as pitches having an unequal-temperament relationship with respect to a reference tone pitch serving as a reference.
7. The differential presentation device according to claim 6,
further comprising a reference tone pitch determination unit that determines the reference tone pitch, wherein
the tuning information represents the pitches corresponding to the respective pitch names as relative pitches with respect to the reference tone pitch, and
the reference pitch determination unit determines the reference pitch based on the determined reference tone pitch and the tuning information.
8. The differential presentation device according to claim 7, wherein
the reference tone pitch determination unit determines an equal-temperament pitch of a tonic in a tonality of the song as the reference tone pitch.
9. The differential presentation device according to claim 7 or 8, wherein
the song data further includes a chord sequence corresponding to the sequence of pitch names in the song, and
the reference tone pitch determination unit acquires a chord at a current position in the song from the song data, and determines an equal-temperament pitch of a root of the acquired chord as the reference tone pitch.
10. The differential presentation device according to any one of claims 1 to 4,
further comprising:
a chord waveform acquisition unit that acquires a chord waveform representing a chord pitch group including a plurality of pitches; and
a chord determination unit that determines a chord corresponding to the chord pitch group based on the acquired chord waveform, wherein
the reference pitch determination unit acquires, as the target pitch name, one of a plurality of pitch names constituting the determined chord.
11. The differential presentation device according to claim 10, wherein
the reference pitch determination unit acquires the target pitch name based on the acquired user pitch.
12. The differential presentation device according to claim 10 or 11, wherein
the tuning information represents the pitches corresponding to the respective pitch names as pitches having an unequal-temperament relationship with respect to a reference tone pitch serving as a reference.
13. The differential presentation device according to claim 12,
further comprising a reference tone pitch determination unit that determines the reference tone pitch, wherein
the tuning information represents the pitches corresponding to the respective pitch names as relative pitches with respect to the reference tone pitch, and
the reference pitch determination unit determines the reference pitch based on the determined reference tone pitch and the tuning information.
14. The differential presentation device according to claim 13, wherein
the reference tone pitch determination unit determines the reference tone pitch based on the acquired chord waveform.
15. The differential presentation device according to any one of claims 10 to 14,
further comprising an instruction reception unit that receives, from the user, an instruction to acquire the chord waveform, wherein
the chord waveform acquisition unit acquires, as the chord waveform, a waveform representing the sound input at a time point at which the acquisition instruction is received.
16. The differential presentation device according to any one of claims 1 to 15, wherein
the reference pitch determination unit acquires a plurality of pitches arranged in time series, determines a tonality based on the acquired plurality of pitches, and determines the reference pitch based on the determined tonality.
17. A differential presentation method comprising the steps of:
acquiring a pitch name to be uttered by a user as a target pitch name, and determining a pitch corresponding to the acquired target pitch name as a reference pitch based on tuning information indicating a correspondence between pitch names and pitches in a predetermined temperament;
acquiring a pitch produced by the user as a user pitch; and
presenting difference information indicating a difference between the determined reference pitch and the acquired user pitch.
18. A differential presentation program for causing a computer to execute the steps of:
acquiring a pitch name to be uttered by a user as a target pitch name, and determining a pitch corresponding to the acquired target pitch name as a reference pitch based on tuning information indicating a correspondence between pitch names and pitches in a predetermined temperament;
acquiring a pitch produced by the user as a user pitch; and
presenting difference information indicating a difference between the determined reference pitch and the acquired user pitch.
CN201880050238.1A 2017-08-03 2018-03-12 Differential presentation device, differential presentation method, and differential presentation program Pending CN110998708A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2017151108 2017-08-03
JP2017-151108 2017-08-03
PCT/JP2018/009526 WO2019026325A1 (en) 2017-08-03 2018-03-12 Differential presentation device, differential presentation method, and differential presentation program

Publications (1)

Publication Number Publication Date
CN110998708A true CN110998708A (en) 2020-04-10

Family

ID=65232454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880050238.1A Pending CN110998708A (en) 2017-08-03 2018-03-12 Differential presentation device, differential presentation method, and differential presentation program

Country Status (3)

Country Link
JP (1) JPWO2019026325A1 (en)
CN (1) CN110998708A (en)
WO (1) WO2019026325A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7425558B2 (en) * 2019-08-07 2024-01-31 株式会社河合楽器製作所 Code detection device and code detection program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5130842B2 (en) * 2007-09-19 2013-01-30 ヤマハ株式会社 Tuning support device and program

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0519765A (en) * 1991-07-11 1993-01-29 Casio Comput Co Ltd Electronic musical instrument
CN1325525A (en) * 1998-10-29 2001-12-05 保罗-里德-史密斯-吉塔尔斯股份合作有限公司 Method of modifying harmonic content of complex waveform
JP2001117560A (en) * 1999-10-22 2001-04-27 Yamaha Corp Device and method for tuning, and recording medium
JP2002132256A (en) * 2000-10-25 2002-05-09 Korg Inc Tuning device
JP2005221787A (en) * 2004-02-05 2005-08-18 Yamaha Corp Tuning device and program therefor
JP2013242440A (en) * 2012-05-21 2013-12-05 Korg Inc Tuner
US20140041510A1 (en) * 2012-08-09 2014-02-13 Roland Corporation Tuning device
US20140137720A1 (en) * 2012-11-19 2014-05-22 Roland Corporation Tuning device
WO2017082061A1 (en) * 2015-11-10 2017-05-18 ヤマハ株式会社 Tuning estimation device, evaluation apparatus, and data processing apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIE, YANLU (Beijing Language and Culture University, Beijing, CN): "The training of the tone of Mandarin two-syllable words based on pitch projection synthesis speech" *
WANG, JI (王冀): "Tuning techniques of equalization and reverberation (均衡与混响的调音技巧)" *

Also Published As

Publication number Publication date
WO2019026325A1 (en) 2019-02-07
JPWO2019026325A1 (en) 2020-07-30

Similar Documents

Publication Publication Date Title
US5287789A (en) Music training apparatus
US7795524B2 (en) Musical performance processing apparatus and storage medium therefor
US20060230910A1 (en) Music composing device
EP2362378A2 (en) Generation of harmony tone
CN111048058B (en) Singing or playing method and terminal for adjusting song music score in real time
US7705229B2 (en) Method, apparatus and programs for teaching and composing music
JP6520162B2 (en) Accompaniment teaching device and accompaniment teaching program
KR101428457B1 (en) Apparatus and method for providing user customized musical note
JP3780967B2 (en) Song data output device and program
CN110998708A (en) Differential presentation device, differential presentation method, and differential presentation program
KR20140078195A (en) System for user customized instrument education
JP4211388B2 (en) Karaoke equipment
EP1975920A2 (en) Musical performance processing apparatus and storage medium therefor
JP2019028407A (en) Harmony teaching device, harmony teaching method, and harmony teaching program
US20120325074A1 (en) Music machine
JP2014191331A (en) Music instrument sound output device and music instrument sound output program
JP7425558B2 (en) Code detection device and code detection program
JP3835131B2 (en) Automatic composition apparatus and method, and storage medium
JPH1091172A (en) Karaoke sing-along machine
JP5855837B2 (en) Electronic metronome
KR100817475B1 (en) An electronic metronome with a metronome output signal setup and replaying function by the note input method
JP3752956B2 (en) PERFORMANCE GUIDE DEVICE, PERFORMANCE GUIDE METHOD, AND COMPUTER-READABLE RECORDING MEDIUM CONTAINING PERFORMANCE GUIDE PROGRAM
TWM531033U (en) Real-time composer and playback apparatus
JP2008052118A (en) Electronic keyboard musical instrument and program used for the same
JP6036800B2 (en) Sound signal generating apparatus and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200410