CN110546705A - Lyric display device and method - Google Patents

Lyric display device and method

Info

Publication number
CN110546705A
CN110546705A
Authority
CN
China
Prior art keywords
display
display unit
lyric
state
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201780089625.1A
Other languages
Chinese (zh)
Other versions
CN110546705B (en)
Inventor
柏濑一辉
滨野桂三
郑宇新
Current Assignee
Yamaha Corp
Original Assignee
Yamaha Corp
Priority date
Filing date
Publication date
Application filed by Yamaha Corp
Publication of CN110546705A
Application granted
Publication of CN110546705B
Legal status: Active

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

Provided is a lyric display device with which lyrics and a state can be visually recognized without requiring complicated operations. Display frames (45) serving as a plurality of display units are arranged in series in the main regions (41, 42) of the display unit (33). A CPU (10) acquires singing data (14a) for a selected song and acquires a predetermined state relating to an electronic musical instrument (100). The CPU (10) displays character information included in the singing data (14a) on the 1st display unit group (the display frames (45) of the main regions (41, 42)) in the display unit (33), and displays the acquired state on the 2nd display unit group (the display frames (46) of the sub regions (43, 44)).

Description

Lyric display device and method
Technical Field
The present invention relates to a lyric display apparatus and method for displaying lyrics.
Background
Conventionally, there is known a device that displays lyrics to be sung in accordance with a performance by a player (Patent Document 1). The device updates the singing position in the lyrics represented by lyric data so that the character at the singing position is displayed in a form (color) different from the other characters. Electronic musical instruments and the like also have screens that display settings and accept instructions for setting various functions, sound generation parameters, and the like.
Patent Document 1: Japanese Patent No. 4735544
Disclosure of the Invention
However, if an operation to switch the screen is required every time various settings are made or completed, the procedure is troublesome. For example, if an operation to switch to the screen on which the lyrics or a state are displayed is required whenever the user wishes to view them, operation becomes complicated. Performing such a switching operation during a performance is particularly difficult.
An object of the present invention is to provide a lyric display device and method with which the lyrics and a state can be visually recognized without complicated operations.
In order to achieve the above object, according to the present invention, there is provided a lyric display apparatus having: a display unit (33) composed of a plurality of display sections (45, 46) arranged in series; a data acquisition unit that acquires lyric data (14a), the lyric data (14a) including character information for displaying lyrics; a state acquisition unit that acquires a predetermined state; and a display control unit that displays character information included in the lyric data acquired by the data acquisition unit on a 1st display unit group (45), which is a continuous partial group of display sections in the display unit, and displays the state acquired by the state acquisition unit on a 2nd display unit group (46) in the display unit that does not belong to the 1st display unit group.
In order to achieve the above object, according to the present invention, there is also provided a lyric display method having: a data acquisition step of acquiring lyric data including character information for displaying lyrics; a state acquisition step of acquiring a predetermined state; and a display control step of displaying character information included in the lyric data acquired in the data acquisition step on a 1st display unit group, which is a continuous partial group of display sections in a display unit, and displaying the state acquired in the state acquisition step on a 2nd display unit group in the display unit that does not belong to the 1st display unit group. Note that the reference numerals in parentheses above are examples.
Advantageous Effects of the Invention
According to the present invention, the lyrics and the state can be visually recognized without requiring complicated operations.
Drawings
Fig. 1 is a schematic diagram of a lyric display apparatus.
Fig. 2 is a schematic diagram of the lyric display apparatus.
Fig. 3 is a block diagram of the electronic musical instrument.
Fig. 4 is a diagram showing a main part of the display unit.
Fig. 5 is a flowchart showing an example of the flow of processing in the case of performing a performance.
Fig. 6 is a diagram showing an example of lyric text data.
Fig. 7 is a diagram showing an example of the type of clip data.
Fig. 8 is a flowchart of the display process.
Fig. 9 is a diagram showing an example of display in the display unit.
Fig. 10 is a diagram showing an example of display in the display unit.
Fig. 11A to 11C are diagrams showing examples of display of the sub-area.
Fig. 12 is a diagram showing an example of display in the display unit.
Fig. 13 is a diagram showing an example of display in the display unit.
Fig. 14 is a diagram showing an example of display in the display unit.
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings.
Fig. 1 and Fig. 2 are schematic diagrams of a lyric display device according to an embodiment of the present invention. The lyric display device is configured, as an example, as an electronic musical instrument 100 that is a keyboard instrument, and has a main body portion 30 and a neck portion 31. The body 30 has a 1st surface 30a, a 2nd surface 30b, a 3rd surface 30c, and a 4th surface 30d. The 1st surface 30a is a keyboard arrangement surface on which a keyboard section KB including a plurality of keys is arranged. The 2nd surface 30b is the back surface, on which hook members 36 and 37 are provided. A strap, not shown, can be stretched between the hook members 36 and 37, and the player typically hangs the strap on the shoulder and performs by, for example, operating the keyboard section KB. Therefore, when the instrument is hung on the shoulder with the scale direction (key arrangement direction) of the keyboard KB in the left-right direction, the 1st surface 30a and the keyboard KB face the listener, while the 3rd surface 30c and the 4th surface 30d face substantially downward and upward, respectively. The electronic musical instrument 100 is designed so that the keyboard section KB is played mainly with the right hand when used while hung on the shoulder.
The neck 31 extends from a side of the body 30. Various kinds of operation members including a forward operation member 34 and a return operation member 35 are disposed in the neck portion 31. A display unit 33 made of liquid crystal or the like is disposed on the 4 th surface 30d of the main body portion 30. The shape of the body 30 and the neck 31 is substantially rectangular when viewed from the side, but the 4 surfaces constituting the rectangle may be curved surfaces such as convex surfaces instead of flat surfaces.
The electronic musical instrument 100 is a musical instrument capable of performing a singing simulation in accordance with an operation of a performance operating element. Here, the singing simulation means outputting a voice simulating a human voice by singing synthesis. The white keys and the black keys, which are the keys of the keyboard section KB, are arranged in the order of tone pitch, and the keys are associated with different tone pitches. When playing the electronic musical instrument 100, the user presses a desired key of the keyboard section KB. The electronic musical instrument 100 detects a key operated by a user, and emits a singing voice at a pitch corresponding to the operated key. In addition, the order of syllables of the uttered singing voice is predetermined.
Fig. 3 is a block diagram of the electronic musical instrument 100. The electronic musical instrument 100 includes a CPU (Central Processing Unit) 10, a timer 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a data storage unit 14, a performance operation unit 15, other operation units 16, a parameter value setting operation unit 17, a display unit 33, a sound source 19, an effect circuit 20, a sound system 21, a communication I/F (interface) 22, and a bus 23.
The CPU 10 is a central processing unit that controls the entire electronic musical instrument 100. The timer 11 is a module that measures time. The ROM 12 is a nonvolatile memory that stores a control program, various data, and the like. The RAM 13 is a volatile memory used as a work area of the CPU 10, various buffers, and the like. The display unit 33 is a display module such as a liquid crystal display panel or an organic EL (Electro-Luminescence) panel. The display unit 33 displays an operation state of the electronic musical instrument 100, various setting screens, a message for the user, and the like.
The performance operating element 15 is a module that mainly accepts performance operations for specifying a pitch. In the present embodiment, the keyboard section KB, the forward operation element 34, and the return operation element 35 are included in the performance operating element 15. For example, when the performance operating element 15 is a keyboard, it outputs performance information such as note-on/note-off based on the on/off state of the sensor corresponding to each key, together with the intensity of the keystroke (velocity). The performance information may be in the form of MIDI (Musical Instrument Digital Interface) messages.
The other operating elements 16 are, for example, operation modules such as operation buttons and operation knobs for performing settings other than performance, such as settings related to the electronic musical instrument 100. The parameter value setting operating element 17 is an operation means such as an operation button or an operation knob for setting parameters of the sound. Examples of parameters related to the attributes of the singing voice include Harmony, Brightness (Brilliance), Resonance, and Gender Factor. Harmony is a parameter for setting the balance of the harmonic components included in a sound. Brightness is a parameter for setting the brightness of the sound, giving it a tonal change. Resonance is a parameter for setting the tone color and intensity of the voiced sound. The gender factor is a parameter for setting formants, changing the thickness and texture of the voice toward female or male. The external storage device 3 is, for example, an external device connected to the electronic musical instrument 100, such as a device that stores voice data. The communication I/F 22 is a communication module that communicates with external devices. The bus 23 carries out data transmission between the respective parts of the electronic musical instrument 100.
The data storage unit 14 stores singing data 14a (lyric data). The singing data 14a includes lyric text data, a phoneme information database, and the like. The lyric text data is data describing lyrics. In the lyric text data, lyrics of each song are described in a manner divided into syllables. That is, the lyric text data has character information for dividing the lyrics into syllables, and the character information is also information for displaying the lyrics in association with the syllables. Here, the syllable is an aggregate of tones output according to 1 performance operation. The phoneme information database is a database storing speech fragment data. The speech segment data is data representing a waveform of speech, and for example, spectral data of a sample sequence including a speech segment as waveform data. In addition, the voice section data includes section pitch data representing a pitch of a waveform of the voice section. The lyric text data and the voice fragment data can be managed through the database respectively.
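As a rough illustration of this data layout, the syllable-divided lyric text and the segment database could be modeled as below. All names and field values are invented for the sketch and are not from the patent.

```python
# Illustrative model of the singing data (14a): lyric text divided into
# syllables, plus a phoneme database of speech-segment entries. Everything
# here is a hypothetical stand-in for the structures described in the text.

lyric_text_data = ["は", "る", "よ", "こ", "い"]  # one entry per syllable

# Each segment entry pairs waveform data with its segment pitch.
phoneme_database = {
    "#-h": {"kind": "phoneme chain", "segment_pitch_hz": 220.0},
    "h-a": {"kind": "phoneme chain", "segment_pitch_hz": 220.0},
    "a":   {"kind": "normal part",   "segment_pitch_hz": 220.0},
}

def syllable_at(cursor: int) -> str:
    """Return the syllable at the given (0-based) cursor position."""
    return lyric_text_data[cursor]

print(syllable_at(0))  # first syllable to be sung
```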
The sound source 19 is a module having a plurality of sound generation channels. Under the control of the CPU 10, the sound source 19 assigns 1 sound generation channel corresponding to the performance of the user. When a singing voice is generated, the sound source 19 reads the segment data corresponding to the performance from the data storage 14 in the assigned sound generation channel to generate singing voice data. The effect circuit 20 applies the acoustic effect specified by the parameter value setting operation element 17 to the singing voice data generated by the sound source 19. The sound system 21 converts the singing voice data processed by the effect circuit 20 into an analog signal using a digital/analog converter. The audio system 21 amplifies the singing voice converted into an analog signal and outputs the amplified singing voice from a speaker or the like.
Fig. 4 is a diagram showing the main part of the display unit 33. The display unit 33 includes a 1st main area 41, a 2nd main area 42, a 1st sub area 43, and a 2nd sub area 44 as display areas. The entire display area has a 2-line (2-row) structure: the 1st main area 41 and the 1st sub area 43 are disposed in the 1st line (upper row), and the 2nd main area 42 and the 2nd sub area 44 are disposed in the 2nd line (lower row). In the main regions 41 and 42, a plurality of display frames 45 (45-1, 45-2, 45-3, ..., 45-13) serving as display units are arranged in series along the longitudinal direction of the display unit 33. The sub-regions 43 and 44 likewise each have a plurality of display frames 46 (46-1, 46-2, and 46-3). The plurality of display frames 45 form a 1st display unit group, which is a continuous partial group of display units, and the plurality of display frames 46 form a 2nd display unit group, which does not belong to the 1st display unit group. The display frames 45 may be configured to display characters and the display frames 46 to display visual information such as icons; they are not limited to these configurations, and they need not be delimited by surrounding frames. Characters corresponding to syllables are displayed in the predetermined pronunciation order, starting with the display frame 45-1 at the left end of Fig. 4. The main areas 41 and 42 are used mainly for lyric display. The sub-areas 43 and 44 are used for display other than the lyrics, mainly for status display.
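The two-row layout above can be sketched as follows. The frame counts come from the description of Fig. 4; the class and field names are illustrative only.

```python
# Two-row display model: each row has a main area of 13 display frames (45-1
# to 45-13) for lyric characters and a sub area of 3 frames (46-1 to 46-3)
# for status icons. Structure names here are assumptions for the sketch.

MAIN_FRAMES = 13
SUB_FRAMES = 3

class Row:
    def __init__(self):
        self.main = [""] * MAIN_FRAMES  # 1st display unit group (lyrics)
        self.sub = [""] * SUB_FRAMES    # 2nd display unit group (status)

display = {"row1": Row(), "row2": Row()}
display["row1"].main[0] = "ダ"      # first lyric character in frame 45-1
display["row1"].sub[2] = "battery"  # e.g. a battery icon in the sub area
```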
Fig. 5 is a flowchart showing an example of the flow of processing in the case of performing a performance using the electronic musical instrument 100. Here, a description will be given of a process in a case where a user selects a tune for performance and performs a performance of the selected song. For simplicity of explanation, a case where only a single tone is output will be described even when a plurality of keys are simultaneously operated. In this case, only the highest pitch among the pitches of the simultaneously operated keys may be processed, or only the lowest pitch may be processed. The processing described below is realized, for example, by the CPU 10 executing a program stored in the ROM 12 or the RAM 13 to function as a control unit that controls various configurations of the electronic musical instrument 100.
When the power is turned on, the CPU 10 waits for an operation by which the user selects a song to be performed (step S101). If no song-selecting operation is performed after a certain time has elapsed, the CPU 10 may determine that a default song is selected. When the CPU 10 accepts the selection of a song, it reads the lyric text data of the singing data 14a of the selected song. Then, the CPU 10 sets the cursor position at the first syllable described in the lyric text data (step S102). Here, the cursor is a virtual indicator indicating the position of the next syllable to be pronounced. Next, the CPU 10 determines whether or not a note-on has been detected by an operation of the keyboard section KB (step S103). When no note-on is detected, the CPU 10 determines whether or not a note-off has been detected (step S107). On the other hand, if a note-on is detected, that is, if a new key operation is detected, the CPU 10 stops any sound currently being output (step S104). Next, the CPU 10 executes an output sound generation process that emits a singing sound corresponding to the note-on (step S105).
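The note-on/note-off handling above can be condensed into a small event-loop sketch. The event names and syllable list are hypothetical stand-ins; real key scanning and sound output are hardware-specific.

```python
# Condensed sketch of the performance loop of Fig. 5 (steps S103-S108).
# Event names are invented placeholders for keyboard sensor events.

def perform(events, syllables):
    cursor = 0       # S102: cursor starts at the first syllable
    sung = []
    for ev in events:
        if ev == "note-on":                  # S103: new key operation
            sung.append(syllables[cursor])   # S105: sing syllable at cursor
            cursor = min(cursor + 1, len(syllables) - 1)  # S106: advance
        elif ev == "note-off":               # S107: key released
            pass                             # S108: stop the current output
    return sung

print(perform(["note-on", "note-off", "note-on"], ["は", "る", "よ"]))
# -> ['は', 'る']
```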
The output sound generation process will be described. The CPU 10 first reads the clip data (waveform data) of the syllable corresponding to the cursor position, and outputs the tone of the waveform represented by the read clip data at a pitch corresponding to the note-on. Specifically, the CPU 10 obtains a difference between a pitch indicated by segment pitch data included in the speech segment data and a pitch corresponding to the operated key, and shifts a spectral distribution indicated by the waveform data in the frequency axis direction by a frequency corresponding to the difference. Thereby, the electronic musical instrument 100 can output the singing voice at a pitch corresponding to the operated key. Next, the CPU 10 updates the cursor position (reading position) (step S106), and advances the process to step S107.
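The pitch-matching step can be expressed numerically: the spectrum is shifted by the difference between the segment pitch and the pitch of the pressed key. A minimal sketch follows, assuming equal temperament and MIDI note numbering, neither of which the text specifies.

```python
# Sketch of the pitch-shift computation: how far the pressed key's pitch is
# from the recorded segment pitch. Equal temperament and MIDI numbering are
# assumptions made for this illustration.

def midi_to_hz(note: int) -> float:
    """Equal-tempered frequency of a MIDI note (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def shift_ratio(segment_pitch_hz: float, key_note: int) -> float:
    """Factor by which the segment spectrum's frequencies are scaled."""
    return midi_to_hz(key_note) / segment_pitch_hz

# A segment recorded at 220 Hz, played on the A4 key, shifts up one octave.
print(shift_ratio(220.0, 69))  # -> 2.0
```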
Here, determination of the cursor position and pronunciation of the singing voice in the processing in steps S105 and S106 will be described with specific examples. First, the update of the cursor position will be described.
Fig. 6 is a diagram showing an example of lyric text data. In the example of Fig. 6, lyrics of 5 syllables c1 to c5 are described in the lyric text data. Each of the characters "は (ha)", "る (ru)", "よ (yo)", "こ (ko)", and "い (yi)" is one character of Japanese hiragana, and each character corresponds to one syllable. The CPU 10 updates the cursor position in units of syllables. For example, when the cursor is located at the syllable c3, the CPU 10 reads the clip data corresponding to "よ (yo)" from the data storage unit 14 and emits the singing voice "よ (yo)". When the pronunciation of "よ (yo)" is finished, the CPU 10 moves the cursor position to the next syllable c4. In this way, the CPU 10 sequentially moves the cursor position to the next syllable in response to each note-on.
Next, the pronunciation of the singing voice will be described. Fig. 7 is a diagram showing an example of the types of clip data. In order to utter the syllable corresponding to the cursor position, the CPU 10 extracts the speech segment data corresponding to that syllable from the phoneme information database. There are 2 kinds of speech segment data: phoneme chain data and normal partial data. Phoneme chain data represents a speech piece at a point where the pronunciation changes, such as "from silence (#) to consonant", "from consonant to vowel", or "from vowel to the consonant or vowel of the next syllable". Normal partial data represents a speech segment during which the utterance of a vowel continues. For example, when the cursor is positioned at "は (ha)" of syllable c1, the sound source 19 selects the phoneme chain data "#-h" corresponding to "silence → consonant h", the phoneme chain data "h-a" corresponding to "consonant h → vowel a", and the normal partial data "a" corresponding to "vowel a". Then, when the performance starts and a key press is detected, the CPU 10 outputs the singing voice based on the phoneme chain data "#-h", the phoneme chain data "h-a", and the normal partial data "a", at a pitch corresponding to the operated key and with keystroke dynamics corresponding to the operation. Determination of the cursor position and pronunciation of the singing voice are performed as described above.
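The segment selection for one syllable could be sketched as below. The syllable-to-phoneme table is a tiny illustrative stand-in, not the patent's phoneme information database.

```python
# Sketch of selecting speech-segment data for a syllable, following the
# "は (ha)" example: silence -> consonant, consonant -> vowel, then the
# sustained vowel. The phoneme table below is illustrative only.

SYLLABLE_PHONEMES = {"は": ("h", "a"), "る": ("r", "u"), "よ": ("y", "o")}

def segments_for(syllable: str) -> list:
    consonant, vowel = SYLLABLE_PHONEMES[syllable]
    return [
        "#-" + consonant,         # phoneme chain: no sound -> consonant
        consonant + "-" + vowel,  # phoneme chain: consonant -> vowel
        vowel,                    # normal partial data: sustained vowel
    ]

print(segments_for("は"))  # -> ['#-h', 'h-a', 'a']
```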
When the note-off is detected in step S107 in fig. 5, if the sound is being output, the CPU 10 stops the output (step S108) and advances the process to step S110. On the other hand, if the note-off is not detected, the CPU 10 advances the process to step S110. In step S110, the CPU 10 determines whether the performance is ended. If the performance is not ended in step S110, the CPU 10 returns the process to step S103. On the other hand, in the case of ending the performance, if a tone is being output, the CPU 10 stops the output of the tone (step S111), and ends the processing shown in fig. 5. The CPU 10 can determine whether or not the musical performance is ended based on, for example, whether or not the last syllable of the selected tune has been uttered, whether or not an operation for ending the musical performance is performed by another operator 16, or the like.
Next, the operation of visually displaying the lyrics will be described. First, the lyric text data included in the singing data 14a includes at least character information associated with the plurality of syllables of the selected song. The lyric text data is data used for singing by the singing section (the sound source 19, the effect circuit 20, and the sound system 21). The lyric text data is divided into a plurality of sections in advance, and each of the divided sections is referred to as a "Phrase". A phrase is a unit with a certain coherence and is divided in a manner easily recognized by the user, but the definition of a section is not limited to this. When a song is selected, the CPU 10 reads the lyric text data in a state divided into a plurality of phrases. Each phrase includes one or more syllables and the character information corresponding to those syllables.
When the electronic musical instrument 100 is started, the CPU 10 causes the 1st main area 41 (Fig. 4) of the display unit 33 to display the character information corresponding to the first of the phrases of the selected song. At this time, the first character of the 1st phrase is displayed in the leftmost display frame 45-1, and as many characters as can be displayed in the 1st main area 41 are shown. For the 2nd phrase, as many characters as can be displayed are shown in the 2nd main area 42. The keyboard section KB functions as a progress instruction acquisition section that acquires an instruction to sing, i.e., an instruction to advance the display of the character information. In response to each singing instruction, the CPU 10 causes the singing section to sing the next syllable to be sung, and advances the display of the characters in the 1st main area 41 as the syllables advance. The character display steps leftward in Fig. 4, and characters that could not initially be displayed appear from the right-hand display frames 45 as the singing advances. The cursor position indicates the next syllable to be sung, namely the syllable corresponding to the character displayed in the display frame 45-1 of the 1st main area 41.
Note that one character does not necessarily correspond to one syllable. For example, in "だ (da)", the two characters "た (ta)" and the voiced sound mark "゛" together correspond to one syllable. The lyrics may also be in English; when the lyrics are "september", for example, there are 3 syllables: "sep", "tem", and "ber". "sep" is one syllable, but the 3 characters "s", "e", and "p" correspond to that one syllable. Because the character display advances in syllable units, singing "だ (da)" advances the display by 2 characters. As described above, the lyrics are not limited to Japanese and may be in other languages.
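Because the display advances by syllables rather than by characters, the step size varies. A sketch using the "september" example (the syllable split is from the text; the function name is invented):

```python
# Advance the lyric row by one syllable at a time; a syllable may span
# several characters ("sep" = 3 letters). Names here are illustrative.

def chars_consumed(syllable: str) -> int:
    """How many display characters singing one syllable removes."""
    return len(syllable)

syllables = ["sep", "tem", "ber"]  # "september" split into 3 syllables
row = "september"
row = row[chars_consumed(syllables[0]):]  # one sung syllable steps the row
print(row)  # -> "tember"
```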
Fig. 8 is a flowchart of the display process. The processing is realized by, for example, the CPU 10 executing a program stored in the ROM 12 or the RAM 13 to function as a control unit that controls various configurations of the electronic musical instrument 100. The processing shown in fig. 8 is executed in parallel with the processing shown in fig. 5 after the power is turned on. In the processing shown in fig. 8, the CPU 10 functions as a state acquisition unit, a display control unit, a data acquisition unit, and a setting instruction acquisition unit. Fig. 9, 10, 12, 13, and 14 are diagrams showing examples of display on the display unit 33. Fig. 11A, 11B, and 11C are diagrams showing examples of display of the sub-regions.
When the power is turned on, the CPU 10 causes the display unit 33 to display the startup screen shown in Fig. 9 (step S201). In the startup screen, for example, the manufacturer name is displayed in the 1st main area 41 and the 1st sub area 43, and the product name is displayed in the 2nd main area 42 and the 2nd sub area 44. Then, the CPU 10 acquires a "predetermined state" relating to the electronic musical instrument 100 (step S202). During the state acquisition process, the CPU 10 may switch the display unit 33 to a display indicating activation. Here, the predetermined state includes a state related to the power supply, for example whether the power source is a commercial power supply or a battery, and further includes the remaining battery level. The predetermined state also includes the transposition setting and its value, the number of the currently selected song, the presence or absence of a network connection, and so on. The predetermined state is not limited to these examples. When acquiring the predetermined state, the CPU 10 also acquires the singing data 14a of the selected song and extracts the plurality of phrases corresponding to the selected song from the lyric text data of the acquired singing data 14a. The plurality of phrases are ordered.
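The "predetermined state" acquired in step S202 could be modeled as a simple record. The field names and values below are assumptions; the text only lists power type, battery level, transposition, selected-song number, and network connection as examples.

```python
# Sketch of the state record acquired at startup (step S202). All field
# names and values are hypothetical illustrations of the listed examples.

def acquire_state() -> dict:
    return {
        "power_source": "battery",   # or "mains" (commercial power supply)
        "battery_level": "full",     # shown as an icon in a sub area
        "transpose": 0,              # transposition setting and value
        "song_number": 1,            # currently selected song
        "network": False,            # presence/absence of a connection
    }

state = acquire_state()
print(state["power_source"], state["battery_level"])
```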
Next, the CPU 10 causes the display unit 33 to display a normal screen as shown in Fig. 10 (step S203). Specifically, the CPU 10 displays lyrics in the main areas 41 and 42, and displays information on the acquired state in the sub areas 43 and 44. Regarding the lyric display in particular, the CPU 10 displays as many characters of the 1st extracted phrase as fit, from the beginning, in the 1st main area 41, and as many characters of the 2nd phrase as fit, from the beginning, in the 2nd main area 42. For example, as shown in Fig. 10, the CPU 10 displays the character string "ダンダント..." in the 1st main area 41 and the character string "アイウエオ..." in the 2nd main area 42. Information on the selected song or the set tone may be displayed temporarily during the transition from the startup display to the normal screen display.
The CPU 10 may use at least one of the main areas 41 and 42 for lyric display, or may use at least one of the sub areas 43 and 44 for status-related information (hereinafter, status information) display. As an example of the status information display, fig. 10 shows an icon showing that the power supply state is a battery and the charge state (remaining battery level) is full. In addition, the electronic musical instrument 100 may also have a singing composition mode and a musical instrument pronunciation mode. The display example of fig. 10 assumes a singing composition mode. In the normal screen display in the musical instrument sound generation mode, the CPU 10 may display information indicating the tone color of the musical instrument in the main areas 41 and 42 instead of the lyrics.
Next, the CPU 10 determines whether or not the power has been turned off (step S204). When the power is turned off, the CPU 10 stores the current information of various kinds (states, setting values, and the like) in nonvolatile memory (the data storage unit 14 or the like) (step S205) and then ends the processing shown in Fig. 8. When the power is not turned off, the CPU 10 determines whether or not there is a change in the predetermined state (step S206). If there is no change in the predetermined state, the CPU 10 advances the process to step S208; if there is a change, the CPU 10 updates the display of the state information (step S207) and then advances to step S208.
Here, in step S207, the CPU 10 updates the display of the sub-areas 43 and 44 based on the newly acquired information on the predetermined state. For example, when state information such as a decrease in the remaining battery level, or no remaining battery, is acquired, the CPU 10 switches the battery-level display in the sub-area 44 to a display corresponding to the new remaining level, as shown in Figs. 11A and 11B. Alternatively, when the power supply is switched from the battery to a commercial power supply, the CPU 10 switches the display of the sub-area 44 to an icon depicting a power outlet, as shown in Fig. 11C. In these cases, the lyric display in the main areas 41 and 42 remains as it is.
In step S208, the CPU 10 determines whether or not a change is accepted with respect to the singing data 14 a. The change related to the singing data 14a corresponds to, for example, a change in the singing position in the current selected song or a change in the selected song itself. When the change related to the singing data 14a is not accepted, the CPU 10 determines whether or not an instruction of a predetermined setting is accepted (step S209). Here, the predetermined settings include, for example, settings related to sound generation (parameter setting change, such as effect and volume), and settings related to various functions (octave shift, mute, and the like). The predetermined setting may include updating of firmware. The predetermined setting is not limited to these examples.
In step S209, the CPU 10 returns the process to step S204 when no instruction for a predetermined setting is received, and advances to step S210 when such an instruction is received. In step S210, the CPU 10 displays the setting screen using both the main areas and the sub areas. That is, the CPU 10 switches the display mode from the normal screen display to a setting screen display that presents information for the predetermined setting using both the 1st display unit group and the 2nd display unit group of the display unit 33. For example, when a setting instruction for updating the firmware is received, as shown in Fig. 12, the CPU 10 displays character information indicating the update mode in the main area 41 and the sub area 43, and character information indicating the current version and the updated version in the main area 42 and the sub area 44. When an instruction to change an effect setting is received, the CPU 10 displays the type, value, and the like of the effect before and after the change using the main areas 41 and 42 and the sub areas 43 and 44. When both the 1st and 2nd rows of the display unit 33 are used, the usage of each row is not limited; for example, the items being changed may be displayed on the 1st row and their setting values on the 2nd row.
In step S211, the CPU 10 reflects the content of the received setting instruction, and then determines whether a state with no operation related to the setting instruction has continued for a predetermined time while the setting screen is displayed (step S212). If a new operation related to the setting instruction is performed before the predetermined time elapses, that operation is accepted and the process returns to step S210. The content of the setting screen display changes as steps S210 to S212 are repeated; the example shown in fig. 12 is a snapshot from the middle of this process. On the other hand, if no setting instruction is given for the predetermined time while the setting screen is displayed, the CPU 10 returns the display mode of the display unit 33 to the state immediately before the switch to the setting screen display (step S213). Thus, when information for a predetermined setting is displayed and no operation related to a setting instruction occurs for the predetermined time, the screen returns to the preceding normal screen display. The process then returns to step S204.
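The timeout behavior of steps S210 to S213 can be sketched as a small state machine. The class name, the 5-second timeout value, and the `tick()` polling design are assumptions; the patent only says "a predetermined time" and does not prescribe an implementation.

```python
import time

class DisplayController:
    """Hypothetical sketch of the setting-screen timeout (steps S210-S213):
    each setting operation restarts the timer; if no operation occurs for
    TIMEOUT_SEC seconds, the display reverts to the previous display mode."""
    TIMEOUT_SEC = 5.0  # assumed value; the patent only says "predetermined time"

    def __init__(self):
        self.mode = "normal"        # normal screen display (lyrics + state)
        self.previous_mode = None   # mode to restore after the timeout
        self.last_op = 0.0

    def open_setting_screen(self):
        # step S210: switch to setting screen, remembering the prior mode
        self.previous_mode = self.mode
        self.mode = "setting"
        self.last_op = time.monotonic()

    def on_setting_operation(self):
        # a new operation before the timeout restarts the timer (back to S210)
        self.last_op = time.monotonic()

    def tick(self):
        # called periodically from the main loop (the check of step S212)
        if self.mode == "setting" and time.monotonic() - self.last_op >= self.TIMEOUT_SEC:
            self.mode = self.previous_mode  # step S213: revert the display mode
```

A monotonic clock is used so the timeout is unaffected by wall-clock adjustments.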
In step S208, when a change relating to the singing data 14a has been accepted, the CPU 10 determines whether a change of the selected song has been accepted (step S214). When the change of the selected song has not been accepted, a change of the singing position within the currently selected song has been accepted, so the CPU 10 updates the lyric display in the main areas 41 and 42 (step S218). That is, the CPU 10 advances the phrase displayed in the 1st main area 41 by one syllable. Specifically, the CPU 10 erases the characters corresponding to one syllable at the left end of the 1st main area 41 and shifts the remaining character string leftward by the number of erased characters. By advancing the phrase display by one syllable, the syllable corresponding to the character newly shown in display frame 45-1 becomes the next singing target. For example, in the display state shown in fig. 10, if any key of the keyboard section KB is pressed to sing the syllable "ダ" at the beginning of the 1st main area 41, the "ン" corresponding to the next syllable is displayed at the beginning (fig. 13). Here, the display of the 2nd main area 42 is not changed (not advanced). Step S218 therefore corresponds to steps S105 and S106 of fig. 5. The display may also be advanced phrase by phrase by operating the forward operation element 34, or moved back phrase by phrase by operating the return operation element 35. When such phrase-based advancing or reversing is used, a process that updates the cursor position according to the phrase may be added immediately before step S110 in fig. 5. The process then advances to step S209.
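The one-syllable advance of step S218 amounts to dropping the leading syllable and left-shifting the remainder. The following is a minimal sketch; the function name, the syllable-list representation, and the 16-frame row width are assumptions made for illustration.

```python
def advance_main_area(phrase_syllables, display_width=16):
    """Sketch of step S218: erase the characters of the leftmost syllable in
    the 1st main area and shift the remaining characters left, so that the
    character newly shown in the first display frame is the next singing
    target. Returns the string to show across the row's display frames."""
    remaining = phrase_syllables[1:]        # eliminate the leftmost (sung) syllable
    text = "".join(remaining)               # left-shift by the erased amount
    return text[:display_width].ljust(display_width)  # clip/pad to the frame count

# e.g. after singing the leading "ダ", the next syllable "ン" moves to the front
row = advance_main_area(["ダ", "ン", "ス"])
```

A syllable may span more than one character, which is why the sketch shifts by syllables rather than by single characters.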
When a change of the selected song has been accepted in step S214, the CPU 10 displays a song-change screen using one or both of the main areas 41 and 42 of the display unit 33 (step S215), and reflects the change in accordance with the change instruction (step S216). At this time, the sub areas 43 and 44 maintain the state display. For example, as shown in fig. 14, the titles of the songs before and after the change are displayed in the 1st main area 41, and the data is replaced. Then, if no operation related to song selection occurs within the predetermined time, the CPU 10 switches the display mode in step S217 to the normal screen display, in which the lyrics of the newly selected song are shown in the main areas 41 and 42. Thus, in the normal screen display shown in fig. 10, the display unit 33 shows content updated to the lyrics of the selected song. The process then advances to step S209.
According to the present embodiment, the CPU 10 displays the character information included in the acquired singing data 14a on the 1st display unit group (the group of display frames 45 in the main areas 41 and 42) of the display unit 33, and displays the acquired state on the 2nd display unit group (the group of display frames 46 in the sub areas 43 and 44). The lyrics and the state can thus be recognized without complicated operations.
Further, when an instruction for a predetermined setting is obtained, information for that setting is displayed using both the 1st display unit group and the 2nd display unit group, so a wide portion of the display unit 33 can be used when a predetermined setting is to be made.
Further, if no operation related to the instruction for the predetermined setting continues for a predetermined time while the information for the predetermined setting is displayed, the display mode of the display unit 33 returns to the normal screen display that preceded the switch; once setting is complete, the display thus returns to the lyric and state display without any additional operation.
The entire display region of the display unit 33 has a 2-line (2-segment) structure, but a structure of 3 or more lines is also possible. The main areas are arranged vertically, but the arrangement is not limited to this example and may be horizontal.
The source of the singing data 14a is not limited to the storage unit; an external device connected via the communication I/F 22 may also serve as the source. The CPU 10 may also acquire singing data edited or created on the electronic musical instrument 100 by the user.
While the present invention has been described in detail based on preferred embodiments, the present invention is not limited to these specific embodiments, and various embodiments within a scope not departing from the gist of the present invention are also included in the present invention.
Description of the reference numerals
10 CPU (state acquisition unit, display control unit, data acquisition unit, setting instruction acquisition unit)
14a singing data (lyric data)
33 display unit
41 1st main area
42 2nd main area
43 1st sub area
44 2nd sub area
45 display frame (1st display unit group)
46 display frame (2nd display unit group)

Claims (7)

1. A lyric display apparatus having:
a display unit including a plurality of display units arranged in a row;
a data acquisition unit that acquires lyric data including character information for displaying lyrics;
a state acquisition unit that acquires a predetermined state; and
a display control unit that displays the character information included in the lyric data acquired by the data acquisition unit on a 1st display unit group, which is a continuous partial display unit group in the display unit, and displays the state acquired by the state acquisition unit on a 2nd display unit group, which is a display unit group in the display unit other than the 1st display unit group.
2. The lyric display apparatus according to claim 1, wherein
the lyric display apparatus comprises a setting instruction acquisition unit that acquires an instruction for a predetermined setting, and
the display control unit switches the display mode of the display unit to display information for the predetermined setting using the 1st display unit group and the 2nd display unit group when the setting instruction acquisition unit acquires the instruction for the predetermined setting.
3. The lyric display apparatus according to claim 2, wherein
the display control unit returns the display mode of the display unit to the state immediately before switching if no operation related to the instruction for the predetermined setting continues for a predetermined time while the information for the predetermined setting is displayed on the display unit.
4. The lyric display apparatus according to any one of claims 1 to 3, comprising
a forward instruction acquisition unit that acquires an instruction to advance the display of character information, wherein
the display control unit updates the character information displayed in the 1st display unit group in accordance with the instruction to advance the display of the character information acquired by the forward instruction acquisition unit.
5. The lyric display apparatus according to any one of claims 1 to 4, wherein
the predetermined state includes a state related to power supply to the lyric display apparatus.
6. The lyric display apparatus according to any one of claims 1 to 5, wherein
the predetermined setting includes a setting related to sound generation.
7. A lyric display method, comprising:
a data acquisition step of acquiring lyric data including character information for displaying lyrics;
a state acquisition step of acquiring a predetermined state; and
a display control step of displaying the character information included in the lyric data acquired in the data acquisition step on a 1st display unit group, which is a continuous partial display unit group in a display unit, and displaying the state acquired in the state acquisition step on a 2nd display unit group, which is a display unit group in the display unit other than the 1st display unit group.
CN201780089625.1A 2017-04-27 2017-04-27 Lyric display device and method Active CN110546705B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/017436 WO2018198382A1 (en) 2017-04-27 2017-04-27 Apparatus and method for displaying lyrics

Publications (2)

Publication Number Publication Date
CN110546705A true CN110546705A (en) 2019-12-06
CN110546705B CN110546705B (en) 2023-05-09

Family

ID=63918855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780089625.1A Active CN110546705B (en) 2017-04-27 2017-04-27 Lyric display device and method

Country Status (3)

Country Link
JP (1) JP6732216B2 (en)
CN (1) CN110546705B (en)
WO (1) WO2018198382A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11282464A (en) * 1998-03-26 1999-10-15 Roland Corp Display device of automatic playing device
JP2002000734A (en) * 2000-06-22 2002-01-08 Daiichikosho Co Ltd Musical therapy support device
CN1551099A (en) * 2003-05-09 2004-12-01 雅马哈株式会社 Apparatus and computer program for displaying a musical score
JP2008170592A (en) * 2007-01-10 2008-07-24 Yamaha Corp Device and program for synthesizing singing voice
JP2012159575A (en) * 2011-01-31 2012-08-23 Daiichikosho Co Ltd Singing guidance system by plurality of singers
JP2016206490A (en) * 2015-04-24 2016-12-08 ヤマハ株式会社 Display control device, electronic musical instrument, and program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09146566A (en) * 1995-11-20 1997-06-06 Fujitsu General Ltd Karaoke device
JPH09146573A (en) * 1995-11-21 1997-06-06 Ekushingu:Kk Karaoke device
JPH10161655A (en) * 1996-11-29 1998-06-19 Casio Comput Co Ltd Navigating device of musical instrument
JP4368817B2 (en) * 2005-03-17 2009-11-18 株式会社第一興商 Portable music player with lyrics display
JP2009295012A (en) * 2008-06-06 2009-12-17 Sharp Corp Control method for information display, display control program and information display
JP5549521B2 (en) * 2010-10-12 2014-07-16 ヤマハ株式会社 Speech synthesis apparatus and program


Also Published As

Publication number Publication date
CN110546705B (en) 2023-05-09
JP6732216B2 (en) 2020-07-29
WO2018198382A1 (en) 2018-11-01
JPWO2018198382A1 (en) 2019-11-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant