CN113140201A - Accompaniment sound generation device, electronic musical instrument, accompaniment sound generation method, and accompaniment sound generation program

Info

Publication number
CN113140201A
Authority
CN
China
Prior art keywords
sound
accompaniment
performance
style
generation
Prior art date
Legal status
Granted
Application number
CN202011577931.XA
Other languages
Chinese (zh)
Other versions
CN113140201B (en)
Inventor
Daichi Watanabe (渡边大地)
Current Assignee
Yamaha Corp
Original Assignee
Yamaha Corp
Priority date: 2020-01-17
Filing date: 2020-12-28
Publication date: 2021-07-20
Application filed by Yamaha Corp
Publication of CN113140201A
Application granted
Publication of CN113140201B
Status: Active

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/38 Chord
    • G10H1/383 Chord detection and/or recognition, e.g. for correction, or automatic bass generation
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/005 Musical accompaniment, i.e. complete instrumental rhythm synthesis added to a performed melody, e.g. as output by drum machines
    • G10H2210/011 Fill-in added to normal accompaniment pattern
    • G10H2210/015 Accompaniment break, i.e. interrupting then restarting
    • G10H2240/00 Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/075 Musical metadata derived from musical analysis or for use in electrophonic musical instruments
    • G10H2240/081 Genre classification, i.e. descriptive metadata for classification or selection of musical pieces according to style

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

The invention provides an accompaniment sound generation device, an electronic musical instrument, an accompaniment sound generation method, and an accompaniment sound generation program. The accompaniment sound generation device includes: a specifying unit that specifies, based on an input performance sound, a plurality of performance parts in which accompaniment sounds are to be generated; an accompaniment sound generation unit that generates, for each performance sound, the accompaniment sounds belonging to the specified plurality of performance parts; and an accompaniment sound output unit that outputs the accompaniment sounds generated for the plurality of performance parts in time with the sound emission timing.

Description

Accompaniment sound generation device, electronic musical instrument, accompaniment sound generation method, and accompaniment sound generation program
Technical Field
The present invention relates to an accompaniment sound generation device, method, and program, and to an electronic musical instrument having the accompaniment sound generation device.
Background
There are known electronic musical instruments with a function that adds automatic accompaniment, based on prestored accompaniment pattern data, to the performance sounds played by a performer. For example, there are electronic keyboard instruments having an automatic accompaniment function. When the player plays the keyboard, the instrument outputs automatic accompaniment sounds that match the performance sounds. The automatic accompaniment data generation device of patent document 1 below controls the rhythm of the automatic accompaniment sounds so as to align with the accent positions of the performance.
Patent document 1: Japanese Patent Laid-Open Publication No. 2017-58597
By playing an electronic musical instrument having an automatic accompaniment function, a player can, for example, enjoy performing a melody over the accompaniment sounds. However, because the automatic accompaniment function generates repetitive accompaniment sounds based on the accompaniment pattern data, the performer may find it unsatisfying. To make playing more enjoyable, an automatic accompaniment function that generates richly varied accompaniment sounds is desired.
Disclosure of Invention
An object of the present invention is to generate richly varied automatic accompaniment sounds.
An accompaniment sound generation device according to one aspect of the present invention includes: a specifying unit that specifies, based on an input performance sound, a plurality of performance parts in which accompaniment sounds are to be generated; an accompaniment sound generation unit that generates, for each performance sound, the accompaniment sounds belonging to the specified plurality of performance parts; and an accompaniment sound output unit that outputs the accompaniment sounds generated for the plurality of performance parts in time with the sound emission timing.
A plurality of modes may be prepared as modes for generating accompaniment sounds from the performance sound, and the specifying unit may specify the plurality of performance parts corresponding to the set mode by referring to setting information in which the performance parts for generating accompaniment sounds in each mode are registered.
The setting information may include information on a generation rule for the accompaniment sounds generated based on the performance sound in each mode, and the accompaniment sound generation unit may generate the accompaniment sounds from the performance sound based on the generation rule corresponding to the set mode by referring to the setting information.
Information associating features of the performance sound with features of the accompaniment sounds may be registered in the setting information as the generation rule.
An electronic musical instrument according to another aspect of the present invention includes the accompaniment sound generation device according to any one of the above aspects and a style accompaniment sound generation unit that generates style accompaniment sounds for predetermined performance parts based on predetermined accompaniment style information. While the accompaniment sound generation unit is generating accompaniment sounds, the style accompaniment sound generation unit stops generating style accompaniment sounds for the same performance parts as those accompaniment sounds.
When a mode for generating the accompaniment sounds is on, the style accompaniment sound generation unit may stop generating style accompaniment sounds for a 1st performance part while continuing to generate them for a 2nd performance part.
An accompaniment sound generation method according to another aspect of the present invention specifies, based on an input performance sound, a plurality of performance parts in which accompaniment sounds are to be generated, generates, for each performance sound, the accompaniment sounds belonging to the specified performance parts, and outputs the accompaniment sounds generated for the performance parts in time with the sound emission timing.
An accompaniment sound generation program according to another aspect of the present invention causes a computer to execute: specifying, based on an input performance sound, a plurality of performance parts in which accompaniment sounds are to be generated; generating, for each performance sound, the accompaniment sounds belonging to the specified performance parts; and outputting the accompaniment sounds generated for the performance parts in time with the sound emission timing.
Advantageous Effects of Invention
According to the present invention, richly varied automatic accompaniment sounds can be generated.
Drawings
Fig. 1 is a functional block diagram of an electronic musical instrument according to the embodiment.
Fig. 2 is a diagram showing a data structure of accompaniment style data.
Fig. 3 is a functional block diagram of the accompaniment sound generation apparatus according to the embodiment.
Fig. 4 is a diagram showing the setting information of the accent mode.
Fig. 5 is a diagram showing the setting information of the unison mode.
Fig. 6 is a flowchart illustrating an accompaniment sound generation method according to an embodiment.
Fig. 7 is a flowchart illustrating an accompaniment sound generation method according to an embodiment.
Fig. 8 is a flowchart illustrating an accompaniment sound generation method according to an embodiment.
Fig. 9 is a timing chart of automatic accompaniment generation.
Description of the reference numerals
1 … electronic musical instrument, 11 … performance sound input unit, 12 … mode determination unit, 13 … specifying unit, 14 … real-time accompaniment sound generation unit, 15 … accompaniment style data acquisition unit, 16 … style accompaniment sound generation unit, 17 … accompaniment sound output unit, 101 … performance operation unit, 102 … setting operation unit, 106 … CPU, 107 … RAM, 108 … ROM, 109 … storage device, P1 … accompaniment sound generation program, SD … setting data, ASD … accompaniment style data, PD … accompaniment pattern data, RD … real-time accompaniment data
Detailed Description
Hereinafter, an accompaniment sound generation device, an electronic musical instrument, an accompaniment sound generation method, and an accompaniment sound generation program according to embodiments of the present invention will be described in detail with reference to the drawings.
(1) Structure of electronic musical instrument
Fig. 1 is a block diagram showing the configuration of an electronic musical instrument 1 that includes the accompaniment sound generation device 10 according to an embodiment of the present invention. The player can perform music by operating the electronic musical instrument 1. In addition, through the operation of the accompaniment sound generation device 10, the electronic musical instrument 1 can add automatic accompaniment to the player's performance sounds.
The electronic musical instrument 1 includes a performance operation unit 101, a setting operation unit 102, and a display unit 103. The performance operation unit 101 includes keyboard-style pitch designation operating elements and is connected to the bus 120. The performance operation unit 101 receives the player's performance operations and outputs performance data representing the performance sounds. The performance data consists of MIDI (Musical Instrument Digital Interface) data or audio data. The setting operation unit 102 includes switches for on/off operations, rotary encoders for rotary operations, linear encoders for slide operations, and the like, and is connected to the bus 120. The setting operation unit 102 is used to adjust the volume of performance sounds and automatic accompaniment sounds, to turn the power on and off, and to make various settings. The display unit 103 includes, for example, a liquid crystal display and is connected to the bus 120. Various information related to the performance, the settings, and the like is displayed on the display unit 103. At least parts of the performance operation unit 101, the setting operation unit 102, and the display unit 103 may be implemented as a touch panel display.
The electronic musical instrument 1 further includes a CPU (central processing unit) 106, a RAM (random access memory) 107, a ROM (read only memory) 108, and a storage device 109, all connected to the bus 120. The CPU 106, RAM 107, ROM 108, and storage device 109 constitute the accompaniment sound generation device 10.
The RAM 107 is, for example, volatile memory; it is used as a work area when the CPU 106 executes programs and temporarily stores various data. The ROM 108 is, for example, nonvolatile memory, and stores computer programs such as the accompaniment sound generation program P1 and various data such as the setting data SD and the accompaniment style data ASD. Flash memory such as an EEPROM is used as the ROM 108, for example. The CPU 106 performs the automatic accompaniment processing described later by executing the accompaniment sound generation program P1 stored in the ROM 108 while using the RAM 107 as a work area.
The storage device 109 includes a storage medium such as a hard disk, an optical disk, a magnetic disk, or a memory card. The accompaniment sound generation program P1, the setting data SD, or the accompaniment style data ASD may be stored in the storage device 109.
The accompaniment sound generation program P1 according to the present embodiment may be provided stored on a computer-readable recording medium and installed in the ROM 108 or the storage device 109. When the communication I/F of the electronic musical instrument 1 is connected to a communication network, the accompaniment sound generation program P1 may be delivered from a server connected to that network and installed in the ROM 108 or the storage device 109. Similarly, the setting data SD and the accompaniment style data ASD may be acquired from a recording medium or from a server connected to a communication network.
The electronic musical instrument 1 further has a sound source 104 and a sound system 105. The sound source 104 is connected to the bus 120, and the sound system 105 is connected to the sound source 104. The sound source 104 generates musical tone signals based on the performance data input from the performance operation unit 101 or on data about the automatic accompaniment sounds generated by the accompaniment sound generation device 10.
The sound system 105 includes a digital-to-analog (D/A) converter circuit, an amplifier, and a speaker. The sound system 105 converts the musical tone signal supplied from the sound source 104 into an analog sound signal and produces sound based on it. The musical tone signal is thus reproduced as sound.
(2) Automatic accompaniment sound
Next, the automatic accompaniment sounds generated by the accompaniment sound generation device 10 according to the present embodiment will be described. The accompaniment sound generation device 10 can generate two types of automatic accompaniment sounds: style accompaniment sounds and real-time accompaniment sounds. Style accompaniment sounds are generated by repeatedly playing back prestored accompaniment style data. When the player specifies a genre or the like, accompaniment pattern data corresponding to that genre is played back, and the player can perform along with the playback of the style accompaniment sounds.
Real-time accompaniment sounds are accompaniment sounds generated in real time in response to the performance sounds produced by the player's performance operations. A real-time accompaniment sound is generated, according to the contents of the setting data SD, for each single note of the performance sound. As the player performs, real-time accompaniment sounds are added based on the player's performance sounds.
(3) Accompaniment style data
Next, the accompaniment style data ASD will be explained. The accompaniment style data ASD is data in which the contents of style accompaniment sounds are organized by genre. The accompaniment style data ASD may also be used to determine the timbre of the real-time accompaniment sounds.
Fig. 2 is a diagram showing the data structure of the accompaniment style data ASD. As shown in fig. 2, one or more accompaniment style data ASD are prepared for each genre, such as jazz, rock, and classical (not shown). These genres may be arranged hierarchically; for example, hard rock, progressive rock, and the like may be set as subgenres of rock. Each accompaniment style data ASD includes a plurality of accompaniment section data.
The accompaniment section data are classified into "intro" sections, "main" sections, "fill-in" sections, and "ending" sections. "Intro", "main", "fill-in", and "ending" denote the section type and are represented in fig. 2 by the letters "I", "M", "F", and "E", respectively. Each accompaniment section data is further classified into a plurality of variations.
The variations of the "intro", "main", and "ending" sections indicate the atmosphere or liveliness of the automatic accompaniment sounds; in the example of fig. 2, the letters "A" (normal, quiet), "B" (slightly flashy), "C" (flashy), and "D" (very flashy) are used according to the degree of liveliness.
A "fill-in" section is a bridge to other sections, so the variations of the "fill-in" sections are represented in fig. 2 by combinations of two letters corresponding to the change in atmosphere or liveliness between the preceding and following sections. For example, the variation "AC" corresponds to a change from "quiet" to "flashy".
In fig. 2, each accompaniment section data is labeled by the combination of the letter denoting its section type and the letters denoting its variation. For example, the section type of the accompaniment section data MA is "main" and its variation is "A", while the section type of the accompaniment section data FAB is "fill-in" and its variation is "AB".
Each accompaniment section data includes accompaniment pattern data PD for each of a plurality of performance parts (tracks), such as a bass part, a chord part, a phrase part, and a pad part. Each accompaniment section data also includes reference chord information and pitch conversion rules (pitch conversion table information, scale, and retrigger rules for chord changes). The accompaniment pattern data PD is MIDI data or audio data and can be converted to an arbitrary pitch based on the reference chord information and the pitch conversion rules. The number of performance parts in which style accompaniment sounds are generated, the note sequences of the accompaniment pattern data PD, and so on differ depending on the variation.
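As a rough illustration of this layout, the sketch below models the hierarchy genre, style, section, and per-part pattern in code. This is a minimal sketch: all class and field names are hypothetical, since the patent does not define an actual file format.

```python
from dataclasses import dataclass, field

@dataclass
class TrackPattern:
    """Accompaniment pattern data (PD) for one performance part (track)."""
    part: str       # e.g. "main drum", "bass", "chord 1", "phrase 1", "pad"
    events: list    # MIDI events (or an audio reference) for one pattern cycle

@dataclass
class SectionData:
    """One accompaniment section, e.g. MA = section type "main", variation "A"."""
    section_type: str     # "intro" | "main" | "fill-in" | "ending"
    variation: str        # "A".."D", or two letters such as "AC" for fill-ins
    reference_chord: str  # reference chord used by the pitch conversion rules
    pitch_rules: dict     # conversion table, scale, retrigger rule for chord changes
    tracks: list = field(default_factory=list)  # list of TrackPattern

@dataclass
class AccompanimentStyle:
    """One accompaniment style data set (ASD), filed under a genre."""
    genre: str
    name: str
    sections: list = field(default_factory=list)  # list of SectionData

# A rock style whose "main" section has a quiet variation and whose
# fill-in bridges from quiet ("A") to flashy ("C"):
style = AccompanimentStyle(
    genre="rock",
    name="ExampleRock",
    sections=[
        SectionData("main", "A", "C", {}, [TrackPattern("bass", [])]),
        SectionData("fill-in", "AC", "C", {}, []),
    ],
)
```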
For example, the performer can select the desired style accompaniment sound, that is, the accompaniment style data ASD, by operating the setting operation unit 102 of fig. 1. A list ranging from the genre and name of the style accompaniment sound down to the accompaniment section data (including variations) may be displayed on the display unit 103, from which the performer makes a selection by operating the setting operation unit 102. The player then sets the song structure by operating the setting operation unit 102. The song structure is the arrangement of sections that make up the song: each interval from the start to the end of the song is assigned to a section. This determines the order of the accompaniment pattern data PD that constitute the style accompaniment sounds. Alternatively, the performer may select a desired song from a plurality of preregistered songs, whereby the accompaniment style data ASD and the song structure are selected automatically. Based on the selections made by the player and the accompaniment style data ASD, the style accompaniment sound is output from the sound system 105 of fig. 1.
(4) Structure of function of accompaniment sound generator
Fig. 3 is a block diagram showing the functional configuration of the accompaniment sound generation device 10 according to the embodiment of the present invention. The accompaniment sound generation device 10 generates style accompaniment sounds and real-time accompaniment sounds. The CPU 106 of fig. 1 realizes the functions of the units of the accompaniment sound generation device 10 of fig. 3 by executing the accompaniment sound generation program P1 stored in the ROM 108 or the storage device 109. As shown in fig. 3, the accompaniment sound generation device 10 includes a performance sound input unit 11, a mode determination unit 12, a specifying unit 13, a real-time accompaniment sound generation unit 14, an accompaniment style data acquisition unit 15, a style accompaniment sound generation unit 16, and an accompaniment sound output unit 17.
The performance sound input unit 11 receives the performance data (performance sounds) output from the performance operation unit 101 and passes it to the specifying unit 13 and the real-time accompaniment sound generation unit 14. As described above, MIDI data or audio data is used as the performance data.
The mode determination unit 12 receives the player's mode operations from the setting operation unit 102. The accompaniment sound generation device 10 of the present embodiment has a plurality of modes for real-time accompaniment, which the player sets by operating the setting operation unit 102. In the present embodiment, two modes are prepared as real-time accompaniment modes: an accent mode and a unison mode.
The specifying unit 13 specifies the performance parts in which real-time accompaniment sounds are generated. Like the style accompaniment sounds, the real-time accompaniment sounds are output, in time with the performance, across a plurality of performance parts such as a bass part, a chord part, a phrase part, and a pad part. The specifying unit 13 specifies the performance parts for generating real-time accompaniment sounds based on the performance data input from the performance sound input unit 11. That is, for each single note of the performance sound, the specifying unit 13 specifies a plurality of performance parts in which real-time accompaniment sounds are generated.
The real-time accompaniment sound generation unit 14 generates real-time accompaniment data RD for the performance parts specified by the specifying unit 13. The real-time accompaniment sound generation unit 14 determines the timbre, volume, and the like of the real-time accompaniment sounds based on the performance data (performance sounds) input from the performance sound input unit 11, and outputs the generated real-time accompaniment data RD to the accompaniment sound output unit 17. The accompaniment sound output unit 17 outputs the real-time accompaniment data RD to the sound source 104 shown in fig. 1, and the sound source 104 plays the real-time accompaniment sounds through the sound system 105.
The accompaniment style data acquisition unit 15 acquires the accompaniment style data ASD. When the performer selects a style accompaniment sound as described above, the accompaniment style data acquisition unit 15 accesses the ROM 108 and acquires the selected accompaniment style data ASD.
The style accompaniment sound generation unit 16 receives the accompaniment style data ASD acquired by the accompaniment style data acquisition unit 15 and extracts from it the accompaniment pattern data PD used for the style accompaniment. The style accompaniment sound generation unit 16 acquires the accompaniment pattern data PD contained in the accompaniment style data ASD based on the accompaniment section data, including the variation, selected by the player. The style accompaniment sound generation unit 16 applies any pitch conversion the accompaniment pattern data PD requires and outputs the result to the accompaniment sound output unit 17 in time with the rhythm of the performance. The accompaniment sound output unit 17 outputs the accompaniment pattern data PD to the sound source 104 shown in fig. 1, and the sound source 104 plays the style accompaniment sounds through the sound system 105.
The real-time accompaniment sound generation unit 14 also sends the style accompaniment sound generation unit 16 instructions to stop generating style accompaniment data. Generation of the accompaniment pattern data PD is stopped while the real-time accompaniment data RD is being generated; that is, when real-time accompaniment sounds are output, the style accompaniment sounds are muted for a set period. While the accent mode or the unison mode is on, generation of the accompaniment pattern data PD is stopped for some of the performance parts in which real-time accompaniment data RD is generated; that is, while either mode is on, muting is applied to some performance parts on a per-part basis.
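The division of labor between the two generators can be pictured as follows. This is a minimal sketch; the class and method names and the per-part mute flags are assumptions, since the patent specifies only the behavior (mute the matching part while a real-time sound is active, then release at the registered timing).

```python
class StyleAccompanimentGenerator:
    """Plays back accompaniment pattern data PD; each part can be muted separately."""
    def __init__(self, parts):
        self.muted = {part: False for part in parts}

    def stop_part(self, part):
        self.muted[part] = True     # requested by the real-time generator

    def resume_part(self, part):
        self.muted[part] = False    # at the registered mute-release timing

def on_realtime_sound(style_gen, target_parts):
    """Mute the style accompaniment for parts now carrying real-time sounds."""
    for part in target_parts:
        style_gen.stop_part(part)
    # If the mute object is "1 note" rather than a whole part, resume_part()
    # is scheduled after a predetermined time (e.g. one note or one beat);
    # a whole registered part instead stays muted for as long as the mode is on.

# Example: while real-time sounds play in the bass and chord 1 parts,
# those parts of the style accompaniment fall silent.
gen = StyleAccompanimentGenerator(["main drum", "bass", "chord 1", "phrase 1"])
on_realtime_sound(gen, ["bass", "chord 1"])
print(gen.muted)
```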
(5) Mode of real-time accompaniment functions
As described above, the accompaniment sound generation device 10 according to the present embodiment has the accent mode and the unison mode as modes for generating real-time accompaniment sounds. In the accent mode, for example, when the player strikes a key strongly or plays forte, an automatic accompaniment sound such as a strongly struck cymbal is generated. In the unison mode, for example, strings sound along with the piano melody at the same pitch, or at the same note name in an octave relationship.
Figs. 4 and 5 are diagrams showing the contents of the setting data SD. Fig. 4 shows the data registered in the setting data SD for the accent mode, and fig. 5 shows the data registered for the unison mode. For either mode, the setting data SD registers data on the "performance sound (input sound)", the "intensity condition", the "conversion destination", the "mute object", and the "mute release timing".
"Performance sound (input sound)" is data indicating the category of the performance sound; the real-time accompaniment sound generation unit 14 classifies the performance sound according to a predetermined algorithm. "Top note" denotes the highest pitch among the notes included in the performance sound; for example, when the player plays a chord, the highest pitch in the chord is determined to be the top note. "All notes" refers to the performance sound as a whole. "Chord" denotes the notes that play a harmonic (accompaniment) role among the notes sounded simultaneously at a given timing. "Bottom note" denotes the lowest pitch among the notes included in the performance sound; for example, when the player plays a chord, the lowest pitch in the chord is determined to be the bottom note.
The "intensity condition" is the intensity condition placed on the "performance sound". In figs. 4 and 5 the condition is expressed in perceptually intuitive terms such as "strong" or "medium or higher but lower than strong", but in practice it is expressed as concrete numerical values such as key velocity or volume.
The "conversion destination" includes the fields "part" and "pitch (instrument)". "Part" registers the performance part in which the real-time accompaniment sound is generated, and "pitch (instrument)" indicates the pitch or the instrument of the real-time accompaniment sound. The performance parts for generating real-time accompaniment sounds, such as "main drum", "chord 1", and "phrase 1", are designated from among the parts included in the accompaniment style data ASD. When the performance part is "main drum", a drum kit is usually set as the pitch (instrument) for generating the real-time accompaniment sound. A drum kit is defined by assigning MIDI note numbers to the rhythm instruments it uses; which rhythm instruments are used and how they are assigned differ from kit to kit. Where "conversion destination" - "pitch (instrument)" in fig. 4 reads "cymbal", the performance sound is converted to the cymbal-assigned pitch of the drum kit actually specified as the timbre of that part. The "high", "medium", and "low" entries of "conversion destination" - "pitch (instrument)" in fig. 4 are not literal conversion values but denote a somewhat high-pitched, medium-pitched, or somewhat low-pitched rhythm instrument. In the case of fig. 4, the timbres follow those set for each part of the accompaniment style data ASD that was selected (set) before real-time accompaniment sound generation started.
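For a concrete picture of such an assignment, the General MIDI percussion map (channel 10) fixes note numbers for common rhythm instruments. A proprietary drum kit may assign them differently, so the mapping below is only one example, and the high/medium/low selection logic is an assumption for illustration.

```python
# General MIDI percussion key map (channel 10): note number -> rhythm instrument.
GM_DRUM_KIT = {
    36: "bass drum 1",
    38: "acoustic snare",
    45: "low tom",
    50: "high tom",
    49: "crash cymbal 1",
    51: "ride cymbal 1",
}

def pick_instrument(register, kit=GM_DRUM_KIT):
    """Resolve a 'high'/'medium'/'low' destination to a note of the current kit."""
    choice = {"high": 50, "medium": 38, "low": 36}[register]  # illustrative choice
    return choice, kit[choice]
```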
The "mute object" indicates which style accompaniment sounds are stopped while a real-time accompaniment sound is being generated. When the accent mode or the unison mode is on, generation of the style accompaniment sounds is stopped for some of the performance parts. When a performance part is registered as the "mute object", the style accompaniment of that part is not played while either mode is on; in addition, while a real-time accompaniment sound is being generated, the style accompaniment of the same performance part as the one generating the real-time sound is not played. When "1 note" is registered as the "mute object", playback of the style accompaniment sounds is stopped note by note as each real-time accompaniment sound is generated. Alternatively, playback may be stopped, note by note, only for the style accompaniment notes at the same pitch as the real-time accompaniment sound.
The "mute release timing" indicates when the stop of style accompaniment playback indicated by the "mute object" is released. Where the "mute release timing" reads "after a predetermined time has elapsed", what is actually registered is the interval, such as one note or one beat, from the stop of style accompaniment playback to its resumption. Where it reads "detection of note-off of the input sound", playback of the style accompaniment resumes at the moment the input of the performance sound is released. However, when a performance part is registered as the "mute object", the muting of the style accompaniment is not released while the accent mode or the unison mode continues; when the mode is turned off, the muting is released at the timing when note-off of the input sound is detected.
For example, the data in row 1 of the accent mode represents the following settings. When the top note of the performance data (performance sound) is "strong", a cymbal in the main drum part sounds as the real-time accompaniment sound. While the cymbal of the main drum sounds, the main drum of the style accompaniment sounds is not played; after a predetermined time has elapsed, playback of the style accompaniment's main drum resumes.
For example, the data in row 3 of the accent mode represents the following settings. For all notes of the performance data (performance sound), with no condition on intensity, real-time accompaniment sounds are produced in the chord 1 part at the same pitches as the performance sound (input sound). While the accent mode is on, playback of the style accompaniment is stopped for the chord 1 part; when the mode is turned off, playback of the style accompaniment resumes upon detection of note-off of the input sound.
For example, the data in row 8 of the accent mode represents the following settings. When the intensity of the chord notes included in the performance data (performance sound) is medium or higher but lower than strong, a tom in the main drum part sounds as the real-time accompaniment sound. While it sounds in the main drum part, the bass drum sound of the style accompaniment's main drum is not played; after a predetermined time has elapsed, generation of the bass drum sound of the style accompaniment's main drum resumes.
For example, the data in row 3 of the unison mode represents the following settings. For the top note of the performance data (performance sound), with no condition on intensity, a real-time accompaniment sound is produced in the chord 1 part at the same pitch as the top note. While the unison mode is on, playback of the style accompaniment is stopped for the chord 1 part; when the mode is turned off, playback of the style accompaniment resumes upon detection of note-off of the input sound.
Further, for example, the data in row 9 of the unison mode represents the following settings. With no condition on the intensity of the chord notes included in the performance data (performance sound), a snare drum in the main drum part sounds as the real-time accompaniment sound. While the snare drum sounds in the main drum part, the snare drum sound of the style accompaniment's main drum is not played; after a predetermined time has elapsed, playback of the snare drum sound of the style accompaniment's main drum resumes.
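Rows like those just described can be pictured as records consulted once per input note. The field names and the "strong" threshold below are illustrative assumptions; in the patent the conditions are registered as concrete key-velocity or volume values in the setting data SD.

```python
STRONG_THRESHOLD = 100  # assumed MIDI velocity standing in for "strong"

# Hypothetical encoding of two accent-mode rows of the setting data SD (cf. fig. 4).
ACCENT_MODE = [
    {   # row 1: strong top note -> cymbal in the main drum part
        "input": "top_note", "min_velocity": STRONG_THRESHOLD,
        "dest_part": "main drum", "dest_pitch": "cymbal of current kit",
        "mute": "main drum", "release": "after predetermined time",
    },
    {   # row 3: any note, any intensity -> same pitch in the chord 1 part
        "input": "all_notes", "min_velocity": 0,
        "dest_part": "chord 1", "dest_pitch": "same as input",
        "mute": "chord 1 (while mode on)", "release": "note-off after mode off",
    },
]

def find_rules(table, input_kind, velocity):
    """Return every matching row; one note can trigger several parts at once."""
    return [row for row in table
            if row["input"] == input_kind and velocity >= row["min_velocity"]]

# A strong top note (velocity 110) matches row 1; classified as "all_notes",
# the same note would match row 3 instead.
print(find_rules(ACCENT_MODE, "top_note", 110))
```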
As described above, the setting data SD registers the setting information for sounding real-time accompaniment sounds. Specifically, in the setting data SD shown in figs. 4 and 5, the "conversion destination" - "part" information is registered as the information specifying the performance parts in which real-time accompaniment sounds are generated, and the "conversion destination" - "pitch (instrument)" information is registered as the information on the generation rule for the real-time accompaniment sounds. The real-time accompaniment sound generation unit 14 shown in fig. 3 refers to the setting data SD to generate real-time accompaniment data RD for each single note of the performance sounds included in the performance data.
In the examples shown in figs. 4 and 5, the settings generate real-time accompaniment sounds in a plurality of performance parts when performance data (a performance sound) is input. In this way, the accompaniment sound generation device 10 according to the present embodiment can generate real-time accompaniment sounds for a plurality of performance parts in response to the input of a single performance sound.
(6) One example of accompaniment sound generation method
Next, the accompaniment sound generation method according to the present embodiment will be described. The accompaniment sound generation device 10 carries out the method described below when the CPU 106 executes the accompaniment sound generation program P1 shown in fig. 1. Figs. 6, 7, and 8 are flowcharts illustrating the accompaniment sound generation method according to the present embodiment.
As shown in fig. 6, first, in step S11, the style accompaniment sound generation unit 16 determines whether the setting operation unit 102 has detected an instruction to start automatic accompaniment. When the start instruction is detected, the accompaniment style data acquisition unit 15 reads, in step S12, the accompaniment style data ASD from the ROM 108 based on the selection or genre information input from the setting operation unit 102. Next, in step S13, the style accompaniment sound generation unit 16 acquires the accompaniment pattern data PD and supplies it to the accompaniment sound output unit 17, which outputs it to the sound source 104. Playback of the style accompaniment sounds thus starts through the sound system 105. As described above, the style accompaniment sound generation unit 16 acquires the accompaniment pattern data PD contained in the accompaniment style data ASD based on the accompaniment section data, including the variation, selected by the performer.
Next, in step S14, the style accompaniment sound generation unit 16 determines whether the setting operation unit 102 has detected an instruction to stop the automatic accompaniment. When the stop instruction is detected, the style accompaniment sound generation unit 16 stops generating the accompaniment pattern data PD and stops the playback of the style accompaniment sounds in step S15.
When no instruction to stop the automatic accompaniment is detected in step S14, the mode determination unit 12 determines in step S16 whether turn-on of the accent mode or the unison mode, that is, an instruction to start the real-time accompaniment function, has been detected. When the mode determination unit 12 detects that the accent mode or the unison mode has been turned on, the real-time accompaniment sound generation unit 14 reads the setting data SD in step S21 of fig. 7.
Next, when there are performance parts for which the style accompaniment is to be stopped, the real-time accompaniment sound generation unit 14 instructs the style accompaniment sound generation unit 16 to stop the style accompaniment sounds. Referring to the setting data SD, when a stop (mute) is registered in the "mute object" on a per-part basis, the real-time accompaniment sound generation unit 14 instructs the stop of the style accompaniment sounds for those performance parts. In response, the style accompaniment sound generation unit 16 stops outputting the accompaniment pattern data PD for the performance parts for which the stop was instructed (step S22).
Next, in step S23, the mode determination unit 12 determines whether an instruction to stop the currently set mode has been detected. When the mode determination unit 12 detects the stop instruction, the real-time accompaniment sound generation unit 14 stops generating the real-time accompaniment data RD (step S24) and instructs the style accompaniment sound generation unit 16 to resume the style accompaniment sounds for the performance parts that had been stopped (muted) (step S25). The process then returns to step S14 of fig. 6.
When the mode determination unit 12 does not detect a mode stop instruction in step S23, it determines in step S26 whether a mode change instruction has been detected, for example a change from the accent mode to the unison mode. When a mode change instruction is detected, the process returns to step S21 and the setting data SD of the changed mode is read again. When no mode change instruction is detected, the performance sound input unit 11 determines in step S31 of fig. 8 whether a note-on has been acquired. A note-on is an input event of a performance sound, such as a key press on the keyboard; that is, the performance sound input unit 11 determines whether input of performance data (a performance sound) by the player has been acquired.
When the performance sound input unit 11 has acquired a note-on, the specifying unit 13 specifies, in step S32, the performance parts in which real-time accompaniment sounds are generated based on the acquired performance sound. Referring to the setting data SD, the specifying unit 13 specifies those performance parts based on the "conversion destination" - "part" information corresponding to the currently set mode. Next, in step S33, the real-time accompaniment sound generation unit 14 generates real-time accompaniment data RD for the specified performance parts. Referring to the setting data SD, the real-time accompaniment sound generation unit 14 determines the pitch, timbre, volume, and the like of the real-time accompaniment sounds to be generated based on the "conversion destination" - "pitch (instrument)" information corresponding to the currently set mode.
The real-time accompaniment data RD generated by the real-time accompaniment sound generation unit 14 is supplied to the accompaniment sound output unit 17, which outputs it to the sound source 104 so that its sound emission timing matches the performance sound acquired at the note-on. When real-time accompaniment data RD for a plurality of performance parts is to be played, the data for all those parts is output to the sound source 104 with the sound emission timing matched to the performance sound. The real-time accompaniment sounds of the plurality of performance parts are thus output from the sound system 105 in time with the performance sound.
In step S34, the performance sound input unit 11 determines whether a note-off has been acquired. A note-off indicates that the input of performance data (a performance sound) has transitioned from the on state to the off state. When the performance sound input unit 11 has acquired a note-off, the real-time accompaniment sound generation unit 14 stops the real-time accompaniment sounds in accordance with the note-off in step S35.
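A compact sketch of the note-on/note-off handling in steps S31 through S35 follows, reusing the find_rules lookup and ACCENT_MODE table from the earlier setting-data sketch. The class and function names are assumptions, not the structure of the actual program P1.

```python
class RealTimeGenerator:
    """Builds real-time accompaniment data RD from the matched setting rows."""
    def generate(self, note, velocity, rules):   # step S33
        return [(r["dest_part"], r["dest_pitch"], velocity) for r in rules]

    def stop(self, note):                        # step S35
        pass  # silence any real-time accompaniment still tied to this note

def handle_note_on(note, velocity, input_kind, rt_gen, emit):
    """Steps S32-S33: specify the parts, then generate and emit the RD."""
    rules = find_rules(ACCENT_MODE, input_kind, velocity)  # per current mode
    rd = rt_gen.generate(note, velocity, rules)
    emit(rd)  # accompaniment sound output unit: in time with the performance sound

def handle_note_off(note, rt_gen):
    """Steps S34-S35."""
    rt_gen.stop(note)

# Example: a strong top note (velocity 110) triggers the cymbal rule of row 1.
handle_note_on(60, 110, "top_note", RealTimeGenerator(), print)
```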
(7) Timing of automatic accompaniment generation
Fig. 9 is a timing chart of automatic accompaniment sounds including style accompaniment sounds and real-time accompaniment sounds; time progresses from left to right. First, the start of automatic accompaniment is instructed at time T1, and generation of the style accompaniment sounds begins in the main drum part, the bass part, the chord 1 part, and the phrase 1 part. After time T1, the style accompaniment sounds are generated along with the performance sounds played by the player.
Next, at time T2, turn-on of the unison mode is detected. The style accompaniment is thereby stopped (muted) for the bass part, the chord 1 part, and the phrase 1 part, while generation of the style accompaniment sounds continues for the main drum part from time T2 onward.
Next, a performance sound is input at time T3. Based on that performance sound, real-time accompaniment sounds are generated in the main drum part, the bass part, and the chord 1 part, and at time T3 the style accompaniment sound of the main drum part is stopped (muted). A performance sound is input again between times T4 and T5; based on it, real-time accompaniment sounds are generated in the main drum part, the bass part, and the phrase 1 part, and the style accompaniment sound of the main drum part is stopped (muted) between times T4 and T5. Next, a performance sound is input at time T6. Based on it, real-time accompaniment sounds are generated in all the performance parts, and at time T6 the style accompaniment sound of the main drum part is stopped (muted).
(8) Effects of the embodiments
The accompaniment sound generation device according to the present embodiment specifies, based on an input performance sound, a plurality of performance parts in which real-time accompaniment sounds are generated, and generates, for each performance sound, the real-time accompaniment sounds belonging to the specified performance parts. The real-time accompaniment sounds generated for the plurality of performance parts are output in time with the sound emission timing. The player can thus enjoy varied automatic accompaniment sounds, and because the real-time accompaniment is generated for each performance sound, the accompaniment sounds do not leave a monotonous impression.
In addition, according to the present embodiment, a plurality of modes are prepared for generating real-time accompaniment sounds from the performance sounds, and the performance parts for generating real-time accompaniment sounds in each mode are registered in the setting data SD. The real-time accompaniment sounds can thus be adapted to the player's preferred mode.
Further, according to the present embodiment, the setting data SD includes information on the generation rules for the real-time accompaniment sounds generated based on the performance sounds in each mode. The real-time accompaniment sound generation unit 14 refers to the setting data SD and generates real-time accompaniment sounds from the performance sounds based on the generation rule corresponding to the set mode, so here too the real-time accompaniment sounds match the player's preferred mode.
For example, when people play together and a song's score contains an accented "hit" passage, each player plays to match it. A conventional automatic accompaniment device, which relies on style accompaniment, cannot respond to such playing: the unified expression that human players can achieve is impossible unless accompaniment sound data matching that expression is prepared in advance, and preparing such data for every case would require an enormous amount of data. It is also generally difficult and time-consuming for users to create such accompaniment sound data themselves. Because the accompaniment sound generation device according to the present embodiment specifies the performance parts for each performance sound and generates the real-time accompaniment based on that performance sound, it can play, in real time, automatic accompaniment that instantly matches the performance, including accented notes and ensemble hits.
In addition, according to the present embodiment, the style accompaniment sound generation unit 16 stops generating style accompaniment sounds for a given performance part while real-time accompaniment sounds are being generated in that part. The real-time accompaniment sounds are therefore easy to hear, and the player can enjoy them.
In addition, according to the present embodiment, when any mode for generating real-time accompaniment sounds is on, the style accompaniment sound generation unit 16 stops generating style accompaniment sounds for performance parts such as the bass part, the chord part, the pad part, and the phrase part, which makes the real-time accompaniment sounds easy to hear. Meanwhile, generation of the style accompaniment sounds continues for a performance part such as the main drum part, so the player can enjoy the real-time accompaniment within the flow of the style accompaniment sounds.
(9) Correspondence between each constituent element of claims and each element of embodiments
In the following, examples of the correspondence between the elements of the claims and the elements of the embodiments are described, but the present invention is not limited to these examples. In the above embodiment, the real-time accompaniment sound is an example of the accompaniment sound in the claims, and the setting data SD is an example of the setting information. The "performance sound (input sound)" and "intensity condition" of fig. 4 are examples of features of the performance sound, and the "conversion destination" - "pitch (instrument)" of fig. 4 is an example of features of the accompaniment sound. The bass part, the chord 1 part, and the phrase 1 part of fig. 9 are examples of the 1st performance part, and the main drum part is an example of the 2nd performance part. The 1st performance part may include a plurality of performance parts, and so may the 2nd performance part.
As the elements of the claims, various other elements having the structures or functions described in the claims can also be used.
(10) Other embodiments
In the above embodiment, the accent mode and the unison mode were described as examples of real-time accompaniment modes, but other modes may be prepared; for example, modes corresponding to genres, such as a hard rock mode or a jazz mode.
In the above embodiment, the timbre, volume, and the like of the real-time accompaniment sounds are determined by referring to the "conversion destination" - "pitch (instrument)" of the setting data SD. In another embodiment, they may be determined by referring to the accompaniment style data ASD, based on the genre and style set for the current style accompaniment sound.
In the above embodiment, while a real-time accompaniment mode is on, the performance parts other than the main drum part are muted and only the main drum part continues the style accompaniment. In another embodiment, other performance parts may continue the style accompaniment together with the main drum part; for example, the main drum part and the bass part may both continue.
In addition, when changing from the unison mode to another mode (turning the unison mode off or changing to the accent mode), the accompaniment parts other than the rhythm part may be kept from resuming until a chord change instruction is received.

Claims (8)

1. An accompaniment sound generation device comprising:
a specifying unit that specifies, based on an input performance sound, a plurality of performance parts in which accompaniment sounds are to be generated;
an accompaniment sound generation unit that generates, for each of the performance sounds, the accompaniment sounds belonging to the specified plurality of performance parts; and
an accompaniment sound output unit that outputs the accompaniment sounds generated for the plurality of performance parts in time with a sound emission timing.
2. The accompaniment sound generation device according to claim 1, wherein
a plurality of modes are prepared as modes for generating the accompaniment sounds based on the performance sound, and
the specifying unit specifies the plurality of performance parts corresponding to a set mode by referring to setting information in which the plurality of performance parts for generating the accompaniment sounds in each mode are registered.
3. The accompaniment sound generation device according to claim 2, wherein
the setting information includes information on a generation rule for the accompaniment sounds generated based on the performance sound in each mode, and
the accompaniment sound generation unit generates the accompaniment sounds from the performance sound based on the generation rule corresponding to the set mode by referring to the setting information.
4. The accompaniment sound generation device according to claim 3, wherein
in the setting information, information that associates a feature of the performance sound with a feature of the accompaniment sound is registered as the generation rule.
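By way of illustration only, the following sketch models the setting information of claims 2 to 4 as a hypothetical table that registers, per mode, the performance sound parts and a generation rule associating an intensity condition of the performance sound with a conversion-destination instrument and pitch of the accompaniment sound; the layout, mode names, and thresholds are all assumptions:

```python
# Hypothetical setting information: for each mode, the parts to use and a
# generation rule keyed on an intensity (velocity) threshold.
SETTING_INFO = {
    "accent": {
        "parts": ["bass", "chord1"],
        "rules": [  # (min_velocity, conversion-destination instrument, pitch offset)
            (0,   "soft piano", 0),
            (100, "brass stab", 12),
        ],
    },
    "ensemble": {
        "parts": ["bass", "chord1", "phrase1"],
        "rules": [
            (0, "strings", 0),
        ],
    },
}

def resolve_rule(mode: str, velocity: int):
    # Pick the generation rule whose intensity condition matches the input
    # performance sound (the last matching threshold wins).
    instrument, offset = None, 0
    for min_vel, inst, off in SETTING_INFO[mode]["rules"]:
        if velocity >= min_vel:
            instrument, offset = inst, off
    return instrument, offset

print(resolve_rule("accent", 110))  # ('brass stab', 12)
```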
5. An electronic musical instrument comprising the accompaniment sound generation device according to any one of claims 1 to 4, wherein
the electronic musical instrument has a style accompaniment sound generation unit that generates style accompaniment sounds for predetermined performance sound parts based on predetermined accompaniment style information, and
the style accompaniment sound generation unit stops the generation of the style accompaniment sound for any performance sound part identical to that of the accompaniment sound while the accompaniment sound generation unit is generating the accompaniment sound.
6. The electronic musical instrument according to claim 5, wherein
the style accompaniment sound generation unit stops the generation of the style accompaniment sound for the 1st performance sound part and continues the generation of the style accompaniment sound for the 2nd performance sound part when the mode for generating the accompaniment sound is turned on.
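By way of illustration only, the following sketch shows the part-level coordination of claims 5 and 6: while the accompaniment sound mode is on, the style accompaniment stops for the 1st performance sound parts and continues for the 2nd parts. The part names and function are hypothetical stand-ins:

```python
# Hypothetical set of performance sound parts driven by the style accompaniment.
STYLE_PARTS = {"main drum", "bass", "chord1", "phrase1"}

def active_style_parts(mode_on: bool, first_parts: set[str]) -> set[str]:
    # While the accompaniment mode is on, mute the style accompaniment for
    # the 1st performance sound parts; only the 2nd parts keep sounding.
    return STYLE_PARTS - first_parts if mode_on else STYLE_PARTS

# Mode off: the full style accompaniment plays.
print(active_style_parts(False, {"bass", "chord1", "phrase1"}))
# Mode on: only the main drum part continues, as in the embodiment.
print(active_style_parts(True, {"bass", "chord1", "phrase1"}))
```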
7. An accompaniment sound generation method, comprising:
specifying a plurality of performance sound parts for generating accompaniment sounds based on an input performance sound;
generating, for each of the performance sounds, the accompaniment sounds belonging to the specified plurality of performance sound parts; and
outputting the accompaniment sounds generated for the plurality of performance sound parts with their sound generation timings aligned.
8. An accompaniment sound generation program for causing a computer to execute:
specifying a plurality of performance sound parts for generating accompaniment sounds based on an input performance sound;
generating, for each of the performance sounds, the accompaniment sounds belonging to the specified plurality of performance sound parts; and
outputting the accompaniment sounds generated for the plurality of performance sound parts with their sound generation timings aligned.
CN202011577931.XA 2020-01-17 2020-12-28 Accompaniment sound generation device, electronic musical instrument, accompaniment sound generation method, and accompaniment sound generation program Active CN113140201B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-006370 2020-01-17
JP2020006370A JP7419830B2 (en) 2020-01-17 2020-01-17 Accompaniment sound generation device, electronic musical instrument, accompaniment sound generation method, and accompaniment sound generation program

Publications (2)

Publication Number Publication Date
CN113140201A (en) 2021-07-20
CN113140201B (en) 2024-04-19

Family

ID=76650515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011577931.XA Active CN113140201B (en) 2020-01-17 2020-12-28 Accompaniment sound generation device, electronic musical instrument, accompaniment sound generation method, and accompaniment sound generation program

Country Status (4)

Country Link
US (1) US11955104B2 (en)
JP (1) JP7419830B2 (en)
CN (1) CN113140201B (en)
DE (1) DE102021200208A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7419830B2 (en) * 2020-01-17 2024-01-23 ヤマハ株式会社 Accompaniment sound generation device, electronic musical instrument, accompaniment sound generation method, and accompaniment sound generation program

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007147711A (en) * 2005-11-24 2007-06-14 Yamaha Corp Electronic musical instrument and computer program applied to electronic musical instrument
JP2010117419A (en) * 2008-11-11 2010-05-27 Casio Computer Co Ltd Electronic musical instrument
JP2010160523A (en) * 2010-04-22 2010-07-22 Yamaha Corp Electronic musical instrument and computer program applied to electronic musical instrument
CN103165115A (en) * 2011-12-09 2013-06-19 雅马哈株式会社 Sound data processing device and method
CN104050954A (en) * 2013-03-14 2014-09-17 卡西欧计算机株式会社 Automatic accompaniment apparatus and a method of automatically playing accompaniment
JP2016161901A (en) * 2015-03-05 2016-09-05 ヤマハ株式会社 Music data search device and music data search program
CN109416905A (en) * 2016-06-23 2019-03-01 雅马哈株式会社 Performance assistant apparatus and method
CN110299128A (en) * 2018-03-22 2019-10-01 卡西欧计算机株式会社 Electronic musical instrument, method, storage medium

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2576700B2 (en) 1991-01-16 1997-01-29 ヤマハ株式会社 Automatic accompaniment device
JP2526439B2 (en) * 1991-07-09 1996-08-21 ヤマハ株式会社 Electronic musical instrument
JP2956505B2 (en) * 1993-12-06 1999-10-04 ヤマハ株式会社 Automatic accompaniment device
US5756917A (en) * 1994-04-18 1998-05-26 Yamaha Corporation Automatic accompaniment device capable of selecting a desired accompaniment pattern for plural accompaniment components
CA2214161C (en) * 1996-08-30 2001-05-29 Daiichi Kosho, Co., Ltd. Karaoke playback apparatus utilizing digital multi-channel broadcasting
JP3603587B2 (en) * 1998-03-10 2004-12-22 ヤマハ株式会社 Automatic accompaniment device and storage medium
JP4173227B2 (en) * 1998-09-24 2008-10-29 株式会社第一興商 Karaoke device that selectively reproduces and outputs multiple vocal parts
CN1248135C (en) * 1999-12-20 2006-03-29 汉索尔索弗特有限公司 Network based music playing/song accompanying service system and method
JP3700532B2 (en) * 2000-04-17 2005-09-28 ヤマハ株式会社 Performance information editing / playback device
JP3844286B2 (en) * 2001-10-30 2006-11-08 株式会社河合楽器製作所 Automatic accompaniment device for electronic musical instruments
JP3885791B2 (en) * 2003-09-29 2007-02-28 ヤマハ株式会社 Program for realizing automatic accompaniment apparatus and automatic accompaniment method
JP2006201654A (en) 2005-01-24 2006-08-03 Yamaha Corp Accompaniment following system
JP2006301019A (en) * 2005-04-15 2006-11-02 Yamaha Corp Pitch-notifying device and program
JP2008089849A (en) * 2006-09-29 2008-04-17 Yamaha Corp Remote music performance system
JP5605040B2 (en) * 2010-07-13 2014-10-15 ヤマハ株式会社 Electronic musical instruments
WO2012132856A1 (en) * 2011-03-25 2012-10-04 ヤマハ株式会社 Accompaniment data generation device
JP6194589B2 (en) * 2013-02-13 2017-09-13 ヤマハ株式会社 Music data reproducing apparatus and program for realizing music data reproducing method
CN103258529B (en) * 2013-04-16 2015-09-16 初绍军 A kind of electronic musical instrument, musical performance method
JP6252088B2 (en) * 2013-10-09 2017-12-27 ヤマハ株式会社 Program for performing waveform reproduction, waveform reproducing apparatus and method
JP2015075754A (en) * 2013-10-12 2015-04-20 ヤマハ株式会社 Sounding assignment program, device, and method
JP2016161900A (en) 2015-03-05 2016-09-05 ヤマハ株式会社 Music data search device and music data search program
JP6565528B2 (en) * 2015-09-18 2019-08-28 ヤマハ株式会社 Automatic arrangement device and program
JP6565530B2 (en) * 2015-09-18 2019-08-28 ヤマハ株式会社 Automatic accompaniment data generation device and program
JP6497404B2 (en) * 2017-03-23 2019-04-10 カシオ計算機株式会社 Electronic musical instrument, method for controlling the electronic musical instrument, and program for the electronic musical instrument
JP6733720B2 (en) * 2018-10-23 2020-08-05 ヤマハ株式会社 Performance device, performance program, and performance pattern data generation method
JP6939922B2 (en) * 2019-03-25 2021-09-22 カシオ計算機株式会社 Accompaniment control device, accompaniment control method, electronic musical instrument and program
JP6693596B2 (en) * 2019-07-26 2020-05-13 ヤマハ株式会社 Automatic accompaniment data generation method and device
JP6760450B2 (en) * 2019-07-26 2020-09-23 ヤマハ株式会社 Automatic arrangement method
JP7419830B2 (en) * 2020-01-17 2024-01-23 ヤマハ株式会社 Accompaniment sound generation device, electronic musical instrument, accompaniment sound generation method, and accompaniment sound generation program
JP7036141B2 (en) * 2020-03-23 2022-03-15 カシオ計算機株式会社 Electronic musical instruments, methods and programs


Also Published As

Publication number Publication date
JP7419830B2 (en) 2024-01-23
US20210225345A1 (en) 2021-07-22
JP2021113895A (en) 2021-08-05
US11955104B2 (en) 2024-04-09
DE102021200208A1 (en) 2021-07-22
CN113140201B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
US7750230B2 (en) Automatic rendition style determining apparatus and method
US8324493B2 (en) Electronic musical instrument and recording medium
US8314320B2 (en) Automatic accompanying apparatus and computer readable storing medium
CN113140201B (en) Accompaniment sound generation device, electronic musical instrument, accompaniment sound generation method, and accompaniment sound generation program
JP4962592B2 (en) Electronic musical instruments and computer programs applied to electronic musical instruments
JP5912269B2 (en) Electronic musical instruments
US8759660B2 (en) Electronic musical instrument
JP5909967B2 (en) Key judgment device, key judgment method and key judgment program
JP2587737B2 (en) Automatic accompaniment device
JP2007147711A (en) Electronic musical instrument and computer program applied to electronic musical instrument
JP4003625B2 (en) Performance control apparatus and performance control program
JPH1185174A (en) Karaoke sing-along machine which enables a user to play accompaniment music
US20230035440A1 (en) Electronic device, electronic musical instrument, and method therefor
JP3812509B2 (en) Performance data processing method and tone signal synthesis method
JP3674469B2 (en) Performance guide method and apparatus and recording medium
JP2009216769A (en) Sound processing apparatus and program
JPH10171475A (en) Karaoke (accompaniment to recorded music) device
JP3424989B2 (en) Automatic accompaniment device for electronic musical instruments
JP5104293B2 (en) Automatic performance device
JP4900233B2 (en) Automatic performance device
JP3434403B2 (en) Automatic accompaniment device for electronic musical instruments
JP5034471B2 (en) Music signal generator and karaoke device
JP4067007B2 (en) Arpeggio performance device and program
JP2000172253A (en) Electronic musical instrument
JP2010217230A (en) Electronic musical instrument and automatic performance program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant