WO2011030761A1 - Music game system, computer program of same, and method of generating sound effect data - Google Patents


Info

Publication number
WO2011030761A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
data
voice
sound effect
pitch
Prior art date
Application number
PCT/JP2010/065337
Other languages
French (fr)
Japanese (ja)
Inventor
Osamu Migita (右寺 修)
Yoshitaka Nishimura (西村 宜隆)
Original Assignee
Konami Digital Entertainment Co., Ltd. (株式会社コナミデジタルエンタテインメント)
Priority date
Filing date
Publication date
Application filed by Konami Digital Entertainment Co., Ltd.
Priority to CN201080039640.3A priority Critical patent/CN102481488B/en
Priority to US13/394,967 priority patent/US20120172099A1/en
Publication of WO2011030761A1 publication Critical patent/WO2011030761A1/en

Classifications

    • G10H 1/368: Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems, displaying animated or moving pictures synchronized with the music or audio part
    • G10H 1/40: Rhythm
    • G10H 2210/066: Musical analysis for pitch analysis as part of wider processing for musical purposes, e.g. transcription, musical performance evaluation; pitch recognition, e.g. in polyphonic sounds; estimation or use of missing fundamental
    • G10H 2220/106: Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, for graphical creation, edition or control of musical data or parameters using icons, e.g. selecting, moving or linking icons, on-screen symbols, screen regions or segments representing musical elements or parameters
    • G10H 2220/135: Musical aspects of games or videogames; musical instrument-shaped game input interfaces
    • G10H 2230/015: PDA [personal digital assistant] or palmtop computing devices used for musical purposes, e.g. portable music players, tablet computers, e-readers or smart phones in which mobile telephony functions need not be used
    • A63F 13/54: Controlling the output signals based on the game progress, involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F 13/424: Processing input control signals by mapping them into game commands, involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A63F 13/44: Processing input control signals involving timing of operations, e.g. performing an action within a time slot
    • A63F 13/814: Musical performances, e.g. by evaluating the player's ability to follow a notation
    • A63F 13/2145: Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F 13/215: Input arrangements comprising means for detecting acoustic signals, e.g. using a microphone
    • A63F 13/92: Video game devices specially adapted to be hand-held while playing
    • A63F 2300/1081: Input via voice recognition
    • A63F 2300/206: Game information storage, e.g. cartridges, CD ROMs, DVDs, smart cards
    • A63F 2300/6045: Methods for mapping control signals received from the input arrangement into game commands
    • A63F 2300/6081: Methods for sound processing generating an output signal, e.g. under timing constraints, for spatialization
    • A63F 2300/638: Methods for controlling the execution of the game in time according to the timing of operation or a time limit
    • A63F 2300/8047: Music games

Definitions

  • The present invention relates to a music game system or the like in which sound input by a player is reflected in the game content.
  • Related art includes JP 2002-136664 A and Japanese Patent Laid-Open No. 10-268876.
  • In such games, the game content is changed by capturing the player's voice: the pitch of the player's voice is detected, and the action of a character is changed based on a comparison with a reference pitch.
  • It is desirable, however, that the sound input by the player be reflected in the game content as a material, so that the game can be enjoyed with the input sound itself.
  • Accordingly, an object of the present invention is to provide a music game system capable of discriminating a voice input by a player and forming a musical scale based on the discrimination result, together with a computer program and a sound effect data generation method for the system.
  • The music game system of the present invention comprises: a voice input device for inputting sound; a sound output device for reproducing and outputting game sound; sound effect data storage means for storing sound effect data used to output, from the sound output device, each of a plurality of sound effects having different pitches; sequence data storage means for storing sequence data describing the sound effects to be output in response to the player's operations; pitch determination means for determining, based on the voice data of the voice input through the voice input device, a pitch representative of the input voice; scale generation means for generating, based on the determination result of the pitch determination means, a plurality of sound data items having different pitches from the voice data so that a musical scale is formed; and sound effect data storage control means for storing the plurality of sound data items generated by the scale generation means in the sound effect data storage means as at least a part of the sound effect data.
  • The computer program for the music game system of the present invention is executed by a computer incorporated in a music game system comprising a voice input device for inputting sound, a sound output device for reproducing and outputting game sound, sound effect data storage means for storing sound effect data used to output, from the sound output device, each of a plurality of sound effects having different pitches, and sequence data storage means for storing sequence data describing the sound effects to be output in response to the player's operations. The program causes the computer to function as: pitch determination means for determining, based on the voice data of the voice input through the voice input device, a pitch representative of the input voice; scale generation means for generating, based on the determination result of the pitch determination means, a plurality of sound data items having different pitches from the voice data so that a musical scale is formed; and sound effect data storage control means for storing the plurality of sound data items generated by the scale generation means in the sound effect data storage means as at least a part of the sound effect data.
  • According to the present invention, voice data is generated from the voice the player inputs to the voice input device, and the pitch determination unit determines the pitch representative of that voice data. Based on the determination result, the scale generation unit generates a plurality of sound data items having different pitches from the voice data whose pitch has been determined.
  • The plurality of sound data items forms a musical scale.
  • The plurality of sound data items is stored in the sound effect data storage means as sound effect data and is used as the sound effects to be output in response to the player's operations.
  • Because a scale is formed from a voice the player inputs arbitrarily, a melody based on the input voice can be played; the input voice is thus reflected in the game content as a material, and the game can be enjoyed with the voice the player has input.
  • In one aspect, the pitch determination means may determine the pitch of the voice by identifying a representative frequency from the voice data of the voice input through the voice input device.
  • For example, the pitch of the voice is determined by referring to the frequency spectrum of the voice data and taking the frequency with the maximum magnitude as the representative value.
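As an illustrative sketch only (the text prescribes no implementation), the representative-frequency approach above, taking the frequency of maximum magnitude in the spectrum as the representative value, might look as follows; the function name and the use of NumPy are assumptions:

```python
import numpy as np

def representative_pitch(samples, sample_rate):
    """Determine the pitch representative of the input voice by taking
    the frequency whose magnitude is largest in the spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[int(np.argmax(spectrum))]

# Example: one second of a pure 440 Hz tone sampled at 8 kHz.
rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440.0 * t)
pitch = representative_pitch(tone, rate)
```

In practice a voice signal would first be windowed and possibly smoothed, but the maximum-magnitude bin is the essence of the aspect described above.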
  • The scale generation means may generate a scale spanning at least one octave. According to this aspect, a melody can be played with the generated scale. If more sound data items are generated, the range of the scale can be expanded, increasing the number of melodies that can be played and enriching the game content.
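A minimal sketch of scale generation by pitch-shifting one recorded voice, assuming equal temperament and crude nearest-neighbour resampling (the actual pitch-shifting method is not specified in the text; all names are illustrative):

```python
def scale_ratios(octaves=1):
    """Playback-rate ratios for a chromatic, equal-tempered scale
    spanning `octaves` octaves (13 ratios for one octave)."""
    return [2 ** (n / 12) for n in range(12 * octaves + 1)]

def pitch_shift(samples, ratio):
    """Nearest-neighbour resampling: playing the result at the original
    sample rate raises the pitch by `ratio` (duration shrinks)."""
    return [samples[int(i * ratio)] for i in range(int(len(samples) / ratio))]

# From one recorded voice, generate one sound data item per semitone.
voice = [0.0] * 4800  # placeholder for the recorded voice samples
scale = [pitch_shift(voice, r) for r in scale_ratios(1)]
```

A real implementation would use time-stretching to preserve duration, but the ratio table is what makes the generated sound data items form a musical scale.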
  • The system may further include an input device having at least one operation unit, and the sound effects according to the description of the sequence data may be reproduced from the sound output device in response to the player's operation of the input device.
  • In this aspect, by operating the operation unit, the player can reproduce sound effects composed of the scale formed from the sound the player input. The input sound is thus reflected in the game content as a material, and the game can be enjoyed with the sound the player has input.
  • The sound effect data generation method of the present invention includes: a pitch determination step of determining, based on the voice data of the voice input through a voice input device, a pitch representative of the input voice; a scale generation step of generating, based on the determination result of the pitch determination step, a plurality of sound data items having different pitches from the voice data so that a musical scale is formed; and a storage step of storing the plurality of sound data items generated in the scale generation step in storage means as sound effect data to be output from a sound output device.
  • As a method for generating sound effect data in a music game system, and as a computer program therefor, the present invention achieves the same effects as the system described above.
  • The present invention is not limited to music game systems and can be applied to various electronic devices such as electronic musical instruments.
  • FIG. 1 is a functional block diagram of a game machine according to one embodiment of the present invention.
  • The game machine 1 comprises a housing 2 that a player (user) can hold, a first monitor 3 disposed on the right side of the housing 2, a second monitor 4 disposed on the left side of the housing 2, a plurality of push button switches 5 disposed above the first monitor 3, and a cross key 6 disposed below the first monitor 3.
  • a transparent touch panel 7 is superimposed on the surface of the first monitor 3.
  • the touch panel 7 is a known input device that outputs a signal corresponding to the contact position when the player touches with a touch pen or the like.
  • The game machine 1 is also provided with various input and output devices found on a typical portable game machine, such as a power switch, a volume operation switch, and a power lamp; illustration of these is omitted.
  • a control unit 10 as a computer is provided inside the game machine 1.
  • the control unit 10 includes a game control unit 11 as a control subject, and a pair of display control units 12 and 13 and an audio output control unit 14 that operate according to an output from the game control unit 11.
  • the game control unit 11 is configured as a unit in which a microprocessor and various peripheral devices such as an internal storage device (for example, ROM and RAM) necessary for the operation of the microprocessor are combined.
  • The display control units 12 and 13 draw images corresponding to the image data supplied from the game control unit 11 into their frame buffers and output video signals corresponding to the drawn images, so that predetermined images are displayed on the monitors 3 and 4, respectively.
  • The sound output control unit 14 generates a sound reproduction signal corresponding to the sound reproduction data supplied from the game control unit 11 and outputs it to the speaker 8, causing predetermined sounds (including musical tones and the like) to be reproduced from the speaker 8.
  • the game control unit 11 is connected with the push button switch 5, the cross key 6 and the touch panel 7 described above as input devices, and in addition to these, a voice input device (microphone) 9 is connected.
  • various input devices may be connected to the game control unit 11.
  • an external storage device 20 is connected to the game control unit 11.
  • The external storage device 20 is a storage medium that retains its contents even when power is not supplied, such as a nonvolatile semiconductor memory device (e.g. an EEPROM) or a magnetic storage device.
  • the storage medium of the external storage device 20 is detachable from the game machine 1.
  • the external storage device 20 stores a game program 21 and game data 22.
  • The game program 21 is a computer program necessary for executing a music game on the game machine 1 according to a predetermined procedure, and includes a sequence control module 23, a pitch determination module 24, and a scale generation module 25 for realizing the functions according to the present invention.
  • The game control unit 11 executes an operation program stored in the internal storage device, thereby performing the various initial settings necessary to operate as the game machine 1. It then reads the game program 21 from the external storage device 20 and executes it, setting up an environment for executing the music game according to the game program 21.
  • When the sequence control module 23 of the game program 21 is executed by the game control unit 11, a sequence processing unit 15 is generated in the game control unit 11.
  • Likewise, when the pitch determination module 24 is executed by the game control unit 11, a pitch determination unit 16 is generated, and when the scale generation module 25 is executed, a scale generation unit 17 is generated in the game control unit 11.
  • the sequence processing unit 15, the pitch determination unit 16, and the scale generation unit 17 are logical devices realized by a combination of computer hardware and a computer program.
  • The sequence processing unit 15 performs music game processing such as instructing operations to the player in accordance with the reproduction of the music (tune) selected by the player and generating sound effects in response to the player's operations.
  • the pitch determination unit 16 takes in an arbitrary sound input by the player to the sound input device 9 and performs a predetermined process described later to determine a representative value of the frequency.
  • the scale generation unit 17 generates a plurality of sound data whose pitches are changed based on the representative values determined by the pitch determination unit 16. These sound data form a scale of a predetermined octave number and constitute sound effects.
  • the game program 21 includes various program modules necessary for executing the music game in addition to the modules 23 to 25 described above, and the game control unit 11 generates logical devices corresponding to these modules. However, their illustration is omitted.
  • the game data 22 includes various data to be referred to when the music game is executed according to the game program 21.
  • the game data 22 includes music data 26, sound effect data 27, and image data 28.
  • the music data 26 is data necessary for reproducing and outputting music to be played from the speaker 8.
  • Although FIG. 2 shows one set of music data 26, in reality the player can select the tune to play from a plurality of tunes.
  • In that case, a plurality of sets of music data 26 are recorded in the game data 22, each with information identifying its tune.
  • the sound effect data 27 is data in which a plurality of types of sound effects to be output from the speaker 8 in response to the operation of the player are recorded in association with unique codes for each sound effect. Sound effects include musical instruments and various other types of sounds.
  • a vocal sound for outputting text from the speaker 8 is also included as a kind of sound effect.
  • The sound effect data 27 is prepared over a predetermined number of octaves by varying the pitch of each type of sound.
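One possible in-memory layout for the sound effect data 27 is sketched below; the storage format is not specified by the text, and the codes and per-semitone naming are invented for illustration. Each sound effect type, keyed by its unique code, holds one pitched variant per semitone over the prepared octaves:

```python
def build_effect_table(codes, octaves=2):
    """Map each sound effect code to a list of pitched variants, one
    per semitone over `octaves` octaves.  Placeholder strings stand in
    for the actual recorded sound data."""
    semitones = 12 * octaves + 1
    return {code: [f"{code}_semitone{n}" for n in range(semitones)]
            for code in codes}

# Effects A1, B1, C1 each prepared over two octaves (25 pitches).
table = build_effect_table(["A1", "B1", "C1"], octaves=2)
```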
  • the image data 28 is data for displaying the background image, various objects, icons, and the like in the game screen on the monitors 3 and 4.
  • the game data 22 further includes sequence data 29.
  • the sequence data 29 is data defining operations and the like to be instructed to the player. At least one sequence data 29 is prepared for one piece of music data 26.
  • the game operation instruction screen 100 is displayed on the first monitor 3, and the game information screen 110 is displayed on the second monitor 4.
  • The operation instruction screen 100 displays a first lane 101, a second lane 102, and a third lane 103, each extending in the vertical direction and separated from the others by dividing lines 104. An operation reference sign 105 is displayed at the lower end of each of the lanes 101, 102, and 103.
  • In the lanes 101, 102, and 103, objects 106 are displayed as operation instruction signs according to the sequence data 29.
  • the object 106 appears at the upper end of the lanes 101, 102, and 103 at an appropriate time in the music and is scrolled downward as the music progresses as indicated by an arrow A in FIG.
  • the player is requested to touch the lane 101, 102, or 103 on which the object 106 is displayed with an operation member such as the touch pen 120 as the object 106 reaches the operation reference mark 105.
  • A time difference between the time when the object 106 coincides with the operation reference sign 105 and the time of the player's touch operation is detected; the smaller the deviation, the more highly the player's operation is evaluated.
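The timing evaluation could be sketched as follows; the judgment window widths and grade names are invented for illustration, since the text states only that a smaller deviation is evaluated more highly:

```python
def judge(reference_time_ms, touch_time_ms):
    """Grade a touch by its deviation from the moment the object 106
    coincides with the operation reference sign 105.  The window widths
    (30/80/150 ms) and grade names are assumptions."""
    deviation = abs(touch_time_ms - reference_time_ms)
    for width, grade in ((30, "PERFECT"), (80, "GREAT"), (150, "GOOD")):
        if deviation <= width:
            return grade
    return "MISS"
```

For example, a touch 10 ms early or late would earn the top grade, while one 300 ms off would count as a miss.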
  • sound effects corresponding to each object 106 are reproduced from the speaker 8 in response to the touch operation.
  • For example, when an object 106 reaches the operation reference sign 105 in the second lane 102, the player may touch the second lane 102 in accordance with its arrival.
  • the touch position may be anywhere within the second lane 102. That is, in this embodiment, three operation units are formed by a combination of the lanes 101, 102, and 103 displayed on the first monitor 3 and the touch panel 7 superimposed on them.
  • Hereinafter, the terms lanes 101, 102, and 103 may each also be used to denote the corresponding operation unit.
  • the sound effect corresponding to each object 106 reproduced in response to the touch operation is selected from a plurality of sound effects recorded in the sound effect data 27.
  • the sound effect data 27 includes original data 27 a recorded in advance in the game data 22 and user data 27 b obtained based on the sound input by the player using the sound input device 9.
  • In the original data 27a, a plurality of sound effects A1, B1, ... are recorded; the sound effect A1 will be described as an example.
  • Other sound effects B1, C1,... Have similar sound data.
  • The user data 27b is similar to the original data 27a in the structure of the sound data of its sound effects A2, B2, ..., but differs from the original data 27a, which is recorded in advance, in that its sound data is generated from the sound the player inputs using the sound input device 9.
  • the sequence data 29 includes an initial setting unit 29a and an operation sequence unit 29b.
  • The initial setting unit 29a describes information that differs from tune to tune, such as the music tempo (BPM, as an example) used as an initial setting for playing the game, information specifying the sound effects to be generated when the lanes 101 to 103 are operated, and information specifying game execution conditions and the like.
  • operation designation information 29c and sound effect switching instruction information 29d are described in the operation sequence portion 29b.
  • in the operation designation information 29c, the operation timing of the lanes 101 to 103 is described in association with information designating one of the lanes 101 to 103. That is, as partly illustrated in FIG. 5, the operation designation information 29c is structured as a set of records, each of which associates the time in the music at which an operation should be performed (the operation time) with information designating the operation unit (lane). The operation time is described as a bar number, a beat number, and a time value in the music, separated by commas.
  • the operation unit is described as “button 1” when the first lane 101 is designated, “button 2” when the second lane 102 is designated, and “button 3” when the third lane 103 is designated. In the example of FIG. 5,
  • the first lane 101 is touched at the start time (000) of the first beat of the first bar
  • the second lane 102 is touched at the start time (000) of the second beat of the first bar
  • the operation time and the operation unit are designated such that the third lane 103 is touched when the time corresponding to “025” has elapsed since the start of the second beat of the first measure.
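The operation-time records above ("bar,beat,value" paired with a button name) can be converted to absolute times once the tempo (BPM) from the initial setting unit 29a is known. A minimal sketch, assuming 1-based bars and beats and a time value counted in hundredths of a beat (the exact unit of the time value is not fixed by the text):

```python
def operation_time_seconds(record: str, bpm: float, beats_per_bar: int = 4) -> float:
    """Convert an operation-time record "bar,beat,value" to seconds.

    Assumptions (not fixed by the description): bars and beats are
    1-based, and the time value counts hundredths of a beat.
    """
    bar, beat, value = record.split(",")
    beats = (int(bar) - 1) * beats_per_bar + (int(beat) - 1) + int(value) / 100.0
    return beats * 60.0 / bpm
```

At 120 BPM, for example, "01,2,025" falls a quarter of a beat after the start of the second beat of the first bar.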
  • the sound effect switching instruction information 29d is inserted at an appropriate position in the middle of the operation specifying information 29c.
  • in the sound effect switching instruction information 29d, the time in the music at which the sound effect is to be changed is described in association with the sound data of the sound effect to be generated when the lanes 101 to 103 are operated.
  • thereby, the sound effect generated when a lane designated in the operation designation information 29c is touched is changed.
  • the time on the music is described in the same format as the operation time in the operation designation information 29c.
  • the sound effect switching instruction information 29d designates one of the sound data of the original data 27a and the user data 27b recorded in the sound effect data 27 for each lane.
  • the sound effect switching instruction information 29d is inserted at a time on the music to which the sound effect is to be switched, and the setting of the sound effect is maintained until an instruction is given by the next sound effect switching instruction information 29d.
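The "setting is maintained until the next instruction" behavior described above can be sketched by scanning the time-sorted switching instructions up to the current time. The event representation below is a hypothetical simplification of the sound effect switching instruction information 29d:

```python
def current_sound_effects(switch_events, now):
    """Return the lane-to-sound mapping in force at time `now`.

    `switch_events` is a time-sorted list of (time, {lane: sound_id})
    entries modeled loosely on the switching instruction information
    29d; each setting stays in force until overridden by a later
    instruction, as described in the text.
    """
    effects = {}
    for time, mapping in switch_events:
        if time > now:
            break
        effects.update(mapping)
    return effects
```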
  • the sequence processing unit 15 of the game control unit 11 controls the display of each of the lanes 101 to 103 so that the object 106 matches the operation reference mark 105 at the operation time specified by the operation designation information 29c described above. Further, at the time in the music designated by the sound effect switching instruction information 29d, the sequence processing unit 15 switches the sound effect generated when the player touches the designated lanes 101 to 103.
  • when the game control unit 11 reads the game program 21 and completes the initial settings necessary to execute the music game, it stands by for a game start instruction from the player.
  • the instruction to start the game includes, for example, operations for specifying the data used in the game, such as selecting the music to be played or the difficulty level.
  • the procedure for receiving these instructions may be the same as that of a known music game or the like.
  • when the start of the game is instructed, the game control unit 11 reads the music data 26 corresponding to the music selected by the player and outputs it to the audio output control unit 14, thereby starting the reproduction of the music from the speaker 8. The control unit 10 thereby functions as a music reproducing means. In addition, the game control unit 11 reads out the sequence data 29 corresponding to the player's selection in synchronization with the reproduction of the music, generates the image data necessary for drawing the operation instruction screen 100 and the information screen 110 with reference to the image data 28, and outputs it to the display control units 12 and 13, so that the operation instruction screen 100 and the information screen 110 are displayed on the monitors 3 and 4. Furthermore, during the execution of the music game, the game control unit 11 repeatedly executes the sequence processing routine shown in FIG. 6 at a predetermined cycle as a process necessary for displaying the operation instruction screen 100 and the like.
  • the sequence processing unit 15 of the game control unit 11 first acquires the current time on the music in step S1. For example, timing is started with the internal clock of the game control unit 11 with the music reproduction start time as a reference, and the current time is acquired from the value of the internal clock.
  • the sequence processing unit 15 acquires, from the sequence data 29, the operation timing data existing within a time length corresponding to the display range of the operation instruction screen 100.
  • the display range is set to a time range corresponding to two measures of music from the current time to the future.
  • the sequence processing unit 15 calculates the coordinates in the operation instruction screen 100 of all the objects 106 to be displayed on the lanes 101 to 103.
  • the calculation is performed, as an example, as follows. Based on the designation of the lanes 101 to 103 associated with each operation time included in the display range, that is, the designation of one of “button 1” to “button 3” in the example of FIG. 5, it is determined in which lane each object 106 should be arranged. Further, the position of each object 106 in the time axis direction from the operation reference mark 105 (that is, in the moving direction of the object 106) is determined according to the time difference between each operation time and the current time. As a result, the coordinates necessary for arranging each object 106 along the time axis from the operation reference mark 105 in the designated lane 101 to 103 can be acquired.
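The position of an object along the time axis is proportional to how far its operation time lies in the future relative to the look-ahead window (two bars of music in this embodiment). A minimal sketch; the pixel units and the linear mapping are assumptions:

```python
def object_offset(operation_time: float, now: float,
                  approach_window: float, lane_length_px: int) -> float:
    """Pixel offset of an object 106 from the operation reference mark 105.

    `approach_window` is the look-ahead time mapped onto the lane
    length (two bars in the embodiment); concrete units are assumed.
    An offset of 0 means the object sits on the reference mark.
    """
    return (operation_time - now) / approach_window * lane_length_px
```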
  • in step S4, the sequence processing unit 15 determines whether or not sound effect switching instruction information 29d is present in the data acquired from the sequence data 29.
  • if it is present, the sequence processing unit 15 obtains the current time in step S5 and compares it with the time in the music designated by the sound effect switching instruction information 29d, to determine whether or not the current time corresponds to the timing of the switching instruction. If it does, in step S6 the sequence processing unit 15 changes the sound effect generated for each of the lanes 101 to 103 specified by the subsequent operation designation information 29c to the sound effect specified by the sound effect switching instruction information 29d.
  • after the start of the third beat of the first bar of the music, the sound data sd_101, sd_105, and sd_106 of the sound effect A2 in the user data 27b of the sound effect data 27 are assigned to the lanes 101, 102, and 103, respectively, and when the player touches the lanes 101 to 103, the corresponding sound data is reproduced. If no sound effect switching instruction information 29d is present in step S4, or if the current time does not correspond to the switching timing in step S5, the sequence processing unit 15 proceeds to step S7.
  • the sequence processing unit 15 then proceeds to step S7 and generates the image data necessary for drawing the operation instruction screen 100 based on the coordinates of the objects 106 calculated in step S3. Specifically, the image data is generated so that each object 106 is arranged at its calculated coordinates. The images of the objects 106 may be acquired from the image data 28.
  • in step S8, the sequence processing unit 15 outputs the image data to the display control unit 12, whereby the operation instruction screen 100 is displayed on the first monitor 3.
  • after step S8, the sequence processing unit 15 ends the current sequence processing routine.
  • the object 106 is scroll-displayed in the lanes 101 to 103 so that the object 106 reaches the operation reference mark 105 at the operation time described in the sequence data 29.
  • the creation of a sound effect is started by, for example, the player giving a start instruction while the music game is not being executed.
  • the pitch determination unit 16 first executes the pitch determination processing routine shown in FIG. 7, and the scale generation unit 17 then executes the scale generation processing routine shown in FIG. 8 based on the result of the pitch determination processing routine.
  • the pitch determination unit 16 of the game control unit 11 acquires the voice input by the player in step S11.
  • the player inputs voice while the voice input device 9 is able to capture it, and raw voice data is generated.
  • the pitch determination unit 16 performs A/D conversion on the raw voice data.
  • the analog signal of the raw voice data is converted into a digital signal, and the voice data of the input voice is generated.
  • FIG. 9 shows an example of audio data.
  • the audio data in FIG. 9 is a digital waveform of a guitar sound, where the horizontal axis indicates the duration and the vertical axis indicates the dynamic range.
  • a well-known technique may be used for the A/D conversion.
  • the pitch discriminating unit 16 obtains the frequency spectrum of the voice data in step S13.
  • FIG. 10 shows a frequency spectrum generated by fast Fourier transform from the audio data obtained in step S12. The horizontal axis indicates the frequency, and the vertical axis indicates the frequency distribution degree. The generation of the frequency spectrum is not limited to the calculation by the fast Fourier transform, and various known techniques may be used.
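Steps S13 and S14 (obtaining the spectrum, then taking the distribution maximum as the representative value) can be sketched with an off-the-shelf FFT. NumPy is used here as one of the "various known techniques" the text allows; the sample format (a 1-D float array at a known sample rate) is an assumption:

```python
import numpy as np

def representative_frequency(samples: np.ndarray, sample_rate: int) -> float:
    """Pick the peak of the magnitude spectrum as the representative value.

    A minimal sketch of steps S13-S14: compute the frequency spectrum
    by FFT, then return the frequency at which the distribution
    (magnitude) is maximum, as indicated by arrow p in FIG. 10.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])
```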
  • the pitch determination unit 16 determines a representative value from the frequency spectrum obtained in step S13.
  • the representative value is the maximum value of the frequency spectrum distribution.
  • the peak frequency indicated by the arrow p is a representative value.
  • the pitch of the voice data based on the voice input by the player is determined based on the frequency of the representative value thus determined.
  • the representative value may instead be calculated from the data of the band q spanning both ends of the maximum peak. With this method, a representative value can be calculated from such a band even when the peak is unclear, for example when it is spread over a wide frequency range.
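The band-based variant can be sketched as a magnitude-weighted mean over the band q around the maximum peak. The relative threshold used to delimit the band is an assumption; the text only says the band spans both ends of the peak:

```python
import numpy as np

def band_representative(freqs, magnitudes, rel_threshold=0.5):
    """Representative value from the band q around the maximum peak.

    Takes the magnitude-weighted mean frequency of the contiguous band
    whose magnitude stays above `rel_threshold` times the peak value.
    The threshold is a hypothetical way of delimiting the band q.
    """
    peak = int(np.argmax(magnitudes))
    cutoff = magnitudes[peak] * rel_threshold
    lo = peak
    while lo > 0 and magnitudes[lo - 1] >= cutoff:
        lo -= 1
    hi = peak
    while hi < len(magnitudes) - 1 and magnitudes[hi + 1] >= cutoff:
        hi += 1
    band = slice(lo, hi + 1)
    return float(np.average(freqs[band], weights=magnitudes[band]))
```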
  • the pitch determination unit 16 then ends the current pitch determination processing routine. With the above processing, a representative value is determined for the voice data based on the voice input by the player, and the pitch unique to that voice is determined.
  • the scale generation unit 17 executes the scale generation processing routine of FIG. 8.
  • in step S21, the scale generation unit 17 generates a plurality of sound data forming a scale from the sound data whose representative value has been determined.
  • the scale generation unit 17 frequency-converts the sound data based on the representative value so that the representative value of each sound data becomes the frequency of each sound forming a scale having a predetermined number of octaves.
  • FIG. 11 shows an example of sound data subjected to frequency conversion. The waveform of FIG. 11 is obtained by frequency-converting the audio data of FIG. 9 upward by one octave.
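The one-octave conversion of FIG. 11 corresponds to doubling every frequency; more generally, equal-temperament scale steps scale frequencies by 2^(n/12). The naive resampling sketch below is one assumed way to perform the frequency conversion (note that it also shortens the duration, which the text does not address):

```python
import numpy as np

def pitch_shift(samples: np.ndarray, semitones: int) -> np.ndarray:
    """Shift the pitch by resampling (a naive sketch).

    Shifting by +12 semitones doubles every frequency, i.e. raises the
    sound one octave, matching the conversion illustrated in FIG. 11.
    """
    ratio = 2.0 ** (semitones / 12.0)
    old_idx = np.arange(len(samples))
    new_idx = np.arange(0, len(samples), ratio)
    return np.interp(new_idx, old_idx, samples)

def build_scale(samples: np.ndarray, octaves: int = 1) -> list:
    """Generate the 12 chromatic steps per octave from one input sound."""
    return [pitch_shift(samples, n) for n in range(12 * octaves + 1)]
```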
  • the scale generation unit 17 stores the generated sound data set in the sound effect data 27.
  • after step S22, the scale generation unit 17 ends the current scale generation processing routine.
  • a plurality of pieces of sound data having different representative value frequencies are generated based on the sound data for which the representative value is determined, and a scale is formed.
  • a set of sound data forming the scale is stored as sound effects in the user data 27b of the sound effect data 27.
  • the external storage device 20 of the game machine 1 functions as sound effect data storage means and sequence data storage means.
  • the control unit 10 functions as a pitch determination means by causing the pitch determination unit 16 to execute the processes of steps S11 to S14 in FIG. 7, as a scale generation means by causing the scale generation unit 17 to execute step S21 in FIG. 8, and as a sound effect data storage control means by causing the scale generation unit 17 to execute step S22 of FIG. 8.
  • the present invention can be implemented in various forms without being limited to the above-described forms.
  • the music game machine 1 has been described as an example of a device that causes the pitch determination unit, the scale generation unit, and the sound effect data storage control unit to function, but the present invention is not limited thereto.
  • a melody can be played with an arbitrary voice input by the player.
  • the music game system of the present invention is not limited to one realized by a portable game machine; it may be realized in an appropriate form, such as a stationary home-use game machine, an arcade game machine installed in a commercial facility, or a game system using a network.
  • the input device is not limited to an example using a touch panel, and input devices having various configurations such as a push button, a lever, and a trackball can be used.


Abstract

A music game system is provided with a voice input device (9) for inputting voice, a speaker (8) for reproducing and outputting game sounds, and an external storage device (20) for storing sound effect data (27) for outputting each of a plurality of sound effects of different pitches from the speaker (8) and sequence data (29) describing the relationship between the operations of a player and the sound effects to be output accordingly. On the basis of the voice data of a voice input via the voice input device (9), a pitch representing the input voice is determined; on the basis of the pitch determination result, a plurality of sound data whose pitches differ from one another and from the voice data are generated so as to form a scale; and the collection of the plurality of sound data is stored in the sound effect data (27) as at least a portion of the sound effect data.

Description

Music game system, computer program thereof, and method of generating sound effect data
 The present invention relates to a music game system or the like in which the voice input by a player is reflected in the game content.
 Music game machines in which the game content changes based on voice input by a player are well known. For example, there are known machines that reflect the input voice in the actions of a character (see Patent Document 1) and machines that record and score a player's singing and have players compete on the result (see Patent Document 2).
JP 2002-136764 A
JP 10-268876 A
 In each of the game machines described above, the game content is changed by capturing the player's voice: the pitch of the player's voice is detected, and the character's actions are changed based on a comparison with a reference pitch. However, there is no configuration in which the voice input by the player is reflected in the game content as material so that the game can be enjoyed with that input voice.
 Therefore, an object of the present invention is to provide a music game system capable of determining a voice input by a player and forming a scale based on the determination result, a computer program therefor, and a method of generating sound effect data.
 The music game system of the present invention comprises: a voice input device for inputting voice; an audio output device for reproducing and outputting game sounds; sound effect data storage means for storing sound effect data for outputting each of a plurality of sound effects of different pitches from the audio output device; sequence data storage means for storing sequence data describing the relationship between a player's operations and the sound effects to be output in response; pitch determination means for determining, based on the voice data of the voice input via the voice input device, a pitch representing the input voice; scale generation means for generating, based on the pitch determination result of the pitch determination means, a plurality of sound data whose pitches differ from one another and from the voice data so that a scale is formed; and sound effect data storage control means for storing the plurality of sound data generated by the scale generation means in the sound effect data storage means as at least a part of the sound effect data.
 The computer program for a music game system of the present invention causes a computer incorporated in a music game system, which comprises a voice input device for inputting voice, an audio output device for reproducing and outputting game sounds, sound effect data storage means for storing sound effect data for outputting each of a plurality of sound effects of different pitches from the audio output device, and sequence data storage means for storing sequence data describing the relationship between a player's operations and the sound effects to be output in response, to function as: pitch determination means for determining, based on the voice data of the voice input via the voice input device, a pitch representing the input voice; scale generation means for generating, based on the pitch determination result of the pitch determination means, a plurality of sound data whose pitches differ from one another and from the voice data so that a scale is formed; and sound effect data storage control means for storing the plurality of sound data generated by the scale generation means in the sound effect data storage means as at least a part of the sound effect data.
 In the present invention, voice data is generated by the pitch determination unit based on the voice input by the player to the voice input device, and a pitch representing the voice data is determined. Then, based on the pitch determination result for the voice data, the scale generation unit generates a plurality of sound data of different pitches from the voice data whose pitch has been determined. This plurality of sound data forms a scale. The plurality of sound data is stored in the sound effect data storage means as sound effect data and used as the sound effects to be output in response to the player's operations. Since a scale is thus formed based on a voice arbitrarily input by the player, a melody can be played with the input voice, or the input voice can be reflected in the game content as material so that the player can enjoy the game with his or her own voice.
 In one aspect of the music game system of the present invention, the pitch determination means may determine the pitch of the voice by specifying a representative frequency from the voice data of the voice input via the voice input device. According to this aspect, the pitch of the voice is determined, for example, by referring to the frequency spectrum of the voice data and specifying the frequency at which the distribution is maximum as the representative value.
 In one aspect of the music game system of the present invention, the scale generation means may generate a scale of at least one octave. According to this aspect, a melody can be played by generating a scale. Generating more sound data widens the range of the scale, increases the number of melodies that can be played, and allows the game content to be made more sophisticated.
 One aspect of the music game system of the present invention further includes an input device having at least one operation unit, and the sound effects according to the description of the sequence data may be reproduced from the audio output device based on the player's operation of the input device. According to this aspect, by operating the operation unit, the player can reproduce sound effects composed of a scale formed using the voice that the player input. Therefore, the input voice can be reflected in the game content as material, and the game can be enjoyed with the voice input by the player.
 The sound effect data generation method of the present invention comprises: a pitch determination step of determining, based on the voice data of a voice input via a voice input device, a pitch representing the input voice; a scale generation step of generating, based on the pitch determination result of the pitch determination step, a plurality of sound data whose pitches differ from one another and from the voice data so that a scale is formed; and a step of storing the plurality of sound data generated in the scale generation step in storage means as sound effect data to be output from an audio output device.
 The present invention is a method of generating sound effect data in a music game system and its computer program, and achieves the same effects. The present invention is not limited to music game systems and can also be applied to various electronic devices such as electronic musical instruments.
 As described above, in the music game system and the computer program thereof according to the present invention, voice data is generated by the pitch determination unit based on the voice input by the player to the voice input device, and a pitch representing the voice data is determined. Then, based on the pitch determination result for the voice data, the scale generation unit generates a plurality of sound data of different pitches from the voice data whose pitch has been determined. This plurality of sound data forms a scale. The plurality of sound data is stored in the sound effect data storage means as sound effect data and used as the sound effects to be output in response to the player's operations. Therefore, since a scale is formed based on a voice arbitrarily input by the player, a melody can be played with the input voice, or the input voice can be reflected in the game content as material so that the player can enjoy the game with his or her own voice. The sound effect data generation method achieves the same effects.
FIG. 1 shows the external appearance of a game machine according to one embodiment of the present invention.
FIG. 2 is a functional block diagram of the game machine according to the embodiment.
FIG. 3 is an enlarged view of the operation instruction screen displayed as part of the game screen.
FIG. 4 shows an example of the contents of the sound effect data.
FIG. 5 shows an example of the contents of the sequence data.
FIG. 6 is a flowchart showing the sequence processing routine executed by the game control unit.
FIG. 7 is a flowchart showing the pitch determination processing routine executed by the game control unit.
FIG. 8 is a flowchart showing the scale generation processing routine executed by the game control unit.
FIG. 9 is a graph showing an example of voice data.
FIG. 10 is a graph showing the frequency spectrum of the voice data of FIG. 9.
FIG. 11 is a graph showing sound data obtained by frequency-converting the voice data of FIG. 9.
 Hereinafter, an embodiment in which the present invention is applied to a portable game machine will be described. As shown in FIG. 1, the game machine 1 includes a housing 2 that a player (user) can hold, a first monitor 3 disposed on the right side of the housing 2, a second monitor 4 disposed on the left side of the housing 2, a plurality of push button switches 5 disposed above the first monitor 3, and a cross key 6 disposed below the first monitor 3. A transparent touch panel 7 is superimposed on the surface of the first monitor 3. The touch panel 7 is a known input device that, when the player touches it with a touch pen or the like, outputs a signal corresponding to the contact position. In addition, the game machine 1 is provided with various input and output devices included in a typical portable game machine, such as a power switch, a volume control switch, and a power lamp, but these are omitted from FIG. 1.
 As shown in FIG. 2, a control unit 10 serving as a computer is provided inside the game machine 1. The control unit 10 includes a game control unit 11 as the control subject, and a pair of display control units 12 and 13 and an audio output control unit 14 that operate according to outputs from the game control unit 11. The game control unit 11 is configured as a unit combining a microprocessor with various peripheral devices necessary for its operation, such as internal storage devices (ROM and RAM, as an example). The display control units 12 and 13 draw images corresponding to the image data supplied from the game control unit 11 into frame buffers and output video signals corresponding to the drawn images to the monitors 3 and 4, respectively, thereby displaying predetermined images on the monitors 3 and 4. The audio output control unit 14 generates an audio reproduction signal corresponding to the audio reproduction data supplied from the game control unit 11 and outputs it to the speaker 8, thereby reproducing predetermined sounds (including musical tones and the like) from the speaker 8.
 The push button switches 5, the cross key 6, and the touch panel 7 described above are connected to the game control unit 11 as input devices, and in addition a voice input device (microphone) 9 is connected. Various other input devices may also be connected to the game control unit 11. Furthermore, an external storage device 20 is connected to the game control unit 11. The external storage device 20 uses a storage medium that can retain its contents without a power supply, such as a nonvolatile semiconductor memory device like an EEPROM, or a magnetic storage device. The storage medium of the external storage device 20 is detachable from the game machine 1.
 The external storage device 20 stores a game program 21 and game data 22. The game program 21 is a computer program necessary for executing a music game on the game machine 1 according to a predetermined procedure, and includes a sequence control module 23, a pitch determination module 24, and a scale generation module 25 for realizing the functions according to the present invention. When the game machine 1 is started, the game control unit 11 executes an operation program stored in its internal storage device to carry out the various initial settings necessary to operate as the game machine 1, and then reads the game program 21 from the external storage device 20 and executes it, thereby setting up an environment for executing the music game according to the game program 21. When the sequence control module 23 of the game program 21 is executed by the game control unit 11, a sequence processing unit 15 is generated in the game control unit 11. Similarly, when the pitch determination module 24 is executed, a pitch determination unit 16 is generated, and when the scale generation module 25 is executed, a scale generation unit 17 is generated in the game control unit 11.
 The sequence processing unit 15, the pitch determination unit 16, and the scale generation unit 17 are logical devices realized by the combination of computer hardware and a computer program. The sequence processing unit 15 executes music game processing such as instructing the player to perform operations in time with the playback of the music (song) selected by the player, or generating sound effects in response to the player's operations. The pitch determination unit 16 captures an arbitrary voice that the player has input to the voice input device 9 and determines a representative frequency value by performing predetermined processing described later. The scale generation unit 17 generates, based on the representative value determined by the pitch determination unit 16, a plurality of pieces of sound data with altered pitches. These pieces of sound data form a scale spanning a predetermined number of octaves and constitute a sound effect. In addition to the modules 23 to 25 described above, the game program 21 includes various other program modules necessary for executing the music game, and corresponding logical devices are generated in the game control unit 11, but these are omitted from the drawings.
 The game data 22 includes various data to be referred to when the music game is executed according to the game program 21. For example, the game data 22 includes music data 26, sound effect data 27, and image data 28. The music data 26 is the data necessary for reproducing and outputting the music to be played in the game from the speaker 8. Although FIG. 2 shows a single piece of music data 26, in practice the player can select the music to play from a plurality of songs, and the game data 22 records a plurality of pieces of music data 26, each accompanied by information identifying its song. The sound effect data 27 records a plurality of types of sound effects to be output from the speaker 8 in response to the player's operations, each associated with a code unique to that sound effect. The sound effects include instrument sounds and various other kinds of sounds; vocal sounds for outputting text from the speaker 8 are also one kind of sound effect. For each type, the sound effect data 27 is prepared at varied pitches spanning a predetermined number of octaves. The image data 28 is data for displaying the background image, various objects, icons, and the like of the game screen on the monitors 3 and 4.
 The game data 22 further includes sequence data 29. The sequence data 29 defines the operations and the like to be instructed to the player. At least one piece of sequence data 29 is prepared for each piece of music data 26.
 Next, an outline of the music game executed on the game machine 1 will be described. As shown in FIG. 1, while the game machine 1 is executing the music game, a game operation instruction screen 100 is displayed on the first monitor 3 and a game information screen 110 is displayed on the second monitor 4. As also shown in FIG. 3, the operation instruction screen 100 displays a first lane 101, a second lane 102, and a third lane 103 extending in the vertical direction in a visually separated state, divided by dividing lines 104. An operation reference marker 105 is displayed at the lower end of each of the lanes 101, 102, and 103. While the music game is being executed, that is, while playback of the music is in progress, objects 106 serving as operation instruction markers are displayed in the lanes 101, 102, and 103 according to the sequence data 29.
 An object 106 appears at the upper end of a lane 101, 102, or 103 at an appropriate point in the song and scrolls downward as the song progresses, as indicated by arrow A in FIG. 3. The player is required to touch the lane 101, 102, or 103 in which the object 106 is displayed with an operation member such as the touch pen 120 as the object 106 reaches the operation reference marker 105. When the player performs a touch operation, the time lag between the moment the object 106 coincides with the operation reference marker 105 and the moment of the player's touch operation is detected; the smaller this lag, the more highly the player's operation is evaluated. In addition, in response to the touch operation, the sound effect corresponding to each object 106 is reproduced from the speaker 8. In the example of FIG. 3, an object 106 is just about to reach the operation reference marker 105 in the second lane 102, and the player should touch the second lane 102 in time with its arrival. The touch may be made anywhere within the second lane 102. That is, in this embodiment, three operation units are formed by the combination of the lanes 101, 102, and 103 displayed on the first monitor 3 and the touch panel 7 superimposed on them. Hereinafter, each of the lanes 101, 102, and 103 may be used as a term standing for its operation unit.
 The sound effect corresponding to each object 106 that is reproduced in response to a touch operation is selected from the plurality of sound effects recorded in the sound effect data 27. As shown in FIG. 4, the sound effect data 27 includes original data 27a recorded in the game data 22 in advance, and user data 27b obtained from the voice the player has input through the voice input device 9. The original data 27a and the user data 27b each record a plurality of sound effects A1, B1, and so on. Taking the sound effect A1 as an example, it records a set of sound data sd_000, sd_001, sd_002, ... in which each note constituting a scale is associated with a unique code. The other sound effects B1, C1, ... have similar sound data. The user data 27b is similar to the original data 27a in the structure of the sound data of its sound effects A2, B2, ..., but differs from the pre-recorded original data 27a in that its sound data are generated from the voice the player has input through the voice input device 9.
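The structure of FIG. 4 might be pictured as a nested mapping from effect names to code→waveform tables. This is purely illustrative: the `sd_xxx` codes follow the examples in the text, while the effect names, byte-string placeholders, and the `lookup_sound` helper are assumptions, not part of the patent.

```python
# Hypothetical sketch of the sound-effect data 27 layout described above.
# Each effect (A1, B1, ...) maps unique codes to the sound data for one
# note of its scale; the user data mirrors the original data's structure.

original_data = {
    "A1": {"sd_000": b"<pcm note 0>", "sd_001": b"<pcm note 1>", "sd_002": b"<pcm note 2>"},
    "B1": {"sd_010": b"<pcm>", "sd_011": b"<pcm>"},
}

user_data = {
    # Same structure, but the waveforms come from the player's recorded voice.
    "A2": {"sd_101": b"<voice note 0>", "sd_105": b"<voice note 1>", "sd_106": b"<voice note 2>"},
}

def lookup_sound(code: str) -> bytes:
    """Resolve a unique sound code to its waveform, searching both banks."""
    for bank in (original_data, user_data):
        for effect in bank.values():
            if code in effect:
                return effect[code]
    raise KeyError(code)
```

Because every code is unique across both banks, the sequence data can refer to original and user-made sounds interchangeably.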
 Next, the sequence data 29 will be described in detail. As shown in FIG. 5, the sequence data 29 includes an initial setting portion 29a and an operation sequence portion 29b. The initial setting portion 29a describes information specifying execution conditions of the game that differ from song to song, such as the tempo of the music (for example, BPM) used as the initial setting for playing the game, and information designating the sound effect to be generated when each of the lanes 101 to 103 is operated.
 The operation sequence portion 29b, on the other hand, describes operation designation information 29c and sound effect switching instruction information 29d. In the operation designation information 29c, the operation times for the lanes 101 to 103 are described in association with information designating one of the lanes 101 to 103. That is, as partly illustrated in FIG. 5, the operation designation information 29c is structured as a set of records, each associating the time in the song at which an operation should be performed (the operation time) with information designating the operation unit (lane). The operation time is described as comma-separated values indicating the bar number in the song, the beat number, and the time within the beat. The time within a beat is the elapsed time from the start of the beat, expressed as the number of units from the start of the beat when the length of one beat is divided equally into n unit times. For example, with n = 100, to designate as the operation time the point in the second beat of the first bar of the song at which one quarter of the beat has elapsed, "01,2,025" is written. The operation unit is written as "button1" to designate the first lane 101, "button2" to designate the second lane 102, and "button3" to designate the third lane 103. In the example of FIG. 5, the operation times and operation units are designated such that the first lane 101 is touched at the start (000) of the first beat of the first bar, the second lane 102 is touched at the start (000) of the second beat of the first bar, and the third lane 103 is touched when a time corresponding to "025" has elapsed from the start of the second beat of the first bar.
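The bar/beat/unit timing format described above can be parsed as follows. This is a minimal sketch: the function names, the n = 100 default, and the assumption of four beats per bar are illustrative choices, not specified by the patent.

```python
def parse_operation_time(field: str, n: int = 100):
    """Parse a '<bar>,<beat>,<units>' operation-time field, e.g. '01,2,025'.

    Returns (bar, beat, fraction_of_beat); the units value counts n equal
    subdivisions of one beat, as in the description above.
    """
    bar_s, beat_s, units_s = field.split(",")
    return int(bar_s), int(beat_s), int(units_s) / n

def to_beats(bar: int, beat: int, frac: float, beats_per_bar: int = 4) -> float:
    """Absolute position in beats from the start of the song (bar/beat 1-indexed)."""
    return (bar - 1) * beats_per_bar + (beat - 1) + frac
```

For the example in the text, `parse_operation_time("01,2,025")` yields bar 1, beat 2, and a quarter of a beat, which `to_beats` places 1.25 beats into the song.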
 Sound effect switching instruction information 29d is inserted at appropriate positions within the operation designation information 29c. The sound effect switching instruction information 29d describes, in association, the time in the song at which the sound effect should be changed and the sound data of the sound effects to be generated when the lanes 101 to 103 are operated, and changes the sound effect generated when a lane designated in the subsequent operation designation information 29c is touched. The time in the song is described in the same format as the operation time in the operation designation information 29c. The sound effect switching instruction information 29d designates, for each lane, sound data from either the original data 27a or the user data 27b recorded in the sound effect data 27. Each piece of sound effect switching instruction information 29d is inserted at the time in the song at which the sound effect is to be switched, and its sound effect setting is maintained until the next piece of sound effect switching instruction information 29d gives a new instruction.
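The rule that each switching instruction stays in force until the next one can be sketched as a lookup over time-ordered events. The representation below (tuples of time and lane→code assignments) is a hypothetical encoding of the 29d records, not the patent's own data format.

```python
def effects_at(time_beats, switch_events, initial):
    """Return the lane-to-sound-code assignment in force at a song position.

    switch_events: list of (time_in_beats, {lane: code}) sorted by time.
    Each instruction overrides the previous assignment for its lanes and
    remains effective until a later instruction replaces it.
    """
    current = dict(initial)
    for t, assignment in switch_events:
        if t > time_beats:
            break          # later instructions are not yet in force
        current.update(assignment)
    return current
```

For instance, with a single switch at beat 2.0 reassigning "button1" to a user-data sound, queries before beat 2.0 still see the initial assignment, and queries afterward see the new one.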
 The sequence processing unit 15 of the game control unit 11 controls the display of each of the lanes 101 to 103 so that each object 106 coincides with the operation reference marker 105 at the operation time designated by the operation designation information 29c described above. The sequence processing unit 15 also performs control so that, at the time in the song designated by the sound effect switching instruction information 29d, the sound effects generated when the player touches the designated lanes 101 to 103 are switched.
 Next, the processing of the game control unit 11 when the music game is executed on the game machine 1 will be described. When the game control unit 11 has read the game program 21 and completed the initial settings necessary for executing the music game, it stands by for a game start instruction from the player. The game start instruction includes, for example, operations specifying the data to be used in the game, such as selecting the song to play or the difficulty level. The procedure for accepting these instructions may be the same as in known music games and the like.
 When a game start is instructed, the game control unit 11 reads the music data 26 corresponding to the song the player has selected and outputs it to the audio output control unit 14, thereby starting playback of the song from the speaker 8. The control unit 10 thereby functions as music playback means. In synchronization with the playback of the song, the game control unit 11 also reads the sequence data 29 corresponding to the player's selection, generates the image data necessary for drawing the operation instruction screen 100 and the information screen 110 while referring to the image data 28, and outputs it to the display control units 12 and 13, thereby displaying the operation instruction screen 100 and the information screen 110 on the monitors 3 and 4. Furthermore, while the music game is being executed, the game control unit 11 repeatedly executes the sequence processing routine shown in FIG. 6 at a predetermined cycle as processing necessary for displaying the operation instruction screen 100 and the like.
 When the sequence processing routine of FIG. 6 is started, the sequence processing unit 15 of the game control unit 11 first acquires the current time in the song in step S1. For example, timekeeping is started by the internal clock of the game control unit 11 with the start of song playback as the reference, and the current time is obtained from the value of that internal clock. In the subsequent step S2, the sequence processing unit 15 acquires from the sequence data 29 the operation time data that falls within the time length corresponding to the display range of the operation instruction screen 100. As one example, the display range is set to a time range corresponding to two bars of the song from the current time into the future.
 In the next step S3, the sequence processing unit 15 calculates the coordinates within the operation instruction screen 100 of all the objects 106 to be displayed in the lanes 101 to 103. As one example, the calculation is performed as follows. Based on the lane designation associated with each operation time included in the display range, that is, the designation of one of "button1" to "button3" in the example of FIG. 5, the unit determines in which of the lanes 101 to 103 each object 106 should be placed. It also determines the position of each object 106 in the time axis direction from the operation reference marker 105 (that is, in the movement direction of the objects 106) according to the time difference between each operation time and the current time. In this way, the coordinates of each object 106 necessary for arranging it along the time axis from the operation reference marker 105 within its designated lane 101 to 103 can be acquired.
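The position calculation of step S3 can be illustrated as follows, assuming a linear mapping from time-until-operation to vertical pixels. The two-bar window matches the example display range above, but the lane height and marker position are invented for the example; the patent does not fix concrete pixel values.

```python
def object_y(op_time, now, window=2.0, marker_y=400.0):
    """Vertical position (pixels) of an object whose operation time is op_time.

    Times are in bars. The display window covers `window` bars ahead of `now`;
    an object enters at y = 0 (top of the lane) when op_time is a full window
    away and reaches the operation reference marker (y = marker_y) exactly at
    its operation time, scrolling linearly in between.
    """
    remaining = op_time - now            # time left until the object must be hit
    progress = 1.0 - remaining / window  # 0.0 at the top edge, 1.0 at the marker
    return marker_y * progress
```

Recomputing this every frame as `now` advances produces the downward scroll indicated by arrow A in FIG. 3.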
 When the coordinate calculation for the objects 106 is complete, the sequence processing unit 15 proceeds to step S4 and determines whether sound effect switching instruction information 29d is present in the data acquired from the sequence data 29. If sound effect switching instruction information 29d is present, the sequence processing unit 15 acquires the current time in step S5 and compares it with the time in the song designated by the sound effect switching instruction information 29d, determining whether the current time corresponds to the timing of the switching instruction. If it does, the sequence processing unit 15 changes, in step S6, the sound effect to be generated for each of the lanes 101 to 103 designated in the subsequent operation designation information 29c to the sound effect designated by the sound effect switching instruction information 29d. For example, in the example shown in FIG. 5, from the start of the third beat of the first bar of the song onward, the sound data sd_101, sd_105, and sd_106 of the sound effect A2 in the user data 27b of the sound effect data 27 are assigned to the lanes 101, 102, and 103 respectively, and when the player touches the lanes 101 to 103, the corresponding sound data is reproduced. If there is no sound effect switching instruction information 29d in step S4, or if the current time does not correspond to the switching timing in step S5, the sequence processing unit 15 proceeds to step S7.
 When the switching of the sound effects is complete, the sequence processing unit 15 proceeds to the next step S7 and generates the image data necessary for drawing the operation instruction screen 100 based on the coordinates of the objects 106 calculated in step S3. Specifically, the image data is generated so that each object 106 is placed at its calculated coordinates. The images of the objects 106 may be obtained from the image data 28.
 In the subsequent step S8, the sequence processing unit 15 outputs the image data to the display control unit 12, whereby the operation instruction screen 100 is displayed on the first monitor 3. When the processing of step S8 is finished, the sequence processing unit 15 ends the current iteration of the sequence processing routine. By repeatedly executing the above processing, the objects 106 are scroll-displayed within the lanes 101 to 103 so that each object 106 reaches the operation reference marker 105 at the operation time described in the sequence data 29.
 Next, the processing of the pitch determination unit 16 and the scale generation unit 17 when creating a sound effect based on the voice input by the player on the game machine 1 will be described. A sound effect is created, for example, when the player instructs its creation to start during standby, while the music game is not being executed. When the creation of a sound effect is started, the pitch determination unit 16 first executes the pitch determination processing routine shown in FIG. 7, and the scale generation unit 17 then executes the scale generation processing routine shown in FIG. 8 based on the result of the pitch determination processing routine.
 When the pitch determination processing routine of FIG. 7 is started, the pitch determination unit 16 of the game control unit 11 acquires the voice input by the player in step S11. When the player inputs voice while the voice input device 9 is ready to capture it, raw voice data is generated. In the subsequent step S12, the pitch determination unit 16 A/D-converts the raw voice data. The analog signal of the raw voice data is thereby converted into a digital signal, generating the voice data of the input voice. FIG. 9 shows an example of such voice data: a digital waveform of a guitar sound, with the horizontal axis indicating dynamic range and the vertical axis indicating duration. Well-known techniques may be used for the A/D conversion.
 Then, in step S13, the pitch determination unit 16 obtains the frequency spectrum of the voice data. FIG. 10 shows a frequency spectrum generated by applying a fast Fourier transform to the voice data obtained in step S12; the horizontal axis indicates frequency and the vertical axis indicates the degree of distribution at each frequency. The generation of the frequency spectrum is not limited to computation by fast Fourier transform; various well-known techniques may be used. In the subsequent step S14, the pitch determination unit 16 determines a representative value from the frequency spectrum obtained in step S13. The representative value is taken as the maximum of the frequency spectrum distribution; in the graph of FIG. 10, the frequency of the peak indicated by the arrow p is the representative value. The frequency of the representative value determined in this way identifies the pitch of the voice data based on the voice input by the player. Alternatively, the representative value may be calculated from the data of the band q spanning both ends of the lobe containing this maximum peak; with this method, a representative value can be calculated from a fixed band even when the peak is unclear, for example when the peak frequency is spread over a range. When the processing of step S14 is finished, the pitch determination unit 16 ends the current iteration of the pitch determination processing routine. Through the above processing, a representative value is determined for the voice data of the voice input by the player, and its characteristic pitch is identified.
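Steps S13 and S14 — obtaining a spectrum, then picking its peak — can be sketched as below. A naive DFT stands in for the FFT so the example is self-contained; a real implementation would use an FFT library, and the band-q averaging variant mentioned above is omitted.

```python
import math

def representative_frequency(samples, sample_rate):
    """Return the representative pitch of a mono signal: the frequency of
    the spectrum bin with the largest magnitude (the peak p in FIG. 10).

    Uses a naive DFT for illustration; an FFT gives the same result faster.
    """
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC, ignore the mirrored half
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n  # bin index -> frequency in Hz
```

Fed a 100 Hz sine sampled at 1 kHz, the function recovers 100 Hz, since that frequency falls exactly on a spectrum bin.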
 When a representative value has been obtained in the pitch determination processing routine, the scale generation unit 17 executes the scale generation processing routine of FIG. 8. In step S21, the scale generation unit 17 generates, from the voice data whose representative value has been determined, a plurality of pieces of sound data forming a scale. The scale generation unit 17 frequency-converts the voice data based on its representative value so that the representative value of each piece of sound data becomes the frequency of one of the notes forming a scale spanning a predetermined number of octaves. FIG. 11 shows an example of frequency-converted sound data; the waveform of FIG. 11 is the voice data of FIG. 9 frequency-converted upward by one octave. Then, in step S22, the scale generation unit 17 stores the generated set of sound data in the sound effect data 27; these pieces of sound data are stored in the user data 27b within the sound effect data 27. When the processing of step S22 is finished, the scale generation unit 17 ends the current iteration of the scale generation processing routine. Through the above processing, a plurality of pieces of sound data whose representative-value frequencies differ from one another are generated from the voice data whose representative value was determined, and a scale is formed. The set of sound data forming the scale is stored as a sound effect in the user data 27b of the sound effect data 27.
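The frequency conversion of step S21 can be approximated by simple resampling, as sketched below. Note that this changes the clip's duration along with its pitch, which production pitch-shifters usually compensate for; the function names, the linear interpolation, and the equal-tempered 13-note octave are assumptions for illustration, not the patented implementation.

```python
def pitch_shift(samples, semitones):
    """Shift a waveform's pitch by resampling with linear interpolation.

    Raising the pitch by k semitones multiplies every frequency by 2**(k/12)
    and shortens the clip by the same ratio.
    """
    ratio = 2 ** (semitones / 12)        # frequency ratio per equal-tempered step
    out_len = int(len(samples) / ratio)
    out = []
    for i in range(out_len):
        pos = i * ratio                  # fractional read position in the source
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(a * (1 - frac) + b * frac)
    return out

def build_scale(samples, semitone_range=range(0, 13)):
    """One octave of sound data (sd_xxx entries) from a single recording."""
    return {f"sd_{k:03d}": pitch_shift(samples, k) for k in semitone_range}
```

Applied to the player's recording, `build_scale` produces the set of sound data that step S22 would store in the user data 27b, with the +12-semitone entry corresponding to the one-octave conversion shown in FIG. 11.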
 In the embodiment above, the external storage device 20 of the game machine 1 functions as the sound effect data storage means and the sequence data storage means. The control unit 10 functions as the pitch determination means by causing the pitch determination unit 16 to execute the processing of steps S11 to S14 of FIG. 7, as the scale generation means by causing the scale generation unit 17 to execute step S21 of FIG. 8, and as the sound effect data storage control means by causing the scale generation unit 17 to execute step S22 of FIG. 8.
 The present invention is not limited to the embodiment described above and can be implemented in various forms. For example, in this embodiment the music game machine 1 has been described as an example of a device in which the pitch determination means, the scale generation means, and the sound effect data storage control means operate, but the invention is not limited to this; it may be applied, for example, to various electronic devices such as electronic musical instruments. When the present invention is applied to an electronic musical instrument, a melody can be played with an arbitrary voice input by the player.
 The music game system of the present invention is not limited to one realized on a portable game machine; it may be realized in any appropriate form, such as a stationary home game machine, an arcade game machine installed in a commercial facility, or a game system realized over a network. The input device is not limited to the example using a touch panel; input devices of various configurations, such as push buttons, levers, and trackballs, can be used.

Claims (6)

  1.  A music game system comprising:
      a voice input device for inputting voice;
      an audio output device for reproducing and outputting game sounds;
      sound effect data storage means for storing sound effect data for causing each of a plurality of sound effects differing in pitch to be output from the audio output device;
      sequence data storage means for storing sequence data describing the relationship between the player's operations and the sound effects to be output in correspondence with them;
      pitch determination means for determining, based on voice data of the voice input through the voice input device, a pitch representative of the input voice;
      scale generation means for generating, based on the pitch determination result of the pitch determination means, a plurality of pieces of sound data differing in pitch from the voice data and from one another so as to form a scale; and
      sound effect data storage control means for causing the sound effect data storage means to store the plurality of pieces of sound data generated by the scale generation means as at least part of the sound effect data.
  2.  The music game system according to claim 1, wherein the pitch determination means determines the pitch of the voice by identifying a representative frequency from the voice data of the voice input through the voice input device.
  3.  The music game system according to claim 1 or 2, wherein the scale generation means generates a scale of at least one octave.
  4.  The music game system according to any one of claims 1 to 3, further comprising an input device having at least one operation unit,
      wherein a sound effect according to the description of the sequence data is reproduced from the audio output device based on the player's operation of the input device.
5.  A computer program for a music game system, the system comprising a voice input device for inputting voice, an audio output device for reproducing and outputting game sounds, sound effect data storage means for storing sound effect data for causing the audio output device to output each of a plurality of sound effects having mutually different pitches, and sequence data storage means for storing sequence data describing the relationship between a player's operations and the sound effects to be output, the program causing a computer incorporated in the music game system to function as:
    pitch determination means for determining a pitch representative of the input voice based on voice data of the voice input through the voice input device;
    scale generation means for generating, based on the pitch determination result of the pitch determination means, a plurality of sound data having mutually different pitches from the voice data so that a musical scale is formed; and
    sound effect data storage control means for storing the plurality of sound data generated by the scale generation means in the sound effect data storage means as at least part of the sound effect data.
6.  A method of generating sound effect data, comprising:
    a pitch determination step of determining a pitch representative of input voice based on voice data of the voice input through a voice input device;
    a scale generation step of generating, based on the pitch determination result of the pitch determination step, a plurality of sound data having mutually different pitches from the voice data so that a musical scale is formed; and
    a step of storing the plurality of sound data generated in the scale generation step in storage means as sound effect data to be output from an audio output device.
PCT/JP2010/065337 2009-09-11 2010-09-07 Music game system, computer program of same, and method of generating sound effect data WO2011030761A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201080039640.3A CN102481488B (en) 2009-09-11 2010-09-07 Music game system and method of generating sound effect data
US13/394,967 US20120172099A1 (en) 2009-09-11 2010-09-07 Music game system, computer program of same, and method of generating sound effect data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-210571 2009-09-11
JP2009210571A JP5399831B2 (en) 2009-09-11 2009-09-11 Music game system, computer program thereof, and method of generating sound effect data

Publications (1)

Publication Number Publication Date
WO2011030761A1 (en)

Family

ID=43732433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/065337 WO2011030761A1 (en) 2009-09-11 2010-09-07 Music game system, computer program of same, and method of generating sound effect data

Country Status (4)

Country Link
US (1) US20120172099A1 (en)
JP (1) JP5399831B2 (en)
CN (1) CN102481488B (en)
WO (1) WO2011030761A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6360280B2 (en) * 2012-10-17 2018-07-18 任天堂株式会社 GAME PROGRAM, GAME DEVICE, GAME SYSTEM, AND GAME PROCESSING METHOD

Citations (6)

Publication number Priority date Publication date Assignee Title
JPH08123448A (en) * 1994-10-18 1996-05-17 Sega Enterp Ltd Image processor using waveform analysis of sound signal
JP2001009152A (en) * 1999-06-30 2001-01-16 Konami Co Ltd Game system and storage medium readable by computer
JP2002215151A (en) * 2001-01-22 2002-07-31 Sega Corp Acoustic signal output method and bgm generating method
JP2002351489A (en) * 2001-05-29 2002-12-06 Namco Ltd Game information, information storage medium, and game machine
JP2008054851A (en) * 2006-08-30 2008-03-13 Namco Bandai Games Inc Program, information storage medium, and game device
JP2008178449A (en) * 2007-01-23 2008-08-07 Yutaka Kojima Puzzle game system and numeric keypad character

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
CN1106209C (en) * 1989-01-10 2003-04-23 任天堂株式会社 Electronic gaming device with pseudo-stereophonic sound generating capabilities
AU704156B2 (en) * 1994-12-02 1999-04-15 Sony Computer Entertainment Inc. Sound source data generating method, recording medium, and sound source data processing device
DE69632695T2 (en) * 1995-09-29 2005-06-16 Yamaha Corp., Hamamatsu Method and apparatus for generating music
TR199902479T2 (en) * 1997-04-14 2000-04-21 Thomson Consumer Electronics, Inc. A system for processing and decoding MPEG-compatible data and internet information.
US6464585B1 (en) * 1997-11-20 2002-10-15 Nintendo Co., Ltd. Sound generating device and video game device using the same
JP4236024B2 (en) * 1999-03-08 2009-03-11 株式会社フェイス Data reproducing apparatus and information terminal
AU7455400A (en) * 1999-09-16 2001-04-17 Hanseulsoft Co., Ltd. Method and apparatus for playing musical instruments based on a digital music file
JP3630075B2 (en) * 2000-05-23 2005-03-16 ヤマハ株式会社 Sub-melody generation apparatus and method, and storage medium
JP4206332B2 (en) * 2003-09-12 2009-01-07 株式会社バンダイナムコゲームス Input device, game system, program, and information storage medium
JP3981382B2 (en) * 2005-07-11 2007-09-26 株式会社コナミデジタルエンタテインメント GAME PROGRAM, GAME DEVICE, AND GAME CONTROL METHOD
CN1805003B (en) * 2006-01-12 2011-05-11 深圳市蔚科电子科技开发有限公司 Pitch training method
US20080200224A1 (en) * 2007-02-20 2008-08-21 Gametank Inc. Instrument Game System and Method
JP4467601B2 (en) * 2007-05-08 2010-05-26 ソニー株式会社 Beat enhancement device, audio output device, electronic device, and beat output method


Also Published As

Publication number Publication date
CN102481488B (en) 2015-04-01
CN102481488A (en) 2012-05-30
JP2011056122A (en) 2011-03-24
JP5399831B2 (en) 2014-01-29
US20120172099A1 (en) 2012-07-05

Similar Documents

Publication Publication Date Title
JP3686906B2 (en) Music game program and music game apparatus
JP3317686B2 (en) Singing accompaniment system
JP3719124B2 (en) Performance instruction apparatus and method, and storage medium
JP2003302984A (en) Lyric display method, lyric display program and lyric display device
JP2006030692A (en) Musical instrument performance training device and program therefor
JP3728942B2 (en) Music and image generation device
JP2001145778A (en) Game system, and computer readable storage medium for effecting the system
JP5806936B2 (en) Music game system capable of outputting text and computer-readable storage medium storing computer program thereof
JP2020056938A (en) Musical performance information display device and musical performance information display method, musical performance information display program, and electronic musical instrument
JP3286683B2 (en) Melody synthesis device and melody synthesis method
JP2014200454A (en) Recording medium, game device and game progress method
JP2013068657A (en) Image generation device, method and program for generating image, performance support apparatus, and method and program for supporting performance
JP5399831B2 (en) Music game system, computer program thereof, and method of generating sound effect data
JP4211388B2 (en) Karaoke equipment
JP4131279B2 (en) Ensemble parameter display device
US8878044B2 (en) Processing device and method for displaying a state of tone generation apparatus
JP6411412B2 (en) Program, game apparatus, and game progress method
JP6260176B2 (en) Performance practice apparatus, method, and program
JPWO2014174621A1 (en) Recording medium, game device, and game progress method
JP7425558B2 (en) Code detection device and code detection program
JP5773956B2 (en) Music performance apparatus, music performance control method, and program
JP2008165098A (en) Electronic musical instrument
JP2017015957A (en) Musical performance recording device and program
JPH11231872A (en) Musical sound generation device, image generation device, game device and information storage medium
JP3873872B2 (en) Performance information recording apparatus and program

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201080039640.3

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10815358

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 13394967

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10815358

Country of ref document: EP

Kind code of ref document: A1