WO2024029572A1 - Game system, and game program and control method for game system - Google Patents


Info

Publication number
WO2024029572A1
WO2024029572A1 (PCT/JP2023/028307)
Authority
WO
WIPO (PCT)
Prior art keywords
sound data
game
data
song
output sound
Prior art date
Application number
PCT/JP2023/028307
Other languages
English (en)
Japanese (ja)
Inventor
明広 石原
暁 中田
康司 山中
義隆 東
Original Assignee
株式会社コナミデジタルエンタテインメント
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社コナミデジタルエンタテインメント
Publication of WO2024029572A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/814 Musical performances, e.g. by evaluating the player's ability to follow a notation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/825 Fostering virtual characters

Definitions

  • the present invention relates to a game system, a game program for the game system, and a control method for simulating the growth of a game object to be grown and outputting audio according to the state of the game object.
  • Patent Document 1 discloses an audio mixdown device.
  • This audio mixdown device includes an audio file input section, a mixdown control section, an effect processing section, and a synthesized audio data output section.
  • the audio file input unit inputs an audio file recorded using karaoke on a device used by a user
  • the audio file input unit stores the input audio file in a recorded audio file database in association with identification data.
  • the mixdown control unit selects a plurality of recorded audio files to be synthesized from the recorded audio file database in response to a request from a user.
  • the effect processing section performs effect processing on a plurality of recorded audio files and generates synthesized audio data.
  • the synthesized audio data output section outputs the generated synthesized audio data.
  • in some games, a performance is staged in which a game object, such as a character appearing in the game, sings, and the song is output while the game is being played. One such game is a training game in which the user trains a character and improves the character's parameters.
  • This training game includes, for example, a performance in which the trained character sings.
  • a game system simulates the raising of a game object to be raised, performs an effect in which the game object plays a song or sings a song, and outputs the played song or the sung song.
  • the game system comprises: state acquisition means for acquiring state information indicating a state of the game object; data acquisition means for acquiring output sound data of the song corresponding to the state indicated by the state information; and audio output control means for outputting sound based on the acquired output sound data.
  • a game program causes a computer, in a game system that simulates the raising of a game object to be raised, performs an effect in which the game object plays a song or sings a song, and outputs the played song or the sung song, to function as: state acquisition means for acquiring state information indicating a state of the game object; data acquisition means for acquiring output sound data of the song corresponding to the state indicated by the state information; and audio output control means for outputting sound based on the acquired output sound data.
  • a control method is a control method for a game system that includes a computer, simulates the raising of a game object to be raised, performs an effect in which the game object plays a song or sings a song, and outputs the played song or the sung song. In the control method, the computer acquires state information indicating a state of the game object, acquires output sound data of the song corresponding to the state indicated by the state information, and outputs sound based on the acquired output sound data.
  • FIG. 1 is a schematic block diagram of a game system according to a first embodiment. The drawings also include a schematic diagram showing the proportion of inferior parts, an explanatory diagram of the inferior part and the performance part, and a flowchart for acquiring output sound data.
  • FIG. 2 is a schematic block diagram of a game system according to a second embodiment.
  • identification information is data composed of letters, numbers, symbols, images, or a combination thereof.
  • FIG. 1 is a schematic diagram showing the overall configuration of a game system 100.
  • the game system 100 includes a game terminal 10, which is an example of a user terminal, and a server 30.
  • the server 30 is configured as one logical server by combining a plurality of server units 52.
  • the server 30 may be configured by a single server unit 52.
  • the server 30 may be configured logically using cloud computing.
  • the server 30 is configured to be connectable to the network 50.
  • network 50 is configured to utilize the TCP/IP protocol to implement network communications.
  • a local area network LAN connects the server 30 and the Internet 51.
  • the Internet 51 as a WAN and a local area network LAN are connected via a router 53.
  • the game terminal 10 is also configured to be connected to the Internet 51.
  • Servers 30 may be interconnected by a local area network LAN or by the Internet 51.
  • the network 50 may be a leased line, a telephone line, an in-house network, a mobile communication network, another communication line, or a combination thereof, whether wired or wireless.
  • the game terminal 10 is a computer device operated by a user.
  • the game terminal 10 includes a desktop or notebook personal computer 54 and a mobile terminal device 55 such as a mobile phone, including a smartphone.
  • the game terminal 10 includes various computer devices such as a stationary home game device, a portable game device, a portable tablet terminal device, and an arcade game machine.
  • the game terminal 10 can allow the user to enjoy various services provided by the server 30. Note that, below, an example in which the game terminal 10 is the mobile terminal device 55 will be mainly explained.
  • the server 30 transmits the program and data used for the game to the game terminal 10 via the network 50.
  • the game terminal 10 then stores the received program and data.
  • the game terminal 10 may be configured to read a program or data stored in an information storage medium (not shown). In this case, the game terminal 10 may acquire the program or data via an information storage medium.
  • the user can play various games on the game terminal 10.
  • the game includes elements for growing game objects.
  • the games include simulation games that simulate the growth of game objects, competitive trading card games, music games, board games, mahjong games, RPGs, horse racing games, fighting games, puzzle games, quiz games, and sports games such as baseball and soccer.
  • a game object is an object that is displayed or used on the game terminal 10.
  • game objects are used in game processing to progress the game, and include characters, cards, effects, equipment, items, and the like.
  • a material object is a game object serving as a training material.
  • the training object is a copy of the character. That is, a plurality of training objects may exist for the same character.
  • the training object may be the material object (the character) itself. There are also cases where a material object can be duplicated, in which case the training object is the material object and/or a copy of the material object.
  • it is also possible to adopt a configuration in which there is no pre-existing training object, and a predetermined training object is given to the user when the training game is started.
  • the breeding object may be a virtual card or the like corresponding to a character.
  • the game has a training part in which training objects are trained, and the training part is divided into multiple sections. Each section is composed of a plurality of turns, and the last turn has a live part in which the training object performs live. The number of turns included in each section varies depending on the character that is the material of the training object, the ongoing scenario, and so on. In each turn, the training object can be made to perform a predetermined action.
  • an event targeting the training object occurs at an appropriate timing within each turn. For example, the event is an event for playing with friends, an event for a training camp, or the like. Note that although multiple events may occur in one turn, there may be times when no event occurs.
  • a support event will occur with a predetermined probability during the training part.
  • the event object corresponds to, for example, a character that is a material object.
  • an event object is a virtual card on which a character is drawn.
  • a support event that occurs by configuring a deck is associated with an event object.
  • an event object of a singer character is associated with a support event that increases singing ability, and the support event occurs with a predetermined probability in a lesson that increases singing ability.
  • the user trains the training object by having it perform various actions. That is, the nurturing object performs various actions in response to the user's instructions, and as a result, the parameters of the nurturing object change.
  • the actions that the training object performs include lessons, work, rest, going out, going to the hospital, and acquiring skills.
  • the actions taken by the training object may have effects such as changes in parameters associated with the training object, acquisition or loss of abilities, acquisition or use of items, and changes in relationships with other game objects.
  • the nurturing object may be able to perform training camps, trips, live performances, reporting, photography, appearances, competitions, auditions, going to school, and the like.
  • the parameters of the training object are variables linked to object identification information that uniquely identifies the training object, and change as the game progresses.
  • the parameters include information indicating the size or height of the ability, information indicating the presence or absence of the ability, and information indicating the state of the game object.
  • the parameter changes as the value of the parameter increases or decreases.
  • the parameters vary depending on whether the flag is turned on or off.
  • the parameters include singing ability, dancing ability, expressive ability, visual appeal (for example, values that increase depending on clothing, makeup, hairstyle, etc.), acting ability, performance ability, mental strength, stamina, intelligence, charm, and the like.
  • consumption elements include skill points, in-game currency, and the like.
  • the points that can be acquired by performing an action may be a value that gives the user an advantage in acquiring skills.
  • the points are values that give an effect of reducing skill points required when acquiring a skill.
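  The parameter and skill-point mechanics above can be sketched as a small data structure: numeric abilities and flags keyed to an object ID, plus a skill-point pool and a discount value that reduces skill costs. All field names and numbers here are hypothetical illustrations, not values from the patent.

```python
# Hypothetical sketch of training-object parameters: numeric abilities,
# on/off flags, and skill points consumed when acquiring a skill.
# The discount value reduces the skill-point cost, as described above.

training_object = {
    "object_id": "char_001_copy_1",
    "params": {"singing": 100, "dancing": 80, "stamina": 50},
    "flags": {"injured": False},
    "skill_points": 120,
    "skill_discount": 30,   # points that reduce skill costs
    "skills": [],
}

def acquire_skill(obj, skill_name, base_cost):
    """Spend skill points (minus any discount) to acquire a skill."""
    cost = max(0, base_cost - obj["skill_discount"])
    if obj["skill_points"] < cost:
        return False
    obj["skill_points"] -= cost
    obj["skills"].append(skill_name)
    return True

acquire_skill(training_object, "vibrato", 100)  # costs 100 - 30 = 70
```

  Flag-type parameters ("presence or absence of an ability") and numeric parameters coexist in the same record, matching the description that parameters change both by value and by flag.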
  • an event may occur according to a predetermined scenario, such as a drama depicting friendship between game objects.
  • an event that affects the cultivation of the cultivation object may occur at a predetermined probability in each turn.
  • a beneficial effect is an increase in the amount of increase in a parameter due to a lesson, or an increase in a parameter of a training object.
  • the disadvantageous effect is, for example, a decrease in the amount of increase in the parameter or a decrease in the parameter of the nurturing object.
  • the live part functions as a checkpoint to confirm the level of development.
  • the live part will be held during the last turn of each section.
  • a mini-game is played in which the live progresses automatically.
  • a live performance sung by a single breeding object or a unit made up of a plurality of game objects including the breeding object is displayed as a performance video.
  • whether the live success conditions are achieved depends on the parameters of the training object.
  • the success condition may be that the singing ability exceeds a predetermined value, that the number of fans or ticket sales exceeds a predetermined number, or simply that the live performance itself is held.
  • if the success conditions are achieved, the training can be continued; if they are not achieved, the training ends.
  • the live part is not limited to being held at the last turn of each section as described above; it may be held only at the end of the training part, or in a part different from the training part (i.e., a part other than the training part).
  • one turn progresses each time the training object performs an action. That is, the user causes the training object to perform an action every turn.
  • a training object executes a job selected by the user from among the jobs constituting a work route including a plurality of jobs.
  • the breeding object grows and the parameters associated with the breeding object change.
  • a new job that the training object can perform may be opened.
  • work includes live performances, interviews, filming, appearances, competitions, and auditions.
  • the actions that the training object is made to perform include actions that do not consume turns. For example, even if you perform an action to acquire a skill, a turn does not pass, and you can have the training object perform another action. Note that an action that does not consume a turn may be performed in the turn of the live part before the start of the live performance. Also, in the last turn of each section, a target live performance will be performed as a live part. Note that conditions for holding a live performance may be set separately.
  • the event condition is to obtain a predetermined number of fans, ticket sales, etc.
  • the number of fans or the number of ticket sales increases by having the user perform an action such as work or by the occurrence of an event.
  • the live success rate in the live part increases or decreases depending on the parameters of the training object grown in the training part. In order to have a successful live performance, the user needs to take lessons in the training part to increase parameters.
  • the number of turns until the end of training is arbitrary, but an example is 72 turns, which corresponds to 6 years in the game.
  • the user trains at least one training object in the training part consisting of a plurality of turns.
  • the trained object can be used as an inherited object.
  • an arbitrary character can be used as a training object, and a training object of that character that has previously been trained can be used as an inherited object.
  • the training object can inherit the parameters, talents, job routes, etc. associated with the inheritance object.
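  The inheritance mechanic described above can be sketched as a new training object receiving a fraction of each inherited object's parameters. The 10% inheritance rate and the additive rule are illustrative assumptions; the patent does not specify how inherited elements are combined.

```python
# Hypothetical sketch of inheritance: a new training object inherits a
# fraction of the parameters of the selected inherited objects.
# The 10% rate and additive combination are illustrative assumptions.

def start_training(base_params, inherited_objects, rate=0.1):
    """Create a training object, inheriting part of each ancestor's params."""
    params = dict(base_params)
    for ancestor in inherited_objects:
        for name, value in ancestor["params"].items():
            params[name] = params.get(name, 0) + int(value * rate)
    return {"params": params}

ancestor_a = {"params": {"singing": 800, "dancing": 400}}
ancestor_b = {"params": {"singing": 600}}
new_obj = start_training({"singing": 100, "dancing": 100},
                         [ancestor_a, ancestor_b])
# singing: 100 + 80 + 60 = 240, dancing: 100 + 40 = 140
```

  The same shape could carry inherited talents or job routes as additional keys on the inherited objects.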
  • the game described above proceeds as follows.
  • the user selects a training object to be trained from among a plurality of game objects to be training materials.
  • the user also selects an inheritance object that includes an inheritance element to be inherited by the training object.
  • a user selects two inherited objects.
  • the number of selectable inherited objects may be one or three or more.
  • the user selects one or more event objects to construct a deck.
  • the user selects six event objects.
  • the number of selectable event objects may be five or less or seven or more.
  • a support event may occur during the training part.
  • the support events may be different from each other, and each has a predetermined influence on the growth of the growth object.
  • the training part begins.
  • the user selects one action from among lessons, work, rest, going to the hospital, and acquiring skills, and causes the training object to perform the action.
  • when the training object is in a bad state (for example, when it is sick or injured), the user selects the action of going to the hospital in order to resolve the bad state.
  • when the condition of the training object becomes poor, the user selects the action of going out to improve the condition.
  • when the condition is poor, the effectiveness of the lesson decreases: the amount of increase in the parameter decreases, the increase in the parameter is limited, or the parameter decreases.
  • the physical strength of the training object increases or decreases. Basically, when you do lessons or work, the physical strength of the training object decreases. If the physical strength is lower than a predetermined value, the probability of being injured by performing the lesson increases, the effectiveness of the lesson decreases, or the lesson cannot be selected. Therefore, the user selects a resting action to recover a predetermined amount of physical strength. Note that physical strength may be able to be recovered by the occurrence of an event, the use of a skill or an item, or the like.
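  The physical-strength rules above (lessons cost stamina, low stamina weakens lessons, rest recovers a fixed amount) can be sketched as a tiny turn-action function. The threshold, gains, and costs are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the stamina rules described above. All numbers
# (threshold, gains, costs, caps) are illustrative assumptions.

LOW_STAMINA = 30  # below this, lessons become much less effective

def do_action(obj, action):
    if action == "lesson":
        gain = 10 if obj["stamina"] >= LOW_STAMINA else 4
        obj["singing"] += gain
        obj["stamina"] = max(0, obj["stamina"] - 20)
    elif action == "rest":
        obj["stamina"] = min(100, obj["stamina"] + 40)
    return obj

obj = {"singing": 0, "stamina": 50}
do_action(obj, "lesson")  # full effect: singing +10, stamina 50 -> 30
do_action(obj, "lesson")  # still at threshold: singing +10, stamina -> 10
do_action(obj, "lesson")  # low stamina: singing +4 only
do_action(obj, "rest")    # recover stamina
```

  A fuller version would also model the increased injury probability and events that restore stamina, both mentioned above.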
  • the user selects a work action to obtain predetermined conditions for holding a live performance.
  • when the training object performs the job, the number of fans or the number of ticket sales increases.
  • the user selects an action to acquire skills.
  • the effects exerted by a skill may include an effect that makes it easier to succeed in an event or job, an effect that changes the parameters of a training object, an effect that increases the amount of increase in parameters due to lessons, and the like.
  • a live part will occur as a checkpoint.
  • if the success conditions are achieved, the training can be continued; if they are not achieved, the training ends. Note that a plurality of checkpoints may be provided.
  • the game system 100 provides a game in which a game object plays a song or sings a song, and outputs a song to be played or a song to be sung. Furthermore, in the game, the raising of a game object to be raised is simulated.
  • This game system 100 includes a game terminal 10 and a server 30, as shown in FIG. In the following, an example of a game in which a game object sings a song and outputs the sung song will be mainly described.
  • the game terminal 10 includes a terminal control section 11 as an example of a terminal control means, a terminal storage section 12 as an example of a terminal storage means, a terminal communication section 13 as an example of a terminal communication means, a terminal operation section 14 as an example of an operation means, a terminal display section 15 as an example of a display means, and an audio output section 16 as an example of an audio output means.
  • the terminal control unit 11 is configured as a computer and includes a processor (not shown). This processor is, for example, a CPU (Central Processing Unit) or an MPU (Micro-Processing Unit). Further, the processor controls the entire game terminal 10 based on the control program and game program stored in the terminal storage unit 12, and also controls various processes in an integrated manner.
  • the terminal storage unit 12 is a computer-readable non-temporary storage medium.
  • the terminal storage unit 12 includes RAM (Random Access Memory), which is a system work memory for the processor to operate, and storage devices such as ROM (Read Only Memory), HDD (Hard Disk Drive), and SSD (Solid State Drive) that store programs and system software.
  • the CPU of the terminal control unit 11 executes processing operations such as various calculations, controls, and determinations according to a control program stored in the ROM or HDD of the terminal storage unit 12.
  • the terminal control unit 11 can also perform control according to a control program stored in an external storage medium, such as a portable recording medium (for example, a CD (Compact Disc), DVD (Digital Versatile Disc), CF (CompactFlash) card, or USB (Universal Serial Bus) memory) or a server on the Internet.
  • the terminal storage unit 12 stores a terminal program PG, which is an example of a game program, object data 12A, and terminal audio data 12B.
  • the object data 12A includes, as game object data, a character image, parameter values, information indicating the state of the object, etc., which are associated with object identification information that uniquely identifies the object.
  • the terminal audio data 12B includes the character's voice and sound data related to singing.
  • the terminal audio data 12B is waveform data in a predetermined format such as WAV format.
  • the terminal storage unit 12 stores data (not shown) necessary for game processing to advance the game, such as game images and game music.
  • the terminal program PG causes the terminal control unit 11, as a computer, to function as a status acquisition unit 11A which is an example of a status acquisition means, a data acquisition unit 11B which is an example of a data acquisition means, a game progress unit 11C which is an example of an audio output control means, and a generation unit 11D which is an example of a generation means. That is, the terminal control section 11 has each section as a logical device realized by a combination of hardware and software. The terminal program PG can also be stored in another computer-readable non-transitory storage medium in addition to the terminal storage unit 12.
  • the terminal operation unit 14 is an input device through which the user inputs game operations.
  • the terminal display unit 15 is a device that displays game images, and is, for example, a liquid crystal display or an organic EL display.
  • the audio output unit 16 is an output device that outputs game music and the like, and is, for example, a speaker or headphones. Note that in FIG. 3, the terminal operation section 14 and the terminal display section 15 are shown separately. However, the terminal operation section 14 and the terminal display section 15 may be integrally configured as a touch panel. Further, the terminal operating section 14 may include a touch pad, a pointing device such as a mouse, a button, a key, a lever, a stick, etc. that are not integrated with the terminal display section 15. Further, the terminal operation unit 14 may be a device that detects the voice emitted by the user or the user's motion, and performs an operation according to the detection result.
  • audio is output based on output sound data corresponding to the state of the training object. For example, in the early stages of the game, before the singing ability parameter has increased, the audio output unit 16 outputs audio based on the output sound data of a poor singer. On the other hand, in the final stage of the game, after the singing ability has increased, the audio output unit 16 outputs audio based on the output sound data of a skilled singer. Thereby, the user can audibly sense the growth of the training object.
  • the state of the training object changes from a poor state to a normal state to an excellent state.
  • in the poor state, a poor song is output with a narrow vocal range, the pitch of the high or low notes differing from the original song.
  • a poor song may be output in which the pitch or key differs from the original song, with a strained voice or a low, flat pitch.
  • a poor song may be output in which the beginning of the song or the solo singing part is faster or slower than the original song.
  • a poor song may be output in which the pitch differs significantly from the original song in a part where the pitch changes greatly or in a part where the pitch changes stepwise. That is, as an example, the inferior state may be a state in which at least some of the musical elements are inferior compared to the original song.
  • in the normal state, a normal song is output with a slightly wider vocal range, the pitch of the highest or lowest notes differing only slightly from the original song.
  • a normal song may be output in which the pitch or key differs slightly from the original song, with a slightly strained voice or a slightly low, flat pitch.
  • a normal song may be output in which the beginning of the song or the solo singing part is slightly faster or slower than the original song.
  • a normal song may be output that has a slightly different pitch from the original song in a part where the pitch changes greatly or in a part where the pitch changes stepwise. That is, as an example, the normal state may be a state in which the musical elements are comparable to those of the original song.
  • in the excellent state, a good song is output with a wide vocal range and the same waveform as the original song except for the performance part.
  • a good song may be output in which the pitch, key, timing, etc. of the song are the same as the original song except for the performance part.
  • a good song may be output that includes a performance part reflecting singing technique. That is, as an example, the superior state may be a state in which at least some of the musical elements are superior compared to the original song.
  • the output sound data is sound data corresponding to the song outputted from the audio output section 16, and is used to finally output the song from the audio output section 16.
  • the output sound data is selected from among multiple types of sound data included in the terminal audio data 12B stored in the terminal storage unit 12.
  • multiple types of sound data may be stored in the server storage unit 32 of the server 30. Further, the output sound data may be generated each time in the game terminal 10 or the server 30 as necessary.
  • the multiple types of sound data include first sound data that corresponds to the inferior state and has many inferior parts, second sound data that corresponds to the normal state and has few inferior parts, and third sound data that corresponds to the excellent state, has no inferior parts, and includes a performance part. These first to third sound data are generated as candidates for the output sound data. Moreover, the first to third sound data are based on the same song; therefore, at least a portion (for example, some of the bars) of the first to third sound data has the same waveform.
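  The relationship between the first to third sound data can be sketched by treating a song as a list of bars: all three candidates share the reference bars, but differing numbers of bars are swapped for inferior renditions, and the third adds a performance part. The bar labels stand in for waveform segments and are purely illustrative.

```python
# Hypothetical sketch of first-to-third sound data based on the same
# song: bars are shared, some are replaced by inferior renditions
# (many for the first data, few for the second, none for the third,
# which instead adds a performance part). Labels stand in for waveforms.

reference_bars = ["bar1", "bar2", "bar3", "bar4"]

def make_sound_data(inferior_bar_indices, performance_bar_indices=()):
    bars = list(reference_bars)
    for i in inferior_bar_indices:
        bars[i] = bars[i] + "_inferior"    # e.g. off-pitch rendition
    for i in performance_bar_indices:
        bars[i] = bars[i] + "_vibrato"     # performance part
    return bars

first_sound_data = make_sound_data([0, 1, 3])   # many inferior parts
second_sound_data = make_sound_data([1])        # few inferior parts
third_sound_data = make_sound_data([], [3])     # performance part only

# At least some bars share the same waveform across all three candidates:
shared = [b for b in reference_bars
          if b in first_sound_data and b in second_sound_data
          and b in third_sound_data]
```

  The `shared` list demonstrates the claim that at least a portion of the three candidates has the same waveform.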
  • the inferior parts of the first sound data and the second sound data have different pitches compared to the same lyrics part of the third sound data.
  • the sound output using the inferior part of the lyric "a" is a degraded sound that gives the user a sense of discomfort compared to the part of the lyric "a" in the reference sound data.
  • degraded sounds include sounds whose pitch is too high or too low, sounds whose timing is early or late, sounds with incorrect or skipped lyrics, sounds with too quiet or too loud a voice, sounds with a strained voice, hoarse sounds, and the like.
  • the performance part of the third sound data has a performance that reflects singing technique.
  • for example, the performance part includes a performance that reflects a singing technique such as vibrato.
  • the performance part may also include techniques such as "staccato", "shakuri" (scooping up to a note), "fall", or "kobushi" (an ornamental melisma).
  • the output sound data may be generated using reference sound data corresponding to a song sung according to the musical score, inferior sound data including an inferior part that is inferior compared to the reference sound data, and superior sound data including a superior performance part.
  • for example, the output sound data is generated by mixing at least two of the reference sound data, the inferior sound data, and the superior sound data.
  • the first sound data, the second sound data, or the third sound data are used as output sound data.
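  Mixing two of the candidate sound data can be sketched as a weighted per-sample blend, with the weight derived from the growth state. The linear crossfade and the sample values are illustrative assumptions; the patent only states that at least two types are mixed.

```python
# Hypothetical sketch of generating output sound data by mixing two of
# the reference / inferior / superior sound data. Waveforms are short
# float lists; a growth-dependent weight blends them linearly.

def mix(data_a, data_b, weight_b):
    """Weighted per-sample mix of two equal-length waveforms."""
    return [a * (1 - weight_b) + b * weight_b
            for a, b in zip(data_a, data_b)]

inferior = [0.0, 0.2, 0.4]
reference = [0.0, 0.4, 0.8]

# Early in training, the inferior data dominates; later, the reference
# data dominates, so the rendition audibly improves with growth.
early = mix(inferior, reference, weight_b=0.25)
late = mix(inferior, reference, weight_b=0.75)
```

  Raising `weight_b` continuously as parameters grow would give a smoother transition than switching discretely between the three candidates.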
  • the first to third sound data are generated based on the generation data and the musical score data.
  • alternatively, the first to third sound data are generated based on the inferior sound data, the reference sound data, and the superior sound data.
  • the inferior sound data, the reference sound data, and the superior sound data are all generated based on the generation data and the musical score data.
  • the inferior sound data, the reference sound data, and the superior sound data may be used as the first to third sound data, respectively.
  • the plurality of types of sound data may thus be the inferior sound data, the reference sound data, and the superior sound data.
  • in that case, the first sound data corresponding to the inferior state is the inferior sound data, the second sound data corresponding to the normal state is the reference sound data, and the third sound data corresponding to the excellent state is the superior sound data.
  • the plurality of types of sound data may be divided into a plurality of stages, such that as the parameter becomes higher, the proportion of inferior parts is lowered.
  • each of the plurality of types of sound data may be generated for each character that is a game object.
  • the generation unit 11D generates output sound data.
  • the generation unit 11D generates output sound data using generation data that includes singing characteristics of a human performer (for example, an idol or a voice actor) who plays the character of the game object.
  • This generation data is created from original data of a plurality of (for example, three) songs sung by a performer.
  • output sound data of a desired song is generated from the generation data and the musical score data of that song by voice creation software, an AI (Artificial Intelligence) created through machine learning. Therefore, it is also possible to generate output sound data of a song different from the recorded songs.
  • the generation data is used to reproduce the characteristics of the performer's singing (for example, the timing of breaths or the degree of pitch deviation). Therefore, when output sound data is generated using different generation data, songs with different characteristics, that is, different output sound data, are generated even if the same musical score data is used.
  • the generation unit 11D generates output sound data according to the state indicated by the state information (for example, parameters) of the breeding object acquired by the state acquisition unit 11A.
  • the terminal storage unit 12 stores generation data and voice creation software.
  • the audio creation software is downloaded from the server 30 in advance.
  • the generation data is generated in the server 30 and stored in the server storage unit 32, and is transmitted from the server 30 in response to a download request from the game terminal 10.
  • the server 30 may transmit the generation data in advance in response to a request from the game terminal 10.
  • the terminal storage unit 12 may also store voice creation software that has learned the characteristics of the performer's singing indicated by the generation data.
  • the generation unit 11D generates output sound data in real time before and during the live performance.
  • the generation unit 11D may generate the output sound data at the timing when the status acquisition unit 11A acquires the status information.
  • when generating output sound data corresponding to a normal state, the generation unit 11D generates output sound data with few or no inferior parts.
  • when generating output sound data corresponding to an excellent state, the generation unit 11D generates output sound data that has no inferior parts and includes a performance part.
  • when generating output sound data corresponding to an inferior state, the generation unit 11D generates output sound data that has many inferior parts.
  • the proportion of the inferior portion may be increased or decreased continuously or stepwise depending on the parameter. Further, the proportion of the inferior part may be determined according to a table in which the proportion of the inferior part is defined according to the parameter.
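The table-driven determination described above can be sketched as follows. This is an illustrative sketch only; the thresholds, table values, and function names are hypothetical assumptions and are not part of the disclosure.

```python
# Illustrative sketch only: a table maps singing-ability thresholds to the
# proportion of inferior parts used when generating output sound data.
# All values and names below are hypothetical.

INFERIOR_PROPORTION_TABLE = [
    (200, 0.60),   # singing ability below 200 -> 60% inferior parts
    (400, 0.40),   # below 400 -> 40%
    (600, 0.15),   # below 600 -> 15%
]
DEFAULT_PROPORTION = 0.0  # at or above the highest threshold: no inferior parts

def inferior_proportion(singing_ability: int) -> float:
    """Return the stepwise proportion of inferior parts for a parameter value."""
    for threshold, proportion in INFERIOR_PROPORTION_TABLE:
        if singing_ability < threshold:
            return proportion
    return DEFAULT_PROPORTION
```

A finer-grained table, or a continuous formula, could replace the three steps without changing the lookup logic.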
  • the amount of data downloaded from the server 30 can be reduced. Furthermore, since the proportion of inferior parts can be finely changed according to the parameters, changes in proficiency can be expressed in greater detail. Furthermore, even if the parameters of the training object change due to user operations (for example, use of an item or activation of a skill) during a live performance, output sound data can be generated with the proportion of inferior parts changed according to the changed parameters. In addition, output sound data including the necessary proportion of inferior parts can be generated without generating many types of output sound data.
  • Output sound data may be generated according to this parameter.
  • the generation unit 11D generates the output sound data so that inferior parts are few at the beginning of the live performance, when stamina is high, and numerous in the final stage of the live performance, when stamina is low.
  • the generation unit 11D may generate the output sound data so that the sense of elation increases during an exciting period of the live performance, such as the chorus of a song, and inferior parts are reduced during that period.
  • the generation unit 11D may generate the output sound data so that inferior parts are more numerous in parts corresponding to a period of high tension at the beginning of a live performance, such as the beginning of a song. Alternatively, the generation unit 11D may determine the amount of the inferior portion according to a combination of two or more of these dynamic parameters, and generate the output sound data.
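Combining two or more dynamic parameters into the amount of the inferior portion, as described above, could be sketched as follows. The weights and parameter names are hypothetical assumptions for illustration only; the disclosure does not specify a formula.

```python
# Illustrative sketch (hypothetical weighting): low stamina and low elation
# raise the proportion of inferior parts, while high tension raises it
# (e.g. at the beginning of a song).

def inferior_amount(stamina: float, elation: float, tension: float) -> float:
    """Combine dynamic parameters (each in 0.0-1.0) into a 0.0-1.0 proportion."""
    amount = (1.0 - stamina) * 0.5 + tension * 0.3 + (1.0 - elation) * 0.2
    return min(1.0, max(0.0, amount))  # clamp to the valid range
```

With full stamina and elation and no tension the proportion is 0.0; with the opposite extremes it is clamped to 1.0.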
  • the dynamic parameter may be other parameters such as singing ability. Alternatively, the parameters may change during the live performance due to the use of an item or the activation of a skill. Further, the dynamic parameter of the trained character may be a type of parameter that changes due to training, or may be a value that does not change due to training but changes during a live performance.
  • the generation unit 11D may generate output sound data based on the inferior sound data and the like. Specifically, the generation unit 11D generates the output sound data (for example, the first to third sound data) by mixing inferior sound data, which is an example of first sound data including an inferior part in at least a portion, with at least one other type of sound data having a smaller proportion of inferior parts than the first sound data or including none, such as the reference sound data, which is an example of second sound data.
  • the inferior part has a different sound output timing or pitch compared to other sound data. This allows output sound data including inferior parts to be generated through mixing, thereby reducing the load on the generation process.
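The mixing described above can be sketched as per-segment selection between sound data variants prepared for the same song. The data layout and names below are hypothetical assumptions, not the disclosed implementation.

```python
# Illustrative sketch: "mixing" modeled as choosing, per lyric segment,
# between reference sound data and inferior sound data for the same song.

def mix_output(reference: list, inferior: list, inferior_segments: set) -> list:
    """Build output sound data by taking the segments marked as inferior from
    the inferior sound data and all other segments from the reference data."""
    return [inferior[i] if i in inferior_segments else reference[i]
            for i in range(len(reference))]

# Hypothetical example: segments 1 and 3 of a five-segment lyric use the
# inferior sound data (primed names mark the degraded variants).
output = mix_output(["a", "i", "u", "e", "o"],
                    ["a'", "i'", "u'", "e'", "o'"],
                    {1, 3})
# output == ["a", "i'", "u", "e'", "o"]
```

Adding superior sound data as a third source extends the same selection to three variants per segment.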
  • the generation unit 11D can further use superior sound data for mixing as other sound data.
  • the inferior tone data, the reference tone data, and the superior tone data are generated in the server 30 using generation data, and are stored in the server storage unit 32. These sound data are then transmitted from the server 30 in response to a download request from the game terminal 10.
  • the server 30 may transmit the inferior tone data, the reference tone data, and the superior tone data in advance in response to a request from the game terminal 10.
  • the usage ratio of each of the inferior tone data, the reference tone data, and the superior tone data may be increased or decreased depending on the parameters. Furthermore, the usage ratio of these data may be determined in advance for each parameter.
  • FIG. 4A is a schematic diagram showing the proportion of inferior parts in the output sound data obtained mainly at the beginning of the training part, before the singing ability has increased.
  • FIG. 4B is a schematic diagram showing the proportion of inferior parts in the output sound data obtained mainly in the middle of the training part when the singing ability has increased to some extent.
  • FIG. 4C is a schematic diagram showing the proportion of inferior parts in the output sound data obtained mainly at the end of the training part in a state where the singing ability has increased.
  • FIG. 5A is a schematic diagram showing superior sound data corresponding to a song including a performance part.
  • FIG. 5B is a schematic diagram showing reference tone data corresponding to a song according to the musical score.
  • FIG. 5C is a schematic diagram showing inferior tone data corresponding to a song including an inferior part.
  • the generation unit 11D generates the output sound data of FIG. 4A (for example, the first sound data) including the inferior part using the reference sound data and the inferior sound data.
  • reference sound data is used for the lyrics "Ue” and "Kikukeko".
  • Inferior tone data is used for other parts.
  • the inferior sound data includes, as inferior parts, a part B1 where the pitch is shifted to the higher side, a part B2 where the pitch is shifted to the lower side, and a part B3 where the pitch is shifted to the higher side and the timing is delayed.
  • the generation unit 11D uses the inferior sound data for the part of the lyrics "Ai". As a result, output sound data is generated that includes, as an inferior part, a part B1 with a higher pitch in the part of the lyrics "Ai". Furthermore, the generation unit 11D uses the inferior sound data for the part of the lyrics "Oka". As a result, output sound data is generated that includes, as an inferior part, a part B2 with a lower pitch in the part of the lyrics "Oka". Furthermore, the generation unit 11D uses the inferior sound data for the part of the lyrics "Sashisuseso". As a result, output sound data is generated that includes, as an inferior part, a part B3 in which the pitch is shifted to the higher side and the timing is delayed in the part of the lyrics "Sashisuseso".
  • the generation unit 11D generates the output sound data of FIG. 4B (for example, second sound data), which has few inferior parts, using superior sound data in addition to the reference sound data and inferior sound data.
  • the superior tone data is used for the lyrics “eo” and “suseso”.
  • the inferior tone data is used for the "ku” and "sa” parts of the lyrics, and the reference tone data is used for the other parts.
  • the superior sound data includes a portion P to which vibrato is applied as a presentation portion.
  • the generation unit 11D uses the superior sound data for the "so" part of the lyrics. As a result, output sound data is generated that includes a portion P where vibrato is applied as a performance portion to the "so" part of the lyrics. Furthermore, the generation unit 11D uses the inferior sound data for the "ku" part of the lyrics. As a result, output sound data is generated that includes, as an inferior part, a part B2 with a lower pitch in the "ku" part of the lyrics. Furthermore, the generation unit 11D uses the inferior sound data for the "sa" part of the lyrics. As a result, output sound data is generated that includes, as an inferior part, a part B3 in which the pitch is shifted to the higher side and the timing is delayed in the "sa" part of the lyrics.
  • the generation unit 11D generates the output sound data (for example, third sound data) of FIG. 4C without the inferior part using the reference sound data and the superior sound data.
  • the standard tone data is used for the "ki" part of the lyrics
  • the superior tone data is used for the other parts.
  • as in the output sound data of FIG. 4B, the generation unit 11D uses the superior sound data for the "so" part of the lyrics, so that the generated output sound data includes a portion P where vibrato is applied as a performance portion to the "so" part of the lyrics.
  • the output sound data in FIG. 4A includes inferior parts in the lyrics “ai”, “oka”, and “sashisu seso”.
  • the output sound data of FIG. 4B includes inferior parts in the lyrics “ku” and “sa”, and has relatively few inferior parts compared to the output sound data of FIG. 4A.
  • the output sound data in FIG. 4C does not include any inferior parts, and thus has fewer inferior parts than the output sound data in FIGS. 4A and 4B.
  • the generation unit 11D transmits a download request for the inferior sound data, the reference sound data, and the superior sound data to the server 30 before the start of the live performance. Then, before and during the live performance, the generation unit 11D mixes the inferior sound data and the like downloaded from the server 30 in real time, and generates output sound data according to the state indicated by the state information acquired by the state acquisition unit 11A.
  • the inferior sound data, the reference sound data, and the superior sound data may be downloaded from the server 30 to the game terminal 10 in advance. Then, before and during the live performance, the generation unit 11D mixes the pre-downloaded inferior sound data and the like in real time, and generates output sound data according to the state indicated by the state information acquired by the state acquisition unit 11A.
  • the inferior tone data, superior tone data, and reference tone data are prepared in advance for each song. Further, when a unit including a plurality of game objects each corresponding to a character sings the same song, inferior tone data, superior tone data, and reference tone data may be prepared for each character.
  • the generation unit 11D may generate inferior tone data, reference tone data, and superior tone data using the generation data.
  • the generation data is generated in the server 30 and stored in the server storage unit 32, and is transmitted from the server 30 in response to a download request from the game terminal 10.
  • the generation unit 11D mixes pre-generated inferior sound data and the like to generate output sound data in accordance with the state indicated by the state information acquired by the state acquisition unit 11A in real time during a live performance.
  • when the generation unit 11D generates output sound data in real time during a live performance, it may mix the data in bar units or note units so that the output sound data includes parts of the inferior sound data, the reference sound data, and the superior sound data.
  • when the singing ability parameter is low, the generation unit 11D increases the usage ratio of the inferior sound data, mainly using the inferior sound data and switching some of its measures to parts of the reference sound data.
  • the generation unit 11D when the singing ability parameter is high, the generation unit 11D reduces the usage ratio of the inferior tone data, mainly uses the reference tone data, and switches a part of the measure to a part of the inferior tone data.
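The bar-unit switching described in the two items above can be sketched as follows. The usage ratios, the fixed switching pattern, and the threshold value are hypothetical assumptions made for illustration; the disclosure only states that some measures are switched.

```python
# Illustrative sketch of bar-unit switching between sound data sources.
# With low singing ability, inferior sound data dominates and some bars are
# switched to reference data; with high ability the roles are reversed.
# The every-fourth-bar pattern and the threshold of 500 are hypothetical.

def select_bar_sources(num_bars: int, singing_ability: int,
                       threshold: int = 500) -> list:
    """Return, for each bar, which sound data source to mix in."""
    sources = []
    for bar in range(num_bars):
        if singing_ability < threshold:
            # Mainly inferior data; every fourth bar uses reference data.
            sources.append("reference" if bar % 4 == 3 else "inferior")
        else:
            # Mainly reference data; every fourth bar uses inferior data.
            sources.append("inferior" if bar % 4 == 3 else "reference")
    return sources
```

In practice the switching pattern could itself be driven by the musical score data rather than a fixed modulus.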
  • instead of generating the output sound data in real time during the live performance, the generation unit 11D may generate the output sound data based on the generation data and the musical score data before the start of the live performance (for example, at the timing when the status acquisition unit 11A acquires the status information).
  • the generation unit 11D includes the generated output sound data in the terminal audio data 12B and stores it in the terminal storage unit 12.
  • the generation unit 11D may include the generated output sound data in the server audio data 32A and store it in the server storage unit 32.
  • the proportion of the inferior portion may be increased or decreased continuously or stepwise depending on the parameter. Further, the proportion of the inferior part may be determined according to a table in which the proportion of the inferior part is defined according to the parameter.
  • the generation unit 11D may generate multiple types of sound data in advance from the generation data as output sound data candidates. For example, the generation unit 11D generates multiple types of sound data in advance for each state indicated by the state information (for example, parameters) of the training object. As an example, when the singing ability is low, the generation unit 11D generates output sound data candidates so as to include an inferior part in which the pitch is significantly shifted. Furthermore, the generation unit 11D generates candidates for output sound data such that as the singing ability increases, the amount of deviation in pitch decreases. Then, the generation unit 11D includes the plurality of types of generated sound data in the terminal audio data 12B, and stores it in the terminal storage unit 12.
  • the generation unit 11D may also generate, as output sound data candidates, multiple types of sound data for each game object so that the degree of deterioration of the inferior part varies in stages according to the state information of the breeding object. Then, the generation unit 11D includes the plurality of generated types of sound data in the terminal audio data 12B and stores them in the terminal storage unit 12. For example, the generation unit 11D generates in advance output sound data candidates for a normal pattern in which the singing ability is within a predetermined range, a pattern for an excellent state in which the singing ability exceeds the predetermined range, and a pattern for an inferior state in which the singing ability is below the predetermined range.
  • the generation unit 11D may generate output sound data, inferior sound data, and the like according to musical score data in which the positions of inferior parts in the song are set. For example, a song made entirely of degraded sounds would be difficult for the user to listen to. Therefore, in the musical score data, inferior parts are set at positions corresponding to predetermined portions of the song so that the entire song does not become degraded. For example, inferior parts are set at positions corresponding to the beginning of the song, the end of the song, high-pitched parts, low-pitched parts, parts with complicated lyrics, and the like. The generation unit 11D generates the output sound data, the inferior sound data, and the like so that inferior parts are provided at these positions.
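Musical score data that marks where inferior parts may be placed could be structured as follows. The structure, field names, and values are hypothetical assumptions for illustration; the disclosure does not define a data format.

```python
# Illustrative sketch: musical score data marking positions at which an
# inferior part may be generated, with a reason and a degradation mode.
# Structure, field names, and values are hypothetical.

score_data = {
    "song_id": "song_001",
    "inferior_positions": [
        {"bar": 0,  "reason": "song_beginning", "mode": "hoarse"},
        {"bar": 14, "reason": "high_pitch",     "mode": "pitch_low"},
        {"bar": 31, "reason": "song_end",       "mode": "timing_late"},
    ],
}

def bars_with_inferior_parts(score: dict) -> list:
    """Return the bar numbers at which inferior parts are permitted."""
    return [pos["bar"] for pos in score["inferior_positions"]]
```

A "mode" field like the one above would also let the score data carry the contents of the inferior part (degree and manner of degradation) per position.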
  • parts that may be used as inferior parts, parts that may be used as performance parts, parts that should not be used as inferior parts, and parts that should not be used as performance parts may be predetermined in accordance with the musical piece. In other words, there may be predetermined locations in the music that can, or cannot, be made into superior or inferior parts.
  • the content of the inferior part may be set in the musical score data.
  • the contents of the inferior part include, for example, the degree of inferiority and the mode of inferiority.
  • the content of the inferior part is, for example, the extent to which the voice becomes soft or loud, or the manner in which the voice becomes hoarse. The generation unit 11D then generates output sound data in which the sound is degraded according to the contents of the inferior part set in the musical score data.
  • the position of the inferior part and the contents of the inferior part may be set for each character, which is a game object.
  • a character with a low voice is set so that the pitch of the high-pitched part is lower than that of the original song.
  • characters who are easily nervous are set so that their voices become hoarse when they start singing.
  • characters with low physical strength are set so that their voices become hoarse at the end of the song.
  • the contents of the inferior part may be set according to the parameters of the training object before the live performance starts, or the parameters of the training object that change during the live performance, or the like. For example, musical score data for a person with low physical strength is set so that the voice becomes hoarse at the end of the song.
  • the generation unit 11D may divide the characters, which are game objects, into a plurality of types, and use musical score data with a different pattern of inferior parts for each type. For example, the generation unit 11D uses musical score data with a pattern that shifts the pitch higher, a pattern that shifts the pitch lower, a pattern that accelerates the timing, and a pattern that delays the timing. Specifically, a pattern is determined for each character, and the generation unit 11D uses musical score data according to that pattern. For example, the generation unit 11D generates output sound data for character A according to musical score data in which inferior parts of the pattern that shifts the pitch higher are set, and generates output sound data for character B according to musical score data in which inferior parts of the pattern that accelerates the timing are set. This reduces the number of musical score data sets that need to be created.
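The per-character pattern assignment described above can be sketched as a simple mapping. The character identifiers, pattern names, and asset-key scheme are hypothetical assumptions for illustration.

```python
# Illustrative sketch: one inferior-part pattern per character type, so that
# fewer musical score data sets need to be authored. Names are hypothetical.

CHARACTER_PATTERNS = {
    "character_A": "pitch_high",    # pitch shifted to the higher side
    "character_B": "timing_early",  # timing accelerated
    "character_C": "pitch_low",     # pitch shifted to the lower side
    "character_D": "timing_late",   # timing delayed
}

def score_for_character(character_id: str) -> str:
    """Pick the score-data pattern used when generating this character's voice."""
    pattern = CHARACTER_PATTERNS[character_id]
    return f"score_{pattern}"  # e.g. a file or asset key shared per pattern
```

Many characters can share the same pattern key, which is what lets four score data sets serve an arbitrarily large cast.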
  • the score data is created by the administrator of the server 30 or the user of the game terminal 10.
  • the musical score data may be automatically generated by the game terminal 10 or the server 30.
  • the use of musical score data is merely an example, and the portion to be degraded does not need to be set in advance.
  • the data acquisition unit 11B may randomly select a portion to be degraded, degrade the selected portion, and generate output sound data.
  • the generation unit 11D may generate the output sound data by referring to a table showing the proportion of inferior parts according to the state information (for example, parameters) of the breeding object.
  • the generation unit 11D refers to a table that associates low singing ability below a predetermined value with the ratio of inferior parts or the usage ratio of inferior sound data. Then, when the singing ability is below a predetermined value, the generation unit 11D generates output sound data such that the proportion of the inferior part or the usage proportion of the inferior sound data associated with the low singing ability is reflected.
  • the generation unit 11D refers to a table that associates singing ability higher than a predetermined value with the proportion of inferior parts or the proportion of use of inferior sound data. Then, when the singing ability is higher than a predetermined value, the generation unit 11D generates output sound data so as to reflect the proportion of the inferior part or the usage proportion of the inferior sound data associated with the high singing ability.
  • the generation unit 11D may generate the output sound data in a manner different from the manner described above, as long as it can generate multiple types of sound data corresponding to songs of different skill levels as output sound data candidates. For example, the generation unit 11D may create data for each phoneme from the pronunciation of a sentence by a human performer, and generate output sound data, inferior sound data, etc. based on the data.
  • the output sound data described above includes an inferior part or a performance part only in the song part (that is, the sung part).
  • the parts other than the song part (that is, the played parts) in the first to third sound data correspond to a performance according to the musical score and do not include inferior parts.
  • the state acquisition unit 11A acquires state information indicating the state of the game object.
  • the game object is, for example, a breeding object, and the states of the breeding object may include a poor state, a superior state, and other normal states.
  • the state of the nurturing object may be divided into a plurality of stages, for example, four or more stages such as high, slightly high, slightly low, and low.
  • the data acquisition unit 11B acquires output sound data according to the state based on the status information acquired by the status acquisition unit 11A.
  • when a parameter (for example, singing ability or physical strength) is lower than a predetermined value, or when an abnormal condition such as illness or injury has occurred, the data acquisition unit 11B acquires output sound data corresponding to the inferior state.
  • when the parameter is higher than the predetermined value, or when no abnormal condition such as illness or injury has occurred, the data acquisition unit 11B acquires output sound data corresponding to the excellent state.
  • the predetermined value may be one, or two or more. Two or more means, for example, that the predetermined value used to determine whether the state is inferior differs from the predetermined value used to determine whether the state is excellent. In this way, multiple predetermined values may be used to determine multiple states.
  • the data acquisition unit 11B may acquire output sound data corresponding to an inferior state when the value of a parameter such as mental strength is low. Furthermore, when a parameter such as stamina or physical strength is low, the data acquisition unit 11B may acquire output sound data corresponding to an inferior state.
  • the state indicated by the state information may be any state that changes depending on parameters, actions, or skills.
  • the status includes various situations such as low level, high level, fatigue, poor condition, good condition, buff, and debuff.
  • the state information is information for specifying the state.
  • the status information is a numerical value of a parameter, information indicating whether a flag such as illness or injury is on or off, or status identification information that uniquely identifies the status. For example, by determining the state based on parameters as state information, an increase or decrease in the parameters as a result of training can be reflected in the singing voice output based on the output sound data.
  • the state acquisition section 11A may determine the state indicated by the state information, and the data acquisition section 11B may obtain output sound data corresponding to the determined state.
  • the state acquisition unit 11A determines the state based on parameters associated with the breeding object.
  • the status acquisition unit 11A refers to the object data 12A and acquires the parameters of the singing ability of the training object before the start of the live performance.
  • the data acquisition unit 11B then acquires output sound data according to the state indicated by the singing ability. When the acquired singing ability value is within the range of 400 to 600 inclusive, the data acquisition unit 11B acquires output sound data corresponding to the normal state. When the acquired value is less than 400, it acquires output sound data corresponding to the inferior state. When the acquired value is higher than 600, it acquires output sound data corresponding to the excellent state.
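The threshold-based state determination above can be sketched directly, using the example values of 400 and 600 from the text. The function name is a hypothetical assumption.

```python
# Illustrative sketch of the state determination: singing ability in the
# inclusive range 400-600 is normal, below 400 inferior, above 600 excellent.

def classify_state(singing_ability: int) -> str:
    """Map a singing-ability parameter value to a state label."""
    if singing_ability < 400:
        return "inferior"
    if singing_ability <= 600:
        return "normal"
    return "excellent"
```

The returned label would then select which pre-generated output sound data (or which generation settings) to use.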
  • the status acquisition unit 11A may acquire status information that changes while performing a live part.
  • the data acquisition unit 11B acquires output sound data corresponding to the state of the breeding object indicated by the parameters that change during the live performance.
  • the status acquisition unit 11A may acquire dynamic parameters related to the live performance, such as the number of fans that increases or decreases during the live performance, the level of excitement of the entire live performance, the number of cheers, or the number of viewers or the amount of coins when the live performance is virtually distributed within the game space. The data acquisition unit 11B then acquires output sound data corresponding to the excitement state of the live performance based on the acquired values.
  • the state acquisition unit 11A may acquire the increased or decreased dynamic parameters (for example, endurance, elation, tension, etc.).
  • the state acquisition unit 11A may refer to the object data 12A in real time during the live performance to acquire the parameters of the endurance of the training object.
  • when the endurance is below a predetermined range, the data acquisition unit 11B acquires output sound data corresponding to the inferior state.
  • when the endurance is within the predetermined range, the data acquisition unit 11B acquires output sound data corresponding to the normal state.
  • when the endurance is above the predetermined range, the data acquisition unit 11B acquires output sound data corresponding to the excellent state.
  • the parameters change as the game progresses. For example, they increase as the game progresses, or decrease as the state of the game object deteriorates. Specifically, as the game progresses and the turns of the training part pass, the numerical value of a parameter of the training object (for example, singing ability) increases. Furthermore, when the training object becomes fatigued and its condition deteriorates, the numerical value of a parameter of the training object (for example, physical strength) decreases. Alternatively, the parameters may decrease as the game progresses.
  • the state acquisition unit 11A may acquire state information of each game object of a unit consisting of a plurality of game objects including the breeding object.
  • the data acquisition unit 11B acquires output sound data corresponding to the state of each game object.
  • the data acquisition unit 11B acquires output sound data corresponding to the singing proficiency state of each game object based on the singing ability value of each game object.
  • the data acquisition unit 11B may acquire output sound data corresponding to the state of fatigue of each game object based on the physical strength value of each game object.
  • the status acquisition unit 11A may acquire only the status information of the training object even when the unit is performing live. In this case, it is possible to obtain output sound data according to the parameters and output audio only for the breeding object.
  • the data acquisition unit 11B acquires output sound data that corresponds to the song sung by the training object and corresponds to the state indicated by the status information acquired by the status acquisition unit 11A. For example, the data acquisition unit 11B acquires output sound data according to the state by acquiring the output sound data generated by the generation unit 11D according to the state.
  • the data acquisition unit 11B may acquire output sound data generated by the generation unit 11D in advance before the start of the live performance, or may acquire output sound data generated by the generation unit 11D in real time during the live performance.
  • the data acquisition unit 11B may cause the generation unit 11D to generate output sound data according to the state of the breeding object. Further, the data acquisition unit 11B may select and acquire output sound data according to the state from the terminal storage unit 12 or the server storage unit 32.
  • the output sound data corresponding to the inferior state includes at least a part of the inferior part according to the state indicated by the state information. Specifically, when the state information indicates an inferior state, the data acquisition unit 11B obtains output sound data having a higher proportion of inferior parts than when the state information indicates an excellent state. Then, when the state information indicates an excellent state, the data acquisition unit 11B acquires output sound data that has a small proportion of inferior parts or does not include inferior parts.
  • the data acquisition unit 11B may select and acquire output sound data according to the state information from among a plurality of different types of sound data. For example, the data acquisition unit 11B selects and acquires output sound data according to the parameters from among a plurality of types (for example, three patterns) of sound data generated in advance. Specifically, when the singing ability is below a predetermined range and the status information indicates the inferior state, the data acquisition unit 11B selects and acquires output sound data with a high proportion of inferior parts (for example, first sound data). When the singing ability is within the predetermined range and the status information indicates the normal state, the data acquisition unit 11B selects and acquires output sound data with a small proportion of inferior parts or no inferior parts (for example, second sound data).
  • furthermore, when the singing ability exceeds the predetermined range and the status information indicates an excellent state, the data acquisition unit 11B selects and acquires output sound data (for example, third sound data) that has a small proportion of inferior parts or includes no inferior parts. Thereby, the process of generating output sound data each time can be omitted, and the processing load can be reduced.
  • when the output sound data including at least a part of the inferior part is the first sound data, the data acquisition unit 11B acquires other sound data that has a smaller proportion of inferior parts than the first sound data or includes no inferior parts. As a result, voices containing inferior parts are output less often, or not at all, and it becomes possible to express during the game, for example, a skilled state with high singing ability or a lively state with high physical strength.
  • the data acquisition unit 11B may acquire output sound data that includes at least a portion of the inferior part.
  • the data acquisition unit 11B may acquire output sound data that has fewer inferior parts or includes no inferior parts.
  • the data acquisition unit 11B may acquire output sound data that includes more performance parts.
  • the data acquisition unit 11B may acquire the output sound data of each game object of the unit including the breeding object.
  • a unit of two to seven people including the training object may sing.
  • which game object sings which part of the song may be changed by automatic selection such as placing the breeding object at the center of the unit or by user selection.
  • for each game object, output sound data of the entire song is generated according to the state of that game object.
  • the data acquisition unit 11B acquires output sound data according to the states of all members of the unit.
  • the game progression unit 11C causes the audio output unit 16 to output audio based on the output sound data of all members of the unit while suppressing unnecessary portions of the audio. Specifically, the game progression unit 11C causes the audio output unit 16 to output only the audio of the song part assigned to each game object, and does not output the audio of the unassigned parts. This eliminates the need to prepare unit output sound data every time the part assignment changes. Therefore, the number of output sound data can be reduced and data management becomes easier.
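The part-masking scheme above can be sketched as follows; a minimal illustration assuming a simple section-based representation of the full-song sound data (the section names and the `audible_sections` helper are hypothetical, not from the embodiment):

```python
# Each unit member holds output sound data for the whole song; at playback
# time only the sections assigned to that member are audible. Section names
# and the assignment mapping are illustrative assumptions.
FULL_SONG_PARTS = ["verse1", "chorus1", "verse2", "chorus2"]

def audible_sections(member: str, assignment: dict) -> list:
    """Return the sections of the member's full-song data that are played."""
    return [part for part in FULL_SONG_PARTS
            if assignment.get(part) == member]

assignment = {"verse1": "A", "chorus1": "B", "verse2": "A", "chorus2": "B"}
assert audible_sections("A", assignment) == ["verse1", "verse2"]

# Reassigning a part needs no new unit sound data, only a new assignment.
assignment["verse2"] = "B"
assert audible_sections("A", assignment) == ["verse1"]
```

Because the full-song data per member never changes, only the assignment mapping does, no additional unit output sound data has to be prepared when parts are reshuffled.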
  • the data acquisition unit 11B may acquire output sound data of a game object other than the training object, depending on the state of the training object. For example, when the breeding object is in an inferior state, the data acquisition unit 11B also acquires output sound data in an inferior state from the output sound data of other game objects.
  • the data acquisition unit 11B may acquire output sound data of other game objects according to a state indicated by a predetermined parameter or a fixed state (for example, an excellent state). Furthermore, when a unit sings, the data acquisition unit 11B may acquire predetermined output sound data (for example, third sound data) as the output sound data of other game objects regardless of the parameters.
  • the game object included in the unit may be an object that has previously been trained (for example, an inherited object).
  • the data acquisition unit 11B may acquire output sound data according to the state indicated by the parameters of the inherited object as the output sound data of the inherited object. For example, when the singing ability of the inherited object indicates an excellent state exceeding a predetermined range, the data acquisition unit 11B acquires output sound data corresponding to the excellent state as the output sound data of the inherited object.
  • the material object may be included in the unit.
  • the data acquisition unit 11B may acquire output sound data according to the state indicated by the parameters of the material object as the output sound data of the inheritance object. For example, when the material object shows a poor state in which the singing ability falls below a predetermined range, the data acquisition unit 11B obtains output sound data corresponding to the poor state as the output sound data of the material object. Furthermore, if the number of inherited objects is less than the number of people in the unit, the unit may be formed in this state.
  • the data acquisition unit 11B may obtain the output sound data of the entire unit that reflects the assignment of parts in the unit by causing the generation unit 11D to generate it. Thereby, the data acquisition unit 11B only needs to acquire the output sound data of the unit, and does not need to acquire the output sound data of each character. Alternatively, if the data capacity is not a problem, the generation unit 11D may generate separate song parts for each game object as the output sound data of the unit.
  • the game progress section 11C which is an example of a game progress means, simulates the growth of game objects. Then, the game progression unit 11C changes the parameters of the breeding object according to the progress of the game. For example, when a lesson for increasing the singing ability is given in the training part, the game progression unit 11C increases the singing ability of the training object and reduces the physical strength of the training object. Then, the game progression unit 11C associates the increased or decreased parameters with object identification information that uniquely identifies the breeding object, includes them in the object data 12A, and stores them in the terminal storage unit 12. Alternatively, the game progression unit 11C may cause the server storage unit 32 to store the data of the breeding object.
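The lesson-driven parameter changes described above can be sketched as follows; a minimal illustration in which the class and field names (`TrainingObject`, `singing`, `stamina`) are assumptions for this example only, not names from the embodiment:

```python
# A lesson raises the trained parameter and consumes physical strength,
# as described for the training part. Values are illustrative.
from dataclasses import dataclass

@dataclass
class TrainingObject:
    object_id: str  # object identification information
    singing: int    # singing ability parameter
    dance: int      # dance ability parameter
    stamina: int    # physical strength parameter

def apply_lesson(obj: TrainingObject, lesson: str) -> TrainingObject:
    """Increase the lessoned parameter and reduce physical strength."""
    if lesson == "vocal":
        obj.singing += 10
    elif lesson == "dance":
        obj.dance += 10
    obj.stamina -= 5
    return obj

idol = TrainingObject("obj-001", singing=40, dance=40, stamina=80)
apply_lesson(idol, "vocal")
assert idol.singing == 50 and idol.stamina == 75
```

The updated parameters would then be stored in association with the object identification information, as the embodiment describes for the object data 12A.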
  • the game progression unit 11C increases the dance power of the training object when a lesson for increasing the dance power is given in the training part.
  • the game progression unit 11C may acquire a video corresponding to the dancing ability from the server storage unit 32 or the terminal storage unit 12 and display it on the terminal display unit 15.
  • the server storage unit 32 or the terminal storage unit 12 stores, as dance videos according to dancing ability, an inferior dance video representing a state where the dancing ability is low and the dancing is poor, and a superior dance video representing a state where the dancing ability is high and the dancing is good.
  • the game progression unit 11C displays this dance video during a live performance.
  • the server storage unit 32 or the terminal storage unit 12 stores motion data of the training object that is used depending on the training state of the training object. Then, the game progression unit 11C displays an effect based on the motion data at a predetermined timing. Thereby, the user can visually sense the growth of the training object.
  • the motion data is not limited to dance videos, and may be motion data that defines the motion of the training object.
  • the game progress section 11C may cause the terminal display section 15 to display an image of the training object showing a distressed expression or a lack of confidence.
  • the game progress unit 11C may cause the terminal display unit 15 to display an image of the training object showing a smiling face or a confident expression.
  • when the state information indicates an inferior state, that is, when the parameters are relatively low, expressions of distress or lack of confidence are displayed relatively often; as the parameters increase and the state information comes to indicate a normal state or an excellent state, smiling or confident expressions are displayed relatively often. Thereby, the user can visually sense the growth of the nurturing object.
  • the game progression unit 11C which is an example of an audio output control unit, causes the audio output unit 16, which is an example of an audio output unit, to output audio based on the output sound data acquired by the data acquisition unit 11B.
  • the audio output section 16 is a speaker, and is configured integrally with the game terminal 10.
  • the audio output unit 16 may be separate from the game terminal 10 and connected to the game terminal 10 by wire or wirelessly.
  • the audio output unit 16 may be configured integrally with a display device that is separate from the game terminal 10.
  • before the singing ability improves, the audio output section 16 outputs a poor song with many inferior parts; after the singing ability improves, the audio output section 16 outputs a good song with few inferior parts or no inferior parts. Therefore, the user can audibly sense the growth of the nurturing object, and can feel the training results through the song sung by the training object.
  • the server control unit 31 of the server 30 is configured as a computer and includes a processor (not shown).
  • This processor is, for example, a CPU or an MPU, and controls the entire server 30 based on a program stored in the server storage unit 32, and also controls various processes in an integrated manner.
  • the server control unit 31 can also perform control according to a program stored in a portable recording medium such as a CD, DVD, CF card, and USB memory, or in an external storage medium.
  • an operation section including a keyboard or various switches for inputting predetermined commands and data is connected to the server control section 31 by wire or wirelessly.
  • a display section (not shown) that displays the input state, setting state, measurement results, and various information of the device is connected to the server control section 31 by wire or wirelessly.
  • the server storage unit 32 is a computer-readable non-transitory storage medium. Specifically, the server storage unit 32 includes storage devices such as RAM, ROM, HDD, and SSD. The server storage unit 32 also stores server audio data 32A. Further, the server storage unit 32 may store data such as image data or music data necessary for progressing the game, update data for the terminal program PG, and the like.
  • the server communication unit 33 is a communication module, a communication interface, or the like. The server communication unit 33 allows data to be transmitted and received between the game terminal 10 and the server 30 via the network 50.
  • the game progression unit 11C simulates the training of the training object. Then, the game progress unit 11C changes the parameters of the breeding object according to the progress of the game (S101). After that, when starting the live part, the status acquisition unit 11A acquires, for example, singing ability as a parameter that is status information of the training object (S102). Further, the status acquisition unit 11A passes the acquired parameters to the data acquisition unit 11B (S103).
  • when the singing ability falls below the predetermined range, the data acquisition unit 11B acquires first sound data, which is output sound data with many inferior parts (S105).
  • the data acquisition unit 11B acquires the first sound data generated by the generation unit 11D.
  • the game progression unit 11C uses the first sound data acquired by the data acquisition unit 11B as output sound data, and causes the audio output unit 16 to output a sound based on this data (S106).
  • when the singing ability exceeds the predetermined range, the data acquisition unit 11B acquires third sound data, which is output sound data without inferior parts (S108).
  • the data acquisition unit 11B acquires the third sound data generated by the generation unit 11D.
  • the game progression unit 11C uses the third sound data acquired by the data acquisition unit 11B as output sound data, and causes the audio output unit 16 to output a sound based on this data (S106).
  • when the singing ability is within the predetermined range, the data acquisition unit 11B acquires second sound data, which is output sound data with fewer inferior parts (S109).
  • the data acquisition unit 11B acquires the second sound data generated by the generation unit 11D.
  • the game progression unit 11C uses the second sound data acquired by the data acquisition unit 11B as output sound data, and causes the audio output unit 16 to output a sound based on the second sound data (S106). In this way, in the live part, audio is output based on output sound data according to the singing ability.
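The branching of steps S104 to S109 can be sketched as follows; the bounds of the predetermined range and the data names are assumptions, since the embodiment does not fix concrete values:

```python
# Select first/second/third sound data from the singing-ability parameter.
# LOW and HIGH are hypothetical bounds of the "predetermined range".
LOW, HIGH = 30, 70

def select_output_sound_data(singing_ability: int) -> str:
    if singing_ability < LOW:        # inferior state: many inferior parts
        return "first_sound_data"
    if singing_ability > HIGH:       # excellent state: no inferior parts
        return "third_sound_data"
    return "second_sound_data"       # normal state: fewer inferior parts

assert select_output_sound_data(20) == "first_sound_data"
assert select_output_sound_data(50) == "second_sound_data"
assert select_output_sound_data(90) == "third_sound_data"
```

The selected data would then be passed to the audio output step (S106) regardless of which branch was taken.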
  • according to the game system 100, it is possible to output sound based on output sound data acquired according to the parameters. Therefore, the user can audibly sense the results of training the training object.
  • the data acquisition unit 11B may acquire output sound data according to the value of the parameter of the game object or the progress of the game.
  • the data acquisition unit 11B specifies output sound data according to the value of a parameter (for example, physical strength) or the progress of the game from among a plurality of types of output sound data.
  • the data acquisition unit 11B then acquires the specified output sound data.
  • the terminal storage unit 12 stores a table that associates data identification information that specifies output sound data with parameters or progress of the game. Then, the data acquisition unit 11B refers to the table and specifies the output sound data.
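Such a table-driven specification could look like the following minimal sketch; the table rows, threshold scheme, and data identification strings are hypothetical:

```python
# Table associating a minimum parameter value with data identification
# information, as could be stored in the terminal storage unit 12.
SOUND_DATA_TABLE = [
    # (minimum parameter value, data identification information)
    (0,  "snd_inferior_01"),
    (30, "snd_normal_01"),
    (70, "snd_superior_01"),
]

def lookup_data_id(parameter: int) -> str:
    """Return the identifier of the last row whose threshold is reached."""
    data_id = SOUND_DATA_TABLE[0][1]
    for threshold, candidate in SOUND_DATA_TABLE:
        if parameter >= threshold:
            data_id = candidate
    return data_id

assert lookup_data_id(10) == "snd_inferior_01"
assert lookup_data_id(75) == "snd_superior_01"
```

A table keyed on game progress instead of a parameter value would work the same way, with progress milestones as the thresholds.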
  • the data acquisition unit 11B may acquire, as output sound data, sound data of a musical piece played by the training object according to the state indicated by the state information acquired by the state acquisition unit 11A.
  • the generation unit 11D generates output sound data of a song according to the state.
  • the game progression unit 11C then causes the audio output unit 16 to output audio based on the output sound data acquired by the data acquisition unit 11B.
  • the terminal audio data 12B includes various sound data of the song related to the performance, similar to the sound data of the song described above.
  • an inferior part of the output sound data of a song corresponding to an inferior state differs from the corresponding part of the original song in pitch, output timing, or loudness.
  • the output sound data of the music piece corresponding to the excellent condition includes at least a part of the performance portion that has been performed to reflect the performance technique.
  • the output sound data includes a performance portion that is performed using a performance technique such as rapid playing of a guitar.
  • the generation unit 11D may acquire reference sound data of a song and generate output sound data based on the acquired reference sound data and the like.
  • the data acquisition unit 11B may acquire, as output sound data according to the state, both sound data of a song sung by the breeding object and sound data of a musical piece played by the breeding object.
  • the game progression unit 11C may cause the audio output unit 16 to output audio based on the output sound data of both the song and the musical piece.
  • a second embodiment will be described with reference to FIG.
  • the second embodiment differs from the first embodiment in that the server 230 includes a status acquisition unit 211A and a generation unit 211D.
  • differences from the first embodiment will be described, and the same reference numerals will be given to the components that have already been described, and the description thereof will be omitted.
  • components with the same reference numerals have substantially the same operations and functions, and their effects are also substantially the same.
  • the server storage unit 232 of the server 230 stores a server program PG2, which is an example of a game program. Then, the server program PG2 causes the server control section 231 as a computer to function as a state acquisition section 211A and a generation section 211D.
  • the status acquisition unit 211A acquires status information indicating the status of a training object, which is a game object to be trained, from the game terminal 210.
  • the data acquisition unit 11B of the game terminal 210 requests the server 230 for output sound data, and also transmits parameters as status information to the server 230. Note that when generating output sound data according to parameters that change during a live performance, the data acquisition unit 11B not only transmits the parameters when starting the live performance, but also transmits the parameters to the server 230 every time the parameters change.
  • the status acquisition unit 211A then passes the received status information to the generation unit 211D. Furthermore, the generation unit 211D generates output sound data according to the state indicated by the status information acquired by the status acquisition unit 211A.
  • the generation unit 211D generates output sound data according to the state indicated by the state information (for example, parameters) of the breeding object acquired by the state acquisition unit 211A.
  • the server storage unit 32 stores generation data, musical score data, and audio creation software.
  • the generation unit 211D generates output sound data using generation data and musical score data in real time before and during the live performance.
  • the server control unit 231 transmits the generated output sound data to the game terminal 210 in a streaming distribution manner.
  • the data acquisition unit 11B of the game terminal 210 acquires output sound data according to the state by acquiring the transmitted output sound data.
  • the game progress section 11C causes the audio output section 16 to output audio based on the output sound data.
  • the amount of data downloaded from the server 230 can be reduced. Furthermore, since the proportion of inferior parts can be finely changed according to the parameters, the degree of expression of changes in proficiency level can be improved. Furthermore, even if the parameters of the training object change due to user operations during a live performance, output sound data can be generated with the proportion of inferior parts changed in accordance with the changed parameters. In addition, output sound data including a necessary proportion of inferior parts can be generated without generating many types of output sound data.
  • the generation unit 211D may generate the output sound data using the generation data before the start of the live performance (for example, at the timing when the status acquisition unit 211A acquires the status information), instead of generating the output sound data in real time during the live performance.
  • the generation unit 211D includes the generated output sound data in the server audio data 32A and stores it in the server storage unit 32.
  • the server control unit 231 transmits the generated output sound data to the game terminal 210 in response to a download request from the data acquisition unit 11B. Thereby, the amount of data downloaded from the server 230 can be reduced. Furthermore, since the proportion of inferior parts can be finely changed according to the parameters, the degree of expression of changes in proficiency level can be improved.
  • the generation unit 211D may increase or decrease the proportion of the inferior portion continuously or stepwise according to the parameter. Further, the proportion of the inferior part may be determined according to a table in which the proportion of the inferior part is defined according to the parameter.
  • the generation unit 211D may generate the output sound data before the start of the live performance based on the reference sound data of the song. Specifically, the generation unit 211D generates output sound data by mixing inferior sound data, which is an example of first sound data that includes an inferior part in at least a portion, with reference sound data, which is an example of at least one other sound data that has a smaller proportion of inferior parts than the first sound data or includes no inferior parts. This allows output sound data including inferior parts to be generated through mixing, thereby reducing the load of the generation process. Note that the generation unit 211D can further use superior sound data as another sound data for mixing.
  • the generation unit 211D generates the reference tone data, inferior tone data, and superior tone data in advance using the generation data and musical score data.
  • the generation unit 211D may generate reference sound data, inferior sound data, and superior sound data when generating the output sound data.
  • the generation unit 211D increases or decreases the usage ratio of each of the inferior sound data, the reference sound data, and the superior sound data according to the parameters that the state acquisition unit 211A acquires from the game terminal 210. Furthermore, the usage ratio of these data may be determined in advance for each parameter.
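The ratio-based mixing could be sketched as follows; sound data is modelled as plain sample lists, and the linear crossfade scheme is an assumption, one of many ways the usage ratios could vary with the parameter:

```python
# Derive usage ratios for inferior/reference/superior sound data from the
# singing-ability parameter, then mix the three sample streams. The bounds
# (low, high) and the linear fade are illustrative assumptions.
def mixing_ratios(parameter: int, low: int = 30, high: int = 70):
    """Return (inferior, reference, superior) usage ratios summing to 1."""
    if parameter <= low:
        return (1.0, 0.0, 0.0)
    if parameter >= high:
        return (0.0, 0.0, 1.0)
    t = (parameter - low) / (high - low)    # 0..1 across the normal range
    if t < 0.5:
        return (1.0 - 2 * t, 2 * t, 0.0)    # fade inferior -> reference
    return (0.0, 2.0 - 2 * t, 2 * t - 1.0)  # fade reference -> superior

def mix(inferior, reference, superior, parameter):
    wi, wr, ws = mixing_ratios(parameter)
    return [wi * a + wr * b + ws * c
            for a, b, c in zip(inferior, reference, superior)]

# A low parameter yields output dominated by the inferior sound data.
out = mix([1.0, 1.0], [0.0, 0.0], [0.0, 0.0], parameter=10)
assert out == [1.0, 1.0]
```

A table of predetermined ratios per parameter value, as the embodiment also allows, would simply replace `mixing_ratios` with a lookup.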
  • the generation unit 211D may generate output sound data in real time before and during the live performance. At this time, the server control unit 231 transmits the generated output sound data to the game terminal 210 in a streaming distribution manner. Furthermore, the generation unit 211D may generate the output sound data at the timing when the status acquisition unit 211A acquires the status information. At this time, the server control unit 231 transmits the generated output sound data to the game terminal 210 in response to the download request from the data acquisition unit 11B.
  • the generation unit 211D may generate multiple types of output sound data such that the degree of deterioration of the inferior part varies in stages according to the state information of the breeding object. That is, the generation unit 211D may generate a plurality of patterns (for example, three patterns) of output sound data in advance according to the state information of the breeding object. Then, the generation unit 211D includes the generated plural types of sound data in the server audio data 32A, and stores the server audio data 32A in the server storage unit 32.
  • the generation unit 211D generates in advance output sound data corresponding to a normal state in which the singing ability is within a predetermined range, output sound data corresponding to an excellent state in which the singing ability exceeds the predetermined range, and output sound data corresponding to an inferior state in which the singing ability falls below the predetermined range.
  • the data acquisition unit 11B of the game terminal 210 also functions as a status acquisition unit and acquires status information indicating the status of the breeding object. Then, the data acquisition unit 11B transmits to the server 230 a download request for output sound data according to the state of the training object. The data acquisition unit 11B acquires output sound data according to the state by acquiring the requested output sound data from the server 230. Thereby, the amount of data stored in the terminal storage section 12 of the game terminal 210 can be reduced. Furthermore, since the game terminal 210 does not generate output sound data, the processing load on the game terminal 210 can be reduced.
  • the data acquisition unit 11B requests the server 230 to download the output sound data specified according to the parameters. Specifically, when the singing ability is within a predetermined range, the data acquisition unit 11B requests output sound data (for example, second sound data) corresponding to a normal state. Further, when the singing ability exceeds a predetermined range, the data acquisition unit 11B requests output sound data (for example, third sound data) corresponding to an excellent state. Furthermore, when the singing ability falls below a predetermined range, the data acquisition unit 11B requests output sound data (for example, first sound data) corresponding to the inferior state.
  • the terminal storage unit 12 stores a table that associates data identification information that specifies output sound data with parameters or states. Then, the data acquisition unit 11B specifies the output sound data with reference to the table, and transmits a download request containing the data identification information of the specified output sound data to the server 230.
  • the server storage unit 232 may store multiple types of output sound data in association with state information or a state specified by the state information.
  • the server storage unit 232 may store output sound data in association with parameters as state information. Thereby, output sound data can be specified and transmitted to the game terminal 210 based on the parameters acquired by the state acquisition unit 211A.
  • the server storage unit 232 may store output sound data in association with a poor state, a normal state, and an excellent state. Thereby, output sound data can be specified and transmitted to the game terminal 210 according to the state indicated by the parameter acquired by the state acquisition unit 211A.
  • the output sound data may be transmitted in the form of streaming distribution. In the case of streaming distribution, at least a part of the output sound data according to the request is included in the terminal audio data 12B and stored in the terminal storage unit 12.
  • the data acquisition unit 11B may request the server 230 to download output sound data corresponding to all states of the breeding object.
  • the output sound data of the breeding object is stored in the terminal storage section 12.
  • the data acquisition unit 11B selects and acquires necessary output sound data from the downloaded output sound data according to the state of the breeding object. For example, when the singing ability is below a predetermined range, the data acquisition unit 11B selects and acquires output sound data (for example, first sound data) corresponding to the inferior state.
  • the server control unit 231 may select output sound data according to the state indicated by the state information acquired from the game terminal 210 by the state acquisition unit 211A, and transmit it to the game terminal 210. Specifically, when the singing ability is within a predetermined range, the server control unit 231 selects output sound data (for example, second sound data) corresponding to a normal state. Further, when the singing ability exceeds a predetermined range, the server control unit 231 selects output sound data (for example, third sound data) corresponding to an excellent state. Furthermore, when the singing ability falls below a predetermined range, the server control unit 231 selects output sound data (for example, first sound data) corresponding to the inferior state. As an example, the server storage unit 32 stores a table that associates data identification information that specifies output sound data with parameters or states. Then, the server control unit 231 refers to the table and selects output sound data.
  • the data acquisition unit 11B transmits status information (for example, parameters) to the server 230. Then, the server control unit 231 selects output sound data according to the state indicated by the state information acquired from the game terminal 210 by the state acquisition unit 211A, and transmits the selected output sound data to the game terminal 210. The data acquisition unit 11B acquires the output sound data selected by the server control unit 231 from the server 230, thereby acquiring output sound data according to the state.
  • the server storage unit 232 stores a plurality of types of output sound data in association with state information or a state specified by the state information. Thereby, the server control unit 231 can select output sound data and transmit it to the game terminal 210 based on the state information or state.
  • according to the game system 200, it is possible to output sound based on output sound data acquired according to the parameters. Therefore, the user can audibly sense the results of training the training object.
  • the generation data is generated by machine learning using original data obtained from a performer's singing or reading as learning data.
  • the musical score data is created by the administrator of the server 30 or the user of the game terminal 10, or is automatically generated.
  • the output sound data may be directly generated from the generation data and the musical score data. Further, the output sound data may be generated by appropriately mixing inferior tone data, reference tone data, and superior tone data generated from the generation data and musical score data. The conditions for mixing are as described above.
  • the output sound data may be selected from among a plurality of sound data, such as first sound data, second sound data, and third sound data, generated under predetermined conditions based on the inferior sound data, reference sound data, and superior sound data. Furthermore, the inferior sound data, reference sound data, and superior sound data may each be used as they are as the first sound data, the second sound data, and the third sound data. Further, the musical score data may be fixed, or may be changed as appropriate based on state information indicating the state of the object.
  • status information that changes as the game progresses is acquired at a predetermined timing, and based on the acquired status information, changes may be made to the musical score data, such as adding instructions for which part is to be sung as an inferior part and which part is to be sung as a superior part. Then, output sound data is generated based on the changed musical score data and the generation data.
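The score-changing step could be sketched as follows; the score representation, the tags, and the thresholds are assumptions for illustration only:

```python
# Tag each score section with how it should be sung, based on the state
# information, before output sound data is generated from the score.
def annotate_score(score: list, singing_ability: int,
                   low: int = 30, high: int = 70) -> list:
    """Return score sections tagged with the singing instruction."""
    if singing_ability < low:
        tag = "inferior"    # sing off-pitch or mistimed
    elif singing_ability > high:
        tag = "superior"    # sing with expressive technique
    else:
        tag = "reference"   # sing as written
    return [{"section": s, "sing_as": tag} for s in score]

annotated = annotate_score(["verse1", "chorus1"], singing_ability=20)
assert annotated[0]["sing_as"] == "inferior"
```

A finer-grained variant could tag sections individually, so that only some parts of the song are marked inferior as the parameter rises.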
  • alternatively, inferior sound data, reference sound data, and superior sound data may be generated based on the changed musical score data and generation data, and the data to be output as output sound data may be selected from among these sound data.
  • inferior tone data, reference tone data, and superior tone data as intermediate tone data may be generated based on the changed musical score data and generation data.
  • the output sound data may be generated by appropriately mixing the intermediate sound data.
  • each sound data and the like may be generated by the server 30 or by the game terminal 10.
  • the output sound data is downloaded to the game terminal 10 at a predetermined timing.
  • at least the server 30 holds the musical score data.
  • intermediate sound data or generation data is generated by the server 30 and downloaded to the game terminal 10 at an appropriate timing.
  • the musical score data is held by the game terminal 10 or the server 30, or held by the game terminal 10 and the server 30.
  • the generation data is downloaded to the game terminal 10 at an appropriate timing.
  • the game terminal 10 at least holds the musical score data. Furthermore, changes to the musical score data may be made at the server 30 or at the game terminal 10. Further, the output sound data may be sequentially generated and played back during a live performance. Alternatively, the output sound data may be selected or generated immediately before the live performance and played back during the live performance. Alternatively, the generation of the output sound data may be started by referring to the status information at a predetermined timing in the middle of the section, and completed by the time of the live performance. When this generation is performed by the server 30, the output sound data may be downloaded to the game terminal 10, before the live performance is executed, in parallel with the progress of the game through user operations and the like.
  • a plurality of sound data may be held in advance in the server 30 as output sound data.
  • the sound data to be used as output sound data may be determined from among the plurality of sound data by referring to the state information at a predetermined timing in the middle of the section, and the determined sound data may be downloaded to the game terminal 10 in parallel with the progress of the game driven by user operations. In this way, the time the user is kept waiting for the download process can be reduced.
  • the state acquisition unit 11A, the data acquisition unit 11B, and the generation unit 11D may be provided separately for the server 30 and the game terminal 10.
  • the game terminal 10 may be provided with the state acquisition section 11A that determines the state, and the generation section 11D may be provided on the server 30.
  • the server control section 31 and the terminal control section 11 cooperate to function as a computer.
  • the data acquisition unit 11B may function as a generation unit that generates output sound data.
  • the generating means may be provided outside the game system 100, 200.
  • the generation units 11D and 211D can be omitted.
  • at least one of the generation data, reference sound data, inferior sound data, superior sound data, and output sound data is stored in the terminal storage unit 12 or the server storage unit 32, 232 in advance or as necessary.
  • the generation units 11D and 211D may obtain generation data from outside the game system 100 and 200 to generate reference sound data, inferior sound data, superior sound data, or output sound data.
  • the generation units 11D and 211D may obtain at least one of reference sound data, inferior sound data, and superior sound data from outside the game system 100 and 200, and generate the output sound data.
  • the output sound data may be data generated by recording a song sung by a human.
  • the reference tone data, inferior tone data, and superior tone data may also be data generated by recording a song sung by a human. In these cases, the output sound data is stored in the terminal storage unit 12 or the server storage unit 32, 232 in advance or as needed.
  • the training method performed in the training part is not limited to the method described above, as long as it can grow the training object.
  • it may be a training method in which a plurality of objects, including the training object, roam a field and fight the enemies they encounter, or a training method in which the parameters of the training object are increased by combining cards or other items obtained through a lottery.
  • a type of training method may also be used in which the user plays a so-called timing game, performing an operation at the moment when an indicator that moves across the screen in time with the rhythm reaches a predetermined point.
  • the generation units 11D and 211D may generate output sound data, inferior sound data, or superior sound data based on the reference sound data. Specifically, the generation units 11D and 211D may generate inferior sound data by modifying the reference sound data so that at least a part of it becomes an inferior part, or may generate superior sound data by modifying the reference sound data so that at least a part of it becomes a performance part. In addition, the generation units 11D and 211D may appropriately modify the reference sound data so that at least a part of it becomes an inferior part or a performance part, thereby generating, for example, first to third sound data as candidates for the output sound data.
  • A game system 100, 200 comprising: state acquisition means 11A, 211A for acquiring state information indicating the state of the game object; data acquisition means 11B, 211B for acquiring output sound data of the piece of music or the song according to the state indicated by the state information; and sound output control means 11C for outputting sound based on the acquired output sound data.
  • The game system 100, 200 according to appendix 2, wherein the data acquisition means 11B, 211B acquires output sound data in which the proportion of the inferior part is higher when the state information indicates an inferior state than when the state information indicates an excellent state.
  • The game system 100, 200 according to any one of appendices 1 to 4, wherein the state information is a parameter, the game system further comprising game progressing means 11C that simulates the growth of the game object and changes the parameter according to the progress of the game.
  • Appendix 6: The game system 100, 200 according to appendix 5, wherein the data acquisition means 11B, 211B selects and acquires the output sound data corresponding to the parameter from among a plurality of different types of sound data.
  • Appendix 10: The game system 100, 200 according to appendix 8 or 9, wherein the other sound data includes at least a part of a performance portion that is performed so as to reflect a performance technique or a singing technique.
  • Game programs PG, PG2 for a game system 100, 200 that includes a computer 11, 231, simulates the training of a game object to be raised, performs an effect in which the game object performs a piece of music or sings a song, and provides a game that outputs the performed piece of music or sung song, the game programs PG, PG2 causing the computer 11, 231 to function as: state acquisition means for acquiring state information indicating the state of the game object; data acquisition means 11B, 211B for acquiring output sound data of the piece of music or the song according to the state indicated by the state information; and audio output control means 11C for outputting audio based on the acquired output sound data.
  • A control method for a game system 100, 200 that includes a computer 11, 231, simulates the training of a game object to be raised, performs an effect in which the game object performs a piece of music or sings a song, and provides a game that outputs the performed piece of music or sung song, the method comprising causing the computer 11, 231 to: acquire state information indicating the state of the game object; acquire output sound data of the piece of music or the song according to the state indicated by the state information; and output sound based on the acquired output sound data.
  • According to the game systems 100, 200, the game programs PG, PG2 described in appendix 12, or the control method described in appendix 13, a sound corresponding to the training result is output, allowing the user to audibly perceive the result of raising the training object. Further, by providing multiple opportunities to output such audio within one training part, the user can feel the performed piece of music or sung song improving through training. Furthermore, according to the game systems 100, 200 described in appendices 4 and 6, the process of generating output sound data each time can be omitted, reducing the processing load. Furthermore, according to the game systems 100, 200 described in appendices 5 to 7, the state can be determined based on the parameter serving as the state information, allowing changes in the parameter resulting from training to be reflected in the quality of the output sound data.
  • output sound data including inferior parts can be generated by mixing, without generating a large number of pieces of output sound data in advance. Furthermore, according to the game systems 100, 200 described in appendix 11, output sound data can be generated by introducing inferior portions into the reference sound data, again without generating a large number of pieces of output sound data.
  • Reference signs: Terminal control unit (computer); 11A: State acquisition unit (state acquisition means); 11B: Data acquisition unit (data acquisition means); 11C: Game progress section (audio output control means); 11D: Generation unit (generation means); 16: Audio output section (sound output means); 100: Game system; 200: Game system; 211A: State acquisition unit (state acquisition means); 211B: Data acquisition unit (data acquisition means); 211D: Generation unit (generation means); 231: Server control unit (computer); PG: Terminal program (game program); PG2: Server program (game program)
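The selection and mixing schemes described in the modifications above (choosing among inferior, reference, and superior sound data according to the game object's state, or blending intermediate sound data) can be sketched in code. The following is a minimal illustration only, not code from the publication; the function names, the 0-100 parameter scale, and the thresholds are all assumptions made for the example.

```python
# Hypothetical sketch of the selection/mixing scheme; all names and
# thresholds are illustrative and do not appear in the publication.

def select_output_sound(parameter, inferior, reference, superior,
                        low=30, high=70):
    """Pick one prepared set of sound data according to a growth parameter."""
    if parameter < low:
        return inferior
    if parameter < high:
        return reference
    return superior

def mix_intermediate(parameter, inferior, reference, superior):
    """Alternative: blend per-sample between adjacent intermediate sound data,
    so output sound data with inferior parts is produced by mixing rather than
    by pre-generating many variants."""
    # Assume the sound data are equal-length sample sequences in [-1.0, 1.0].
    if parameter < 50:
        weight = parameter / 50.0          # 0 -> fully inferior, 50 -> reference
        a, b = inferior, reference
    else:
        weight = (parameter - 50) / 50.0   # 50 -> reference, 100 -> superior
        a, b = reference, superior
    return [(1 - weight) * x + weight * y for x, y in zip(a, b)]
```

Either function could run on the server 30 or the game terminal 10, matching the modification that each set of sound data may be generated on either side.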
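The control method of appendix 13 (acquire state information, acquire output sound data matching that state, output sound) can likewise be sketched as a small class. This is a schematic under assumed names (`GameSystem`, `train`, `live`, the parameter-to-state mapping); the publication does not disclose an implementation.

```python
# Schematic of the appendix 13 control flow; every identifier here is
# invented for illustration.

class GameSystem:
    def __init__(self, sound_table):
        # sound_table maps a coarse state label to prepared output sound data.
        self.sound_table = sound_table
        self.parameter = 0

    def train(self, gain):
        # Game progressing means: training changes the parameter.
        self.parameter += gain

    def acquire_state(self):
        # State acquisition means: derive state information from the parameter.
        return "excellent" if self.parameter >= 50 else "inferior"

    def acquire_output_sound(self, state):
        # Data acquisition means: pick output sound data matching the state.
        return self.sound_table[state]

    def live(self, play):
        # Sound output control means: output sound based on the acquired data.
        state = self.acquire_state()
        play(self.acquire_output_sound(state))
```

A training run would call `train` repeatedly and `live` at each performance opportunity, so the same song audibly improves as the parameter grows.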

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

The present invention relates to a game system (100) for providing a game that simulates the raising of a game object to be raised, performs an effect in which the game object performs a piece of music instrumentally or sings a song, and outputs the performed piece of music or sung song, the game system comprising: state acquisition means (11A) for acquiring state information indicating the state of the game object; data acquisition means (11B) for acquiring output sound data of the piece of music or the song according to the state indicated by the state information; and audio output control means (11C) for outputting audio based on the acquired output sound data.
PCT/JP2023/028307 2022-08-04 2023-08-02 Game system, and game program and control method for game system WO2024029572A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-124850 2022-08-04
JP2022124850 2022-08-04

Publications (1)

Publication Number Publication Date
WO2024029572A1 true WO2024029572A1 (fr) 2024-02-08

Family

ID=89849438

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/028307 WO2024029572A1 (fr) Game system, and game program and control method for game system

Country Status (1)

Country Link
WO (1) WO2024029572A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003242289A (ja) * 2002-02-20 2003-08-29 Sony Corp コンテンツ処理装置およびその方法、ならびにプログラム
JP2005081011A (ja) * 2003-09-10 2005-03-31 Namco Ltd ゲームシステム、プログラム及び情報記憶媒体
JP2010004898A (ja) * 2008-06-24 2010-01-14 Daito Giken:Kk 遊戯機、電子機器及び遊戯機の音出力方法
JP2013195699A (ja) * 2012-03-19 2013-09-30 Yamaha Corp 歌唱合成装置および歌唱合成プログラム


Similar Documents

Publication Publication Date Title
KR102654029B1 (ko) 음악 생성을 위한 장치, 시스템들, 및 방법들
JP5094091B2 (ja) ゲームシステム
US20020005109A1 (en) Dynamically adjustable network enabled method for playing along with music
Summers The Legend of Zelda: Ocarina of Time: A Game Music Companion
US6878869B2 (en) Audio signal outputting method and BGM generation method
Herzfeld Atmospheres at play: Aesthetical considerations of game music
Wang Game Design for Expressive Mobile Music.
Mice et al. Super size me: Interface size, identity and embodiment in digital musical instrument design
JPH11468A (ja) 情報記憶媒体及びゲーム装置
JP3746875B2 (ja) 情報記憶媒体及びゲーム装置
CN103191561B (zh) 具谱面预览与结果状态的音乐游戏机台及其方法
WO2024029572A1 (fr) Système de jeu, et programme de jeu et procédé de commande pour système de jeu
Enns Game scoring: Towards a broader theory
JP3863545B2 (ja) 情報記憶媒体及びゲーム装置
JP6752465B1 (ja) コンピュータプログラムおよびゲームシステム
JP2008145928A (ja) 対戦カラオケシステム
JP5750234B2 (ja) 音出力装置、音出力プログラム
Cutajar Automatic Generation of Dynamic Musical Transitions in Computer Games
JP2009237345A (ja) カラオケゲームシステム、カラオケ装置及びプログラム
TWI487553B (zh) A music game machine and a method thereof with a spectral preview and a result state
WO2024024813A1 (fr) Système de jeu et programme de jeu ainsi que procédé de commande pour le système de jeu
JP2000330576A (ja) カラオケの歌唱評価方法と装置
Kamp Ludic music in video games
Andersson Exploring new interaction possibilities for video game music scores using sample-based granular synthesis
Studley Exploring Real-Time Music Composition through Competitive Gameplay Interactions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23850126

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2024539194

Country of ref document: JP

Kind code of ref document: A