WO2020241441A1 - System, Device, Method, and Program - Google Patents

System, Device, Method, and Program Download PDF

Info

Publication number
WO2020241441A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound data
output device
sound
data selection
user terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2020/020090
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
真史 原田
吉田 浩二
嵩広 宮崎
木村 憲司
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bandai Co Ltd
Original Assignee
Bandai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bandai Co Ltd filed Critical Bandai Co Ltd
Publication of WO2020241441A1 publication Critical patent/WO2020241441A1/ja
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output

Definitions

  • The present invention relates to a system and the like, and in particular to a system and the like for outputting sound from an output device based on sound data selected by a selection input on a user terminal.
  • The present invention has been made to solve such a problem, and an object of the present invention is to provide a system and the like capable of making an application more engaging.
  • A system as one aspect of the present invention is a sound data providing system including a user terminal, a server capable of communicating with the user terminal, and an output device capable of communicating with the user terminal. The server includes a storage means for storing a plurality of sound data and a communication means for transmitting sound data to the user terminal. The user terminal includes a display means for displaying one or more sound data selection areas each associated with one of the plurality of sound data, an input means for accepting a selection input by which the user selects one sound data selection area from the one or more sound data selection areas, and a communication means for receiving from the server the selected sound data, that is, the sound data associated with the sound data selection area selected by the user, and transmitting the received sound data to the output device. The output device includes a communication means for receiving the sound data transmitted from the user terminal and a sound output means for outputting sound based on the received sound data. The same sound data is not associated with more than one of the one or more sound data selection areas.
  • The user terminal may further include a control means that prevents a sound data selection area already selected by the user from being selected again.
  • The sound data associated with the one or more sound data selection areas may be determined by a lottery process.
  • The sound data associated with the one or more sound data selection areas may be determined anew when a sound data selection area is selected.
  • The display means may display the sound data selection areas in such a way that the user cannot recognize which sound data is associated with each of the one or more sound data selection areas.
  • The display means may display a plurality of sound data selection areas, displaying an already selected sound data selection area and the other sound data selection areas in different modes.
  • The display means may indicate that an already selected sound data selection area has been selected and that the other sound data selection areas have not been selected.
  • The display means may display a plurality of sound data selection areas and indicate that a sound data selection area associated with selected sound data is a selection area for outputting sound based on that selected sound data from the output device, while a sound data selection area associated with other sound data is a selection area for acquiring sound data.
  • The server may determine whether the sound data associated with the selected sound data selection area is stored in the storage means of the server in a state in which it is available to the user, and the communication means of the server may transmit the sound data to the user terminal when it is determined that the sound data is stored in the available state.
  • The storage means of the server may store the sound data in groups, each group including one or more sound data.
  • Character identification information indicating a predetermined character may be associated with each group, and the sound data included in each group may be sound data related to the character indicated by the character identification information associated with that group.
  • The sound data may be dialogue data of the character indicated by the character identification information associated with the group to which the sound data belongs.
  • The one or more sound data selection areas may be associated with sound data of another group once all the sound data of a first group have been selected.
  • The output device may further include a storage means for storing output device identification information identifying the character associated with the output device, the communication means of the output device may transmit the output device identification information to the user terminal, and the user terminal may, based on the output device identification information, refrain from transmitting to the output device sound data included in a group associated with character identification information of a character other than the character associated with the output device.
  • When a predetermined consideration payment process is executed, the display means may display as selectable one or more sound data selection areas that have been made selectable by executing the consideration payment process.
  • The plurality of sound data may include sound data that is associated with a sound data selection area only for a predetermined period.
  • The storage means of the server may store, in association with each of the plurality of sound data, either first value information or second value information indicating a higher value than the first value information, and may store more sound data associated with the first value information than sound data associated with the second value information.
  • The one or more sound data selection areas may be a plurality of sound data selection areas including a sound data selection area associated with a single sound data and a sound data selection area associated with a plurality of sound data.
  • A user terminal as one aspect of the present invention includes a display means for displaying one or more sound data selection areas each associated with one of a plurality of sound data, an input means for accepting a selection input by which the user selects one sound data selection area from the one or more sound data selection areas, and a communication means for receiving from the server the selected sound data, that is, the sound data associated with the sound data selection area selected by the user, and transmitting the received sound data to the output device, and is configured so that the same sound data is not associated with more than one of the one or more sound data selection areas.
  • A program as one aspect of the present invention is a program executed by a user terminal, and causes the user terminal to function as a display means for displaying one or more sound data selection areas each associated with one of a plurality of sound data, an input means for accepting a selection input by which the user selects one sound data selection area from the one or more sound data selection areas, and a communication means for receiving from the server the selected sound data, that is, the sound data associated with the sound data selection area selected by the user, and transmitting the received sound data to the output device, the program being configured so that the same sound data cannot be associated with more than one of the one or more sound data selection areas.
  • An output device as one aspect of the present invention is an output device including a communication means for receiving sound data and a sound output means, in which the communication means receives from the user terminal the selected sound data, that is, the sound data associated with the sound data selection area selected by the user on the user terminal, and the sound output means can output only sound based on the received sound data.
  • The sound output means may output sounds based on a plurality of received sound data with a predetermined time difference between them.
  • The output device may be a toy body imitating a predetermined shape and may be configured to be attached to a stuffed animal.
  • The sound data providing system 1 can be realized as a system in which a plurality of electronic devices, namely user terminals 10, are connected to a server 20 via a network, and each user terminal is further connected, by wireless communication, to an output device 30 attached to a toy 5.
  • FIG. 1 shows an example of the overall configuration of a sound data providing system according to an embodiment of the present invention.
  • the sound data providing system 1 includes a plurality of user terminals 10, a server 20, and output devices 30 mounted on a plurality of toys 5.
  • the user terminal 10 and the server 20 are connected to a network 2 such as the Internet and can communicate with each other.
  • In the present embodiment, the user terminal 10 and the output device 30 are connected by short-range wireless communication such as Bluetooth (registered trademark), but they may instead be connected via the Internet, a wireless LAN, or the like, in the same way as the user terminal 10 and the server 20.
  • FIG. 2 is a block diagram showing a hardware configuration of a user terminal 10, a server 20, and an output device 30 according to an embodiment of the present invention.
  • the user terminal 10 includes a processor 11, a display device 12, an input device 13, a storage device 14, and a communication device 15. Each of these components is connected by a bus 16. It is assumed that an interface is interposed between the bus 16 and each component device as needed.
  • the user terminal 10 is a smartphone.
  • the user terminal 10 can be a terminal such as a tablet computer or a computer provided with a contact type input device such as a touch pad as long as it has the above configuration.
  • the server 20 also includes a processor 21, a display device 22, an input device 23, a storage device 24, and a communication device 25. Each of these components is connected by a bus 26. It is assumed that an interface is interposed between the bus 26 and each component device as needed.
  • the server 20 is realized by a computer.
  • the output device 30 is a device that outputs sound based on sound data, and includes a processor 31, a sound output device 32, a storage device 34, and a communication device 35. Each of these components is connected by a bus 36. It is assumed that an interface is interposed between the bus 36 and each component device as needed.
  • In the present embodiment, the output device 30 is a wireless speaker and is a toy body imitating a predetermined shape such as a biscuit or a badge. Further, although the toy 5 to which the output device 30 is attached is a stuffed animal in the present embodiment, it may be a figure or a toy of another shape.
  • the output device 30 is provided with a locking tool and the like and is configured to be mounted on the toy 5.
  • The processors 11, 21, and 31 control the overall operation of the user terminal 10, the server 20, and the output device 30, respectively, and are, for example, CPUs (central processing units).
  • Alternatively, an electronic circuit such as an MPU may be used.
  • the processors 11, 21, and 31 execute various processes by reading and executing programs and data stored in the storage devices 14, 24, and 34.
  • the display devices (displays) 12 and 22 display the application screen and the like to the user of the user terminal 10 or the user of the server 20 under the control of the processors 11 and 21.
  • a liquid crystal display is preferable, but a display using an organic EL, a plasma display, or the like may be used.
  • the output device 30 may also include a display device. It is possible to present the state of the output device 30 to the user via the display device.
  • the sound output device 32 outputs sound based on the sound data stored in the storage device 34 according to the control of the processor 31.
  • The sound output device 32 may include a processing device that stores the sound data, which is a digital signal received via the communication device 35, in a buffer, performs digital-to-analog conversion, and outputs sound; in that case the output device 30 may be configured without a separate processor and storage device.
  • The input devices 13 and 23 are user interfaces that receive input from the user to the user terminal 10 and the server 20, and are, for example, a touch panel, a touch pad, a keyboard, or a mouse. Since the user terminal 10 is a smartphone in the present embodiment, it is provided with a touch panel as the input device 13; the touch panel also functions as the display device 12, so the display device 12 and the input device 13 are integrated. The display device 12 and the input device 13 may instead be separate units arranged at different positions. Since the server 20 is a computer, it is assumed to be provided with a keyboard and a mouse as input devices and a liquid crystal display as a display device. The output device 30 may also include an input device, via which the output device 30 can be controlled.
  • Storage devices 14, 24, and 34 are storage devices included in general smartphones, computers, and wireless speakers, including RAM which is a volatile memory, ROM which is a non-volatile memory, and a magnetic storage device. Storage devices 14, 24 and 34 may also include external memory.
  • the storage device 14 stores an application
  • the storage device 24 stores a server application.
  • the application includes a program for executing an event of the application and various data referred to when the program is executed.
  • The storage device 34 is a volatile memory incorporated as a part of the sound output device, and can temporarily store the sound data received from the user terminal 10 for digital-to-analog conversion and the like.
  • Communication devices 15 and 25 can exchange data with other devices via network 2 (omitted in FIG. 2).
  • the communication devices 15 and 25 perform wireless communication such as mobile communication and wireless LAN, and connect to the network 2.
  • the user terminal 10 communicates with the server 20 via the network by using the communication device 15.
  • the communication devices 15 and 25 may perform wired communication using an Ethernet (registered trademark) cable or the like.
  • the communication device 15 and the communication device 35 can communicate with each other via short-range wireless communication.
  • In the present embodiment, Bluetooth (registered trademark) is used as the short-range wireless communication, but other wireless communication or infrared communication may be used, and the devices may instead be connected via a wireless LAN, the network 2, or the like.
  • FIG. 3 shows an example of a functional block diagram of the user terminal 10, the server 20, and the output device 30 according to the embodiment of the present invention.
  • the user terminal 10 includes control means 101, display means 102, input means 103, storage means 104, and communication means 105
  • the server 20 includes control means 201, display means 202, input means 203, storage means 204, and communication means 205.
  • the output device 30 includes a control means 301, a sound output means 302, a storage means 304, and a communication means 305.
  • these functions are realized by the processors 11, 21 and 31 executing the program.
  • The programs to be executed are the programs stored in the storage devices 14, 24, and 34. Since the various functions are realized by reading programs in this way, part or all of one part (function) may be provided by another part.
  • These functions may be realized by hardware by configuring an electronic circuit or the like for realizing a part or all of each function.
  • the control means 101 of the user terminal 10 performs a control process for executing the function by the application of the present embodiment.
  • the display means 102 displays an application screen for controlling the function of the application, and displays the screen for the application according to the function of the application and the user operation.
  • the input means 103 accepts input from the user of the user terminal 10.
  • In the present embodiment, a touch panel serves as both the display means 102 and the input means 103, and the input means is realized by its touch detection function.
  • the storage means 104 stores information necessary for information processing executed by the control means 101, and stores, for example, sound data received from the server 20.
  • the control means 201 of the server 20 performs processing for an application executed on the user terminal 10.
  • the control means 201 sends and receives data periodically or as needed to realize the function of the application on the user terminal.
  • The control means 201 executes a sound data lottery process as one function for the application executed on the user terminal 10, and performs a process of selecting the sound data to be provided based on information transmitted from the user terminal 10.
  • the display means 202 displays a management screen for the server administrator on the display device 22 as needed.
  • the storage means 204 stores information for an application executed on the user terminal 10, such as sound data provided to the user.
  • The sound data is stored in the storage means 204 in association with a sound data ID. Further, as shown in Table 1, each sound data ID is stored in association with value information indicating the value of the sound data and with group information.
  • In the present embodiment, each sound data ID is associated with either first value information or second value information, and the second value information indicates a higher value than the first value information.
  • The storage means 204 stores more sound data associated with the first value information than sound data associated with the second value information.
  • The value information indicates how unlikely the sound data is to be selected by the lottery process, that is, its so-called rarity; sound data associated with the second value information is set to be less likely to be selected by the lottery process than sound data associated with the first value information.
  • The value information is not limited to the first and second value information; three or more types of value information may be used.
  • the sound data ID is included in at least one group, and the group information is an identifier used to identify the group.
  • the group information is a character identifier indicating a character
  • the sound data ID is stored in association with one or more character identifiers.
  • one sound data ID may be assigned to a plurality of character identifiers.
  • the sound data is character dialogue data, and the sound data ID is stored in association with the character identifier of the character.
  • In the present embodiment, one piece of group information is assigned to each user ID, and only after all the sound data included in the group indicated by that group information has been acquired by the user ID does the provision of sound data included in other groups become possible.
  • The storage means 204 of the server 20 registers the group information initially assigned to each user ID as group information 1, and also stores status information indicating whether or not all the sound data of that group has been provided.
  • When the status information is 1, it indicates that all the sound data of the group has been provided; when it is 0, it indicates that there is unprovided sound data.
  • In the present embodiment, status 1 is 0, indicating that there is sound data of group information 1 that has not yet been acquired.
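  • As an illustration only (not taken from the patent), the structure suggested by Tables 1 and 2 can be sketched with simple in-memory records; the field and variable names below (value_info, group_info, status) are assumptions:

        # Illustrative sketch of the server-side tables (Table 1 and Table 2).
        # All names (value_info, group_info, status) are assumed for illustration.
        FIRST_VALUE = 1   # common ("first value information")
        SECOND_VALUE = 2  # rarer, higher value ("second value information")

        # Table 1: sound data ID -> value information and group information (character identifier)
        sound_table = {
            "sd-001": {"value_info": FIRST_VALUE, "group_info": "char-A"},
            "sd-002": {"value_info": FIRST_VALUE, "group_info": "char-A"},
            "sd-003": {"value_info": SECOND_VALUE, "group_info": "char-A"},
            "sd-101": {"value_info": FIRST_VALUE, "group_info": "char-B"},
        }

        # Table 2: user ID -> assigned group information and status
        # (status 0 = the group still has unprovided sound data, 1 = all provided)
        user_table = {
            "user-1": {"group_info_1": "char-A", "status_1": 0},
        }

        def acquirable_ids(user_id):
            """Sound data IDs the user can currently draw: only the assigned group
            until its status becomes 1 (all sound data of that group provided)."""
            group = user_table[user_id]["group_info_1"]
            return [sid for sid, rec in sound_table.items() if rec["group_info"] == group]

        print(acquirable_ids("user-1"))  # ['sd-001', 'sd-002', 'sd-003']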
  • the control means 301 of the output device 30 performs control processing for executing the function by the application of the present embodiment.
  • the sound output means 302 outputs sound based on the sound data received from the user terminal 10.
  • the sound output means 302 can output only the sound transmitted from the user terminal 10. As a result, for example, it is possible to output only the dialogue of the character associated with the output device 30.
  • The storage means 304 can function as a temporary buffer for processing, by the sound output means 302, the sound data received from the user terminal 10. Further, the output device 30 is assigned one character identifier as output device identification information at the time of shipment, which is stored in, for example, the storage means 304. This character identifier may be stored so that it cannot be changed by the user.
  • the communication means 305 transmits / receives data to / from the user terminal 10.
  • FIG. 4 is an example of a screen of the application installed on the user terminal 10, and is a diagram showing an example of a sound data management screen for managing sound data.
  • This sound data management screen is displayed on the touch panel that functions as the display means 102 and the input means 103 of the user terminal 10, and includes sound data selection areas 1 to N (401 to 403) for selecting sound data in order to acquire sound data or to output sound. Sound data can be associated with a sound data selection area by the lottery process.
  • A sound data selection area may be associated with sound data by executing the lottery process before the area is displayed on the sound data management screen, or the lottery process may be executed and the sound data associated after the user selects the sound data selection area. By selecting a sound data selection area, the sound data associated with that area is selected. When a sound data selection area is selected in order to acquire sound data and the associated sound data is acquired, the acquired sound data is treated as already selected sound data, in the sense that the associated sound data has been selected. Sound data already acquired by the user may also be treated as selected sound data.
  • a smartphone is used as the user terminal 10
  • a wireless speaker imitating a predetermined shape is used as the output device 30.
  • the user terminal 10 and the output device 30 are wirelessly connected by Bluetooth (registered trademark).
  • To carry out the present invention, an application for outputting sound using the wireless speaker that is the output device 30 is installed on the user terminal 10 in advance. Further, the user performs pairing, in which the user terminal 10 and the output device 30 recognize and connect to each other.
  • the output device 30 stores the output device identification information of the output device in the storage means 304, and the user terminal 10 acquires the output device identification information of the output device 30 from the output device 30.
  • the output device identification information may be an identifier unique to the output device for identifying the output device, but in the present embodiment, the character identifier associated with the output device is used. Although the output device cannot be individually identified, the character associated with the output device can be specified.
  • the output device 30 can also store both the identifier unique to the output device and the character identifier associated with it.
  • The user terminal 10 transmits the character identifier (output device identification information) of the output device 30 together with its own user ID to the server 20, and requests registration of the group information 1 associated with the user ID, as shown in Table 2 stored in the storage means 204.
  • When an identifier unique to the output device is used as the output device identification information, a table associating output device identifiers with character identifiers may be stored in advance in, for example, the user terminal 10 or the server 20, and the output device identifier can be converted into a character identifier based on this table.
  • Alternatively, the user may select a character via the application and transmit the character identifier of that character to the server 20, thereby determining that character identifier as the group information associated with the user's user ID. The character identifier may also be transmitted to the output device 30 being paired so that the character identifier is assigned to the output device.
  • FIG. 6 shows an embodiment of the sound data management screen used in the following description.
  • Lottery buttons 1 to N (601 to 603) are displayed as sound data selection areas for acquiring sound data, and play buttons 1 to N (611 to 613) are displayed as sound data selection areas for having the output device 30 output sound based on the sound data, that is, for playing the sound. Even when sound data is associated with a lottery button in advance, the lottery button is displayed so that the user cannot identify the associated sound data.
  • The control means 101 of the user terminal 10 prevents an already selected lottery button from being selected again, and indicates this by displaying the lottery button grayed out on the display means 102.
  • An unselected lottery button is displayed in white.
  • A play button corresponding to acquired sound data that can be played is shown in white (FIG. 6(b), 612), and a play button corresponding to sound data that has not been acquired and cannot be played is grayed out (FIG. 6(a), 611 to 613).
  • The control means 201 executes the lottery process to associate sound data with the sound data selection areas 1 to N, which are the lottery buttons 1 to N for the user ID, and a sound data selection area versus sound data ID correspondence table (Table 3) is created and stored in the storage means 204 (S521 in FIG. 5).
  • the association between the sound data selection area and the sound data may be determined by another method such as determination according to a predetermined rule instead of the lottery process.
  • the lottery process is executed, for example, by randomly selecting from the sound data IDs included in the groups that can be acquired for each user ID.
  • In the present embodiment, the sound data of another group cannot be acquired until all the sound data of the group of group information 1 has been acquired. Therefore, at first only the sound data of the group of group information 1 can be acquired and is the target of the lottery process.
  • Once all of it has been acquired, the sound data of group information 2 becomes the lottery target. Alternatively, all sound data may be acquirable regardless of group, or the acquirable sound data may be determined based on other information such as the value information.
  • In the present embodiment, one sound data is associated with one sound data selection area, but a plurality of sound data may be associated with one area, for example as a package containing a plurality of sound data. Some sound data selection areas may be associated with one sound data while other sound data selection areas are associated with a plurality of sound data. Further, in the present embodiment, the probability of being selected is set based on the value information of each sound data ID; for example, a sound data ID having the first value information may be selected with ten times the probability of a sound data ID having the second value information.
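  • The following Python sketch illustrates such value-based weighting; it is not the patented implementation, and the 10:1 weighting and all names are assumptions taken from the example ratio above:

        import random

        # Candidate sound data IDs with their value information (1 = first, 2 = second).
        candidates = {"sd-001": 1, "sd-002": 1, "sd-003": 2}

        def draw_sound_data(candidates):
            """Weighted lottery: a first-value ID is drawn with 10x the probability
            of a second-value ID, mirroring the example ratio in the text."""
            ids = list(candidates)
            weights = [10 if candidates[i] == 1 else 1 for i in ids]
            return random.choices(ids, weights=weights, k=1)[0]

        print(draw_sound_data(candidates))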
  • At least some of the sound data may be subject to the lottery process only for a predetermined period. For example, making some sound data acquirable only for a limited period of one month can make the application more engaging. Such sound data is associated with sound data selection areas as a lottery target only within the predetermined period; by executing the lottery process again after the period expires and updating the sound data selection area versus sound data ID correspondence table, the limited-time sound data can be prevented from being acquired thereafter.
  • In the present embodiment, the lottery process is performed so that the same sound data is not provided to one user more than once.
  • Such processing can be executed by various methods. For example, the sound data IDs to be associated with the lottery buttons are determined sequentially by the lottery process while the already associated sound data IDs are stored, and when an already associated sound data ID is drawn, the lottery is repeated until a different sound data ID is selected.
  • Alternatively, sound data that has already been selected may be excluded from the lottery targets in the next lottery process.
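  • A minimal sketch of the exclusion approach just described (names assumed, not from the patent): each lottery button is assigned an ID drawn only from the sound data IDs not yet linked for this user, so the same sound data can never be provided twice:

        import random

        def assign_buttons(sound_ids, num_buttons):
            """Build a sound-data-selection-area vs. sound-data-ID table (cf. Table 3)
            so that no sound data ID is linked to more than one lottery button."""
            remaining = list(sound_ids)       # IDs not yet linked for this user
            table = {}
            for button in range(1, num_buttons + 1):
                chosen = random.choice(remaining)
                remaining.remove(chosen)      # excluded from later draws
                table[f"lottery_button_{button}"] = chosen
            return table

        print(assign_buttons(["sd-001", "sd-002", "sd-003", "sd-004"], 3))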
  • In the present embodiment, the play buttons 1 to N are associated with the sound data IDs 1 to N in advance, but each play button may instead be associated with the sound data acquired via the corresponding lottery button, or sound data may be associated with the play buttons in order from play button 1 as it is acquired. It is assumed that the storage means 104 of the user terminal 10 stores a table associating a sound data ID with each play button.
  • First, the sound data management screen 60 shown in FIG. 6 is displayed on the touch panel that functions as the display means 102 (S501).
  • In S502, it is determined whether the user terminal 10 has received, via the input means 103, an input from the user selecting any of the sound data selection areas 1 to N, which are the lottery buttons for acquiring sound data (S502). If no input has been received, the process proceeds to S510; if it has, selection input information indicating the selected sound data selection area is transmitted to the server 20 via the communication means 105 (S504).
  • When the server 20 receives this via the communication means 205 (S522), the server 20 determines the sound data associated with the lottery button that is the selected sound data selection area (S524).
  • When the sound data is determined in S524, it is then determined whether or not the determined sound data is available (S526).
  • For example, a table indicating whether each sound data ID is available is stored, and by referring to this table it can be determined whether the determined sound data ID is available. Sound data that was available when its distribution started and when the process of associating it with each user's sound data selection areas was performed may later become undistributable, for example because a copyright license period has expired. In such a case, when the sound data becomes unavailable, the data indicating availability stored for that sound data in the storage means 204 is changed to data indicating that it is not available. By confirming the availability of sound data before providing it to the user, an appropriate sound data providing service can be offered.
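  • The availability check of S526 can be pictured as a table lookup performed just before transmission; the following hedged Python sketch assumes a simple dictionary layout and invented IDs:

        # Availability table on the server: sound data ID -> whether it may still be distributed.
        availability = {"sd-001": True, "sd-002": False}  # e.g. sd-002's license period expired

        def try_send(sound_id):
            """Return the payload to send to the user terminal, or an 'unavailable'
            notice if the sound data can no longer be provided (cf. S526)."""
            if availability.get(sound_id, False):
                return {"sound_data_id": sound_id, "sound_data": b"...audio bytes..."}
            return {"error": "sound data no longer available", "sound_data_id": sound_id}

        print(try_send("sd-001"))
        print(try_send("sd-002"))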
  • The server 20 executes the lottery process again and updates the association of sound data IDs with the lottery buttons that have not yet been selected (S532); however, this update process does not have to be performed every time sound data is determined. For example, it may be performed at a predetermined timing, such as when new sound data is registered in the server 20.
  • the user terminal 10 receives the sound data ID, the sound data, and the group information and stores them in the storage means 104 (S506).
  • Next, the user terminal 10 updates the sound data management screen (S508). Specifically, as shown in FIG. 6(b), when the lottery button 1 (601) has been selected, the lottery button 1 can no longer be selected and is displayed grayed out, allowing the user to recognize that it cannot be selected again. Then, since the sound data received from the server 20 can now be played back, the play button 2 corresponding to the received sound data is changed to a white button indicating that the user can play the sound data.
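  • The screen update of S508 essentially flips two button states; a small illustrative sketch (state names and button numbering are assumptions):

        # Button states on the sound data management screen.
        lottery_buttons = {1: "white", 2: "white"}   # white = selectable
        play_buttons = {1: "grayout", 2: "grayout"}  # grayout = not yet playable

        def on_sound_data_acquired(lottery_no, play_no):
            """After sound data is received: the used lottery button can no longer be
            selected, and the matching play button becomes playable (cf. FIG. 6(b))."""
            lottery_buttons[lottery_no] = "grayout"
            play_buttons[play_no] = "white"

        on_sound_data_acquired(lottery_no=1, play_no=2)
        print(lottery_buttons, play_buttons)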
  • the group matching determination process is executed (S512).
  • In the present embodiment, the sound of a first character can be output only from the output device 30 that is associated with the first character and attached to the stuffed toy 5 imitating the shape of the first character.
  • Even a user who owns a plurality of output devices 30 can transmit to each output device 30 only the sound data for the character associated with that device. This prevents, for example, the sound of a second character from being output from the output device attached to the stuffed toy 5 that imitates the shape of the first character.
  • The user terminal 10 compares the group information, namely the character identifier of the output device 30 acquired through the pairing process with the output device 30, with the group information of the sound data ID selected for playback. If they do not match, it is determined that they are inconsistent, and information indicating that playback is not possible because of the group mismatch is displayed on the display means 102 (S514). If they match, the selected sound data is transmitted to the output device 30 (S516). After transmitting the sound data, the user terminal 10 returns to waiting for a selection input in a sound data selection area (S502).
  • Sound may also be output from a plurality of output devices 30 using one user terminal 10.
  • In that case, the correspondence between each output device 30 and its character identifier is stored, and when sound data is transmitted, the consistency between the character identifier associated with the one output device 30 selected by the user and the character identifier of the sound data to be transmitted can be determined.
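  • The group matching determination (S512 to S516) reduces to comparing two character identifiers; the following sketch of the multi-device case is illustrative only, with an assumed data layout:

        # Character identifier obtained from each paired output device.
        paired_devices = {"speaker-1": "char-A", "speaker-2": "char-B"}

        # Group information (character identifier) stored with each acquired sound data ID.
        sound_groups = {"sd-001": "char-A", "sd-101": "char-B"}

        def can_transmit(device_id, sound_id):
            """Allow transmission only when the sound data's group matches the
            character associated with the selected output device."""
            return paired_devices[device_id] == sound_groups[sound_id]

        print(can_transmit("speaker-1", "sd-001"))  # True: matching character
        print(can_transmit("speaker-1", "sd-101"))  # False: group mismatch, show an error instead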
  • When the output device 30 receives the sound data (S550), the sound output means 302 outputs sound based on the received sound data (S552).
  • The sound output in the present embodiment is assumed to be the dialogue voice of the character imitated by the stuffed animal 5 equipped with the output device 30, but it may be any other sound, such as a song, music, or an animal cry.
  • the user can transmit a plurality of sound data from the user terminal 10 to the output device 30 at the same time or continuously, for example, by inputting to select a plurality of play buttons.
  • The output device 30 can store the received sound data in the storage means 304 in the order of reception and output them sequentially from the sound output means 302 in accordance with commands from the control means 301. During sequential playback, a predetermined time interval can be left between the sounds, that is, they can be output with a predetermined time difference.
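  • Sequential output with a predetermined time difference can be sketched as a small queue consumer; the following Python sketch only simulates playback with print and is not device firmware, and the interval value is an assumption:

        import time
        from collections import deque

        received = deque(["sd-001", "sd-003"])  # sound data stored in order of reception

        def play_sequentially(queue, interval_seconds=1.0):
            """Output each buffered sound in reception order, leaving a predetermined
            time difference between them (playback is simulated with print)."""
            while queue:
                sound_id = queue.popleft()
                print(f"playing {sound_id}")
                if queue:
                    time.sleep(interval_seconds)  # predetermined gap before the next sound

        play_sequentially(received, interval_seconds=0.5)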
  • the output device 30 can also output conversational voice based on a scenario with another output device 30.
  • In this case, the scenario data included in the sound data includes data indicating the dialogue portions of the character associated with the output device 30 and the dialogue portions of the other character associated with the other output device 30.
  • The control means 301 analyzes this and, based on the scenario data, can detect how far the sound output from the other output device 30 has progressed in the scenario, and can output the conversational sound based on its own sound data from the sound output means 302 at the timing of its own character's dialogue portions.
  • the output devices 30 can directly communicate with each other to synchronize the timing of sound output.
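  • The scenario-driven conversation can be pictured as each device stepping through a shared list of lines and speaking only its own; the following single-process Python simulation is a simplification, and the scenario format is an assumption (real devices would track progress by listening to each other or by direct communication):

        import time

        # Scenario data: ordered (character identifier, line) pairs shared by both devices.
        scenario = [
            ("char-A", "line A1"),
            ("char-B", "line B1"),
            ("char-A", "line A2"),
        ]

        class OutputDeviceSim:
            """Very simplified stand-in for one output device following the scenario."""
            def __init__(self, own_character):
                self.own_character = own_character

            def take_turn(self, character, line):
                # Speak only when the scenario says it is this device's character's turn.
                if character == self.own_character:
                    print(f"{self.own_character} device plays: {line}")

        device_a = OutputDeviceSim("char-A")
        device_b = OutputDeviceSim("char-B")
        for character, line in scenario:
            device_a.take_turn(character, line)
            device_b.take_turn(character, line)
            time.sleep(0.2)  # time for the current line to finish before the next turn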
  • A sound data selection area may become selectable only after payment processing for a predetermined consideration is completed. For example, before the payment process for the predetermined amount is executed, the sound data selection area for acquisition is displayed grayed out on the user terminal 10 and the user's selection input is not accepted; once the payment process for the predetermined amount is completed, the area is displayed in white, the user's selection input is accepted, and the subsequent processing can be executed.
  • For example, a first type of sound data selection area for acquisition can be selected free of charge, and sound data having the first or second value information is acquired according to the respective acquisition probabilities.
  • A second type of sound data selection area for acquisition can be set so that payment of a predetermined consideration is required and only sound data having the second value information is acquired.
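  • The contrast between the free and paid selection areas can be expressed as two lottery configurations; in the following illustrative sketch the pool contents and weights are assumptions, not values from the patent:

        import random

        # value information per sound data ID: 1 = first (common), 2 = second (rarer)
        values = {"sd-001": 1, "sd-002": 1, "sd-003": 2, "sd-004": 2}

        def free_lottery():
            """Free selection area: both value classes possible, commons more likely."""
            ids = list(values)
            weights = [10 if values[i] == 1 else 1 for i in ids]
            return random.choices(ids, weights=weights)[0]

        def paid_lottery(payment_completed):
            """Paid selection area: selectable only after payment, yields only second-value data."""
            if not payment_completed:
                raise PermissionError("consideration payment process not completed")
            return random.choice([i for i, v in values.items() if v == 2])

        print(free_lottery())
        print(paid_lottery(payment_completed=True))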
  • In the present embodiment, when a sound data selection area (lottery button) for acquisition is selected and the sound data to be acquired is determined, the sound data is transmitted from the server 20 and stored in the storage means 104 of the user terminal 10, and the user terminal can then select the play button corresponding to the received sound data ID.
  • Alternatively, at that point the sound data need not yet be downloaded to the user terminal 10; the user may simply be regarded as having acquired the sound data, which becomes selected sound data. In this case, when the play button is selected, the user terminal requests from the server 20 the sound data corresponding to the sound data ID, and when the sound data is received from the server 20, it may be temporarily stored in the storage means 104 and transmitted to the output device 30.
  • In this case, before transmitting the sound data, the server 20 performs a determination process of whether the sound data is available, in the same manner as in S526; if it is not available, instead of transmitting the sound data, the server 20 may transmit information indicating that it is not available, so that the user terminal 10 can indicate to the user that the selected sound data is no longer available.
  • In the present embodiment, the server 20 determines the sound data selection area versus sound data ID correspondence table in advance, before receiving the selection input of a sound data selection area for acquisition, but the lottery process may instead be executed after the selection input is received to determine the sound data selection area versus sound data ID correspondence table. Further, only the sound data ID associated with the selected sound data selection area may be determined at that time; in this case, it is not necessary to execute the lottery process after the sound data transmission (S530) to update the correspondence table (S532).
  • By storing the already acquired sound data IDs in association with the user ID, it can be determined whether the sound data ID determined by the lottery process has already been acquired by this user, and if it is determined that it has already been acquired, the lottery process can be executed again. Further, when the determined sound data is determined not to be available, the lottery process may also be performed again.
  • The user terminal 10 may receive the sound data selection area versus sound data ID correspondence table in advance from the server 20 and store it in the storage means 104.
  • In this case, when a selection input for a sound data selection area for acquisition is received, the sound data ID is determined based on the sound data selection area versus sound data ID correspondence table stored in the storage means 104, the determined sound data ID is transmitted to the server 20, and the server 20 can execute the steps from S530 onward.
  • The user terminal 10 may also execute the lottery process itself to create the sound data selection area versus sound data ID correspondence table.
  • A sound data selection area for acquiring sound data may be changed into a sound data selection area for playing back the sound data associated with it. For example, as shown in FIG. 7(a), before the sound data is acquired, the lottery buttons 1 to N are displayed on the sound data management screen; when the lottery button 1 is selected by the user, it is changed, as shown in FIG. 7(b), into the play button 1 (701) associated with the sound data acquired via the lottery button 1.
  • In the present embodiment, one lottery button 801 is displayed on the sound data management screen; when the lottery button 801 is selected by the user, the lottery process for the sound data to be acquired is executed, the sound data selected by the lottery process is acquired, and sound is output from the output device 30 in accordance with a user input selecting one of the play buttons (811 to 813) associated with the acquired sound data. In this respect the present embodiment differs from the first embodiment. Hereinafter, the points that differ from the first embodiment will mainly be described.
  • FIG. 9 shows a flowchart of the sound data acquisition and reproduction processing of the present embodiment.
  • In the present embodiment, the server 20 does not prepare the sound data selection area versus sound data ID correspondence table in advance.
  • First, the user terminal 10 displays the sound data management screen 80 based on the user's operation (S501).
  • When the user terminal 10 receives, in S502 via the input means 103, an input from the user selecting the lottery button for acquiring sound data, the user terminal 10 transmits selection input information indicating the selected sound data selection area to the server 20 (S504).
  • The server 20 determines the sound data associated with the lottery button, which is the selected sound data selection area, by the lottery process (S901).
  • In the lottery process, as in the first embodiment, it is ensured that the same sound data is not selected for one user more than once. By storing the already acquired sound data IDs in association with the user ID, it is determined whether the sound data ID determined by the lottery process has already been acquired by this user, and if it is determined that it has already been acquired, the lottery process can be executed again.
  • When the user terminal 10 receives the sound data ID, the group information, and the sound data, it associates them with a play button and stores them in the storage means 104 (S506).
  • In the present embodiment, the play buttons are associated in order of acquisition, starting from the play button 1 (811). Then, the user terminal 10 indicates to the user that the sound data associated with a play button can be played back by changing that play button from the grayed-out display to the white display (S508). Thereafter, sound can be output from the output device 30 by the same processing as in the first embodiment.
  • For example, a first lottery button may be a button for performing a free lottery, and a second lottery button may be a lottery button that becomes selectable on condition that a predetermined amount of money is paid and for which the probability of acquiring more valuable sound data is set higher.
  • A program that realizes the functions of the embodiments of the present invention described above and the information processing shown in the flowcharts, or a computer-readable storage medium storing such a program, may also be used.
  • A method of realizing the functions of the embodiments of the present invention described above or the information processing shown in the flowcharts may also be used.
  • The server may be a server capable of supplying a computer with a program that realizes the functions of the embodiments of the present invention described above and the information processing shown in the flowcharts.
  • A virtual machine that realizes the functions of the embodiments of the present invention described above and the information processing shown in the flowcharts may also be used.
  • The processing and operations described above may be freely modified as long as no contradiction arises in the processing or operations, such as a step using data that should not yet be available at that step. Further, each of the embodiments described above is an example for explaining the present invention, and the present invention is not limited to these embodiments. The present invention can be carried out in various forms without departing from the gist thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Toys (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)
PCT/JP2020/020090 2019-05-29 2020-05-21 システム、装置、方法及びプログラム Ceased WO2020241441A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-099970 2019-05-29
JP2019099970A JP6935452B2 (ja) 2019-05-29 2019-05-29 システム、装置、方法及びプログラム

Publications (1)

Publication Number Publication Date
WO2020241441A1 true WO2020241441A1 (ja) 2020-12-03

Family

ID=72185589

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/020090 Ceased WO2020241441A1 (ja) 2019-05-29 2020-05-21 システム、装置、方法及びプログラム

Country Status (3)

Country Link
JP (2) JP6935452B2
CN (1) CN111596884B
WO (1) WO2020241441A1

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6935452B2 (ja) * 2019-05-29 2021-09-15 株式会社バンダイ システム、装置、方法及びプログラム
JP7691345B2 (ja) * 2021-11-10 2025-06-11 株式会社三共 遊技機
JP7691344B2 (ja) * 2021-11-10 2025-06-11 株式会社三共 遊技機
JP7691346B2 (ja) * 2021-11-10 2025-06-11 株式会社三共 遊技機

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000210476A (ja) * 1999-01-27 2000-08-02 Namco Ltd 玩具、ゲ―ム装置及び情報記憶媒体
JP2002297196A (ja) * 2001-03-30 2002-10-11 Bandai Co Ltd 製品完成システム、及び製品発注方法
JP2010533532A (ja) * 2007-07-19 2010-10-28 スティーヴン リップマン 対話式玩具
KR20150125626A (ko) * 2014-04-30 2015-11-09 (주)파워보이스 스마트 완구, 컴퓨터 프로그램, 스마트 완구 시스템
KR20170049863A (ko) * 2015-10-29 2017-05-11 이두만 음성 메시지를 송/수신하는 인형

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10179941A (ja) * 1996-10-21 1998-07-07 Junji Kuwabara 音声認識および音声発生装置、および、該音声認識および音声発生装置を備えた玩具、ならびに、音声認識および音声発生制御プログラムを記録した記録媒体
JPH11226270A (ja) * 1998-02-13 1999-08-24 Sente Creations:Kk 無線玩具
JP2003309670A (ja) * 2002-04-17 2003-10-31 Nec Corp 音データ提供システム、音データ提供サーバ、音データ提供方法及び音データ提供プログラム
CN101295504B (zh) * 2007-04-28 2013-03-27 诺基亚公司 用于仅文本的应用的娱乐音频
JP2011206448A (ja) 2010-03-30 2011-10-20 Namco Bandai Games Inc サーバシステム及びゲーム装置
JP6555858B2 (ja) * 2014-08-01 2019-08-07 シャープ株式会社 機器、音声出力方法、音声出力プログラム、ネットワークシステム、サーバ、および通信機器
JP5817900B1 (ja) 2014-09-29 2015-11-18 株式会社セガゲームス 情報処理装置、プログラム及び情報処理システム
JP6921030B2 (ja) * 2015-01-07 2021-08-18 株式会社スクウェア・エニックス ゲームシステム、プログラム、ゲーム装置及びオブジェクト合成方法
JP5999219B1 (ja) 2015-04-23 2016-09-28 株式会社セガゲームス プログラム及び情報処理装置
CN204945956U (zh) * 2015-09-23 2016-01-06 敲敲科技(北京)有限公司 智能控制设备
JP6783541B2 (ja) * 2016-03-30 2020-11-11 株式会社バンダイナムコエンターテインメント プログラム及び仮想現実体験提供装置
CN105975588B (zh) * 2016-05-04 2019-11-19 杭州网易云音乐科技有限公司 一种多媒体资源播放操作控制方法和装置
JP6869128B2 (ja) * 2017-07-05 2021-05-12 株式会社バンダイ ゲーム装置、プログラム及びゲームシステム
CN208260193U (zh) * 2018-05-14 2018-12-21 厦门市妖猫网络有限公司 一种声光玩具
JP6935452B2 (ja) 2019-05-29 2021-09-15 株式会社バンダイ システム、装置、方法及びプログラム


Also Published As

Publication number Publication date
JP2021183184A (ja) 2021-12-02
JP7419305B2 (ja) 2024-01-22
CN111596884A (zh) 2020-08-28
JP6935452B2 (ja) 2021-09-15
JP2020192114A (ja) 2020-12-03
CN111596884B (zh) 2023-07-07

Similar Documents

Publication Publication Date Title
JP7419305B2 (ja) システム、装置、方法及びプログラム
US11679327B2 (en) Information processing device control method, information processing device, and program
JP2020044136A (ja) 視聴プログラム、配信プログラム、視聴プログラムを実行する方法、配信プログラムを実行する方法、情報処理装置、および情報処理システム
US9596538B2 (en) Wearable audio mixing
CN102222173A (zh) 基于用户活动来跟踪经历进展
US20180353866A1 (en) Communication system, server, and information-processing method
JP2013230226A (ja) ゲーム管理サーバ装置、ゲーム管理サーバ装置用プログラム、および、端末装置用プログラム
JP6442759B2 (ja) 管理装置、端末装置、ゲーム装置、およびプログラム
US11247124B2 (en) Computer system, terminal, and distribution server
TW200843823A (en) Data supply system, game machine, method for data supply, and information recording medium
JP2020195691A (ja) 情報処理装置、情報処理方法、及びプログラム
JP2020000839A (ja) コンピュータプログラム、およびコンピュータ装置
KR101478576B1 (ko) 게임 진행 정보 제공을 위한 시스템, 이를 위한 서버, 이를 위한 단말, 이를 위한 방법 및 이 방법이 기록된 컴퓨터로 판독 가능한 기록 매체
JP2020000393A (ja) コンピュータプログラム、およびコンピュータ装置
JP7532600B2 (ja) 玩具システム
JP2021023528A (ja) プログラム、情報処理方法、及び情報処理装置
HK40029275A (en) Voice data delivery system, user terminal, recording medium and output device
JP6768112B2 (ja) ゲームシステム、およびゲームプログラム
JP6061822B2 (ja) 玩具システム及びプログラム
KR20050091587A (ko) 캐릭터가 활성화되는 온라인게임의 대기실 운영방법, 이를구현하기 위한 대기실 운영시스템 및 대기실운영프로그램이 기록된 컴퓨터로 읽을 수 있는 기록매체
JP7288032B2 (ja) 玩具システム
JP2019115812A (ja) 制御プログラム、制御方法及びコンピュータ
JP6928056B2 (ja) ビデオゲーム処理装置、ビデオゲーム処理サーバ、及びビデオゲーム処理プログラム
JP6458280B1 (ja) ゲームシステム及びそれに用いるコンピュータプログラム
JP6523384B2 (ja) 制御プログラム、制御方法及びコンピュータ

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20812908

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20812908

Country of ref document: EP

Kind code of ref document: A1