WO2006098295A1 - Utterance information generation device, toy, utterance information output device, and utterance information output system - Google Patents

Utterance information generation device, toy, utterance information output device, and utterance information output system

Info

Publication number
WO2006098295A1
Authority
WO
WIPO (PCT)
Prior art keywords
utterance information
utterance
information output
output device
toy
Prior art date
Application number
PCT/JP2006/304951
Other languages
English (en)
Japanese (ja)
Inventor
Shinichi Naohara
Original Assignee
Pioneer Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Priority to JP2007508139A priority Critical patent/JP4406456B2/ja
Publication of WO2006098295A1 publication Critical patent/WO2006098295A1/fr

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H3/00 Dolls
    • A63H3/28 Arrangements of sound-producing means in dolls; Means in dolls for producing sounds
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H2200/00 Computerized interactive toys, e.g. dolls

Definitions

  • Utterance information generation device, toy, utterance information output device, and utterance information output system
  • The present invention relates to an utterance information generation device, a toy, an utterance information output device, and an utterance information output system.
  • In an utterance information generation device that performs processing related to an utterance operation in a toy such as a stuffed toy or a robot, it is known to output a synthesized voice in response to the pulling of a string, the pressing of a tactile switch, or the like, or to output a synthesized voice corresponding to a response sentence to a user's utterance.
  • Such an utterance information generating device incorporates an amplifier, a speaker, and the like in order to output sound.
  • Because the utterance information generating device must be provided with a power source for supplying power, an amplifier for outputting sound to the outside, a speaker, and the like, a toy that incorporates it becomes heavier; and in order to output loud sound, the capacity of the power source must be increased, which increases the weight of the toy further.
  • If the power source, amplifier, speaker, and the like are downsized to make the toy small and light, the reproduction sound quality deteriorates when sound is output from the small, light toy.
  • The problems to be solved by the present invention include the above-mentioned problems, given as examples.
  • In view of the above problems, an object of the present invention is to provide an utterance information generation device, a toy, an utterance information output device, and an utterance information output system that reduce the weight of a toy or the like without degrading the reproduction sound quality.
  • The utterance information generation device made according to the present invention to solve the above problem is built into a toy or the like and generates utterance information that is output in response to an utterance operation of the toy.
  • The device comprises transmission means for transmitting various types of information to an utterance information output device that is provided outside the toy and is capable of outputting the utterance information, and transmission instruction means for instructing the transmission means to transmit the utterance information generated in response to the utterance operation to the utterance information output device.
  • The utterance operation is thus performed by having the externally provided utterance information output device output the transmitted utterance information.
  • The utterance information output device made according to the present invention to solve the above problem is provided outside a toy or the like that incorporates the utterance information generation device according to any one of claims 1 to 5, and outputs the utterance information transmitted by that device.
  • The utterance information output device comprises utterance information capturing means for capturing the utterance information transmitted by the transmission means of the utterance information generation device, and utterance information output means for outputting the captured utterance information.
  • The utterance information output system made according to the present invention to solve the above problem outputs utterance information in response to the utterance operation of a toy or the like.
  • The system comprises the utterance information generation device according to any one of claims 1 to 5, built into the toy, and the utterance information output device according to claim 7, provided outside the toy; the output device captures the utterance information transmitted by the transmission means of the generation device with its utterance information capturing means and outputs it with its utterance information output means.
  • FIG. 1 is a block diagram showing a basic configuration of an utterance information generating device, an utterance information output device, and an utterance information output system according to the present invention.
  • FIG. 2 is a system configuration diagram showing a system configuration of an utterance information output system.
  • FIG. 3 is a configuration diagram showing an example of a schematic configuration of the utterance information generating device and the audio device of FIG. 2.
  • FIG. 4 is a flowchart showing a part of the processing outline according to the present invention executed by the CPU of the utterance information generating device of FIG. 3.
  • FIG. 5 is a flowchart showing another part of the outline of the processing according to the present invention executed by the CPU of the utterance information generating device of FIG. 3.
  • FIG. 6 is a flowchart showing an outline of processing according to the present invention executed by the CPU 21a of the audio device of FIG. 3.
  • FIG. 1 is a configuration diagram showing the basic configuration of an utterance information generating device, an utterance information output device, and an utterance information output system according to the present invention.
  • As shown in FIG. 1, the utterance information generating device 10 is built into a toy or the like and generates utterance information that is output in response to an utterance operation of the toy.
  • The device comprises sending means 13 for sending various information to the utterance information output device 20, which is provided outside the toy and is capable of outputting the utterance information, and sending instruction means 11a1 for instructing the sending means 13 to send the utterance information generated in response to the utterance operation to the utterance information output device 20.
  • When utterance information is generated in response to an utterance operation, the sending instruction means 11a1 instructs the sending means 13 to send it to the utterance information output device 20; in response to the instruction, the sending means 13 sends the generated utterance information to the utterance information output device 20, which then outputs it.
  • Because the utterance information generated in response to the utterance operation of the toy is sent to the utterance information output device 20 provided outside the toy, no amplifier, speaker, or the like needs to be built into the toy, so a toy or the like having an utterance function can be made lighter. Furthermore, since no large-capacity power source needs to be built into the toy in order to output loud sound, the weight can be reduced further and the reproduction sound quality is not degraded.
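A rough, non-authoritative sketch of this division of roles follows: the toy-side generator only produces utterance data and delegates all audio output to the external device through a sending function. Every name, type, and the console output below are assumptions made for the sketch, not part of the patent.

```c
/* Sketch only: the toy-side generator (10) produces utterance data and
 * hands it to the "sending means" (13); output happens entirely on the
 * external utterance information output device (20), stubbed here. */
#include <stdio.h>

typedef struct {
    void (*send)(const unsigned char *data, size_t len);  /* sending means 13 */
} sending_means;

/* Stand-in for the external output device 20 (no real transport here). */
static void send_to_output_device(const unsigned char *data, size_t len) {
    (void)data;
    printf("[output device 20] reproducing %zu bytes of utterance data\n", len);
}

/* Sending instruction means 11a1: reacts to an utterance operation by
 * instructing the sending means to transmit the generated utterance data. */
static void on_utterance_operation(sending_means *tx) {
    const unsigned char utterance[] = "hello";   /* hypothetical generated data */
    tx->send(utterance, sizeof utterance - 1);   /* no amplifier/speaker in the toy */
}

int main(void) {
    sending_means tx = { send_to_output_device };
    on_utterance_operation(&tx);   /* e.g. the toy's string was pulled */
    return 0;
}
```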
  • The utterance information generation device 10 may further include activation state detection means 11a2 for detecting whether or not the utterance information output device 20 is activated; when the activation state detection means 11a2 detects that the utterance information output device 20 is not activated, the sending instruction means 11a1 stops operating or operates in a power-saving mode.
  • Because no useless sending operation is performed while the utterance information output device 20 is not activated, power consumption in the utterance information generation device 10 can be reduced.
  • The utterance information generation device 10 may further include activation request information transmission instruction means 11a3 which, when utterance information has been generated in response to an utterance operation and the activation state detection means 11a2 detects that the utterance information output device 20 is not activated, instructs the sending means 13 to transmit activation request information, indicating an activation request, to the utterance information output device 20.
  • After the activation request information has been transmitted to the utterance information output device 20 by the sending means 13 in response to the instruction of the activation request information transmission instruction means 11a3, the sending instruction means 11a1 instructs the sending means 13 to transmit the utterance information to the utterance information output device 20.
  • In this utterance information generation device 10, when the activation state detection means 11a2 detects that the utterance information output device 20 is not activated, the activation request information transmission instruction means 11a3 instructs transmission of the activation request information, and the sending means 13 transmits it to the utterance information output device 20; the sending instruction means 11a1 then instructs the sending means 13 to transmit the utterance information.
  • Because the utterance information generation device 10 transmits the utterance information only after the utterance information output device 20 has been activated, the situation in which utterance information cannot be output because the output device is not running is avoided. Furthermore, the user of the toy does not need to activate the utterance information output device 20 manually in order to make the toy perform an utterance operation.
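A minimal sketch of this check-then-wake sequence is given below; the detection mechanism, the wake-up side effect, and all names are assumptions made for illustration, not the patent's implementation.

```c
/* Sketch only: if the external output device is not activated, send an
 * activation request first, then send the utterance information. */
#include <stdbool.h>
#include <stdio.h>

static bool output_device_active = false;            /* what 11a2 would probe */

static bool detect_activation_state(void) {          /* activation state detection 11a2 */
    return output_device_active;
}
static void send_activation_request(void) {          /* 11a3 instructing sending means 13 */
    puts("-> activation request information");
    output_device_active = true;                     /* assume the device wakes up */
}
static void send_utterance(const char *utterance) {  /* 11a1 instructing sending means 13 */
    printf("-> utterance information: %s\n", utterance);
}

int main(void) {
    if (!detect_activation_state())   /* output device 20 not activated? */
        send_activation_request();    /* wake it before sending anything */
    send_utterance("squeak");         /* utterance transmitted only afterwards */
    return 0;
}
```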
  • The utterance information generating device 10 may further include remaining amount detecting means 11a4 for detecting the remaining amount of the power source 10p built into the utterance information generating device 10; when the remaining amount detected by the remaining amount detecting means 11a4 falls below a predetermined threshold, the sending instruction means 11a1 instructs the sending means 13 to send predetermined warning utterance information to the utterance information output device 20 in order to warn of the low remaining amount.
  • In this utterance information generating device 10, when the sending instruction means 11a1 detects that the remaining amount detected by the remaining amount detecting means 11a4 has fallen below the predetermined threshold, it instructs the sending means 13 to send the warning utterance information, stored in advance, to the utterance information output device 20, which then outputs it.
  • If the power source 10p of the utterance information generating device 10 is a battery or a rechargeable battery, the user can thus be prompted to replace or recharge it when the remaining amount falls below the threshold. Moreover, since the warning utterance information is output by the utterance information output device 20, which operates on power other than the power source 10p, outputting the warning does not further reduce the remaining amount of the power source 10p.
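The remaining-amount warning can be pictured as a simple threshold check; the voltage reading, the threshold value, and the names below are assumptions for the sketch, not values from the patent.

```c
/* Sketch only: when the detected remaining amount of the power source 10p
 * falls below a stored threshold, a pre-stored warning utterance is sent
 * to the external output device, which runs on its own power. */
#include <stdio.h>

#define LOW_REMAINING_THRESHOLD_MV 3300   /* hypothetical threshold from memory unit 12 */

static int detect_remaining_amount_mv(void) {  /* remaining amount detection 11a4 */
    return 3200;                               /* stub voltage reading */
}

int main(void) {
    int mv = detect_remaining_amount_mv();
    if (mv < LOW_REMAINING_THRESHOLD_MV)       /* below the predetermined threshold */
        printf("-> send warning utterance (battery low, %d mV)\n", mv);
    return 0;
}
```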
  • This is particularly suitable for a configuration in which the power source 10p also supplies power to a drive unit that drives a robot toy.
  • The utterance information generating device 10 may further include timing detection means 11a5 for detecting a predetermined timing at which a warning should be given to the user of the toy; in response to the detection by the timing detection means 11a5, the sending instruction means 11a1 instructs the sending means 13 to send predetermined warning utterance information.
  • In this utterance information generating device 10, when the timing detection means 11a5 detects, for example, a date and time arbitrarily set by the user, the elapse of a predetermined period, or a fixed interval, the sending instruction means 11a1 instructs the sending means 13 to send the predetermined warning utterance information to the utterance information output device 20, which then outputs it.
  • Because the timing for warning the user of the toy and the warning utterance information are determined in advance, and the warning utterance information is sent to the utterance information output device 20 when that timing is detected, the warning can be output from the utterance information output device 20 at an appropriate moment.
  • When the utterance information output device 20 in use is, for example, a personal computer or a game machine, the user perceives the warning as if it were issued by that device, which improves the effectiveness of warnings against long gaming sessions or long hours of work.
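The timing detection can likewise be pictured as an elapsed-time check; the two-hour interval and the names below are assumptions made for the sketch.

```c
/* Sketch only: once a preset interval has elapsed, the predetermined
 * warning utterance information is sent to the external output device. */
#include <stdio.h>
#include <time.h>

#define WARNING_INTERVAL_SEC (2 * 60 * 60)   /* e.g. every two hours */

int main(void) {
    time_t started = time(NULL) - WARNING_INTERVAL_SEC;          /* pretend 2 h have passed */
    if (difftime(time(NULL), started) >= WARNING_INTERVAL_SEC)   /* timing detection 11a5 */
        puts("-> send predetermined warning utterance (take a break)");
    return 0;
}
```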
  • The utterance information output device 20 is provided outside a toy or the like that contains the utterance information generation device 10 described above, and outputs the utterance information sent by the utterance information generation device 10.
  • The utterance information output device 20 comprises utterance information capturing means 21a for capturing the utterance information transmitted by the sending means 13 of the utterance information generating device 10, and utterance information output means 23 for outputting the utterance information captured by the utterance information capturing means 21a.
  • The utterance information generation device 10 built into the toy transmits, by wireless or wired communication, the utterance information generated in response to the utterance operation of the toy; the utterance information capturing means 21a captures it, and the utterance information output means 23 outputs it.
  • Because the utterance information output device 20 provided outside the toy captures and outputs the utterance information transmitted by the utterance information generation device 10 built into the toy, no amplifier, speaker, or the like needs to be built into the toy itself, so a toy or the like having an utterance function can be made lighter. Further, since no large-capacity power source needs to be built into the toy in order to output loud sound, the toy can be made lighter still.
  • The utterance information output system 1, which outputs utterance information in response to the utterance operation of a toy or the like, comprises the utterance information generating device 10 according to any one of claims 1 to 5, built into the toy, and the utterance information output device 20 according to claim 7, provided outside the toy.
  • The utterance information output device 20 captures the utterance information sent by the sending means 13 of the generating device 10 with its utterance information capturing means 21a and outputs the captured utterance information with its utterance information output means 23.
  • Thus, the utterance information generated by the utterance information generator 10 built into the toy in response to the utterance operation is captured and output by the utterance information output device 20 provided outside the toy. Accordingly, no amplifier, speaker, or the like needs to be built into the toy, so a toy or the like having an utterance function can be made lighter; and since no large-capacity power source is needed in the toy in order to output loud sound, its weight can be reduced further.
  • The utterance information output device 20 may also perform various operations based on operation request information, received from the outside, that requests those operations; in that case the utterance information generating device 10 further includes operation request detection means 11a6 for detecting a request for such an operation from the user of the toy, and the sending means 13 sends the operation request information corresponding to the request detected by the operation request detection means 11a6 to the utterance information output device 20.
  • In this utterance information output system, when the operation request detection means 11a6 detects, from the user's handling of the toy, a request for an operation of the utterance information output device 20, the sending means 13 sends the corresponding operation request information to the utterance information output device 20.
  • Upon receiving the operation request information, the utterance information output device 20 performs the control corresponding to it. The utterance information generating device 10 can therefore also be used as a remote control for operating the utterance information output device 20, while still allowing a toy with an utterance function to be made lighter.
  • FIG. 2 is a system configuration diagram showing the system configuration of the utterance information output system 1, and FIG. 3 is a configuration diagram showing an example of the schematic configuration of the utterance information generation device 10 and the audio device 20 of FIG. 2.
  • In the utterance information output system 1 of the present invention described above, the toy is a stuffed toy 2 (corresponding to the toy in the claims), and an audio device 20 is used as the utterance information output device in the claims.
  • The present invention is not limited to this: any device capable of outputting the captured utterance information can be used, such as a personal computer, a video game machine, or a mobile phone.
  • The utterance information generating apparatus 10 has a well-known microprocessor unit (MPU) 11.
  • The MPU 11 includes a central processing unit (CPU) 11a that performs various processes and controls according to predetermined programs, a ROM 11b, which is a read-only memory storing the programs for the CPU 11a, and a RAM 11c, which is a readable/writable memory that stores various data and provides the work area needed for the processing of the CPU 11a.
  • The ROM 11b stores programs for causing the CPU (computer) 11a to function as the various means in the claims described above, such as the sending instruction means, activation state detection means, activation request information transmission instruction means, remaining amount detection means, timing detection means, and operation request detection means, and also stores an utterance information generation program that recognizes the user's voice, generates utterance information according to the recognition, and stores it in a predetermined storage area of the RAM 11c.
  • The utterance information generation program can take various forms, such as generating utterance information in response to the user's actions or the movement of the toy's body.
  • The utterance information generating device 10 further includes a memory unit 12 using an electrically erasable EEPROM or the like, connected to the MPU 11 so as to be readable and writable.
  • The memory unit 12 stores various pieces of information such as a plurality of pieces of utterance information generated according to the utterance operation of the stuffed toy 2 and the threshold value used to judge the remaining amount of the power source 10p.
  • The utterance information is, for example, digital data such as MP3 (MPEG Audio Layer-3), but the present invention is not limited to this; the format will differ depending on the configuration of the device that is requested to output the utterance information.
  • The utterance information generating device 10 further includes a communication unit 13 corresponding to the sending means in the claims, and the communication unit 13 is connected to the MPU 11.
  • The communication unit 13 can be a wireless communication device such as infrared, wireless LAN (local area network), or Bluetooth, or a wired communication device such as RS-232C (Recommended Standard 232), USB (Universal Serial Bus), or IEEE (Institute of Electrical and Electronics Engineers) 1394.
  • In this embodiment, an infrared wireless communication device is used for the communication unit 13, so that information can be transmitted to and received from other wireless communication devices using infrared signals.
  • The communication unit 13 transmits the various information input from the MPU 11 to other wireless communication devices and outputs the various information received from other wireless communication devices to the MPU 11.
  • The utterance information generating device 10 further includes an operation unit 14 corresponding to the operation request detection means in the claims.
  • The operation unit 14 includes a plurality of operation switches (not shown) provided at arbitrary locations on the stuffed toy 2; each operation switch corresponds to one of the various operations of the audio device 20 and is embedded in a corresponding part of the stuffed toy. The MPU 11 detects changes in these operation switches and recognizes, for example, that the power of the audio device 20 is to be turned on or off when the stuffed toy's nose is pressed, or that the volume is to be raised or lowered when an ear is tilted, and generates operation request information indicating the corresponding operation request.
  • The audio device 20 reproduces and outputs music information stored on a CD, MD, hard disk drive, or the like in accordance with user operations on its operation unit 25, which has a plurality of operation switches.
  • The audio device 20 includes the MPU 21 and the communication unit 22 described above. Like the MPU 11 of the utterance information generating device 10, the MPU 21 includes a CPU 21a, a ROM 21b, a RAM 21c, and the like.
  • The communication unit 22 is connected to the MPU 21 and is configured to exchange various types of information by infrared with the communication unit 13 of the utterance information generating device 10.
  • The ROM 21b stores a program for causing the CPU 21a to function as the various means described above, such as the utterance information capturing means.
  • In this embodiment, utterance information is taken into the audio device 20 via the communication unit 22, but the present invention is not limited to this; various other forms are possible, such as capturing the utterance information via an interface unit to which it is supplied.
  • The audio device 20 further includes an amplifier 23a and a speaker 23b corresponding to the utterance information output means; the amplifier 23a is connected to the MPU 21, and the speaker 23b is connected to the amplifier 23a. When the CPU 21a executes an utterance information reproduction program, stored in the ROM 21b or the like, that reproduces utterance information and outputs it as an audio signal, the audio signal output from the MPU 21 is amplified by the amplifier 23a and output from the speaker 23b.
  • The output of utterance information in the audio device 20 has been described for the case where the utterance information captured by the MPU 21 is converted into an audio signal by software and output, but various other embodiments are possible: for example, an audio IC that integrates devices such as a microcontroller, a speech synthesizer, and memory on a single chip may be connected to the MPU 21 to convert the utterance information output from the MPU 21 into an audio signal and output it, or the utterance information (audio data) input from an external input terminal or the like of the audio device 20 may be output from its output circuit.
  • FIG. 4 is a flowchart showing a part of the processing according to the present invention executed by the CPU 11a of the utterance information generating device 10 of FIG. 3, and FIG. 5 is a flowchart showing another part of that processing.
  • In step S11 (activation state detecting means) shown in FIG. 4, the activation state of the audio device (utterance information output device) 20, that is, whether or not it is activated, is detected, the detection result is stored in the RAM 11c, and the process proceeds to step S12.
  • The activation state of the audio device 20 can be detected in various ways, for example based on the presence or absence of a response to a request transmitted to the audio device 20, or, when connected by wire, based on a change in the hardware connection state such as a voltage change.
  • In step S12, it is determined from the detection result in the RAM 11c whether or not the audio device 20 is activated. If it is determined that it is not activated (N in S12), the process proceeds to step S21; if it is determined that it is activated (Y in S12), the process proceeds to step S13.
  • In step S13 (activation request information transmission instruction means), activation request information indicating an activation request for the audio device 20 is generated in the RAM 11c, the communication unit 13 is instructed to transmit it to the audio device 20, and the process proceeds to step S14. The communication unit 13 then transmits the activation request information to the communication unit 22 of the audio device 20.
  • In step S14, the utterance information generation program described above is started. Then, in step S15, a timer that expires when a predetermined time has elapsed is started, and the process proceeds to step S16. The utterance information generation program keeps running once started and stores utterance information in the RAM 11c whenever it is generated.
  • In step S16, it is determined whether utterance information has been generated, based on whether new utterance information is stored in the RAM 11c. If it is determined that no utterance information has been generated (N in S16), the process proceeds to step S18; if it is determined that utterance information has been generated (Y in S16), the process proceeds to step S17.
  • In step S17 (sending instruction means), the communication unit 13 is instructed to transmit the generated utterance information to the audio device 20, and the process proceeds to step S18. The communication unit 13 then transmits the utterance information generated in response to the utterance operation of the stuffed toy 2 to the communication unit 22 of the audio device 20.
  • In step S18 (timing detection means), it is determined whether a predetermined timing for warning the user of the stuffed toy 2 has been detected. If not (N in S18), the process proceeds to step S20; if such a timing has been detected (Y in S18), the process proceeds to step S19.
  • In step S19 (sending instruction means), the communication unit 13 is instructed to send the warning utterance information stored in advance in the memory unit 12 to the audio device 20, and the process proceeds to step S20. The communication unit 13 then transmits the warning utterance information to the communication unit 22 of the audio device 20.
  • In this embodiment, the warning utterance information is information for outputting voice data that warns a driver who has been driving the vehicle for a long time; it is changed according to the form in which the warning utterance information is used.
  • In step S20 (remaining amount detecting means), the remaining amount of the power source 10p is detected based on the voltage indicating it and is stored in the RAM 11c.
  • In step S21, it is determined whether the remaining amount in the RAM 11c is smaller than the predetermined threshold stored in the memory unit 12. If it is not smaller (N in S21), the process proceeds to step S23 shown in FIG. 5; if it is smaller (Y in S21), the process proceeds to step S22.
  • In step S22 (sending instruction means), the communication unit 13 is instructed to send the remaining amount warning utterance information stored in advance in the memory unit 12 to the audio device 20, and the process proceeds to step S23 shown in FIG. 5. The communication unit 13 then transmits the remaining amount warning utterance information to the communication unit 22 of the audio device 20.
  • In step S23 (operation request detecting means) shown in FIG. 5, it is determined whether an operation request has been detected based on an input from the operation unit 14. If no operation request has been detected (N in S23), the process proceeds to step S26; if an operation request has been detected (Y in S23), the process proceeds to step S24.
  • In step S24, operation request information corresponding to the detected request is generated in the RAM 11c.
  • In step S25, the communication unit 13 is instructed to send the operation request information in the RAM 11c to the audio device 20, and the process proceeds to step S26. The communication unit 13 then transmits the operation request information to the communication unit 22 of the audio device 20.
  • In step S26, it is determined whether an end request has been received from the user. If no end request has been received (N in S26), the process returns to step S16 shown in FIG. 4 and the series of processes is repeated; if an end request has been received (Y in S26), the process ends. The operation of the utterance information generating device 10 can also be ended in other ways, for example in response to the audio device 20 changing from the activated state to a standby state such as after being powered off.
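The FIG. 4 / FIG. 5 processing can be summarised as the loop sketched below. It follows the overall behaviour described above rather than the exact branch targets of the flowchart text, and all function names, the threshold, and the stubbed return values are assumptions, not the patent's code.

```c
/* Sketch only: one reading of the S11-S26 loop of the utterance
 * information generating device 10, with all hardware stubbed out. */
#include <stdbool.h>
#include <stdio.h>

static bool audio_device_activated(void) { return false; }  /* S11-S12 */
static bool utterance_generated(void)    { return true;  }  /* S16 */
static bool warning_timing_reached(void) { return false; }  /* S18 */
static int  remaining_amount_mv(void)    { return 3000;  }  /* S20 */
static bool operation_requested(void)    { return false; }  /* S23 */
static bool end_requested(void)          { return true;  }  /* S26 */

int main(void) {
    if (!audio_device_activated())                         /* S11-S12: activation state */
        puts("-> send activation request information");    /* S13 */
    /* S14-S15: start the utterance information generation program and a timer */
    do {
        if (utterance_generated())                         /* S16 */
            puts("-> send utterance information");         /* S17 */
        if (warning_timing_reached())                      /* S18 */
            puts("-> send timed warning utterance");       /* S19 */
        if (remaining_amount_mv() < 3300)                  /* S20-S21: threshold check */
            puts("-> send remaining amount warning");      /* S22 */
        if (operation_requested())                         /* S23-S24 */
            puts("-> send operation request information"); /* S25 */
    } while (!end_requested());                            /* S26 */
    return 0;
}
```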
  • FIG. 6 is a flowchart showing an outline of the processing according to the present invention executed by the CPU 21a of the audio device 20 of FIG. 3.
  • In step S51, the CPU 21a, which is in a standby state with reduced power consumption, determines whether activation request information has been detected, based on an input from the communication unit 22, a power-on operation of the audio device 20, or the like. If the activation request information has not been detected (N in S51), this determination is repeated while waiting for it; if it has been detected (Y in S51), the process proceeds to step S52.
  • In step S52, it is determined whether utterance information has been received from the utterance information generating apparatus 10, based on an input from the communication unit 22. If it has not been received (N in S52), the process proceeds to step S55; if it has been received (Y in S52), the process proceeds to step S53.
  • In step S53 (utterance information capturing means), the utterance information received by the communication unit 22 is captured into the RAM 21c.
  • In step S54, the utterance information reproduction program that reproduces the utterance information in the RAM 21c is executed, so that the audio signal represented by the utterance information is output to the amplifier 23a and from the speaker 23b; the process then proceeds to step S55.
  • In step S55, it is determined whether operation request information has been received from the utterance information generating device 10, based on an input from the communication unit 22. If it has not been received (N in S55), the process proceeds to step S58; if it has been received (Y in S55), the process proceeds to step S56.
  • In step S56, the operation request information received by the communication unit 22 is captured and stored in the RAM 21c.
  • In step S57, an operation control program is executed, whereby the control corresponding to the operation indicated by the operation request information in the RAM 21c, for example raising or lowering the volume, is performed; the process then proceeds to step S58.
  • In step S58, it is determined whether a standby request has been generated, for example by the audio device 20 being powered off. If no standby request has been generated (N in S58), the process returns to step S52 and the series of processes is repeated; if a standby request has been generated (Y in S58), the device shifts to the standby state and the process returns to step S51 to monitor reception of activation request information.
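The audio-device side of FIG. 6 can be sketched the same way; again the function names and stubbed return values are assumptions, not the patent's code.

```c
/* Sketch only: one reading of the S51-S58 loop of the audio device 20,
 * with the communication and playback hardware stubbed out. */
#include <stdbool.h>
#include <stdio.h>

static bool activation_request_detected(void) { return true;  }  /* S51 */
static bool utterance_received(void)          { return true;  }  /* S52 */
static bool operation_request_received(void)  { return false; }  /* S55 */
static bool standby_requested(void)           { return true;  }  /* S58 */

int main(void) {
    while (!activation_request_detected())   /* S51: wait in low-power standby */
        ;
    do {
        if (utterance_received())            /* S52-S53: capture utterance info */
            puts("reproduce utterance via amplifier 23a / speaker 23b");  /* S54 */
        if (operation_request_received())    /* S55-S56: capture request info */
            puts("apply requested operation, e.g. volume up/down");       /* S57 */
    } while (!standby_requested());          /* S58: no standby request -> repeat */
    puts("shift to standby and monitor activation requests again");
    return 0;
}
```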
  • In use, the stuffed toy 2 containing the utterance information generating device 10 is carried into the vehicle by the user. When the power source 10p is turned on by the user, the device checks whether the audio device 20 is in the activated state; if it is not activated, activation request information is transmitted to the audio device 20 to activate it.
  • When utterance information is generated in response to an utterance operation of the stuffed toy 2, the utterance information generation device 10 transmits the utterance information to the audio device 20 through the communication unit 13.
  • The audio device 20 then reproduces the received utterance information and outputs the resulting audio signal from the speaker 23b.
  • Because the utterance information generated in response to the utterance operation of the stuffed toy 2 is sent to the audio device 20 provided outside the stuffed toy 2, no amplifier, speaker, or the like needs to be built into the stuffed toy 2, so a stuffed toy 2 having an utterance function can be made lighter. Also, since there is no need to install a large-capacity power source 10p in the stuffed toy 2 in order to output loud sound, the weight can be reduced further and the reproduction sound quality is not degraded.
  • Because the stuffed toy 2 is made of a soft material, if sound were output from a speaker built into the stuffed toy 2, the frequency characteristics of the sound output would be narrow and clear sound output would be difficult.
  • By using the audio device 20 outside the stuffed toy 2 to output the utterance information, the utterance operation of the stuffed toy can be realized with a clear voice.
  • When the utterance information generating device 10 detects that a predetermined time set in advance by the owner or user of the stuffed toy 2 has elapsed, it transmits the predetermined warning utterance information corresponding to that timing to the audio device 20 through the communication unit 13. The audio device 20 then reproduces the received warning utterance information and outputs the resulting audio signal from the speaker 23b.
  • By carrying the stuffed toy 2 in the vehicle, the warning utterance information can thus be output from the audio device 20 at an appropriate timing: the driver can be warned of long continuous driving and prompted to take a break at regular intervals, for example every two hours.
  • When the utterance information generating device 10 detects that the remaining amount of the power source 10p has dropped below the predetermined threshold, it transmits the remaining amount warning utterance information to the audio device 20 through the communication unit 13. The audio device 20 then reproduces the received remaining amount warning utterance information and outputs the resulting audio signal from the speaker 23b.
  • If the power source 10p of the utterance information generating device 10 is a battery or a rechargeable battery, the user can thus be prompted to replace or recharge it. Further, since the remaining amount warning utterance information is output by the audio device 20, which operates on power other than the power source 10p, the remaining amount of the power source 10p is not further reduced by the warning.
  • When the utterance information generating device 10 detects a request for an operation of the audio device 20 in response to a user action on the stuffed toy 2 such as pressing its nose or tilting its ear, it transmits the operation request information corresponding to the detected request to the audio device 20 through the communication unit 13.
  • The audio device 20 performs the control corresponding to the operation request indicated by the received operation request information. The utterance information generating device 10 can therefore also be used as a remote control for operating the audio device 20, while keeping a toy or the like with an utterance function light.
  • In the utterance information output system 1 described above, if the utterance information output device is realized by a video game machine, it is also possible to output an utterance prompting the user to rest his or her eyes at an appropriate timing, for example every two hours.

Landscapes

  • Toys (AREA)

Abstract

The invention concerns an utterance information generation device capable of reducing the weight of a toy or the like. When the utterance information generation device (10) generates utterance information in response to an utterance operation of a toy such as a doll or a robot, a sending instruction means (11a1) instructs a sending means (13) to send the utterance information to the utterance information output device (20). In response to this instruction, the sending means (13) sends the generated utterance information to the utterance information output device (20) provided outside the toy or the like. The sent utterance information is output by the utterance information output device (20).
PCT/JP2006/304951 2005-03-14 2006-03-14 Utterance information generation device, toy, utterance information output device, and utterance information output system WO2006098295A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007508139A JP4406456B2 (ja) 2005-03-14 2006-03-14 Utterance information generation device, toy, utterance information output device, and utterance information output system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-070544 2005-03-14
JP2005070544 2005-03-14

Publications (1)

Publication Number Publication Date
WO2006098295A1 true WO2006098295A1 (fr) 2006-09-21

Family

ID=36991638

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/304951 WO2006098295A1 (fr) 2005-03-14 2006-03-14 Utterance information generation device, toy, utterance information output device, and utterance information output system

Country Status (2)

Country Link
JP (1) JP4406456B2 (fr)
WO (1) WO2006098295A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000135384A * 1998-10-30 2000-05-16 Fujitsu Ltd Information processing device and pseudo-creature device
JP2003135410A * 2001-11-08 2003-05-13 Duskin Healthcare:Kk Man-machine interface, and health examination system, mental state determination system, and health maintenance system using the interface
JP2004299033A * 2003-04-01 2004-10-28 Sony Corp Robot device, information processing method, and program

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010100393A1 * 2009-03-03 2010-09-10 Talking Fruit & Vegetables Limited Toy comprising a body in the shape of a fruit or vegetable and an audio device

Also Published As

Publication number Publication date
JP4406456B2 (ja) 2010-01-27
JPWO2006098295A1 (ja) 2008-08-21

Similar Documents

Publication Publication Date Title
US7953599B2 (en) System, method and computer program product for adding voice activation and voice control to a media player
WO2020173391A1 Song recording method, sound correction method, and electronic device
US20070022223A1 (en) Electronic apparatus and method for implementing an intelligent sleep mode
US20110032098A1 (en) Portable electronic apparatus with a user physical status sensing and warning circuit
US20150294656A1 (en) Method and system for generating sounds using portable and inexpensive hardware and a personal computing device such as a smart phone
TW200901805A (en) Computer controlled amplifier and speaker system with power conservation feature
JP2003259147A Remote control device, electronic apparatus, and method for indicating operable buttons
JP2004096520A Speech recognition remote controller
US20090287325A1 (en) Digital content player with sound-activation function and method for powering on and off the digital content player
JP4141646B2 Audio system, volume setting method, and program
WO2006098295A1 Utterance information generation device, toy, utterance information output device, and utterance information output system
CN109151783 Data transmission method and apparatus, electronic device, and storage medium
CN104104997 Television mute start-up control method, apparatus, and system
JP6043518B2 Toy body, control method, program, and toy system
JP6346245B2 Toy system, toy body control method, and program
CN101950155 Multifunctional intelligent alarm clock audio device
WO2012066685A1 Sound device and output sound control device
JP5205897B2 Portable content processing device, control method, and program
KR102119701B1 Music interaction robot
WO2004064968A1 Remote-controlled toy and extension unit therefor
JP2008026575A Audio processing device and control method therefor
JP7536566B2 Audio device
WO2023142784A1 Volume control method, electronic device, and readable storage medium
CN214588016U Voice-controlled treadmill
JP3237378U Control device for infrared wireless microphone and infrared wireless microphone

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2007508139

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 06715624

Country of ref document: EP

Kind code of ref document: A1