WO2013190963A1 - Voice response device - Google Patents

Voice response device

Info

Publication number
WO2013190963A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice
response
user
information
voice response
Prior art date
Application number
PCT/JP2013/064918
Other languages
English (en)
Japanese (ja)
Inventor
勉 足立
丈誠 横井
林 茂
健純 近藤
辰美 黒田
大介 毛利
豪生 野澤
謙史 竹中
毅 川西
健司 水野
博司 前川
岩田 誠
Original Assignee
エイディシーテクノロジー株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by エイディシーテクノロジー株式会社
Priority to JP2014521255A, granted as patent JP6267636B2
Publication of WO2013190963A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • This international application claims the benefit of Japanese Patent Application Nos. 2012-137065, 2012-137066, and 2012-137067, filed with the Japan Patent Office on June 18, 2012, the entire contents of which are incorporated herein by reference.
  • the present invention relates to a voice response device that allows voice response to input character information.
  • One aspect of the present invention is to improve the usability for a user in a voice response device that makes a response to input character information by voice.
  • One aspect is a voice response device that responds to input character information by voice, comprising: response acquisition means for acquiring a plurality of different responses to the character information; and voice output means for outputting the plurality of different responses in different voice colors.
  • According to such a voice response device, a plurality of responses can be output in different voice colors, so even when no single answer can be determined for a given piece of character information, different candidate answers can easily be output in distinguishable voices. Usability for the user is therefore improved.
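As a rough sketch of this idea (the voice labels and function names are illustrative, not taken from the patent), pairing each candidate response with a distinct voice color might look like:

```python
# Hypothetical voice-color labels for a TTS engine.
VOICES = ["calm_female", "brave_male", "neutral"]

def assign_voices(responses):
    """Pair each of several different responses with a distinct voice
    color, cycling through the available voices if there are more
    responses than voices."""
    return [(resp, VOICES[i % len(VOICES)]) for i, resp in enumerate(responses)]
```

Each (response, voice) pair would then be handed to the voice output means, so the positive and negative answers sound like different speakers.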
  • the voice response device of the present invention may be configured as a terminal device possessed by a user, or may be configured as a server that communicates with the terminal device.
  • The character information may be input using input means such as a keyboard, or may be input by converting voice into character information.
  • The voice response device may comprise voice input means by which the user inputs voice, and voice transmitting means for transmitting the input voice to an external device that converts the input voice into character information, generates a plurality of different responses to the character information, and transmits the responses back to the voice response device. The response acquisition means may then acquire the responses from the external device.
  • According to such a voice response device, voice can be input, so character information can be entered by voice. Moreover, since the responses can be generated by the external device, the processing load on the voice response device itself can be reduced.
  • The operation of converting the input voice into character information may be performed by either the voice response device or the external device.
  • the voice response device or the external device includes response recording means in which a plurality of different responses including a positive response and a negative response to each character information are recorded for each of a plurality of character information,
  • the response acquisition means acquires the positive response and the negative response as the plurality of different responses,
  • the voice output means may play back with different voice colors for the positive response and the negative response.
  • According to such a voice response device, responses from different standpoints, such as a positive response and a negative response, can be played in different voice colors, as if different people were speaking. This makes the output feel less unnatural to the listening user.
  • The voice color may be changed depending on the type of response and the language used in it. For example, a response with a gentle tone may be reproduced in a calm woman's voice, and a response with a severe tone in a spirited man's voice. That is, response content may be associated with a personality, and the voice color set according to that personality.
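A minimal sketch of this tone-to-voice association (the tone names and voice labels are assumptions for illustration):

```python
# Hypothetical mapping from response tone to a voice color.
TONE_TO_VOICE = {
    "gentle": "calm_female",   # gentle tone -> calm woman's voice
    "severe": "brave_male",    # severe tone -> spirited man's voice
}

def voice_for_tone(tone):
    """Return the voice color associated with a response tone,
    falling back to a neutral voice for unmapped tones."""
    return TONE_TO_VOICE.get(tone, "neutral")
```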
  • The voice response device can be configured for use at the reception desk of a workplace or company, as in the invention of the fourth aspect, or can be configured to convey, on the user's behalf, things that are difficult to say to someone directly.
  • For example, the names and company names of known sales visitors may be recorded in advance in the voice response device or an external device; when a visitor at reception gives such a name or company name, a response may be generated that reproduces a phrase politely declining the visit.
  • the voice response device may speak (reproduce the voice) instead.
  • the response may not be output immediately, but may be output when the reproduction condition is satisfied, for example, after a certain time has elapsed.
  • the external device or the voice response device may acquire information for generating a response to the character information from another voice response device.
  • In the voice response device, as in the invention of the sixth aspect, when information for generating a response to the character information is requested by another voice response device, the information corresponding to the request may be returned.
  • The voice response device may include sensors for detecting position information, temperature, humidity, illuminance, noise level, and the like, together with databases such as dictionary information, and may extract the necessary information according to the request.
  • Such a voice response device can acquire information for generating a response from another voice response device.
  • information unique to the other voice response device such as the position of the other voice response device can be acquired.
  • information unique to itself can be transmitted to another voice response device.
  • A response (for example, a positive response or a negative response) output by the device itself or by another voice response device may be input as character information, and a response to that response may then be obtained.
  • This configuration can be realized using one or a plurality of voice response devices.
  • voices may be directly input / output, or wireless communication or the like may be used.
  • Another aspect is a voice response device that responds to input character information by voice, comprising: personality information acquisition means for acquiring personality information representing, according to preset categories, the personality of the user or of a person related to the user; response acquisition means for acquiring response candidates representing a plurality of different responses to the character information; and voice output means for selecting, according to the personality information, a response to output from the response candidates, and outputting the selected response.
  • According to such a voice response device, different responses can be made according to the personality of the user or of a person related to the user (a related person). Usability for the user can therefore be improved.
  • The voice response device may comprise first personality information generating means for generating personality information of the user or the related person based on answers to a plurality of preset questions.
  • the personality information acquisition unit may acquire personality information generated by the personality information generation unit.
  • personality information can be generated in the voice response device.
  • A well-known personality analysis technique (Rorschach test, Szondi test, etc.) may be used.
  • Aptitude test techniques of the kind used in corporate employment examinations may also be used.
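A highly simplified sketch of questionnaire-based personality scoring (the traits, question IDs, and scoring scheme are invented for illustration; real instruments such as the Szondi test work quite differently):

```python
def personality_from_answers(answers):
    """Aggregate 1-5 questionnaire answers into per-trait average scores.
    `answers` maps question IDs to numeric responses; the trait/question
    mapping below is a made-up example."""
    traits = {
        "extraversion": ["q1", "q2"],
        "agreeableness": ["q3", "q4"],
    }
    return {
        trait: sum(answers[q] for q in questions) / len(questions)
        for trait, questions in traits.items()
    }
```

The resulting trait scores could serve as the "personality information" used to select among response candidates.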
  • Second personality information generating means may be provided for generating personality information of the user or the related person based on character strings included in the input character information.
  • the personality information acquisition unit may acquire personality information generated by the personality information generation unit.
  • Preference information generating means may be provided for generating preference information indicating the preference tendencies of the user or the related person based on character strings included in the input character information.
  • the voice output means may select a response to be output from the response candidate based on the preference information, and output the selected response.
  • According to such a voice response device, responses can match the preferences of the user or the related person. Further, as in the invention of the twelfth aspect, the user's behavior (conversation, places visited, what appears on the camera) may be learned (recorded and analyzed), and missing words in the user's conversation may be supplemented.
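One naive way to derive preference information from character strings in the user's conversation is keyword counting; the keyword-to-topic table here is a made-up example:

```python
from collections import Counter

# Hypothetical keyword -> preference topic table.
PREF_KEYWORDS = {"baseball": "sports", "soccer": "sports", "jazz": "music"}

def preference_profile(utterances):
    """Count how often each preference topic is mentioned across the
    user's utterances, yielding a rough preference tendency."""
    counts = Counter()
    for utterance in utterances:
        for word in utterance.lower().split():
            topic = PREF_KEYWORDS.get(word)
            if topic:
                counts[topic] += 1
    return counts
```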
  • Response candidate acquisition means for acquiring response candidates from a predetermined server or the Internet may be provided.
  • According to such a voice response device, response candidates can be acquired not only from the device itself or an external device, but also from any device connected via the Internet or a dedicated line.
  • Character information generation means for converting the user's actions into character information may be provided.
  • The action referred to in the present invention corresponds to an action produced by muscle movement, such as conversation, handwriting of characters, or gestures (for example, sign language).
  • the user's action can be converted into character information.
  • The character information generation means may convert the voice of the user's utterances into character information, and may accumulate characteristics of those utterances (such as pronunciation and speech habits) as learning information (capturing and recording these characteristics).
  • the character information can be generated based on the learning information, so that the generation accuracy of the character information can be improved.
  • Transfer means for transferring the learning information to another voice response device may be provided.
  • According to such a configuration, learning information recorded by one voice response device can be reused, so character information can be generated accurately even when another voice response device is used.
  • Either the user's behavior or the user's operations may be detected, and learning information or personality information may be generated based on these.
  • In such a voice response device, for example, when it is detected that the user has rushed for the train several days in a row, the user may be urged from the next day to leave the house a few minutes earlier; when conversation reveals a tendency to become angry easily, voice or music that calms the mood may be output.
  • a response can be generated based on information recorded in another voice response device.
  • Reproduction condition determination means for determining, when no character information is input, whether the state of the voice response device matches a reproduction condition set in advance as a condition for outputting voice, and message reproduction means for outputting a preset message when the reproduction condition is satisfied, may be provided.
  • According to such a voice response device, voice can be output even when no character information is input (that is, when the user is not speaking). For example, by prompting the user to speak, it can serve as a countermeasure against drowsiness while driving a car. Moreover, the safety of a person living alone can be confirmed by determining whether the person responds.
  • The message reproduction means may acquire news information and output a message related to the news in question form, inviting the user's answer.
  • According to such a voice response device, conversations about the news are possible, which prevents the conversation from always being the same. For example, if stock price information for a company can be acquired, the conversation content can be: "Today, XX Company's stock price rose by XX yen. Did you know?"
  • The voice output means or the message reproduction means may output a preset message combined with separately acquired external information (news, or environmental information such as temperature, weather, and position).
  • a response in which a predetermined message and the acquired information are combined can be output.
  • a plurality of messages may be acquired, and a message to be reproduced may be selected and output according to the message reproduction frequency.
  • In such a voice response device, messages with a high reproduction frequency can be made less likely to be reproduced, giving randomness to message reproduction; conversely, a message can be intentionally reproduced repeatedly to promote its retention.
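The frequency-aware message selection could be sketched as inverse-frequency weighting (a plain illustration, not the patent's actual method):

```python
import random

def pick_message(messages, play_counts, rng=None):
    """Choose a message at random, weighting each one inversely to how
    often it has already been played, so rarely-heard messages are
    favored."""
    rng = rng or random.Random()
    weights = [1.0 / (1 + play_counts.get(m, 0)) for m in messages]
    return rng.choices(messages, weights=weights, k=1)[0]
```

Intentional repetition (to promote retention) would simply invert the weighting.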
  • When no reply or response to a message is obtained, unanswered-notification means may be provided that transmits, to a preset contact address, information identifying the user and indicating that no reply was obtained.
  • The message reproduction means may store conversation content and ask questions whose answers match content the user previously heard (memory confirmation processing).
  • According to such a voice response device, the user's memory can be checked and consolidated.
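The memory confirmation processing could be sketched as follows (the log format and question phrasing are assumptions for illustration):

```python
def memory_question(conversation_log):
    """Turn the most recently stored (topic, fact) pair into a recall
    question for the user (a toy 'memory confirmation' step)."""
    topic, fact = conversation_log[-1]
    return f"Earlier we talked about {topic}. Do you remember: {fact}?"

def answer_matches(expected, reply):
    """Very loose check: the expected content appears somewhere in the
    user's reply, ignoring case."""
    return expected.lower() in reply.lower()
```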
  • Utterance accuracy detection means for detecting the accuracy of the pronunciation and accent of the voice input by the user, and accuracy output means for outputting the detected accuracy, may be provided.
  • the accuracy level output means may output a voice including the nearest word when the accuracy level is a predetermined value or less.
  • the user can confirm the accuracy of pronunciation and accent. Furthermore, in the voice response device, as in the invention of the twenty-seventh aspect, the message reproduction means may output the same question again when the accuracy is below a certain value.
  • The connection control means may distinguish sales callers from other visitors, and reproduce a declining message in the case of a sales call.
  • a keyword included in input character information may be extracted and connected to a connection destination to which the keyword corresponds.
  • a keyword such as the name of the other party may be associated with the connection destination in advance.
  • According to such a voice response device, operations such as telephone transfer and call reception can be assisted.
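The keyword-based call routing described above might be sketched with a lookup table (the names and extensions are hypothetical):

```python
# Hypothetical keyword -> connection destination table.
ROUTES = {"accounting": "ext-201", "suzuki": "ext-105"}

def route_call(text):
    """Extract a known keyword from the caller's words and return the
    matching connection destination, defaulting to reception."""
    lowered = text.lower()
    for keyword, destination in ROUTES.items():
        if keyword in lowered:
            return destination
    return "reception"
```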
  • In the voice response device, as in the invention of the 31st aspect, the requirements spoken by the other party may be recognized based on keywords, and an outline of what the other party said may be conveyed to the user.
  • Emotion determination means may be provided that reads emotion from the voice color of the voice input by the user and outputs which of a set of emotions, including at least one of normal, anger, joy, confusion, sadness, and elation, the voice falls into.
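As a toy stand-in for such emotion determination (real systems would use models trained on prosodic features; the thresholds and feature choices here are invented):

```python
def classify_emotion(mean_pitch_hz, energy):
    """Threshold-based sketch: map mean pitch and normalized energy
    (0..1) of an utterance to one of the emotion labels."""
    if energy > 0.8 and mean_pitch_hz > 220:
        return "anger"
    if energy > 0.6 and mean_pitch_hz > 180:
        return "joy"
    if energy < 0.2:
        return "sadness"
    return "normal"
```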
  • The invention of the 33rd aspect comprises: response generation means for generating a response according to a captured image obtained by imaging the surroundings of the voice response device when the character information is input; and voice output means for outputting the response by voice.
  • According to such a voice response device, a response can be output by voice according to the captured image. Usability can therefore be improved compared with a configuration that generates responses from the character information alone.
  • Voice input video acquisition means for acquiring a moving image capturing the shape of the user's mouth when character information is input by voice, and character information conversion means for converting the voice into character information while correcting it by estimating unclear parts of the voice based on the moving image, may be provided.
  • the utterance content can be estimated from the shape of the mouth, so that an unclear part of the voice can be estimated well.
  • The message reproduction means may detect the user's irritation or agitation from involuntarily uttered speech, and generate a message for calming that irritation or agitation.
  • When providing guidance to a destination, the voice response device may comprise route information acquisition means for acquiring route information such as the weather, temperature, humidity, traffic information, and road surface conditions on the way to the destination, and the message reproduction means may output the route information by voice.
  • Gaze detection means for detecting the user's gaze, and gaze movement request transmission means that outputs a voice requesting the user to move the gaze to a predetermined position when the gaze does not move there in response to a call by the message reproduction means, may be provided.
  • Change request transmission means may be provided that observes the positions of the user's body parts and the user's facial expression, and outputs a voice requesting a change when they change little in response to the call.
  • the position of the body part of the user can be moved to a specific position or can be guided to have a specific facial expression.
  • the present invention can be used when driving a vehicle or performing a physical examination.
  • Broadcast program acquisition means for acquiring a broadcast program similar to the one the user is viewing, and broadcast program supplementing means for covering an interruption by outputting the acquired broadcast program when the viewed program is interrupted, may be provided.
  • According to such a voice response device, interruption of the broadcast program the user is viewing can be compensated for.
  • In the voice response device, as in the invention of the forty-first aspect, means may be provided that, when the user sings a song, compares the user's singing with the song's lyrics and outputs the lyrics by voice in the parts where the user's singing is missing.
  • According to such a voice response device, in so-called karaoke, the parts the user cannot sing (where the lyrics are interrupted) can be covered. Furthermore, as in the invention of the forty-second aspect, reading output means may be provided that, when a character appears in the captured image and the user asks how to read it, acquires information about the character from outside and outputs by voice the character's reading included in this information.
  • the user can be taught how to read characters.
  • The voice response device, as in the invention of the 43rd aspect, may be equipped with behavior and environment detection means that detects the user's behavior and the user's surrounding environment, and the message generation means may generate a message according to the detected behavior and environment.
  • the health condition of the user can be managed.
  • In such a voice response device, a report can be made when the user's health falls to or below a reference level, so an abnormality can be communicated to others sooner. Further, as in the invention of the 46th aspect, information about the user may be output in response to an inquiry from a person other than the user.
  • Such a voice response device can, for example, detect the user's meal content and walking distance, and answer questions at a hospital or the like on the user's behalf. The device may also be made to learn about the user's health condition and self-introduction.
  • FIG. 1 is a block diagram showing a schematic configuration of a voice response system to which the present invention is applied. The remaining drawings are: a block diagram showing the schematic configuration of the terminal device; a flowchart showing the voice response terminal process performed by the MPU of the terminal device; a flowchart showing the voice response server process performed by the calculation unit of the server; an explanatory diagram showing an example of the response candidate DB; a flowchart showing the automatic conversation terminal process performed by the MPU of the terminal device; a flowchart showing the automatic conversation server process performed by the calculation unit of the server; and a flowchart showing the message terminal process performed by the MPU of the terminal device.
  • SYMBOLS: 1 ... terminal device; 10 ... behavior sensor unit; 11 ... three-dimensional acceleration sensor; 13 ... three-axis gyro sensor; 15 ... temperature sensor; 17 ... humidity sensor; 19 ... temperature sensor; 21 ... humidity sensor; 23 ... illuminance sensor; 25 ... wetness sensor; 27 ... GPS receiver; 29 ... wind speed sensor; 33 ... electrocardiographic sensor; 35 ... heart sound sensor; 37 ... microphone; 39 ... memory; 41 ... camera; 50 ... communication unit; 53 ... wireless telephone unit; 55 ... contact memory; 60 ... notification unit; 61 ... display; 63 ... illumination; 65 ... speaker; 70 ... operation unit; 71 ... touch pad; 73 ... confirmation button; 75 ... fingerprint sensor
  • The voice response system 100 to which the present invention is applied is configured so that, for voice input at a terminal device 1, the server 90 generates an appropriate response and the terminal device 1 outputs that response by voice. Specifically, as shown in FIG. 1, the voice response system 100 is configured so that a plurality of terminal devices 1 and the server 90 can communicate with each other via a communication base station 80 or the Internet network 85.
  • the server 90 has a function as a normal server device.
  • the server 90 includes a calculation unit 101 and various databases (DB).
  • The calculation unit 101 is configured as a well-known computing device including a CPU and memory such as ROM and RAM. Based on a program in the memory, the calculation unit 101 communicates with the terminal device 1 and the like via the Internet network 85, reads and writes data in the various DBs, and performs various processes such as voice recognition and response generation for conversing with the user of the terminal device 1.
  • As the various DBs, as shown in FIG. 1, the server 90 includes a speech recognition DB 102, a predictive conversion DB 103, a speech DB 104, a response candidate DB 105, a personality DB 106, a learning DB 107, a preference DB 108, a news DB 109, a weather DB 110, a reproduction condition DB 111, a handwritten character/sign language DB 112, a terminal information DB 113, an emotion determination DB 114, a health determination DB 115, a karaoke DB 116, a report destination DB 117, a sales DB 118, a client DB 119, and the like.
  • The details of these DBs will be described as each process is described.
  • the terminal device 1 includes a behavior sensor unit 10, a communication unit 50, a notification unit 60, and an operation unit 70 provided in a predetermined housing.
  • the behavior sensor unit 10 includes a well-known MPU 31 (microprocessor unit), a memory 39 such as a ROM and a RAM, and various sensors.
  • The MPU 31 controls the sensor elements that constitute the various sensors according to the quantity to be detected (humidity, wind speed, etc.); for example, it performs processing such as driving a heater to bring a sensor element to its optimal temperature so that detection can be performed satisfactorily.
  • The behavior sensor unit 10 includes, as the various sensors, a three-dimensional acceleration sensor 11 (3DG sensor), a three-axis gyro sensor 13, a temperature sensor 15 and a humidity sensor 17 disposed on the back surface of the housing, a temperature sensor 19 and a humidity sensor 21, an illuminance sensor 23, a wetness sensor 25, a GPS receiver 27 that detects the current location of the terminal device 1, and a wind speed sensor 29.
  • the behavior sensor unit 10 also includes an electrocardiogram sensor 33, a heart sound sensor 35, a microphone 37, and a camera 41 as various sensors.
  • the temperature sensors 15 and 19 and the humidity sensors 17 and 21 measure the temperature or humidity of the outside air of the housing as an inspection target.
  • The three-dimensional acceleration sensor 11 detects the accelerations applied to the terminal device 1 in three orthogonal directions (the vertical direction (Z direction), the width direction of the casing (Y direction), and the thickness direction of the casing (X direction)) and outputs the detection results.
  • The three-axis gyro sensor 13 detects the angular velocities applied to the terminal device 1 about the vertical direction (Z direction) and two directions orthogonal to it (the width direction of the casing (Y direction) and the thickness direction of the casing (X direction)), with counterclockwise rotation about each axis taken as positive, and outputs the detection results.
  • the temperature sensors 15 and 19 include, for example, a thermistor element whose electric resistance changes according to temperature.
  • The temperature sensors 15 and 19 detect temperature in degrees Celsius, and all temperatures in the following description are given in degrees Celsius.
  • the humidity sensors 17 and 21 are configured as, for example, known polymer film humidity sensors.
  • This polymer film humidity sensor is configured as a capacitor whose dielectric constant changes as the amount of moisture contained in the polymer film changes with the relative humidity.
  • the illuminance sensor 23 is configured as a well-known illuminance sensor including a phototransistor, for example.
  • the wind speed sensor 29 is, for example, a well-known wind speed sensor, and calculates the wind speed from electric power (heat radiation amount) necessary for maintaining the heater temperature at a predetermined temperature.
  • the heart sound sensor 35 is configured as a vibration sensor that captures vibrations caused by the beat of the heart of the user.
  • The MPU 31 compares the detection result of the heart sound sensor 35 with the heart sound input from the microphone 37 to distinguish heart sounds from other vibrations and noise.
  • the wetness sensor 25 detects water droplets on the surface of the housing, and the electrocardiographic sensor 33 detects the user's heartbeat.
  • the camera 41 is arranged in the casing of the terminal device 1 so that the outside of the terminal device 1 is an imaging range.
  • The communication unit 50 includes a well-known MPU 51, a wireless telephone unit 53, and a contact memory 55, and is configured to be able to acquire detection signals from the various sensors constituting the behavior sensor unit 10 via an input/output interface (not shown). The MPU 51 of the communication unit 50 performs processing according to the detection results from the behavior sensor unit 10, input signals received via the operation unit 70, and a program stored in ROM (not shown).
  • The MPU 51 of the communication unit 50 functions as an operation detection device that detects specific operations performed by the user, as a positional relationship detection device that detects the positional relationship with the user, and as an exercise load detection device that detects the exercise load of the user, and also transmits its processing results.
  • The wireless telephone unit 53 is configured to communicate with, for example, a mobile phone base station; the MPU 51 of the communication unit 50 outputs its processing results to the notification unit 60, or transmits them via the wireless telephone unit 53 to a preset destination.
  • the contact address memory 55 functions as a storage area for storing location information of the user's visit destination.
  • the contact address memory 55 stores information on contact information (such as a telephone number) to be contacted when an abnormality occurs in the user.
  • the notification unit 60 includes, for example, a display 61 configured as an LCD or an organic EL display, an electrical decoration 63 made of LEDs that can emit light in, for example, seven colors, and a speaker 65.
  • Each part constituting the notification unit 60 operates according to commands from the communication unit 50.
  • the operation unit 70 includes a touch pad 71, a confirmation button 73, a fingerprint sensor 75, and a rescue request lever 77.
  • the touch pad 71 outputs a signal corresponding to the position and pressure touched by the user (user, user's guardian, etc.).
  • The confirmation button 73 is configured so that the contact of a built-in switch closes when the button is pressed by the user, allowing the communication unit 50 to detect that the confirmation button 73 has been pressed.
  • the fingerprint sensor 75 is a well-known fingerprint sensor, and is configured to be able to read a fingerprint using, for example, an optical sensor.
  • In place of the fingerprint sensor 75, any means that recognizes a physical feature of a person, such as a sensor that recognizes the pattern of palm veins (that is, any means capable of biometric authentication identifying an individual), can be adopted.
  • The voice response terminal process performed in the terminal device 1 is a process of receiving voice input from the user, sending the voice to the server 90, and reproducing the voice response when a response to be output is received from the server 90. This process starts when the user performs a voice input operation via the operation unit 70.
  • the input from the microphone 37 is accepted (ON state) (S2), and imaging (recording) by the camera 41 is started (S4). Then, it is determined whether or not there is a voice input (S6).
  • A timeout indicates that the allowable waiting time for the process has been exceeded; here the allowable time is set to about 5 seconds, for example.
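The timeout checks in steps such as S8, S20, and S26 amount to polling with a deadline; a generic sketch (the helper name and polling interval are illustrative):

```python
import time

def wait_for(predicate, timeout_s=5.0, poll_s=0.05):
    """Poll `predicate` until it returns True or until `timeout_s`
    elapses; return whether the condition was met in time."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    return False
```

In the flow above, a False return corresponds to the timeout branch that reports an error via the notification unit 60.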
  • If the voice input is not completed (S12: NO), the process returns to S10. If the voice input has been completed (S12: YES), data such as an ID identifying the device, the voice, and the captured image are transmitted as packets to the server 90 (S14). The data transmission process may also be performed between S10 and S12.
  • in S16, it is determined whether the data transmission is complete. If transmission has not been completed (S16: NO), the process returns to S14. If transmission has been completed (S16: YES), it is determined whether data (a packet) transmitted by the voice response server process described later has been received (S18). If no data has been received (S18: NO), it is determined whether a timeout has occurred (S20).
  • in S24, it is determined whether reception is complete. If reception has not been completed (S24: NO), it is determined whether a timeout has occurred (S26). If a timeout has occurred (S26: YES), the occurrence of an error is output via the notification unit 60 and the voice response terminal process is terminated. If no timeout has occurred (S26: NO), the process returns to S22.
  • a response based on the received packet is output as voice from the speaker 65 (S28).
  • when the received packet contains a plurality of responses, the responses are reproduced with different voice colors.
  • the voice response server process is a process of receiving voice from the terminal device 1, performing voice recognition to convert the voice into character information, generating a response to the voice, and returning the response to the terminal device 1.
  • a plurality of responses may be transmitted in association with different voice colors.
  • the communication partner terminal device 1 is specified (S44). In this process, the terminal device 1 is specified by the ID of the terminal device 1 included in the packet.
  • the voice included in the packet is recognized (S46).
  • in the speech recognition DB 102, many speech waveforms are associated with the corresponding characters.
  • in the predictive conversion DB 103, each word is associated with words that are likely to be used after it.
  • a known voice recognition process is performed by referring to the speech recognition DB 102 and the predictive conversion DB 103 to convert the voice into character information. Subsequently, objects in the captured image are identified by performing image processing on the captured image (S48). Then, the user's emotion is determined based on the voice waveform and word endings (S50).
  • in this process, the emotion determination DB 114, in which speech waveforms (voice colors), word endings, and the like are associated with emotion categories such as anger, joy, confusion, sadness, and excitement, is referenced to determine which category, if any, the user's emotion falls into, and the determination result is recorded in memory. Subsequently, by referring to the learning DB 107, words often spoken by the user are searched for, and ambiguous portions of the character information generated by speech recognition are corrected.
  • in the learning DB 107, user features such as words often spoken by the user and habits of pronunciation are recorded for each user. Data in the learning DB 107 is added to and corrected through conversations with the user.
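The emotion determination of S50 can be sketched as follows. This is a minimal illustrative sketch, assuming a toy stand-in for the emotion determination DB 114; the ending keywords, the pitch threshold, and all names are assumptions for illustration, not the patented implementation.

```python
# Illustrative stand-in for the emotion determination DB 114 (S50).
# Ending keywords, the pitch threshold, and category names are hypothetical.
EMOTION_DB_ENDINGS = {
    "da yo ne": "joy",
    "nan da yo": "anger",
    "kana": "confusion",
}

def determine_emotion(utterance: str, mean_pitch_hz: float) -> str:
    """Map a transcribed utterance and a crude waveform feature (mean pitch)
    to an emotion category such as anger, joy, confusion, sadness, excitement."""
    stripped = utterance.rstrip(".?! ")
    for ending, emotion in EMOTION_DB_ENDINGS.items():
        if stripped.endswith(ending):
            # A raised pitch is treated here as intensifying joy into excitement.
            if emotion == "joy" and mean_pitch_hz > 220:
                return "excitement"
            return emotion
    # No ending matched: fall back to a pitch-only guess.
    return "sadness" if mean_pitch_hz < 120 else "neutral"

print(determine_emotion("ashita wa hare kana", 150.0))  # prints "confusion"
```

In a real system the waveform feature would come from signal analysis of the recorded voice; here a precomputed mean pitch stands in for it.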
  • the corrected character information is specified as the input character information (S54), and response candidates matching the character information are retrieved from the response candidate DB 105 to obtain a response (S56).
  • in the response candidate DB 105, as shown in FIG. 5, input character information, a first output, a first output voice color, a second output, and a second output voice color are uniquely associated.
  • the first output “Today's * weather is *.” is output in association with the voice color of woman 1.
  • the portion “*” is obtained by accessing the weather DB 110, in which region names are associated with the weather forecast for the next several days in each region.
  • the weather at the time when today's weather changes is also acquired from the weather DB 110, and the second output “However, * is *.” is output in association with the voice color of man 1.
  • for example, when “Today's Tokyo weather” is entered, the voice of woman 1 outputs “Today's Tokyo weather is sunny.” and the voice of man 1 outputs “However, it will rain tomorrow.”
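The lookup described above can be sketched in a few lines. This is an illustrative sketch only: the table layout, placeholder syntax, and helper names are assumptions standing in for the response candidate DB 105 and weather DB 110, not the actual database schema.

```python
# Toy stand-ins for the response candidate DB 105 and weather DB 110.
RESPONSE_DB = {
    "Today's * weather": {
        "first_output": "Today's {region} weather is {today}.",
        "first_voice": "woman1",
        "second_output": "However, {when} it will be {later}.",
        "second_voice": "man1",
    },
}

WEATHER_DB = {"Tokyo": {"today": "sunny", "when": "tomorrow", "later": "rainy"}}

def respond(region: str):
    """Return (text, voice color) pairs: the wildcard portions of the
    registered outputs are filled from the weather DB (S56/S60 sketch)."""
    entry = RESPONSE_DB["Today's * weather"]
    wx = WEATHER_DB[region]
    first = (entry["first_output"].format(region=region, today=wx["today"]),
             entry["first_voice"])
    second = (entry["second_output"].format(when=wx["when"], later=wx["later"]),
              entry["second_voice"])
    return [first, second]

for text, voice in respond("Tokyo"):
    print(f"[{voice}] {text}")
```

Because each output row carries its own voice color, the two responses are naturally reproduced as if spoken by different persons.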
  • the response contents are associated with the voice color (S60).
  • the voice DB 104 stores an artificial voice database for each voice color; in this process, the voice color set for each response is associated with the corresponding voice color in the database.
  • the response content is converted into voice (S62).
  • a process for outputting response contents (character information) as a voice is performed.
  • the generated response (voice) is packet-transmitted to the communication partner terminal device 1 (S64).
  • the packet may be transmitted while generating the voice of the response content.
  • the conversation content is recorded (S68).
  • the input character information and the output response contents are recorded in the learning DB 107 as conversation contents.
  • keywords (words recorded in the speech recognition DB 102) included in the conversation content, pronunciation characteristics, and the like are recorded in the learning DB 107.
  • the voice response system 100 described in detail above is a system that responds by voice to input character information; the terminal device 1 (MPU 31) acquires a plurality of different responses to the character information and outputs them in different voice colors.
  • according to such a voice response system 100, since a plurality of responses can be output with different voice colors, even when a single answer to one piece of character information cannot be determined, different solutions can be output in an easy-to-understand manner with different voice colors. Therefore, usability for the user can be improved.
  • in the voice response system 100, the user inputs voice to the terminal device 1 via the microphone 37, and the server 90 (calculation unit 101) converts the input voice into character information, generates a plurality of different responses, and transmits them to the terminal device 1. The terminal device 1 then acquires the responses from the server 90.
  • according to such a voice response system 100, since the terminal device 1 can input voice, character information can be input by voice. Moreover, since the responses can be generated on the server 90 side, the configuration of the terminal device 1 can be simplified.
  • the server 90 converts the voice of the user's utterance into character information and accumulates characteristics of the utterance (such as pronunciation habits) as learning information (capturing the features and recording them for later use).
  • the character information can be generated based on the learning information, so that the generation accuracy of the character information can be improved.
  • the server 90 reads the emotion from the voice color of the voice input by the user and outputs which emotion applies among at least anger, joy, confusion, sadness, and excitement.
  • in the above configuration, voice recognition is used for inputting character information; however, the present invention is not limited to voice recognition, and character information may be input using input means (operation unit 70) such as a keyboard or a touch panel.
  • the server 90 includes the response candidate DB 105, in which a plurality of different responses, including a positive response and a negative response, are recorded for each of a plurality of pieces of character information.
  • the terminal device 1 may acquire a positive response and a negative response as the plurality of different responses, and reproduce them with different voice colors according to whether the response is positive or negative.
  • responses from different positions, such as a positive response and a negative response, can be reproduced with different voice colors, so the voices can be reproduced as if different persons were speaking. Therefore, the user listening to the voices is less likely to feel uncomfortable.
  • the voice color may be changed depending on the type of response and the language used in the response. For example, a response made in a gentle tone may be reproduced with a calm woman's voice, and a response made in a severe tone may be reproduced with a brave man's voice. That is, response contents may be associated with personalities, and the voice color may be set according to the personality.
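The tone-to-voice-color association above can be sketched as follows. The marker words and voice-color names are assumptions for illustration; a real system would classify tone from the generated response or its database entry rather than from a keyword list.

```python
# Hypothetical association of response tone with a voice color (S60 variant).
VOICE_BY_TONE = {"gentle": "calm_woman", "severe": "brave_man"}

# Assumed markers of a severe-tone response; purely illustrative.
SEVERE_MARKERS = ("must", "never", "do not")

def pick_voice(response: str) -> str:
    """Choose a voice color: severe-tone responses get a firm male voice,
    all others a calm female voice, as described in the text above."""
    tone = "severe" if any(m in response.lower() for m in SEVERE_MARKERS) else "gentle"
    return VOICE_BY_TONE[tone]

print(pick_voice("You must leave now."))        # prints "brave_man"
print(pick_voice("It might be nice to rest."))  # prints "calm_woman"
```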
  • a response (for example, a positive response or a negative response) may be generated on the terminal device 1 side.
  • This configuration can be realized using one or a plurality of terminal devices 1.
  • voices may be directly input / output, or wireless communication or the like may be used.
  • data may be transmitted to other terminal devices 1 in the process of S66.
  • the calculation unit 101 may learn (record and analyze) the user's behavior (conversations, places the user has moved to, and what appears in the camera) in order to compensate for missing words in the user's conversation.
  • the server 90 may acquire response candidates from a predetermined server or the Internet.
  • response candidates can be acquired not only from the server 90 but also from any device connected via the Internet, a dedicated line, or the like.
  • [Second Embodiment] Next, another type of voice response system will be described.
  • in this embodiment (second embodiment) and the following embodiments, only the parts different from the voice response system 100 of the first embodiment will be described in detail; parts identical to the voice response system 100 of the first embodiment are given the same reference numerals, and their description is omitted.
  • the voice response system outputs voice even when the user does not input character information.
  • the terminal device 1 performs the automatic conversation terminal process shown in FIG.
  • the automatic conversation terminal process is a process that is started when the terminal device 1 is turned on, for example, and is repeatedly executed thereafter.
  • the automatic conversation terminal process first, it is determined whether or not the setting for performing an automatic conversation is ON (S82). Whether or not to perform an automatic conversation can be set by the user via the operation unit 70 or by inputting voice.
  • the server 90 executes the automatic conversation server process shown in FIG.
  • the automatic conversation server process is a process that is started when the server 90 is turned on, for example, and then repeatedly executed.
  • in the automatic conversation server process, first, it is determined whether notification that the automatic conversation mode is set has been received from the terminal device 1 (S92). If such notification has not been received (S92: NO), the process proceeds to S98.
  • the terminal device 1 to be the communication partner is specified based on the ID included in the received packet (S94), and automatic conversation is set for that terminal device (S96). Subsequently, for each terminal device 1 set for automatic conversation, it is determined whether the reproduction condition is satisfied (S98).
  • the playback condition is, for example, that a certain time has elapsed since the previous conversation (voice input), that it is a certain time of day or specific weather, or that some sensor value shows an abnormal value.
  • the message in accordance with the playback condition may be a fixed sentence such as “Good morning.” or “Hello.”, or may relate to the latest news obtained from the news DB 109, in which the latest news is automatically updated. For example, if information about the stock price of a certain company can be obtained, a message such as “Today's stock price of XX Company has risen by XX yen. Did you know?” may be generated.
  • the processes of S42 to S54 described above are performed. Then, when the processing of S54 is completed, it is determined whether or not a predetermined answer has been obtained from the terminal device 1 that is the communication partner (S112).
  • the predetermined answer may be, for example, any voice or a specific answer. For example, for the question “Did you know?”, answers such as “I knew” or “I didn't know” correspond; for the question “Do you know the weather?”, answers including words indicating weather, such as “rainy” or “sunny”, correspond.
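The answer check of S112 can be sketched as follows. The question table and keyword sets are assumptions for illustration: depending on the question, either any utterance counts as an answer, or the answer must contain specific keywords.

```python
from typing import Optional

# Hypothetical expected-answer table: None means any utterance is accepted,
# a set means the answer must contain one of the listed keywords.
EXPECTED = {
    "Did you know?": None,
    "Do you know the weather?": {"rainy", "sunny", "cloudy"},
}

def answer_obtained(question: str, answer: Optional[str]) -> bool:
    """Return True if the user's reply counts as a predetermined answer (S112)."""
    if answer is None:
        return False                     # no reply at all
    keywords = EXPECTED.get(question)
    if keywords is None:
        return True                      # any voice response is enough
    return any(k in answer.lower() for k in keywords)

print(answer_obtained("Do you know the weather?", "It will be rainy"))  # prints True
```

When this check fails, the system can proceed to the contact-notification behavior described further below.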
  • the server 90 determines whether or not the situation of the voice response system 100 matches a playback condition set in advance as a condition for outputting voice. When the reproduction condition is met, a preset message is output.
  • according to such a voice response system 100, voice can be output even when character information is not input (that is, when the user does not speak). For example, by prompting the user to speak, the system can be used as a measure to suppress drowsiness while driving a car. Moreover, safety confirmation can be performed by determining whether a person living alone responds.
  • the server 90 acquires news information and outputs a message related to the news in a question format asking for the user's answer. According to such a voice response system 100, since a conversation about the news can be held, the conversation can be kept from always being the same.
  • the server 90 adds separately acquired external information (news and environmental information such as temperature, weather, and location) to a preset message and outputs the result.
  • according to such a voice response system 100, a response combining a predetermined message and the acquired information can be output. Further, in the voice response system 100, when an answer to the response or message cannot be obtained, the server 90 transmits to a preset contact a notification that no answer could be obtained from the user.
  • according to such a voice response system 100, a contact person can be notified when an answer cannot be obtained. Therefore, for example, an abnormality concerning an elderly person living alone can be reported early.
  • the server 90 may acquire a plurality of messages, and select and output a message to be reproduced according to the reproduction frequency of the message.
  • according to such a voice response system 100, a message with a high reproduction frequency can be made less likely to be reproduced, achieving randomness in message reproduction; alternatively, a message with a high reproduction frequency can be reproduced repeatedly to call attention or to promote memorization of the message.
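The frequency-based selection above can be sketched with an inverse-frequency weighting. The weighting formula is an illustrative assumption: each message's chance of being picked shrinks with the number of times it has already been played, which yields the randomness described in the text.

```python
import random

def pick_message(play_counts: dict, rng: random.Random) -> str:
    """Pick a message, weighting each by 1 / (1 + times played) so that
    frequently reproduced messages become less likely to be chosen."""
    messages = list(play_counts)
    weights = [1.0 / (1 + play_counts[m]) for m in messages]
    return rng.choices(messages, weights=weights, k=1)[0]

counts = {"Good morning.": 10, "Did you see the news?": 0}
print(pick_message(counts, random.Random(0)))
```

Inverting the weights (weighting by the count itself) would instead produce the repeated-reproduction behavior used for calling attention.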
  • in the voice response system 100, the terminal device 1 conveys, on the user's behalf, things that are difficult to say directly. For example, if the user tells this device before a date, “Say something like this today,” the voice response system 100 speaks in the user's place (plays the voice) at an appropriate time (for example, at a preset time, or when a certain time has passed since the conversation was interrupted).
  • the terminal device 1 performs the message terminal process shown in FIG. 8, and the server 90 performs the message server process shown in FIG.
  • the message terminal process is a process that is started when the terminal device 1 is turned on, for example, and then repeatedly executed.
  • in S136, it is determined whether a packet from the server 90 has been received. If no packet has been received (S136: NO), the process of S136 is repeated. If a packet has been received (S136: YES), the processing of S24 to S30 is performed, and the message terminal process is terminated.
  • the message server process is a process that starts when the server 90 is powered on, for example, and is repeatedly executed thereafter. Specifically, first, it is determined whether or not a packet is received from any one of the terminal devices 1 (S142). If no packet has been received (S142: NO), the process proceeds to S156 described later.
  • the communication partner terminal device 1 is specified (S44), and it is determined whether the packet includes a mode flag such as the message mode flag (S144). If there is no mode flag (S144: NO), the process proceeds to S148.
  • if there is a mode flag (S144: YES), the server 90 also sets the mode by setting the flag corresponding to the communication partner terminal device 1 to the ON state (S146). For example, if the message mode flag is set, corresponding to the message mode, the processing of S46 to S152 described later is performed; if the guidance mode flag described later is set, corresponding to the guidance mode, the processing of S46 to S176 (see FIG. 11) is performed.
  • the message reproduction condition can be set in advance by the user via the operation unit 70 of the terminal device 1, and corresponds to, for example, time and position.
  • the message reproduction condition is transmitted to the server 90 at the time of packet transmission for message terminal processing.
  • the message and voice are associated with each other and recorded in the memory (S152), and the process proceeds to S156. If the message flag is OFF (S148: NO), processing relating to another mode is performed (S154), and it is determined whether or not the playback timing has come (S156).
  • the reproduction timing indicates contents set in the message reproduction condition.
  • the voice input by the user is not played back immediately, but can be played back when a message playback condition is satisfied after a certain time.
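A message reproduction condition combining time and position, as set via the operation unit 70, can be sketched as follows. The field names and the crude degree-based closeness test are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class PlaybackCondition:
    """Hypothetical message reproduction condition: play at or after a given
    hour of day, when the terminal is near a target position."""
    after_hour: int
    near: tuple          # target (latitude, longitude)
    radius_deg: float    # crude closeness threshold in degrees

    def satisfied(self, hour: int, pos: tuple) -> bool:
        close = (abs(pos[0] - self.near[0]) <= self.radius_deg and
                 abs(pos[1] - self.near[1]) <= self.radius_deg)
        return hour >= self.after_hour and close

cond = PlaybackCondition(after_hour=18, near=(35.68, 139.76), radius_deg=0.01)
print(cond.satisfied(19, (35.685, 139.765)))  # prints True
print(cond.satisfied(9, (35.685, 139.765)))   # prints False
```

When `satisfied` becomes true, the stored message-and-voice pair recorded in S152 would be played back.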
  • the content spoken by the user is reproduced.
  • a word that serves as a trigger for the difficult-to-say content may be spoken; for example, the system may be configured to say something like “Sorry, was there something you wanted to say to her?”
  • the terminal device 1 performs the guidance terminal process shown in FIG. 10, and the server 90 performs the guidance server process shown in FIG.
  • the guidance terminal process is a process that is started, for example, when the terminal device 1 is powered on and is then repeatedly executed.
  • the guidance terminal process as shown in FIG. 10, it is first determined whether or not the guidance mode is set by the user (S162). If the guidance mode is not set (S162: NO), the process of S162 is repeated.
  • in S166, it is determined whether a packet from the server 90 has been received. If no packet has been received (S166: NO), the process of S166 is repeated. If a packet has been received (S166: YES), the processing of S24 to S30 is performed, and the guidance terminal process is terminated.
  • the guidance server process is a process that is started, for example, when the server 90 is turned on and then repeatedly executed.
  • the processes of S142 to S146 described above are executed.
  • the guidance reproduction condition can be set in advance by the user via the operation unit 70 of the terminal device 1, and corresponds to, for example, time and position.
  • the guidance reproduction condition is transmitted to the server 90 at the time of packet transmission in the guidance terminal process.
  • the guidance content is generated, and the guidance content and voice (voice color) are associated with each other and recorded in the memory (S176).
  • in generating the guidance content, for example, words expressing a desire, such as “I want to” or “I hope”, included in the input character information are searched for, keywords preceding these words are extracted, and the words registered as guidance words for these keywords are output as the guidance content.
  • the keyword and the word indicating the guidance content are associated with each other in advance and recorded in the response candidate DB 105.
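The guidance-content generation above can be sketched as follows. This mirrors the described keyword-before-desire-word extraction (which reflects Japanese word order, where the object precedes the verb); the marker words and the tiny keyword table standing in for the response candidate DB 105 are assumptions for illustration.

```python
# Hypothetical keyword-to-guidance table standing in for the response
# candidate DB 105 entries described above.
GUIDANCE_WORDS = {
    "ramen": "There is a popular ramen shop two blocks ahead.",
    "rest": "A rest area is available on the second floor.",
}

DESIRE_MARKERS = ("want to", "hope")  # illustrative desire expressions

def generate_guidance(text: str):
    """Find a desire expression, take the word immediately before it
    (Japanese-style word order), and return its registered guidance phrase."""
    lowered = text.lower()
    for marker in DESIRE_MARKERS:
        idx = lowered.find(marker)
        if idx > 0:
            before = lowered[:idx].split()
            if before and before[-1] in GUIDANCE_WORDS:
                return GUIDANCE_WORDS[before[-1]]
    return None  # no guidance applicable

print(generate_guidance("I ramen want to eat"))
```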
  • the terminal device 1 is installed at a company reception desk or the like. It can also be used for telephone reception on a company's main telephone line and for telephone banking.
  • this is realized by replacing the process of S56 in the first embodiment with the reception process shown in FIG.
  • a response asking for the company name and personal name is generated (S194), and the reception process is terminated. In this process, for example, a response such as “Please tell us your name and business” is generated.
  • if the company name or personal name is included in the character information (S192: NO), the company name or personal name is extracted from the sales DB 118 and the client DB 119 (S196).
  • in the sales DB 118, the names of companies and persons in charge who have come for sales in the past, and the names of persons who only make complaints, are recorded.
  • in the client DB 119, a company name, the person in charge at that company, the person in charge on the user side (in-house side) of the terminal device 1, a schedule such as a scheduled visit time, and contact information are recorded in association with each person in charge.
  • it is determined whether the person who has come to the reception desk is a person scheduled in the client DB 119 to visit at a close time (for example, within one hour before or after the current time) (S202). If the person is visiting at a close time (S202: YES), the contact information of the person in charge of this visitor is extracted from the client DB 119, and the person in charge is connected so that the person in charge and the visitor can have a conversation (S204). In this process, it is sufficient to connect to the extension telephone, mobile phone, or the like of the person in charge.
  • an acceptance response for the client is generated (S206).
  • as the response for the client, for example, a response such as “Thank you, Mr. XXX. Please wait a moment while we connect you to the person in charge” is generated.
  • the acceptance process ends.
  • otherwise, a connection is made to a preset reception contact so that the person in charge of reception and the visitor can have a conversation (S208). Then, a normal reception response is generated (S210).
  • as the normal reception response, for example, a response such as “Please wait a moment while we connect you to reception” is generated.
  • the acceptance process ends.
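The reception process above (S192 to S210) can be condensed into a small sketch. The DB contents, return strings, and the one-hour window are illustrative assumptions; a real system would connect telephones rather than return strings.

```python
from datetime import datetime, timedelta
from typing import Optional

# Toy stand-ins for the sales DB 118 and client DB 119.
SALES_DB = {"Pushy Sales Co."}
CLIENT_DB = {"Acme Corp.": {"contact": "ext-1234",
                            "visit_at": datetime(2013, 6, 18, 14, 0)}}

def reception(company: Optional[str], now: datetime) -> str:
    """Sketch of the reception flow: ask unknown visitors for their name,
    decline known sales visitors, connect scheduled clients to their contact."""
    if company is None:                                   # S192: name missing
        return "Please tell us your name and business."   # S194
    if company in SALES_DB:                               # known sales visitor
        return "We are unable to take your visit today."
    client = CLIENT_DB.get(company)
    if client and abs(client["visit_at"] - now) <= timedelta(hours=1):
        return f"Connecting you to {client['contact']}; please wait."  # S204/S206
    return "Please wait a moment while we connect you to reception."   # S208/S210

print(reception("Acme Corp.", datetime(2013, 6, 18, 14, 30)))
```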
  • in the above configuration, the voice response system 100 is used at a workplace or company reception desk. In this configuration, the names and company names of persons who come for sales are recorded in advance in the sales DB 118 of the server 90, and a response to be played for such persons is generated.
  • the server 90 identifies the communication partner based on the input character information and connects the communication partner to a communication destination set in advance for each communication partner. According to such a voice response system 100, reception work and telephone support can be assisted. Moreover, persons who may interfere with the user's business can be turned away without the user dealing with them.
  • the server 90 extracts a keyword included in the input character information (particularly voice) and connects to a connection destination corresponding to the keyword.
  • in this case, keywords such as the name of the other party are associated with connection destinations in advance, and the connection destination is set according to the other party.
  • keywords concerning the requirements (business) included in the character information may be extracted, and the connection destination may be changed according to the requirements.
  • the server 90 may recognize the requirement the other party wishes to discuss based on the keywords and transmit an outline of it to the user. According to such a voice response system 100, an intermediary service with customers can be assisted.
  • the terminal device 1 may provide information requested by the other terminal device 1.
  • the server 90 requests the necessary information from another terminal device 1 in the process of S56 and generates a response after obtaining the necessary information from that terminal device 1. The terminal device 1 that provides the required information performs the information providing terminal process shown in FIG. 13.
  • the information providing terminal process is a process that is started when there is a request from the server 90, for example.
  • an information providing destination is extracted (S222).
  • the information providing destination indicates another terminal device 1 that requests information, and an ID for specifying the other terminal device 1 is included in the request from the server 90.
  • if the partner is one to which information provision is permitted (S224: YES), the requested information is acquired from the terminal's own memory 39 or various sensors (S226), and this data is transmitted to the server 90 (S228). If the partner is not one to which information provision is permitted (S224: NO), the server 90 is notified that the provision of information is refused (S230).
  • the server 90 requests location information from Mr. XX's terminal device 1 in response to the question “What is Mr. XX doing?”
  • the terminal device 1 returns position information.
  • the server 90 recognizes Mr. XX's action based on the position information. For example, if the position is moving along a railway track at a speed faster than a human can move, it is judged that Mr. XX is moving by train, and a response such as “Mr. XX is on the train.” is generated.
  • the server 90 acquires information recorded in another terminal device 1, different from the requesting terminal device 1, and provides it to the requesting terminal device 1. That is, in the voice response system 100, the server 90 acquires information for generating a response to the character information from the other terminal device 1.
  • a response can be generated based on information recorded in another terminal device 1.
  • when another terminal device 1 requests information for generating a response to character information, the terminal device 1 returns information corresponding to the request.
  • the terminal device 1 includes sensors for detecting position information, temperature, humidity, illuminance, noise level, and a database such as dictionary information, and extracts necessary information as required.
  • information unique to the other terminal device 1 such as the position of the other terminal device 1 can be acquired.
  • information unique to itself can be transmitted to another terminal device 1.
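The information providing terminal process (S222 to S230) can be sketched as a permission-gated request handler. The permission list, sensor table, and response structures are assumptions for illustration only.

```python
# Hypothetical local state of the providing terminal device 1.
PERMITTED_IDS = {"terminal-007"}                       # partners allowed to ask
LOCAL_SENSORS = {"position": (35.0, 137.0), "temperature": 24.5}

def handle_request(requester_id: str, item: str) -> dict:
    """Answer a server-relayed information request: refuse unknown partners
    (S224: NO / S230), otherwise read local data and return it (S226/S228)."""
    if requester_id not in PERMITTED_IDS:
        return {"status": "refused"}
    value = LOCAL_SENSORS.get(item)
    return {"status": "ok", item: value}

print(handle_request("terminal-007", "position"))
print(handle_request("terminal-999", "position"))
```

On the server side, an "ok" reply would feed the response generation (for example, inferring train travel from the returned position track).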
  • a personality DB 106 is prepared, in which personality information associating the personalities of users or persons related to the users with preset categories is recorded. For example, as shown in FIG. 14, the personality DB 106 records the names of users and related parties in association with the personality classifications of these persons.
  • a personality test is performed on users and related parties, and the test results are also recorded.
  • a known personality analysis technique (the Rorschach test, the Szondi test, etc.) may be used.
  • aptitude inspection technology used for employment tests by companies and the like may be used.
  • the personality information generation process is a process that is started, for example, when an input to generate personality information is made on the terminal device 1 using the operation unit 70 or the like.
  • the microphone 37 is turned on (S242), and one of the predetermined four-choice questions is output by voice (S244).
  • the four-choice question may be acquired from the server 90, or a problem recorded in advance in the memory 39 may be asked.
  • in S246, it is determined whether there is a voice answer from the target person (user or related person). If there is no answer (S246: NO), the process of S246 is repeated. If there is an answer (S246: YES), conversation parameters such as word endings and conversation speed are extracted (S248), and it is determined whether the current question is the final question (S250). If it is not the final question (S250: NO), the next question is selected (S252), and the process returns to S242.
  • if it is the final question (S250: YES), personality analysis is performed based on the answers to the four-choice questions (S254), and personality analysis is also performed using the conversation parameters (S256).
  • as for the conversation parameters, tendencies can be captured such as: persons who are confident in themselves tend to have strong word endings, persons who are not confident tend to have weak endings, impatient persons tend to speak quickly, and calm persons tend to speak slowly.
  • these personality analysis results are comprehensively combined, for example by a weighted average (S258), and assigned to a personality classification (S260). Specifically, the personality of the subject obtained through the test is scored, and the subject is assigned to a personality classification according to the score.
  • the target person and the personality classification are associated (S262) and recorded in the personality DB 106 (S264). That is, the relationship between the target person and the personality classification is transmitted to the server 90. At this time, the test results are also transmitted to the server 90, and the server 90 constructs the personality DB 106 as shown in FIG. When these processes end, the personality information generation process ends.
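The combination step of S258 to S260 can be sketched as a weighted average mapped to score bands. The weights, score ranges, and class names below are illustrative assumptions; the patent does not fix these values.

```python
def classify_personality(test_score: float, conv_score: float,
                         w_test: float = 0.7) -> str:
    """Combine the four-choice-test score and the conversation-parameter
    score by a weighted average (S258) and map it to a personality
    classification (S260). Boundaries are hypothetical."""
    combined = w_test * test_score + (1 - w_test) * conv_score
    if combined >= 70:
        return "assertive"   # e.g. strong word endings, fast speech
    if combined >= 40:
        return "balanced"
    return "reserved"        # e.g. weak word endings, slow speech

print(classify_personality(80, 60))  # 0.7*80 + 0.3*60 = 74 → prints "assertive"
print(classify_personality(30, 50))  # 0.7*30 + 0.3*50 = 36 → prints "reserved"
```

The resulting (person, classification) pair is what would be recorded in the personality DB 106 and later used to select among response candidates.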
  • a response candidate DB 105 is prepared in which personality classifications and responses different from each other are associated with each other.
  • the server 90 acquires response candidates representing a plurality of different responses to the character information in the process of S56, selects a response to be output from the response candidates according to the personality information, and in the processes of S60 and S64, Output the selected response.
  • in the voice response system 100, the terminal device 1 generates personality information of the user or a related person based on answers to a plurality of preset questions, and the server 90 acquires the generated personality information.
  • personality information can be generated in the server 90 or the terminal device 1. Further, in the voice response system 100, the calculation unit 101 generates personality information of the user or related person based on the character string included in the input character information.
  • personality information can be generated in the process in which the user uses the voice response system 100.
  • according to such a voice response system 100, a different response can be made according to the personality of the user or a person related to the user (related person). Therefore, usability for the user can be improved.
  • the response may be output after being narrowed down to one according to the personality, or the voices of different voice colors may be associated with the plurality of responses and output.
  • the processing of S248 and S254 to S264 may be performed by the server 90.
  • the voice and problem may be exchanged between the terminal device 1 and the server 90 while allowing the server 90 to identify the terminal device 1.
  • the server 90 may detect any one of the user's actions and operations, and generate learning information or personality information based on these.
  • according to such a voice response system 100, for example, when it is detected that the user has run to catch the train for several consecutive days, the user can be prompted to leave the house several minutes earlier from the next day; or, when it is detected from conversation that the user tends to become angry easily, voice and music for calming the mood can be output.
  • In this embodiment, a preference DB 108 is prepared in which preference information associating the preferences of users and related persons with preset categories is recorded.
  • In the preference DB 108, the names of users and related persons are recorded in association with each person's preferences for each preference type, such as food preference (food), color preference (color), and hobby.
  • Food preferences are classified into sweet taste (sweet), spicy taste (spicy), and a middle level; color preferences are classified into warm colors (warm), cool colors (cold), and a middle level; and hobbies are classified into indoor hobbies (inside), outdoor hobbies (outside), and both indoor and outdoor hobbies (inside and outside).
  • In this embodiment, the preference information generation process shown in FIG. 17 is executed.
  • The preference information generation process is performed, for example, between S48 and S54.
  • First, keywords relating to preference are extracted from the character information (S282), and objects relating to preference are extracted from among the objects identified by image processing (S284).
  • In the preference DB 108, preference-related keywords are associated with the classifications within each preference type; in these processes, keywords and objects that appear in the preference DB 108 are extracted as preferences.
  • Next, the counter is incremented for each group of extracted preference keywords (S288). For example, when a keyword such as kimchi corresponds to the preference type "food preference" and the classification "spicy", the counter corresponding to "food preference" and "spicy" is incremented.
  • The preference information (preference DB 108) is then updated based on the counter values (S290). That is, for each preference type, the classification with the largest counter value is recorded in the preference DB 108 as the best-matching preference feature of the user or related person.
  • The preference information generation process then ends.
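The counter-based update of S288 and S290 can be sketched as follows; the keyword table and function names are hypothetical illustrations, with the preference types and classifications taken from the examples in the text.

```python
from collections import Counter, defaultdict

# Sketch of the preference counters (S288) and the best-match update (S290).
# KEYWORD_TABLE stands in for the keyword-to-classification mapping that the
# text says is held in the preference DB 108.

counters = defaultdict(Counter)  # preference type -> Counter over classifications

KEYWORD_TABLE = {                # keyword -> (preference type, classification)
    "kimchi": ("food", "spicy"),
    "chocolate": ("food", "sweet"),
    "camping": ("hobby", "outside"),
}

def observe(keyword):
    """Increment the counter for the keyword's classification (S288)."""
    if keyword in KEYWORD_TABLE:
        ptype, cls = KEYWORD_TABLE[keyword]
        counters[ptype][cls] += 1

def best_matches():
    """Per preference type, the classification with the largest count (S290)."""
    return {ptype: c.most_common(1)[0][0] for ptype, c in counters.items()}

for word in ["kimchi", "kimchi", "chocolate", "camping"]:
    observe(word)
print(best_matches())  # -> {'food': 'spicy', 'hobby': 'outside'}
```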
  • In the response candidate DB 105, responses are prepared in which a different response is associated with each preference. In the process of S56, the server 90 acquires response candidates representing a plurality of different responses to the character information, selects the response to be output from among the candidates according to the preference information, and outputs the selected response in the processes of S60 and S64.
  • In the voice response system 100, the server 90 generates preference information indicating the preference tendencies of the user or a related person based on the character strings included in the character information, selects the response to be output from the response candidates based on the preference information, and outputs the selected response.
  • According to such a voice response system 100, a response can be made according to the preferences of the user or the related person. For example, when buying a present for a related person, if the user asks the terminal device 1 "What would Mr. XX like?", a response according to that person's preference information can be obtained.
  • The response candidate DB 105 may also have a table in which personality classifications and preference information are associated with each other.
  • For example, personality classifications and color preferences are associated with each other, and products that a recipient (for example, a woman receiving a present) can be estimated to be pleased with are arranged in a matrix.
  • With this configuration, a response can be generated in consideration of both personality and preference.
  • The terminal device 1 captures the user's actions as captured images and transmits them to the server 90.
  • The server 90 may then perform, for example, the action character input process shown in FIG.
  • The action character input process is started when part of the user's body appears in the captured image in the process of S48.
  • First, a captured image is acquired (S302). Then, it is determined whether the user intends to input characters by handwriting or by sign language (S304, S308).
  • The character input by the action is associated with the character input by speech, and it is determined whether there is a similar voice input, that is, whether the matching degree between the reference waveform based on the character and the user's pronunciation waveform is equal to or higher than a reference value (S316). If there is such a voice input (S316: YES), the user's accent and pronunciation characteristics when inputting this character are recorded in the learning DB 107 in association with the character (S318), and the action character input process ends.
  • The action according to the present embodiment is not limited to handwriting of characters or gestures (for example, sign language); it may be any action caused by muscle movement.
  • The contents of the learning DB 107 may be used in another terminal device 1 when the user uses a terminal device 1 different from the one normally used.
  • In this case, the ID and password of the normally used terminal device 1 are transmitted from the other terminal device 1 to the server 90 together with a usage request.
  • The other terminal use process is started when a usage request is received.
  • If the ID and password have been input (S332: YES), it is determined whether authentication using the ID and password has completed (S334). If the authentication has completed (S334: YES), the other terminal device 1 is notified that authentication is complete (S336), and a setting is made so that the other terminal device 1 uses the learning DB 107 of the terminal device 1 corresponding to the ID and password (S338).
  • In this way, the server 90 transfers the learning information of one terminal device 1 to another terminal device 1.
  • According to such a voice response system 100, even when a user of one terminal device 1 uses another terminal device 1, the learning information recorded for the first terminal device 1 (learning information recorded in the server 90) can be used. Therefore, the generation accuracy of character information can be improved even when another terminal device 1 is used. This is particularly effective when the user has a plurality of terminal devices 1.
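A minimal sketch of the authentication-and-sharing flow (S332 to S338) follows; the registry contents and return convention are assumptions, not the patent's actual protocol.

```python
# Sketch: authenticate the ID/password of the user's usual terminal, then
# let the other terminal use that terminal's learning DB. The registry and
# its contents are hypothetical stand-ins for the server 90's records.

registry = {"term-001": {"password": "secret", "learning_db": {"accent": "kansai"}}}

def handle_use_request(terminal_id, password):
    """S334: authenticate; S336/S338: on success, share the learning DB."""
    entry = registry.get(terminal_id)
    if entry is None or entry["password"] != password:
        return None                       # authentication failed: no access
    return entry["learning_db"]           # other terminal now uses this DB

print(handle_use_request("term-001", "secret"))  # -> {'accent': 'kansai'}
print(handle_use_request("term-001", "wrong"))   # -> None
```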
  • Further, the server 90 may output information about the user in response to an inquiry from a person other than the user.
  • According to such a voice response system 100, for example, if the user's meal contents, walking distance, and the like are recorded, questions at a hospital or the like can be answered on the user's behalf. The system may also be made to learn the user's health condition and self-introduction.
  • In this embodiment, the server 90 stores conversation contents and asks questions whose answers are contained in what it has heard. Specifically, the storage confirmation process shown in FIG. 21 is executed in S100 of the automatic conversation server process shown in FIG.
  • In the storage confirmation process, past conversation content is extracted from the learning DB 107 (S352), and a question whose answer is a keyword included in one of the conversations is generated (S353).
  • The storage confirmation process then ends.
  • According to such a voice response system 100, the user's memory ability can be checked and the user's memory can be reinforced. It is also considered effective in suppressing the progression of dementia in the elderly.
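The question-generation step (S352, S353) can be sketched as follows, assuming stored conversations are available as (topic, keyword) pairs; the pair representation and the question template are hypothetical.

```python
import random

# Sketch of the storage confirmation process: pick a keyword from stored
# conversation content and phrase a question whose answer is that keyword.

def generate_question(conversations):
    """Choose a past conversation and turn one keyword into a quiz question."""
    topic, keyword = random.choice(conversations)
    return f"Yesterday we talked about {topic}. What was the name again?", keyword

past = [("your trip", "Kyoto"), ("dinner", "tempura")]
question, answer = generate_question(past)
print(question)  # e.g. "Yesterday we talked about dinner. What was the name again?"
```

Comparing the user's reply against `answer` would complete the memory check described in the text.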
  • The voice response system according to the eleventh embodiment is configured so that a user can practice a foreign language using the terminal device 1 and the server 90.
  • In the eleventh embodiment, the pronunciation determination process 1 shown in FIG. 22, the pronunciation determination process 2 shown in FIG. 23, and the pronunciation determination process 3 shown in FIG. 24 are executed in order.
  • The server 90 executes one of the pronunciation determination processes 1 to 3 each time the voice response server process (FIG. 2) is performed.
  • Each of the pronunciation determination processes 1 to 3 is executed as the process of S56 described above.
  • In the pronunciation determination process 1, a response instructing the user to input a predetermined sentence by voice is generated (S362).
  • For example, a model sentence in the foreign language is generated, and the user is prompted to repeat the sentence, imitating the model.
  • The pronunciation determination process 1 then ends.
  • Subsequently, the pronunciation determination process 2 is performed.
  • In the pronunciation determination process 2, as shown in FIG. 23, the accuracy of the pronunciation and accent is scored (S372).
  • In this process, the speech is treated as a waveform, and the degree of coincidence between this waveform and the waveform of the model sentence is scored.
  • The score is recorded in the memory (S374), and the pronunciation determination process 2 ends. Subsequently, the pronunciation determination process 3 is performed.
  • In the pronunciation determination process 3, as shown in FIG. 24, it is first determined whether or not the score is less than a threshold value (S382).
  • If the score is less than the threshold value, a response instructing the user to input the same sentence again is generated (S384).
  • That is, a response prompting the user to speak again after the model is generated.
  • Otherwise, a response indicating that the pronunciation is good and prompting the user to input the next sentence is generated (S386). For example, a response such as "Good pronunciation. Let's move on." is generated.
  • The pronunciation determination process 3 then ends.
  • the server 90 detects the accuracy of voice pronunciation and accent input by the user, and outputs the detected accuracy.
  • the accuracy of pronunciation and accent can be confirmed. For example, it is effective when practicing a foreign language.
  • the server 90 causes the same question to be output again when the accuracy is a predetermined value or less.
  • the server 90 may output a voice including a word closest to the pronunciation made by the user for confirmation when the accuracy is equal to or less than a predetermined value.
  • the user can confirm the accuracy of pronunciation and accent.
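The scoring and retry behavior of pronunciation determination processes 2 and 3 can be sketched as follows; the waveform-coincidence score below is a crude stand-in, since the patent does not specify the matching method, and the threshold value is hypothetical.

```python
# Sketch of S372 (score the match between model and spoken waveforms) and
# S382-S386 (retry below the threshold, otherwise advance). Waveforms are
# modeled as plain sample lists; the scoring formula is an assumption.

def score(model, spoken):
    """Crude waveform-coincidence score in [0, 100] (S372)."""
    n = min(len(model), len(spoken))
    diff = sum(abs(a - b) for a, b in zip(model[:n], spoken[:n])) / n
    return max(0.0, 100.0 - 100.0 * diff)

def next_prompt(model, spoken, threshold=70.0):
    """S382-S386: repeat the same sentence or move on to the next one."""
    if score(model, spoken) < threshold:
        return "Please try the same sentence again."
    return "Good pronunciation. Let's move on."

model = [0.1, 0.5, 0.9, 0.5, 0.1]
print(next_prompt(model, [0.1, 0.5, 0.9, 0.5, 0.1]))  # perfect match -> move on
print(next_prompt(model, [0.9, 0.1, 0.2, 0.9, 0.8]))  # poor match -> retry
```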
  • Next, a voice response system according to the twelfth embodiment will be described.
  • In the twelfth embodiment, the user's emotion is detected from the voice input by the user, and a response that soothes the user is generated according to the emotion.
  • In the twelfth embodiment, the emotion determination process shown in FIG. 25 and the emotion response generation process shown in FIG. 26 are executed.
  • The emotion determination process is performed as the details of the process of S50 described above. As shown in FIG. 25, the user's emotion is first scored from the input voice; the emotion is then classified according to the score and recorded in the memory (S394).
  • The emotion response generation process is executed in the process of S56 described above. Specifically, as shown in FIG. 26, the emotion classification set in the emotion determination process is first determined (S412). If the emotion classification is normal (S412: Normal), an ordinary greeting such as "Hello" is generated as the response (message) (S414).
  • Further, the server 90 detects the user's irritation or agitation by detecting unexpectedly produced voice, and generates a message for calming the irritation or agitation.
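The classify-then-respond flow can be sketched as follows; the score thresholds, category names, and messages are hypothetical, with only the "normal" greeting taken from the text.

```python
# Sketch of the emotion determination (S394) and emotion response generation
# (S412-S414): classify a scored emotion and pick a response template.

def classify(score):
    """Map an emotion score to a category (S394); thresholds are assumed."""
    if score > 70:
        return "angry"
    if score < 30:
        return "sad"
    return "normal"

RESPONSES = {
    "normal": "Hello!",  # S414: ordinary greeting for the normal category
    "angry": "Take a deep breath. How about some calm music?",
    "sad": "That sounds tough. I'm here if you want to talk.",
}

def respond(score):
    return RESPONSES[classify(score)]

print(respond(50))  # -> Hello!
print(respond(90))  # calming message for an irritated user
```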
  • When the terminal device 1 receives a voice input such as "Please guide me to the visible tower", guidance processing is performed in the process of S56.
  • In the guidance process, position information is first acquired from the GPS receiver 27 or the like of the terminal device 1 (S432).
  • Next, the target object is identified from among the objects in the captured image, and its position is specified (S434).
  • In this process, the position of the object is specified in map information (which may be acquired from the outside or held by the server 90) based on the shape, relative position, and the like of the object. For example, when a tower appears in the captured image, the tower is identified on the map from the position of the terminal device 1 and the shape of the tower.
  • Then, a response for guiding the user along the route is generated (S440).
  • In this process, a response similar to guidance by a navigation device may be generated.
  • The guidance process then ends.
  • The automatic conversation server process may also be used to reproduce a message on the condition that the user has reached the guided destination.
  • In the voice response system 100, when character information is input, the server 90 generates a response corresponding to a captured image of the surroundings of the voice response system 100 and outputs the response by voice.
  • According to such a voice response system 100, a response corresponding to the captured image can be output by voice. Therefore, usability can be improved compared with a configuration that generates responses without using captured images.
  • Further, the server 90 searches the captured image by image processing for an object mentioned in the character information, specifies the position of the found object, and guides the user to that position.
  • According to such a voice response system 100, the user can be guided to an object in the captured image.
  • Further, when guiding the user to a destination, the server 90 acquires route information such as the weather, temperature, humidity, traffic information, and road surface conditions along the route to the destination, and outputs the route information by voice.
  • According to such a voice response system 100, the situation along the route to the destination (route information) can be communicated to the user by voice.
  • Further, character information requesting a description of what is recognized may be input, and what (or who) is recognized from the captured image may be output by voice.
  • Further, instead of the process of S48, the server 90 may acquire a moving image capturing the shape of the user's mouth while the user inputs character information by voice.
  • In this case, instead of the process of S52, the voice may be converted into character information, and the character information may be corrected by estimating unclear parts of the voice based on the moving image.
  • According to this configuration, the utterance content can be estimated from the shape of the mouth, so unclear parts of the voice can be estimated well.
  • The voice response system according to the fourteenth embodiment requests the user to perform a predetermined action and determines whether the user has performed the action as requested.
  • In the fourteenth embodiment, the movement request process 1 shown in FIG. 28 and the movement request process 2 shown in FIG. 29 are carried out in order.
  • When started, the movement request process 1 outputs a response (message) instructing the user to move the line of sight or the head to a predetermined position, as shown in FIG. 28 (S452).
  • Then, the movement request process 2 is started.
  • In the movement request process 2, as shown in FIG. 29, it is determined whether or not the position of the line of sight or the head has moved as instructed (S462).
  • The user's action is detected by performing image processing on an image captured by the camera or by using the detection results of the various sensors of the terminal device 1.
  • A known gaze recognition technique may be employed for this purpose.
  • The movement request process 2 then ends.
  • In the voice response system 100, the user's line of sight is detected, and if the line of sight does not move to a predetermined position in response to a call, a voice requesting the user to move the line of sight to that position is output.
  • According to such a voice response system 100, the user can be made to look at a specific position. Therefore, safety confirmation when driving a vehicle can be performed reliably.
  • Further, the server 90 observes the positions of body parts and the facial expression, and outputs a voice requesting the user to change the position of a body part or the facial expression when there is little change in response to a call.
  • According to such a voice response system 100, the position of the user's body parts can be guided to a specific position, or the user can be guided to make a specific facial expression.
  • This configuration can be used, for example, when driving a vehicle or undergoing a physical examination.
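The check of S462 can be sketched as a distance test on the detected gaze position; the coordinate scheme and tolerance are assumptions, with the actual detection delegated to a known gaze recognition technique as the text notes.

```python
# Sketch: did the detected gaze position reach the requested target (S462)?
# Positions are normalized (x, y) coordinates; tolerance is hypothetical.

def gaze_moved_as_instructed(target, detected, tolerance=0.1):
    """Return True when the detected gaze is within tolerance of the target."""
    dx = target[0] - detected[0]
    dy = target[1] - detected[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance

target = (0.8, 0.2)                                    # e.g. toward a mirror
print(gaze_moved_as_instructed(target, (0.82, 0.21)))  # True: moved as asked
print(gaze_moved_as_instructed(target, (0.1, 0.5)))    # False: request again
```

When the result is False, the system would re-output the request of S452, matching the retry behavior described above.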
  • In this embodiment, the broadcast music complement process shown in FIG. 30 is performed as the details of S56 described above.
  • In the broadcast music complement process, it is first determined whether or not the broadcast program or the music (the song, if the user is singing) has been interrupted (S482).
  • If there is an interruption (S482: YES), the broadcast program or music synchronized in the process of S492 described later is set as the response content (S484), and the broadcast music complement process ends. If there is no interruption (S482: NO), the broadcast program being viewed is acquired (S486), or if music is being played, the corresponding music is acquired (S488).
  • In the karaoke DB 116, music and lyrics are recorded in association with each other, and when music is acquired in this process, the music is acquired together with its lyrics. Subsequently, the broadcast program or music being viewed by the user is identified (S490). This broadcast program or music is then acquired and prepared so that it can be played back in synchronization with what the user is viewing (S492), and the broadcast music complement process ends.
  • In this way, the server 90 acquires a broadcast program similar to the broadcast program being viewed by the user and, when the broadcast program is interrupted, complements the interruption by outputting the acquired broadcast program.
  • Further, the server 90 compares the song with lyrics against the user's singing, and outputs the lyrics by voice only in the parts where the user's singing is absent.
  • According to such a voice response system 100, it is possible to fill in the portions that a user of a so-called karaoke apparatus cannot sing (portions where the lyrics are interrupted).
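The lyric-filling behavior can be sketched with time-stamped lyric lines and the intervals in which the user's singing was detected; the timestamp representation is an assumption, since the patent only says lyrics are output where the user's singing is absent.

```python
# Sketch: output by voice only the lyric lines during which no singing was
# detected from the user. Lyrics are (start, end, text) tuples; detected
# singing is a list of (start, end) intervals. Both are hypothetical shapes.

def lines_to_fill(lyrics, sung_intervals):
    """Return lyric lines whose time span overlaps no sung interval."""
    def sung(start, end):
        return any(s < end and e > start for s, e in sung_intervals)
    return [text for start, end, text in lyrics if not sung(start, end)]

lyrics = [(0, 5, "line one"), (5, 10, "line two"), (10, 15, "line three")]
user_sang = [(0, 5), (10, 15)]            # the user missed the middle line
print(lines_to_fill(lyrics, user_sang))   # -> ['line two']
```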
  • In the voice response system according to the sixteenth embodiment, when a character appears in the captured image and the terminal device 1 receives a question from the user about how to read the character, the reading of the character is acquired from externally obtained character information and output by voice.
  • In the sixteenth embodiment, the character explanation process shown in FIG. 31 is performed as the details of S56 described above.
  • In the character explanation process, as shown in FIG. 31, it is first determined whether or not a reading question such as "How do you read this?" has been received (S502). If a reading question has been received (S502: YES), the reading of the image-recognized character is searched for on another server or the like connected via the Internet 85 (S504), the obtained reading is set as the response (S506), and the character explanation process ends.
  • In this embodiment, the server 90 detects abnormal actions or states of the user of the terminal device 1 and performs a notification process when there is an abnormality.
  • In this configuration, the terminal device 1 performs the action response terminal process shown in FIG. 32, and the server 90 performs the action response server process.
  • In the action response terminal process, as shown in FIG. 32, outputs from the various sensors mounted on the terminal device 1 are first acquired (S522), and an image captured by the camera 41 is acquired (S524). The acquired sensor outputs and captured image are then transmitted as packets to the server 90 (S526), and the action response terminal process ends.
  • In the action response server process, the processes of S42 to S44 described above are first performed. Subsequently, the user's action is specified based on the position information of the terminal device 1 (the detection result of the GPS receiver 27) (S532), and the user's environment is detected based on the detection results of the temperature sensors 15, 19, and the like (S534). An abnormality is then detected (S536).
  • In this process, an abnormality is detected based on changes in the position information and the environment. For example, when the user does not move in a place where the temperature is very high or very low, or when the user is in a place where the user does not normally go, an abnormality is detected (S536). Alternatively, the position information and the environment may be scored, and an abnormality determined when the score falls below a reference value (outside the reference range).
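The check of S536 can be sketched as follows; the two inputs and the thresholds are hypothetical simplifications of the position-change and environment criteria described above.

```python
# Sketch of the abnormality check (S532-S536): flag an abnormality when the
# user is not moving in an extreme-temperature environment. Thresholds are
# assumed values, not the patent's reference range.

def abnormal(moved_metres_last_10min, temperature_c):
    """Return True when the situation looks abnormal (S536)."""
    extreme_heat_or_cold = temperature_c > 35 or temperature_c < 0
    not_moving = moved_metres_last_10min < 5
    return extreme_heat_or_cold and not_moving  # e.g. motionless in severe heat

print(abnormal(2, 38))    # True: user not moving in high temperature
print(abnormal(300, 38))  # False: moving normally despite the heat
```

The same shape extends to the scoring variant: compute a score from these inputs and compare it against a reference range instead of combining boolean tests.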
  • the server 90 detects the user's behavior and the surrounding environment of the user, and generates a message according to the detected behavior and the surrounding environment.
  • According to such a voice response system 100, it is possible to warn the user of a dangerous place or an area where entry is prohibited. It is also possible to detect that the user is behaving abnormally.
  • the server 90 determines a health condition based on a captured image obtained by capturing the user, and generates a message according to the health condition. According to such a voice response system 100, the health condition of the user can be managed.
  • the server 90 notifies a predetermined contact when the health condition is lower than the reference value. According to such a voice response system 100, when the user's health state is equal to or less than a reference value, a report can be made. Therefore, the abnormality can be notified to the other person earlier.
  • Embodiments of the present invention are not limited to the above-described embodiments, and can take various forms as long as they belong to the technical scope of the present invention.
  • The voice response system 100 may mediate exchanges between two or more parties. Specifically, when it is necessary to yield the right of way at an intersection or the like, the terminal devices 1 may negotiate which vehicle enters the intersection first. In this case, each terminal device 1 transmits its direction of movement when approaching the intersection and its approach speed to the server 90, and the server 90 sets a priority order for the terminal devices 1 according to the directions of movement and approach speeds, generating and outputting a voice such as "Wait" or "You may enter" according to the priority order.
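The server-side priority assignment can be sketched as follows; the rule that the faster-approaching vehicle enters first is a hypothetical placeholder, since the patent leaves the priority rule open.

```python
# Sketch: the server 90 orders the terminals approaching an intersection and
# announces "You may enter" or "Please wait" per the priority order. The
# speed-based rule and message wording are assumptions.

def assign_priority(vehicles):
    """vehicles: list of (terminal_id, approach_speed_kmh).
    Return terminal ids ordered by priority (fastest first)."""
    return [tid for tid, _ in sorted(vehicles, key=lambda v: -v[1])]

order = assign_priority([("car-A", 20), ("car-B", 35)])
print(order)  # -> ['car-B', 'car-A']
for i, tid in enumerate(order):
    print(tid, "You may enter" if i == 0 else "Please wait")
```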
  • When the terminal device 1 receives an incoming call for communication that requires a real-time response, such as voice communication, the incoming call may be accepted only when it is convenient for the user. Specifically, when the user's face can be imaged by the camera 41, it may be assumed that it is convenient for the user, and the incoming call may be accepted.
  • Further, the user's situation may be communicated to a caller who is waiting for a response from the user. For example, if the user's schedule is managed in the terminal device 1 and the user does not respond to an incoming call, the schedule can be searched to determine what the user is doing, and the caller can be told the user's schedule or when the user will be able to respond.
  • The user's location may also be communicated to the caller. For example, if the user is connected to the Internet or the like via a smartphone or a personal computer, it can be determined which terminal is being operated, and it is conceivable to identify the user's location from this information and convey it to the caller.
  • Whether or not the user can respond to an incoming call may also be determined using position information from GPS or the like. Based on the position information, it can be determined whether the user is in a car, at home, and so on. For example, if the user is on the move or in bed, it can be judged that the user is in a highly public place or asleep and therefore cannot answer the call. When the call cannot be answered in this way, the caller can be informed of what the user is doing, as described above.
  • A configuration using security cameras is also conceivable.
  • In recent years, security cameras have been installed in various locations, so the user's position can be recognized by using these security cameras together with a person-identification technique such as face recognition.
  • A situation determination using the security cameras, such as what the user is doing (whether or not the user can answer the telephone), may also be performed.
  • Whether or not an incoming call can be answered can also be determined based on conditions such as whether another fixed telephone is in use (an incoming call cannot be answered while the fixed telephone is in use).
  • When the user of the terminal device 1 wants to have a conversation with someone, the learning results on the user's personality may be used to call the terminal device of a user, among an unspecified number of users, who is estimated to be highly compatible.
  • In this case, a topic likely to enliven the conversation (a topic that both users are interested in, extracted using the learning results) may be suggested to the users.
  • When the voice response device has not been used for a long time (when the user has not spoken for more than a reference time), the voice response device may say a few words to the user. At this time, the words to be spoken may be selected using position information such as GPS.
  • the terminal device 1 and the server 90 in the above embodiment correspond to an example of the voice response device of the present invention. Further, the processes of S22 and S56 in the above embodiment correspond to an example of a response acquisition unit of the present invention.
  • the process of S14 in the above embodiment corresponds to an example of the voice transmission means of the present invention.
  • the response candidate DB 105 in the above embodiment corresponds to an example of a response recording unit of the present invention.
  • process of S56 in the above embodiment corresponds to an example of the character information acquisition means of the present invention. Further, the processing of S22 and S56 in the above embodiment corresponds to an example of a response acquisition unit of the present invention.
  • the processes of S28, S60, and S64 in the above embodiment correspond to an example of the audio output means of the present invention.
  • the processing of S254, S258, and S260 in the above embodiment corresponds to an example of the first personality information generation unit and the second personality information generation unit of the present invention.
  • The process of S56 in the above embodiment corresponds to an example of the character information acquisition means of the present invention.
  • processing of S22 and S56 in the above embodiment corresponds to the response acquisition means of the present invention.
  • processes of S28, S60, and S64 in the above embodiment correspond to an example of the audio output means of the present invention.
  • processing of S254, S258, and S260 in the above embodiment corresponds to an example of the first personality information generation unit and the second personality information generation unit of the present invention.
  • processing of S48 and S56 in the above embodiment corresponds to an example of a response generation unit of the present invention.
  • processes of S28, S60, and S64 in the above embodiment correspond to an example of the audio output means of the present invention.
  • the process of S48 corresponds to an example of the voice input moving image acquiring means of the present invention.
  • The process of S52 in the above embodiment corresponds to an example of the character information conversion means of the present invention.
  • the preference information generation process in the above embodiment corresponds to an example of the preference information generation means of the present invention.
  • The process of S56 in the above embodiment corresponds to an example of the response candidate acquisition means of the present invention.
  • the action character input process in the above embodiment corresponds to an example of the character information generating means of the present invention.
  • The other terminal use processing in the above embodiment corresponds to an example of the other device information acquisition means and the transfer means of the present invention.
  • process of S98 in the above embodiment corresponds to an example of the reproduction condition determining means of the present invention.
  • The process of S100 in the above embodiment corresponds to an example of the message reproduction means of the present invention.
  • processing of S116 in the above embodiment corresponds to an example of the non-response transmission means of the present invention.
  • The process of S372 in the above embodiment corresponds to an example of the speech accuracy detection means of the present invention.
  • process of S374 in the above embodiment corresponds to an example of the accuracy output means of the present invention.
  • The process of S204 in the above embodiment corresponds to an example of the connection control means of the present invention.
  • process of S50 in the above embodiment corresponds to an example of the emotion determination means of the present invention.
  • The process of S438 in the above embodiment corresponds to an example of the route information acquisition means of the present invention.
  • process of S462 in the above embodiment corresponds to an example of the line-of-sight detection means of the present invention.
  • process of S464 in the above embodiment corresponds to an example of a line-of-sight movement request transmission unit of the present invention.
  • process of S464 in the above embodiment corresponds to an example of a change request transmission unit of the present invention.
  • The process of S486 in the above embodiment corresponds to an example of the broadcast program acquisition means of the present invention.
  • processing of S484 in the above embodiment corresponds to an example of the broadcast program complementing means and the lyrics adding means of the present invention.
  • processing of S504 and S506 in the above embodiment corresponds to an example of the reading output means of the present invention.
  • processing of S522 and S524 in the above embodiment corresponds to an example of the behavior environment detection means of the present invention.
  • process of S538 in the above embodiment corresponds to an example of the health condition determining means of the present invention.
  • The process of S540 in the above embodiment corresponds to an example of the health message generation means of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)

Abstract

A voice response device that makes voice responses to input character information. The device comprises: a response acquisition means for acquiring a plurality of responses to the character information; and a voice output means for outputting the plurality of responses using as many different voices. This voice response system can output a plurality of responses in different voices, so that even when the response to an item of character information cannot be narrowed down to a single one, the different responses can be output in different voices in a manner easily understood by the user.
PCT/JP2013/064918 2012-06-18 2013-05-29 Voice response device (Dispositif de réponse vocale) WO2013190963A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014521255A JP6267636B2 (ja) 2012-06-18 2013-05-29 音声応答装置

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2012137065 2012-06-18
JP2012-137065 2012-06-18
JP2012137067 2012-06-18
JP2012137066 2012-06-18
JP2012-137067 2012-06-18
JP2012-137066 2012-06-18

Publications (1)

Publication Number Publication Date
WO2013190963A1 true WO2013190963A1 (fr) 2013-12-27

Family

ID=49768566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/064918 WO2013190963A1 (fr) 2012-06-18 2013-05-29 Dispositif de réponse vocale

Country Status (2)

Country Link
JP (14) JP6267636B2 (fr)
WO (1) WO2013190963A1 (fr)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015182177A1 (fr) * 2014-05-28 2015-12-03 シャープ株式会社 Dispositif électronique et système de message
WO2016052164A1 (fr) * 2014-09-30 2016-04-07 シャープ株式会社 Dispositif de conversation
JP2016076117A (ja) * 2014-10-07 2016-05-12 株式会社Nttドコモ 情報処理装置及び発話内容出力方法
JP2017084177A (ja) * 2015-10-29 2017-05-18 シャープ株式会社 電子機器およびその制御方法
WO2017130497A1 (fr) * 2016-01-28 2017-08-03 ソニー株式会社 Système de communication et procédé de commande de communication
JP6205039B1 (ja) * 2016-09-16 2017-09-27 ヤフー株式会社 情報処理装置、情報処理方法、およびプログラム
JP2018167339A (ja) * 2017-03-29 2018-11-01 富士通株式会社 発話制御プログラム、情報処理装置及び発話制御方法
WO2019187590A1 (fr) * 2018-03-29 2019-10-03 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP2019535037A (ja) * 2016-10-03 2019-12-05 グーグル エルエルシー コンピュータによるエージェントのための合成音声の選択
US10853747B2 (en) 2016-10-03 2020-12-01 Google Llc Selection of computational agent for task performance
US10854188B2 (en) 2016-10-03 2020-12-01 Google Llc Synthesized voice selection for computational agents
JP2022017561A (ja) * 2017-06-14 2022-01-25 ヤマハ株式会社 情報処理装置、歌唱音声の出力方法、及びプログラム
US11595331B2 (en) 2016-01-28 2023-02-28 Sony Group Corporation Communication system and communication control method
US11663535B2 (en) 2016-10-03 2023-05-30 Google Llc Multi computational agent performance of tasks
JP7555620B2 (ja) 2014-12-25 2024-09-25 Case特許株式会社 情報処理システム、電子機器、情報処理方法、及びコンピュータプログラム

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013190963A1 (fr) * 2012-06-18 2013-12-27 エイディシーテクノロジー株式会社 Dispositif de réponse vocale
KR20200101975A (ko) 2018-03-16 2020-08-28 스미또모 덴꼬오 하드메탈 가부시끼가이샤 표면 피복 절삭 공구 및 그 제조 방법
KR20230002868A (ko) 2020-08-19 2023-01-05 니뽄 다바코 산교 가부시키가이샤 담배 상품용 포장재 및 담배 상품용 패키지
KR20230002883A (ko) 2020-08-19 2023-01-05 니뽄 다바코 산교 가부시키가이샤 담배 상품용 포장재 및 담배 상품용 패키지
WO2023047487A1 (fr) 2021-09-22 2023-03-30 株式会社Fuji Système de reconnaissance de situation, dispositif de réponse vocale et procédé de reconnaissance de situation
US12032807B1 (en) 2021-11-08 2024-07-09 Arrowhead Center, Inc. Assistive communication method and apparatus

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000181475A (ja) * 1998-12-21 2000-06-30 Sony Corp 音声応答装置
JP2007148039A (ja) * 2005-11-28 2007-06-14 Matsushita Electric Ind Co Ltd 音声翻訳装置および音声翻訳方法

Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58104743U (ja) * 1982-01-13 1983-07-16 日本精機株式会社 車輛用音声報知装置
JP2912394B2 (ja) * 1989-10-04 1999-06-28 株式会社日立製作所 自動車電話位置通知装置
JP3120995B2 (ja) 1990-07-03 2000-12-25 本田技研工業株式会社 塗料温度調節システム
JP3674990B2 (ja) * 1995-08-21 2005-07-27 セイコーエプソン株式会社 音声認識対話装置および音声認識対話処理方法
US5766015A (en) * 1996-07-11 1998-06-16 Digispeech (Israel) Ltd. Apparatus for interactive language training
JP3408425B2 (ja) * 1998-08-11 2003-05-19 株式会社日立製作所 自動取次方法及びその処理プログラムを記録した媒体
EP1139318A4 (fr) * 1999-09-27 2002-11-20 Kojima Co Ltd Systeme d'evaluation de la prononciation
JP2001256036A (ja) 2000-03-03 2001-09-21 Ever Prospect Internatl Ltd 機器との情報授受方法及び当該方法を適用した対話機能を有する機器並びにこれら機器を複合させたライフサポートシステム
JP2001330450A (ja) * 2000-03-13 2001-11-30 Alpine Electronics Inc ナビゲーション装置
US6731307B1 (en) * 2000-10-30 2004-05-04 Koninklije Philips Electronics N.V. User interface/entertainment device that simulates personal interaction and responds to user's mental state and/or personality
CN1266625C (zh) * 2001-05-04 2006-07-26 微软公司 用于web启用的识别的服务器
JP2002342356A (ja) * 2001-05-18 2002-11-29 Nec Software Kyushu Ltd 情報提供システム,方法,およびプログラム
JP2003023501A (ja) * 2001-07-06 2003-01-24 Self Security:Kk 単身生活者安否確認支援装置
JP2003108362A (ja) * 2001-07-23 2003-04-11 Matsushita Electric Works Ltd コミュニケーション支援装置およびコミュニケーション支援システム
JP2003108376A (ja) * 2001-10-01 2003-04-11 Denso Corp 応答メッセージ生成装置、及び端末装置
JP2003216176A (ja) * 2002-01-28 2003-07-30 Matsushita Electric Works Ltd 音声制御装置
JP4086280B2 (ja) * 2002-01-29 2008-05-14 株式会社東芝 音声入力システム、音声入力方法及び音声入力プログラム
CN100342214C (zh) * 2002-03-15 2007-10-10 三菱电机株式会社 车辆用导航装置
JP3777337B2 (ja) 2002-03-27 2006-05-24 ドコモ・モバイルメディア関西株式会社 データサーバのアクセス制御方法、そのシステム、管理装置、及びコンピュータプログラム並びに記録媒体
JP2003329477A (ja) * 2002-05-15 2003-11-19 Pioneer Electronic Corp ナビゲーション装置及び対話型情報提供プログラム
JP2004021121A (ja) * 2002-06-19 2004-01-22 Nec Corp 音声対話制御装置
JP2004030313A (ja) * 2002-06-26 2004-01-29 Ntt Docomo Tokai Inc サービス提供方法及びサービス提供システム
JP2004046400A (ja) * 2002-07-10 2004-02-12 Mitsubishi Heavy Ind Ltd ロボットの発話方法
JP2004301942A (ja) * 2003-03-28 2004-10-28 Bandai Co Ltd 音声認識装置、会話装置およびロボット玩具
JP2004364128A (ja) * 2003-06-06 2004-12-24 Sanyo Electric Co Ltd 通信装置
US7571099B2 (en) * 2004-01-27 2009-08-04 Panasonic Corporation Voice synthesis device
JP2005301914A (ja) * 2004-04-15 2005-10-27 Sharp Corp 携帯情報機器
JP2005342862A (ja) * 2004-06-04 2005-12-15 Nec Corp ロボット
JP2005352895A (ja) * 2004-06-11 2005-12-22 Kenwood Corp 車両運転者覚醒システム
JP4459238B2 (ja) 2004-12-28 2010-04-28 シャープ株式会社 携帯端末、通信端末、これらを用いた所在位置通知システム、及び所在位置通知方法
JP4924950B2 (ja) * 2005-02-08 2012-04-25 日本電気株式会社 質問応答データ編集装置、質問応答データ編集方法、質問応答データ編集プログラム
JP2006227846A (ja) * 2005-02-16 2006-08-31 Fujitsu Ten Ltd 情報表示システム
JP4586566B2 (ja) * 2005-02-22 2010-11-24 トヨタ自動車株式会社 音声対話システム
JP4631501B2 (ja) * 2005-03-28 2011-02-16 パナソニック電工株式会社 宅内システム
JP2008053989A (ja) * 2006-08-24 2008-03-06 Megachips System Solutions Inc ドアホンシステム
JP2008153889A (ja) * 2006-12-15 2008-07-03 Promise Co Ltd 応答業務取次システム
JP2008152013A (ja) * 2006-12-18 2008-07-03 Canon Inc 音声合成装置および音声合成方法
JP5173221B2 (ja) * 2007-03-25 2013-04-03 京セラ株式会社 携帯端末、情報処理システムおよび情報処理方法
JP2009093284A (ja) * 2007-10-04 2009-04-30 Toyota Motor Corp 運転支援装置
JP2009151766A (ja) 2007-11-30 2009-07-09 Nec Corp ライフアドバイザ支援システム、アドバイザ側端末システム、認証サーバ、サーバ、支援方法、及びプログラム
JP5305802B2 (ja) * 2008-09-17 2013-10-02 オリンパス株式会社 情報提示システム、プログラム及び情報記憶媒体
JP2010079149A (ja) * 2008-09-29 2010-04-08 Brother Ind Ltd 来客受付装置、担当者端末、来客受付方法、及び来客受付プログラム
JP5195405B2 (ja) * 2008-12-25 2013-05-08 トヨタ自動車株式会社 応答生成装置及びプログラム
US8499085B2 (en) * 2009-03-16 2013-07-30 Avaya, Inc. Advanced availability detection
JP5563422B2 (ja) * 2010-10-15 2014-07-30 京セラ株式会社 電子機器及び制御方法
WO2013190963A1 (fr) 2012-06-18 2013-12-27 エイディシーテクノロジー株式会社 Dispositif de réponse vocale

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000181475A (ja) * 1998-12-21 2000-06-30 Sony Corp 音声応答装置
JP2007148039A (ja) * 2005-11-28 2007-06-14 Matsushita Electric Ind Co Ltd 音声翻訳装置および音声翻訳方法

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015225258A (ja) * 2014-05-28 2015-12-14 シャープ株式会社 電子機器および伝言システム
CN106233372A (zh) * 2014-05-28 2016-12-14 夏普株式会社 电子设备以及留言系统
WO2015182177A1 (fr) * 2014-05-28 2015-12-03 シャープ株式会社 Dispositif électronique et système de message
CN106233372B (zh) * 2014-05-28 2019-07-26 夏普株式会社 电子设备以及留言系统
WO2016052164A1 (fr) * 2014-09-30 2016-04-07 シャープ株式会社 Dispositif de conversation
JP2016071247A (ja) * 2014-09-30 2016-05-09 シャープ株式会社 対話装置
JP2016076117A (ja) * 2014-10-07 2016-05-12 株式会社Nttドコモ 情報処理装置及び発話内容出力方法
JP7555620B2 (ja) 2014-12-25 2024-09-25 Case特許株式会社 情報処理システム、電子機器、情報処理方法、及びコンピュータプログラム
JP2017084177A (ja) * 2015-10-29 2017-05-18 シャープ株式会社 電子機器およびその制御方法
JPWO2017130497A1 (ja) * 2016-01-28 2018-11-22 ソニー株式会社 通信システムおよび通信制御方法
WO2017130497A1 (fr) * 2016-01-28 2017-08-03 ソニー株式会社 Système de communication et procédé de commande de communication
US11595331B2 (en) 2016-01-28 2023-02-28 Sony Group Corporation Communication system and communication control method
US11159462B2 (en) 2016-01-28 2021-10-26 Sony Corporation Communication system and communication control method
JP2018045630A (ja) * 2016-09-16 2018-03-22 ヤフー株式会社 情報処理装置、情報処理方法、およびプログラム
JP6205039B1 (ja) * 2016-09-16 2017-09-27 ヤフー株式会社 情報処理装置、情報処理方法、およびプログラム
US10853747B2 (en) 2016-10-03 2020-12-01 Google Llc Selection of computational agent for task performance
US10854188B2 (en) 2016-10-03 2020-12-01 Google Llc Synthesized voice selection for computational agents
JP2019535037A (ja) * 2016-10-03 2019-12-05 グーグル エルエルシー コンピュータによるエージェントのための合成音声の選択
US11663535B2 (en) 2016-10-03 2023-05-30 Google Llc Multi computational agent performance of tasks
JP2018167339A (ja) * 2017-03-29 2018-11-01 富士通株式会社 発話制御プログラム、情報処理装置及び発話制御方法
JP2022017561A (ja) * 2017-06-14 2022-01-25 ヤマハ株式会社 情報処理装置、歌唱音声の出力方法、及びプログラム
JP7424359B2 (ja) 2017-06-14 2024-01-30 ヤマハ株式会社 情報処理装置、歌唱音声の出力方法、及びプログラム
WO2019187590A1 (fr) * 2018-03-29 2019-10-03 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Also Published As

Publication number Publication date
JP2021184111A (ja) 2021-12-02
JP2018136540A (ja) 2018-08-30
JP2017215603A (ja) 2017-12-07
JP2018136545A (ja) 2018-08-30
JP2020038387A (ja) 2020-03-12
JP2019179243A (ja) 2019-10-17
JP2018092179A (ja) 2018-06-14
JP6969811B2 (ja) 2021-11-24
JP6552123B2 (ja) 2019-07-31
JP2018049285A (ja) 2018-03-29
JP7231289B2 (ja) 2023-03-01
JP2018136546A (ja) 2018-08-30
JP2017215602A (ja) 2017-12-07
JP2023079225A (ja) 2023-06-07
JP6267636B2 (ja) 2018-01-24
JPWO2013190963A1 (ja) 2016-05-26
JP2022062200A (ja) 2022-04-19
JP6751865B2 (ja) 2020-09-09
JP7531241B2 (ja) 2024-08-09
JP6669951B2 (ja) 2020-03-18
JP2018136541A (ja) 2018-08-30

Similar Documents

Publication Publication Date Title
JP7231289B2 (ja) 音声応答システム
US11241789B2 (en) Data processing method for care-giving robot and apparatus
US11004446B2 (en) Alias resolving intelligent assistant computing device
JP7070544B2 (ja) 学習装置、学習方法、音声合成装置、音声合成方法
US20160379107A1 (en) Human-computer interactive method based on artificial intelligence and terminal device
JP2019049742A (ja) 音声応答装置
CN109313935B (zh) 信息处理系统、存储介质和信息处理方法
WO2016181670A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations, et programme
JP2018133696A (ja) 車載装置、コンテンツ提供システムおよびコンテンツ提供方法
JP2024150630A (ja) 音声応答システム
JP2020166593A (ja) ユーザ支援装置、ユーザ支援方法及びユーザ支援プログラム
US11270682B2 (en) Information processing device and information processing method for presentation of word-of-mouth information
JP2022006610A (ja) 社会的能力生成装置、社会的能力生成方法、およびコミュニケーションロボット

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13806621

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014521255

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13806621

Country of ref document: EP

Kind code of ref document: A1