WO2016178329A1 - Information processing system, control method, and storage medium - Google Patents

Information processing system, control method, and storage medium

Info

Publication number
WO2016178329A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
advice
emotion signal
unit
processing system
Prior art date
Application number
PCT/JP2016/052858
Other languages
English (en)
Japanese (ja)
Inventor
宏 岩波
Original Assignee
ソニー株式会社 (Sony Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニー株式会社 (Sony Corporation)
Priority to JP2017516562A priority Critical patent/JP6798484B2/ja
Publication of WO2016178329A1 publication Critical patent/WO2016178329A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Definitions

  • the present disclosure relates to an information processing system, a control method, and a storage medium.
  • for example, Patent Document 1 proposes an advice system for communication between people through dialogue.
  • in the system of Patent Document 1, the degree of concentration of participants (presenters, students) in presentations, meetings, workshops, and the like is detected, and feedback is given to the participants. For example, when a student with low concentration is seated in the back, a message such as “Please speak louder” is displayed to the presenter.
  • the present disclosure proposes an information processing system, a control method, and a storage medium that can generate an advice for constructing an ideal communication state based on a signal indicating a user's emotion.
  • according to the present disclosure, an information processing system is proposed that includes: a usage setting unit that sets a usage in which the first user communicates with the second user; a communication unit that receives an emotion signal of at least one of the first user and the second user; a generation unit that generates advice to the first user based on the received emotion signal and an ideal state corresponding to the usage; and a control unit that transmits the generated advice via the communication unit to a client terminal corresponding to the first user.
  • according to the present disclosure, an information processing system is proposed that includes: a selection input unit that receives a selection of a usage in which the first user communicates with the second user; a communication unit that transmits the selected usage and an emotion signal detected from at least one of the first user and the second user to an external server that generates advice to the first user based on an ideal state corresponding to the selected usage, and that receives the advice to the first user generated according to the usage and the emotion signal; and a notification unit that notifies the first user of the advice.
  • according to the present disclosure, a control method is proposed that includes: setting a usage in which the first user communicates with the second user; receiving, by a communication unit, an emotion signal of at least one of the first user and the second user; generating advice to the first user based on the received emotion signal and an ideal state corresponding to the usage; and controlling, by a control unit, transmission of the generated advice via the communication unit to a client terminal corresponding to the first user.
  • according to the present disclosure, a storage medium is proposed that stores a program causing a computer to function as: a selection input unit that receives a selection of a usage in which the first user communicates with the second user; a communication unit that transmits the selected usage and an emotion signal detected from at least one of the first user and the second user to an external server that generates advice to the first user based on an ideal state corresponding to the selected usage, and that receives the advice to the first user generated according to the usage and the emotion signal; and a notification unit that notifies the first user of the advice.
  • “Influence” is calculated from the pattern of speaker changes during conversation. For example, when user A speaks unilaterally, or keeps talking while user B is cut off, user A's influence can be said to be large. When both parties speak over each other's words, both have large influence. When the conversation has long and irregular pauses, the influence is small.
  • “Mimicry” is calculated from the number of times and the length of time a user imitates the other person's words and gestures during the conversation. When people are absorbed in a conversation, on the same wavelength, or in sympathy with each other, they usually tend to imitate each other's gestures, hand movements, and words; a large value indicates that the two are in sync.
  • “Activity level” is calculated from the amount of gestures and the intensity of body movement. The greater the “activity level” value, the more interested the user is in the other person's story.
  • “Consistency” is calculated from the prosody of the utterance, the amount of fluctuation of gestures, and the fluctuation of the pitch, loudness, and rhythm of the speaking voice. The greater the “consistency” value, the more focused the speaker; the smaller the value, the more emotionally agitated the speech.
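The four measures above can be represented as a simple value object. The sketch below is illustrative only: the class name and the range check are assumptions, although the normalization of each signal to a value between 0 and 1 is stated later in this description.

```python
from dataclasses import dataclass

@dataclass
class EmotionSignal:
    """The four conversational measures, each normalized to the range [0, 1]."""
    influence: float    # dominance inferred from the speaker-change pattern
    mimicry: float      # imitation of the partner's words and gestures
    activity: float     # amount of gesturing and intensity of body movement
    consistency: float  # stability of pitch, loudness, and rhythm

    def __post_init__(self):
        # Reject out-of-range values early so downstream comparisons stay valid.
        for name in ("influence", "mimicry", "activity", "consistency"):
            value = getattr(self, name)
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must be in [0, 1], got {value}")

sig = EmotionSignal(influence=0.7, mimicry=0.4, activity=0.6, consistency=0.8)
```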
  • Such an emotion signal can be calculated from sensor data detected by, for example, a sensor device (a camera sensor, a microphone, a gyro sensor, an acceleration sensor, a geomagnetic sensor, or the like) worn by the user.
  • the system recognizes the state of communication based on these emotion signals, and advises the user so as to bring about the ideal communication state, that is, the ideal emotion-signal state, according to the purpose of the communication, the social position of the user, the relationship with the interlocutor, and so on.
  • for example, when user A is negotiating with user B, the information processing system acquires an emotion signal detected from at least one of user A and user B, and recognizes the current communication state. Next, the information processing system compares the ideal communication state (that is, the ideal emotion-signal state) for the “negotiation” usage with the current emotion signal, and presents advice for constructing ideal communication to user A from the user terminal 5. For example, when the value of “activity level” is lower than the ideal value, advice such as “Please speak with gestures” is presented. The advice to user A may be presented together with a radar chart showing the detected values and the ideal values of user A's emotion signals (“influence”, “mimicry”, “activity level”, and “consistency”).
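The comparison step just described can be sketched as follows. The ideal profile, the threshold margin, and all advice strings except “Please speak with gestures” (which appears in this description) are hypothetical placeholders, not values taken from the publication.

```python
# Hypothetical ideal profile for the "negotiation" usage.
IDEAL = {"influence": 0.6, "mimicry": 0.5, "activity": 0.7, "consistency": 0.6}

# Hypothetical advice texts keyed by the signal that falls short.
ADVICE = {
    "influence": "Try to speak a little longer before yielding the turn.",
    "mimicry": "Echo the other party's key words to show agreement.",
    "activity": "Please speak with gestures.",
    "consistency": "Keep your pace and volume steady.",
}

def generate_advice(current, ideal, margin=0.15):
    """Return advice for every signal more than `margin` below its ideal value."""
    order = ("influence", "mimicry", "activity", "consistency")
    return [ADVICE[k] for k in order if ideal[k] - current[k] > margin]

current = {"influence": 0.6, "mimicry": 0.5, "activity": 0.3, "consistency": 0.6}
tips = generate_advice(current, IDEAL)
# tips -> ["Please speak with gestures."]
```

Only signals that are clearly below their ideal trigger advice, which keeps the feedback sparse enough to act on mid-conversation.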
  • the information processing system according to the present embodiment can generate advice for constructing an ideal communication state based on a signal indicating the user's emotion.
  • FIG. 2 is a diagram illustrating an overall configuration of the information processing system 1 according to the present embodiment.
  • the information processing system 1 according to the present embodiment includes a sensor device 10 (10a, 10b) that is worn by a user and detects data for calculating an emotion signal, an emotion signal server 2 that calculates the emotion signal, an advice generation server 3 that generates advice based on the emotion signal, and a user terminal 5 that feeds the generated advice back to the user.
  • the sensor device 10, the emotion signal server 2, the advice generation server 3, and the user terminal 5 can be connected via the network 4.
  • the sensor device 10 may have various forms such as a necklace type as shown in FIG. 2, a glasses type, a head-mounted type, a watch type, a bracelet type, and a badge type.
  • the sensor device 10 is not limited to a device worn by the user, and may be a device possessed by the user, such as a smartphone or a mobile phone terminal.
  • the sensor device 10 may be integrated with the user terminal 5.
  • the sensor device 10 detects the user's utterances, voice volume, amount of physical activity, nodding, backchannel responses, and the like using a camera sensor, a microphone, a gyro sensor, an acceleration sensor, a geomagnetic sensor, and so on, and transmits the detected data (sensor values) to the emotion signal server 2 via the network 4.
  • the transmission to the emotion signal server 2 may be performed via a mobile terminal such as a smartphone, a mobile phone terminal, or a tablet terminal that the user has.
  • the emotion signal server 2 analyzes data (sensor value) detected by the sensor device 10 and calculates a user's emotion signal (for example, influence, mimicry, activity level, and consistency).
  • the calculated emotion signal can be stored in the emotion signal DB 22.
  • the emotion signal DB 22 also stores ideal emotion-signal values according to the usage of this system (for example, types of social communication such as “negotiation”, “listening”, “collaboration”, and “guidance”).
  • the advice generation server 3 receives a selection of a usage from the user and gives advice based on emotion-signal values so as to construct ideal communication according to that usage. Specifically, the advice generation server 3 retrieves the ideal emotion-signal value for the selected usage from the emotion signal DB 22 of the emotion signal server 2, compares it with the current emotion-signal value calculated by the emotion signal server 2, generates advice from the comparison result, and feeds the advice back to the user.
  • the advice based on the comparison result may be generated based on, for example, a case (rule) stored in the advice information DB 32, or may be generated using a recommendation technique such as machine learning or collaborative filtering.
  • the user terminal 5 is an information processing apparatus that presents a result to the user.
  • the user terminal 5 notifies the user of advice based on the comparison result transmitted from the advice generation server 3 by display output, audio output, vibration, or the like.
  • FIG. 3 is a block diagram illustrating an example of the configuration of the sensor device 10 according to the present embodiment.
  • the sensor device 10 according to the present embodiment includes a sensor unit 11, a sensor data extraction unit 12, a control unit 13, a communication unit 14, and a storage unit 15.
  • the sensor unit 11 has a function of detecting a user's movement and voice, and can be realized by, for example, a camera sensor, a microphone, a gyro sensor, an acceleration sensor, a geomagnetic sensor, and the like.
  • Sensor data extraction unit 12 extracts sensor data (captured image, audio information, etc.) from sensor unit 11.
  • the control unit 13 controls each component of the sensor device 10.
  • the control unit 13 is realized by a microcomputer having a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and a nonvolatile memory.
  • the control unit 13 controls the sensor data extracted by the sensor data extraction unit 12 to be temporarily stored in the storage unit 15 and transmitted from the communication unit 14 to the emotion signal server 2.
  • the communication unit 14 functions as an I / F (interface) that transmits / receives data to / from an external device in accordance with an instruction from the control unit 13.
  • the communication unit 14 connects to the network 4 and transmits the sensor data to the emotion signal server 2 on the network (cloud) in real time.
  • the communication unit 14 may also connect to the user terminal 5 possessed by the user via short-range wireless communication such as Wi-Fi (registered trademark), infrared communication, or Bluetooth (registered trademark), and transmit the sensor data in real time to the emotion signal server 2 via the user terminal 5.
  • the storage unit 15 stores a program for the control unit 13 to execute each process.
  • the storage unit 15 also has an area for temporarily storing sensor data.
  • FIG. 4 is a block diagram illustrating an example of the configuration of the emotion signal server 2 according to the present embodiment.
  • the emotion signal server 2 includes a control unit 20, a communication unit 21, an emotion signal DB 22, and a user information DB 23.
  • the control unit 20 controls each component of the emotion signal server 2.
  • the control unit 20 is realized by a microcomputer that includes a CPU, a ROM, a RAM, and a nonvolatile memory. As shown in FIG. 4, the control unit 20 according to this embodiment also functions as a data processing unit 20a, a data analysis unit 20b, an emotion signal calculation unit 20c, and a user management unit 20d.
  • the data processing unit 20a cleans the sensor data detected by the sensor device 10 and received by the communication unit 21, processes it into a form that can be analyzed more accurately, and outputs it to the data analysis unit 20b.
  • the data analysis unit 20b analyzes the sensor data prepared by the data processing unit 20a, extracts information such as utterances, voice volume, amount of physical activity, gestures, nodding, and backchannel responses, and outputs the information to the emotion signal calculation unit 20c.
  • the emotion signal calculation unit 20c calculates the user's emotion signal based on the information obtained by the data analysis. More specifically, the emotion signal calculation unit 20c calculates four values as emotion signals: influence, mimicry, activity level, and consistency. In this embodiment, as an example, each emotion-signal value is calculated as a normalized number between 0 and 1. Calculation of the four emotion signals is described in detail below.
  • for example, the emotion signal calculation unit 20c calculates “influence” based on the frequency of speech, the length of speech, the strength of voice, and the overlap of speech (the frequency at which speech is interrupted), obtained by analyzing voice data of the conversation between user A and user B collected by the microphone. For example, the emotion signal calculation unit 20c determines that user A's influence is small when user A speaks less frequently than user B. It determines the length of speech in the same manner (the longer the speech, the greater the “influence”).
  • the emotion signal calculation unit 20c also compares the average voice strength of user A and user B, and determines that user A's influence is small when user B's average voice strength is larger. When utterances overlap, the emotion signal calculation unit 20c determines that the influence of the user whose speech is frequently interrupted is small.
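A rough sketch of the influence determination described above: each pairwise comparison (speech frequency, total speech length, average voice strength, interruption frequency) contributes equally, yielding a value in [0, 1]. The feature names and the equal weighting are assumptions for illustration, not the publication's actual formula.

```python
def influence_score(own, other):
    """Score one participant's influence from four turn-taking comparisons,
    each contributing 0.25; returns a value in [0, 1]."""
    wins = 0
    wins += own["utterance_count"] >= other["utterance_count"]      # speaks more often
    wins += own["total_speech_sec"] >= other["total_speech_sec"]    # speaks longer overall
    wins += own["mean_intensity"] >= other["mean_intensity"]        # louder on average
    wins += own["times_interrupted"] <= other["times_interrupted"]  # cut off less often
    return wins / 4.0

user_a = {"utterance_count": 12, "total_speech_sec": 200.0,
          "mean_intensity": 0.72, "times_interrupted": 1}
user_b = {"utterance_count": 8, "total_speech_sec": 150.0,
          "mean_intensity": 0.65, "times_interrupted": 4}
```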
  • for “mimicry”, the emotion signal calculation unit 20c analyzes the movement of the user's head detected by, for example, a camera, an infrared sensor, an acceleration sensor, or a gyro sensor, or the utterances collected by the microphone, and calculates “mimicry” based on actions indicating agreement with the other party (nodding, backchannel responses, repetition of the same words, and the like). For example, the emotion signal calculation unit 20c determines that mimicry is high when the same words appear in the conversation with high probability. It also measures the appearance probability of specific backchannel words (such as “I see”, “Really?”, and “Yes”) to determine mimicry.
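A minimal sketch of the word-based mimicry measure. The English backchannel words, the token-level matching, and the equal weighting of echoed words and backchannel rate are all illustrative assumptions standing in for the publication's Japanese-language measure.

```python
# Hypothetical English stand-ins for Japanese backchannel words.
BACKCHANNELS = {"i see", "really", "yes", "right"}

def mimicry_score(utterances_a, utterances_b):
    """Estimate how much speaker B mirrors speaker A: the share of B's words
    that echo A's vocabulary, averaged with B's backchannel rate."""
    words_a = {w.lower() for u in utterances_a for w in u.split()}
    words_b = [w.lower() for u in utterances_b for w in u.split()]
    if not words_b:
        return 0.0
    echoed_share = sum(w in words_a for w in words_b) / len(words_b)
    backchannel_rate = sum(u.strip().lower() in BACKCHANNELS
                           for u in utterances_b) / len(utterances_b)
    return min(1.0, 0.5 * echoed_share + 0.5 * backchannel_rate)

score = mimicry_score(["the deadline is friday"], ["friday works", "i see"])
# score -> 0.375 (one of four words echoed, one of two utterances a backchannel)
```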
  • the emotion signal calculation unit 20c calculates “activity level” based on gesture motions obtained by analyzing movements of the user's hands and upper body detected by the camera, infrared sensor, acceleration sensor, gyro sensor, and the like. The number of the user's gestures and the movement of the upper body increase when the user is enthusiastic about or absorbed in the conversation. The emotion signal calculation unit 20c determines the activity level by quantifying the number of gestures and the magnitude of the upper-body movement.
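One way this quantification could look, as a sketch: normalize the gesture count and a body-movement magnitude (for example, accelerometer RMS) against assumed caps and average them. Both caps are hypothetical tuning constants, not values from the publication.

```python
def activity_score(gesture_count, movement_rms, max_gestures=30, max_rms=2.0):
    """Average two normalized components: gesture frequency and the RMS
    magnitude of body movement, each capped at an assumed maximum."""
    gestures = min(gesture_count / max_gestures, 1.0)
    movement = min(movement_rms / max_rms, 1.0)
    return (gestures + movement) / 2.0
```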
  • the emotion signal calculation unit 20c calculates “consistency” based on, for example, the frequency of speech, the length of speech, the strength of voice, and the speed of speech obtained by analyzing the utterances collected by the microphone. For example, the emotion signal calculation unit 20c measures changes in voice strength and voice pitch (frequency) during the conversation in time series and, based on their deviation and appearance probability, measures whether the utterance is consistent or modulated, and calculates consistency (high when consistent, low when modulated).
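A sketch of the deviation-based measure: map the average coefficient of variation of pitch and intensity onto [0, 1], so that steady delivery scores high and heavy modulation scores low. The specific mapping is an illustrative assumption.

```python
from statistics import mean, pstdev

def consistency_score(pitch_hz, intensity):
    """Map the average coefficient of variation of pitch and intensity
    onto [0, 1]: steady speech scores high, modulated speech scores low."""
    def coeff_var(xs):
        m = mean(xs)
        return pstdev(xs) / m if m else 0.0
    variation = (coeff_var(pitch_hz) + coeff_var(intensity)) / 2.0
    return max(0.0, 1.0 - variation)
```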
  • the emotion signal calculated as described above is stored in the emotion signal DB 22 and is transmitted to the advice generation server 3 via the communication unit 21.
  • the user management unit 20d manages user information stored in the user information DB 23. For example, the user management unit 20d performs processing such as registration, deletion, and update of user information in the user information DB 23.
  • User information includes, for example, the user's name, age, gender, hobbies and preferences, user ID, and speaking tendencies (average speech frequency, voice strength, voice frequency, speaking speed, frequency of gestures, and the like).
  • the communication unit 21 functions as an I / F (interface) that transmits / receives data to / from an external device in accordance with an instruction from the control unit 20.
  • the communication unit 21 is connected to the network 4 and receives sensor data sensed during conversation from the sensor device 10 or transmits a calculated emotion signal to the advice generation server 3.
  • the communication unit 21 returns an ideal emotion signal for a predetermined use extracted from the emotion signal DB 22 in response to a request from the advice generation server 3.
  • the emotion signal DB 22 stores the emotion signal calculated by the emotion signal calculation unit 20c. Further, the emotion signal DB 22 may store ideal emotion signals for different uses such as “negotiation”, “listening”, “collaboration”, and “guidance”.
  • the ideal emotion signal is, for example, an ideal value obtained by a machine learning method or a statistical method using many cases in the application as teacher data.
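As a deliberately simple stand-in for the statistical method mentioned above, an ideal profile could be the per-signal average over recorded successful cases of a given usage. The case format is hypothetical.

```python
def ideal_from_cases(cases):
    """Derive an ideal profile by averaging each emotion signal over
    successful past cases of one usage (a simple statistical stand-in)."""
    keys = ("influence", "mimicry", "activity", "consistency")
    return {k: sum(case[k] for case in cases) / len(cases) for k in keys}

cases = [
    {"influence": 0.5, "mimicry": 0.25, "activity": 0.75, "consistency": 0.5},
    {"influence": 0.75, "mimicry": 0.75, "activity": 0.25, "consistency": 1.0},
]
ideal = ideal_from_cases(cases)
```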
  • the ideal emotion signal may also be input by an administrator as a desirable numerical value for each usage.
  • the user information DB 23 stores information (user information) related to users who are users of this system.
  • FIG. 5 is a block diagram illustrating an example of the configuration of the advice generation server 3 according to the present embodiment.
  • the advice generation server 3 includes a control unit 30, a communication unit 31, and an advice information DB 32.
  • the control unit 30 controls each configuration of the advice generation server 3.
  • the control unit 30 is realized by a microcomputer that includes a CPU, a ROM, a RAM, and a nonvolatile memory. As shown in FIG. 5, the control unit 30 according to the present embodiment also functions as an application setting unit 30a, a comparison unit 30b, and an advice generation unit 30c.
  • the usage setting unit 30a sets the usage of advice generation.
  • the usage setting unit 30a may set the usage selected by the user on the usage selection screen displayed on the user terminal 5.
  • the usage of advice generation specifically refers to four categories assumed in social communication between people (or between a person and a robot): “negotiation”, “listening”, “collaboration”, and “guidance”. “Negotiation” corresponds, for example, to a scene of requesting a transfer or improved treatment from a boss within the company, or of concluding a contract with an outside sales contact or business partner. “Listening” corresponds, for example, to responding to a customer who has made a complaint (a complaint-response scene).
  • “collaboration” corresponds, for example, to a case of consulting with a partner about how to proceed with work or duties performed together
  • “guidance” corresponds, for example, to a case of educating a subordinate who is repeatedly late.
  • the user selects, on the usage selection screen described later, which of the above four categories the upcoming communication corresponds to. Because the type of communication may change dynamically in the middle of a conversation depending on how the discussion develops, the user may reselect the usage each time it changes.
  • the comparison unit 30b compares the current emotion signal of the user calculated by the emotion signal server 2 with an ideal emotion signal corresponding to the use set by the use setting unit 30a.
  • the ideal emotion signal according to the application may be acquired from the emotion signal DB 22 of the emotion signal server 2, for example, or may be stored in the internal memory of the advice generation server 3. Further, the comparison unit 30b outputs the comparison result to the advice generation unit 30c.
  • the advice generation unit 30c refers to the advice information DB 32 based on the comparison result, and generates an advice to the user so as to approach an ideal state.
  • the advice information DB 32 stores an advice set for each usage, which will be described later with reference to FIG. 10. Each advice set includes advice that depends on the combination of user A's value (the system user: “self”) and user B's (the partner's) value for each emotion signal, that is, “influence”, “mimicry”, “activity level”, and “consistency”.
  • the communication unit 31 transmits / receives data to / from an external device in accordance with an instruction from the control unit 30.
  • the communication unit 31 receives from the emotion signal server 2 the user's emotion signal calculated by the emotion signal server 2, or the ideal emotion signal for the set usage, and transmits the advice generated by the advice generation unit 30c to the user terminal 5. The communication unit 31 may also transmit the target user's emotion signal and the ideal emotion signal for the set usage together with the advice.
  • the advice information DB 32 stores advice information used when the advice generation unit 30c generates advice.
  • the advice information may be an advice set for each usage-specific emotion-signal type, which will be described later with reference to FIG. 10.
  • Such an advice set is an advice rule generated in advance based on a case.
  • the configuration of the advice generation server 3 according to the present embodiment has been described above. In FIGS. 2, 4, and 5, the advice generation server 3 and the emotion signal server 2 have been described as separate bodies, but the present embodiment is not limited to this; the advice generation server 3 and the emotion signal server 2 may be realized by a single server.
  • FIG. 6 is a block diagram illustrating an example of the configuration of the user terminal 5 according to the present embodiment.
  • the user terminal 5 includes a control unit 50, a communication unit 51, an operation input unit 52, a display unit 53, and a storage unit 54.
  • the user terminal 5 may be realized by a notebook PC (personal computer) as shown in FIG. 1, or may be realized by a tablet terminal, a smartphone, a mobile phone terminal, a desktop PC, or the like.
  • the user terminal 5 may be realized by a wearable device such as a glasses-type device, a watch-type device, or a neck device.
  • the control unit 50 controls each component of the user terminal 5. Further, the control unit 50 is realized by a microcomputer including a CPU, a ROM, a RAM, and a nonvolatile memory.
  • the control unit 50 according to the present embodiment displays a usage selection screen on the display unit 53, and controls the communication unit 51 to transmit a usage selected via the operation input unit 52 to the advice generation server 3 so as to request generation of advice according to the usage.
  • the control unit 50 controls the display unit 53 to display a display screen including the advice information received from the advice generation server 3 and the current emotion signal value or ideal signal value, and notify the user.
  • the communication unit 51 transmits / receives data to / from an external device in accordance with an instruction from the control unit 50. For example, the communication unit 51 transmits use selection information to the advice generation server 3 on the network, or receives advice information from the advice generation server 3.
  • the communication unit 51 may have a function of transmitting and receiving data by connecting to an external device present in the vicinity. Specifically, the communication unit 51 can also receive sensor data from the sensor device 10 worn by the user and transmit it to the emotion signal server 2 on the network.
  • the operation input unit 52 has a function of detecting a user operation on the user terminal 5 and outputting it to the control unit 50 as operation input information.
  • the operation input unit 52 is realized by, for example, a touch panel, and can be integrated with the display unit 53 to detect user operations on the display screen. The operation input unit 52 may also analyze sound collected by a microphone to realize voice input by the user, or analyze captured images from a camera to realize gesture input by the user.
  • the display unit 53 has a function of displaying various screens such as a menu screen, an operation screen, a usage selection screen, an advice notification screen, and is realized by, for example, a liquid crystal display.
  • when the user terminal 5 is realized by a glasses-type wearable device, the display unit 53 corresponds to the lens portion located in front of the user's eyes when worn, and may be a lens unit that performs display using, for example, hologram optical technology. Specific examples of the display screens to be displayed, including the usage selection screen and the advice notification screen, will be described later.
  • the storage unit 54 stores a program for the control unit 50 to execute each process. Further, the storage unit 54 according to the present embodiment may temporarily store the advice information received from the advice generation server 3.
  • the main configuration of the user terminal 5 has been described above.
  • the configuration of the user terminal 5 is not limited to the example illustrated in FIG. 6, and further includes, for example, a sensor unit (camera, microphone, infrared sensor, acceleration sensor, gyro sensor, etc.) and a sensor data extraction unit. You may have.
  • in this embodiment, the display unit 53 is used as an example of the output unit, but the present embodiment is not limited to this; the user terminal 5 may include a speaker or a vibration unit and output the advice information by voice or by vibration.
  • FIG. 7 is a flowchart showing an operation process of the information processing system according to the present embodiment.
  • the user terminal 5 displays an opening message and a usage input screen (steps S103 and S106). Specifically, the user terminal 5 displays a usage selection screen on the display unit 53 together with an opening message such as “Please select a usage”.
  • FIG. 8 shows an example of an initial screen including a use selection screen according to the present embodiment. As shown in FIG. 8, for example, a usage selection screen 113 is displayed on the upper right of the initial screen 100. On the usage selection screen 113, four usages such as negotiation, listening, collaboration, and guidance can be selected. In the example illustrated in FIG. 8, “listening” is selected as an example.
  • an advice display screen 114 is displayed at the lower right of the initial screen 100, and a graph screen 111 on which the value of each emotion signal is displayed is displayed on the left side of the initial screen 100.
  • the advice display screen 114 is blank, and the emotion signal value graph is not displayed.
  • the details of the scramble ON / OFF button 112 will be described in Modification 3 described later, but in this embodiment, it is always “OFF”.
  • the display of the initial screen 100 can be performed based on the display information received from the advice generation server 3.
  • an emotion signal acquisition process for the user is performed (step S112).
  • specifically, the advice generation server 3 acquires, from the emotion signal server 2 in substantially real time, the emotion signals (specifically, the values of influence, mimicry, activity level, and consistency) calculated by the emotion signal server 2 based on the user's sensing data acquired from the sensor device 10.
  • for example, user A and user B, who are communicating with each other, wear sensor devices 10a and 10b (see FIG. 2), so that user A's emotion signal based on user A's sensing data and user B's emotion signal based on user B's sensing data can both be acquired.
  • here, a significant emotion signal means an emotion-signal value from which advice can be generated by comparison with the ideal emotion signal.
  • when a significant emotion signal can be acquired (step S115/Yes), the advice generation server 3 generates advice for constructing an ideal state according to the usage based on the emotion signal (step S118).
  • the specific processing content of the advice generation process will be described later with reference to FIG. 9.
  • the advice generation server 3 transmits the generated advice information to the user terminal 5.
  • the user terminal 5 displays the advice information received from the advice generation server 3 and notifies the user (step S121). Specifically, the user terminal 5 displays advice information for approaching an ideal state corresponding to the selected application on the advice display screen 114 shown in FIG.
  • The graph screen 111 displays, together with the advice information, the user's current emotion signal values and the ideal emotion signal values received from the advice generation server 3, so that the user can intuitively grasp the current state and the ideal state. With the graph screen 111 alone, it may be impossible for the user to determine how to respond during communication in order to approach the ideal state; therefore, specific advice is displayed together with the graph. Specific display examples of the advice information and the like will be described later with reference to FIGS.
  • FIG. 9 is a flowchart showing the advice generation process according to the present embodiment.
  • When the use set based on the user's selection is “negotiation” (step S133/Yes), the advice generation server 3 extracts the “negotiation” advice set from the advice information DB 32 and acquires advice on each emotion signal (specifically, influence, mimicry, activity level, and consistency) using the advice set (step S136).
  • FIG. 10 shows an example of an advice set stored in the advice information DB 32.
  • The advice information DB 32 stores an advice set 60 for “negotiation”, an advice set 61 for “listening”, an advice set 62 for “collaboration”, and an advice set 63 for “guidance”.
  • Each advice set includes “influence” advice information 60a, “mimicry” advice information 60b, “activity level” advice information 60c, and “consistency” advice information 60d.
  • Each piece of advice information is linked to coordinates corresponding to a combination of the normalized emotion signal values (influence, mimicry, activity level, consistency) of user A (referred to as “self” in FIG. 10) and user B (referred to as “partner” in FIG. 10).
  • a description will be given using a matrix 600 in which advice is stored at each coordinate.
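As a minimal illustrative sketch (not part of the disclosure), the coordinate-based lookup in a matrix such as matrix 600 could be organized as follows; the grid granularity and the advice texts are assumptions, since the actual contents of matrix 600 are not given in this text:

```python
# Illustrative sketch of an advice lookup in a matrix like matrix 600:
# normalized emotion-signal values (0.0-1.0) of self and partner are
# quantized into grid coordinates, and each cell stores an advice text.
# The grid size and cell texts below are assumptions, not the patent's data.

GRID = 5  # assumed 5x5 quantization of the 0-1 range

def to_coord(value, grid=GRID):
    """Map a normalized value in [0, 1] to a grid index 0..grid-1."""
    return min(int(value * grid), grid - 1)

def lookup_advice(matrix, self_value, partner_value):
    """Return the advice stored at the (self, partner) coordinate."""
    return matrix[to_coord(self_value)][to_coord(partner_value)]

# Toy "influence" matrix for the use "negotiation": rows = self, cols = partner.
# The middle row stands in for the balanced region (cf. region 601).
influence_matrix = [
    ["speak up more"] * GRID,
    ["assert your points"] * GRID,
    ["keep the current balance"] * GRID,
    ["let the partner speak"] * GRID,
    ["listen more, talk less"] * GRID,
]
```

In this sketch, advice generation reduces to one table lookup per emotion signal once both users' values are known.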
  • the advice generation unit 30c extracts the “negotiation” advice set 60, and first generates “influence” advice.
  • Here, the “influence” advice generation process will be described in detail with reference to FIG. 11.
  • The “influence” of a user in communication is calculated as a numerical value normalized between 0 and 1 based on, for example, the frequency of speech, the length of speech, the strength of the voice, the overlap of speech, and the frequency with which the user's speech is interrupted when speech overlaps.
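The text names the speech features but not the exact formula; a sketch of one plausible normalization, with assumed weights and feature ranges, is:

```python
# Illustrative sketch of normalizing "influence" to [0, 1] from the speech
# features listed in the text. The weights and min-max ranges are assumptions;
# the patent only states that the result is a normalized value between 0 and 1.

def normalize(value, lo, hi):
    """Clamp and scale a raw feature into [0, 1]."""
    if hi <= lo:
        return 0.0
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def influence_score(speech_freq, speech_len, voice_strength,
                    overlaps, interrupted_freq):
    """Weighted sum of normalized features; being interrupted lowers influence."""
    f = normalize(speech_freq, 0, 30)       # utterances per minute (assumed range)
    l = normalize(speech_len, 0, 60)        # mean utterance length in seconds
    s = normalize(voice_strength, 40, 80)   # voice level in dB (assumed range)
    o = normalize(overlaps, 0, 10)          # overlapping-speech events per minute
    i = normalize(interrupted_freq, 0, 10)  # times one's speech is cut off
    # Weights sum to 1, and each term is in [0, 1], so the score stays in [0, 1].
    return 0.25 * f + 0.2 * l + 0.2 * s + 0.15 * o + 0.2 * (1.0 - i)
```

A speaker who talks often and loudly and is rarely interrupted scores near 1; a speaker who is frequently cut off scores near 0.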
  • FIG. 11 is a diagram for explaining advice generation of “influence” in the case of use “negotiation” according to the present embodiment.
  • In the use “negotiation”, the ideal state of “influence” is considered to be one in which both parties have the same degree of influence on each other. This is because it is not preferable for one party to exert a markedly greater influence in a negotiation. Therefore, the ideal combination of the influences of user A (self) and user B (partner) lies mainly in the region 601, where the influences of both are balanced.
  • The ideal value is based on, for example, data received as an “ideal emotion signal” from the emotion signal server 2.
  • The advice generation unit 30c can generate advice for user A to approach the ideal state by referring to the advice information based on the current “influence” values of user A and user B and on the ideal value.
  • The advice generation unit 30c similarly performs the above-described advice acquisition process for “mimicry”, “activity level”, and “consistency”, and acquires advice for each.
  • When the set use is “listening”, the advice generation server 3 extracts the “listening” advice set from the advice information DB 32 and similarly acquires advice on each emotion signal using the advice set (step S142).
  • When the set use is “collaboration”, the advice generation server 3 extracts the “collaboration” advice set from the advice information DB 32 and similarly acquires advice on each emotion signal using the advice set (step S148).
  • When the set use is “guidance”, the advice generation server 3 extracts the “guidance” advice set from the advice information DB 32 and similarly acquires advice on each emotion signal using the advice set (step S154).
  • In this way, the advice generation server 3 can acquire advice according to the selected use. In addition to the advice, the advice generation server 3 generates a display of, for example, user A's current emotion signal values (current values) and the ideal values according to the selected use (step S157).
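A sketch of the display information such a server might transmit to the user terminal is shown below; every field name is an assumption, since the text does not specify any wire format:

```python
import json

# Illustrative sketch of the display information the advice generation server
# could send to the user terminal (cf. steps S157 and S121): advice per emotion
# signal plus the current and ideal values used for the radar chart.

def build_display_info(use, current, ideal, advice):
    """Bundle the selected use, current values, ideal values, and advice."""
    payload = {
        "use": use,                 # selected use, e.g. "listening"
        "current_values": current,  # user A's current emotion-signal values
        "ideal_values": ideal,      # ideal values for the selected use
        "advice": advice,           # advice text per emotion signal
    }
    return json.dumps(payload)

info = build_display_info(
    "listening",
    {"influence": 0.7, "mimicry": 0.2, "activity": 0.3, "consistency": 0.4},
    {"influence": 0.3, "mimicry": 0.6, "activity": 0.6, "consistency": 0.7},
    {"influence": "let the partner talk more"},
)
```

The terminal would render `current_values` and `ideal_values` on the graph screen 111 and the `advice` texts on the advice display screen 114.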
  • FIG. 12 is a diagram showing an example of the display screen 101 for the use “listening” according to the present embodiment.
  • The display screen 101 is an example of a display screen displayed on the display unit 53 of the user terminal 5 in the display process of step S121 described above.
  • The display screen 101 includes a graph screen 111a, a use selection screen 113a, and an advice notification screen 114a.
  • Here, “listening” is selected, as shown in the use selection screen 113a.
  • On the graph screen 111a, the current value of each emotion signal of user A and the ideal value of each emotion signal for the use “listening” are shown in a radar chart.
  • Thus, user A can intuitively grasp that his or her influence is greater than in the ideal state, while his or her consistency, activity level, and mimicry are smaller than in the ideal state.
  • In the use “listening”, for example, an ideal state is one in which user A's influence is reduced so that the other party (user B) can talk, and user A is sympathizing with the other party.
  • Therefore, the ideal value of influence is small and the ideal value of mimicry is large.
  • On the advice notification screen 114a, advice information indicating what kind of response user A should take in order to approach the ideal state from the current state is displayed.
  • Specifically, advice corresponding to each emotion signal value, that is, advice regarding “influence”, advice regarding “mimicry”, advice regarding “activity level”, and advice regarding “consistency”, is displayed.
  • the advice generation server 3 can also obtain advice using an advice rule as shown in Table 1 below, for example. Such advice rules are provided for each use and are stored in the advice information DB 32.
  • The advice generation server 3 determines whether the current value of user A's emotion signal (or the combination of the emotion signal values of user A and user B; see the coordinate P shown in FIG. 11) is larger or smaller than the ideal value, and can thereby obtain appropriate advice from the above advice rules.
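Table 1 itself is not reproduced in this text; as an illustrative sketch, such an advice rule for the use “listening” could be keyed by whether each signal is above or below its ideal value. The rule texts and the tolerance are invented placeholders:

```python
# Illustrative sketch of an advice rule like Table 1 for the use "listening":
# for each emotion signal, one advice text applies when the current value is
# above the ideal value and another when it is below. All texts are assumptions.

LISTENING_RULES = {
    ("influence", "above"): "Hold back and let the partner talk.",
    ("influence", "below"): "Keep giving the partner room to speak.",
    ("mimicry", "above"): "Your mirroring is sufficient.",
    ("mimicry", "below"): "Nod and echo the partner's words more.",
    ("activity", "above"): "Calm your responses slightly.",
    ("activity", "below"): "React a little more energetically.",
    ("consistency", "above"): "Your attitude is steady; keep it.",
    ("consistency", "below"): "Keep your responses more consistent.",
}

def rule_advice(rules, signal, current, ideal, tolerance=0.05):
    """Return the rule text for a signal, or None when close enough to ideal."""
    if abs(current - ideal) <= tolerance:
        return None  # within tolerance of the ideal state, no advice needed
    key = (signal, "above" if current > ideal else "below")
    return rules.get(key)
```

A separate rule table of the same shape would exist for each use, matching the statement that such advice rules are stored per use in the advice information DB 32.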
  • FIG. 13 is a diagram showing an example of the display screen 102 for the use “negotiation” according to the present embodiment.
  • The display screen 102 includes a graph screen 111b, a use selection screen 113b, and an advice notification screen 114b.
  • Here, “negotiation” is selected, as shown in the use selection screen 113b.
  • On the graph screen 111b, the current value of each emotion signal of user A and the ideal value of each emotion signal for the use “negotiation” are shown in a radar chart.
  • Thus, user A can intuitively grasp that his or her mimicry is larger than in the ideal state, while his or her influence, consistency, and activity level are smaller than in the ideal state.
  • In the use “negotiation”, the other party is often in a stronger position. Therefore, an ideal state is one in which user A listens to the other party's thoughts without interrupting the other party's story, while keeping his or her own attitude consistent.
  • the ideal value of influence is small and the ideal value of consistency is large.
  • On the advice notification screen 114b, advice information indicating what kind of response user A should take in order to approach the ideal state from the current state is displayed.
  • the advice information according to the present embodiment may be generated according to an advice rule as shown in Table 2 below, for example.
  • The information processing system according to the first modification of the present embodiment can generate advice for approaching an ideal state based on the emotion signal of user A alone.
  • That is, this modification assumes a case where the partner, user B, does not have the sensor device 10 and user B's emotion signal cannot be acquired. In this case, a default value prepared for each use is used in place of user B's emotion signal to generate advice.
  • The default value is calculated based on, for example, the partner's past values of influence, mimicry, activity level, and consistency for each use (for example, the value with the highest appearance rate).
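As a sketch of this default-value computation, taking the value with the highest appearance rate can be approximated by binning past normalized values and returning the center of the most frequent bin; the bin width and the fallback value are assumptions:

```python
from collections import Counter

# Illustrative sketch of computing a per-use default for the partner's emotion
# signals when user B has no sensor device: the text suggests using the value
# with the highest appearance rate in the partner's past data for that use.

def default_value(history, bins=10):
    """Return the center of the most frequent bin of past normalized values."""
    if not history:
        return 0.5  # assumed fallback when no history exists for this use
    counts = Counter(min(int(v * bins), bins - 1) for v in history)
    top_bin, _ = counts.most_common(1)[0]
    return (top_bin + 0.5) / bins

# Hypothetical past "influence" values of the partner in past negotiations.
past_influence = [0.42, 0.45, 0.48, 0.44, 0.71, 0.46]
```

The resulting value would be substituted for user B's current emotion signal in the advice generation process of FIG. 9.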
  • FIG. 14 is a flowchart showing the pre-processing of the advice generation process when only user A has the sensor device 10. The process shown in FIG. 14 is performed before the advice generation process shown in FIG. 9 when an emotion signal of the partner (user B) cannot be acquired (step S201/No).
  • When the set use is “negotiation”, the advice generation server 3 extracts the “negotiation” advice set and sets the default values of influence, mimicry, activity level, and consistency for “negotiation” as the partner's values (step S206).
  • Similarly, when the set use is “listening”, the advice generation server 3 extracts the “listening” advice set and sets the default values of influence, mimicry, activity level, and consistency for “listening” as the partner's values (step S212).
  • When the set use is “collaboration”, the advice generation server 3 extracts the “collaboration” advice set and sets the default values of influence, mimicry, activity level, and consistency for “collaboration” as the partner's values (step S218).
  • When the set use is “guidance”, the advice generation server 3 extracts the “guidance” advice set and sets the default values of influence, mimicry, activity level, and consistency for “guidance” as the partner's values (step S224).
  • the information processing system according to the second modification of the present embodiment can generate advice for approaching an ideal state based on the emotion signal of only the communication partner (user B).
  • That is, in this modification, the user (user A) does not have the sensor device 10 and his or her own emotion signal cannot be acquired; therefore, the partner's emotion signal is monitored and compared with the partner's standard value (normal value) to generate advice.
  • Hereinafter, a specific description will be given with reference to FIG. 15.
  • FIG. 15 is a diagram showing a display screen example according to the second modification of the present embodiment.
  • The display screen 103 includes a graph screen 111c, a use selection screen 113c, and an advice notification screen 114c.
  • On the graph screen 111c, the partner's current emotion signal values (current values) and the partner's normal values are shown in a radar chart.
  • The advice generation server 3 compares the partner's emotion signal values with the partner's normal values, and generates advice based on, for example, an advice rule as shown in Table 3 below. Such advice rules exist for each use.
  • For example, as shown in the graph screen 111c of FIG. 15, the partner's current “influence” is smaller than the normal value, “mimicry” is larger than the normal value, “activity level” is about the normal value, and “consistency” is smaller than the normal value; advice corresponding to this state is obtained from Table 3 below.
  • In this way, the information processing system can generate advice based on the emotion signal of the partner (user B) and the partner's normal values.
  • FIG. 16 is a diagram illustrating an example of a display screen according to the third modification of the present embodiment.
  • The display screen 104 includes a graph screen 111d, a use selection screen 113d, and an advice notification screen 114d.
  • Here, “listening” is selected, as shown in the use selection screen 113d.
  • Further, the scramble ON/OFF button 112d is set to “ON”.
  • In this case, a display prompting the user to move away from the ideal state is shown on the graph screen 111d, and advice for moving away from the ideal state is displayed on the advice notification screen 114d.
  • On the graph screen 111d, the value of user A's emotion signal (“your current value” shown in FIG. 16) and the ideal value for the use “listening” are displayed on the axes of the emotion signals (the four axes).
  • Thus, user A can objectively grasp what emotion signals he or she is currently expressing to the other party, and can see whether each emotion signal should be strengthened or weakened in order to move away from the ideal state so that the other party cannot read his or her real emotions.
  • For example, since the current value of “influence” is slightly larger than the ideal value, an arrow prompting the user to further strengthen influence, so as to move further away from the ideal state, is displayed.
  • Meanwhile, since the current values of “mimicry”, “activity level”, and “consistency” are each slightly smaller than the ideal values, arrows prompting the user to further weaken “mimicry”, “activity level”, and “consistency”, respectively, are displayed.
  • The advice for moving away from the ideal state displayed on the advice notification screen 114d is generated by the advice generation server 3 based on, for example, advice rules for each use as shown in Table 4 below.
  • The display screen example according to this modification is not limited to the example shown in FIG. 16.
  • a state away from the ideal state may be indicated as the scrambled ideal value.
  • FIG. 17 is a diagram showing another display screen example according to this modification.
  • The display screen 105 includes a graph screen 111e, a use selection screen 113e, and an advice notification screen 114e.
  • On the graph screen 111e, a radar chart showing the value of the emotion signal of user A (the system user; “your current value” shown in FIG. 17), the ideal value for the use “listening”, and the scrambled ideal value is displayed.
  • Thus, user A can objectively grasp what emotion signals he or she is currently expressing to the other party, and can intuitively grasp how far he or she should move away from the ideal state so that the other party cannot read his or her real emotions.
  • FIG. 18 is a diagram illustrating another display screen example according to the third modification of the present embodiment.
  • The display screen 106 includes a graph screen 111f, a use selection screen 113f, and an advice notification screen 114f.
  • On the graph screen 111f, a radar chart showing the value of user B's emotion signal (“current value of the other party” shown in FIG. 18) and the scrambled ideal value for the use “negotiation” is displayed.
  • Thus, user A can grasp what emotion signals the other party is currently expressing, and can intuitively recognize that, in order to keep the “negotiation” communication from reaching an ideal state, the other party's emotion signals need to be brought close to the scrambled ideal values.
  • The advice notification screen 114f displays advice indicating what kind of response user A should take in order to bring the other party's emotion signals to the scrambled ideal values.
  • The advice displayed on the advice notification screen 114f is generated by the advice generation server 3 with reference to, for example, advice rules for each use as shown in Table 4 below, based on the emotion signal calculated from user B's sensor data.
  • For example, a computer program for causing hardware such as the CPU, ROM, and RAM incorporated in the sensor device 10, the emotion signal server 2, the advice generation server 3, or the user terminal 5 described above to exercise the functions of the sensor device 10, the emotion signal server 2, the advice generation server 3, or the user terminal 5 can also be created.
  • a computer-readable storage medium storing the computer program is also provided.
  • In the above description, advice and the like are mainly displayed on and output from the display unit 53 of the user terminal 5; however, it may be very difficult to look at the display screen during communication. Therefore, feedback such as the presentation of advice to the user may be realized by, for example, a head-mounted wearable device instead of the notebook PC shown in FIG. 2.
  • The head-mounted wearable device has, for example, a plurality of vibration units, and the four axes of the emotion signals shown in FIG. 8 are assigned to vibration units arranged at the front, rear, left, and right, respectively; the vibration of these units can inform the user of the current situation and the response to be taken.
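As an illustrative sketch of this axis-to-vibration-unit assignment (the actual mapping and the intensity scaling are not specified in the text, so both are assumptions):

```python
# Illustrative sketch of mapping the four emotion-signal axes to the four
# vibration units (front, rear, left, right) of the head-mounted wearable.
# Which axis maps to which unit, and the intensity scaling, are assumptions.

AXIS_TO_UNIT = {
    "influence": "front",
    "mimicry": "rear",
    "activity": "left",
    "consistency": "right",
}

def vibration_pattern(current, ideal):
    """Intensity (0-1) per unit, proportional to the gap from the ideal value."""
    pattern = {}
    for axis, unit in AXIS_TO_UNIT.items():
        gap = abs(current[axis] - ideal[axis])
        pattern[unit] = min(1.0, gap * 2.0)  # assumed scaling factor
    return pattern

# Influence far from ideal -> strong vibration at the assumed "front" unit.
pattern = vibration_pattern(
    {"influence": 0.8, "mimicry": 0.5, "activity": 0.5, "consistency": 0.5},
    {"influence": 0.3, "mimicry": 0.5, "activity": 0.5, "consistency": 0.5},
)
```

Under this sketch, the user feels vibration only on the side whose emotion signal deviates from the ideal, without needing to look at any screen.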
  • The head-mounted wearable device may also be provided with, for example, a bone conduction speaker, and the advice may be output as voice. By outputting the advice through the bone conduction speaker, user A can obtain the advice without looking at the display screen and without the communication partner noticing.
  • The user terminal 5 that provides feedback by such vibration is not limited to a head-mounted wearable device; for example, a bracelet-type (armband) wearable device or a belt-type (trunk band) wearable device, each having a function of vibrating in four directions (front, rear, right, and left), may be used.
  • Additionally, the present technology may also be configured as follows.
  • (1) An information processing system including: a use setting unit that sets a use in which a first user communicates with a second user; a communication unit that receives an emotion signal of at least one of the first user and the second user; a generation unit that generates advice to the first user based on an ideal state, according to the use, of the received emotion signal; and a control unit that performs control so as to transmit the generated advice to a client terminal corresponding to the first user via the communication unit.
  • (2) The information processing system according to (1), wherein the generation unit generates the advice to the first user so that the emotion signal of the first user approaches the ideal state according to the use.
  • (3) The information processing system according to (1) or (2), wherein the generation unit generates the advice to the first user based on correspondence information, accumulated for each use, between states of the emotion signal and advice.
  • (4) The information processing system according to any one of (1) to (3), wherein the generation unit generates the advice to the first user based on a comparison result obtained by comparing the received emotion signal with an emotion signal in the ideal state according to the use.
  • (5) The information processing system according to any one of (1) to (4), wherein the generation unit generates advice to the first user for approaching the ideal state based on a comparison result obtained by comparing a combination value of the current emotion signal of the first user and the current emotion signal of the second user with an emotion signal in the ideal state according to the use.
  • (6) The information processing system according to any one of (1) to (4), wherein the generation unit generates advice to the first user for approaching the ideal state based on a comparison result obtained by comparing a combination value of the current emotion signal of the first user and a default value of the emotion signal of the second user with an emotion signal in the ideal state according to the use.
  • (7) The information processing system according to any one of (1) to (4), wherein the generation unit sets the normal value as the ideal state based on the current emotion signal of the second user and the normal value of the second user according to the use, and generates advice to the first user for approaching the ideal state.
  • (8) The information processing system, wherein the generation unit generates advice to the first user so that the emotion signal of the first user moves away from the ideal state according to the use.
  • (9) The information processing system, wherein the generation unit generates advice to the first user so that the emotion signal of the first user approaches a non-ideal state according to the use.
  • (10) The information processing system according to any one of (1) to (9), wherein the control unit performs control so as to transmit, together with the generated advice, the emotion signal of the first user or the second user used for generating the advice and the emotion signal in the ideal state.
  • (11) An information processing system including: a selection input unit that accepts selection of a use in which a first user communicates with a second user; a communication unit that transmits the selected use to an external server that generates advice to the first user based on an ideal state, according to the use selected by the selection input unit, of an emotion signal detected from at least one of the first user and the second user, and that receives the advice to the first user generated according to the use and the emotion signal; and a notification unit that notifies the first user of the advice.
  • (12) The information processing system according to (11), further including a detection unit that detects an emotion signal of at least one of the first user and the second user, wherein the communication unit transmits the detected emotion signal to the external server.
  • (13) The information processing system according to (11) or (12), wherein the notification unit notifies the first user of the advice so that the emotion signal of the first user approaches the ideal state according to the use selected by the selection input unit.
  • (14) The information processing system according to any one of (11) to (13), wherein the notification unit notifies the first user of the advice so that the emotion signal of the first user moves away from the ideal state according to the use selected by the selection input unit.
  • (15) The information processing system according to any one of (11) to (13), wherein the notification unit notifies the first user of the advice so that the emotion signal of the first user approaches a non-ideal state according to the use selected by the selection input unit.
  • (16) The information processing system according to any one of (11) to (15), wherein the notification unit notifies the first user of the generated advice together with a graph indicating the emotion signal of the first user or the second user used for generating the advice and the emotion signal in the ideal state.
  • (17) A storage medium storing a program for causing a computer to function as: a selection input unit that accepts selection of a use in which a first user communicates with a second user; a communication unit that transmits the selected use to an external server that generates advice to the first user based on an ideal state, according to the use selected by the selection input unit, of an emotion signal detected from at least one of the first user and the second user, and that receives the advice to the first user generated according to the use and the emotion signal; and a notification unit that notifies the first user of the advice.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Physics & Mathematics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The problem addressed by the present invention is to provide an information processing system, a control method, and a storage medium capable of generating advice for constructing an ideal communication state based on a signal indicating a user's emotion. The solution according to the invention is an information processing system that includes: a use setting unit that sets a use for which a first user communicates with a second user; a communication unit that receives an emotion signal from at least one of the first user and the second user; a generation unit that generates advice for the first user based on the ideal state, corresponding to the use, of the received emotion signal; and a control unit that transmits the generated advice to a client terminal corresponding to the first user via the communication unit.
PCT/JP2016/052858 2015-05-07 2016-02-01 Information processing system, control method, and storage medium WO2016178329A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017516562A JP6798484B2 (ja) 2015-05-07 2016-02-01 Information processing system, control method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015094575 2015-05-07
JP2015-094575 2015-05-07

Publications (1)

Publication Number Publication Date
WO2016178329A1 true WO2016178329A1 (fr) 2016-11-10

Family

ID=57217688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/052858 WO2016178329A1 (fr) 2015-05-07 2016-02-01 Information processing system, control method, and storage medium

Country Status (2)

Country Link
JP (2) JP6798484B2 (fr)
WO (1) WO2016178329A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018197934A (ja) * 2017-05-23 2018-12-13 日本アイラック株式会社 Information processing device
JP2019207625A (ja) * 2018-05-30 2019-12-05 株式会社エクサウィザーズ Coaching support device and program
JP2020095747A (ja) * 2020-03-10 2020-06-18 株式会社エクサウィザーズ Coaching support device and program
JP2020130528A (ja) * 2019-02-18 2020-08-31 沖電気工業株式会社 Emotion estimation device, emotion estimation method, program, and emotion estimation system
WO2020179478A1 (fr) 2019-03-05 2020-09-10 正通 亀井 Advice presentation system
JPWO2019160104A1 (ja) * 2018-02-16 2021-01-07 日本電信電話株式会社 Non-verbal information generation device, non-verbal information generation model learning device, method, and program
JP2021111239A (ja) * 2020-01-14 2021-08-02 住友電気工業株式会社 Providing system, providing method, providing device, and computer program
JP7057029B1 (ja) 2021-06-25 2022-04-19 株式会社Kakeai Computer system, method, and program for improving how to relate to individuals in communication between two parties
JP7057011B1 (ja) 2021-06-25 2022-04-19 株式会社Kakeai Computer system, method, and program for improving how to relate to individuals in communication between two parties
WO2022264217A1 (fr) * 2021-06-14 2022-12-22 株式会社I’mbesideyou Video analysis system

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
JP7331588B2 (ja) * 2019-09-26 2023-08-23 ヤマハ株式会社 Information processing method, estimation model construction method, information processing device, estimation model construction device, and program
US11935329B2 2021-03-24 2024-03-19 I'mbesideyou Inc. Video analysis program
WO2023135939A1 (fr) * 2022-01-17 2023-07-20 ソニーグループ株式会社 Information processing device, information processing method, and program
JP7388768B2 (ja) 2022-02-01 2023-11-29 株式会社I’mbesideyou Moving image analysis program
WO2024101022A1 (fr) * 2022-11-10 2024-05-16 株式会社Nttドコモ Avatar matching device

Citations (7)

Publication number Priority date Publication date Assignee Title
JP2000005317A (ja) * 1998-06-22 2000-01-11 Seiko Epson Corp Biological state control support device and robot control device
JP2001344352A (ja) * 2000-05-31 2001-12-14 Toshiba Corp Life support device, life support method, and advertisement information providing method
JP2002051153A (ja) * 2000-08-07 2002-02-15 Fujitsu Ltd CTI server and program recording medium
JP2006061632A (ja) * 2004-08-30 2006-03-09 Ishisaki:Kk Emotion data providing device, psychological analysis device, and telephone user psychology analysis method
US20110010173A1 * 2009-07-13 2011-01-13 Mark Scott System for Analyzing Interactions and Reporting Analytic Results to Human-Operated and System Interfaces in Real Time
JP2014095753A (ja) * 2012-11-07 2014-05-22 Hitachi Systems Ltd Automatic speech recognition and speech conversion system
JP2015075908A (ja) * 2013-10-09 2015-04-20 日本電信電話株式会社 Emotion information display control device, method, and program

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP4389512B2 (ja) * 2003-07-28 2009-12-24 ソニー株式会社 Wristwatch-type conversation assistance device, conversation assistance system, eyeglass-type conversation assistance device, and conversation assistance device
JP4904188B2 (ja) * 2007-03-30 2012-03-28 三菱電機インフォメーションシステムズ株式会社 Distribution device, distribution program, and distribution system
CN103430523B (zh) * 2011-03-08 2016-03-23 富士通株式会社 Call assistance device and call assistance method


Non-Patent Citations (2)

Title
MASAHIDE YUASA ET AL.: "A State Graph Model of Negotiation Considering Subjective Factors", THE INSTITUTE OF SYSTEMS, CONTROL AND INFORMATION ENGINEERS, vol. 14, no. 9, 15 September 2001 (2001-09-15), pages 439 - 446 *
MIKA FUKUI ET AL.: "Multimodal Personal Information Provider Using Natural Language and Emotion Understanding from Speech and Keyboard Input", IPSJ SIG NOTES, vol. 96, no. 1, 11 January 1996 (1996-01-11), pages 43 - 48, XP002997271 *

Cited By (18)

Publication number Priority date Publication date Assignee Title
JP2018197934A (ja) * 2017-05-23 2018-12-13 日本アイラック株式会社 Information processing device
JP7140984B2 (ja) 2018-02-16 2022-09-22 日本電信電話株式会社 Non-verbal information generation device, non-verbal information generation model learning device, method, and program
JPWO2019160104A1 (ja) * 2018-02-16 2021-01-07 日本電信電話株式会社 Non-verbal information generation device, non-verbal information generation model learning device, method, and program
JP2019207625A (ja) * 2018-05-30 2019-12-05 株式会社エクサウィザーズ Coaching support device and program
JP7172705B2 (ja) 2019-02-18 2022-11-16 沖電気工業株式会社 Emotion estimation device, emotion estimation method, program, and emotion estimation system
JP2020130528A (ja) * 2019-02-18 2020-08-31 沖電気工業株式会社 Emotion estimation device, emotion estimation method, program, and emotion estimation system
WO2020179478A1 (fr) 2019-03-05 2020-09-10 正通 亀井 Advice presentation system
KR20210132061A (ko) 2019-03-05 2021-11-03 마사미치 가메이 Advice presentation system
US11768879B2 2019-03-05 2023-09-26 Land Business Co., Ltd. Advice presentation system
JP2021111239A (ja) * 2020-01-14 2021-08-02 住友電気工業株式会社 Providing system, providing method, providing device, and computer program
JP6994722B2 (ja) 2020-03-10 2022-01-14 株式会社エクサウィザーズ Coaching support device and program
JP2020095747A (ja) * 2020-03-10 2020-06-18 株式会社エクサウィザーズ Coaching support device and program
WO2022264217A1 (fr) * 2021-06-14 2022-12-22 株式会社I’mbesideyou Video analysis system
JP7057029B1 (ja) 2021-06-25 2022-04-19 株式会社Kakeai Computer system, method, and program for improving how to relate to individuals in communication between two parties
JP7057011B1 (ja) 2021-06-25 2022-04-19 株式会社Kakeai Computer system, method, and program for improving how to relate to individuals in communication between two parties
WO2022270632A1 (fr) * 2021-06-25 2022-12-29 株式会社Kakeai Computer system, method, and program for improving how to relate to individuals in communication between two parties
JP2023004833A (ja) 2021-06-25 2023-01-17 株式会社Kakeai Computer system, method, and program for improving how to relate to individuals in communication between two parties
JP2023004165A (ja) 2021-06-25 2023-01-17 株式会社Kakeai Computer system, method, and program for improving how to relate to individuals in communication between two parties

Also Published As

Publication number Publication date
JPWO2016178329A1 (ja) 2018-02-22
JP6992870B2 (ja) 2022-01-13
JP2021044001A (ja) 2021-03-18
JP6798484B2 (ja) 2020-12-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16789465

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017516562

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16789465

Country of ref document: EP

Kind code of ref document: A1