JPWO2022180770A5 - - Google Patents

Download PDF

Info

Publication number
JPWO2022180770A5
JPWO2022180770A5 JP2023501941A
Authority
JP
Japan
Prior art keywords
user
expression
character
present
control unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2023501941A
Other languages
Japanese (ja)
Other versions
JPWO2022180770A1 (en)
Filing date
Publication date
Application filed Critical
Priority claimed from PCT/JP2021/007279 external-priority patent/WO2022180770A1/en
Publication of JPWO2022180770A1 publication Critical patent/JPWO2022180770A1/ja
Publication of JPWO2022180770A5 publication Critical patent/JPWO2022180770A5/ja
Pending legal-status Critical Current

Links

Claims (10)

1. A program for causing a computer to function as a control unit that controls a device to present an auditory sensation, wherein the control unit:
presents the auditory sensation according to a user setting or a detection result from a sensor, and controls the device to present, as a human voice, a character expression representing a predetermined character based on the auditory sensation;
controls the device to present, with the predetermined character, a guidance expression associated with a direction;
determines a degree of match between the user's behavior and the guidance expression based on the sensor's detection result for the user's behavior; and
controls the device to change a characteristic of the guidance expression, presented audibly, according to the determined degree of match,
wherein the device presents to the user a character expression of a human voice, presented from the user and the device, such that the user builds empathy with the device.
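The control flow recited in claim 1 (present a direction-associated guidance expression, sense the user's behavior, determine a degree of match, and change the expression's characteristics accordingly) can be sketched as follows. This is an illustrative sketch only: the names `GuidanceExpression`, `match_degree`, and `adapt_expression`, and the heading-based match metric, are assumptions for illustration and are not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class GuidanceExpression:
    """A voiced cue associated with a direction (e.g. 'turn right ahead')."""
    direction: float   # target heading in degrees
    text: str
    volume: float = 1.0
    pitch: float = 1.0

def match_degree(user_heading: float, guidance: GuidanceExpression) -> float:
    """Degree of match between the user's sensed heading and the guided
    direction, normalized to [0, 1] (1.0 = perfectly aligned)."""
    # Smallest angular difference, folded into [0, 180] degrees.
    diff = abs((user_heading - guidance.direction + 180.0) % 360.0 - 180.0)
    return 1.0 - diff / 180.0

def adapt_expression(guidance: GuidanceExpression, degree: float) -> GuidanceExpression:
    """Change the characteristics of the guidance expression according to the
    determined degree of match: louder and higher-pitched as the user deviates."""
    return GuidanceExpression(
        direction=guidance.direction,
        text=guidance.text,
        volume=1.0 + (1.0 - degree),       # raise volume as mismatch grows
        pitch=1.0 + 0.5 * (1.0 - degree),  # raise pitch as mismatch grows
    )
```

In this sketch, a perfectly aligned user yields a match degree of 1.0 and the expression is left unchanged; a user heading the opposite way yields 0.0 and the cue is presented at double volume.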
2. The program according to claim 1, wherein the predetermined character includes a real character or a fictional character.
3. The program according to claim 1 or 2, wherein the control unit controls the device to change a characteristic of the character expression according to a detection result from a sensor.
4. The program according to claim 1, wherein the control unit controls the device to change an emotion expressed by the character expression according to a detection result from a sensor.
5. The program according to claim 1, wherein the guidance expression includes an expression that guides the user to perform an action including moving forward, turning, or stopping.
6. The program according to claim 1, wherein the control unit controls the device to audibly present the guidance expression to a second user in response to an operation on a terminal device by a first user.
7. The program according to any one of claims 1 to 6, wherein the control unit controls the device to audibly present the character expression corresponding to a destination, based on detecting that the user has reached the destination in accordance with the guidance expression.
8. The program according to any one of claims 1 to 7, wherein the control unit, in response to a sensor detecting a predetermined action by the user, stores emotion information corresponding to the predetermined action in association with the position where the predetermined action was detected.
9. An information processing apparatus comprising a control unit that controls a device to present an auditory sensation, wherein the control unit:
presents the auditory sensation according to a user setting or a detection result from a sensor, and controls the device to present, as a human voice, a character expression representing a predetermined character based on the auditory sensation;
controls the device to present, with the predetermined character, a guidance expression associated with a direction;
determines a degree of match between the user's behavior and the guidance expression based on the sensor's detection result for the user's behavior; and
controls the device to change a characteristic of the guidance expression, presented audibly, according to the determined degree of match,
wherein the device presents to the user a character expression of a human voice, presented from the user and the device, such that the user builds empathy with the device.
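Claim 8 recites storing emotion information corresponding to a detected user action, associated with the position where the action was detected. A minimal sketch of such a store follows; the names `EmotionStore` and `EmotionRecord` and the action-to-emotion mapping are hypothetical, introduced only to illustrate the position-keyed storage.

```python
from typing import NamedTuple, List

class EmotionRecord(NamedTuple):
    lat: float      # position where the action was detected
    lon: float
    action: str     # the detected predetermined action
    emotion: str    # emotion information corresponding to the action

class EmotionStore:
    """Stores emotion information in association with the position where
    the corresponding user action was detected."""

    def __init__(self) -> None:
        self._records: List[EmotionRecord] = []

    def record(self, lat: float, lon: float, action: str) -> EmotionRecord:
        # Hypothetical mapping from a detected action to emotion information.
        emotion = {"jump": "joy", "stop": "hesitation"}.get(action, "neutral")
        rec = EmotionRecord(lat, lon, action, emotion)
        self._records.append(rec)
        return rec

    def near(self, lat: float, lon: float, radius: float = 0.001) -> List[EmotionRecord]:
        """Recall the emotion records stored near a given position."""
        return [r for r in self._records
                if abs(r.lat - lat) <= radius and abs(r.lon - lon) <= radius]
```

Such a store would let the control unit later recall, by location, which emotions were associated with the user's past actions there.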
10. An information processing method in an information processing apparatus comprising a control unit that controls a device to present an auditory sensation, wherein the control unit:
presents the auditory sensation according to a user setting or a detection result from a sensor, and controls the device to present, as a human voice, a character expression representing a predetermined character based on the auditory sensation;
controls the device to present, with the predetermined character, a guidance expression associated with a direction;
determines a degree of match between the user's behavior and the guidance expression based on the sensor's detection result for the user's behavior; and
controls the device to change a characteristic of the guidance expression, presented audibly, according to the determined degree of match,
wherein the device presents to the user a character expression of a human voice, presented from the user and the device, such that the user builds empathy with the device.
JP2023501941A 2021-02-26 2021-02-26 Pending JPWO2022180770A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/007279 WO2022180770A1 (en) 2021-02-26 2021-02-26 Program, information processing device, and information processing method

Publications (2)

Publication Number Publication Date
JPWO2022180770A1 (en) 2022-09-01
JPWO2022180770A5 (en) 2024-02-27

Family

ID=83048938

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2023501941A Pending JPWO2022180770A1 (en) 2021-02-26 2021-02-26

Country Status (2)

Country Link
JP (1) JPWO2022180770A1 (en)
WO (1) WO2022180770A1 (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10240434A (en) * 1997-02-27 1998-09-11 Matsushita Electric Ind Co Ltd Command menu selecting method
JPH11259446A (en) * 1998-03-12 1999-09-24 Aqueous Reserch:Kk Agent device
JP5477740B2 (en) * 2009-11-02 2014-04-23 独立行政法人情報通信研究機構 Multisensory interaction system
JP5714855B2 (en) * 2010-09-16 2015-05-07 オリンパス株式会社 Image generation system, program, and information storage medium
EP3136055A4 (en) * 2014-04-21 2018-04-11 Sony Corporation Communication system, control method, and storage medium
JP2017181449A (en) * 2016-03-31 2017-10-05 カシオ計算機株式会社 Electronic device, route search method, and program
JP2018100936A (en) * 2016-12-21 2018-06-28 トヨタ自動車株式会社 On-vehicle device and route information presentation system
JP2019082904A (en) * 2017-10-31 2019-05-30 ソニー株式会社 Information processor, information processing method and program

Similar Documents

Publication Publication Date Title
JP2019096349A5 (en)
TWI713000B (en) Online learning assistance method, system, equipment and computer readable recording medium
US20160364002A1 (en) Systems and methods for determining emotions based on user gestures
Mead et al. Perceptual models of human-robot proxemics
JP2023085255A5 (en)
US10303436B2 (en) Assistive apparatus having accelerometer-based accessibility
DE60317338D1 (en) CONTROL SYSTEM FOR A LOAD HANDLING DEVICE
Hyrskykari Utilizing eye movements: Overcoming inaccuracy while tracking the focus of attention during reading
KR20160089184A (en) Apparatus and method for recognizing speech
CN107081774B (en) Robot shakes hands control method and system
GB2590473A (en) Method and apparatus for dynamic human-computer interaction
JPWO2022180770A5 (en)
JP2020202575A5 (en)
CN112219234B (en) Method, medium and system for identifying physiological stress of user of virtual reality environment
JP2022118239A5 (en)
JP4947439B2 (en) Voice guidance device, voice guidance method, voice guidance program
JP2000099306A5 (en)
Nolin et al. Activation cues and force scaling methods for virtual fixtures
KR20200134974A (en) Apparatus and method for controlling image based on user recognition
CN111027358A (en) Dictation and reading method based on writing progress and electronic equipment
KR101019655B1 (en) Apparatus and method having a function of guiding user's controlling behavior
US12019993B2 (en) Systems and methods for short- and long-term dialog management between a robot computing device/digital companion and a user
KR101938231B1 (en) Apparatus and method for estimation of user personality based on accumulated short-term personality character
JP2000352992A5 (en) Voice recognition device and navigation device
Zhou et al. Proposal and validation of an index for the operator’s haptic sensitivity in a master-slave system