WO2022239152A1 - Information presentation device, information presentation method, and program - Google Patents

Information presentation device, information presentation method, and program

Info

Publication number
WO2022239152A1
WO2022239152A1 (PCT/JP2021/018067)
Authority
WO
WIPO (PCT)
Prior art keywords
information
display
information presentation
presentation device
positional relationship
Prior art date
Application number
PCT/JP2021/018067
Other languages
French (fr)
Japanese (ja)
Inventor
宇翔 草深
Original Assignee
Nippon Telegraph and Telephone Corporation (日本電信電話株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corporation
Priority to JP2023520656A
Priority to PCT/JP2021/018067
Publication of WO2022239152A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Definitions

  • the present invention relates to an information presentation device, an information presentation method, and a program.
  • In a typical scenario, a responder (hereinafter, for example, a customer service staff) and a conversation partner (hereinafter, the counterpart of the responder) communicate by exchanging information to be conveyed, such as character information that expresses the intention of the conversation partner and character information that allows the conversation partner to understand the responses and answers of the responder.
  • Such character information for both participants is displayed on the same screen of one display at the same time.
  • For example, a cross-language dialogue device, which is an example of an information presentation device, translates the utterances of a respondent and a conversation partner who use different languages into character information that enables communication between them, and displays that information on the same screen of one display at the same time.
  • In some such devices, processing is performed to make the character information conveyed to the conversation partner easier to see by changing the display direction of only the display portion related to the conversation partner on the single display screen.
  • Patent Document 1 describes a cross-language dialogue device in which each of a plurality of terminal units enables communication between users who use different languages (hereinafter referred to collectively as conversation participants, meaning both the respondent and the conversation partner), and Patent Document 2 describes a dialogue processing method that enables communication between users who use different languages.
  • Patent Document 3 describes a display device in which, when a person communicates with a conversation partner, the display direction of information on the information display device is turned upside down so that the information the person wants to convey is presented to the conversation partner in an easy-to-see orientation.
  • Patent Document 1: JP 2004-355226 A; Patent Document 2: JP 2005-322145 A; Patent Document 3: JP 2001-195049 A
  • Patent Documents 1 and 2 propose techniques for smooth interaction, such as translation, but they do not consider the user's position or a method of dynamically controlling the layout based on that position. Further, Patent Document 3 merely turns the displayed information upside down so that it is easier for the other user to see; dynamically controlling the layout according to the users' positional relationship is not considered.
  • An object of the present invention, which has been made in view of such circumstances, is to provide an information presentation device, an information presentation method, and a program that can dynamically change the screen layout according to the users' positional relationship.
  • To solve the above problem, an information presentation device according to the present invention presents information to a plurality of users via a display, and includes a positional relationship estimation unit that acquires the users' position information and estimates the users' positional relationship with respect to the display, and a screen layout determination unit that dynamically determines a screen layout according to the positional relationship and outputs display information based on the screen layout to the display.
  • An information presentation method according to the present invention is an information presentation method in an information presentation device that presents information to a plurality of users via a display, and includes acquiring the users' position information and estimating the users' positional relationship with respect to the display, dynamically determining a screen layout according to the positional relationship, and outputting display information based on the screen layout to the display.
  • A program according to the present invention causes a computer to function as the above information presentation device.
  • According to the present invention, information can be used in a screen layout suited to the characteristics of the conversation without complicated operations.
  • FIG. 1 is a block diagram showing a configuration example of an information presentation device according to a first embodiment.
  • FIGS. 2A and 2B are schematic diagrams showing screen layouts based on an estimated positional relationship.
  • FIG. 3 is a diagram illustrating dynamic changes in screen layout according to positional relationships during dialogue.
  • FIG. 4 is a flowchart showing an example of an information presentation method executed by the information presentation device according to the first embodiment.
  • FIG. 5 is a block diagram showing a configuration example of an information presentation device according to a second embodiment.
  • FIG. 6 is a flowchart showing an example of an information presentation method executed by the information presentation device according to the second embodiment.
  • FIG. 7 is a block diagram showing a configuration example of an information presentation device according to a third embodiment.
  • FIG. 8 is a flowchart showing an example of an information presentation method executed by the information presentation device according to the third embodiment.
  • FIGS. 9A and 9B are block diagrams showing configuration examples of an information presentation device according to a fourth embodiment.
  • FIGS. 10A and 10B are schematic diagrams illustrating a display that can be viewed from any position through 360 degrees.
  • FIG. 11 is a block diagram showing a configuration example of an information presentation device according to a fifth embodiment.
  • FIG. 12 is a schematic diagram of presenting information through cooperation of a plurality of devices.
  • FIG. 13 is a flowchart showing an example of an information presentation method executed by the information presentation device according to the fifth embodiment.
  • FIG. 14 is a block diagram showing a schematic configuration of a computer functioning as an information presentation device.
  • FIG. 1 is a block diagram showing a configuration example of an information presentation device 1 according to the first embodiment.
  • The information presentation device 1 shown in FIG. 1 includes a positional relationship estimation unit 11, a screen layout determination unit 12, and a screen display unit 13. Note that the screen display unit 13 may be provided outside the information presentation device 1.
  • The positional relationship estimation unit 11 receives and acquires the user's position information, and estimates the user's positional relationship with respect to the screen display unit 13 (hereinafter simply referred to as the "positional relationship"). The positional relationship estimation unit 11 then outputs positional relationship information indicating the estimated relationship to the screen layout determination unit 12.
  • The user's position information is acquired using, for example, a microphone, an environmental camera, Bluetooth (registered trademark), or UWB (Ultra Wide Band) devices that operate in cooperation with the information presentation device 1, and is transmitted to the positional relationship estimation unit 11.
  • the screen layout determining unit 12 dynamically determines the screen layout of information to be presented according to the positional relationship estimated by the positional relationship estimating unit 11.
  • The screen layout determination unit 12 then generates display information based on the determined screen layout, and outputs the display information to the screen display unit 13.
  • the screen display unit 13 is a display that displays the display information input from the screen layout determination unit 12.
  • the screen display unit 13 is, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display.
  • the display may be a transmissive display.
  • For example, the information presentation device 1 estimates the positional relationships among three users U1, U2, and U3, determines a screen layout based on the estimated relationships, and outputs display information based on the determined layout to the transmissive display D (screen display unit 13).
  • In FIG. 2A, the user U3 is having a conversation with the users U1 and U2, who are side by side, with the transmissive display D interposed between them.
  • normal characters are presented on the screen used by the user U3, and reversed characters are presented on the screens used by the users U1 and U2.
  • In FIG. 2B, the users U2 and U3, who are side by side, are having a conversation with the user U1 with the transmissive display D interposed between them.
  • normal characters are presented on the screens used by the users U2 and U3, and reversed characters are presented on the screen used by the user U1.
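The choice between normal and reversed characters in FIGS. 2A and 2B can be sketched in code. The following Python sketch is illustrative, not from the patent: it models the transmissive display as the plane x = 0 and assumes users on the far side would see text mirrored unless it is rendered reversed for them.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    x: float  # signed distance from the transmissive display plane (x = 0)

def character_orientation(user: User, respondent_side: float = 1.0) -> str:
    """Users on the respondent's side of the display read "normal" text;
    users looking through the transmissive display from the far side would
    see it mirrored, so "reversed" characters are rendered for them."""
    on_respondent_side = (user.x > 0) == (respondent_side > 0)
    return "normal" if on_respondent_side else "reversed"

# FIG. 2A: U3 (respondent side) converses with U1 and U2 across the display.
users = [User("U1", -1.0), User("U2", -1.0), User("U3", 1.0)]
print({u.name: character_orientation(u) for u in users})
# → {'U1': 'reversed', 'U2': 'reversed', 'U3': 'normal'}
```

The same rule applied to FIG. 2B, where all three users stand on one side or the other, reproduces the layouts described above.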
  • FIG. 3 is a diagram explaining the dynamic change of the screen layout according to the positional relationship during the dialogue.
  • Mode A is one in which the respondent and the conversation partner face each other across a single transmissive display D and view the screen from opposite sides.
  • Mode B is one in which the respondent and the conversation partner view the same screen of one transmissive display D from side-by-side positions.
  • In mode A, shown on the left side of FIG. 3, a screen layout is presented in which normal characters are shown on the respondent's side and reversed characters on the conversation partner's side.
  • In mode B, shown on the right side of FIG. 3, a screen layout is presented in which normal characters are shown to both the respondent and the conversation partner.
  • the screen layout is dynamically changed according to the positional relationship.
  • In mode B, the viewing directions of the respondent and the conversation partner are the same, so it is easy to hold a conversation while referring to an object on the screen.
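The mode decision and the per-mode layout described above can be expressed as a small decision rule. The x = 0 display-plane model and all names below are our own illustrative assumptions, not the patent's implementation:

```python
def conversation_mode(respondent_x: float, partner_x: float) -> str:
    """Mode A: the two participants are on opposite sides of the display
    plane (x = 0); mode B: they are on the same side, side by side."""
    return "A" if respondent_x * partner_x < 0 else "B"

def layout_for(mode: str) -> dict:
    # Mode A: normal characters for the respondent, reversed for the partner.
    # Mode B: both view from the same direction, so both get normal characters.
    return {"respondent": "normal",
            "partner": "reversed" if mode == "A" else "normal"}

print(layout_for(conversation_mode(1.0, -1.0)))  # facing each other (mode A)
print(layout_for(conversation_mode(1.0, 0.5)))   # side by side (mode B)
```

Re-evaluating `conversation_mode` whenever a position update arrives is what makes the layout change dynamically as participants move between the two arrangements.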
  • FIG. 4 is a flowchart showing an example of an information presentation method executed by the information presentation device 1.
  • In step S101, upon receiving the user's position information, the positional relationship estimation unit 11 estimates the positional relationship.
  • In step S102, the screen layout determination unit 12 determines the screen layout of the information to be presented based on the positional relationship estimated by the positional relationship estimation unit 11.
  • In step S103, the screen display unit 13 displays information based on the screen layout determined by the screen layout determination unit 12.
  • In step S104, if it is determined that the conversation is ongoing, steps S101 through S103 are executed repeatedly; as shown in FIG. 3, when the arrangement changes from mode A to mode B, or from mode B to mode A, the screen layout used to display information changes accordingly. When it is determined that the conversation has ended, the information display ends.
  • According to the information presentation device 1, the screen layout can be changed dynamically according to the positional relationship, and information can be used with a screen layout corresponding to that relationship.
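The loop of steps S101 to S104 might be sketched as below. All function names and the simple front/back position model are illustrative assumptions, not the patent's implementation:

```python
def estimate_positional_relationship(positions):
    # S101: classify each user as being in front of or behind the display (x = 0).
    return {name: ("front" if x > 0 else "back") for name, x in positions.items()}

def determine_layout(relationship):
    # S102: normal characters for users in front, reversed characters behind.
    return {name: ("normal" if side == "front" else "reversed")
            for name, side in relationship.items()}

def presentation_loop(get_positions, conversation_ongoing, display):
    while conversation_ongoing():                                         # S104
        relationship = estimate_positional_relationship(get_positions())  # S101
        layout = determine_layout(relationship)                           # S102
        display(layout)                                                   # S103

# One-iteration demonstration with canned inputs.
frames = iter([True, False])
shown = []
presentation_loop(lambda: {"U1": -1.0, "U3": 1.0},
                  lambda: next(frames),
                  shown.append)
print(shown)  # → [{'U1': 'reversed', 'U3': 'normal'}]
```

Because the loop re-runs S101 to S103 every iteration, a user walking around the display changes the layout on the next pass without any explicit operation.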
  • FIG. 5 is a block diagram showing a configuration example of the information presentation device 2 according to the second embodiment.
  • The information presentation device 2 shown in FIG. 5 includes a positional relationship estimation unit 11, a screen layout determination unit 12, a screen display unit 13, a voice acquisition unit 14, a speaker/position estimation unit 15, and a positional relationship DB 16.
  • the information presentation device 2 differs from the information presentation device 1 according to the first embodiment in that it further includes a voice acquisition unit 14, a speaker/position estimation unit 15, and a positional relationship DB 16.
  • The same reference numerals as in the first embodiment are assigned to the same configurations, and their description is omitted as appropriate.
  • The voice acquisition unit 14 is a microphone or the like that acquires the user's uttered voice.
  • The voice acquisition unit 14 outputs voice information indicating the acquired voice to the speaker/position estimation unit 15.
  • The positional relationship DB 16 stores the respondent's usual position in association with the position of the voice acquisition unit 14.
  • The speaker/position estimation unit 15 estimates the position of the speaker (the user who spoke at a certain point in time) based on the voice acquired by the voice acquisition unit 14. For example, if the voice acquisition unit 14 is a directional microphone or a microphone array, the speaker/position estimation unit 15 can estimate the speaker's location based on the loudness of the voice. The speaker/position estimation unit 15 then outputs position information indicating the estimated utterance position to the positional relationship estimation unit 11 and updates the speaker position information stored in the positional relationship DB 16.
  • The positional relationship estimation unit 11 refers to the positional relationship DB 16 and estimates the positional relationship between the speakers participating in the conversation, triggered by the completion of processing in the speaker/position estimation unit 15. For example, if there are two speakers A and B, it may estimate that they are standing on the same side of the display, with speaker A on the right and speaker B on the left; as another example, it may estimate that speakers A and B face the display from different sides. The positional relationship estimation unit 11 then outputs positional relationship information indicating the estimated relationship to the screen layout determination unit 12.
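The loudness-based estimate mentioned above could be sketched as follows: with several microphones at known positions, the speaker is assumed to be near the microphone whose signal has the largest RMS amplitude. This is an illustrative assumption, not the patent's actual method:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one microphone's sample buffer."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_speaker_position(mic_positions, mic_samples):
    """mic_positions: mic id -> (x, y); mic_samples: mic id -> list of samples.
    Returns the position of the loudest microphone as the speaker position."""
    loudest = max(mic_samples, key=lambda mic: rms(mic_samples[mic]))
    return mic_positions[loudest]

positions = {"mic_left": (-0.5, 0.0), "mic_right": (0.5, 0.0)}
samples = {"mic_left": [0.1, -0.1, 0.1], "mic_right": [0.8, -0.7, 0.9]}
print(estimate_speaker_position(positions, samples))  # → (0.5, 0.0)
```

A real microphone array would use time-difference-of-arrival or beamforming for finer direction estimates; the nearest-loudest rule is the coarsest version of the same idea.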
  • FIG. 6 is a flowchart showing an example of an information presentation method executed by the information presentation device 2 according to the second embodiment.
  • Before the information presentation device 2 executes the information presentation method, as a preparation, the respondent's usual position is associated with the position of the microphone and stored in the positional relationship DB 16.
  • In step S201, the voice acquisition unit 14 of the information presentation device 2 acquires the uttered voice.
  • In step S202, the speaker/position estimation unit 15 estimates the speaker and the speaker's position based on the acquired voice data, and updates the speaker position information stored in the positional relationship DB 16.
  • In step S203, the positional relationship estimation unit 11 estimates the positional relationship of the speakers participating in the conversation based on the information stored in the positional relationship DB 16.
  • In step S204, the screen layout determination unit 12 determines the screen layout of the information to be presented based on the positional relationship estimated by the positional relationship estimation unit 11.
  • In step S205, the screen display unit 13 displays information based on the screen layout determined by the screen layout determination unit 12.
  • In step S206, if it is determined that the conversation is ongoing, steps S201 through S205 are executed repeatedly; as shown in FIG. 3, when the arrangement changes from mode A to mode B, or from mode B to mode A, the screen layout used to display information changes accordingly. When it is determined that the conversation has ended, the information display ends.
  • According to the information presentation device 2, the user's position information can be acquired by a simple method.
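The positional relationship DB 16 can be pictured as a mapping from speaker to the most recent position estimate, updated on each utterance, from which the relationship (same side vs. opposite sides of the display) is derived. A minimal sketch under our own x = 0 display-plane assumption, not the patent's schema:

```python
positional_relationship_db = {}  # speaker name -> (x, y), most recent estimate

def update_speaker(name, position):
    """Called by the speaker/position estimation step after each utterance."""
    positional_relationship_db[name] = position

def relationship(a, b):
    """Two speakers are on the same side of the display plane (x = 0)
    when the signs of their x coordinates agree."""
    (xa, _), (xb, _) = positional_relationship_db[a], positional_relationship_db[b]
    return "same side" if xa * xb > 0 else "opposite sides"

update_speaker("A", (0.6, 0.2))   # respondent, in front of the display
update_speaker("B", (-0.8, 0.1))  # conversation partner, behind it
print(relationship("A", "B"))  # → opposite sides
```

Because each utterance overwrites the speaker's entry, the derived relationship tracks the participants as they move during the conversation.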
  • FIG. 7 is a block diagram showing a configuration example of the information presentation device 3 according to the third embodiment.
  • The information presentation device 3 shown in FIG. 7 includes a positional relationship estimation unit 11, a screen layout determination unit 12, a screen display unit 13, a speaker/position estimation unit 15, a positional relationship DB 16, an image acquisition unit 17, and a face authentication DB 18.
  • The information presentation device 3 differs from the information presentation device 2 according to the second embodiment in that it includes an image acquisition unit 17 instead of the voice acquisition unit 14 and further includes a face authentication DB 18.
  • the same reference numerals as in the second embodiment are assigned to the same configurations as in the second embodiment, and the description thereof is omitted as appropriate.
  • the image acquisition unit 17 is a camera or the like that acquires image information of the user.
  • The image acquisition unit 17 outputs the acquired image information showing the speaker's face, used for face recognition and face authentication, to the speaker/position estimation unit 15.
  • face authentication refers to an authentication method that identifies an individual based on the positions of feature points such as the eyes, nose, and mouth of the face, as well as the position and size of the face area.
  • Face recognition identifies that people are present but does not identify individuals. By performing face authentication using the information registered in the face authentication DB 18, it becomes possible to distinguish between the attendant and the customer.
  • the face authentication DB 18 pre-registers the face authentication information required to identify the respondent as a specific individual by face authentication.
  • The speaker/position estimation unit 15 executes face recognition based on the image information acquired by the image acquisition unit 17 to estimate the user's position information. The speaker/position estimation unit 15 then outputs the estimated position information to the positional relationship estimation unit 11 and updates the user position information stored in the positional relationship DB 16.
  • In addition, the speaker/position estimation unit 15 performs face authentication based on the face authentication information registered in the face authentication DB 18 and the image information acquired by the image acquisition unit 17, thereby distinguishing the respondent from the other users.
  • the respondent is, for example, a customer service staff, and the other user is, for example, a conversation partner of the respondent.
  • the positional relationship estimating unit 11 refers to the positional relationship DB 16 to estimate the positional relationship, and outputs the estimated positional relationship to the screen layout determining unit 12 when the processing in the speaker/position estimating unit 15 ends.
  • In the second embodiment, face authentication is not used, so as a preparation the respondent's usual position must be associated with the microphone position and stored in the positional relationship DB 16 in advance. In the present embodiment, face authentication is used, so there is no need to predetermine the respondent's position.
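The distinction between face recognition (a face is present) and face authentication (this face belongs to a registered respondent) can be illustrated with a toy embedding lookup. Everything below, including the embeddings, threshold, and names, is a hypothetical stand-in for a real face-authentication pipeline:

```python
REGISTERED = {"respondent_A": (0.1, 0.9)}  # toy "face embeddings" of respondents

def authenticate(embedding, threshold=0.2):
    """Return the registered respondent whose embedding lies within the
    threshold distance, or None for an unregistered face (i.e., a customer)."""
    for name, ref in REGISTERED.items():
        dist = sum((a - b) ** 2 for a, b in zip(embedding, ref)) ** 0.5
        if dist < threshold:
            return name
    return None

print(authenticate((0.12, 0.88)))  # → respondent_A
print(authenticate((0.9, 0.1)))    # → None
```

A face detected but not authenticated is still usable for position estimation, which is exactly why the respondent's position no longer needs to be fixed in advance.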
  • The information presentation device 3 may further include the above-described voice acquisition unit 14 in addition to the image acquisition unit 17.
  • In that case, the voice acquisition unit 14 outputs voice information indicating the acquired voice to the speaker/position estimation unit 15.
  • The speaker/position estimation unit 15 can then also estimate the utterance position based on the voice acquired by the voice acquisition unit 14.
  • FIG. 8 is a flowchart showing an example of an information presentation method executed by the information presentation device 3 according to the third embodiment.
  • Before the information presentation device 3 executes the information presentation method, as a preparation, the respondent information necessary for identifying the respondent as a specific individual by face authentication is registered in the face authentication DB 18.
  • In step S301, the image acquisition unit 17 of the information presentation device 3 acquires image information of the speaker's face for face authentication and face recognition.
  • In step S302, the speaker/position estimation unit 15 performs face authentication and face recognition to estimate the speaker and the speaker's position, and updates the speaker position information stored in the positional relationship DB 16.
  • In step S303, the positional relationship estimation unit 11 estimates the positional relationship of the speakers participating in the conversation based on the information stored in the positional relationship DB 16.
  • In step S304, the screen layout determination unit 12 determines the screen layout of the information to be presented based on the positional relationship estimated by the positional relationship estimation unit 11.
  • In step S305, the screen display unit 13 displays information based on the screen layout determined by the screen layout determination unit 12.
  • In step S306, if it is determined that the conversation is ongoing, steps S301 through S305 are executed repeatedly; as shown in FIG. 3, when the arrangement changes from mode A to mode B, or from mode B to mode A, the screen layout used to display information changes accordingly. When it is determined that the conversation has ended, the information display ends.
  • According to the information presentation device 3, the user's position information can be acquired by a simple method. In addition, since the respondent can be distinguished from the other users, different information can be presented to each. Further, when the voice acquisition unit 14 is combined with the image acquisition unit 17, the user's position information can be acquired with higher accuracy.
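Combining the voice-based and image-based position estimates, as the last sentence suggests, could be done with a simple weighted average. The weights below are arbitrary illustrative choices, not values from the patent:

```python
def fuse_positions(audio_pos, image_pos, w_audio=0.3, w_image=0.7):
    """Weighted average of two (x, y) position estimates. The camera-based
    estimate is weighted more heavily here as an arbitrary assumption, since
    vision typically localizes more precisely than loudness alone."""
    return tuple(w_audio * a + w_image * i for a, i in zip(audio_pos, image_pos))

print(fuse_positions((0.0, 0.0), (1.0, 1.0)))  # → (0.7, 0.7)
```

A more principled fusion would weight each source by its estimated uncertainty (e.g., a Kalman filter), but the weighted average already captures why two sensors beat one.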
  • FIGS. 9A and 9B are block diagrams showing configuration examples of an information presentation device 4 according to the fourth embodiment.
  • The information presentation device 4 according to the fourth embodiment has a display that can be viewed not only from the front but from any position through 360 degrees. For this reason, the screen display unit of the information presentation device 4 is given the reference numeral 13' instead of 13.
  • FIG. 9A differs from FIG. 5, the block diagram of the information presentation device 2 according to the second embodiment, in that it has a screen display unit 13'.
  • FIG. 9B is different from FIG. 7, which is a block diagram of the information presentation device 3 according to the third embodiment, in that it has a screen display unit 13'.
  • As shown in FIG. 10A, when using a display D that can be viewed from any position through 360 degrees, information is output in normal characters for users U1, U2, and U3 viewing the screen from their respective positions.
  • FIG. 10B is a top view of FIG. 10A.
  • According to the information presentation device 4, the degree of freedom in viewing directions is increased, and it is possible, for example, to display an aerial image.
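For the 360-degree display, each user's text block could be rotated to face that user. A sketch assuming the display center is at the origin and angles are measured counterclockwise from the positive x-axis; this geometric model is ours, not the patent's:

```python
import math

def text_rotation_deg(user_x, user_y):
    """Angle from the display center toward the user, in degrees [0, 360).
    Rendering a user's text block at this rotation keeps it upright
    from that user's viewpoint."""
    return math.degrees(math.atan2(user_y, user_x)) % 360

for name, (x, y) in {"U1": (1.0, 0.0), "U2": (0.0, 1.0), "U3": (-1.0, 0.0)}.items():
    print(name, text_rotation_deg(x, y))  # → U1 0.0, U2 90.0, U3 180.0
```

Feeding each estimated user position through this function yields one rotated text block per user, which is how all three users in FIG. 10A can read normal characters simultaneously.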
  • FIG. 11 is a block diagram showing a configuration example of the information presentation device 5 according to the fifth embodiment.
  • The information presentation device 5 shown in FIG. 11 includes a positional relationship estimation unit 11, a screen layout determination unit 12, a screen display unit 13a, a voice acquisition unit 14, a speaker/position estimation unit 15, a positional relationship DB 16, a screen output distribution unit 19, and a screen position DB 20.
  • The information presentation device 5 differs from the information presentation device 2 according to the second embodiment in that it further includes the screen output distribution unit 19 and the screen position DB 20.
  • The same reference numerals as in the second embodiment are assigned to the same configurations, and their description is omitted as appropriate.
  • In addition, the information presentation device 5 is connected to one or more external screen display units 13b to 13n.
  • the screen position DB 20 stores position information for each of the plurality of screen display units 13a to 13n.
  • The screen layout determination unit 12 determines the screen layout of each of the screen display units 13a to 13n based on the positional relationship estimated by the positional relationship estimation unit 11 and the position information stored in the screen position DB 20. The screen layout determination unit 12 then outputs display information based on those screen layouts to the screen output distribution unit 19.
  • For example, the screen layout determination unit 12 generates, based on the positional relationship of users U1, U2, and U3 and the positions of the transmissive displays D1 and D2, display information to be presented to user U1 on transmissive display D1 and display information to be presented to users U2 and U3 on transmissive display D2.
  • In this example, normal characters are presented on the screen used by user U2, and reversed characters are presented on the screens used by users U1 and U3.
  • The screen output distribution unit 19 distributes and transfers the display information input from the screen layout determination unit 12 to the plurality of screen display units 13a to 13n.
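The screen-position-aware layout of the fifth embodiment can be sketched as first assigning each user to the nearest registered display, then generating that display's layout. The nearest-display rule and all names below are our own simplification, not the patent's algorithm:

```python
def assign_displays(user_positions, display_positions):
    """user_positions / display_positions: name -> (x, y).
    Returns user -> nearest display, using squared Euclidean distance."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return {user: min(display_positions, key=lambda d: d2(pos, display_positions[d]))
            for user, pos in user_positions.items()}

displays = {"D1": (0.0, 0.0), "D2": (5.0, 0.0)}
users = {"U1": (0.5, 0.2), "U2": (4.5, 0.1), "U3": (5.5, -0.2)}
print(assign_displays(users, displays))  # → {'U1': 'D1', 'U2': 'D2', 'U3': 'D2'}
```

The distribution step then simply sends each display the layout generated for the users assigned to it, which matches the D1/D2 example described above.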
  • The information presentation device 5 may also present information through cooperation with a plurality of devices. That is, the information presentation device 5 may present information on the display of a device that is owned by the conversation partner and provided outside the information presentation device 5, in cooperation with that device.
  • In this case, the screen display units 13b to 13n in FIG. 11 represent the displays of those devices. Such devices include personal digital assistants such as smartphones and tablets.
  • the screen output distribution unit 19 transfers display information to the device by short-range wireless communication such as Bluetooth or UWB.
  • The information presentation device 5 may further include the above-described image acquisition unit 17 and face authentication DB 18 in addition to the voice acquisition unit 14.
  • In that case, the image acquisition unit 17 outputs the acquired image information showing the speaker's face, used for face recognition and face authentication, to the speaker/position estimation unit 15.
  • The speaker/position estimation unit 15 can then also perform face authentication and face recognition based on the images acquired by the image acquisition unit 17 to estimate the user's position information.
  • FIG. 13 is a flowchart showing an example of an information presentation method executed by the information presentation device 5 according to the fifth embodiment.
  • Before the information presentation device 5 executes the information presentation method, the position of each screen is saved in the screen position DB 20 as a preparation.
  • In step S401, the voice acquisition unit 14 of the information presentation device 5 acquires the uttered voice.
  • In step S402, the speaker/position estimation unit 15 estimates the speaker and the speaker's position, and updates the speaker position information stored in the positional relationship DB 16.
  • In step S403, the positional relationship estimation unit 11 estimates the positional relationship of the speakers participating in the conversation based on the information stored in the positional relationship DB 16.
  • In step S404, the screen layout determination unit 12 determines the screen layout of the information to be presented based on the estimated positional relationship and the position information of each screen stored in the screen position DB 20.
  • In step S405, based on the screen layouts passed from the screen layout determination unit 12, the screen output distribution unit 19 requests the screen display units 13a to 13n to display the corresponding content.
  • In step S406, the screen display units 13a to 13n output information to their screens based on the determined screen layouts.
  • In step S407, if it is determined that the conversation is ongoing, steps S401 through S406 are executed repeatedly; as shown in FIG. 3, when the arrangement changes from mode A to mode B, or from mode B to mode A, the screen layout used to display information changes accordingly. When it is determined that the conversation has ended, the information display ends.
  • The positional relationship estimation unit 11, the screen layout determination unit 12, the speaker/position estimation unit 15, and the screen output distribution unit 19 in the information presentation devices 1, 2, 3, 4, and 5 described above form part of a control arithmetic circuit (controller). The control arithmetic circuit may be configured by dedicated hardware such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field-Programmable Gate Array), by a processor, or by a combination of both.
  • FIG. 14 is a block diagram showing a schematic configuration of a computer functioning as the information presentation devices 1, 2, 3, 4 and 5.
  • the computer 100 may be a general-purpose computer, a dedicated computer, a workstation, a PC (Personal Computer), an electronic notepad, or the like.
  • Program instructions may be program code, code segments, etc. for performing the required tasks.
  • The computer 100 includes a processor 110, a ROM (Read Only Memory) 120, a RAM (Random Access Memory) 130, and a storage 140 as storage units, as well as an input unit 150, an output unit 160, and a communication interface (I/F) 170.
  • The components are communicatively connected to each other via a bus 180.
  • the voice acquisition unit 14 in the information presentation devices 2, 4 and 5 and the image acquisition unit 17 in the information presentation devices 3 and 4 may be constructed as the input unit 150.
  • The screen display unit 13 may be constructed as the output unit 160.
  • the ROM 120 stores various programs and various data.
  • RAM 130 temporarily stores programs or data as a work area.
  • The storage 140 is configured by an HDD (Hard Disk Drive) or an SSD (Solid State Drive), and stores various programs, including an operating system, and various data.
  • the ROM 120 or storage 140 stores the program according to the present invention.
  • the positional relationship DB 16, the face authentication DB 18, and the screen position DB 20 according to the embodiment of the present invention may be constructed as the storage 140.
  • The processor 110 is specifically a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), an SoC (System on a Chip), or the like, and may be configured by a plurality of processors. The processor 110 reads a program from the ROM 120 or the storage 140 and executes it using the RAM 130 as a work area, thereby controlling each component and performing various arithmetic processing. Note that at least part of this processing may be realized by hardware.
  • the program may be recorded on a recording medium readable by the computer 100.
  • a program can be installed in the computer 100 by using such a recording medium.
  • the recording medium on which the program is recorded may be a non-transitory recording medium.
  • the non-transitory recording medium is not particularly limited, but may be, for example, a CD-ROM, a DVD-ROM, a USB (Universal Serial Bus) memory, or the like.
  • this program may be downloaded from an external device via a network.
  • the positional relationship DB 16, the face authentication DB 18, and the screen position DB 20 may be constructed in such a recording medium.
  • (Appendix 1) An information presentation device that presents information to a plurality of users via a display, comprising a control unit that acquires position information of the users, estimates the positional relationship of the users with respect to the display, dynamically determines a screen layout according to the positional relationship, and outputs display information based on the screen layout to the display.
  • (Appendix 2) The information presentation device according to Appendix 1, wherein the control unit acquires the users' uttered voices and estimates the users' position information based on the uttered voices.
  • (Appendix 3) The information presentation device according to Appendix 1, wherein the control unit acquires image information of the users and performs face recognition based on the image information to update the users' position information.
  • (Appendix 4) The information presentation device according to Appendix 3, further comprising a face authentication DB that registers face authentication information for identifying a respondent as a specific individual by face authentication, wherein the control unit performs face authentication based on the face authentication information and the image information, and estimates the positions of the respondent and the other users while distinguishing between them.
  • (Appendix 5) The information presentation device according to any one of Appendices 1 to 4, wherein the control unit outputs the display information to a display that can be viewed from any position through 360 degrees.
  • (Appendix 6) The information presentation device according to any one of Appendices 1 to 5, further comprising a screen position DB that stores position information of a plurality of displays that display the display information, wherein the control unit distributes and transfers the display information to the plurality of displays and determines the screen layout of each of the plurality of displays based on the positional relationship and the position information of the displays.
  • (Appendix 7) An information presentation method in an information presentation device that presents information to a plurality of users via a display, the method comprising, by the information presentation device: acquiring position information of the users and estimating the positional relationship of the users with respect to the display; and dynamically determining a screen layout according to the positional relationship and outputting display information based on the screen layout to the display.
  • (Appendix 8) A non-transitory storage medium storing a computer-executable program that causes the computer to function as the information presentation device according to any one of Appendices 1 to 6.
  • Reference signs: 11 positional relationship estimation unit; 12 screen layout determination unit; 13, 13′, 13a to 13n screen display unit (display); 14 voice acquisition unit; 15 speaker/position estimation unit; 16 positional relationship DB; 17 image acquisition unit; 18 face authentication DB; 19 screen output distribution unit; 20 screen position DB; 100 computer; 110 processor; 120 ROM; 130 RAM; 140 storage; 150 input unit; 160 output unit; 170 communication interface (I/F); 180 bus.

Abstract

An information presentation device (1) according to the present invention comprises: a positional relationship estimation unit (11) that acquires position information of a user and estimates the positional relationship of the user with respect to a display; and a screen layout determination unit (12) that dynamically determines a screen layout according to the positional relationship and outputs display information based on the screen layout to a display (13).

Description

Information presentation device, information presentation method, and program

The present invention relates to an information presentation device, an information presentation method, and a program.
Conventionally, in an information presentation device with which a respondent (hereinafter, for example, a customer service staff member) and a conversation partner (hereinafter, the counterpart of the respondent; in a scene such as customer service, a customer making an inquiry) exchange information to be conveyed, such as text, character information expressing the intention of the conversation partner and character information, understandable to the conversation partner, representing the respondent's corresponding replies and answers are displayed simultaneously on the same screen of a single display. In a cross-language dialogue device, which is one example of an information presentation device, the utterances of a respondent and a conversation partner who use different languages are translated and displayed simultaneously on the same screen of a single display as character information that enables communication between the two. Furthermore, processing is performed to make character information to be conveyed to the conversation partner easier to read by rotating the display orientation of only the portion of the single display screen that relates to the conversation partner.
For example, Patent Literature 1 describes a cross-language dialogue device that enables communication between users who use different languages (hereinafter, conversation participants, a collective term for the respondent and the conversation partner) at each of a plurality of terminal units, and Patent Literature 2 describes a dialogue processing method that enables communication between users who use different languages. Patent Literature 3 describes an information display device provided with means for flipping the way information is displayed on the device upside down so that the information the respondent wants to convey is shown in an orientation that is easy for the conversation partner to see.
JP 2004-355226 A
JP 2005-322145 A
JP 2001-195049 A
Patent Literature 1 and Patent Literature 2 propose measures, such as translation, for making the interaction smoother, but they do not consider the users' positions or a method of dynamically controlling the layout based on those positions. Patent Literature 3 considers the visibility of information for the user by flipping the displayed information upside down, but it does not consider dynamic control of the screen layout in response to changes in the user's position.
An object of the present invention, made in view of such circumstances, is to provide an information presentation device, an information presentation method, and a program capable of dynamically changing the screen layout according to the users' positional relationship.
To solve the above problem, an information presentation device according to one embodiment is an information presentation device that presents information to a plurality of users via a display, comprising: a positional relationship estimation unit that acquires position information of the users and estimates the positional relationship of the users with respect to the display; and a screen layout determination unit that dynamically determines a screen layout according to the positional relationship and outputs display information based on the screen layout to the display.
Also, to solve the above problem, an information presentation method according to one embodiment is an information presentation method in an information presentation device that presents information to a plurality of users via a display, the method comprising, by the information presentation device: acquiring position information of the users and estimating the positional relationship of the users with respect to the display; and dynamically determining a screen layout according to the positional relationship and outputting display information based on the screen layout to the display.
Also, to solve the above problem, a program according to one embodiment causes a computer to function as the above information presentation device.
According to the present invention, information can be used with a screen layout suited to the characteristics of the conversation, without complicated operations.
FIG. 1 is a block diagram showing a configuration example of the information presentation device according to the first embodiment.
FIGS. 2A and 2B are schematic diagrams showing screen layouts based on an estimated positional relationship.
FIG. 3 is a diagram illustrating dynamic change of the screen layout according to the positional relationship during a dialogue.
FIG. 4 is a flowchart showing an example of the information presentation method executed by the information presentation device according to the first embodiment.
FIG. 5 is a block diagram showing a configuration example of the information presentation device according to the second embodiment.
FIG. 6 is a flowchart showing an example of the information presentation method executed by the information presentation device according to the second embodiment.
FIG. 7 is a block diagram showing a configuration example of the information presentation device according to the third embodiment.
A flowchart showing an example of the information presentation method executed by the information presentation device according to the third embodiment.
Block diagrams showing configuration examples of the information presentation device according to the fourth embodiment.
Schematic diagrams illustrating a display that can be viewed from any position through 360 degrees.
A block diagram showing a configuration example of the information presentation device according to the fifth embodiment.
A schematic diagram of information presentation through the cooperation of a plurality of devices.
A flowchart showing an example of the information presentation method executed by the information presentation device according to the fifth embodiment.
FIG. 14 is a block diagram showing a schematic configuration of a computer functioning as the information presentation device.
Hereinafter, embodiments for carrying out the present invention will be described in detail with reference to the drawings.
(First embodiment)
FIG. 1 is a block diagram showing a configuration example of an information presentation device 1 according to the first embodiment. The information presentation device 1 shown in FIG. 1 includes a positional relationship estimation unit 11, a screen layout determination unit 12, and a screen display unit 13. Note that the screen display unit 13 may be provided outside the information presentation device 1.
The positional relationship estimation unit 11 receives and acquires the users' position information and estimates the users' positional relationship with respect to the screen display unit 13 (hereinafter simply the "positional relationship"). The positional relationship estimation unit 11 then outputs positional relationship information indicating the estimated positional relationship to the screen layout determination unit 12. The users' position information is acquired using a microphone, an environmental camera, Bluetooth (registered trademark), UWB (Ultra Wide Band), or the like operating in cooperation with the information presentation device 1, and is transmitted to the positional relationship estimation unit 11.
The screen layout determination unit 12 dynamically determines the screen layout of the information to be presented according to the positional relationship estimated by the positional relationship estimation unit 11. The screen layout determination unit 12 then generates display information based on the determined screen layout and outputs it to the screen display unit 13.
The screen display unit 13 is a display that displays the display information input from the screen layout determination unit 12, for example a liquid crystal display or an organic EL (Electro-Luminescence) display. The display may be a transmissive display.
In FIGS. 2A and 2B, the information presentation device 1 estimates the positional relationship of three users U1, U2, and U3, determines a screen layout based on the estimated positional relationship, and outputs information based on the determined screen layout to a transmissive display D (the screen display unit 13). In FIG. 2A, the user U3 is having a conversation with the users U1 and U2, who stand side by side, across the transmissive display D. At this time, seen from the user U3, normal characters are presented on the screen area used by the user U3, and reversed characters are presented on the screen areas used by the users U1 and U2. In FIG. 2B, the users U2 and U3, who stand side by side, are having a conversation with the user U1 across the transmissive display D. At this time, seen from the users U2 and U3, normal characters are presented on the screen areas used by the users U2 and U3, and reversed characters are presented on the screen area used by the user U1.
FIG. 3 is a diagram illustrating dynamic change of the screen layout according to the positional relationship during a dialogue. Depending on the characteristics of the conversation, the positional relationship during a conversation includes a form in which the respondent and the conversation partner face each other across a single transmissive display D and view the screen (hereinafter, form A), and a form in which the respondent and the conversation partner view the same screen of a single transmissive display D from side-by-side positions (hereinafter, form B). According to these positional relationships, in form A, shown on the left of FIG. 3, a screen layout is presented in which normal characters appear on the respondent's screen area and reversed characters appear on the conversation partner's screen area. In form B, shown on the right of FIG. 3, a screen layout is presented in which normal characters appear on the screen areas of both the respondent and the conversation partner. When the positional relationship between the respondent and the conversation partner changes during the dialogue, the screen layout is dynamically changed according to the new positional relationship. In form B, the respondent and the conversation partner view the screen from the same direction, which makes conversations that refer to an object easier.
For example, it is possible to start serving a foreign tourist face-to-face in form A using translation and then, partway through the conversation, switch to form B and continue the conversation while using a nearby information board.
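The form A / form B decision described above can be sketched as follows. This is a minimal illustration under assumed conventions (the display lies along the x-axis and a user's side is given by the sign of the y coordinate); the function names and coordinate model are illustrative, not taken from the patent.

```python
# Minimal sketch of the form A / form B layout decision. The coordinate model
# (display along the x-axis, side given by the sign of y) is an assumption.

def viewing_side(position):
    """Which side of the transmissive display a user stands on."""
    return "front" if position[1] >= 0 else "back"

def decide_layout(respondent_pos, partner_pos):
    """Form A (facing across the display): the partner's pane is reversed so it
    reads correctly through the transmissive display.
    Form B (side by side on the same side): normal text for both."""
    if viewing_side(respondent_pos) == viewing_side(partner_pos):  # form B
        return {"respondent": "normal", "partner": "normal"}
    return {"respondent": "normal", "partner": "reversed"}         # form A

print(decide_layout((0.0, 1.0), (0.0, -1.0)))  # facing -> partner reversed
print(decide_layout((0.0, 1.0), (0.5, 1.0)))   # side by side -> both normal
```

Running this toy decision on every position update is what allows the mid-conversation switch from form A to form B described above.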
FIG. 4 is a flowchart showing an example of the information presentation method executed by the information presentation device 1.
In step S101, upon receiving the users' position information, the positional relationship estimation unit 11 estimates the positional relationship.
In step S102, the screen layout determination unit 12 determines the screen layout of the information to be presented based on the positional relationship estimated by the positional relationship estimation unit 11.
In step S103, the screen display unit 13 displays information based on the screen layout determined by the screen layout determination unit 12.
In step S104, if it is determined that the conversation is ongoing, steps S101 to S103 are repeated; if, during this time, the positional relationship changes from form A to form B or from form B to form A as shown in FIG. 3, the screen layout for displaying information is changed accordingly. When it is determined that the conversation has ended, the information display ends.
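The loop of steps S101 to S104 can be sketched as a toy simulation. Everything here, including the position feed, the side classification, and the layout strings, is an illustrative assumption rather than the patent's implementation.

```python
# Toy version of the S101-S104 loop: re-estimate the positional relationship
# and re-determine the layout on every position update. Names are illustrative.

def estimate_relationship(positions):
    """S101: form B if every user is on the same side of the display, else form A."""
    sides = {"front" if y >= 0 else "back" for (x, y) in positions.values()}
    return "form_B" if len(sides) == 1 else "form_A"

def determine_layout(relationship):
    """S102: map the estimated relationship to a layout."""
    return {"form_A": "facing: reversed text on the partner's pane",
            "form_B": "side by side: normal text for all"}[relationship]

def run(position_feed):
    """S101-S104: one pass per update while the conversation continues."""
    shown = []
    for positions in position_feed:
        shown.append(determine_layout(estimate_relationship(positions)))  # S103
    return shown

feed = [
    {"respondent": (0, 1), "partner": (0, -1)},  # facing each other (form A)
    {"respondent": (0, 1), "partner": (1, 1)},   # partner moved beside (form B)
]
print(run(feed))
```

The second feed entry models the partner walking around the display, which is exactly the form A to form B transition the flowchart handles by looping.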
According to the information presentation device 1 of the present embodiment, the screen layout can be changed dynamically according to the positional relationship, and information becomes available with a screen layout suited to the characteristics of the conversation, such as the number of participants and the viewing directions, without complicated operations.
(Second embodiment)
Next, an example of acquiring the users' position information using voice will be described as a second embodiment. FIG. 5 is a block diagram showing a configuration example of an information presentation device 2 according to the second embodiment. The information presentation device 2 shown in FIG. 5 includes a positional relationship estimation unit 11, a screen layout determination unit 12, a screen display unit 13, a voice acquisition unit 14, a speaker/position estimation unit 15, and a positional relationship DB 16. Compared with the information presentation device 1 according to the first embodiment, the information presentation device 2 differs in further including the voice acquisition unit 14, the speaker/position estimation unit 15, and the positional relationship DB 16. Components identical to those of the first embodiment are given the same reference numerals as in the first embodiment, and their description is omitted as appropriate.
The voice acquisition unit 14 is, for example, a microphone that acquires the users' uttered voices. The voice acquisition unit 14 outputs voice information indicating the acquired voice to the speaker/position estimation unit 15.
The positional relationship DB 16 stores the respondent's usage position in association with the position of the voice acquisition unit 14.
The speaker/position estimation unit 15 estimates the position of the speaker (the user who spoke at a given moment) based on the voice acquired by the voice acquisition unit 14. For example, when the voice acquisition unit 14 is a directional microphone or a microphone array, the speaker/position estimation unit 15 can estimate the speaker's position from the loudness of the voice. The speaker/position estimation unit 15 then outputs position information indicating the estimated utterance position to the positional relationship estimation unit 11 and updates the speaker position information stored in the positional relationship DB 16.
Triggered by the completion of processing in the speaker/position estimation unit 15, the positional relationship estimation unit 11 refers to the positional relationship DB 16 and estimates the positional relationship between the speakers participating in the conversation. For example, when two speakers A and B are present, it may estimate that speakers A and B stand on the same side of the display, with speaker A on the right and speaker B on the left. As another example, it may estimate that speakers A and B face each other on different sides of the display. The positional relationship estimation unit 11 then outputs positional relationship information indicating the estimated positional relationship to the screen layout determination unit 12.
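As one concrete (assumed) realization of the loudness-based estimate mentioned above, a two-channel microphone arrangement could attribute an utterance to the clearly louder side. The RMS measure and the 1.5 ratio threshold below are illustrative choices, not values from the patent.

```python
# Hypothetical loudness-based speaker-side estimate for a two-microphone setup.
# The RMS measure and the ratio threshold are illustrative assumptions.

def rms(samples):
    """Root-mean-square level of one channel."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def estimate_speaker_side(left_channel, right_channel, ratio=1.5):
    """Attribute the utterance to whichever microphone is clearly louder."""
    left, right = rms(left_channel), rms(right_channel)
    if left > right * ratio:
        return "left"
    if right > left * ratio:
        return "right"
    return "center"

print(estimate_speaker_side([0.8, -0.7, 0.9], [0.1, -0.1, 0.1]))  # -> left
```

A real microphone array would use many channels and arrival-time differences as well, but the side label produced here is the kind of per-utterance position update written into the positional relationship DB 16.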
FIG. 6 is a flowchart showing an example of the information presentation method executed by the information presentation device 2 according to the second embodiment.
Before the information presentation device 2 executes the information presentation method, as an advance preparation, the respondent's usage position is stored in the positional relationship DB 16 in association with the microphone position.
In step S201, the voice acquisition unit 14 of the information presentation device 2 acquires the uttered voice.
In step S202, the speaker/position estimation unit 15 estimates the speaker and the speaker's position based on the acquired voice data and updates the speaker and position information stored in the positional relationship DB 16.
In step S203, the positional relationship estimation unit 11 estimates the positional relationship of the speakers participating in the conversation based on the information stored in the positional relationship DB 16.
In step S204, the screen layout determination unit 12 determines the screen layout of the information to be presented based on the positional relationship estimated by the positional relationship estimation unit 11.
In step S205, the screen display unit 13 displays information based on the screen layout determined by the screen layout determination unit 12.
In step S206, if it is determined that the conversation is ongoing, steps S201 to S205 are repeated; if, during this time, the speakers' positional relationship changes from form A to form B or from form B to form A as shown in FIG. 3, the screen layout for displaying information is changed accordingly. When it is determined that the conversation has ended, the information display ends.
According to the information presentation device 2 of the present embodiment, the users' position information can be acquired by a simple method.
(Third embodiment)
Next, an example of acquiring the users' position information using images will be described as a third embodiment. FIG. 7 is a block diagram showing a configuration example of an information presentation device 3 according to the third embodiment. The information presentation device 3 shown in FIG. 7 includes a positional relationship estimation unit 11, a screen layout determination unit 12, a screen display unit 13, a speaker/position estimation unit 15, a positional relationship DB 16, an image acquisition unit 17, and a face authentication DB 18. Compared with the information presentation device 2 according to the second embodiment, the information presentation device 3 differs in including the image acquisition unit 17 in place of the voice acquisition unit 14 and in further including the face authentication DB 18. Components identical to those of the second embodiment are given the same reference numerals as in the second embodiment, and their description is omitted as appropriate.
The image acquisition unit 17 is, for example, a camera that acquires image information of the users. The image acquisition unit 17 outputs the acquired image information, which shows the speaker's face and is used for face recognition and face authentication, to the speaker/position estimation unit 15.
Here, face authentication refers to an authentication method that identifies an individual based on the positions of facial feature points such as the eyes, nose, and mouth, and on the position and size of the face region. Face recognition, by contrast, determines that a face belongs to a person but does not identify the individual. In other words, by performing face authentication using the information registered in the face authentication DB 18, the respondent can be distinguished from the customer.
The face authentication DB 18 registers in advance the face authentication information necessary to identify the respondent as a specific individual by face authentication.
 発話者・位置推定部15は、画像取得部17により取得された画像情報に基づいて顔認識を実行して、ユーザの位置情報を推定する。そして、発話者・位置推定部15は、推定したユーザの位置情報を位置関係推定部11へ出力する。また、発話者・位置推定部15は、位置関係DB16に保存したユーザの位置情報を更新する。 The speaker/position estimation unit 15 executes face recognition based on the image information acquired by the image acquisition unit 17 to estimate the user's position information. Then, the speaker/position estimation unit 15 outputs the estimated position information of the user to the positional relationship estimation unit 11 . In addition, the speaker/position estimation unit 15 updates the position information of the user stored in the positional relationship DB 16 .
The speaker/position estimation unit 15 also performs face authentication based on the face authentication information registered in the face authentication DB 18 and the image information acquired by the image acquisition unit 17, and estimates the positions of the attendant and the other users separately. The attendant is, for example, a member of the customer service staff, and the other users are, for example, the attendant's conversation partners.
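The distinction between the attendant and the other users can be illustrated with a minimal sketch: face authentication compares a detected face against the entries pre-registered in the face authentication DB 18, and a face with no registered match is still tracked, but only as a generic user. The embedding values, threshold, and function names below are illustrative assumptions, not part of the disclosed device.

```python
import math

# Hypothetical face authentication DB: registered person -> face embedding.
# A real system would store vectors produced by a face-recognition model.
FACE_AUTH_DB = {"attendant": [0.12, 0.85, 0.33]}

MATCH_THRESHOLD = 0.5  # assumed maximum distance for a positive identification

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_face(embedding):
    """Face authentication: return the registered identity on a match;
    otherwise fall back to face recognition only ('other' user)."""
    for name, registered in FACE_AUTH_DB.items():
        if euclidean(embedding, registered) < MATCH_THRESHOLD:
            return name
    return "other"

print(classify_face([0.11, 0.86, 0.30]))  # near the registered embedding -> attendant
print(classify_face([0.90, 0.10, 0.05]))  # no registered match -> other
```

In this sketch, only faces registered in advance resolve to a specific individual, which is exactly why the attendant's seating position no longer needs to be fixed beforehand.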
Triggered by the completion of processing in the speaker/position estimation unit 15, the positional relationship estimation unit 11 refers to the positional relationship DB 16 to estimate the positional relationship, and outputs the estimated positional relationship to the screen layout determination unit 12. In the second embodiment described above, face authentication is not used, so as advance preparation the attendant's usage position and the microphone position are linked and stored in the positional relationship DB 16. In the third embodiment, since face authentication is used, there is no need to predetermine the attendant's position.
Note that the information presentation device 3 according to the third embodiment may further include the above-described voice acquisition unit 14 in addition to the image acquisition unit 17. The voice acquisition unit 14 outputs voice information representing the acquired voice to the speaker/position estimation unit 15. This allows the speaker/position estimation unit 15 to also estimate the utterance position from the voice acquired by the voice acquisition unit 14.
FIG. 8 is a flowchart showing an example of the information presentation method executed by the information presentation device 3 according to the third embodiment.
Before the information presentation device 3 executes the information presentation method, as advance preparation, the attendant information required to identify the attendant as a specific individual by face authentication is registered in the face authentication DB 18.
In step S301, the image acquisition unit 17 of the information presentation device 3 acquires image information of the speakers' faces for face authentication and face recognition.
In step S302, the speaker/position estimation unit 15 performs face authentication and face recognition to estimate each speaker and that speaker's position, and updates the speaker position information stored in the positional relationship DB 16.
In step S303, the positional relationship estimation unit 11 estimates the positional relationship of the speakers participating in the conversation based on the information stored in the positional relationship DB 16.
In step S304, the screen layout determination unit 12 determines the screen layout of the information to be presented based on the positional relationship estimated by the positional relationship estimation unit 11.
In step S305, the screen display unit 13 displays the information based on the screen layout determined by the screen layout determination unit 12.
In step S306, if it is determined that the conversation is ongoing, steps S301 to S305 are executed repeatedly. As shown in FIG. 3, if the positional relationship changes from form A to form B, or from form B to form A, during this time, the screen layout used to display the information is changed accordingly. When it is determined that the conversation has ended, the information display ends.
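The repetition of steps S301 to S306 amounts to a simple sensing-to-display loop: acquire images, update positions, estimate the relationship, and re-derive the layout on every iteration. The sketch below renders that control flow with assumed stubs; the form-A/form-B decision rule and the layout names are placeholders, not the disclosed algorithm.

```python
def estimate_relationship(positions):
    """Placeholder S303 rule: form 'A' when all users stand on the same
    side of the display (x = 0), form 'B' otherwise."""
    sides = {("left" if x < 0 else "right") for x, _ in positions.values()}
    return "A" if len(sides) == 1 else "B"

def decide_layout(form):
    # S304: map the estimated positional relationship to a layout.
    return {"A": "single-orientation", "B": "split-orientation"}[form]

def presentation_loop(frames):
    """frames: per-iteration face detections, i.e. dicts of user -> (x, y)."""
    positional_db = {}          # stands in for the positional relationship DB 16
    layouts = []
    for detections in frames:   # S306: repeat while the conversation continues
        positional_db.update(detections)             # S301 + S302
        form = estimate_relationship(positional_db)  # S303
        layouts.append(decide_layout(form))          # S304 + S305 (display)
    return layouts

print(presentation_loop([{"U1": (-1, 0), "U2": (-2, 0)},   # everyone on one side
                         {"U2": (2, 0)}]))                 # U2 moves across
# ['single-orientation', 'split-orientation']
```

When a user's position changes between iterations, the layout recorded for that iteration changes with it, which mirrors the form-A/form-B switching described above.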
According to the information presentation device 3 of this embodiment, user position information can be acquired by a simple method. In addition, since the attendant can be distinguished from the other users, different information can be presented to the attendant and to the other users. Furthermore, when the voice acquisition unit 14 is combined with the image acquisition unit 17, user position information can be acquired with higher accuracy.
(Fourth Embodiment)
FIGS. 9A and 9B are block diagrams showing configuration examples of an information presentation device 4 according to the fourth embodiment. The information presentation device 4 according to the fourth embodiment includes a display that is not merely flat but can be viewed from any position through 360 degrees. For this reason, the screen display unit of the information presentation device 4 is denoted 13' instead of 13. FIG. 9A differs from FIG. 5, the block diagram of the information presentation device 2 according to the second embodiment, in that it includes the screen display unit 13'. FIG. 9B differs from FIG. 7, the block diagram of the information presentation device 3 according to the third embodiment, in that it includes the screen display unit 13'.
As shown in FIG. 10A, when a display D that can be viewed from any position through 360 degrees is used, information is output in normal (unreversed) characters for each of the users U1, U2, and U3 when viewed from their respective positions. FIG. 10B is a top-down view of FIG. 10A.
According to the information presentation device 4 of this embodiment, the degree of freedom in viewing direction can be increased; for example, an aerial image can also be displayed.
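On a display viewable through 360 degrees, presenting upright characters to every viewer reduces to rotating each user's text toward that user's bearing from the display. The following sketch assumes a simple planar geometry with the display at a point; how the rotation is actually realized depends entirely on the display hardware, so the function is illustrative only.

```python
import math

def text_rotation_deg(display_xy, viewer_xy):
    """Rotation (in degrees, 0-360) that orients text toward a viewer
    standing at viewer_xy around a display centered at display_xy."""
    dx = viewer_xy[0] - display_xy[0]
    dy = viewer_xy[1] - display_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360

# Users U1, U2, U3 at three positions around a display at the origin:
for viewer in [(1, 0), (0, 1), (-1, 0)]:
    print(round(text_rotation_deg((0, 0), viewer)))  # 0, 90, 180
```

Each viewer thus gets a per-position rotation rather than a single shared orientation, which is what allows all three users in FIG. 10A to read normal characters simultaneously.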
(Fifth Embodiment)
FIG. 11 is a block diagram showing a configuration example of the information presentation device 5 according to the fifth embodiment. The information presentation device 5 shown in FIG. 11 includes a positional relationship estimation unit 11, a screen layout determination unit 12, a screen display unit 13a, a voice acquisition unit 14, a speaker/position estimation unit 15, a positional relationship DB 16, a screen output distribution unit 19, and a screen position DB 20. The information presentation device 5 differs from the information presentation device 2 according to the second embodiment in that it further includes the screen output distribution unit 19 and the screen position DB 20. Configurations identical to those of the second embodiment are given the same reference numerals as in the second embodiment, and their description is omitted as appropriate. The information presentation device 5 is connected to one or more screen display units 13b to 13n.
The screen position DB 20 stores position information for each of the plurality of screen display units 13a to 13n.
The screen layout determination unit 12 determines the screen layout of each of the screen display units 13a to 13n based on the positional relationship estimated by the positional relationship estimation unit 11 and the position information stored in the screen position DB 20. The screen layout determination unit 12 then outputs display information based on each of these screen layouts to the screen output distribution unit 19.
For example, in FIG. 12, when users U1, U2, and U3 are participating in a conversation, the screen layout determination unit 12 generates, based on the positional relationship of the users U1, U2, and U3 and the positions of the transmissive displays D1 and D2, display information to be presented to the user U1 on the transmissive display D1 and display information to be presented to the users U2 and U3 on the transmissive display D2. In the example shown in FIG. 12, as seen from the user U2, normal characters are presented on the screen used by the user U2, and reversed characters are presented on the screens used by the users U1 and U3.
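A minimal sketch of this per-display decision: each user's information is assigned to the nearest display recorded in the screen position DB, and the character orientation is chosen by which side of that transparent display the user stands on. The coordinates, the nearest-display rule, and the side convention are assumptions for illustration, not the disclosed layout logic.

```python
# Hypothetical screen position DB: display name -> x coordinate.
SCREEN_POSITION_DB = {"D1": 0.0, "D2": 2.0}

def decide_display_layouts(user_positions):
    """user_positions: user -> x coordinate.
    Returns display -> list of (user, orientation)."""
    layouts = {d: [] for d in SCREEN_POSITION_DB}
    for user, x in user_positions.items():
        # Assign each user's information to the nearest display.
        display = min(SCREEN_POSITION_DB,
                      key=lambda d: abs(SCREEN_POSITION_DB[d] - x))
        # Users viewing a transparent display from its far side need
        # reversed characters (an assumed convention for this sketch).
        orientation = "normal" if x <= SCREEN_POSITION_DB[display] else "reversed"
        layouts[display].append((user, orientation))
    return layouts

print(decide_display_layouts({"U1": -0.5, "U2": 2.5, "U3": 3.0}))
```

The point of the sketch is that the screen position DB turns layout determination into a per-display computation: the same positional relationship yields different display information for D1 and D2.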
The screen output distribution unit 19 distributes the display information received from the screen layout determination unit 12 among the plurality of screen display units 13a to 13n and transfers it to them.
The information presentation device 5 according to the fifth embodiment may present information through cooperation among a plurality of devices. That is, the information presentation device 5 may cooperate with a device that is owned by a conversation partner and provided outside the information presentation device 5, and present information on the display of that device. In this case, the screen display units 13b to 13n in FIG. 11 correspond to the displays of those devices, which include portable information terminals such as smartphones and tablets. The screen output distribution unit 19 then transfers the display information to each such device by short-range wireless communication such as Bluetooth or UWB.
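The distribution step itself is a plain fan-out: the screen output distribution unit forwards each display's payload to the output channel registered for it, whether that channel is a built-in screen or an external device reached over a short-range link. The channel class and payload shape below are assumptions for illustration.

```python
class DisplayChannel:
    """Stand-in for one output path: a built-in screen, or an external
    device such as a tablet reached by short-range wireless."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def send(self, payload):
        # A real channel would render or transmit; here we just record.
        self.received.append(payload)

def distribute(display_payloads, channels):
    """Forward each display's payload to its registered channel;
    payloads addressed to unregistered displays are dropped."""
    for name, payload in display_payloads.items():
        if name in channels:
            channels[name].send(payload)

channels = {"13a": DisplayChannel("13a"), "13b": DisplayChannel("13b")}
distribute({"13a": "layout-for-attendant", "13b": "layout-for-customer"}, channels)
print(channels["13b"].received)  # ['layout-for-customer']
```

Treating external devices as just another registered channel is what lets the same distribution code serve both the wired screen display units and a conversation partner's smartphone.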
Note that the information presentation device 5 according to the fifth embodiment may further include the above-described image acquisition unit 17 and face authentication DB 18 in addition to the voice acquisition unit 14. The image acquisition unit 17 outputs the acquired image information, which shows the face of a speaker and is used for face recognition and face authentication, to the speaker/position estimation unit 15. This allows the speaker/position estimation unit 15 to also perform face authentication and face recognition on the images acquired by the image acquisition unit 17 to estimate user position information.
FIG. 13 is a flowchart showing an example of the information presentation method executed by the information presentation device 5 according to the fifth embodiment.
Before execution of the information presentation method starts, as advance preparation, the position of each screen is stored in the screen position DB 20.
In step S401, the voice acquisition unit 14 of the information presentation device 5 acquires the uttered voice.
In step S402, the speaker/position estimation unit 15 estimates each speaker and that speaker's position, and updates the speaker and position information stored in the positional relationship DB 16.
In step S403, the positional relationship estimation unit 11 estimates the positional relationship of the speakers participating in the conversation based on the information stored in the positional relationship DB 16.
In step S404, the screen layout determination unit 12 determines the screen layout of the information to be presented based on the positional relationship estimated by the positional relationship estimation unit 11 and the position information of each screen stored in the screen position DB 20.
In step S405, based on the screen layout passed from the screen layout determination unit 12, the screen output distribution unit 19 requests the screen display units 13a to 13n to output the corresponding display content.
In step S406, the screen display units 13a to 13n output the information to their screens based on the determined screen layout.
In step S407, if it is determined that the conversation is ongoing, steps S401 to S406 are executed repeatedly. As shown in FIG. 3, if the speakers' positions change from form A to form B, or from form B to form A, during this time, the screen layout used to display the information is changed accordingly. When it is determined that the conversation has ended, the information display ends.
The positional relationship estimation unit 11, the screen layout determination unit 12, the speaker/position estimation unit 15, and the screen output distribution unit 19 in the information presentation devices 1, 2, 3, 4, and 5 described above form part of a control arithmetic circuit (controller). The control arithmetic circuit may be configured by dedicated hardware such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field-Programmable Gate Array), by a processor, or by a combination of both.
A computer capable of executing program instructions can also be used to implement the functions of the information presentation devices 1, 2, 3, 4, and 5 described above. FIG. 14 is a block diagram showing a schematic configuration of a computer functioning as the information presentation devices 1, 2, 3, 4, and 5. Here, the computer 100 may be a general-purpose computer, a dedicated computer, a workstation, a PC (Personal Computer), an electronic notepad, or the like. The program instructions may be program code, code segments, or the like for executing the required tasks.
As shown in FIG. 14, the computer 100 includes a processor 110; a ROM (Read Only Memory) 120, a RAM (Random Access Memory) 130, and a storage 140 as storage units; an input unit 150; an output unit 160; and a communication interface (I/F) 170. These components are communicably connected to one another via a bus 180. The voice acquisition unit 14 in the information presentation devices 2, 4, and 5 and the image acquisition unit 17 in the information presentation devices 3 and 4 may be constructed as the input unit 150, and the screen display unit 13 in the information presentation devices 1, 2, 3, 4, and 5 may be constructed as the output unit 160.
The ROM 120 stores various programs and various data. The RAM 130 temporarily stores programs or data as a work area. The storage 140 is configured by an HDD (Hard Disk Drive) or SSD (Solid State Drive) and stores various programs, including an operating system, and various data. In the present invention, the program according to the present invention is stored in the ROM 120 or the storage 140. The positional relationship DB 16, the face authentication DB 18, and the screen position DB 20 according to the embodiments of the present invention may be constructed in the storage 140.
The processor 110 is specifically a CPU (Central Processing Unit), MPU (Micro Processing Unit), GPU (Graphics Processing Unit), DSP (Digital Signal Processor), SoC (System on a Chip), or the like, and may be configured by a plurality of processors of the same type or different types. The processor 110 reads a program from the ROM 120 or the storage 140 and executes it using the RAM 130 as a work area, thereby controlling each of the above components and performing various arithmetic operations. At least part of this processing may be implemented in hardware.
The program may be recorded on a recording medium readable by the computer 100. Using such a recording medium, the program can be installed in the computer 100. The recording medium on which the program is recorded may be a non-transitory recording medium. The non-transitory recording medium is not particularly limited, and may be, for example, a CD-ROM, a DVD-ROM, or a USB (Universal Serial Bus) memory. The program may also be downloaded from an external device via a network. The positional relationship DB 16, the face authentication DB 18, and the screen position DB 20 according to the embodiments of the present invention may be constructed on such a recording medium.
The following supplementary notes are further disclosed with respect to the above embodiments.
(Appendix 1)
An information presentation device that presents information to a plurality of users via a display, the information presentation device comprising:
a control unit that acquires position information of the users, estimates the positional relationship of the users with respect to the display, dynamically determines a screen layout according to the positional relationship, and outputs display information based on the screen layout to the display.
(Appendix 2)
The information presentation device according to Appendix 1, wherein the control unit acquires uttered voice of the users and estimates the position information of the users based on the uttered voice.
(Appendix 3)
The information presentation device according to Appendix 1, wherein the control unit acquires image information of the users and performs face recognition based on the image information to update the position information of the users.
(Appendix 4)
The information presentation device according to Appendix 3, further comprising a face authentication DB that registers face authentication information for identifying an attendant as a specific individual by face authentication, wherein the control unit performs face authentication based on the face authentication information and the image information to estimate the positions of the attendant and the other users separately.
(Appendix 5)
The information presentation device according to any one of Appendices 1 to 4, wherein the control unit outputs the display information to a display that can be viewed from any position through 360 degrees.
(Appendix 6)
The information presentation device according to any one of Appendices 1 to 5, further comprising a screen position DB that stores position information of a plurality of displays that display the display information, wherein the control unit distributes and transfers the display information to the plurality of displays, and determines the screen layout of each of the plurality of displays based on the positional relationship and the position information of the displays.
(Appendix 7)
An information presentation method in an information presentation device that presents information to a plurality of users via a display, the method comprising, by the information presentation device:
acquiring position information of the users and estimating the positional relationship of the users with respect to the display; and
dynamically determining a screen layout according to the positional relationship and outputting display information based on the screen layout to the display.
(Appendix 8)
A non-transitory storage medium storing a computer-executable program, the program causing the computer to function as the information presentation device according to any one of Appendices 1 to 6.
Although the above embodiments have been described as representative examples, it will be apparent to those skilled in the art that many modifications and substitutions can be made within the spirit and scope of the present invention. Therefore, the present invention should not be construed as being limited by the above embodiments, and various modifications and changes are possible without departing from the scope of the claims. For example, a plurality of the configuration blocks shown in the configuration diagrams of the embodiments may be combined into one, or a single configuration block may be divided.
1, 2, 3, 4, 5  Information presentation device
11  Positional relationship estimation unit
12  Screen layout determination unit
13, 13', 13a to 13n  Screen display unit (display)
14  Voice acquisition unit
15  Speaker/position estimation unit
16  Positional relationship DB
17  Image acquisition unit
18  Face authentication DB
19  Screen output distribution unit
20  Screen position DB
100  Computer
110  Processor
120  ROM
130  RAM
140  Storage
150  Input unit
160  Output unit
170  Communication interface (I/F)
180  Bus

Claims (8)

1. An information presentation device that presents information to a plurality of users via a display, the information presentation device comprising:
a positional relationship estimation unit that acquires position information of the users and estimates the positional relationship of the users with respect to the display; and
a screen layout determination unit that dynamically determines a screen layout according to the positional relationship and outputs display information based on the screen layout to the display.
2. The information presentation device according to claim 1, further comprising:
a voice acquisition unit that acquires uttered voice of the users; and
a speaker/position estimation unit that estimates the position information of the users based on the uttered voice.
3. The information presentation device according to claim 1 or 2, further comprising an image acquisition unit that acquires image information of the users, wherein the speaker/position estimation unit performs face recognition based on the image information to estimate the position information of the users.
4. The information presentation device according to claim 3, further comprising a face authentication DB that registers face authentication information for identifying an attendant as a specific individual by face authentication, wherein the speaker/position estimation unit performs face authentication based on the face authentication information and the image information to estimate the positions of the attendant and the other users separately.
5. The information presentation device according to any one of claims 1 to 4, wherein the screen layout determination unit outputs the display information to a display that can be viewed from any position through 360 degrees.
6. The information presentation device according to any one of claims 1 to 5, further comprising:
a screen position DB that stores position information of a plurality of displays that display the display information; and
a screen output distribution unit that distributes and transfers the display information to the plurality of displays,
wherein the screen layout determination unit determines the screen layout of each of the plurality of displays based on the positional relationship and the position information of the displays.
7. An information presentation method in an information presentation device that presents information to a plurality of users via a display, the method comprising, by the information presentation device:
acquiring position information of the users and estimating the positional relationship of the users with respect to the display; and
dynamically determining a screen layout according to the positional relationship and outputting display information based on the screen layout to the display.
8. A program for causing a computer to function as the information presentation device according to any one of claims 1 to 6.
PCT/JP2021/018067 2021-05-12 2021-05-12 Information presentation device, information presentation method, and program WO2022239152A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023520656A JPWO2022239152A1 (en) 2021-05-12 2021-05-12
PCT/JP2021/018067 WO2022239152A1 (en) 2021-05-12 2021-05-12 Information presentation device, information presentation method, and program


Publications (1)

Publication Number Publication Date
WO2022239152A1 true WO2022239152A1 (en) 2022-11-17

Family

ID=84028032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/018067 WO2022239152A1 (en) 2021-05-12 2021-05-12 Information presentation device, information presentation method, and program

Country Status (2)

Country Link
JP (1) JPWO2022239152A1 (en)
WO (1) WO2022239152A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015049931A1 (en) * 2013-10-04 2015-04-09 ソニー株式会社 Information processing device, information processing method, and program
WO2015098188A1 (en) * 2013-12-27 2015-07-02 ソニー株式会社 Display control device, display control method, and program


Also Published As

Publication number Publication date
JPWO2022239152A1 (en) 2022-11-17

Similar Documents

Publication Publication Date Title
CN111634188B (en) Method and device for projecting screen
WO2009104564A1 (en) Conversation server in virtual space, method for conversation and computer program
US20140081634A1 (en) Leveraging head mounted displays to enable person-to-person interactions
US20110316880A1 (en) Method and apparatus providing for adaptation of an augmentative content for output at a location based on a contextual characteristic
KR102193029B1 (en) Display apparatus and method for performing videotelephony using the same
KR101624454B1 (en) Method for providing message service based hologram image, user device, and display for hologram image
US20140232655A1 (en) System for transferring the operation of a device to an external apparatus
WO2022239152A1 (en) Information presentation device, information presentation method, and program
US20160364383A1 (en) Multi-channel cross-modality system for providing language interpretation/translation services
US20230054740A1 (en) Audio generation method, related apparatus, and storage medium
JP2020136921A (en) Video call system and computer program
JP2019057047A (en) Display control system, display control method and program
JP2016048301A (en) Electronic device
EP3040915A1 (en) Method and apparatus for identifying trends
JP2003234842A (en) Real-time handwritten communication system
Tu et al. Conversational greeting detection using captioning on head worn displays versus smartphones
CN112346630B (en) State determination method, device, equipment and computer readable medium
JP2013205995A (en) Server, electronic device, control method of server and control program of server
JP7355467B2 (en) Guide device, guide system, guide method, program, and recording medium
JP2005222316A (en) Conversation support device, conference support system, reception work support system, and program
JP7198952B1 (en) Insurance consultation system, solicitor terminal, and insurance consultation program
WO2022102432A1 (en) Information processing device and information processing method
WO2022185551A1 (en) Voice assist system, voice assist method, and computer program
WO2021121291A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
JP7414053B2 (en) Information processing device, program and information communication method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21941886

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023520656

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE