WO2022054603A1 - Information processing device, information processing terminal, information processing method, and program - Google Patents
Information processing device, information processing terminal, information processing method, and program
- Publication number
- WO2022054603A1 (PCT/JP2021/031450)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound image
- user
- information processing
- voice
- image localization
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1083—In-session procedures
- H04L65/1089—In-session procedures by adding media; by removing media
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/56—Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
- H04M3/568—Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/233—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2242/00—Special services or facilities
- H04M2242/30—Determination of the location of a subscriber
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/15—Aspects of sound capture and related signal processing for recording or reproduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
- H04S7/304—For headphones
Definitions
- This technology relates in particular to an information processing device, an information processing terminal, an information processing method, and a program capable of outputting audio content corresponding to an action by a conversation participant in a realistic manner.
- So-called remote conferences, in which multiple remote participants hold a conference using devices such as PCs, are becoming widespread.
- A user who knows the URL assigned to a conference can join it as a participant by starting a Web browser or a dedicated application installed on the PC and accessing the destination specified by that URL.
- A participant's voice collected by the microphone is transmitted via the server to the devices used by the other participants and output from their headphones or speakers.
- Likewise, the image of a participant captured by the camera is transmitted via the server to the devices used by the other participants and displayed on their displays.
- Because each utterance is shared with all other participants, a participant cannot designate a specific participant and hold a conversation only with that participant.
- Similarly, a participant cannot focus on the utterance of a specific participant in order to listen closely to its content.
- The screen display may visually indicate that a specific participant is performing an action, but it is hard to tell by sound which participant is performing it.
- This technology was developed in view of such a situation, and makes it possible to output audio content corresponding to the actions of conversation participants in a realistic manner.
- An information processing device according to one aspect of the present technology includes a storage unit that stores HRTF data corresponding to a plurality of positions relative to a listening position, and a sound image localization processing unit that performs sound image localization processing on the audio content selected according to an action by a specific participant among the participants of a conversation held via a network, using the HRTF data selected according to the action, so that the sound image is localized at a predetermined position.
- An information processing terminal according to another aspect of the present technology includes an audio receiving unit that receives, from an information processing device which stores HRTF data corresponding to a plurality of positions relative to a listening position and which performs sound image localization processing using the HRTF data selected according to an action by a specific participant among the participants of a conversation held via a network so that the sound image of the audio content selected according to the action is localized at a predetermined position, the audio content obtained by that processing, and outputs the audio.
- In one aspect of the present technology, HRTF data corresponding to a plurality of positions relative to the listening position is stored, and sound image localization processing using the HRTF data selected according to an action by a specific participant of a conversation held over the network is performed so that the sound image of the audio content selected according to the action is localized at a predetermined position.
- In the other aspect of the present technology, the information processing device stores HRTF data corresponding to a plurality of positions relative to the listening position and performs sound image localization processing using the HRTF data selected according to an action by a specific participant of the conversation held over the network, so that the sound image of the audio content selected according to the action is localized at a predetermined position; the audio content obtained by that sound image localization processing is received, and the audio is output.
- FIG. 1 is a diagram showing a configuration example of a Tele-communication system according to an embodiment of the present technology.
- the Tele-communication system of FIG. 1 is configured by connecting a plurality of client terminals used by conference participants to the communication management server 1 via a network 11 such as the Internet.
- Client terminals 2A to 2D, which are PCs, are shown as the client terminals used by users A to D, who are the participants in the conference.
- Any device having a voice input device such as a microphone and a voice output device such as headphones or speakers may be used as a client terminal.
- When it is not necessary to distinguish between the client terminals 2A to 2D, they are collectively referred to as the client terminal 2 as appropriate.
- Users A to D are users who participate in the same conference.
- the number of users participating in the conference is not limited to four.
- the communication management server 1 manages a conference that proceeds with a plurality of users conversing online.
- the communication management server 1 is an information processing device that controls the transmission and reception of voices between client terminals 2 and manages so-called remote conferences.
- As shown by the arrow A1 in the upper part of FIG. 2, when the user A speaks, the client terminal 2A transmits the voice data of the user A collected by its microphone, and the communication management server 1 receives that voice data.
- the communication management server 1 transmits the voice data of the user A to each of the client terminals 2B to 2D as shown by the arrows A11 to A13 in the lower part of FIG. 2, and outputs the voice of the user A.
- In the case of FIG. 2, the user A speaks as the speaker, and the users B to D are the listeners.
- the user who becomes the speaker is referred to as an uttering user
- the user who becomes a listener is referred to as a listening user.
- the voice data transmitted from the client terminal 2 used by the speaking user is transmitted to the client terminals 2 used by the listening users via the communication management server 1.
- the communication management server 1 manages the position of each user in the virtual space.
- the virtual space is, for example, a three-dimensional space virtually set as a place for a meeting. Positions in virtual space are represented by three-dimensional coordinates.
- FIG. 3 is a plan view showing an example of the user's position in the virtual space.
- A vertically long rectangular table T is arranged substantially in the center of the virtual space indicated by the rectangular frame F, and the positions P1 to P4 around the table T are set as the positions of the users A to D, respectively.
- the front direction of each user is the direction of the table T from the position of each user.
- During the meeting, a participant icon, which is information visually representing each user, is displayed on the screen of the client terminal 2, superimposed on a background image showing the place where the meeting is held.
- the position of the participant icon on the screen corresponds to the position of each user in the virtual space.
- the participant icon is configured as a circular image including the user's face.
- the participant icon is displayed in a size corresponding to the distance from the reference position set in the virtual space to the position of each user.
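- The distance-dependent icon sizing above can be sketched as follows; `icon_size`, the base and minimum sizes, and the inverse-distance falloff are illustrative assumptions, not details taken from the patent.

```python
import math

def icon_size(user_pos, reference_pos, base_size=96, min_size=24):
    """Hypothetical sizing rule: the participant icon shrinks with the
    distance from the reference position in the virtual space to the
    user's position, clamped to a minimum legible size."""
    dx, dy, dz = (u - r for u, r in zip(user_pos, reference_pos))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Inverse-distance falloff: full size at the reference position.
    return max(min_size, int(base_size / (1.0 + distance)))
```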
- Participant icons I1 to I4 represent users A to D, respectively.
- the position of each user is automatically set by the communication management server 1 when participating in the conference.
- Alternatively, the position in the virtual space may be set by the user himself or herself by moving the participant icon on the screen.
- The communication management server 1 has HRTF data, which is data of a head-related transfer function (HRTF) expressing the sound transfer characteristics from a plurality of positions to a listening position when each position in the virtual space is set as the listening position. That is, the communication management server 1 prepares HRTF data corresponding to a plurality of positions relative to each listening position in the virtual space.
- The communication management server 1 performs sound image localization processing on the voice data using the HRTF data so that, for each listening user, the voice of the speaking user is heard from the speaking user's position in the virtual space, and transmits the voice data obtained by that processing.
- the voice data transmitted to the client terminal 2 as described above is the voice data obtained by performing the sound image localization process on the communication management server 1.
- Sound image localization processing includes rendering such as VBAP (Vector Based Amplitude Panning) based on position information, and binaural processing using HRTF data.
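- As a rough illustration of the amplitude-panning option, the following sketches 2-D VBAP for a single stereo speaker pair: the source direction is expressed as a linear combination of the two speaker direction vectors, and the gains are power-normalized. The patent only names VBAP as one rendering technique; this minimal solver and its speaker angles are assumptions.

```python
import math

def vbap_stereo_gains(source_deg, left_deg=-30.0, right_deg=30.0):
    """2-D VBAP for one speaker pair: solve g_L*l + g_R*r = p for the
    unit vectors l, r, p, then normalize so g_L^2 + g_R^2 = 1."""
    def unit(deg):
        rad = math.radians(deg)
        return (math.cos(rad), math.sin(rad))
    lx, ly = unit(left_deg)
    rx, ry = unit(right_deg)
    px, py = unit(source_deg)
    det = lx * ry - ly * rx          # invertible for a non-degenerate pair
    g_l = (px * ry - py * rx) / det
    g_r = (lx * py - ly * px) / det
    norm = math.hypot(g_l, g_r)      # constant-power normalization
    return g_l / norm, g_r / norm
```

A source straight ahead (0 degrees) yields equal gains; a source at a speaker's own angle yields gain 1 for that speaker and 0 for the other.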
- the voice of each speaking user is processed by the communication management server 1 as voice data of object audio.
- For example, channel-based audio data of two channels (L/R) generated by the sound image localization processing in the communication management server 1 is transmitted to each client terminal 2, and the voice of the speaking user is output from the headphones or the like provided in the client terminal 2.
- Each listening user thereby feels as if the speaking user's voice is heard from the speaking user's position.
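- The binaural processing step can be sketched as convolving the mono object-audio signal with a left/right HRIR pair (the time-domain form of the HRTF) selected for the speaker's position. The toy direct-form convolution below is an illustration of the principle, not the server's actual engine, and the HRIR values in the test are made up.

```python
def convolve(signal, impulse):
    """Direct-form FIR convolution (a stand-in for an FFT-based engine)."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def binauralize(mono, hrir_left, hrir_right):
    """Render one mono object stream to L/R channel-based audio by
    convolving it with the HRIR pair for the speaking user's position."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)
```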
- FIG. 5 is a diagram showing an example of how the voice is heard.
- For example, for the user A, who is having a conversation with his or her face facing the client terminal 2A (so that the front of the user A is the direction of the client terminal 2A), the voice of the user B can be heard from the right by performing the sound image localization processing based on the HRTF data between the position P2 and the position P1, with the position P2 as the sound source position, as shown by the arrow in FIG. 5.
- Similarly, the voice of the user C can be heard from the front by performing the sound image localization processing based on the HRTF data between the position P3 and the position P1, with the position P3 as the sound source position.
- The voice of the user D can be heard from the back right by performing the sound image localization processing based on the HRTF data between the position P4 and the position P1, with the position P4 as the sound source position.
- Conversely, the voice of the user A is heard from the left by the user B, who is having a conversation with his or her face facing the client terminal 2B, from the front by the user C, who is having a conversation facing the client terminal 2C, and from the back right by the user D, who is having a conversation facing the client terminal 2D.
- In this way, voice data for each listening user is generated according to the positional relationship between that listening user's position and the speaking user's position, and is used for outputting the speaking user's voice.
- The voice data transmitted to each listening user is therefore heard differently depending on the positional relationship between that listening user's position and the speaking user's position.
- FIG. 7 is a diagram showing a state of users participating in the conference.
- user A who wears headphones and participates in a conference listens to the voices of users B to D whose sound images are localized at the positions of the right side, the front side, and the right back side, and has a conversation.
- the positions of the users B to D are the positions on the right side, the front side, and the right back position, respectively, based on the position of the user A.
- the coloring of the users B to D in FIG. 7 indicates that the users B to D do not actually exist in the same space as the space where the user A is having the meeting.
- background sounds such as bird chirping and BGM are also output based on the audio data obtained by the sound image localization process so that the sound image is localized at a predetermined position.
- the voice to be processed by the communication management server 1 includes not only spoken voice but also sounds such as environmental sounds and background sounds.
- sounds such as environmental sounds and background sounds.
- the sound to be processed by the communication management server 1 will be simply described as voice.
- the sound to be processed by the communication management server 1 includes sounds of types other than voice.
- the listening user can easily distinguish the voice of each user even when there are a plurality of participants. For example, even when a plurality of users speak at the same time, the listening user can distinguish each voice.
- the listening user can obtain the feeling that the speaking user actually exists at the position of the sound image from the voice.
- the listening user can have a realistic conversation with another user.
- In step S1, the communication management server 1 determines whether or not voice data has been transmitted from the client terminal 2, and waits until it determines that voice data has been transmitted.
- When it is determined in step S1 that voice data has been transmitted, the communication management server 1 receives the voice data transmitted from the client terminal 2 in step S2.
- In step S3, the communication management server 1 performs the sound image localization processing based on the position information of each user, and generates audio data for each listening user.
- For example, the voice data for the user A is generated so that the sound image of the speaking user's voice is localized at a position corresponding to the speaking user's position when the position of the user A is used as a reference.
- Similarly, the voice data for the user B is generated so that the sound image of the speaking user's voice is localized at a position corresponding to the speaking user's position when the position of the user B is used as a reference.
- The voice data for the other listening users is likewise generated using the HRTF data according to the relative positional relationship between the speaking user's position and each listening user's position used as a reference.
- The voice data for each listening user is therefore different.
- In step S4, the communication management server 1 transmits the voice data to each listening user.
- The above processing is performed every time voice data is transmitted from the client terminal 2 used by the speaking user.
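- Steps S1 to S4 above can be sketched as a single server pass; `server_step`, `hrtf_lookup`, and the returned (HRTF, voice data) pairs are hypothetical stand-ins for the real receiving, localization, and transmission units, not code from the patent.

```python
def server_step(received, positions, hrtf_lookup, speaker):
    """One pass of steps S1-S4: receive voice data, select HRTF data per
    listener from the listener/speaker positions, and return the data to
    transmit to each listening user."""
    if received is None:                  # S1: wait until voice data arrives
        return {}
    outputs = {}
    for user, pos in positions.items():   # S3: per-listener localization
        if user == speaker:               # the speaker does not listen to self
            continue
        hrtf = hrtf_lookup(pos, positions[speaker])
        outputs[user] = (hrtf, received)  # S4: data for transmission
    return outputs
```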
- In step S11, the client terminal 2 determines whether or not microphone voice has been input.
- the microphone voice is the voice collected by the microphone provided in the client terminal 2.
- When it is determined in step S11 that microphone voice has been input, the client terminal 2 transmits the voice data to the communication management server 1 in step S12. If it is determined in step S11 that no microphone voice has been input, the process of step S12 is skipped.
- In step S13, the client terminal 2 determines whether or not voice data has been transmitted from the communication management server 1.
- When it is determined that voice data has been transmitted, the client terminal 2 receives the voice data and outputs the voice of the speaking user in step S14.
- After the voice of the speaking user is output, or when it is determined in step S13 that no voice data has been transmitted, the process returns to step S11 and the above-mentioned processing is repeated.
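- Steps S11 to S14 above can be sketched as one iteration of the client loop; `client_step`, `send`, and `play` are hypothetical stand-ins for the client's transmitting and receiving processing, not identifiers from the patent.

```python
def client_step(mic_input, incoming, send, play):
    """One iteration of steps S11-S14: forward microphone audio when
    present, then play any audio received from the server."""
    if mic_input is not None:   # S11: was microphone voice input?
        send(mic_input)         # S12: transmit it to the server
    if incoming is not None:    # S13: was voice data transmitted?
        play(incoming)          # S14: output the speaking user's voice
```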
- FIG. 10 is a block diagram showing a hardware configuration example of the communication management server 1.
- the communication management server 1 is composed of a computer.
- the communication management server 1 may be configured by one computer having the configuration shown in FIG. 10, or may be configured by a plurality of computers.
- the CPU 101, ROM 102, and RAM 103 are connected to each other by the bus 104.
- the CPU 101 executes the server program 101A and controls the overall operation of the communication management server 1.
- the server program 101A is a program for realizing a Tele-communication system.
- An input / output interface 105 is further connected to the bus 104.
- An input unit 106 including a keyboard, a mouse, and the like, and an output unit 107 including a display, a speaker, and the like are connected to the input / output interface 105.
- the input / output interface 105 is connected to a storage unit 108 made of a hard disk, a non-volatile memory, etc., a communication unit 109 made of a network interface, etc., and a drive 110 for driving the removable media 111.
- the communication unit 109 communicates with the client terminal 2 used by each user via the network 11.
- FIG. 11 is a block diagram showing a functional configuration example of the communication management server 1. At least a part of the functional units shown in FIG. 11 is realized by executing the server program 101A by the CPU 101 of FIG.
- the information processing unit 121 is realized in the communication management server 1.
- the information processing unit 121 includes a voice receiving unit 131, a signal processing unit 132, a participant information management unit 133, a sound image localization processing unit 134, an HRTF data storage unit 135, a system voice management unit 136, a 2ch mix processing unit 137, and a voice transmission unit 138.
- the voice receiving unit 131 controls the communication unit 109 and receives the voice data transmitted from the client terminal 2 used by the speaking user.
- the voice data received by the voice receiving unit 131 is output to the signal processing unit 132.
- the signal processing unit 132 appropriately performs predetermined signal processing on the audio data supplied from the audio receiving unit 131, and outputs the audio data obtained by performing the signal processing to the sound image localization processing unit 134.
- the signal processing unit 132 performs a process of separating the voice of the speaking user from the environmental sound.
- the microphone voice may include environmental sounds such as noise in the space where the speaking user is located.
- the participant information management unit 133 controls the communication unit 109 and communicates with the client terminal 2 to manage the participant information which is information about the participants of the conference.
- FIG. 12 is a diagram showing an example of participant information.
- the participant information includes user information, location information, setting information, and volume information.
- User information is information on the users participating in a conference set up by a certain user. For example, a user ID is included in the user information. The other information included in the participant information is managed in association with the user information.
- Location information is information that represents the location of each user in the virtual space.
- the setting information is information that represents the contents of the settings related to the conference, such as the setting of the background sound used in the conference.
- Volume information is information indicating the volume when outputting the voice of each user.
- Participant information managed by the participant information management unit 133 is supplied to the sound image localization processing unit 134. Participant information managed by the participant information management unit 133 is appropriately supplied to the system voice management unit 136, the 2ch mix processing unit 137, the voice transmission unit 138, and the like. In this way, the participant information management unit 133 functions as a position management unit that manages the position of each user in the virtual space, and also functions as a background sound management unit that manages the background sound setting.
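- The participant information of FIG. 12 might be modeled as a simple record with one entry per user; the field names and types below are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ParticipantInfo:
    """Sketch of the participant information: user information, position
    in the virtual space, conference settings, and output volume."""
    user_id: str
    position: tuple                               # (x, y, z) in the virtual space
    settings: dict = field(default_factory=dict)  # e.g. background sound setting
    volume: float = 1.0                           # output volume for this user's voice
```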
- the sound image localization processing unit 134 reads HRTF data according to the positional relationship of each user from the HRTF data storage unit 135 based on the position information supplied from the participant information management unit 133 and acquires it.
- the sound image localization processing unit 134 performs sound image localization processing, using the HRTF data read from the HRTF data storage unit 135, on the audio data supplied from the signal processing unit 132, and generates audio data for each listening user.
- the sound image localization processing unit 134 performs sound image localization processing using predetermined HRTF data on the system audio data supplied from the system audio management unit 136.
- the system voice is a voice generated on the communication management server 1 side and heard by the listening user together with the voice of the speaking user.
- the system voice includes, for example, a background sound such as BGM and a sound effect.
- the system voice is a voice different from the user's voice.
- voices other than the voice of the speaking user are also processed as object audio.
- Sound image localization processing for localizing the sound image at a predetermined position in the virtual space is also performed on the audio data of the system audio. For example, a sound image localization process for localizing a sound image at a position farther than the position of the participant is applied to the audio data of the background sound.
- the sound image localization processing unit 134 outputs the audio data obtained by performing the sound image localization processing to the 2ch mix processing unit 137.
- the voice data of the speaking user and the voice data of the system voice are output to the 2ch mix processing unit 137 as appropriate.
- the HRTF data storage unit 135 stores HRTF data corresponding to a plurality of positions based on each listening position in the virtual space.
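- The HRTF data storage unit might be sketched as a table of HRIR pairs keyed by (listening position, source position); the class name, the grid of coordinates, and the nearest-neighbour fallback below are assumptions, not details from the patent.

```python
class HRTFDataStorage:
    """Minimal sketch of an HRTF store: HRIR pairs keyed by
    (listening position, source position) tuples in the virtual space."""
    def __init__(self):
        self._table = {}

    def store(self, listen_pos, source_pos, hrir_pair):
        self._table[(listen_pos, source_pos)] = hrir_pair

    def lookup(self, listen_pos, source_pos):
        """Exact match, else the nearest stored source position for the
        same listening position (a simple stand-in for interpolation)."""
        key = (listen_pos, source_pos)
        if key in self._table:
            return self._table[key]
        candidates = [k for k in self._table if k[0] == listen_pos]
        if not candidates:
            raise KeyError("no HRTF data for this listening position")
        def dist(k):
            return sum((a - b) ** 2 for a, b in zip(k[1], source_pos))
        return self._table[min(candidates, key=dist)]
```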
- the system voice management unit 136 manages the system voice.
- the system audio management unit 136 outputs the audio data of the system audio to the sound image localization processing unit 134.
- the 2ch mix processing unit 137 performs 2ch mix processing on the audio data supplied from the sound image localization processing unit 134. By performing the 2ch mix processing, channel-based audio data including the components of the audio signal L and the audio signal R of the voice of the speaking user and the system voice is generated. The audio data obtained by performing the 2ch mix processing is output to the audio transmission unit 138.
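- The 2ch mix processing can be sketched as summing the already-localized L/R pairs of the speaking users' voices and the system voice into one channel-based stereo signal; the clipping to [-1, 1] and the assumption that all pairs share one length are illustrative choices, not details from the patent.

```python
def mix_2ch(streams):
    """Sum a list of (left, right) sample lists into one stereo pair.
    Assumes every channel list has the same length; samples are clipped
    to [-1.0, 1.0] after summation."""
    length = max(len(left) for left, _ in streams)
    out_l = [0.0] * length
    out_r = [0.0] * length
    for left, right in streams:
        for i, v in enumerate(left):
            out_l[i] += v
        for i, v in enumerate(right):
            out_r[i] += v
    clip = lambda x: max(-1.0, min(1.0, x))
    return [clip(v) for v in out_l], [clip(v) for v in out_r]
```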
- the voice transmission unit 138 controls the communication unit 109 and transmits the voice data supplied from the 2ch mix processing unit 137 to the client terminal 2 used by each listening user.
- FIG. 13 is a block diagram showing a hardware configuration example of the client terminal 2.
- the client terminal 2 is configured by connecting a memory 202, a voice input device 203, a voice output device 204, an operation unit 205, a communication unit 206, a display 207, and a sensor unit 208 to the control unit 201.
- the control unit 201 is composed of a CPU, ROM, RAM, and the like.
- the control unit 201 controls the overall operation of the client terminal 2 by executing the client program 201A.
- the client program 201A is a program for using the Tele-communication system managed by the communication management server 1.
- the client program 201A includes a transmitting side module 201A-1 that executes the processing on the transmitting side and a receiving side module 201A-2 that executes the processing on the receiving side.
- the memory 202 is composed of a flash memory or the like.
- the memory 202 stores various information such as the client program 201A executed by the control unit 201.
- the voice input device 203 is composed of a microphone.
- the voice collected by the voice input device 203 is output to the control unit 201 as a microphone voice.
- the audio output device 204 is composed of devices such as headphones and speakers.
- the audio output device 204 outputs the audio of the participants of the conference based on the audio signal supplied from the control unit 201.
- In the following description, the voice input device 203 is assumed to be a microphone and the audio output device 204 to be headphones, as appropriate.
- the operation unit 205 is composed of various buttons and a touch panel provided on the display 207.
- the operation unit 205 outputs information representing the content of the user's operation to the control unit 201.
- the communication unit 206 is a communication module compatible with wireless communication of mobile communication systems such as 5G communication, and a communication module compatible with wireless LAN and the like.
- the communication unit 206 receives the radio wave output from the base station and communicates with various devices such as the communication management server 1 via the network 11.
- the communication unit 206 receives the information transmitted from the communication management server 1 and outputs it to the control unit 201. Further, the communication unit 206 transmits the information supplied from the control unit 201 to the communication management server 1.
- the display 207 is composed of an organic EL display, an LCD, and the like. Various screens such as a remote conference screen are displayed on the display 207.
- the sensor unit 208 is composed of various sensors such as an RGB camera, a depth camera, a gyro sensor, and an acceleration sensor.
- the sensor unit 208 outputs the sensor data obtained by performing the measurement to the control unit 201. Based on the sensor data measured by the sensor unit 208, the user's situation is appropriately recognized.
- FIG. 14 is a block diagram showing a functional configuration example of the client terminal 2. At least some of the functional units shown in FIG. 14 are realized by the control unit 201 of FIG. 13 executing the client program 201A.
- the information processing unit 211 is realized in the client terminal 2.
- the information processing unit 211 is composed of a voice processing unit 221, a setting information transmission unit 222, a user situation recognition unit 223, and a display control unit 224.
- The voice processing unit 221 is composed of a voice receiving unit 231, an output control unit 232, a microphone voice acquisition unit 233, and a voice transmitting unit 234.
- the voice receiving unit 231 controls the communication unit 206 and receives the voice data transmitted from the communication management server 1.
- the voice data received by the voice receiving unit 231 is supplied to the output control unit 232.
- the output control unit 232 outputs the voice corresponding to the voice data transmitted from the communication management server 1 from the voice output device 204.
- the microphone voice acquisition unit 233 acquires the voice data of the microphone voice collected by the microphones constituting the voice input device 203.
- the voice data of the microphone voice acquired by the microphone voice acquisition unit 233 is supplied to the voice transmission unit 234.
- the voice transmission unit 234 controls the communication unit 206 and transmits the voice data of the microphone voice supplied from the microphone voice acquisition unit 233 to the communication management server 1.
- the setting information transmission unit 222 generates setting information representing the contents of various settings according to the user's operation.
- the setting information transmission unit 222 controls the communication unit 206 and transmits the setting information to the communication management server 1.
- the user situation recognition unit 223 recognizes the user situation based on the sensor data measured by the sensor unit 208.
- The user situation recognition unit 223 controls the communication unit 206 and transmits information indicating the user's situation to the communication management server 1.
- the display control unit 224 communicates with the communication management server 1 by controlling the communication unit 206, and displays the remote conference screen on the display 207 based on the information transmitted from the communication management server 1.
- the virtual reaction function is a function used to convey one's reaction to other users.
- The remote conference realized by the communication management server 1 is provided with, for example, an applause function as a virtual reaction function. Output of the applause sound effect using the applause function is instructed from a screen displayed as a GUI on the display 207 of the client terminal 2.
- FIG. 15 is a diagram showing an example of a remote conference screen.
- Participant icons I31 to I33 representing users participating in the conference are displayed on the remote conference screen shown in FIG. Assuming that the remote conference screen shown in FIG. 15 is a screen displayed on the client terminal 2A used by the user A, the participant icons I31 to I33 represent the users B to D, respectively. Participant icons I31 to I33 are displayed at positions corresponding to the positions of users B to D in the virtual space.
- a virtual reaction button 301 is displayed under the participant icons I31 to I33.
- the virtual reaction button 301 is a button pressed when instructing the output of the applause sound effect.
- a similar screen is displayed on the client terminal 2 used by the users B to D.
- Icons indicating that user B and user C are using the applause function are displayed next to the participant icons I31 and I32, respectively.
- the applause sound effect is played back as system voice on the communication management server 1 side, and is delivered to each listening user together with the voice of the speaking user.
- the sound image localization process for localizing the sound image at a predetermined position is also performed on the audio data of the applause sound effect.
- FIG. 17 is a diagram showing a flow of processing related to sound effect output using the virtual reaction function.
- the operation information indicating that the output of the applause sound effect is instructed is transmitted from the client terminal 2 to the communication management server 1 as shown by the arrows A11 and A12.
- In the communication management server 1, the applause sound effect is added to the microphone voice, and sound image localization processing using HRTF data corresponding to the respective positional relationships is performed on each of the speaking user's voice data and the sound effect's voice data.
- sound image localization processing for localizing the sound image at the same position as the position of the user who instructed the output of the applause sound effect is performed on the sound data of the sound effect.
- the sound image of the applause sound effect is localized and felt at the same position as the position of the user who instructed the output of the applause sound effect.
- When a plurality of users instruct the output of the applause sound effect, sound image localization processing for localizing the sound image at the center of gravity of those users' positions is applied to the audio data of the sound effect.
- The sound image of the applause sound effect is thus felt as localized where the users who instructed its output are concentrated. The sound image of the sound effect may also be localized at various other positions selected based on the instructing users' positions, rather than at the center of gravity.
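The center-of-gravity placement can be sketched as a simple centroid over the requesting users' 2D positions (an illustrative helper; the coordinate representation is an assumption, not specified in the document):

```python
def applause_image_position(requester_positions):
    """Position for the applause effect's sound image: the requester's own
    position when one user applauds, the centroid (center of gravity) of
    all requesters' positions when several applaud at once."""
    xs = [x for x, _ in requester_positions]
    ys = [y for _, y in requester_positions]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```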
- the audio data generated by the sound image localization process is transmitted to and output to the client terminal 2 used by each listening user as shown by arrow A15.
- As described above, HRTF data for localizing the sound image of the applause sound effect at a predetermined position is selected in response to an action such as execution of the applause function. Then, based on the audio data obtained by sound image localization processing using the selected HRTF data, the applause sound effect is provided to each listening user as audio content.
- The microphone voices #1 to #N shown at the top using a plurality of blocks are the voices of speaking users detected at different client terminals 2. The audio output shown at the bottom using one block represents the output at the client terminal 2 used by one listening user.
- the functions indicated by the arrows A11 and A12 regarding the instruction to send the virtual reaction are realized by the transmitting side module 201A-1. Further, the sound image localization process using the HRTF data is realized by the server program 101A.
- In step S101, the system voice management unit 136 (FIG. 11) receives operation information indicating that the output of the applause sound effect has been instructed.
- the client terminal 2 used by the user sends operation information indicating that the output of the applause sound effect is instructed.
- the operation information is transmitted, for example, by the user situation recognition unit 223 (FIG. 14) of the client terminal 2.
- In step S102, the voice receiving unit 131 receives the voice data transmitted from the client terminal 2 used by the speaking user.
- the audio data received by the audio receiving unit 131 is supplied to the sound image localization processing unit 134 via the signal processing unit 132.
- In step S103, the system voice management unit 136 outputs the voice data of the applause sound effect to the sound image localization processing unit 134, adding it to the voice data subject to sound image localization processing.
- The sound image localization processing unit 134 reads and acquires, from the HRTF data storage unit 135, HRTF data corresponding to the positional relationship between the listening user's position and the speaking user's position, and HRTF data corresponding to the positional relationship between the listening user's position and the applause sound effect's position. As the position of the applause sound effect, a predetermined position as described above is selected as the position at which its sound image is localized.
- The sound image localization processing unit 134 performs sound image localization processing using the speech HRTF data on the speaking user's voice data, and sound image localization processing using the sound-effect HRTF data on the applause voice data.
- In step S105, the audio transmission unit 138 transmits the audio data obtained by the sound image localization processing to the client terminals 2 used by the listening users.
- the sound image of the voice of the uttering user and the sound image of the sound effect of the applause are localized and felt at predetermined positions, respectively.
- Instead of performing sound image localization processing separately on the speaking user's voice data and the applause voice data, the two may first be synthesized and sound image localization processing performed on the synthesized voice data. This also localizes the sound image of the applause sound effect at the same position as the user who instructed its output.
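The synthesized-first variant gives the same result because convolution is linear: convolving the mixed signal with one HRIR equals mixing the individually convolved signals. A small self-contained check (illustrative only, with a hypothetical `convolve` helper):

```python
def convolve(x, h):
    """Direct-form FIR convolution."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

voice = [1.0, 0.5, 0.0]
applause = [0.0, 0.2, 0.4]
hrir = [0.8, 0.1]  # both sources share the same localization position

# mix first, then one localization pass
mixed_first = convolve([v + a for v, a in zip(voice, applause)], hrir)
# localize each, then mix
localized_first = [v + a for v, a in
                   zip(convolve(voice, hrir), convolve(applause, hrir))]
```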
- Each listening user can thus intuitively recognize whether any user is showing a reaction such as empathy or surprise.
- the voice including the microphone voice of the speaking user and the sound effect of applause may be output as follows.
- the microphone voice whose voice quality is changed by the filter processing on the client terminal 2 side is transmitted to the communication management server 1.
- For example, filter processing that changes the voice quality to that of an elderly person or a child is performed on the speaking user's microphone voice.
- The type of sound effect reproduced as the system voice may be changed according to the number of users who simultaneously instruct its output. For example, if the number of users instructing the output of the applause sound effect is equal to or greater than a threshold, a sound effect representing a large crowd cheering is reproduced and delivered to the listening users instead of the applause sound effect.
- the selection of the type of sound effect is performed by the system voice management unit 136.
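The threshold-based selection can be sketched as follows (the threshold value and names are assumptions; the document specifies neither):

```python
CHEER_THRESHOLD = 10  # assumed value; not given in the document

def select_system_effect(num_requesters, threshold=CHEER_THRESHOLD):
    """Choose the system-voice sample: individual applause normally, a
    crowd-cheer sample once enough users request applause at once."""
    return "cheers" if num_requesters >= threshold else "applause"
```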
- HRTF data for localizing the sound at a predetermined position, such as near the listening user's position, above it, or below it, is selected, and sound image localization processing is performed.
- the position where the sound image of the sound effect is localized may be changed, or the volume may be changed.
- Functions for conveying reactions other than applause, such as a function expressing joy and a function expressing anger, may also be prepared as virtual reaction functions. Different voice data is reproduced for each type of reaction and output as a sound effect. The position at which the sound image is localized may also be changed for each type of reaction.
- The ear-beating (whisper) function is a function of designating one user as the listening user and speaking only to that user.
- The speaking user's voice is delivered only to the designated user, not to the other users. Delivery of the voice to a single user using the ear-beating function is specified from a screen displayed as a GUI on the display 207 of the client terminal 2.
- FIG. 19 is a diagram showing an example of a remote conference screen.
- the remote conference screen displays the participant icons I31 to I33 representing the users participating in the conference.
- the remote conference screen shown in FIG. 19 is a screen displayed on the client terminal 2A used by the user A
- the participant icons I31 to I33 represent the users B to D, respectively.
- When the participant icon I31 is selected by user A using the cursor, user B is designated as the ear-beating target user.
- the participant icon I31 representing user B is highlighted as shown in FIG.
- When user A speaks in this state, the communication management server 1 performs sound image localization processing for localizing user A's voice data at the ear of user B, who is designated as the ear-beating target.
- the default state is a state in which the user to be struck is not specified.
- the voice of the speaking user is delivered to all other users so that the sound image is localized at a position corresponding to the positional relationship between the listening user and the speaking user.
- FIG. 20 is a diagram showing a flow of processing related to voice output using the ear-beating function.
- When the ear-beating function is used, operation information indicating that the ear-beating target user has been designated is transmitted from the client terminal 2 to the communication management server 1, as shown by arrow A21.
- The operation information indicating that the ear-beating target user has been designated may instead be transmitted as shown by arrow A22.
- The communication management server 1 performs sound image localization processing on the voice data of microphone voice #1 so as to localize its sound image near the ear of the user designated as the ear-beating target. That is, HRTF data corresponding to the ear position of that user is selected and used for the sound image localization processing.
- The microphone voice #1 indicated by arrow A23 is the voice of the ear-beating user, that is, the speaking user who designated one user as the target using the ear-beating function.
- the audio data generated by the sound image localization process is transmitted to and output to the client terminal 2 used by the user to be struck by the ear, as shown by the arrow A24.
- The communication management server 1 performs sound image localization processing using HRTF data corresponding to the positional relationship between the listening user and the speaking user.
- the audio data generated by the sound image localization process is transmitted to and output to the client terminal 2 used by the listening user, as shown by arrow A26.
- In step S111, the system voice management unit 136 receives operation information indicating that the ear-beating target user has been selected.
- The operation information indicating that the ear-beating target user has been selected is transmitted from the client terminal 2 used by the designating user.
- the operation information is transmitted, for example, by the user situation recognition unit 223 of the client terminal 2.
- In step S112, the voice receiving unit 131 receives the voice data transmitted from the client terminal 2 used by the ear-beating user.
- the audio data received by the audio receiving unit 131 is supplied to the sound image localization processing unit 134.
- In step S113, the sound image localization processing unit 134 reads and acquires, from the HRTF data storage unit 135, the HRTF data corresponding to the ear position of the ear-beating target user. The sound image localization processing unit 134 then performs sound image localization processing using that HRTF data on the voice data of the speaking user (the ear-beating user) so that the sound image is localized at the target user's ear.
- In step S114, the audio transmission unit 138 transmits the audio data obtained by the sound image localization processing to the client terminal 2 used by the ear-beating target user.
- At the client terminal 2, the voice of the ear-beating user is output based on the voice data transmitted from the communication management server 1.
- The user selected as the ear-beating target hears the voice of the ear-beating user with its sound image felt near his or her ear.
- the speaking user can specify one user and speak only to that user.
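The delivery rule just described — everyone hears the speaker in the default state, only the designated user hears the speaker during ear-beating — can be sketched as a routing function (names are hypothetical; this is not the patent's implementation):

```python
def route_voice(speaker, whisper_target, participants):
    """Return the users who receive `speaker`'s voice: only the designated
    target while the ear-beating function is active, all other
    participants in the default state (whisper_target is None)."""
    others = [p for p in participants if p != speaker]
    if whisper_target is None:
        return others  # default: deliver to every other user
    return [p for p in others if p == whisper_target]
```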
- The voice of the ear-beating user and the voice of another user speaking at the same time may both be delivered to the user (listening user) selected as the ear-beating target.
- In that case, sound image localization processing is performed on the ear-beating user's voice data so that its sound image is localized at the listening user's ear.
- For the other voice, sound image localization processing is performed using HRTF data corresponding to the positional relationship between the listening user's position and that speaking user's position.
- The position at which the sound image is localized may also be specified by the ear-beating user.
- the focus function is a function for designating one user as a focus target and making it easier to hear the voice of that user.
- Whereas the above-mentioned ear-beating function is used by the user on the speaking side, the focus function is used by the user on the listening side.
- the user to be focused is specified from the screen displayed as a GUI on the display 207 of the client terminal 2.
- FIG. 22 is a diagram showing an example of a remote conference screen.
- the remote conference screen displays the participant icons I31 to I33 representing the users participating in the conference.
- the participant icons I31 to I33 represent the users B to D, respectively.
- When the participant icon I31 is selected by user A using the cursor, user B is designated as the focus target user.
- the participant icon I31 representing user B is highlighted as shown in FIG.
- When user B speaks, the communication management server 1 performs sound image localization processing for localizing the sound image of user B's voice near user A, who designated user B as the focus target.
- For the voice data of user C and the voice data of user D, sound image localization processing using HRTF data corresponding to their respective positional relationships with user A is performed.
- the default state is a state in which the user to be focused is not specified.
- the voice of the speaking user is delivered to all other users so that the sound image is localized at a position corresponding to the positional relationship between the listening user and the speaking user.
- FIG. 23 is a diagram showing a flow of processing related to audio output using the focus function.
- When the focus function is used, operation information indicating that the focus target user has been designated is transmitted from the client terminal 2 to the communication management server 1, as shown by arrow A31.
- The operation information indicating that the focus target user has been designated may instead be transmitted as shown by arrow A32.
- When the microphone voice is transmitted from the client terminals 2 as shown by arrows A33 and A34, the communication management server 1 performs sound image localization processing on the voice data of the focus target user's microphone voice so as to localize its sound image close to the user who designated the focus target. That is, HRTF data corresponding to a position near that user's position is selected and used for the sound image localization processing.
- For the voice data of the microphone voices of users other than the focus target user, sound image localization processing is performed so as to localize the sound image at a position away from the designating user. That is, HRTF data corresponding to a position away from the position of the user who designated the focus target is selected and used for the sound image localization processing.
- the microphone voice # 1 indicated by the arrow A33 is the microphone voice of the user to be focused.
- the voice data of the microphone voice # 1 is transmitted from the client terminal 2 used by the focus target user to the communication management server 1.
- the microphone voice #N indicated by the arrow A34 is the microphone voice of a user other than the user to be focused.
- the voice data of the microphone voice # N is transmitted from the client terminal 2 used by a user other than the focus target user to the communication management server 1.
- the audio data generated by the sound image localization process is transmitted to and output to the client terminal 2 used by the user who has specified the focus target, as shown by the arrow A35.
- As described above, in response to an action such as execution of the focus function, HRTF data for localizing the sound image of the focus target user's voice near the user who selected the focus target is selected. Then, based on the audio data obtained by sound image localization processing using the selected HRTF data, the focus target user's voice is provided as audio content to the user who selected the focus target.
- In step S121, the participant information management unit 133 receives operation information indicating that the focus target user has been selected.
- The operation information indicating that the focus target user has been selected is transmitted from the client terminal 2 used by the selecting user.
- the operation information is transmitted, for example, by the user situation recognition unit 223 of the client terminal 2.
- the voice receiving unit 131 receives the voice data transmitted from the client terminal 2. For example, the voice data of a user other than the focus target user (a user who is not selected as the focus target) is received together with the voice data of the focus target user.
- the audio data received by the audio receiving unit 131 is supplied to the sound image localization processing unit 134.
- In step S123, the sound image localization processing unit 134 reads and acquires, from the HRTF data storage unit 135, the HRTF data corresponding to a position near the user who selected the focus target. It then performs sound image localization processing using the acquired HRTF data on the focus target user's audio data so as to localize the sound image near the selecting user.
- In step S124, the sound image localization processing unit 134 reads and acquires, from the HRTF data storage unit 135, the HRTF data corresponding to a position away from the user who selected the focus target. It then performs sound image localization processing using the acquired HRTF data on the audio data of users other than the focus target user so as to localize their sound images at positions away from the selecting user.
- In step S125, the audio transmission unit 138 transmits the audio data obtained by the sound image localization processing to the client terminal 2 used by the user who selected the focus target.
- the voice of the speaking user is output based on the voice data transmitted from the communication management server 1.
- The user who selected the focus target hears the focus target user's voice with its sound image felt nearby, and the voices of the other users with their sound images felt at distant positions.
- the user can specify one user and listen to the utterance of that user in a concentrated manner.
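The near/far HRTF selection for the focus function can be sketched as a distance rule (the distance values and names are assumptions for illustration; the document only says "near" and "away"):

```python
NEAR, FAR = 0.5, 3.0  # assumed rendering distances used to pick HRTF data

def rendering_distance(speaker, focus_target):
    """Distance at which a speaker's voice is rendered for the listener
    who set the focus: the focused user sounds close, everyone else
    farther away."""
    return NEAR if speaker == focus_target else FAR
```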
- Conversely, for the voice of a user selected as one the listener wants to keep at a distance, sound image localization processing may be performed so that the sound image is localized at a position away from the listening user.
- the same configurations as the sound image localization processing unit 134, the HRTF data storage unit 135, and the 2ch mix processing unit 137 are provided in the client terminal 2.
- the same configuration as the sound image localization processing unit 134, the HRTF data storage unit 135, and the 2ch mix processing unit 137 is realized by, for example, the receiving side module 201A-2.
- the sound image localization processing is performed on the client terminal 2 side. By performing the sound image localization process locally, it is possible to speed up the response to parameter changes.
- On the other hand, when the sound image localization processing is performed on the communication management server 1 side, the amount of data communication between the communication management server 1 and the client terminal 2 can be reduced.
- FIG. 25 is a diagram showing a processing flow related to dynamic switching of sound image localization processing.
- the microphone sound transmitted from the client terminal 2 as shown by the arrows A101 and A102 is transmitted to the client terminal 2 as it is as shown by the arrow A103.
- the client terminal 2 that is the transmission source of the microphone voice is the client terminal 2 used by the speaking user, and the client terminal 2 that is the transmission destination of the microphone voice is the client terminal 2 that is used by the listening user.
- When a parameter related to sound image localization, such as the listening user's position, is changed, the change is reflected in real time, and sound image localization processing is performed on the client terminal 2 side for the microphone sound transmitted from the communication management server 1.
- the sound corresponding to the sound data generated by the sound image localization process on the client terminal 2 side is output as shown by the arrow A105.
- the changed contents of the parameter settings are saved, and the information indicating the changed contents is transmitted to the communication management server 1 as shown by the arrow A106.
- When the sound image localization processing is switched to the communication management server 1 side, sound image localization processing reflecting the changed parameters is performed on the microphone sound transmitted from the client terminals 2, as shown by arrows A107 and A108.
- the audio data generated by the sound image localization process is transmitted to and output to the client terminal 2 used by the listening user as shown by arrow A109.
- In step S201, it is determined whether or not the parameter settings have remained unchanged for a certain period of time or longer. This determination is made by the participant information management unit 133 based on, for example, information transmitted from the client terminal 2 used by the listening user.
- In step S202, the voice transmission unit 138 transmits the received voice data of the speaking user as it is to the client terminal 2 used by the listening user.
- the transmitted audio data is object audio data.
- In step S203, the participant information management unit 133 receives the information indicating the content of the setting change transmitted from the client terminal 2. After the listening user's position information is updated based on that information, the process returns to step S201 and the subsequent processing is repeated. Sound image localization processing performed on the communication management server 1 side is based on the updated position information.
- In step S204, sound image localization processing is performed on the communication management server 1 side.
- the process performed in step S204 is basically the same process as described with reference to FIG.
- the above processing is performed not only when the position is changed, but also when other parameters such as the background sound setting are changed.
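The switching policy described above — process locally while the listener is still adjusting settings, hand back to the server once settings have been stable for a while — can be sketched as follows (the stability window and names are assumptions; the document only says "a certain period of time"):

```python
class LocalizationSwitcher:
    """Decide where sound image localization runs: on the client while the
    listener is adjusting parameters (fast response), on the server once
    settings have been stable for `stable_secs` (less data traffic)."""

    def __init__(self, stable_secs=5.0):  # assumed window length
        self.stable_secs = stable_secs
        self.last_change = 0.0

    def on_setting_changed(self, now):
        """Record the time of a localization-parameter change."""
        self.last_change = now

    def process_on_server(self, now):
        """True when settings have been unchanged long enough to switch
        processing back to the communication management server."""
        return (now - self.last_change) >= self.stable_secs
```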
- Acoustic settings suitable for the background sound may be stored in a database and managed by the communication management server 1. For example, a position suitable as a position for localizing the sound image is set for each type of background sound, and HRTF data corresponding to the set position is saved. Parameters for other acoustic settings, such as reverb, may be saved.
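A per-background-sound settings table of this kind could look like the following sketch (the entries and field names are hypothetical; the document only states that a localization position, hence HRTF data, and parameters such as reverb are stored per type):

```python
# Hypothetical acoustic-settings database keyed by background-sound type.
BACKGROUND_ACOUSTICS = {
    "bgm":      {"position": (0.0, 8.0, 1.5), "reverb": 0.3},
    "applause": {"position": (0.0, 2.0, 1.0), "reverb": 0.1},
}

def acoustic_settings(background_type):
    """Look up the stored acoustic settings for a background-sound type,
    or None if no preset exists for it."""
    return BACKGROUND_ACOUSTICS.get(background_type)
```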
- FIG. 27 is a diagram showing a flow of processing related to management of acoustic settings.
- When a background sound is to be synthesized with the speaking user's voice, the background sound is reproduced on the communication management server 1, and sound image localization processing is performed using the acoustic settings suited to that background sound, such as its HRTF data, as shown by arrow A121.
- The audio data generated by the sound image localization processing is transmitted to the client terminal 2 used by the listening user and output there, as indicated by arrow A122.
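The acoustic-settings database described above might be organized as a simple lookup table keyed by background-sound type. All sound types, positions, keys, and parameter values below are illustrative assumptions, not taken from the patent.

```python
# Each background-sound type maps to a preferred localization position,
# a key identifying the HRTF data saved for that position, and other
# acoustic parameters such as reverb depth.
ACOUSTIC_SETTINGS = {
    "birdsong":  {"position": (0.0, 2.0, 3.0),  "hrtf_key": "above",   "reverb": 0.1},
    "fireworks": {"position": (0.0, 10.0, 8.0), "hrtf_key": "distant", "reverb": 0.4},
    "cafe":      {"position": (2.0, 0.0, 0.0),  "hrtf_key": "side",    "reverb": 0.2},
}

DEFAULT_SETTINGS = {"position": (0.0, 1.0, 0.0), "hrtf_key": "front", "reverb": 0.0}

def settings_for(background_sound):
    """Look up the acoustic settings managed by the server for a given
    background-sound type, falling back to neutral defaults."""
    return ACOUSTIC_SETTINGS.get(background_sound, DEFAULT_SETTINGS)
```

The server would consult this table when reproducing the background sound and performing the localization processing of arrow A121.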
- The series of processes described above can be executed by hardware or by software.
- When the series of processes is executed by software, the programs constituting the software are installed on a computer embedded in dedicated hardware, a general-purpose personal computer, or the like.
- The installed program is provided recorded on the removable media 111 shown in FIG. 10, which consists of an optical disc (CD-ROM (Compact Disc-Read Only Memory), DVD (Digital Versatile Disc), etc.), a semiconductor memory, or the like. It may also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting.
- Alternatively, the program can be installed in advance in the ROM 102 or the storage unit 108.
- The program executed by the computer may be a program in which the processes are performed in chronological order following the sequence described in this specification, or a program in which the processes are performed in parallel or at necessary timings, such as when a call is made.
- In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
- Although headphones or speakers are used as the audio output device, other devices may be used. For example, ordinary earphones (inner-ear headphones) or open-type earphones capable of capturing environmental sounds can be used as the audio output device.
- The present technology can take a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
- Each step described in the above flowcharts can be executed by one device or shared among a plurality of devices.
- Similarly, when one step includes a plurality of processes, those processes can be executed by one device or shared among a plurality of devices.
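Selecting HRTF data according to the relative positions of the listener and the sound source, and applying it to produce a localized two-channel signal, can be sketched as follows. The table layout, azimuth-only selection, and one-tap filters are simplifying assumptions for illustration; a real implementation would use full-length impulse responses and elevation as well.

```python
import math

def azimuth(listener_pos, source_pos):
    """Horizontal angle from listener to source, in degrees [0, 360)."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def select_hrtf(hrtf_table, listener_pos, source_pos):
    """Pick the stored (left, right) impulse-response pair whose azimuth
    is closest to the listener-to-source direction. The table is keyed
    by azimuth in degrees, as the storage unit holds HRTF data for a
    plurality of positions relative to the listening position."""
    az = azimuth(listener_pos, source_pos)
    nearest = min(hrtf_table, key=lambda k: min(abs(k - az), 360.0 - abs(k - az)))
    return hrtf_table[nearest]

def convolve(signal, ir):
    """Plain time-domain convolution, truncated to the signal length."""
    out = [0.0] * len(signal)
    for n in range(len(signal)):
        for k, h in enumerate(ir):
            if n - k >= 0:
                out[n] += h * signal[n - k]
    return out

def localize(signal, hrtf_table, listener_pos, source_pos):
    """Return a (left, right) pair of localized signals."""
    left_ir, right_ir = select_hrtf(hrtf_table, listener_pos, source_pos)
    return convolve(signal, left_ir), convolve(signal, right_ir)
```

The same selection step is what changes between the use cases: the source position is taken near the listener's ear for the whisper function, or near the focused speaker for the focus function.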
- (1) An information processing device comprising: a storage unit that stores HRTF data corresponding to a plurality of positions relative to a listening position; and a sound image localization processing unit that provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
- (2) The information processing device according to (1), wherein the sound image localization processing unit provides the audio content for outputting a sound effect in response to the action of instructing output of the sound effect being performed by the specific participant.
- (3) The information processing device according to (2), wherein the sound image localization processing unit performs the sound image localization processing on the audio data of the sound effect, using the HRTF data corresponding to the relationship, in a virtual space, between the position of the participant serving as a listener and the position of the specific participant who performed the action.
- (4) The information processing device according to (1), wherein the sound image localization processing unit provides the audio content for outputting the voice of the specific participant in response to the action of selecting the participant serving as the listening destination of the voice being performed by the specific participant.
- (5) The information processing device according to (4), wherein the selection of the participant serving as the listening destination is performed using visual information, displayed on a screen, that visually represents the participant.
- (6) The information processing device according to (4) or (5), wherein the sound image localization processing unit performs the sound image localization processing on the voice data of the specific participant, using the HRTF data corresponding to the position near the ear of the participant serving as the listening destination in a virtual space.
- (7) The information processing device according to (1), wherein the sound image localization processing unit provides the audio content for outputting the voice of a speaker in response to the action of selecting the speaker to be focused on being performed by the specific participant.
- (8) The information processing device according to (7), wherein the selection of the speaker to be focused on is performed using visual information, displayed on a screen, that visually represents the participant.
- (9) The information processing device according to (7) or (8), wherein the sound image localization processing unit performs the sound image localization processing on the voice data of the speaker to be focused on, using the HRTF data corresponding to a position near the position of the specific participant in a virtual space.
- (10) An information processing method in which an information processing device: stores HRTF data corresponding to a plurality of positions relative to a listening position; and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
- (11) A program that causes a computer to execute processing of: storing HRTF data corresponding to a plurality of positions relative to a listening position; and providing audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
- (12) An information processing terminal comprising an audio receiving unit that receives the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position, and that outputs the audio.
- (13) The information processing terminal according to (12), wherein the audio receiving unit receives the audio data of a sound effect transmitted in response to the action of instructing output of the sound effect being performed by the specific participant.
- (14) The information processing terminal according to (13), wherein the audio receiving unit receives the audio data of the sound effect obtained by performing the sound image localization processing using the HRTF data corresponding to the relationship, in a virtual space, between the position of the user of the information processing terminal and the position of the specific participant who performed the action.
- (15) The information processing terminal according to (12), wherein the audio receiving unit receives the voice data of the specific participant transmitted in response to the action of selecting the user of the information processing terminal as the participant serving as the listening destination of the voice being performed by the specific participant.
- (16) The information processing terminal according to (15), wherein the audio receiving unit receives the voice data of the specific participant obtained by performing the sound image localization processing using the HRTF data corresponding to the position near the ear of the user of the information processing terminal in a virtual space.
- (17) The information processing terminal according to (12), wherein the audio receiving unit receives the voice data of a speaker to be focused on, transmitted in response to the action of selecting the speaker to be focused on being performed by the user of the information processing terminal as the specific participant.
- (18) The information processing terminal according to (17), wherein the audio receiving unit receives the voice data of the speaker to be focused on, obtained by performing the sound image localization processing using the HRTF data corresponding to a position near the position of the user of the information processing terminal in a virtual space.
- (19) An information processing method in which an information processing terminal receives the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position, and outputs the audio.
- (20) A program that causes a computer to execute processing of receiving the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position, and outputting the audio.
- 1 communication management server, 2A to 2D client terminals, 121 information processing unit, 131 audio receiving unit, 132 signal processing unit, 133 participant information management unit, 134 sound image localization processing unit, 135 HRTF data storage unit, 136 system audio management unit, 137 2ch mix processing unit, 138 audio transmission unit, 201 control unit, 211 information processing unit, 221 audio processing unit, 222 setting information transmission unit, 223 user situation recognition unit, 231 audio receiving unit, 233 microphone audio acquisition unit
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Telephonic Communication Services (AREA)
- Stereophonic System (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
1. Configuration of the Tele-communication system
2. Basic operation
3. Configuration of each device
4. Use cases of sound image localization
5. Modifications
FIG. 1 is a diagram showing a configuration example of a Tele-communication system according to an embodiment of the present technology.
Here, the basic operation flow of the communication management server 1 and the client terminal 2 will be described.
The basic processing of the communication management server 1 will be described with reference to the flowchart of FIG. 8.
The basic processing of the client terminal 2 will be described with reference to the flowchart of FIG. 9.
<Configuration of the communication management server 1>
FIG. 10 is a block diagram showing a hardware configuration example of the communication management server 1.
FIG. 13 is a block diagram showing a hardware configuration example of the client terminal 2.
Use cases of sound image localization for various types of audio, including speech uttered by conference participants, will be described.
The virtual reaction function is a function used to convey one's reaction to other users. For a remote conference realized by the communication management server 1, for example, an applause function is provided as a virtual reaction function. Outputting an applause sound effect using the applause function is instructed from a screen displayed as a GUI on the display 207 of the client terminal 2.
The whisper (耳打ち) function is a function for designating one user as the listening user and speaking to that user. The voice of the speaking user is delivered only to the designated user and not to the other users. Delivering voice to a single user using the whisper function is designated from a screen displayed as a GUI on the display 207 of the client terminal 2.
The focus function is a function for designating one user as a focus target and making that user's voice easier to hear. Whereas the whisper function described above is used by the speaking user, the focus function is used by the listening user. The user to be focused on is designated from a screen displayed as a GUI on the display 207 of the client terminal 2.
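As a minimal sketch of the delivery rule of the whisper (耳打ち) function described above, restricted delivery could be expressed as follows; the function and variable names are hypothetical, not from the patent.

```python
def route_whisper(voice_data, speaker, target, participants):
    """Deliver the speaking user's voice only to the designated listening
    user; every other participant receives nothing (None)."""
    return {
        p: (voice_data if p == target else None)
        for p in participants
        if p != speaker
    }
```

The focus function would instead deliver the focused speaker's voice to the requesting listener with localization parameters that bring its sound image closer.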
Whether sound image localization processing, which is object audio processing including rendering and the like, is performed on the communication management server 1 side or on the client terminal 2 side is switched dynamically.
Acoustic settings suitable for each background sound may be compiled into a database and managed by the communication management server 1. For example, a position suitable for localizing the sound image is set for each type of background sound, and HRTF data corresponding to the set position is saved. Parameters for other acoustic settings, such as reverb, may also be saved.
Although the conversation held by a plurality of users has been described as one in a remote conference, the technology described above is applicable to various types of conversation in which a plurality of people participate online, such as conversation over a meal or conversation at a lecture.
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, the programs constituting the software are installed on a computer embedded in dedicated hardware, a general-purpose personal computer, or the like.
The present technology can also take the following configurations.
(1)
An information processing device comprising:
a storage unit that stores HRTF data corresponding to a plurality of positions relative to a listening position; and
a sound image localization processing unit that provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
(2)
The information processing device according to (1), wherein the sound image localization processing unit provides the audio content for outputting a sound effect in response to the action of instructing output of the sound effect being performed by the specific participant.
(3)
The information processing device according to (2), wherein the sound image localization processing unit performs the sound image localization processing on the audio data of the sound effect, using the HRTF data corresponding to the relationship, in a virtual space, between the position of the participant serving as a listener and the position of the specific participant who performed the action.
(4)
The information processing device according to (1), wherein the sound image localization processing unit provides the audio content for outputting the voice of the specific participant in response to the action of selecting the participant serving as the listening destination of the voice being performed by the specific participant.
(5)
The information processing device according to (4), wherein the selection of the participant serving as the listening destination is performed using visual information, displayed on a screen, that visually represents the participant.
(6)
The information processing device according to (4) or (5), wherein the sound image localization processing unit performs the sound image localization processing on the voice data of the specific participant, using the HRTF data corresponding to the position near the ear of the participant serving as the listening destination in a virtual space.
(7)
The information processing device according to (1), wherein the sound image localization processing unit provides the audio content for outputting the voice of a speaker in response to the action of selecting the speaker to be focused on being performed by the specific participant.
(8)
The information processing device according to (7), wherein the selection of the speaker to be focused on is performed using visual information, displayed on a screen, that visually represents the participant.
(9)
The information processing device according to (7) or (8), wherein the sound image localization processing unit performs the sound image localization processing on the voice data of the speaker to be focused on, using the HRTF data corresponding to a position near the position of the specific participant in a virtual space.
(10)
An information processing method in which an information processing device:
stores HRTF data corresponding to a plurality of positions relative to a listening position; and
provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
(11)
A program that causes a computer to execute processing of:
storing HRTF data corresponding to a plurality of positions relative to a listening position; and
providing audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
(12)
An information processing terminal comprising an audio receiving unit that receives the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position, and that outputs the audio.
(13)
The information processing terminal according to (12), wherein the audio receiving unit receives the audio data of a sound effect transmitted in response to the action of instructing output of the sound effect being performed by the specific participant.
(14)
The information processing terminal according to (13), wherein the audio receiving unit receives the audio data of the sound effect obtained by performing the sound image localization processing using the HRTF data corresponding to the relationship, in a virtual space, between the position of the user of the information processing terminal and the position of the specific participant who performed the action.
(15)
The information processing terminal according to (12), wherein the audio receiving unit receives the voice data of the specific participant transmitted in response to the action of selecting the user of the information processing terminal as the participant serving as the listening destination of the voice being performed by the specific participant.
(16)
The information processing terminal according to (15), wherein the audio receiving unit receives the voice data of the specific participant obtained by performing the sound image localization processing using the HRTF data corresponding to the position near the ear of the user of the information processing terminal in a virtual space.
(17)
The information processing terminal according to (12), wherein the audio receiving unit receives the voice data of a speaker to be focused on, transmitted in response to the action of selecting the speaker to be focused on being performed by the user of the information processing terminal as the specific participant.
(18)
The information processing terminal according to (17), wherein the audio receiving unit receives the voice data of the speaker to be focused on, obtained by performing the sound image localization processing using the HRTF data corresponding to a position near the position of the user of the information processing terminal in a virtual space.
(19)
An information processing method in which an information processing terminal:
receives the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position; and
outputs the audio.
(20)
A program that causes a computer to execute processing of:
receiving the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position; and
outputting the audio.
Claims (20)
- An information processing device comprising: a storage unit that stores HRTF data corresponding to a plurality of positions relative to a listening position; and a sound image localization processing unit that provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
- The information processing device according to claim 1, wherein the sound image localization processing unit provides the audio content for outputting a sound effect in response to the action of instructing output of the sound effect being performed by the specific participant.
- The information processing device according to claim 2, wherein the sound image localization processing unit performs the sound image localization processing on the audio data of the sound effect, using the HRTF data corresponding to the relationship, in a virtual space, between the position of the participant serving as a listener and the position of the specific participant who performed the action.
- The information processing device according to claim 1, wherein the sound image localization processing unit provides the audio content for outputting the voice of the specific participant in response to the action of selecting the participant serving as the listening destination of the voice being performed by the specific participant.
- The information processing device according to claim 4, wherein the selection of the participant serving as the listening destination is performed using visual information, displayed on a screen, that visually represents the participant.
- The information processing device according to claim 4, wherein the sound image localization processing unit performs the sound image localization processing on the voice data of the specific participant, using the HRTF data corresponding to the position near the ear of the participant serving as the listening destination in a virtual space.
- The information processing device according to claim 1, wherein the sound image localization processing unit provides the audio content for outputting the voice of a speaker in response to the action of selecting the speaker to be focused on being performed by the specific participant.
- The information processing device according to claim 7, wherein the selection of the speaker to be focused on is performed using visual information, displayed on a screen, that visually represents the participant.
- The information processing device according to claim 7, wherein the sound image localization processing unit performs the sound image localization processing on the voice data of the speaker to be focused on, using the HRTF data corresponding to a position near the position of the specific participant in a virtual space.
- An information processing method in which an information processing device: stores HRTF data corresponding to a plurality of positions relative to a listening position; and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
- A program that causes a computer to execute processing of: storing HRTF data corresponding to a plurality of positions relative to a listening position; and providing audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action, so that a sound image is localized at a predetermined position.
- An information processing terminal comprising an audio receiving unit that receives the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position, and that outputs the audio.
- The information processing terminal according to claim 12, wherein the audio receiving unit receives the audio data of a sound effect transmitted in response to the action of instructing output of the sound effect being performed by the specific participant.
- The information processing terminal according to claim 13, wherein the audio receiving unit receives the audio data of the sound effect obtained by performing the sound image localization processing using the HRTF data corresponding to the relationship, in a virtual space, between the position of the user of the information processing terminal and the position of the specific participant who performed the action.
- The information processing terminal according to claim 12, wherein the audio receiving unit receives the voice data of the specific participant transmitted in response to the action of selecting the user of the information processing terminal as the participant serving as the listening destination of the voice being performed by the specific participant.
- The information processing terminal according to claim 15, wherein the audio receiving unit receives the voice data of the specific participant obtained by performing the sound image localization processing using the HRTF data corresponding to the position near the ear of the user of the information processing terminal in a virtual space.
- The information processing terminal according to claim 12, wherein the audio receiving unit receives the voice data of a speaker to be focused on, transmitted in response to the action of selecting the speaker to be focused on being performed by the user of the information processing terminal as the specific participant.
- The information processing terminal according to claim 17, wherein the audio receiving unit receives the voice data of the speaker to be focused on, obtained by performing the sound image localization processing using the HRTF data corresponding to a position near the position of the user of the information processing terminal in a virtual space.
- An information processing method in which an information processing terminal receives the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position, and outputs the audio.
- A program that causes a computer to execute processing of receiving the audio content, obtained by performing the sound image localization processing, transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions relative to a listening position and provides audio content selected according to an action by a specific participant among participants in a conversation joined via a network, by performing sound image localization processing using the HRTF data selected according to the action so that a sound image is localized at a predetermined position, and outputting the audio.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/024,461 US20240031758A1 (en) | 2020-09-10 | 2021-08-27 | Information processing apparatus, information processing terminal, information processing method, and program |
JP2022547498A JPWO2022054603A1 (ja) | 2020-09-10 | 2021-08-27 | |
DE112021004759.0T DE112021004759T5 (de) | 2020-09-10 | 2021-08-27 | Informationsverarbeitungsvorrichtung, informationsverarbeitungsendgerät, informationsverarbeitungsverfahren und programm |
CN202180054730.8A CN116057927A (zh) | 2020-09-10 | 2021-08-27 | 信息处理装置、信息处理终端、信息处理方法和程序 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020152417 | 2020-09-10 | ||
JP2020-152417 | 2020-09-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022054603A1 true WO2022054603A1 (ja) | 2022-03-17 |
Family
ID=80631619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/031450 WO2022054603A1 (ja) | 2020-09-10 | 2021-08-27 | 情報処理装置、情報処理端末、情報処理方法、およびプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240031758A1 (ja) |
JP (1) | JPWO2022054603A1 (ja) |
CN (1) | CN116057927A (ja) |
DE (1) | DE112021004759T5 (ja) |
WO (1) | WO2022054603A1 (ja) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006287878A (ja) * | 2005-04-05 | 2006-10-19 | Matsushita Electric Ind Co Ltd | 携帯電話端末 |
US20080144794A1 (en) * | 2006-12-14 | 2008-06-19 | Gardner William G | Spatial Audio Teleconferencing |
JP2014011509A (ja) * | 2012-06-27 | 2014-01-20 | Sharp Corp | 音声出力制御装置、音声出力制御方法、プログラム及び記録媒体 |
US20150373477A1 (en) * | 2014-06-23 | 2015-12-24 | Glen A. Norris | Sound Localization for an Electronic Call |
WO2017061218A1 (ja) * | 2015-10-09 | 2017-04-13 | ソニー株式会社 | 音響出力装置、音響生成方法及びプログラム |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11331992A (ja) | 1998-05-15 | 1999-11-30 | Sony Corp | デジタル処理回路と、これを使用したヘッドホン装置およびスピーカ装置 |
2021
- 2021-08-27 DE DE112021004759.0T patent/DE112021004759T5/de active Pending
- 2021-08-27 US US18/024,461 patent/US20240031758A1/en active Pending
- 2021-08-27 JP JP2022547498A patent/JPWO2022054603A1/ja active Pending
- 2021-08-27 WO PCT/JP2021/031450 patent/WO2022054603A1/ja active Application Filing
- 2021-08-27 CN CN202180054730.8A patent/CN116057927A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
CN116057927A (zh) | 2023-05-02 |
DE112021004759T5 (de) | 2023-08-10 |
US20240031758A1 (en) | 2024-01-25 |
JPWO2022054603A1 (ja) | 2022-03-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3627860B1 (en) | Audio conferencing using a distributed array of smartphones | |
US11758329B2 (en) | Audio mixing based upon playing device location | |
EP3410740B1 (en) | Spatially ducking audio produced through a beamforming loudspeaker array | |
US10491643B2 (en) | Intelligent augmented audio conference calling using headphones | |
JP3321178B2 (ja) | 音声会議システム中に空間音声環境を作る装置と方法 | |
KR102035477B1 (ko) | 카메라 선택에 기초한 오디오 처리 | |
US9197755B2 (en) | Multidimensional virtual learning audio programming system and method | |
US20150189457A1 (en) | Interactive positioning of perceived audio sources in a transformed reproduced sound field including modified reproductions of multiple sound fields | |
US20110058662A1 (en) | Method and system for aurally positioning voice signals in a contact center environment | |
CN110035250A (zh) | 音频处理方法、处理设备、终端及计算机可读存储介质 | |
CN111492342B (zh) | 音频场景处理 | |
WO2022054900A1 (ja) | 情報処理装置、情報処理端末、情報処理方法、およびプログラム | |
WO2022054899A1 (ja) | 情報処理装置、情報処理端末、情報処理方法、およびプログラム | |
EP1657961A1 (en) | A spatial audio processing method, a program product, an electronic device and a system | |
JP2006279492A (ja) | 電話会議システム | |
WO2022054603A1 (ja) | 情報処理装置、情報処理端末、情報処理方法、およびプログラム | |
WO2018198790A1 (ja) | コミュニケーション装置、コミュニケーション方法、プログラム、およびテレプレゼンスシステム | |
JP2006094315A (ja) | 立体音響再生システム | |
EP3588988B1 (en) | Selective presentation of ambient audio content for spatial audio presentation | |
WO2023286320A1 (ja) | 情報処理装置および方法、並びにプログラム | |
US12028178B2 (en) | Conferencing session facilitation systems and methods using virtual assistant systems and artificial intelligence algorithms | |
JP7472091B2 (ja) | オンライン通話管理装置及びオンライン通話管理プログラム | |
EP3588986A1 (en) | An apparatus and associated methods for presentation of audio | |
Albrecht et al. | Continuous Mobile Communication with Acoustic Co-Location Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21866562 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022547498 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18024461 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21866562 Country of ref document: EP Kind code of ref document: A1 |