WO2016084736A1 - Information processing apparatus, information processing system, control method, and program - Google Patents
- Publication number
- WO2016084736A1 (PCT/JP2015/082678, priority JP2015082678W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- reflection surface
- information
- reflection
- sound
- user
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/40—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
- H04R1/403—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2201/00—Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
- H04R2201/02—Details casings, cabinets or mounting therein for transducers covered by H04R1/02 but not provided for in any of its subgroups
- H04R2201/025—Transducer mountings or cabinet supports enabling variable orientation of transducer of cabinet
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2203/00—Details of circuits for transducers, loudspeakers or microphones covered by H04R3/00 but not provided for in any of its subgroups
- H04R2203/12—Beamforming aspects for stereophonic sound reproduction with loudspeaker arrays
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/13—Aspects of volume control, not necessarily automatic, in stereophonic sound systems
Definitions
- the present invention relates to an information processing apparatus, an information processing system, a control method, and a program.
- there are directional speakers that output directional sound audible only in a specific direction, or that make a user feel as if sound is being emitted from a reflective surface by reflecting the directional sound off that surface.
- the reflection characteristics vary depending on the material and orientation of the reflective surface, so even if the same sound is output, sound characteristics such as volume and frequency change depending on the reflective surface.
- reflection characteristics corresponding to the material and orientation of the reflecting surface have not been considered.
- the present invention has been made in view of the above problems, and one of its purposes is to provide an information processing apparatus that controls the output of directional sound in accordance with the reflection characteristics of the reflecting surface.
- An information processing apparatus according to the present invention includes a reflection surface determination unit that determines a reflection surface on which sound is to be reflected, a reflection surface information acquisition unit that acquires reflection surface information indicating the reflection characteristics of the determined reflection surface, and an output control unit that outputs directional sound corresponding to the acquired reflection surface information toward the determined reflection surface.
- the reflection surface information acquisition unit may acquire the reflectance of the reflection surface as the reflection surface information.
- the output control unit may determine the output amount of the directional sound according to the acquired reflectance.
- the reflection surface information acquisition unit may acquire an incident angle at which the directional sound is incident on the reflection surface as the reflection surface information.
- the output control unit may determine the output amount of the directional sound according to the acquired incident angle.
- the reflection surface information acquisition unit may acquire, as the reflection surface information, the reach distance over which the directional sound travels until it is reflected by the reflection surface and reaches the user.
- the output control unit may determine the output amount of the directional sound according to the acquired reach distance.
- the information processing apparatus may further include a reflection surface selection unit that, when the reflection surface information acquisition unit acquires the reflection surface information of each of a plurality of candidate reflection surfaces, selects from among the plurality of candidate reflection surfaces the candidate reflection surface whose reflection surface information indicates the best reflection characteristics.
- the reflection surface information acquisition unit may acquire the reflection surface information based on feature information of an image of the reflection surface captured by a camera.
- An information processing system according to the present invention includes a directional speaker that causes an omnidirectional sound, generated by reflecting a directional sound off a predetermined reflecting surface, to reach a user; a reflection surface determination unit that determines the reflection surface on which the directional sound is to be reflected; a reflection surface information acquisition unit that acquires reflection surface information indicating the reflection characteristics of the determined reflection surface; and an output control unit that outputs directional sound corresponding to the acquired reflection surface information from the directional speaker toward the determined reflection surface.
- A control method according to the present invention includes a reflection surface determination step of determining a reflection surface on which sound is to be reflected, a reflection surface information acquisition step of acquiring reflection surface information indicating the reflection characteristics of the determined reflection surface, and an output control step of outputting directional sound corresponding to the acquired reflection surface information toward the determined reflection surface.
- A program according to the present invention causes a computer to function as a reflection surface determination unit that determines a reflection surface on which sound is to be reflected, a reflection surface information acquisition unit that acquires reflection surface information indicating the reflection characteristics of the determined reflection surface, and an output control unit that outputs directional sound corresponding to the acquired reflection surface information toward the determined reflection surface.
- This program may be stored in a computer-readable information storage medium.
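The output-amount determinations described in the claims above (by reflectance, by incident angle, and by reach distance) can be combined into a single gain computation. The following is an illustrative sketch only, not the patented method: the reference values, the assumed reference reflectance, and the sine-based incident-angle correction are all assumptions introduced for this example.

```python
import math

# Hypothetical reference conditions (the embodiment cites Dm = 4 m and
# an incident angle of 45 degrees; the reference reflectance is assumed).
REF_DISTANCE_M = 4.0
REF_ANGLE_DEG = 45.0
REF_REFLECTANCE = 0.8

def output_gain(reflectance, incident_angle_deg, reach_distance_m):
    """Scale the speaker output so the sound heard after reflection
    matches the level expected under the reference reflecting surface."""
    # A weaker reflecting surface requires proportionally more output.
    gain = REF_REFLECTANCE / reflectance
    # Sound pressure falls off with distance; compensate relative to Dm.
    gain *= reach_distance_m / REF_DISTANCE_M
    # Assumed model: reflection efficiency improves with incident angle.
    gain *= math.sin(math.radians(REF_ANGLE_DEG)) / math.sin(
        math.radians(incident_angle_deg))
    return gain

# Under exactly the reference conditions, no correction is needed.
print(round(output_gain(0.8, 45.0, 4.0), 6))  # -> 1.0
```

A surface with half the reflectance, or one farther from the user, simply yields a gain above 1.0, which is the behavior the claims describe qualitatively.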
- FIG. 1 is a diagram showing a hardware configuration of an entertainment system (sound output system) 10 according to an embodiment of the present invention.
- the entertainment system 10 includes a control unit 11, a main memory 20, an image processing unit 24, a monitor 26, an input / output processing unit 28, an audio processing unit 30, and a directional speaker 32.
- the entertainment system 10 further includes an optical disk reading unit 34, an optical disk 36, a hard disk 38, interfaces (I/F) 40 and 44, a controller 42, and a network interface (I/F) 48.
- the control unit 11 includes, for example, a CPU, an MPU, and a GPU (Graphical Processing Unit), and executes various processes according to a program stored in the main memory 20. A specific example of processing executed by the control unit 11 in the present embodiment will be described later.
- the main memory 20 includes memory elements such as RAM and ROM. Programs and data read from the optical disk 36 and the hard disk 38 and programs and data supplied from the network via the network interface 48 are written in the main memory 20 as necessary.
- the main memory 20 also operates as a work memory for the control unit 11.
- the image processing unit 24 includes a GPU and a frame buffer.
- the GPU renders various screens in the frame buffer based on the image data supplied from the control unit 11.
- the screen formed in the frame buffer is converted into a video signal at a predetermined timing and output to the monitor 26.
- as the monitor 26, for example, a home television receiver is used.
- the audio processing unit 30, the optical disk reading unit 34, the hard disk 38, the interfaces 40 and 44, and the network interface 48 are connected to the input / output processing unit 28.
- the input / output processing unit 28 controls data exchange between the control unit 11, the audio processing unit 30, the optical disk reading unit 34, the hard disk 38, the interfaces 40 and 44, and the network interface 48.
- the audio processing unit 30 includes an SPU (Sound Processing Unit) and a sound buffer.
- the sound buffer stores various audio data such as game music, game sound effects and messages read from the optical disk 36 and the hard disk 38.
- the SPU reproduces these various audio data and outputs them from the directional speaker 32.
- the control unit 11 may reproduce various sound data and output the sound data from the directional speaker 32. That is, reproduction output of various audio data from the directional speaker 32 may be realized by software processing executed by the control unit 11.
- the directional speaker 32 is a parametric speaker, for example, and outputs directional sound.
- An actuator for moving the directional speaker 32 is connected to the directional speaker 32, and a motor driver 33 is connected to the actuator.
- the motor driver 33 performs drive control of the actuator.
- FIG. 2 is a diagram schematically illustrating an example of the structure of the directional speaker 32.
- the directional speaker 32 is configured by arranging a plurality of ultrasonic sounding bodies 32b on a substrate 32a; the ultrasonic waves output from the ultrasonic sounding bodies 32b are superimposed in the air so that they become audible sound.
- the directional speaker 32 is rotated about the x-axis and the y-axis by the actuator driven by the motor driver 33. The direction of the directional sound output from the directional speaker 32 can thereby be adjusted arbitrarily, and by reflecting the directional sound at an arbitrary place, the user can be made to feel as if the sound is generated from that place.
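As a rough illustration of the actuator-driven aiming just described, the two rotations needed to point the speaker at a chosen reflection point can be computed from the speaker and target positions. The axis convention (+z forward, +x right, +y up) and the function name are assumptions for this sketch, not details from the patent.

```python
import math

def aim_angles(speaker_pos, target_pos):
    """Pan (rotation about the y-axis) and tilt (rotation about the
    x-axis), in degrees, needed to point the speaker at target_pos."""
    dx = target_pos[0] - speaker_pos[0]
    dy = target_pos[1] - speaker_pos[1]
    dz = target_pos[2] - speaker_pos[2]
    pan = math.degrees(math.atan2(dx, dz))                    # about y-axis
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # about x-axis
    return pan, tilt

pan, tilt = aim_angles((0, 0, 0), (1, 0, 1))
print(round(pan, 3), round(tilt, 3))  # -> 45.0 0.0
```

The motor driver 33 would then drive the actuator to these two angles; clamping to the actuator's mechanical range is omitted here.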
- the optical disc reading unit 34 reads programs and data stored in the optical disc 36 in accordance with instructions from the control unit 11.
- the optical disc 36 is a general optical disc (computer-readable information storage medium) such as a DVD-ROM.
- the hard disk 38 is a general hard disk device.
- Various programs and data are stored in the optical disk 36 and the hard disk 38 so as to be readable by a computer.
- the entertainment system 10 may be configured to be able to read programs and data stored in information storage media other than the optical disk 36 and the hard disk 38.
- Interfaces (I / F) 40 and 44 are interfaces for connecting various peripheral devices such as the controller 42 and the camera unit 46.
- for example, a USB (Universal Serial Bus) interface is used as the interfaces 40 and 44. Alternatively, a wireless communication interface such as a Bluetooth (registered trademark) interface may be used.
- the controller 42 is a general-purpose operation input means, and is used for a user to input various operations (for example, game operations).
- the input / output processing unit 28 scans the state of each unit of the controller 42 every predetermined time (for example, 1/60 seconds), and supplies an operation signal representing the result to the control unit 11.
- the control unit 11 determines the content of the operation performed by the user based on the operation signal.
- the entertainment system 10 is configured to be able to connect a plurality of controllers 42, and the control unit 11 executes various processes based on operation signals input from the controllers 42.
- the camera unit 46 includes, for example, a known digital camera, and inputs black and white, gray scale, or color photographed images every predetermined time (for example, 1/60 seconds).
- the camera unit 46 in this embodiment is configured to input a captured image as image data in JPEG (Joint Photographic Experts Group) format.
- the camera unit 46 is connected to the interface 44 via a cable.
- the network interface 48 is connected to the input / output processing unit 28 and a communication network, and relays data communication between the entertainment system 10 and other entertainment systems 10 via the communication network.
- FIG. 3 is an overall schematic diagram showing a usage scene of the entertainment system 10 according to the present embodiment.
- the entertainment system 10 is used by a user in a private room that is surrounded by walls on all sides and in which various pieces of furniture are arranged.
- the directional speaker 32 is installed on the monitor 26 so as to output a directional sound toward an arbitrary place in the room.
- the camera unit 46 is also installed on the monitor 26 so that the entire room can be photographed.
- the monitor 26, the directional speaker 32, and the camera unit 46 are connected to an information processing apparatus 50 such as a home game machine.
- the entertainment system 10 controls the directional speaker 32 so as to generate sound effects from predetermined locations in accordance with the game image displayed on the monitor 26 and the progress of the game, thereby providing the user with an immersive game environment. Specifically, for example, when an explosion occurs behind the user character in the game, an audible sound can be made to be heard from behind the actual user by reflecting directional sound off the wall behind the user.
- to this end, the entertainment system 10 is configured so that the output of the directional speaker 32 can be controlled in accordance with the material and orientation of the reflecting surface that reflects the directional sound.
- the present embodiment can also be applied to a case where a moving image such as a movie is viewed or a case where only a sound such as a radio is heard.
- FIG. 4 is a functional block diagram showing an example of main functions executed by the entertainment system 10 according to the first embodiment.
- the entertainment system 10 functionally includes, for example, an audio information storage unit 54, a material feature information storage unit 52, a room image analysis unit 60, and an output control unit 70.
- the room image analysis unit 60 and the output control unit 70 are realized by the control unit 11 executing, for example, a program read from the optical disc 36 or the hard disk 38, or a program supplied from the network via the network interface 48.
- the audio information storage unit 54 and the material feature information storage unit 52 are realized by the optical disc 36 or the hard disk 38, for example.
- audio information, in which sound data such as game sound effects is associated with control parameter data for outputting each piece of sound data (referred to as sound output control parameter data), is stored in the audio information storage unit 54 in advance.
- the audio data is waveform data indicating a waveform of an audio signal created assuming that the audio data is output from the directional speaker 32.
- the sound output control parameter data is a control parameter created assuming that sound data is output from the directional speaker 32.
- FIG. 5 is a diagram illustrating an example of audio information. As shown in FIG. 5, the audio information is managed by associating an audio signal with an output condition for each piece of audio data.
- the volume and frequency (pitch) of each audio signal are determined by its waveform data, and each audio signal of this embodiment is defined on the assumption that it is reflected by a reflecting surface having reference reflection characteristics. Specifically, a reflecting surface is treated as having the reference reflection characteristics when the reach distance from the directional speaker, via the reflecting surface, to the user is the reference reach distance Dm (for example, 4 m), the material of the reflecting surface is the reference material M, and the incident angle is θ (for example, 45°).
- the output condition is information indicating the timing of outputting the sound data and the sound generation location where the sound is generated.
- the output condition is information indicating the sound generation location relative to the user character in the game; for example, it indicates a direction and location as viewed from the user character, such as the right side or the front side. Based on this output condition, the direction of the directional sound output from the directional speaker 32 is determined. Note that no output condition is associated with audio data whose output location is not determined in advance; such audio data is given an output condition according to the game situation and user operation.
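The audio-information table of FIG. 5 can be sketched as a simple mapping from each sound to its waveform and output condition. The entry names, file names, and field layout below are hypothetical, introduced only to illustrate the association the embodiment describes.

```python
# Hypothetical audio information table (cf. FIG. 5): each sound maps to
# its waveform data and an output condition (timing plus a sound
# generation location relative to the user character).
AUDIO_INFO = {
    "explosion": {"waveform": "explosion.wav",
                  "output_condition": {"timing": "on_event",
                                       "location": "rear"}},
    "footstep":  {"waveform": "footstep.wav",
                  "output_condition": {"timing": "on_event",
                                       "location": "front_lower"}},
}

def sound_location(sound_name):
    """Return the predetermined sound generation location, or None for
    audio data whose location is assigned at runtime."""
    info = AUDIO_INFO.get(sound_name)
    return info["output_condition"]["location"] if info else None

print(sound_location("explosion"))  # -> rear
```

A sound absent from the table would receive its output condition according to the game situation, as the paragraph above notes.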
- the material feature information indicating the relationship between the material of the representative surface, the feature information of the surface, and the reflectance of the sound is stored in the material feature information storage unit 52 in advance.
- FIG. 6 is a diagram illustrating an example of the material feature information.
- the material feature information manages, for each material, the material name (such as wood, metal, or glass), material feature information that is feature information obtained from an image taken by the camera, and the sound reflectance, in association with one another.
- the feature information obtained from the image is, for example, a distribution of color components (for example, color components in a color space such as RGB or VBr), a saturation distribution, or a brightness distribution included in the image; any one of these, or any combination of two or more, may be used.
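A color-distribution feature of the kind just described can be sketched as a coarse per-channel histogram over an image patch. This is one illustrative choice of feature, assumed for this example; the patent does not prescribe the histogram form or bin count.

```python
def color_features(pixels, bins=4):
    """Coarse RGB color-distribution feature for a surface patch.
    `pixels` is a list of (r, g, b) tuples with values in 0..255;
    returns a normalized histogram (bins per channel, concatenated)
    usable for matching against stored material feature information."""
    hist = [0.0] * (bins * 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[r // step] += 1            # red channel bins
        hist[bins + g // step] += 1     # green channel bins
        hist[2 * bins + b // step] += 1 # blue channel bins
    total = len(pixels)
    return [h / total for h in hist]

# A patch of pure red pixels concentrates all mass in the top red bin.
print(color_features([(255, 0, 0)] * 10)[3])  # -> 1.0
```

Saturation and brightness distributions could be added as further histogram blocks in the same vector, matching the "any combination of two or more" wording above.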
- the room image analysis unit 60 analyzes the room image taken by the camera unit 46.
- the room image analysis unit 60 is realized mainly by the control unit 11, and includes a room image acquisition unit 62, a user position specifying unit 64, and a candidate reflection surface selection unit 66.
- the room image acquisition unit 62 acquires a room image taken by the camera unit 46 in response to the room image acquisition request.
- the room image acquisition request is transmitted, for example, at the start of the game or at a predetermined timing according to the game situation.
- alternatively, the room images generated by the camera unit 46 every predetermined time (for example, 1/60 seconds) may be stored in the main memory 20, and the room image stored in the main memory 20 may be acquired in response to the room image acquisition request.
- the user position specifying unit 64 specifies the position of the user in the room by performing image analysis on the room image acquired by the room image acquiring unit 62 (referred to as an acquired room image).
- the user position specifying unit 64 detects the face image of the user in the room from the acquired room image using a known face recognition technique.
- the user position specifying unit 64 may detect facial parts such as eyes, nose, and mouth, and may detect the face based on the positions of these parts.
- the user position specifying unit 64 may detect a face using skin color information. Further, the user position specifying unit 64 may detect the face using other detection methods.
- the user position specifying unit 64 specifies the position of the face image detected in this way as the user position.
- the user position specifying unit 64 stores user position information in which user feature information, which is feature information obtained from the user's face image, and position information indicating the specified user position are associated with each other in the user position information storage unit.
- the position information indicating the position may be information indicating a distance from the imaging device (for example, a distance from the imaging device to the user's face image), or may be a coordinate value in a three-dimensional space.
- FIG. 7 is a diagram illustrating an example of user position information. As shown in FIG. 7, the user position information is obtained by associating a user ID given to each identified user, user feature information obtained from the identified user's face image, and position information indicating the user's position. Managed.
- the user position specifying unit 64 may detect the controller 42 held by the user and specify the detected position of the controller 42 as the position of the user.
- the user position specifying unit 64 detects light emitted from the light emitting unit of the controller 42 in the acquired room image, and specifies the detected light position as the user position.
- a plurality of users may be distinguished by the difference in the color of the light emitted from the light emitting units of their controllers 42.
- the candidate reflection surface selection unit 66 reflects the directional sound output from the directional speaker 32 based on the acquired room image and the user position information stored in the user position information storage unit. Select (candidate reflection surface).
- the reflecting surface that reflects the directional sound may have a size of 6 to 9 cm square, and may be a part of a surface such as a wall, a desk, a chair, a bookshelf, or a user's body.
- the candidate reflection surface selection unit 66 divides the room space into a plurality of divided regions according to the sound generation location where the sound is generated.
- the sound generation location corresponds to the output condition included in the audio information stored in the audio information storage unit 54, and is determined based on the user character in the game.
- the candidate reflection surface selection unit 66 divides the room space into a plurality of divided regions corresponding to the sound generation locations with reference to the user position indicated by the user position information stored in the user position information storage unit.
- FIG. 8 is a diagram illustrating an example of a segmented area.
- eight types of sound generation locations are prepared based on the user character in the game: lower right front, lower left front, upper left front, upper right front, lower right rear, lower left rear, upper left rear, and upper right rear.
- the room space is divided into eight divided areas (partition area IDs: 1 to 8) based on the actual user position.
- the divided areas are: divided area 1, located at the user's lower right front; divided area 2, at the lower left front; divided area 3, at the upper left front; divided area 4, at the upper right front; divided area 5, at the lower right rear; divided area 6, at the lower left rear; divided area 7, at the upper left rear; and divided area 8, at the upper right rear.
- the divided area information in which the divided areas generated by dividing the room space and the sound generation locations are associated with each other is stored in the divided area information storage unit.
- FIG. 9 is a diagram illustrating an example of the segment area information. As shown in FIG. 9, the segmented area information is managed in association with the segmented area ID and the sound generation location. Note that the partitioned areas shown in FIG. 8 are merely examples, and the room space may be partitioned so that a partitioned area corresponding to a sound generation location determined according to the type of game is generated, for example.
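Classifying a room point into one of the eight divided areas of FIG. 8 reduces to three sign tests relative to the user position. The axis convention (+z front, +x right, +y up) is an assumption of this sketch; the region IDs follow the description of FIG. 8 above.

```python
def divided_region(user_pos, point):
    """Map a room point to the divided area ID (1-8) of FIG. 8,
    keyed by front/rear, right/left, and upper/lower relative to the
    user. Axes assumed: +z front, +x right, +y up."""
    dx = point[0] - user_pos[0]
    dy = point[1] - user_pos[1]
    dz = point[2] - user_pos[2]
    table = {
        (True,  True,  False): 1,  # front, right, lower
        (True,  False, False): 2,  # front, left,  lower
        (True,  False, True):  3,  # front, left,  upper
        (True,  True,  True):  4,  # front, right, upper
        (False, True,  False): 5,  # rear,  right, lower
        (False, False, False): 6,  # rear,  left,  lower
        (False, False, True):  7,  # rear,  left,  upper
        (False, True,  True):  8,  # rear,  right, upper
    }
    return table[(dz >= 0, dx >= 0, dy >= 0)]

print(divided_region((0, 0, 0), (1, -1, 1)))  # -> 1
```

As the paragraph above notes, a game could divide the room differently; only the lookup table would change.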
- the candidate reflection surface selection unit 66 selects, as the candidate reflection surface, an optimum surface for reflecting the sound from the surface in the divided region for each divided region.
- the optimal surface for reflecting sound is a surface having excellent reflection characteristics, for example, a surface composed of a material and a color having high reflectivity.
- the candidate reflection surface selection unit 66 extracts a surface that can be a candidate reflection surface in the divided area from the acquired room image, and acquires feature information of the extracted surface (referred to as an extraction reflection surface).
- the plurality of extraction reflection surfaces in the divided region are the candidates from which the candidate reflection surface is selected.
- the candidate reflection surface selection unit 66 selects, from among the plurality of extraction reflection surfaces in a divided area, the extraction reflection surface with the best reflection characteristics as the candidate reflection surface.
- to select the extraction reflection surface having the best reflection characteristics as the candidate reflection surface, the candidate reflection surface selection unit 66 may compare the reflectances of the extraction reflection surfaces.
- the candidate reflection surface selection unit 66 refers to the material feature information stored in the material feature information storage unit 52 and estimates the material / reflectance of the extraction reflection surface from the feature information of the extraction reflection surface.
- the candidate reflection surface selection unit 66 estimates the material / reflectance of the extraction reflection surface from the feature information of the extraction reflection surface using, for example, a known pattern matching technique, but other methods may be used.
- the candidate reflection surface selection unit 66 matches the feature information of the extracted reflection surface against the material feature information stored in the material feature information storage unit 52, and adopts the material and reflectance of the material feature information with the highest degree of matching.
- the candidate reflection surface selection unit 66 estimates the material and reflectance of each extraction reflection surface from its feature information, and selects the extraction reflection surface with the highest reflectance among the plurality of extraction reflection surfaces in the divided area as the candidate reflection surface. By executing this process for each divided area, a candidate reflection surface is selected for every divided area.
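The matching step above can be sketched as a nearest-neighbor lookup over stored feature vectors. The L1 distance used here is a stand-in for whatever "known pattern matching technique" an implementation chooses, and the database values are hypothetical (cf. FIG. 6).

```python
def match_material(surface_features, material_db):
    """Estimate material and reflectance by finding the stored material
    feature information with the highest degree of matching, here taken
    as the smallest L1 distance between feature vectors."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    name = min(material_db,
               key=lambda m: distance(surface_features,
                                      material_db[m]["features"]))
    return name, material_db[name]["reflectance"]

# Hypothetical material feature database; numbers are illustrative.
DB = {
    "wood":  {"features": [0.8, 0.5, 0.2], "reflectance": 0.6},
    "metal": {"features": [0.5, 0.5, 0.5], "reflectance": 0.9},
    "glass": {"features": [0.3, 0.3, 0.9], "reflectance": 0.8},
}
print(match_material([0.75, 0.5, 0.25], DB))  # -> ('wood', 0.6)
```

Selecting the candidate reflection surface per divided area is then just a max over the estimated reflectances of its extraction reflection surfaces.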
- the method for estimating the reflectance of the extraction reflecting surface is not limited to the above-described method.
- the directional speaker 32 may actually output sound to the extraction reflection surface and collect the reflected sound reflected by the extraction reflection surface with a microphone to measure the reflectance of the extraction reflection surface.
- the reflectance of the light may be measured by outputting light to the extraction reflection surface and detecting the reflected light reflected by the extraction reflection surface. Then, the light reflectance may be used for selection of the candidate reflection surface instead of the sound reflectance, or the sound reflectance may be estimated from the light reflectance.
- when selecting the extraction reflection surface having the best reflection characteristics as the candidate reflection surface, the candidate reflection surface selection unit 66 may instead compare the incident angles at which the directional sound output from the directional speaker 32 strikes each extraction reflection surface. This exploits the characteristic that reflection efficiency improves as the incident angle increases.
- in this case, the candidate reflection surface selection unit 66 calculates, based on the acquired room image, the incident angle at which a straight line extending from the directional speaker 32 strikes each of the plurality of extraction reflection surfaces, and selects the extraction reflection surface with the largest incident angle as the candidate reflection surface.
- when selecting the extracted reflection surface with the best reflection characteristics as the candidate reflection surface, the candidate reflection surface selection unit 66 may instead compare the reach distance of the sound, that is, the sum of the straight-line distance from the directional speaker 32 to the extracted reflection surface and the straight-line distance from that surface to the user. This is based on the idea that the sound output from the directional speaker 32 reaches the user after being reflected by the reflection surface, and the shorter the distance the sound travels, the easier it is for the user to hear. In this case, the candidate reflection surface selection unit 66 calculates the reach distance for each of the plurality of extracted reflection surfaces based on the acquired room image, and selects the extracted reflection surface with the shortest reach distance as the candidate reflection surface.
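The reach-distance criterion can be sketched in a few lines. This is an assumption-laden illustration (names are hypothetical), treating the speaker, each candidate reflection point, and the user as coordinates derived from the room image:

```python
import math

def reach_distance(speaker, surface, user):
    """Total sound path: speaker -> reflection point -> user (straight lines)."""
    return math.dist(speaker, surface) + math.dist(surface, user)

def pick_shortest(speaker, user, surfaces):
    """Select the extracted reflection surface with the shortest reach distance."""
    return min(surfaces, key=lambda s: reach_distance(speaker, s, user))
```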
- the candidate reflection surface information indicating the candidate reflection surface selected by the candidate reflection surface selection unit 66 as described above is stored in the candidate reflection surface information storage unit.
- FIG. 10 is a diagram illustrating an example of candidate reflection surface information.
- the candidate reflection surface information manages, for each divided region and in association with one another, a divided region ID indicating the divided region, position information indicating the position of the candidate reflection surface, the reach distance the sound output from the directional speaker 32 travels via the reflection surface to reach the user, the reflectance of the candidate reflection surface, and the incident angle of the directional sound on the candidate reflection surface.
- when the candidate reflection surface selection unit 66 selects the extracted reflection surface with the best reflection characteristics as the candidate reflection surface, it may also arbitrarily combine any two or more of the reflectance of the extracted reflection surface, its incident angle, and the reach distance described above to select a surface with excellent reflection characteristics.
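One simple way to combine the three characteristics is a weighted score. The weights and normalization below are purely illustrative assumptions, not values from the patent:

```python
def reflection_score(surface, w_refl=1.0, w_angle=0.5, w_dist=0.5,
                     max_dist=10.0, max_angle=90.0):
    """Higher is better: reflectance and incident angle contribute positively
    (the text treats a larger incident angle as more efficient), while a
    longer reach distance lowers the score."""
    return (w_refl * surface["reflectance"]
            + w_angle * surface["incident_angle"] / max_angle
            + w_dist * (1.0 - min(surface["reach_distance"], max_dist) / max_dist))

candidates = [
    {"reflectance": 0.9, "incident_angle": 30.0, "reach_distance": 3.0},
    {"reflectance": 0.5, "incident_angle": 60.0, "reach_distance": 8.0},
]
best = max(candidates, key=reflection_score)
```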
- the room image acquisition unit 62 acquires a room image taken by the camera unit 46 in response to a room image acquisition request (S1).
- the user position specifying unit 64 specifies the position of the user from the room image acquired by the room image acquisition unit 62 (S2).
- the candidate reflecting surface selection unit 66 divides the room space into a plurality of divided areas based on the acquired room image (S3).
- the room space is divided into k divided areas, and numbers 1 to k are assigned to the divided areas as the divided area IDs.
- the candidate reflection surface selection unit 66 selects a candidate reflection surface for each of the 1 to k segment areas.
- the variable i is a variable indicating the segmented area ID, and is a counter variable taking an integer value from 1 to k.
- the candidate reflection surface selection unit 66 extracts an extraction reflection surface that can be a reflection surface from the segmented region 1 based on the acquired room image, and acquires feature information of the extraction reflection surface (S5).
- the candidate reflection surface selection unit 66 collates the feature information of the extracted reflection surfaces acquired in step S5 with the material feature information stored in the material feature information storage unit 52 (S6), and estimates the reflectance of each extracted reflection surface. The candidate reflection surface selection unit 66 then selects, from among the plurality of extracted reflection surfaces, the one with the highest reflectance as the candidate reflection surface of divided region 1 (S7).
- the reflection characteristics of the candidate reflection surface selected by the candidate reflection surface selection unit 66 are stored in the candidate reflection surface information storage unit as candidate reflection surface information (S8).
- the reflection characteristics include the reflectance of the candidate reflection surface, the incident angle at which the sound output from the directional speaker strikes the candidate reflection surface, the reach distance the sound output from the directional speaker travels via the candidate reflection surface to reach the user, and the like.
- the reflectance included in the candidate reflection surface information may be a reflectance estimated from the material feature information stored in the material feature information storage unit 52, or a reflectance measured by actually outputting audio data from the directional speaker to the candidate reflection surface and collecting the reflected sound. The incident angle and the reach distance included in the candidate reflection surface information are assumed to be calculated based on the acquired room image. These reflection characteristics are stored in association with the divided region ID indicating the divided region and the position information indicating the position of the candidate reflection surface.
- the room image analysis process then ends, with candidate reflection surface information for the k candidate reflection surfaces corresponding to divided regions 1 to k, as shown in FIG. 10, stored in the candidate reflection surface information storage unit.
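Steps S3 to S8 amount to a per-region maximization. The sketch below assumes hypothetical callables standing in for the image-analysis routines the patent leaves unspecified:

```python
def analyze_room(room_image, k, extract_surfaces, estimate_reflectance):
    """Sketch of steps S3-S8: divide the room into k regions and, in each
    region, keep the extracted reflection surface with the best estimated
    reflectance as that region's candidate reflection surface."""
    candidate_info = {}
    for region_id in range(1, k + 1):                       # S4: i = 1..k
        surfaces = extract_surfaces(room_image, region_id)  # S5: extract candidates
        if not surfaces:
            continue                                        # no usable surface here
        best = max(surfaces, key=estimate_reflectance)      # S6-S7: estimate, pick best
        candidate_info[region_id] = best                    # S8: store as candidate info
    return candidate_info
```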
- the room image analysis process described above may be executed when the game is started, or periodically while the game is running. By executing the room image analysis process periodically while the game is running, an appropriate sound can be output in accordance with the user's movement even when the user moves around the room during the game.
- the output control unit 70 controls the motor driver 33 to control the direction of the directional speaker 32 and causes the directional speaker 32 to output predetermined audio data.
- the output control unit 70 is realized mainly by the control unit 11 and the audio processing unit 30, and includes an audio information acquisition unit 72, a reflection surface determination unit 74, a reflection surface information acquisition unit 76, and an output amount determination unit 78.
- the output control unit 70 controls the audio output from the directional speaker 32 based on the information on the determined reflection surface acquired by the reflection surface information acquisition unit 76 and the audio information acquired by the audio information acquisition unit 72. Specifically, the output control unit 70 changes the audio data included in the audio information based on the information on the determined reflection surface so that audio data corresponding to that information is output from the directional speaker 32. Here, the output control unit 70 changes the audio data so as to compensate for the change in audio characteristics caused by the difference between the reflection characteristics of the determined reflection surface and the reference reflection characteristics.
- the audio data included in the audio information is generated on the assumption that the sound is reflected by a reflection surface having the reference reflection characteristics, so that sound with the intended characteristics reaches the user.
- otherwise, sound that differs from the intended characteristics may reach the user and make the user feel uncomfortable.
- in such a case, the output control unit 70 increases the volume of the audio data included in the acquired audio information.
- the output amount determination unit 78 determines the output amount, or the amount of change in output, of the audio data needed to compensate for the change in audio characteristics.
- it is assumed that the relationship between the difference between the reflection characteristics of the determined reflection surface and the reference reflection characteristics and the resulting amount of change in audio characteristics is defined in advance. It is also assumed that the relationship between the amount of change in audio characteristics and the output amount, or amount of change in output, of the audio data needed to compensate for it is defined in advance.
- the voice information acquisition unit 72 acquires voice data to be output from the directional speaker 32 from the voice information storage unit 54 according to the game situation.
- the reflection surface determination unit 74 determines, based on the audio data acquired by the audio information acquisition unit 72 and the candidate reflection surface information, the reflection surface that will reflect the output from the directional speaker 32, from among the plurality of candidate reflection surfaces included in the candidate reflection surface information.
- the reflecting surface determination unit 74 specifies a segment area ID corresponding to the output condition associated with the acquired audio data.
- the reflection surface determination unit 74 determines the candidate reflection surface corresponding to the segment area ID identified with reference to the candidate reflection surface information as the reflection surface that reflects the audio data output from the directional speaker 32.
- the reflection surface information acquisition unit 76 acquires, from the candidate reflection surface information, information on the candidate reflection surface (the determined reflection surface) that the reflection surface determination unit 74 determined will reflect the audio data output from the directional speaker 32. Specifically, the reflection surface information acquisition unit 76 acquires the position information of the determined reflection surface and its reach distance, reflectance, and incident angle, which are its reflection characteristics.
- the output amount determination unit 78 determines the output amount of audio data according to the reflection characteristics of the determined reflection surface acquired by the reflection surface information acquisition unit 76.
- the output amount of the audio data is determined according to the reach distance the audio data output from the directional speaker 32 travels, via the determined reflection surface, until it reaches the user.
- specifically, the reach distance via the determined reflection surface is compared with the reference reach distance; if it is larger than the reference reach distance, the output amount is increased, and if it is smaller, the output amount is decreased.
- the amounts of increase and decrease in output are determined according to the difference between the reach distance via the determined reflection surface and the reference reach distance.
- the output amount determination unit 78 also determines the output amount of the audio data according to the reflectance of the determined reflection surface. Specifically, the reflectance of the determined reflection surface is compared with the reflectance of the reference material; if it is larger than the reflectance of the reference material, the output amount is reduced, and if it is smaller, the output amount is increased. The amounts of increase and decrease in output are determined according to the difference between the reflectance of the determined reflection surface and that of the reference material.
- the output amount determination unit 78 further determines the output amount of the audio data according to the incident angle at which the audio data output from the directional speaker 32 strikes the determined reflection surface. Specifically, the incident angle on the determined reflection surface is compared with the reference incident angle; if it is larger than the reference incident angle, the output amount is reduced, and if it is smaller, the output amount is increased. The amounts of increase and decrease in output are determined according to the difference between the incident angle on the determined reflection surface and the reference incident angle.
- the output amount determination unit 78 may determine the output amount using any one of the reach distance, the reflectance, and the incident angle, which are the reflection characteristics of the determined reflection surface described above, or by arbitrarily combining any two or more of them.
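A linear compensation rule consistent with the comparisons above might look as follows. The reference values and coefficients are invented for illustration; the patent only assumes such relationships are defined in advance:

```python
def adjust_output(base_volume, surface, ref_distance=2.0, ref_reflectance=0.8,
                  ref_angle=45.0, k_dist=0.5, k_refl=1.0, k_angle=0.01):
    """Raise the volume when the sound path is longer or the surface reflects
    less than the reference; lower it in the opposite cases, and also for a
    larger incident angle (which the text treats as reflecting more
    efficiently). All reference values and gains are hypothetical."""
    v = base_volume
    v += k_dist * (surface["reach_distance"] - ref_distance)    # longer path -> louder
    v -= k_refl * (surface["reflectance"] - ref_reflectance)    # duller surface -> louder
    v -= k_angle * (surface["incident_angle"] - ref_angle)      # grazing angle -> quieter
    return max(v, 0.0)
```

A surface matching all three reference values leaves the volume unchanged.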
- the output control unit 70 controls the motor driver 33 to adjust the orientation of the directional speaker 32 so that audio data is output toward the determined reflection surface, based on the position information of the determined reflection surface. Then, the output control unit 70 causes the directional speaker 32 to output the audio data at the output amount determined by the output amount determination unit 78.
- the output amount determination unit 78 may also determine the frequency of the audio data according to the reach distance via the determined reflection surface, the reflectance of the determined reflection surface, and the incident angle on the determined reflection surface.
- in this way, the sound output can be controlled in accordance with the reflection characteristics of the determined reflection surface, so that the user can hear the intended sound regardless of the material and position of the determined reflection surface, the position of the user, and so on.
- the audio information acquisition unit 72 acquires audio information of audio to be output from the directional speaker 32 from the audio information stored in the audio information storage unit 54 (S11).
- the reflection surface determination unit 74 specifies a divided region based on the audio information (S12). Specifically, the reflection surface determination unit 74 specifies the divided region corresponding to the output condition included in the audio information acquired by the audio information acquisition unit 72 in step S11.
- the reflection surface determination unit 74 then determines, from the candidate reflection surface information stored in the candidate reflection surface information storage unit, the candidate reflection surface corresponding to the divided region specified in step S12 as the determined reflection surface that will reflect the audio data output from the directional speaker 32 (S13).
- the reflection surface information acquisition unit 76 acquires the reflection surface information of the determined reflection surface from the candidate reflection surface information storage unit (S14). Specifically, the reflection surface information acquisition unit 76 acquires position information indicating the position of the determined reflection surface and the reflection characteristics (reach distance, reflectance, incident angle) of the determined reflection surface.
- the output amount determination unit 78 determines the output amount of the audio data to be output toward the determined reflection surface determined by the reflection surface determination unit 74 in step S13 (S15).
- the output amount determination unit 78 determines the output amount based on the reach distance, the reflectance, and the incident angle, which are the reflection characteristics of the determined reflection surface acquired by the reflection surface information acquisition unit 76.
- the output control unit 70 controls the motor driver 33 to adjust the direction of the directional speaker 32 so that the audio data is output toward the position indicated by the position information of the determined reflection surface, causes the directional speaker 32 to output the audio data at the output amount determined by the output amount determination unit 78 in step S15 (S16), and ends the sound output control process.
- the entertainment system 10 may include a plurality of directional speakers 32.
- FIG. 13 shows an example of a structure in which a plurality of directional speakers 32 are arranged.
- the direction of each directional speaker 32-n is adjusted so that sound data is output toward a different reflecting surface.
- the reflection surface toward which each directional speaker 32-n is directed is determined based on, for example, a room image acquired by the room image acquisition unit 62.
- the direction of the directional speaker 32-n once determined here is basically fixed.
- when adjusting the direction of each directional speaker 32-n, the room space may be divided into a plurality of divided regions (for example, as many regions as there are directional speakers 32) regardless of the position of the user, and each directional speaker 32-n may be adjusted to face a reflection surface in a different divided region.
- alternatively, as many reflection surfaces with excellent reflection characteristics as there are directional speakers 32 may be selected in the room, and each directional speaker 32-n adjusted to face a different one of them.
- each directional speaker 32-n and the position information of the reflection surface toward which it is directed are stored in association with each other.
- the output condition here is the sound generation location.
- the directional speaker 32 that outputs the audio data is selected based on the position information of the reflection surface each directional speaker 32 faces and the position information of the user. Specifically, based on the position information of each reflection surface and the position information of the user, the region in which that reflection surface lies relative to the user is determined. Thus, even if the user moves around the room, regions based on the user's position can be determined.
- if there is a region that matches the sound generation location, the directional speaker 32 corresponding to the reflection surface in that region is selected. If there is no matching region, the directional speaker 32 corresponding to the reflection surface located in the region closest to the sound generation location is selected. Determining the orientations of the plurality of directional speakers 32-n in advance in this way yields a quick sound-output response, which is useful, for example, when sound must be output at a location based on the user's position in response to a user operation.
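Selecting among pre-aimed speakers then reduces to a nearest-surface lookup. Everything below (the speaker table, 2-D coordinates, field names) is a hypothetical sketch, not structures from the patent:

```python
import math

SPEAKERS = [  # hypothetical pre-aimed speakers and their fixed reflection points
    {"id": 1, "surface_pos": (0.0, 2.0)},
    {"id": 2, "surface_pos": (4.0, 2.0)},
]

def select_speaker(speakers, user_pos, offset):
    """Pick the speaker whose pre-assigned reflection surface lies closest to
    the requested sound generation location (user position + offset)."""
    target = tuple(u + o for u, o in zip(user_pos, offset))
    return min(speakers, key=lambda s: math.dist(s["surface_pos"], target))
```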
- the output condition associated with the sound data stored in the sound information storage unit 54 is mainly information indicating a sound generation location based on the user character in the game.
- the case where the output condition is information indicating a specific position in the room, such as information indicating a sound generation location based on the position of an object in the room or information indicating a predetermined position based on the structure of the room, is also described.
- information indicating a position a predetermined distance away from the user, such as 50 cm to the left of the user's position.
- information indicating a direction or location as viewed from the user, such as to the user's right or in front of the user, the center of the room, and so on.
- Information indicating a predetermined position based on the room structure.
- information indicating the sound generation location based on the user character is linked to the audio data as the output condition.
- the functional block diagram showing an example of the main functions executed by the entertainment system 10 according to the second embodiment is the same as the functional block diagram according to the first embodiment shown in FIG., except for the differences described below.
- hereinafter, the parts that differ from the first embodiment are described, and overlapping description is omitted.
- the voice information acquisition unit 72 acquires voice data to be output from the directional speaker 32 from the voice information storage unit 54 according to the game situation.
- information indicating a specific position in the room such as a predetermined position based on the object in the room is associated with the output condition of the audio data.
- the output condition is information indicating a specific position in the room, such as 50 cm to the left of the user's position, 30 cm in front of the display, or the center of the room.
- the reflection surface determination unit 74 determines a reflection surface to be a target for reflecting the sound data output from the directional speaker 32 based on the sound data acquired by the sound information acquisition unit 72.
- specifically, the reflection surface determination unit 74 specifies the position in the room corresponding to the position indicated by the output condition associated with the acquired audio data. For example, when a predetermined position relative to the user (for example, 50 cm to the left of the user's position) is associated as the output condition, the reflection surface determination unit 74 specifies the position of the reflection surface from the user's position information specified by the user position specifying unit 64 and the position information indicated by the output condition. When a predetermined position relative to an object other than the user (for example, 30 cm in front of the display) is associated as the output condition, the position information of the associated object is specified, and the position of the reflection surface is obtained from that position information.
- the reflection surface information acquisition unit 76 acquires reflection surface information on the reflection surface determined by the reflection surface determination unit 74 (the determined reflection surface). Specifically, the reflection surface information acquisition unit 76 acquires position information indicating the position of the determined reflection surface, the reflection characteristics of the determined reflection surface, and so on. First, the reflection surface information acquisition unit 76 acquires, from the room image, the feature information of the determined reflection surface image corresponding to the position of the determined reflection surface, the reach distance the audio data output from the directional speaker 32 travels via the determined reflection surface until it reaches the user, and the incident angle of the audio data output from the directional speaker 32 on the determined reflection surface.
- the determined reflection surface image may be an image of a region in a predetermined range centered on the position of the determined reflection surface.
- the reflection surface information acquisition unit 76 identifies the material and reflectance of the determined reflection surface by comparing the acquired feature information of the determined reflection surface image with the material feature information stored in the material feature information storage unit 52. In this way, the reflection surface information acquisition unit 76 acquires the reflectance, the reach distance, and the incident angle, which are the reflection characteristics of the determined reflection surface.
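The material lookup can be pictured as a nearest-neighbour match against the stored material table. The feature vectors and reflectance values below are invented placeholders, not data from the patent:

```python
import math

MATERIAL_FEATURES = {
    # hypothetical image-feature vectors and sound reflectances per material
    "concrete": {"features": (0.8, 0.1), "reflectance": 0.95},
    "curtain":  {"features": (0.2, 0.9), "reflectance": 0.30},
}

def estimate_reflectance(surface_features):
    """Match the surface's image features against the stored material table
    and return the reflectance of the closest material."""
    best = min(MATERIAL_FEATURES.values(),
               key=lambda m: math.dist(m["features"], surface_features))
    return best["reflectance"]
```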
- the output amount determination unit 78 determines the output amount of audio data to be output to the determined reflection surface.
- when the reflection characteristics of the reflection surface determined by the reflection surface determination unit 74 differ from the reference reflection characteristics, the output amount of the audio data stored in the audio information storage unit is changed so that the user hears the sound data at the intended volume.
- specifically, the output amount determination unit 78 determines the output amount of the audio data according to the reflectance, the reach distance, and the incident angle, which are the reflection characteristics of the determined reflection surface.
- the output amount determination process by the output amount determination unit 78 is as described in the first embodiment.
- the output control unit 70 controls the motor driver 33 so that sound data is output from the directional speaker 32 to the determined reflecting surface based on the position information of the determined reflecting surface, thereby controlling the directional speaker. Adjust the direction of 32. Then, the output control unit 70 causes the directional speaker 32 to output the audio data having the output amount determined by the output amount determining unit 78.
- in this way, the intended sound can be heard by the user in accordance with the reflection characteristics of the reflection surface at the specific position, and the desired sound can be generated from any position without being influenced by the arrangement of the furniture, the position of the user, the material of the reflection surface, and so on.
- the room image acquisition unit 62 acquires a room image taken by the camera unit 46 in response to the room image acquisition request (S21).
- the user position specifying unit 64 specifies the position of the user from the room image acquired by the room image acquisition unit 62 (S22).
- the audio information acquisition unit 72 acquires audio data to be output from the directional speaker 32 from the audio information stored in the audio information storage unit 54 (S23).
- the reflection surface determination unit 74 determines a reflection surface based on the audio information (S24). Specifically, the reflection surface determination unit 74 specifies the reflection surface corresponding to the reflection position associated with the output condition of the audio data acquired by the audio information acquisition unit 72.
- the reflection surface information acquisition unit 76 acquires information on the determined reflection surface determined by the reflection surface determination unit 74 in step S24 from the room image acquired by the room image acquisition unit 62 (S25). Specifically, the reflection surface information acquisition unit 76 acquires position information indicating the position of the determined reflection surface and the reflection characteristics (reach distance, reflectance, incident angle) of the determined reflection surface.
- the output amount determination unit 78 determines the output amount of audio data to be output to the determined reflection surface determined by the reflection surface determination unit 74 in step S24 (S26).
- the output amount determination unit 78 determines the output amount based on the reach distance, the reflectance, and the incident angle, which are the reflection characteristics of the determined reflection surface acquired by the reflection surface information acquisition unit 76.
- the output control unit 70 controls the motor driver 33 to adjust the direction of the directional speaker 32 so that the audio data is output toward the position indicated by the position information of the determined reflection surface, causes the directional speaker 32 to output the audio data at the output amount determined by the output amount determination unit 78 in step S26 (S27), and ends the sound output control process.
- the reflection surface determination unit 74 may change the reflection surface that reflects the audio data.
- when the determined reflection surface is made of a material that reflects sound poorly, the surrounding reflection surfaces may be searched, and a reflection surface with better reflection characteristics used as the determined reflection surface instead.
- for example, a reflection surface with good reflection characteristics may be selected from within an allowable range (for example, a radius of 30 cm) around the position of the initially determined reflection surface.
- the output amount determination processing by the output amount determination unit 78 may be executed for the first determined reflection surface.
- the candidate reflection surface selection process by the candidate reflection surface selection unit 66 described in the first embodiment can be applied to the process of selecting a reflection surface with good reflection characteristics from within the allowable range.
- the entertainment system 10 can be applied as an operation input system for a user to perform an input operation.
- one or more sound generation locations are set in the room, and an object (such as a part of the user's body) is arranged at the sound generation location by a user operation.
- the directional sound output from the directional speaker 32 to the sound generation location is reflected by the object arranged by the user, thereby generating a reflected sound.
- input information corresponding to a user operation is received based on the reflected sound generated in this way.
- the correspondence among the sound generation location, the sound data, and the input information may be stored in advance so that the input information can be recognized from the sound generation location and the sound data of the reflected sound.
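The stored correspondence can be as simple as a dictionary keyed by sound generation location and sound data. The keys and values here are hypothetical examples echoing the hand-raising scenario below:

```python
INPUT_TABLE = {
    # (sound generation location, sound data id) -> input information
    ("right_of_face", "beep"): "yes",
}

def interpret(reflection_detected, location, sound_id, default="no"):
    """A reflected sound at the location means the user placed an object
    (e.g. a hand) there; otherwise the default input is assumed."""
    if reflection_detected:
        return INPUT_TABLE.get((location, sound_id), default)
    return default
```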
- for example, a sound generation location is set at a position 30 cm to the right of the user's face, and an operation input system is constructed that accepts input information according to whether or not the user raises a hand to the right side of the face.
- an instruction is output asking the user to choose whether to raise a hand to the right side of the face (for example, raise the hand for “yes” and do not raise it for “no”), and input information (for example, information indicating “yes” or “no”) is received depending on whether or not a reflected sound is generated.
- alternatively, using a plurality of directional speakers 32, different audio data (for example, “left: yes” and “right: no”) may be set at a plurality of sound generation locations, such as positions 30 cm to the left and right of the user's face, with different input information (for example, information indicating “left: yes” or “right: no”) associated with each. An instruction to raise a hand on either the left or right side of the face according to the selection of “yes” or “no” is output, and the input information corresponding to the generated reflected sound is received.
- since the entertainment system 10 according to the second embodiment can generate reflected sound at an arbitrary position, it can also be applied as an operation input system using the directional speaker 32.
- a specific object such as the user's body, a cup on a table, the room lighting, or the ceiling may also be used as the sound generation location.
- information indicating an object may be associated as an audio information output condition.
- the audio information acquisition unit 72 acquires the audio data to be output from the directional speaker 32.
- the room image analysis unit 60 analyzes the room image taken by the camera unit 46.
- the present invention is not limited to this example.
- for example, sound generated from the user's position may be collected to identify the user's position or to estimate the structure of the room.
- the entertainment system 10 may instruct the user to tap a hand or utter a voice, and generate a sound from the position of the user. Then, the generated sound may be collected using a microphone or the like provided in the entertainment system 10 to measure the position of the user, the size of the room, and the like.
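One standard way to localize such a clap or utterance, sketched here under the assumption of two microphones sharing a clock (the patent does not specify a method), uses the arrival-time difference, which fixes the difference between the two source-to-microphone path lengths:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def tdoa_path_difference(t_mic1, t_mic2):
    """Difference (in meters) between the source-to-microphone path lengths,
    computed from the difference in arrival times at the two microphones.
    This is the basic quantity used in time-difference-of-arrival (TDOA)
    localization of a sound source."""
    return SPEED_OF_SOUND * (t_mic2 - t_mic1)
```

Combining such differences from two or more microphone pairs narrows the source down to an intersection of hyperbolas, giving the user's position.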
- the user may also select the reflection surface off which the sound is to be reflected. For example, the room image acquired by the room image acquisition unit 62, or the structure of the room estimated by collecting sounds generated from the user's position, may be displayed on the monitor 26 or other display means, and the user may select a reflection surface while viewing the displayed image. Alternatively, the candidate reflection surface selection unit 66 may extract candidate reflection surfaces, display information on them on the monitor 26 or other display means, and let the user designate, from among the extracted surfaces, the position at which a test is to be executed. In this case, the reflection surface determination unit 74 may determine the reflection surface so that sound is reflected only off the object selected by the user.
- in the embodiments described above, the monitor 26, the directional speaker 32, the controller 42, the camera unit 46, and the information processing device 50 are separate devices, but the present invention can also be applied to a device integrating these, such as a portable game machine or a virtual reality game machine.
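The second embodiment's operation-input idea described above can be sketched as a simple mapping from reflected-sound generation points to input values. This is a minimal illustration only; the function names, coordinate encoding, and 30 cm offset constant are assumptions for the sketch, not part of the patent:

```python
# Sketch: each sound generation point (e.g. 30 cm left/right of the
# user's face) is bound to an input value, and the value whose point
# produced a reflected sound (the user raised a hand there) is taken
# as the user's input. All names are illustrative.

FACE_OFFSET_CM = 30

def bind_input_points(face_pos):
    """Associate generation points around the face with input values."""
    x, y, z = face_pos
    return {
        (x - FACE_OFFSET_CM, y, z): "yes",  # 30 cm left of the face
        (x + FACE_OFFSET_CM, y, z): "no",   # 30 cm right of the face
    }

def read_selection(points, detected_point):
    """Return the input value bound to the point where a reflected
    sound was detected, or None if no bound point produced one."""
    return points.get(detected_point)

points = bind_input_points((100, 0, 150))
assert read_selection(points, (70, 0, 150)) == "yes"
```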
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
- Circuit For Audible Band Transducer (AREA)
- Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- General Health & Medical Sciences (AREA)
Abstract
Description
... and an output control unit that outputs a directional sound corresponding to the acquired reflection surface information toward the determined reflection surface.
Hereinafter, a first embodiment of the present invention will be described in detail with reference to the drawings.
FIG. 1 is a diagram showing the hardware configuration of an entertainment system (sound output system) 10 according to an embodiment of the present invention. As shown in FIG. 1, the entertainment system 10 is a computer system comprising a control unit 11, a main memory 20, an image processing unit 24, a monitor 26, an input/output processing unit 28, an audio processing unit 30, a directional speaker 32, an optical disc reading unit 34, an optical disc 36, a hard disk 38, an interface (I/F) 40, a controller 42, and a network interface (I/F) 44.
FIG. 3 is an overall schematic diagram showing a usage scene of the entertainment system 10 according to this embodiment. As shown in FIG. 3, the entertainment system 10 is used by a user in, for example, a private room surrounded by walls on four sides and furnished with various pieces of furniture. The directional speaker 32 is installed on top of the monitor 26 so that it can output a directional sound toward any location in the room. The camera unit 46 is likewise installed on top of the monitor 26 so that it can capture the entire room. The monitor 26, the directional speaker 32, and the camera unit 46 are connected to an information processing device 50 such as a home game console. When the user operates the controller 42 to play a game with the entertainment system 10 in such a room, the entertainment system 10 first reads the game program, audio data such as game sound effects, and control parameter data for outputting each piece of audio data from the optical disc 36 or the hard disk 38 of the information processing device 50, and executes the game. The entertainment system 10 then provides the user with an immersive game environment by controlling the directional speaker 32 so as to generate sound effects from predetermined locations in accordance with the game image displayed on the monitor 26 and the progress of the game. Specifically, for example, when an explosion occurs behind the user character in the game, reflecting a directional sound off the wall behind the user makes the explosion sound appear to come from behind the actual user. Similarly, when the heart rate of the user character in the game rises, reflecting a directional sound off the user's body makes the heartbeat appear to come from the user himself or herself. In such presentations, the reflection characteristics differ depending on the material and orientation of the reflection surface (a wall, a desk, the user's body, and so on), so the sound heard by the user does not necessarily have the intended characteristics (volume, pitch, and so on). The present invention is therefore configured so that the output of the directional speaker 32 can be controlled according to the material and orientation of the reflection surface. Although this embodiment describes a case where the user plays a game using the entertainment system 10, the invention is also applicable to watching moving images such as movies, or to listening to audio alone such as radio.
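The compensation described above, raising the directional speaker's output so that the sound still reaches the user at the intended level after losing energy at the reflection surface and over the travel path, can be sketched as follows. The attenuation model and constants here are assumptions for illustration, not the patent's actual method:

```python
import math

def output_gain(base_level, reflectance, incidence_deg, path_cm):
    """Compute the output level needed for the sound to arrive at the
    user at roughly base_level after reflection. Illustrative model:
    - reflectance: fraction of energy the surface reflects (0..1]
    - incidence_deg: angle between the beam and the surface normal;
      grazing angles send less energy toward the user
    - path_cm: speaker -> surface -> user distance in centimetres
    """
    angle_factor = max(math.cos(math.radians(incidence_deg)), 0.1)
    distance_factor = (path_cm / 100.0) ** 2  # inverse-square spread
    return base_level * distance_factor / (reflectance * angle_factor)
```

Under this toy model, a hard wall hit head-on at 1 m needs no boost, while a surface reflecting half the energy needs twice the output, and doubling the path quadruples it.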
FIG. 4 is a functional block diagram showing an example of the main functions executed by the entertainment system 10 according to the first embodiment. As shown in FIG. 4, in the first embodiment the entertainment system 10 functionally comprises, for example, an audio information storage unit 54, a material feature information storage unit 52, a room image analysis unit 60, and an output control unit 70. Of these functions, the room image analysis unit 60 and the output control unit 70 are realized by the control unit 11 executing a program read from, for example, the optical disc 36 or the hard disk 38, or supplied from a network via the network interface 48. The audio information storage unit 54 and the material feature information storage unit 52 are realized by, for example, the optical disc 36 or the hard disk 38.
The room image analysis unit 60 analyzes the image of the room captured by the camera unit 46. The room image analysis unit 60 is realized mainly by the control unit 11, and includes a room image acquisition unit 62, a user position identification unit 64, and a candidate reflection surface selection unit 66.
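The candidate-surface selection just mentioned, echoed in claim 8's selection of the candidate reflection surface whose reflection surface information indicates superior reflection characteristics, could be sketched like this. The scoring weights and candidate encoding are invented for illustration and are not from the patent:

```python
def pick_reflection_surface(candidates):
    """Pick the candidate with the best reflection characteristics:
    high reflectance, small incidence angle, short path to the user.
    Each candidate is a dict like
    {"name": ..., "reflectance": 0..1, "incidence_deg": ..., "path_cm": ...}.
    The weights below are illustrative only."""
    def score(c):
        return (c["reflectance"]
                - 0.002 * c["incidence_deg"]  # penalize grazing hits
                - 0.001 * c["path_cm"])       # penalize long paths
    return max(candidates, key=score)

surfaces = [
    {"name": "wall", "reflectance": 0.9, "incidence_deg": 20, "path_cm": 300},
    {"name": "curtain", "reflectance": 0.3, "incidence_deg": 10, "path_cm": 200},
]
assert pick_reflection_surface(surfaces)["name"] == "wall"
```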
The output control unit 70 controls the motor driver 33 to orient the directional speaker 32, and causes the directional speaker 32 to output predetermined audio data. The output control unit 70 is realized mainly by the control unit 11 and the audio processing unit 30, and includes an audio information acquisition unit 72, a reflection surface determination unit 74, a reflection surface information acquisition unit 76, and an output amount determination unit 78.
The first embodiment mainly described the case where the output condition associated with the audio data stored in the audio information storage unit 54 is information indicating a sound generation location relative to the user character in the game. The second embodiment additionally covers cases where the output condition is information indicating a specific position in the room, such as information indicating a sound generation location relative to the position of an object in the room, or information indicating a predetermined position based on the structure of the room. Specifically, this may be information indicating a position a predetermined distance and range away from the user, such as 50 cm to the left of the user's position; information indicating a direction or place as seen from the user, such as to the user's right or in front of the user; or information indicating a predetermined position based on the structure of the room, such as the center of the room. When the output condition is associated with information indicating a sound generation location relative to the user character, information indicating a specific position in the room may be derived from that information.
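Resolving an output condition such as "50 cm to the user's left" or "the center of the room" into a concrete room coordinate, as described above, could look like the following sketch. The condition encoding and function name are assumptions, not the patent's representation:

```python
def resolve_sound_location(condition, user_pos, room_size):
    """Map an output condition to a point in room coordinates.
    Illustrative condition encodings (not the patent's):
      ("offset_from_user", dx, dy)  - e.g. 50 cm to the user's left
      ("room_center",)              - position based on room structure
    user_pos and room_size are (x, y) tuples in centimetres."""
    kind = condition[0]
    if kind == "offset_from_user":
        _, dx, dy = condition
        return (user_pos[0] + dx, user_pos[1] + dy)
    if kind == "room_center":
        return (room_size[0] / 2, room_size[1] / 2)
    raise ValueError(f"unknown condition: {kind!r}")

# 50 cm to the left of a user standing at (200, 100)
assert resolve_sound_location(("offset_from_user", -50, 0),
                              (200, 100), (400, 300)) == (150, 100)
```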
Claims (12)
- An information processing device comprising: a reflection surface determination unit that determines a reflection surface off which sound is to be reflected; a reflection surface information acquisition unit that acquires reflection surface information indicating reflection characteristics of the determined reflection surface; and an output control unit that outputs a directional sound corresponding to the acquired reflection surface information toward the determined reflection surface.
- The information processing device according to claim 1, wherein the reflection surface information acquisition unit acquires, as the reflection surface information, the reflectance of the reflection surface.
- The information processing device according to claim 2, wherein the output control unit determines the output level of the directional sound according to the acquired reflectance.
- The information processing device according to claim 1, wherein the reflection surface information acquisition unit acquires, as the reflection surface information, the angle at which the directional sound is incident on the reflection surface.
- The information processing device according to claim 4, wherein the output control unit determines the output level of the directional sound according to the acquired incident angle.
- The information processing device according to claim 1, wherein the reflection surface information acquisition unit acquires, as the reflection surface information, the distance the directional sound travels until it is reflected by the reflection surface and reaches the user.
- The information processing device according to claim 6, wherein the output control unit determines the output level of the directional sound according to the acquired travel distance.
- The information processing device according to claim 1, wherein the reflection surface information acquisition unit acquires the reflection surface information for each of a plurality of candidate reflection surfaces, the device further comprising a reflection surface selection unit that selects, from among the plurality of candidate reflection surfaces, a candidate reflection surface whose reflection surface information indicates superior reflection characteristics.
- The information processing device according to any one of claims 1 to 8, wherein the reflection surface information acquisition unit acquires the reflection surface information based on feature information of an image of the reflection surface captured by a camera.
- An information processing system comprising: a directional speaker that causes a non-directional sound, generated by reflecting a directional sound off a predetermined reflection surface, to reach a user; a reflection surface determination unit that determines the reflection surface off which the directional sound is to be reflected; a reflection surface information acquisition unit that acquires reflection surface information indicating reflection characteristics of the determined reflection surface; and an output control unit that outputs, from the directional speaker, a directional sound corresponding to the acquired reflection surface information toward the determined reflection surface.
- A control method comprising: a reflection surface determination step of determining a reflection surface off which sound is to be reflected; a reflection surface information acquisition step of acquiring reflection surface information indicating reflection characteristics of the determined reflection surface; and an output control step of outputting a directional sound corresponding to the acquired reflection surface information toward the determined reflection surface.
- A program for causing a computer to function as: a reflection surface determination unit that determines a reflection surface off which sound is to be reflected; a reflection surface information acquisition unit that acquires reflection surface information indicating reflection characteristics of the determined reflection surface; and an output control unit that outputs a directional sound corresponding to the acquired reflection surface information toward the determined reflection surface.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15863624.1A EP3226579B1 (en) | 2014-11-26 | 2015-11-20 | Information-processing device, information-processing system, control method, and program |
JP2016561558A JP6330056B2 (ja) | 2014-11-26 | 2015-11-20 | 情報処理装置、情報処理システム、制御方法、及びプログラム |
CN201580062967.5A CN107005761B (zh) | 2014-11-26 | 2015-11-20 | 信息处理设备、信息处理系统、控制方法和程序 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014-239088 | 2014-11-26 | ||
JP2014239088 | 2014-11-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016084736A1 true WO2016084736A1 (ja) | 2016-06-02 |
Family
ID=56011554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/082678 WO2016084736A1 (ja) | 2014-11-26 | 2015-11-20 | 情報処理装置、情報処理システム、制御方法、及びプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US10057706B2 (ja) |
EP (1) | EP3226579B1 (ja) |
JP (1) | JP6330056B2 (ja) |
CN (1) | CN107005761B (ja) |
WO (1) | WO2016084736A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3386217A4 (en) * | 2015-12-04 | 2019-04-03 | Samsung Electronics Co., Ltd. | AUDIO PROVIDING METHOD AND DEVICE THEREOF |
US10362430B2 (en) | 2015-12-04 | 2019-07-23 | Samsung Electronics Co., Ltd | Audio providing method and device therefor |
JP2021513264A (ja) * | 2018-02-06 | 2021-05-20 | 株式会社ソニー・インタラクティブエンタテインメント | スピーカシステムにおける音の定位 |
US11337024B2 (en) | 2018-06-21 | 2022-05-17 | Sony Interactive Entertainment Inc. | Output control device, output control system, and output control method |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170164099A1 (en) * | 2015-12-08 | 2017-06-08 | Sony Corporation | Gimbal-mounted ultrasonic speaker for audio spatial effect |
US9924291B2 (en) | 2016-02-16 | 2018-03-20 | Sony Corporation | Distributed wireless speaker system |
EP4376444A3 (en) * | 2016-08-01 | 2024-08-21 | Magic Leap, Inc. | Mixed reality system with spatialized audio |
CN108579084A (zh) | 2018-04-27 | 2018-09-28 | 腾讯科技(深圳)有限公司 | 虚拟环境中的信息显示方法、装置、设备及存储介质 |
US11425492B2 (en) * | 2018-06-26 | 2022-08-23 | Hewlett-Packard Development Company, L.P. | Angle modification of audio output devices |
US11443737B2 (en) | 2020-01-14 | 2022-09-13 | Sony Corporation | Audio video translation into multiple languages for respective listeners |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010056710A (ja) * | 2008-08-27 | 2010-03-11 | Sharp Corp | 指向性スピーカ反射方向制御機能付プロジェクタ |
JP2012029096A (ja) * | 2010-07-23 | 2012-02-09 | Nec Casio Mobile Communications Ltd | 音声出力装置 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4731848A (en) * | 1984-10-22 | 1988-03-15 | Northwestern University | Spatial reverberator |
GB9922919D0 (en) * | 1999-09-29 | 1999-12-01 | 1 Ipr Limited | Transducer systems |
NO316560B1 (no) * | 2001-02-21 | 2004-02-02 | Meditron Asa | Mikrofon med avstandsmåler |
KR100922910B1 (ko) * | 2001-03-27 | 2009-10-22 | 캠브리지 메카트로닉스 리미티드 | 사운드 필드를 생성하는 방법 및 장치 |
ITBS20020063A1 (it) * | 2002-07-09 | 2004-01-09 | Outline Di Noselli G & S N C | Guida d'onda a singola e multipla riflessione |
JP4464064B2 (ja) * | 2003-04-02 | 2010-05-19 | ヤマハ株式会社 | 残響付与装置および残響付与プログラム |
JP4114584B2 (ja) | 2003-09-25 | 2008-07-09 | ヤマハ株式会社 | 指向性スピーカ制御システム |
JP4114583B2 (ja) | 2003-09-25 | 2008-07-09 | ヤマハ株式会社 | 特性補正システム |
US7240544B2 (en) * | 2004-12-22 | 2007-07-10 | Daimlerchrysler Corporation | Aerodynamic noise source measurement system for a motor vehicle |
JP5043701B2 (ja) * | 2008-02-04 | 2012-10-10 | キヤノン株式会社 | 音声再生装置及びその制御方法 |
US8396226B2 (en) * | 2008-06-30 | 2013-03-12 | Costellation Productions, Inc. | Methods and systems for improved acoustic environment characterization |
CN102893175B (zh) | 2010-05-20 | 2014-10-29 | 皇家飞利浦电子股份有限公司 | 使用声音信号的距离估计 |
EP2410769B1 (en) * | 2010-07-23 | 2014-10-22 | Sony Ericsson Mobile Communications AB | Method for determining an acoustic property of an environment |
JP5577949B2 (ja) | 2010-08-25 | 2014-08-27 | パナソニック株式会社 | 天井スピーカ装置 |
US20130163780A1 (en) * | 2011-12-27 | 2013-06-27 | John Alfred Blair | Method and apparatus for information exchange between multimedia components for the purpose of improving audio transducer performance |
WO2015035093A1 (en) * | 2013-09-05 | 2015-03-12 | Daly George William | Systems and methods for acoustic processing of recorded sounds |
US9226090B1 (en) * | 2014-06-23 | 2015-12-29 | Glen A. Norris | Sound localization for an electronic call |
2015
- 2015-09-10 US US14/850,414 patent/US10057706B2/en active Active
- 2015-11-20 WO PCT/JP2015/082678 patent/WO2016084736A1/ja active Application Filing
- 2015-11-20 JP JP2016561558A patent/JP6330056B2/ja active Active
- 2015-11-20 EP EP15863624.1A patent/EP3226579B1/en active Active
- 2015-11-20 CN CN201580062967.5A patent/CN107005761B/zh active Active
Also Published As
Publication number | Publication date |
---|---|
EP3226579B1 (en) | 2021-01-20 |
US20160150314A1 (en) | 2016-05-26 |
JP6330056B2 (ja) | 2018-05-23 |
CN107005761A (zh) | 2017-08-01 |
EP3226579A1 (en) | 2017-10-04 |
CN107005761B (zh) | 2020-04-10 |
US10057706B2 (en) | 2018-08-21 |
JPWO2016084736A1 (ja) | 2017-04-27 |
EP3226579A4 (en) | 2018-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6330056B2 (ja) | 情報処理装置、情報処理システム、制御方法、及びプログラム | |
US9906885B2 (en) | Methods and systems for inserting virtual sounds into an environment | |
CN104301664B (zh) | 指向性控制系统、指向性控制方法、收音系统及收音控制方法 | |
JP4474013B2 (ja) | 情報処理装置 | |
JP5456832B2 (ja) | 入力された発話の関連性を判定するための装置および方法 | |
JP6055657B2 (ja) | ゲームシステム、ゲーム処理制御方法、ゲーム装置、および、ゲームプログラム | |
KR102463806B1 (ko) | 이동이 가능한 전자 장치 및 그 동작 방법 | |
KR102072146B1 (ko) | 입체 음향 서비스를 제공하는 디스플레이 장치 및 방법 | |
KR20070042104A (ko) | 화상 표시 장치 및 방법, 및 프로그램 | |
JP2013008031A (ja) | 情報処理装置、情報処理システム、情報処理方法及び情報処理プログラム | |
JP7100824B2 (ja) | データ処理装置、データ処理方法及びプログラム | |
JP5700963B2 (ja) | 情報処理装置およびその制御方法 | |
WO2017135194A1 (ja) | 情報処理装置、情報処理システム、制御方法およびプログラム | |
JP6678315B2 (ja) | 音声再生方法、音声対話装置及び音声対話プログラム | |
US11032659B2 (en) | Augmented reality for directional sound | |
JP7053074B1 (ja) | 鑑賞システム、鑑賞装置及びプログラム | |
CN108304152B (zh) | 手持式电子装置、影音播放装置以及其影音播放方法 | |
JP6600186B2 (ja) | 情報処理装置、制御方法およびプログラム | |
WO2023195048A1 (ja) | 音声拡張現実オブジェクト再生装置、情報端末システム | |
JP2021033907A (ja) | 表示システムおよびその制御方法 | |
US20240177425A1 (en) | Information processing apparatus, information processing method, and recording medium | |
US20230101693A1 (en) | Sound processing apparatus, sound processing system, sound processing method, and non-transitory computer readable medium storing program | |
JP2021033906A (ja) | 表示システムおよびその制御方法 | |
JP2021033908A (ja) | 表示システムおよびその制御方法 | |
JP2021033200A (ja) | 表示システムおよびその制御方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15863624 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016561558 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2015863624 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015863624 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |