WO2000018112A1 - Apparatus and method for presenting sound and image - Google Patents
- Publication number
- WO2000018112A1 (PCT/JP1998/004301)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- video
- data
- area
- presenting
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
- G06F3/1446—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/02—Synthesis of acoustic waves
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/426—Internal components of the client ; Characteristics thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/434—Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
- H04N21/4341—Demultiplexing of audio and video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/434—Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
- H04N21/4348—Demultiplexing of additional data and video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/60—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/60—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
- H04N5/602—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for digital sound signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/445—Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
- H04N5/45—Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/60—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
- H04N5/607—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for more than one sound signal, e.g. stereo, multilanguages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/64—Constructional details of receivers, e.g. cabinets or dust covers
- H04N5/642—Disposition of sound reproducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
Definitions
- the present invention relates to a technique for presenting an image together with sound, and more particularly to a technique for presenting a sound and an image to an audience using a large display device.
- the present invention aims at providing a presentation method and a presentation device capable of presenting sound and video in harmony with each other. Disclosure of the invention
- a first aspect of the present invention provides a device for presenting sound and video
- a display device having a display screen for presenting an image
- An audio device that has a plurality of sound sources arranged around a display screen, and that uses these sound sources to present sound so that a sound image is formed in an arbitrary region in the display screen;
- a presentation information storage device for storing presentation information including: video data indicating a video; audio data indicating a sound to be presented; and area data indicating an audio reproduction area in which the audio data is to be reproduced.
- a video playback device that plays back video based on video data in a predetermined video playback area on a display screen
- a sound reproduction device that reproduces sound based on sound data using a plurality of sound sources of the sound device so that a sound image is formed in a sound reproduction region on a display screen;
- the area data indicates an audio reproduction area in which audio data is to be reproduced, and includes information indicating a video reproduction area in which video data is to be reproduced.
- a third aspect of the present invention provides a device for presenting sound and video according to the first or second aspect, further comprising an instruction input device for inputting an operator's instruction,
- and a presentation mode changing device that modifies the presentation information in the presentation information storage device based on the instruction, thereby changing the presentation mode of the sound and video;
- a fourth aspect of the present invention is the apparatus for presenting sound and video according to the first to third aspects
- An information reading device for reading the presentation information recorded on the information recording medium and storing the presentation information in the presentation information storage device is further provided.
- a fifth aspect of the present invention is the apparatus for presenting sound and video according to the first to fourth aspects
- a display device having a rectangular display screen, and an audio device having four sound sources arranged at arrangement points located substantially at the four corners of the display screen, are used,
- the sound reproduction area is defined as a rectangular area, and representative points representing the sound reproduction area are defined at four vertex positions of the rectangular area.
- the sound data to be reproduced in the sound reproduction area is composed of four-channel sound signals
- the four-channel sound signals correspond to the four representative points, respectively.
- and, by calculating the distance between each arrangement point and each representative point and performing volume control in accordance with this distance, the sound is reproduced by the sound reproducing device so that a sound image of the acoustic signal corresponding to each representative point position is obtained.
- a sixth aspect of the present invention provides the apparatus for presenting sound and video according to the first to fourth aspects, wherein
- a display device having a rectangular display screen, and an audio device having four sound sources arranged at arrangement points located substantially at the four corners of the display screen, are used,
- the sound reproduction area is defined as a rectangular area, and representative points representing the sound reproduction area are defined at the four vertex positions of the rectangular area. If the sound data to be reproduced in the sound reproduction area is composed of two-channel stereo sound signals, the left sound signal is assigned to the two left representative points of the four representative points and the right sound signal is assigned to the two right representative points; by calculating the distance between each arrangement point and each representative point and performing volume control according to this distance, the sound is reproduced by the sound reproducing device so that a sound image of the sound signal corresponding to each representative point position is obtained.
- a display device having a rectangular display screen and an audio device having four sound sources arranged at arrangement points located substantially at the four corners of the display screen are used, and the sound reproduction area is defined as a rectangular area.
- a representative point representing the sound reproduction area is set at the four vertices of this rectangular area,
- the monaural sound signal is made to correspond to each of the four representative points, and the distance between each arrangement point and each representative point is calculated.
- By performing volume control according to the distance, sound is reproduced by the sound reproducing device so that sound images of the sound signals corresponding to the positions of the respective representative points are obtained.
- An eighth aspect of the present invention is the apparatus for presenting sound and video according to the first to seventh aspects
- a ninth aspect of the present invention is the apparatus for presenting sound and video according to the first to seventh aspects described above,
- Priorities are defined for multiple pieces of presentation information, and for parts that overlap each other, only the video for the presentation information with the higher priority is played back, while the video for the presentation information with the lower priority is hidden,
- the function of reproducing the sound by reducing the volume by an amount corresponding to the area of the concealed portion of the video is provided.
- another aspect of the present invention provides a method for presenting a video on a predetermined display screen together with sound related to sounding bodies appearing in the video,
- wherein an area having a hierarchical structure is defined so that an upper-level area includes one or more lower-level areas, a lower-level sounding body is displayed in each lower-level area, and an upper-level sounding body including the lower-level sounding bodies is displayed in the upper-level area,
- the method including preparing video data for reproducing a video screen on which the upper-level sounding body is displayed.
- a thirteenth aspect of the present invention provides the method for presenting sound and video according to the above-described aspect, wherein
- the area including the lower sounding body is enlarged and the sound related to the lower sounding body is selectively reproduced.
- a fourteenth aspect of the present invention is the method for presenting sound and video according to the above-mentioned aspect, wherein
- the video screen can be arbitrarily enlarged or reduced to be displayed, and at this time, the sound related to the highest-level sounding body whose entirety is displayed is selectively reproduced.
- the sound volume of the sound related to the sounding body is controlled based on the display magnification of the sounding body.
- the playback volume of each sounding body can be set to a specific volume value based on the operator's instruction, and when the sound relating to a sounding body for which a volume value has been set is played back, the set volume value is used.
- a seventeenth aspect of the present invention is the method for presenting sound and video according to the above-mentioned aspect, wherein
- a directional microphone that mainly collects the sound generated by the lower sounding body is installed near the lower sounding body so that the sound of the lower sounding body is recorded, and the sound generated by the upper sounding body is recorded separately.
- FIG. 1 is a plan view showing an example of an image of a car presented on a large display device.
- FIG. 2 is a plan view showing a method of presenting a sound so that a sound image of the sound of the engine is formed in a partial area in the video shown in FIG.
- FIG. 3 is a block diagram showing a configuration of presentation information I used in the device for presenting sound and video according to the present invention.
- FIG. 4 is a block diagram showing a configuration example of the presentation information shown in FIG.
- FIG. 5 is a principle diagram illustrating an example of a method of dividing a display screen and showing a partial area as digital data.
- FIG. 6 is a diagram showing an example of a bit expression in the method shown in FIG.
- FIG. 7 is a block diagram showing an example of presentation information configured using the method shown in FIG.
- FIG. 8 is a plan view showing an example of a state where video and sound are presented on a part of a display screen by the method according to the present invention, and a block diagram showing presentation information corresponding to such presentation.
- FIG. 9 is a plan view showing another example of a state where video and sound are presented on a part of the display screen by the method according to the present invention, and a block diagram showing presentation information corresponding to such presentation.
- FIG. 10 is a plan view showing another example of a state where video and sound are presented on a part of the display screen by the method according to the present invention, and a block diagram showing presentation information corresponding to such presentation. It is.
- FIG. 11 is a plan view showing a state in which two different sounding bodies are presented on the same screen by the method according to the present invention.
- FIG. 12 is a diagram showing presentation information to be prepared for making the presentation shown in FIG.
- FIG. 13 is a plan view showing an example of a state in which two sets of sounding bodies having a hierarchical structure are presented on the same screen by the method according to the present invention.
- FIG. 14 is a diagram showing presentation information to be prepared for making the presentation shown in FIG.
- FIG. 15 is a plan view showing another example of a state where two sets of sounding bodies having a hierarchical structure are presented on the same screen by the method according to the present invention.
- FIG. 16 shows the presentation information to be prepared in order to make the presentation shown in FIG.
- FIG. 17 is a plan view showing an example of a state in which six sets of sounding bodies having a hierarchical structure are presented on the same screen by the method according to the present invention.
- FIG. 18 is a diagram showing presentation information to be prepared for making the presentation shown in FIG.
- FIG. 19 is a plan view showing a state in which only one of the six sounding bodies shown in FIG. 17 is displayed.
- FIG. 20 is a plan view showing a state in which one set of the sounding bodies shown in FIG. 19 is enlarged and displayed.
- FIG. 21 shows the presentation information to be prepared in order to make the presentation shown in FIG.
- FIG. 22 is a plan view showing a state in which the two sets of sounding bodies shown in FIG. 19 are enlarged and displayed.
- FIG. 23 is a diagram showing a part of presentation information to be prepared for making the presentation shown in FIG.
- FIG. 24 is a plan view showing an example in which a plurality of sounding bodies having a hierarchical structure are defined in the same video by the method according to the present invention.
- FIG. 25 is a diagram showing presentation information to be prepared for making the presentation shown in FIG.
- FIG. 26 is a plan view showing a state where a part of the sounding body shown in FIG. 24 is enlarged and displayed.
- FIG. 27 is a diagram showing presentation information to be prepared for making the presentation shown in FIG.
- FIG. 28 is a plan view showing another definition form of the plurality of sounding bodies shown in FIG.
- FIG. 29 is a diagram showing presentation information corresponding to the pronunciation body definition shown in FIG.
- FIG. 30 is a plan view showing another video presentation using the sounding body definition shown in FIG.
- FIG. 31 shows the presentation information to be prepared in order to make the presentation shown in FIG.
- FIG. 32 is a front view showing a positional relationship between a sound source and a display screen in the device for presenting sound and video according to the present invention.
- FIG. 33 is a view for explaining a method of forming a sound image in a predetermined area on a display screen using the apparatus shown in FIG.
- FIG. 34 is a block diagram showing a configuration example of presentation information including sound data of four channels.
- FIG. 35 is a diagram showing a method of calculating a reproduced sound signal to be given to each speaker based on the presentation information shown in FIG.
- FIG. 36 is a front view showing a state in which two sets of presentation information are simultaneously presented using the apparatus shown in FIG. 32.
- FIG. 37 is a diagram showing presentation information to be prepared for performing the presentation shown in FIG.
- FIG. 38 shows the simultaneous presentation of four sets of presentation information using the device shown in Fig. 32.
- FIG. 39 is a diagram showing presentation information to be prepared for making the presentation shown in FIG.
- FIG. 40 is a diagram showing a practical configuration example of video data and audio data to be prepared when executing the method for presenting sound and video according to the present invention.
- FIG. 41 is a plan view showing an area having a hierarchical structure defined when the method for presenting sound and video according to the present invention is performed.
- FIG. 42 is a block diagram showing a basic configuration of a device for presenting sound and video according to the present invention.
- the image of this car may be a still image or a moving image.
- Assume that a vehicle stopped with its engine running is displayed, and that a moving image in which the engine hood vibrates slightly is presented.
- this video has engine sound added.
- the engine sound is presented by displacing the position of the sound image slightly to the right of the center position of the image.
- When shooting the video, multiple microphones may be arranged to collect multi-channel audio signals, and the audio signal of each channel may be reproduced from multiple speakers arranged at positions corresponding to the respective microphones.
- the present invention has been made based on such an idea, and its basic concept is to add information for designating a region where a sound image is to be formed to an acoustic signal of each sounding body.
- the image area is divided into 16 parts, and the hatched area in the figure is defined as the sound reproduction area of the engine sound.
- Information indicating this sound image forming area is added.
- the video of the car is reproduced on the display screen, and the engine sound is reproduced so that a sound image is formed in the sound reproduction area.
- this sound reproduction area corresponds to the engine portion in the image of the car, and by forming a sound image of the engine sound in this area, the information can be presented with a sense of presence.
- the method for presenting sound and video according to the present invention defines a sound reproduction area that has an areal extent. This is fundamentally different from the conventional stereo sound reproduction method. In other words, in the example shown in Fig. 2, the impression received by the audience is not that "the engine sound is heard from the lower right of the video screen", but rather that "the engine sound is heard from the engine portion of the car displayed on the video screen."
- A specific method of presenting a sound such that a sound image having a two-dimensional spread is obtained in a planar area on the display screen, using a plurality of sound sources such as speakers, will be described in §6.
- FIG. 3 is a block diagram showing a configuration of presentation information I used in the device for presenting sound and video according to the present invention.
- the presentation information I is composed of video data V, audio data A, and area data T.
- the video data V is data representing a video to be presented, and in the case of the example of FIG. 2, is data representing a moving image of a stopped vehicle with the engine running.
- the sound data A is data representing the sound to be presented, and is usually the sound generated by a sounding body present in the video presented based on the video data V (in the example of FIG. 2, the engine of the car).
- the region data T is data indicating a sound reproduction region in which the sound data A is to be reproduced. In the example of FIG. 2, the region data T is data indicating a hatched rectangular region.
- In its simplest form, the presentation information I may be constituted by three pieces of data: the video data V, the audio data A, and the area data T.
- However, the structure of the presentation information I can be changed appropriately depending on the content of the information to be presented. For example, if the sound reproduction area remains the same over the entire period from the start of the engine sound to its end, it suffices, as shown in Fig. 3, to prepare one each of the video data V, the sound data A, and the area data T to constitute the presentation information I.
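The three-part structure just described lends itself to a simple data record. The following sketch (Python, with field names and payloads that are illustrative assumptions, not values from the patent) shows one way the triple (V, A, T) might be held together.

```python
# Hedged sketch of the presentation information I: the triple of video data V,
# audio data A, and area data T. Field names and payloads are illustrative only.
from dataclasses import dataclass

@dataclass
class PresentationInfo:
    video: bytes   # video data V: the image or moving image to be presented
    audio: bytes   # audio data A: the sound whose image is to be localized
    area: str      # area data T: bit string naming the sound reproduction area (see Figs. 5-7)

# Example in the spirit of Fig. 2: an engine sound localized in one block of the screen.
engine_info = PresentationInfo(video=b"<car video>",
                               audio=b"<engine sound>",
                               area="0110")   # block address chosen for illustration
```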
- On the other hand, the sound and its reproduction area may change with the state of the engine: for example, the area T1 may be used as the sound reproduction area for the sound at engine start; the area T2, where the entire engine is located, may be used as the sound reproduction area for the idling sound after the engine starts; and a third area T3 may be used as the sound reproduction area for the sound produced when the engine speed is raised by pressing the accelerator.
- In this case, common data is prepared for the video data V, while three sets of audio data and area data are prepared:
- the first set, data A1 and T1, for playback during starting;
- the second set, data A2 and T2, for playback during idling;
- and the third set, data A3 and T3, for playback at high engine speed.
- Alternatively, three sets of video data may also be prepared: data V1 indicating the video during starting, data V2 indicating the video during idling, and data V3 indicating the video at high engine speed.
- In that case, the configuration shown as presentation information I(2) may be adopted.
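As a rough illustration of this variant (common video data with several audio/area pairs), the following hedged sketch holds one video payload and a list of (audio, area) segments; the block addresses and segment ordering are assumptions for illustration only.

```python
# Hedged sketch of presentation information with common video data V and three
# (audio data, area data) pairs, one per engine state. All payloads and block
# addresses are illustrative placeholders, not values taken from the patent.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SegmentedPresentationInfo:
    video: bytes                       # common video data V
    segments: List[Tuple[bytes, str]]  # (audio data An, area data Tn) for each state

car_info = SegmentedPresentationInfo(
    video=b"<car video V>",
    segments=[
        (b"<starter sound A1>",  "1011"),  # reproduced in area T1 during starting
        (b"<idling sound A2>",   "10"),    # reproduced in area T2 during idling
        (b"<high-rev sound A3>", "10"),    # reproduced in area T3 at high engine speed
    ],
)
```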
- area data is prepared to define an audio reproduction area for reproducing the audio data A. Therefore, here, a specific configuration example of the area data will be shown.
- FIG. 5 is a principle diagram explaining an example of a method of dividing the display screen into a plurality of blocks and expressing a partial area as area data.
- Several division modes, together with the addresses defined for the blocks obtained in each division mode, are shown.
- Each division mode is indicated by a division level n.
- In the division mode indicated by division level n, the two-dimensional pixel array is divided into 2^n parts both vertically and horizontally, so that 2^(2n) blocks are obtained.
- an address for indicating each block is defined for each division mode.
- the lower two bits of the address of block e are set to "00", the same as the address of block a;
- the lower two bits of the address of block f are set to "01", the same as the address of block b;
- the lower two bits of the address of block g are set to "10", the same as the address of block c;
- and the lower two bits of the address of block h are set to "11", the same as the address of block d.
- It is preferable to define the addresses in this way. With such an address definition, removing the lower two bits from the address of a specific block yields the address of the block at the same position at the next lower division level.
- The number of bits required for such an address is 2n bits, as shown in FIG. 5. The display resolution, that is, the total number of blocks obtained at each division level n, is 2^(2n), as also shown in FIG. 5.
- FIG. 6 is a diagram showing the bit representations of the division levels and addresses for the individual division modes described above.
- the division level n is represented by 4 bits.
- the number of address bits required to indicate each block is different for each division level, as described above.
- each additional division level adds 2 bits to the required address.
- the presentation information I can be represented by a configuration as shown in FIG. That is, the area data T is composed of a bit string indicating the division level and a bit string indicating the address, and the length of the bit string indicating the address is determined by the division level.
- the bit string indicating the division level may be omitted. In this case, the division level may be determined from the length of the bit string indicating the address.
- For example, if the area data consists of the 2-bit string "01", it can be recognized as indicating the area of block b at division level n = 1 in Fig. 5.
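The addressing just described can be made concrete with a short sketch. The helper below builds the 2n-bit address of a block from its column and row at division level n, two bits per level, and dropping the lower two bits yields the containing block at the next lower level. The exact bit layout and the screen orientation assumed here (block a top-left, b top-right, c bottom-left, d bottom-right) are assumptions consistent with the "01" example for block b.

```python
# Sketch of the block addressing of Figs. 5-7 under an assumed bit layout: at division
# level n the screen is split into 2**n x 2**n blocks, each named by a 2n-bit address
# built two bits per refinement level (coarsest level first).
def block_address(col: int, row: int, level: int) -> str:
    """Return the 2*level-bit address of the block at (col, row) for the given division level."""
    bits = ""
    for i in reversed(range(level)):                        # most significant 2 bits = coarsest level
        quad = ((row >> i) & 1) << 1 | ((col >> i) & 1)     # quadrant code 0..3 -> "00".."11"
        bits += format(quad, "02b")
    return bits

def parent_address(addr: str) -> str:
    """Drop the lower two bits: the same-position block at the next lower division level."""
    return addr[:-2]

assert block_address(1, 0, 1) == "01"                       # block b at level n = 1 (cf. Fig. 5)
assert parent_address(block_address(1, 2, 2)) == "10"       # its containing block at level n = 1
```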
- FIG. 8 is a plan view showing an example of a state in which video and sound are presented on a part of a display screen by a method according to the present invention, and a block diagram showing presentation information corresponding to such presentation.
- the plan view shown on the left of the figure shows a state in which the display screen is divided into four parts and predetermined contents are presented in a lower left area T (a) shown by hatching.
- To perform such a presentation, presentation information I(a) as shown on the right of the figure should be prepared.
- the video data V (a) is data for presenting a video in the hatched area
- and the audio data A(a) is the data of the sound to be presented in such a form that a sound image is formed in the same area.
- the area data T(a) is data indicating the hatched area T(a). Specifically, using the method described in §2, the area is defined by the 2-bit data string "10".
- FIG. 9 shows another example.
- a state is shown in which the display screen is divided into 16 and predetermined contents are presented in an area T (b) indicated by hatching in the figure.
- the video data V (b) and the audio data A (b) are data for presenting video and sound in the hatched area
- and the area data T(b) is data indicating the hatched area T(b).
- In this case, the area is defined by the 4-bit data string "0110".
- FIG. 10 shows yet another example.
- Here, a state is shown in which predetermined contents are presented over the entire display screen, which is shown hatched.
- presentation information I (c) as shown on the right of the figure should be prepared.
- the video data V (c) and the audio data A (c) are data for presenting video and sound on the entire display screen
- and the area data T(c) is data indicating the area T(c), which corresponds to the entire hatched display screen.
- the area data T (c) is data that does not exist as a bit string (so-called “null data”, which is indicated by a symbol in the figure), and is composed of 0 bits.
- In this case, the area data indicates the entire display screen.
- In an apparatus in which the present invention is practiced, multiple sound sources are provided around the display screen. Therefore, when the sound reproduction area is the entire area of the display screen, the sound is presented so that a sound image having a two-dimensional spread corresponding to the entire display screen is formed by the plurality of sound sources.
- The embodiment shown in FIG. 10 is therefore clearly distinguished from simple monaural playback, in that the sound has a spread corresponding to the sound reproduction area specified by the area data T(c).
- Each of the area data T(a), T(b), and T(c) indicates both a sound reproduction area for generating a sound image and a video reproduction area for reproducing a video.
- For example, in the example of FIG. 8, the video represented by video data V(a) is reproduced in the hatched area represented by area data T(a), and the sound represented by sound data A(a) is reproduced such that a sound image is generated in that same hatched area.
- the area data T can be used as data indicating the audio reproduction area and also as data indicating the video reproduction area.
- the area data indicating the sound reproduction area and the area data indicating the video reproduction area can be separately prepared, and the sound and the image can be presented in separate areas.
- the image of the car is presented on the entire display screen, while the engine sound is presented so that a sound image is generated in the hatched area.
- In this case, the video reproduction area is the entire display screen, while the sound reproduction area is the partial hatched area.
- the region data indicating the video reproduction region may be omitted, and only the region data indicating the sound reproduction region may be prepared.
- FIG. 11 is a plan view showing a state in which two different sounding bodies are presented on the same screen by the method according to the present invention. More specifically, the display screen is divided into 16 parts; a piano is displayed in one of these blocks, and a trumpet is displayed in an area consisting of two other blocks.
- FIG. 12 is a diagram showing the presentation information to be prepared in order to make the presentation shown in FIG. 11. On the left side of FIG. 12, a divided view of the display screen is shown in which the areas where the sounding bodies (in this example, the piano and the trumpet) are located are hatched; in areas T(a) and T(b), the presentation information I(a) for the piano and the presentation information I(b) for the trumpet are presented, respectively.
- The presentation information I(a) is composed of video data V(a) consisting of a video of the piano, acoustic data A(a) consisting of the piano performance sound, and area data T(a) indicating the area for presenting these.
- The presentation information I(b) is composed of video data V(b) consisting of a video of the trumpet, audio data A(b) consisting of the trumpet performance sound, and area data T(b) indicating the area for presenting these.
- The area data T(a) is composed of the bit string "0110", and indicates the video reproduction area and the audio reproduction area related to the presentation information I(a).
- The area data T(b) is composed of the bit strings "1011" and "1110", and indicates the video reproduction area and the sound reproduction area related to the presentation information I(b).
- Fig. 11 shows an example in which two sounding bodies are presented, but three or more sounding bodies can be presented in a similar manner.
- In this example, the presentation area for each sounding body (the video reproduction area and the sound reproduction area) is expressed as a set of blocks obtained by dividing the display screen into 16 parts; by increasing the number of divisions, each presentation area could be defined as a set of blocks obtained by dividing the display screen into 100 or more parts.
- However, since the sound localization function of human hearing cannot recognize such small target areas, in practice it is sufficient to define each presentation area as a set of blocks obtained by a fairly coarse division, as shown in Fig. 11.
- Another feature of the present invention is that sounding bodies having a hierarchical structure can be defined, and the information of this hierarchical structure can be presented to the viewer as it is.
- this feature will be described with reference to specific examples.
- FIG. 13 is a plan view showing an example of a state in which two sets of sounding bodies having a hierarchical structure are presented on the same screen by the method according to the present invention.
- In this example, the display screen is divided into 16 parts, and the lower left part shows a drum and the image of the room containing the drum (the dividing lines may be displayed as necessary or omitted).
- Fig. 14 is a diagram showing the presentation information to be prepared in order to make such a presentation. On the left, a divided view of the display screen is shown in which the areas where the drum and the entire room containing the drum are located are hatched; the presentation information I(a) for the drum and the presentation information I(b) for the whole room are presented in areas T(a) and T(b), respectively.
- "pronunciation body” is a broad concept that includes not only objects that generate sound by themselves, such as musical instruments, but also objects that reflect sound, such as floors, walls, ceilings, and furniture in rooms. Means.
- The presentation information I(a) is composed of sound data A(a) consisting of the drum performance sound and area data T(a) indicating the area for presenting that sound.
- The presentation information I(b) is composed of video data V(b) consisting of an image of the room (including the drum) in which the drum is placed, audio data A(b) consisting of the reverberation of the drum throughout the room,
- and area data T(b) indicating the area for presenting these.
- the area data T(a) is composed of the bit string "1011", and indicates the sound reproduction area T(a) related to the presentation information I(a).
- the area data T (b) is composed of a bit string of "10", and indicates a video area and a sound reproduction area T (b) related to the presentation information I (b).
- Note that the presentation information I(a) does not include video data V(a) showing the image of the drum itself.
- This is because the part of the video data V(b) included in the presentation information I(b) that shows the room can also serve as the video of the drum itself.
- video data V (a) indicating the video of the drum itself may be separately prepared in the presentation information I (a).
- Comparing the areas T(a) and T(b), the latter includes the former. Therefore, if the area T(b) is defined as an upper-level area and the area T(a) as a lower-level area, an area having a hierarchical structure is defined: the drum exists as a lower sounding body in the lower-level area T(a), and the entire room exists as an upper sounding body in the upper-level area T(b).
- the upper sounding body is a sounding body including the lower sounding body
- the sound data A(a) contains only the pure performance sound of the drum, the lower sounding body,
- whereas the sound data A(b) includes not only the direct sound from the drum but also the indirect reverberation of the drum reflected from the floor, walls, ceiling, and so on of the room.
- To record the sound data A(a) belonging to the lower hierarchy, a directional microphone capable of collecting mainly the sound generated by the drum, the lower sounding body, is installed near the drum.
- To record the sound data A(b) belonging to the upper hierarchy, on the other hand, microphones capable of collecting the sound generated by the entire room, the upper sounding body, are placed at positions suitable for collecting that overall sound (for example, at the four corners of the room).
- In this example, only one lower-level area is defined within one upper-level area.
- However, a plurality of lower-level areas may be defined within one upper-level area,
- so that the upper sounding body includes a plurality of lower sounding bodies.
- Also, although only two hierarchical levels, upper and lower, are defined here, more than two levels may be defined.
- In this way, an area having a hierarchical structure is defined on the display screen, a lower sounding body is displayed in a lower-level area, and an upper sounding body including the lower sounding body is displayed in an upper-level area.
- the reproduction mode of the video data V(b) may be changed according to the viewer's specification. For example, when only the sound based on the sound data A(a) is reproduced, only the part of the video data V(b) in which the drum is displayed may be shown, and when the sound based on the sound data A(b) is reproduced, all of the video data V(b) may be displayed.
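To make the hierarchy of Figs. 13 and 14 concrete, the sketch below models each sounding body as a node carrying its own acoustic data and area, with an upper body listing its lower bodies; the class and field names are assumptions for illustration, and selective reproduction is shown as a simple lookup.

```python
# Hedged sketch of a hierarchical sounding-body definition (drum inside a room).
# All names, payloads, and block addresses are illustrative, not from the patent.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SoundingBody:
    name: str
    audio: bytes                      # acoustic data A for this body
    area: str                         # area data T (block-address bit string)
    video: Optional[bytes] = None     # video data V; a lower body may reuse the upper body's video
    children: List["SoundingBody"] = field(default_factory=list)   # lower sounding bodies

drum = SoundingBody("drum", audio=b"<drum performance A(a)>", area="1011")
room = SoundingBody("room", audio=b"<room reverberation A(b)>", area="10",
                    video=b"<room video V(b)>", children=[drum])

def select(body: SoundingBody, name: str) -> Optional[SoundingBody]:
    """Find a (possibly lower-level) sounding body by name for selective reproduction."""
    if body.name == name:
        return body
    for child in body.children:
        found = select(child, name)
        if found:
            return found
    return None

# Selective reproduction of the drum alone: play drum.audio localized in its area "1011".
assert select(room, "drum") is drum
```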
- FIG. 15 is a plan view showing another example in which two sets of sounding bodies having a hierarchical structure are presented on the same screen.
- the display screen is divided into four parts and the scenery of the city is drawn (partition lines may or may not be displayed as needed).
- Two sets of sounding bodies having a hierarchical structure are defined.
- The lower-level sounding body is the church drawn at the lower left, and this church functions as the main sounding body.
- The upper-level sounding body is the environment of the whole city, including the bell of this church.
- FIG. 16 is a diagram showing the presentation information to be prepared for making such a presentation. On the left side of Fig. 16, a divided view of the display screen is shown;
- in areas T(a) and T(b), the presentation information I(a) for the church and the presentation information I(b) for the entire city are presented, respectively.
- The presentation information I(a) is composed of acoustic data A(a) consisting of the sound of the church bell and area data T(a) indicating the area for presenting the sound related to the church.
- The presentation information I(b) is composed of video data V(b) consisting of an image of the entire city including the church, acoustic data A(b) consisting of the environmental sound of the entire city including the sound of the church bell, and area data T(b) indicating the area for presenting these.
- the area data T (a) is composed of a bit string “10” and indicates a sound reproduction area related to the presentation information I (a).
- the area data T (b) is composed of data without bits, and indicates that the video reproduction area and the sound reproduction area related to the presentation information I (b) are the entire display screen.
- the upper sounding body is a sounding body including the lower sounding body
- the sound data A(a) is data recording only the sound of the church bell, the lower sounding body,
- whereas the sound data A(b) includes both the sound of the church bell and the various sounds of the city.
- To record acoustic data A(a) and A(b) having such a hierarchical structure, the following may be done.
- sound data A (a) belonging to the lower hierarchy is recorded by installing a microphone near the church that has the best directivity for collecting the sound of the church bell.
- For the upper hierarchy, microphones with predetermined directivity are attached to the left and right of the camera, and sound is recorded simultaneously while the image of the entire city is captured, so that the sound data A(b) belonging to the upper hierarchy is obtained.
- the reproduction mode of the video data V (b) may be changed according to the specification of the viewer. For example, when only the sound based on the audio data A (a) is being reproduced, the image portion of the church in the image data V (b) may be enlarged and displayed.
- FIG. 17 is a plan view showing an example of a state in which six sets of sounding bodies having a hierarchical structure are presented on the same screen by the method according to the present invention.
- In this example, the display screen is divided into 16 parts (the dividing lines may or may not be displayed as needed), and four areas T(a), T(b), T(c), and T(d) show four performers as lower-level sounding bodies, respectively.
- an upper layer area T (e) that includes the areas T (a) and T (b) as lower areas, and an upper layer area T that includes the areas T (c) and T (d) as lower areas (f) is defined as shown by the broken line in the figure.
- each of the four performers constitutes a lower sounding body.
- the two performers displayed in the regions T (a) and T (b) constitute one overall higher sounding body, and are displayed in the regions T (c) and T (d). The two performers also form one higher sounding body as a whole.
- FIG. 18 is a diagram showing presentation information to be prepared for making such a presentation.
- the presentation information I (b) to I (d) are also composed of information to be presented in the areas T (b) to T (d) and data indicating the area, respectively.
- the presentation information I (e) and I (f) do not include the video data, but this is because the video data for the lower-layer presentation information can be used.
- The acoustic data A(a) to A(d) of the lower sounding bodies can be prepared by recording with directional microphones capable of collecting only the direct sound of each instrument, installed near each instrument or attached to each performer's clothes.
- the upper sounding body is defined as a sounding body that generates the sound of the musical instrument played by the two players and the reverberation sound from the surrounding floor and walls.
- The sound data A(e) can be prepared by recording with a directional microphone capable of collecting the sound including its reverberation, installed at a slight distance in front of the two performers.
- If presentation information as shown in FIG. 18 is prepared, the quartet can be presented to the viewer in a preferable manner.
- For example, when the performance is presented as a whole,
- the sound based on the sound data A(e) and A(f) may be reproduced so that sound images are generated in the upper-level areas T(e) and T(f).
- The presentation information I(a2) is composed of video data V(a2), in which the video size of the original video data V(a) is enlarged fourfold, acoustic data A(a2), in which the volume of the original audio data A(a) is sensibly increased fourfold, and area data T(a2), in which the original presentation area T(a) is enlarged fourfold.
- FIG. 22 is a plan view showing the presentation mode when, in the state shown in FIG. 17, an instruction is given to enlarge the image of the area T(a) together with the adjacent area fourfold.
- In the area T(a2), the image of the first violin is enlarged fourfold,
- and the image of the second violin is likewise enlarged fourfold.
- To achieve this, the original presentation information I(a) is changed as shown in FIG. 21, and the original presentation information I(b) is also modified,
- so that the presentation information I(b2) shown in FIG. 23 is obtained.
- The changed presentation information I(b2) is composed of the video data V(b2), in which the video size of the original video data V(b) is enlarged fourfold,
- the acoustic data A(b2), in which the volume of the original audio data A(b) is sensibly increased fourfold, and the area data T(b2) ("11"), in which the original presentation area T(b) is enlarged fourfold.
- the sound data to be reproduced are only sound data A (a 2) and A (b 2), and these sound data correspond to the regions T (a 2) and T (b 2)
- the sound is reproduced in such a way that sound images are formed in these areas, with the volume sensibly quadrupled compared to the original volume. Therefore, when switching from the display mode shown in FIG. 17 to the display mode shown in FIG. 22, the sound is switched together with the video, so that sound in harmony with the video can always be presented. That is, in the presentation mode shown in FIG. 22, the performance sound of the first violin is heard from the lower left position of the display screen, and the performance sound of the second violin is heard from the lower right position of the display screen. Since the volume also depends on the size of the video, the video and sound are presented in a natural state without any discomfort.
- The above is an example in which an instruction to enlarge the presentation information was given;
- the processing when a reduction instruction is given can be performed in the same manner. In that case, the image is displayed reduced, the volume is reduced accordingly, and the presentation area of the image and the sound is changed as well.
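The enlargement and reduction behavior just described can be summarized in a small helper: scaling a presentation linearly by a factor scales its area by the square of that factor, and the text states that the perceived volume is raised or lowered correspondingly. The linear area-to-volume mapping below is an assumption; the patent does not specify a gain formula.

```python
# Hedged sketch of the rescaling of Figs. 21-23: when a presentation is enlarged,
# its area and its (sensible) volume are both scaled by the area ratio. The gain
# model is an assumption made for illustration.
def rescale_presentation(area_blocks: float, volume: float, zoom: float):
    """zoom is the linear magnification (2.0 means each side doubles, area x4)."""
    area_ratio = zoom * zoom
    return area_blocks * area_ratio, volume * area_ratio

# One block of the 16-division shown at 2x linear zoom: four blocks, sensibly fourfold volume.
new_area, new_volume = rescale_presentation(area_blocks=1.0, volume=1.0, zoom=2.0)
assert (new_area, new_volume) == (4.0, 4.0)
```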
- Normally, only the sound of the sounding bodies displayed on the screen is presented, but in some cases the viewer may want to hear the sound of a sounding body that is not displayed.
- For example, in the display shown in Fig. 20, it is convenient to hear mainly the sound of the first violin while also hearing the second violin, the third violin, and the piano at a certain volume, since the overall atmosphere of the piece can then be grasped. To meet such demands, a function is provided that allows the playback volume of each sounding body (whether or not it is currently displayed) to be set to an arbitrary volume value based on the operator's instructions.
- When a sound for which a volume value has been set is reproduced, the set volume value is used.
- For example, when the display shown in FIG. 20 is performed, normally only the sound of the first violin would be presented, at a volume corresponding to the area T(a2);
- if volume values have been set for the other instruments, however, those instrument sounds are also presented at the set volume values.
- In such a case, the sound may be presented so that a sound image is formed over the entire screen area.
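A hedged sketch of the volume-override function described above: each sounding body, displayed or not, may carry an operator-set volume value that replaces the display-derived volume at playback. All names and values are illustrative assumptions.

```python
# Hedged sketch of per-sounding-body volume overrides set by the operator.
from typing import Dict, Optional

volume_overrides: Dict[str, float] = {}        # sounding-body name -> operator-set volume

def playback_volume(name: str, display_volume: Optional[float]) -> float:
    """display_volume is None for a body not currently displayed (normally silent)."""
    if name in volume_overrides:
        return volume_overrides[name]          # the operator-set value takes precedence
    return display_volume if display_volume is not None else 0.0

volume_overrides["second violin"] = 0.3        # hear it faintly even while only the 1st is shown
assert playback_volume("second violin", None) == 0.3
assert playback_volume("first violin", 1.0) == 1.0
```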
- Fig. 24 is a plan view showing a dinosaur image for learning (either a moving image or a still image) presented with sound.
- some dividing lines are drawn, and it is not necessary to display these dividing lines on the actual display screen.
- Areas T(a) to T(e) are defined, and in each area video and sound are presented based on the presentation information I(a) to I(e) shown on the right of Fig. 25.
- the region T (a) is a region of the upper hierarchy corresponding to the entire display screen, and includes the regions T (b) to T (e) of the lower hierarchy.
- Areas T(b) to T(e) in the lower hierarchy are areas indicating specific parts of the dinosaur: the area T(b) corresponds to the dinosaur's head, the area T(c) to the dinosaur's chest, the area T(d) to the dinosaur's legs, and the area T(e) to the dinosaur's tail, and each of these functions as an independent lower sounding body.
- The presentation information I(b) to I(e) are each composed of sound data A(b) to A(e) for presenting the sound generated by the corresponding sounding body and area data T(b) to T(e) indicating the corresponding area.
- sound data A (b) is data of a dinosaur roar
- sound data A (c) is data of a dinosaur heart sound
- sound data A (d) is data of a dinosaur footstep
- and sound data A(e) is data of the swishing sound of the dinosaur's tail.
- The presentation information I(a) is composed of video data V(a) consisting of the dinosaur and the background image, sound data A(a) indicating the sound generated by the upper sounding body that includes all of the lower sounding bodies (specifically, all the sounds generated by the dinosaur together with the background sounds made by the surrounding trees), and area data T(a) indicating the area corresponding to the entire display screen. Since dinosaurs no longer exist, it is not possible to prepare the sound data by actually recording sounds generated by real dinosaurs; therefore, each piece of sound data is prepared by synthesis using a synthesizer or the like.
- If presentation information as shown in FIG. 25 is prepared, it is possible to present video and sound information about the dinosaur in various presentation modes according to the needs of the viewer.
- Normally, the sound data of the highest-level sounding body whose whole is displayed may be reproduced over the entire area. If necessary, only the sound of a specific sounding body designated by the viewer is selectively reproduced. For example, if the viewer clicks near the head of the dinosaur using a pointing device such as a mouse, only the sound based on the sound data A(b) needs to be reproduced, so that a sound image is generated in the area T(b); the viewer is then presented with only the dinosaur's roar. Also, as in the above-described example, it is possible to provide a function of enlarging or reducing a specific video portion and to change the volume based on the scaling factor when presenting it.
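The click-driven selection just described amounts to a hit test of the pointer position against the lower-level areas. The sketch below uses normalized rectangles as a stand-in for the block-address areas of Fig. 24; the coordinates and area names are invented for illustration.

```python
# Hedged sketch of selecting a lower sounding body by pointing: the click position is
# tested against each lower-level area, and only the matching body's sound is reproduced
# in that area. Rectangles in normalized screen coordinates are illustrative assumptions.
from typing import Dict, Optional, Tuple

Rect = Tuple[float, float, float, float]       # (x0, y0, x1, y1), 0..1 screen coordinates

dinosaur_areas: Dict[str, Rect] = {
    "head T(b)":  (0.55, 0.0, 1.0, 0.35),
    "chest T(c)": (0.35, 0.3, 0.7, 0.6),
    "legs T(d)":  (0.3, 0.6, 0.8, 1.0),
    "tail T(e)":  (0.0, 0.4, 0.3, 0.8),
}

def pick_sounding_body(x: float, y: float) -> Optional[str]:
    for name, (x0, y0, x1, y1) in dinosaur_areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name                         # reproduce only this body's sound in its area
    return None                                 # elsewhere: fall back to the upper body A(a)

assert pick_sounding_body(0.8, 0.1) == "head T(b)"    # a click near the head -> roar only
```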
- Fig. 26 shows a state where the tail of the dinosaur is magnified 4 times with the area T (e) as the center.
- In this case, the presentation information I(e) shown in FIG. 25 is modified as shown in FIG. 27. That is, the acoustic data A(e) indicating the swishing sound of the dinosaur's tail is modified into acoustic data A(e2) whose volume is sensibly increased fourfold, and the area data T(e) indicating the sound reproduction area is corrected into area data T(e2) of four times the size.
- As a result, the swishing sound of the tail, at four times the volume, is presented in such a manner that a sound image is generated in an area four times as large.
- In the examples described so far, the position of each sounding body has been defined as an area on the display screen. When the dinosaur is presented as a moving image, however, as shown in FIG. 28,
- it is necessary to define the position of each sounding body not as an area on the display screen but as an area on the video data. That is, the dinosaur head area T(b), the dinosaur chest area T(c), the dinosaur leg area T(d), the dinosaur tail area T(e), and the background area T(g) are defined on the video data.
- Each area is defined in association with the video data, and presentation information I(b) to I(g) as shown in Fig. 29 should be prepared.
- a part of the dinosaur's image may be hidden by a rock as shown in Fig. 30.
- In this case, the rock displayed in the area T(h) obscures 100% of the dinosaur tail area T(e) shown in Fig. 28, and also hides part of the dinosaur leg area T(d), part of the dinosaur outline area T(f), and part of the background area T(g).
- In such a case, the sound is reproduced with its volume reduced by an amount corresponding to the area of the concealed portion of the video.
- Since the dinosaur tail area T(e) is 100% obscured, the sound based on the sound data A(e) has its volume reduced by 100% during playback, that is, it becomes completely inaudible.
- Similarly, the dinosaur leg area T(d), the dinosaur outline area T(f), and the background area T(g) are reduced to x%, y%, and z% of their original sizes, respectively, giving new areas such as T(d2),
- and the presentation information shown in FIG. 29 is modified as shown in FIG. 31.
- In other words, the area indicated by the area data is reduced by the concealed amount, and at the same time the volume indicated by the acoustic data is reduced by the same proportion.
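The occlusion rule just described (reduce the presented area and the playback volume by the concealed fraction) can be captured in one small function; treating the reduction as strictly proportional is an assumption consistent with the text.

```python
# Hedged sketch of the occlusion handling of Figs. 30-31: when a fraction of a sounding
# body's area is hidden, both its presented area and its playback volume shrink by that fraction.
def apply_concealment(area_size: float, volume: float, hidden_fraction: float):
    """hidden_fraction: 0.0 = fully visible, 1.0 = fully hidden (e.g. the tail behind the rock)."""
    visible = 1.0 - hidden_fraction
    return area_size * visible, volume * visible

assert apply_concealment(area_size=2.0, volume=1.0, hidden_fraction=1.0) == (0.0, 0.0)   # tail T(e)
assert apply_concealment(area_size=4.0, volume=1.0, hidden_fraction=0.25) == (3.0, 0.75)
```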
- As described above, in implementing the present invention, it is necessary to reproduce the sound data A so that a sound image is formed in the predetermined area indicated by the area data T.
- a specific method for forming a sound image in a predetermined area on the display screen will be described.
- FIG. 32 is a front view showing a positional relationship between a sound source and a display screen in the device for presenting sound and video according to the present invention.
- In this example, a display device having a rectangular display screen 110 is used, and four sound sources 210 to 240 (speakers) are arranged at arrangement points P1 to P4 located at approximately the four corners of the display screen 110. By presenting an acoustic signal using these four sound sources arranged at the four corners of the display screen 110, a sound image can be formed at an arbitrary position P on the display screen 110.
- the position P of the sound image can be set freely by controlling the volume of each sound source.
- If a sound based on the same audio signal is reproduced from all four sound sources at the same volume, the sound image is formed at the central position of the display screen 110.
- If, from this state, the volume of the left sound sources 210 and 230 is increased, the sound image moves to the left; conversely, if the volume of the right sound sources 220 and 240 is increased, the sound image moves to the right. Likewise, if the volume of the upper sound sources 210 and 220 is increased, the sound image moves upward, and if the volume of the lower sound sources 230 and 240 is increased, the sound image moves downward. Therefore, to form a sound image at an arbitrary position P, the distances between the position P and the arrangement points P1 to P4 are obtained, and it suffices to control the volume of each of the sound sources 210 to 240 according to these distances.
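- The following sketch illustrates one possible form of this volume control for a point sound image; the inverse-distance weighting is an assumption, since the patent only states that the volume is controlled according to the distances.

```python
import math

def corner_gains(p, screen_w, screen_h, eps=1e-6):
    """Normalised gains for speakers 210 (top-left), 220 (top-right),
    230 (bottom-left) and 240 (bottom-right) so that a sound image is
    perceived near the point p = (x, y) on the screen."""
    corners = [(0.0, 0.0), (screen_w, 0.0), (0.0, screen_h), (screen_w, screen_h)]
    weights = [1.0 / (math.dist(p, c) + eps) for c in corners]
    total = sum(weights)
    return [w / total for w in weights]

# Equal gains at the screen centre; the left speakers dominate as P moves left.
print(corner_gains((320, 240), 640, 480))
print(corner_gains((100, 240), 640, 480))
```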
- In principle, the sound image position can be controlled in the left-right direction using a pair of sound sources arranged on the left and right, and in the up-down direction using a pair of sound sources arranged above and below; therefore, even if only two sound sources are used, the effect of the present invention can be obtained to some extent. However, in order to perform more effective sound image position control, it is preferable to provide a sound source at each of the four corners of the display screen 110.
- In the example described here, a rectangular display screen 110 is used, and these four sound sources are theoretically arranged at its four corners. However, since the localization ability of human hearing is not very precise, in practice it is not necessary to arrange each sound source exactly at the four corners of the display screen 110.
- By the method described above, a sound image can be formed at an arbitrary position P. However, the sound image formed in this way is a sound image given as a point, whereas the sound image required for carrying out the present invention is a sound image as a surface, distributed over a predetermined area. Therefore, a method of using the four sound sources 210 to 240 to form a sound image as a surface in an arbitrary rectangular area T(X) on the display screen 110, as shown in FIG. 33, is described below.
- The presentation information I(X) is composed of video data V(X), area data T(X), and four-channel sound data A1(X) to A4(X).
- The area data T(X) is data defining the area T(X) shown in FIG. 33 and functions as both the video reproduction area and the sound reproduction area. Accordingly, the video data V(X) is reproduced in this area T(X), and the sound based on the four-channel sound data A1(X) to A4(X) is reproduced in such a manner that a sound image is formed in this area T(X).
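- One way such presentation information could be held in memory is sketched below; the field names are illustrative and not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class PresentationInfo:
    video: bytes                                  # V(X): video data for this sounding body
    area: tuple                                   # T(X): (x, y, w, h) on the display screen
    channels: list = field(default_factory=list)  # A1(X)..A4(X): one sample buffer per channel

# Example: a rectangular area T(X) with four channels of (empty) sound data.
info = PresentationInfo(video=b"", area=(100, 80, 200, 150), channels=[b""] * 4)
```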
- The presentation of sound based on such presentation information I(X) is performed by the following method. First, representative points are set at the four vertices of the area T(X), which is defined as a rectangular area.
- Then, the sound data A1(X) is associated with the representative point P11, the sound data A2(X) with the representative point P12, the sound data A3(X) with the representative point P13, and the sound data A4(X) with the representative point P14.
- Generally, four channels of sound data are obtained by recording with four microphones arranged in front of, behind, to the left of, and to the right of a given sounding body. Therefore, when associating each representative point with its sound data, it is preferable that the positions of the microphones at the time of recording be consistent with the positions of the respective representative points.
- Next, the distances between the arrangement points P1 to P4 of the sound sources and the representative points P11 to P14 are calculated, and the volume is controlled according to these distances so that the four-channel sound data A1(X) to A4(X) are reproduced with a sound image of each channel formed at the position of the corresponding representative point P11 to P14.
- That is, a sound signal based on the sound data A1(X) is supplied to each of the sound sources 210 to 240, and the volume of each sound source is controlled appropriately so that a sound image of the first-channel sound data A1(X) is obtained at the position of the representative point P11, as described with reference to FIG. 32. Similarly, a sound signal based on the sound data A2(X) is supplied to each of the sound sources 210 to 240 with the volume controlled so that a sound image of the second-channel sound data A2(X) is obtained at the position of the representative point P12; a sound signal based on the sound data A3(X) is supplied with the volume controlled so that a sound image of the third-channel sound data A3(X) is obtained at the position of the representative point P13; and a sound signal based on the sound data A4(X) is supplied with the volume controlled so that a sound image of the fourth-channel sound data A4(X) is obtained at the position of the representative point P14.
- In practice, the sound signals based on the four-channel sound data A1(X) to A4(X) are synthesized and supplied to each of the sound sources 210 to 240. If signal synthesis is performed after controlling the volume of each channel in this way, a sound image of the sound based on the sound data A1(X) is obtained at the representative point P11, a sound image of the sound based on the sound data A2(X) at the representative point P12, a sound image of the sound based on the sound data A3(X) at the representative point P13, and a sound image of the sound based on the sound data A4(X) at the representative point P14.
- In this way, the sound image of each channel's sound is formed at the corresponding one of the four representative points.
- As a result, the viewer perceives a sound image having a planar spread over the rectangular area T(X) whose four vertices are these four representative points P11 to P14.
- FIG. 35 is a diagram showing a method of calculating a reproduced sound signal to be given to each of the sound sources (speakers) 210 to 240 based on the presentation information I (X) shown in FIG.
- Here, f(Pm, Pn) is a coefficient function determined according to the distance between the two points Pm and Pn, taking a larger value as the distance between the two points becomes smaller.
- AAk (x) indicates the amplitude of the sound signal of the k-th channel.
- For example, f(P1, P11) is the coefficient determined according to the distance between the arrangement point P1 and the representative point P11, and AA1(X) indicates the amplitude of the sound signal based on the first-channel sound data A1(X).
- In other words, the playback sound of the sound source 210 is obtained by synthesizing the four-channel sound signals AA1(X) to AA4(X), each multiplied by the coefficient determined according to the distance between the corresponding representative point and the arrangement point.
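- A numerical sketch of this calculation follows; the 1/(1+d) form of the coefficient function f is only an assumed example of a function that grows as the distance shrinks.

```python
import math

def f(pm, pn):
    """Coefficient that increases as the distance between points Pm and Pn decreases."""
    return 1.0 / (1.0 + math.dist(pm, pn))

def speaker_feeds(speaker_points, rep_points, channel_signals):
    """speaker_points: arrangement points P1..P4 of the sound sources 210..240;
    rep_points: representative points P11..P14; channel_signals: four equal-length
    sample lists AA1(X)..AA4(X). Returns one mixed sample list per speaker."""
    feeds = []
    for pm in speaker_points:
        coeffs = [f(pm, pn) for pn in rep_points]
        mixed = [sum(c * ch[i] for c, ch in zip(coeffs, channel_signals))
                 for i in range(len(channel_signals[0]))]
        feeds.append(mixed)
    return feeds
```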
- FIG. 36 is a front view showing a state in which two sets of presentation information are presented at the same time using the device shown in FIG. 32, and FIG. 37 is a diagram showing the presentation information to be prepared for making the presentation shown in FIG. 36.
- Here, the second presentation information I(b) is information about a concert. In the area T(a), a video based on the video data V(a) is presented, and in the area T(b), a video based on the video data V(b) is presented.
- For the sound, the volume of each of the sound sources 210 to 240 may be controlled so that a sound image of the sound based on the sound data A1(a) is formed at the representative points Pa1 and Pa3 shown in FIG. 36, a sound image of the sound based on the sound data A2(a) is formed at the representative points Pa2 and Pa4, a sound image of the sound based on the sound data A1(b) is formed at the representative point Pb1, a sound image of the sound based on the sound data A2(b) at the representative point Pb2, a sound image of the sound based on the sound data A3(b) at the representative point Pb3, and a sound image of the sound based on the sound data A4(b) at the representative point Pb4.
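- When several pieces of presentation information are presented at once, one natural realisation is to superpose, for each speaker, the feeds computed separately for each piece; the sketch below assumes that approach (the function name is hypothetical).

```python
def combine_feeds(per_item_feeds):
    """per_item_feeds: one entry per presentation information item (e.g. I(a), I(b)),
    each entry being a list of four speaker sample lists of equal length.
    Returns the four summed speaker sample lists."""
    n_speakers = len(per_item_feeds[0])
    n_samples = len(per_item_feeds[0][0])
    combined = [[0.0] * n_samples for _ in range(n_speakers)]
    for item in per_item_feeds:
        for s in range(n_speakers):
            for i in range(n_samples):
                combined[s][i] += item[s][i]
    return combined
```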
- the presentation position and the presentation magnification of each presentation information can be arbitrarily changed based on a viewer's instruction.
- In FIG. 38, the presentation positions of the presentation information I(a) and I(b) presented in FIG. 36 have been changed to the regions T(a2) and T(b2).
- FIG. 39 is a diagram showing presentation information to be prepared for making the presentation shown in FIG. 38.
- In the presentation information I(a) and I(b), the area data have been modified to T(a2) and T(b2), respectively.
- The newly added third presentation information I(c) is information about a baseball game and is composed of video data V(c) showing the baseball video, sound data A(c) showing the baseball sound, and area data T(c) indicating the video reproduction area and the sound reproduction area.
- In this example, the volume at which the sound data of each piece of presentation information is reproduced is a reference volume when the video reproduction area has a reference size, that is, an area corresponding to 1/16 of the display screen 110, and the volume is increased or decreased based on the display magnification when the video is displayed enlarged or reduced. Therefore, in the example shown in Fig. 38, the basketball sound and the concert sound displayed in the reference-size areas T(a2) and T(b2) are played back at the reference volume.
- The baseball sound displayed in the area T(c), which is four times the reference size, is played at four times the reference volume, and the sound of the yacht displayed in the area T(d), which is ten times the reference size, is played at ten times the reference volume (the original yacht image should be displayed in an area twelve times the reference size, but part of it is concealed by the baseball image, so the actual display area is ten times the reference size).
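- A short sketch of this volume rule (how it would be coded is an assumption; the numbers follow the example above):

```python
def playback_volume(visible_area, screen_area, reference_volume=1.0):
    """Reference volume applies when the visible reproduction area equals 1/16 of
    the display screen; the volume scales with the ratio to that reference area."""
    reference_area = screen_area / 16.0
    return reference_volume * (visible_area / reference_area)

screen = 16.0
print(playback_volume(1.0, screen))   # basketball / concert in T(a2), T(b2) -> reference volume
print(playback_volume(4.0, screen))   # baseball in T(c) -> 4x the reference volume
print(playback_volume(10.0, screen))  # yacht in T(d), 10x visible -> 10x the reference volume
```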
- FIG. 40 is a diagram showing a practical configuration example of video data and audio data to be prepared when executing the method for presenting sound and video according to the present invention.
- As the video data V, high-resolution video data corresponding to the maximum magnification is prepared; that is, the video data is prepared at a resolution high enough that a good image can be reproduced even when displayed at the highest magnification. For example, if a video of all the orchestra members is prepared as the video data V, and an enlarged video of only a single violin can be presented at the maximum magnification, it is necessary to prepare high-resolution video data from which the image of that violin can be reproduced well.
- The sound data is prepared in a hierarchical structure: the first-layer sound data A includes the second-layer sound data A1, A2, ...; the second-layer sound data A1 includes the third-layer sound data A11, A12, A13, ...; and the second-layer sound data A2 includes the third-layer sound data A21, A22, ....
- If an image of an orchestra is prepared as the video data, then, for example, the first-layer sound data A is a recording of the performance of the entire orchestra, the second-layer sound data A1 is a recording of the sound of all the members of the first violins, and the third-layer sound data A11 is a recording of the sound of one specific member of the first violins.
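- The hierarchical sound data can be represented as a simple tree; the sketch below is illustrative only (class and label names are not from the patent).

```python
from dataclasses import dataclass, field

@dataclass
class SoundNode:
    label: str                                    # e.g. "entire orchestra", "first violins"
    samples: bytes = b""                          # recorded sound data for this sounding body
    children: list = field(default_factory=list)  # lower-level sounding bodies

orchestra = SoundNode("A: entire orchestra", children=[
    SoundNode("A1: first violins", children=[
        SoundNode("A11: one specific first-violin player"),
        SoundNode("A12: another first-violin player"),
    ]),
    SoundNode("A2: another second-layer sounding body", children=[
        SoundNode("A21: a lower-level sounding body of A2"),
    ]),
])
```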
- Fig. 41 shows an example of an area definition having a hierarchical structure.
- The first-hierarchy area T includes the second-hierarchy areas T1, T2, ... indicated by dashed lines, and each second-hierarchy area in turn includes the third-hierarchy areas T11, T12, T13, ... indicated by finer dashed lines.
- In the above example, the first-hierarchy area T corresponds to the image of the entire orchestra, the second-hierarchy area T1 corresponds to the image of all the members of the first violins, and the third-hierarchy area T11 corresponds to the image of one specific member of the first violins.
- The presentation information prepared in such a configuration can be used as a kind of database. For example, if the viewer wants to learn about the entire orchestra, an instruction may be given to display the video of the entire orchestra corresponding to the first-hierarchy area T and to reproduce the sound data of the entire orchestra. If the viewer wants to learn about the first violins, an instruction may be given to display only the image of the first violins corresponding to the second-hierarchy area T1 and to reproduce the sound data of the first violins. In this case, it is preferable that the image in the area T1 be displayed enlarged over the entire display screen.
- In a device in which the video can be arbitrarily enlarged or reduced in this way, it is convenient to selectively reproduce, at any given time, the sound related to the highest-level sounding body that is currently displayed in its entirety. For example, if the viewer gives an instruction to display the video of the entire orchestra on the whole display screen, only the sound of the orchestra, which is then the highest-level sounding body displayed in its entirety, is selectively played back. If the viewer gives an instruction to display the image of only the first violins on the whole display screen, only the sound of the first violins, which is then the highest-level sounding body displayed in its entirety, is selectively played back. In other words, when the viewer performs a selection operation on the video data, the corresponding sound data is selected automatically. Such a function is important for improving operability when the device according to the present invention is used as a database browsing device.
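- A sketch of this automatic selection (the containment test and traversal are assumptions about one way to realise the described behaviour):

```python
from dataclasses import dataclass, field

@dataclass
class AreaNode:
    label: str
    rect: tuple                                   # area on the video data: (x, y, w, h)
    children: list = field(default_factory=list)  # lower-hierarchy areas

def fully_visible(rect, view):
    """True if the whole of `rect` lies inside the displayed `view` rectangle."""
    x, y, w, h = rect
    vx, vy, vw, vh = view
    return vx <= x and vy <= y and x + w <= vx + vw and y + h <= vy + vh

def topmost_visible(node, view):
    """Return the highest-level node whose entire area is currently displayed;
    its sound data is the one to reproduce."""
    if fully_visible(node.rect, view):
        return node
    for child in node.children:
        found = topmost_visible(child, view)
        if found is not None:
            return found
    return None
```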
- If the presentation information having the hierarchical structure described above is prepared on a computer installed in an art museum, a museum, or the like, and the necessary data is transmitted as needed, it can be used as a database.
- When the viewer requests information about the entire orchestra, only the data necessary for presenting the first-hierarchy information need be transmitted; when more detailed information about a lower hierarchy is requested, the data necessary for presenting the information of that hierarchy can then be transmitted.
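- A rough sketch of such on-demand delivery (the dictionary layout and function name are assumptions, not part of the patent):

```python
def request_layer(database, key):
    """database: mapping from hierarchy keys such as "T", "T1", "T11" to entries
    holding the video, area and sound data of that level.
    Returns only the data needed to present the requested level."""
    entry = database[key]
    return {"video": entry["video"], "area": entry["area"], "sound": entry["sound"]}

# First request presents the whole orchestra; a later request fetches more detail.
db = {
    "T":  {"video": b"", "area": (0, 0, 100, 100), "sound": b""},
    "T1": {"video": b"", "area": (10, 10, 30, 30), "sound": b""},
}
whole_orchestra = request_layer(db, "T")
first_violins = request_layer(db, "T1")
```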
- It is also possible, for example, to construct a database composed of acoustic data obtained by recording heart sounds in a spatially hierarchical structure (for example, a sound recorded immediately adjacent to a specific valve and the recorded sound of the whole heart form a hierarchy in which the former is a lower-level sounding body and the latter is an upper-level sounding body).
- FIG. 42 is a block diagram showing a basic configuration of a device for presenting sound and video according to the present invention.
- As shown, this device is composed of a display device 100, an acoustic device 200, a video reproduction device 300, a sound reproduction device 400, a presentation information storage device 500, a presentation mode changing device 600, an instruction input device 700, and an information reading device 800.
- The display device 100 is a device having a display screen 110 for presenting video and is implemented, for example, by a large display device in which a large number of light-emitting diodes are arranged in a matrix.
- The acoustic device 200 is a speaker system comprising a plurality of sound sources arranged around the display screen 110 so that sound can be presented with a sound image formed in an arbitrary area of the display screen 110.
- The presentation information storage device 500 is a device that stores presentation information I comprising video data V indicating the video to be presented, sound data A indicating the sound to be presented, and area data T indicating the video reproduction area in which the video data V is to be reproduced and the sound reproduction area in which the sound data A is to be reproduced; in practice it is configured by computer memory or an external storage device.
- The video reproduction device 300 has the function of reproducing the video based on the video data V in the video reproduction area on the display screen 110, and the sound reproduction device 400 has the function of reproducing the sound based on the sound data A using the plurality of sound sources 210 to 240 of the acoustic device 200 so that a sound image is formed in the sound reproduction area.
- The instruction input device 700 is a device for inputting instructions from the operator (viewer), and the presentation mode changing device 600 has the function of modifying the presentation information I in the presentation information storage device 500 based on the input instruction, thereby changing the presentation mode of the sound and video. An instruction to select the sound data to be presented, an instruction to enlarge the video, and the like are input from the instruction input device 700, and the processing for changing the presentation mode is executed by the presentation mode changing device 600.
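- The sketch below shows, purely as an illustration, how the components of FIG. 42 could be wired together in software; the class and method names are hypothetical.

```python
class PresentationSystem:
    def __init__(self, storage, video_player, sound_player, mode_changer):
        self.storage = storage            # presentation information storage device 500
        self.video_player = video_player  # video reproduction device 300 (drives display device 100)
        self.sound_player = sound_player  # sound reproduction device 400 (drives acoustic device 200)
        self.mode_changer = mode_changer  # presentation mode changing device 600

    def handle_instruction(self, instruction):
        """Viewer instructions from the instruction input device 700 (select sound,
        enlarge video, ...) cause the mode changer to rewrite the stored
        presentation information I before the next reproduction cycle."""
        self.mode_changer.apply(instruction, self.storage)

    def present(self):
        """Reproduce every stored piece of presentation information: the video in
        its video reproduction area, the sound with its image in the sound area."""
        for info in self.storage.all():
            self.video_player.play(info.video, info.area)
            self.sound_player.play(info.sound, info.area)
```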
- The information reading device 800 is a device that reads the presentation information I recorded on an information recording medium 900 such as a CD-ROM or a DVD and stores it in the presentation information storage device 500; in practice, various pieces of presentation information are provided by being recorded on the information recording medium 900.
Industrial Applicability
- The apparatus and method for presenting sound and video according to the present invention can be widely used in technical fields that require video to be presented together with sound, and can be applied, for example, to the provision of multimedia content using a computer.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- Acoustics & Sound (AREA)
- Marketing (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Controls And Circuits For Display Device (AREA)
- Stereophonic System (AREA)
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP98944238A EP1035732A1 (en) | 1998-09-24 | 1998-09-24 | Apparatus and method for presenting sound and image |
PCT/JP1998/004301 WO2000018112A1 (en) | 1998-09-24 | 1998-09-24 | Apparatus and method for presenting sound and image |
AU91853/98A AU756265B2 (en) | 1998-09-24 | 1998-09-24 | Apparatus and method for presenting sound and image |
CA002311817A CA2311817A1 (en) | 1998-09-24 | 1998-09-24 | Apparatus and method for presenting sound and image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP1998/004301 WO2000018112A1 (en) | 1998-09-24 | 1998-09-24 | Apparatus and method for presenting sound and image |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000018112A1 true WO2000018112A1 (en) | 2000-03-30 |
Family
ID=14209070
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1998/004301 WO2000018112A1 (en) | 1998-09-24 | 1998-09-24 | Apparatus and method for presenting sound and image |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1035732A1 (ja) |
AU (1) | AU756265B2 (ja) |
CA (1) | CA2311817A1 (ja) |
WO (1) | WO2000018112A1 (ja) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002150130A (ja) * | 2000-11-14 | 2002-05-24 | Nippon Telegr & Teleph Corp <Ntt> | 電子広告システム |
JP2003264900A (ja) * | 2002-03-07 | 2003-09-19 | Sony Corp | 音響提示システムと音響取得装置と音響再生装置及びその方法並びにコンピュータ読み取り可能な記録媒体と音響提示プログラム |
JP2004343376A (ja) * | 2003-05-15 | 2004-12-02 | Funai Electric Co Ltd | Avシステム |
JP2005012255A (ja) * | 2003-06-16 | 2005-01-13 | Konica Minolta Holdings Inc | 画像表示装置 |
JP2006067295A (ja) * | 2004-08-27 | 2006-03-09 | Sony Corp | 音響生成方法、音響生成装置、音響再生方法及び音響再生装置 |
US7376332B2 (en) | 2003-11-05 | 2008-05-20 | Canon Kabushiki Kaisha | Information processing method and information processing apparatus |
JP2008167032A (ja) * | 2006-12-27 | 2008-07-17 | Canon Inc | 映像音声出力装置及び映像音声出力方法 |
JP2009010992A (ja) * | 2008-09-01 | 2009-01-15 | Sony Corp | 音声信号処理装置、音声信号処理方法、プログラム |
JP2010041190A (ja) * | 2008-08-01 | 2010-02-18 | Yamaha Corp | 音響装置及びプログラム |
JP4913038B2 (ja) * | 2004-04-08 | 2012-04-11 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 音声レベル制御 |
WO2012105183A1 (ja) * | 2011-02-02 | 2012-08-09 | Necカシオモバイルコミュニケーションズ株式会社 | 音声出力装置 |
JP2014072661A (ja) * | 2012-09-28 | 2014-04-21 | Jvc Kenwood Corp | 映像音声記録再生装置 |
JPWO2015194075A1 (ja) * | 2014-06-18 | 2017-06-01 | ソニー株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP2017134713A (ja) * | 2016-01-29 | 2017-08-03 | セイコーエプソン株式会社 | 電子機器、電子機器の制御プログラム |
WO2019093155A1 (ja) * | 2017-11-10 | 2019-05-16 | ソニー株式会社 | 情報処理装置、および情報処理方法、並びにプログラム |
JP2020025310A (ja) * | 2013-03-28 | 2020-02-13 | ドルビー ラボラトリーズ ライセンシング コーポレイション | 見かけのサイズをもつオーディオ・オブジェクトの任意のラウドスピーカー・レイアウトへのレンダリング |
CN113841426A (zh) * | 2019-05-31 | 2021-12-24 | 微软技术许可有限责任公司 | 使用应用位置信息向各种通道发送音频 |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1194006A3 (en) * | 2000-09-26 | 2007-04-25 | Matsushita Electric Industrial Co., Ltd. | Signal processing device and recording medium |
US10158958B2 (en) | 2010-03-23 | 2018-12-18 | Dolby Laboratories Licensing Corporation | Techniques for localized perceptual audio |
JP5919201B2 (ja) * | 2010-03-23 | 2016-05-18 | ドルビー ラボラトリーズ ライセンシング コーポレイション | 音声を定位知覚する技術 |
KR101632238B1 (ko) | 2013-04-05 | 2016-06-21 | 돌비 인터네셔널 에이비 | 인터리브된 파형 코딩을 위한 오디오 인코더 및 디코더 |
WO2015008538A1 (ja) | 2013-07-19 | 2015-01-22 | ソニー株式会社 | 情報処理装置および情報処理方法 |
CN104036789B (zh) | 2014-01-03 | 2018-02-02 | 北京智谷睿拓技术服务有限公司 | 多媒体处理方法及多媒体装置 |
CN105590487A (zh) * | 2014-11-03 | 2016-05-18 | 声活工坊文化事业有限公司 | 有声书制作复合功能系统 |
CN105989845B (zh) | 2015-02-25 | 2020-12-08 | 杜比实验室特许公司 | 视频内容协助的音频对象提取 |
CN115442549B (zh) * | 2021-06-01 | 2024-09-17 | Oppo广东移动通信有限公司 | 电子设备的发声方法及电子设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0638122A (ja) * | 1992-07-15 | 1994-02-10 | Sanyo Electric Co Ltd | 画像分割表示システムの音声処理装置 |
JPH06311448A (ja) * | 1993-04-27 | 1994-11-04 | Sanyo Electric Co Ltd | テレビジョン受像機 |
JPH0830430A (ja) * | 1994-07-19 | 1996-02-02 | Matsushita Electric Ind Co Ltd | 表示装置 |
JPH0851580A (ja) * | 1994-08-08 | 1996-02-20 | Fujitsu General Ltd | 画面分割表示装置の音声回路 |
JPH0898102A (ja) * | 1994-09-22 | 1996-04-12 | Sony Corp | テレビジョン受信機 |
JPH09322094A (ja) * | 1996-05-31 | 1997-12-12 | Toshiba Corp | 複数画面用音声出力回路 |
- 1998
- 1998-09-24 EP EP98944238A patent/EP1035732A1/en not_active Withdrawn
- 1998-09-24 AU AU91853/98A patent/AU756265B2/en not_active Ceased
- 1998-09-24 CA CA002311817A patent/CA2311817A1/en not_active Abandoned
- 1998-09-24 WO PCT/JP1998/004301 patent/WO2000018112A1/ja not_active Application Discontinuation
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0638122A (ja) * | 1992-07-15 | 1994-02-10 | Sanyo Electric Co Ltd | 画像分割表示システムの音声処理装置 |
JPH06311448A (ja) * | 1993-04-27 | 1994-11-04 | Sanyo Electric Co Ltd | テレビジョン受像機 |
JPH0830430A (ja) * | 1994-07-19 | 1996-02-02 | Matsushita Electric Ind Co Ltd | 表示装置 |
JPH0851580A (ja) * | 1994-08-08 | 1996-02-20 | Fujitsu General Ltd | 画面分割表示装置の音声回路 |
JPH0898102A (ja) * | 1994-09-22 | 1996-04-12 | Sony Corp | テレビジョン受信機 |
JPH09322094A (ja) * | 1996-05-31 | 1997-12-12 | Toshiba Corp | 複数画面用音声出力回路 |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002150130A (ja) * | 2000-11-14 | 2002-05-24 | Nippon Telegr & Teleph Corp <Ntt> | 電子広告システム |
JP2003264900A (ja) * | 2002-03-07 | 2003-09-19 | Sony Corp | 音響提示システムと音響取得装置と音響再生装置及びその方法並びにコンピュータ読み取り可能な記録媒体と音響提示プログラム |
JP2004343376A (ja) * | 2003-05-15 | 2004-12-02 | Funai Electric Co Ltd | Avシステム |
JP2005012255A (ja) * | 2003-06-16 | 2005-01-13 | Konica Minolta Holdings Inc | 画像表示装置 |
US7376332B2 (en) | 2003-11-05 | 2008-05-20 | Canon Kabushiki Kaisha | Information processing method and information processing apparatus |
JP4913038B2 (ja) * | 2004-04-08 | 2012-04-11 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 音声レベル制御 |
US8150061B2 (en) | 2004-08-27 | 2012-04-03 | Sony Corporation | Sound generating method, sound generating apparatus, sound reproducing method, and sound reproducing apparatus |
JP2006067295A (ja) * | 2004-08-27 | 2006-03-09 | Sony Corp | 音響生成方法、音響生成装置、音響再生方法及び音響再生装置 |
JP2008167032A (ja) * | 2006-12-27 | 2008-07-17 | Canon Inc | 映像音声出力装置及び映像音声出力方法 |
JP2010041190A (ja) * | 2008-08-01 | 2010-02-18 | Yamaha Corp | 音響装置及びプログラム |
JP2009010992A (ja) * | 2008-09-01 | 2009-01-15 | Sony Corp | 音声信号処理装置、音声信号処理方法、プログラム |
US9215523B2 (en) | 2011-02-02 | 2015-12-15 | Nec Corporation | Audio output device |
JP2012160983A (ja) * | 2011-02-02 | 2012-08-23 | Nec Casio Mobile Communications Ltd | 音声出力装置 |
WO2012105183A1 (ja) * | 2011-02-02 | 2012-08-09 | Necカシオモバイルコミュニケーションズ株式会社 | 音声出力装置 |
JP2014072661A (ja) * | 2012-09-28 | 2014-04-21 | Jvc Kenwood Corp | 映像音声記録再生装置 |
JP2020025310A (ja) * | 2013-03-28 | 2020-02-13 | ドルビー ラボラトリーズ ライセンシング コーポレイション | 見かけのサイズをもつオーディオ・オブジェクトの任意のラウドスピーカー・レイアウトへのレンダリング |
US11019447B2 (en) | 2013-03-28 | 2021-05-25 | Dolby Laboratories Licensing Corporation | Rendering of audio objects with apparent size to arbitrary loudspeaker layouts |
US11564051B2 (en) | 2013-03-28 | 2023-01-24 | Dolby Laboratories Licensing Corporation | Methods and apparatus for rendering audio objects |
US11979733B2 (en) | 2013-03-28 | 2024-05-07 | Dolby Laboratories Licensing Corporation | Methods and apparatus for rendering audio objects |
JPWO2015194075A1 (ja) * | 2014-06-18 | 2017-06-01 | ソニー株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP2017134713A (ja) * | 2016-01-29 | 2017-08-03 | セイコーエプソン株式会社 | 電子機器、電子機器の制御プログラム |
WO2019093155A1 (ja) * | 2017-11-10 | 2019-05-16 | ソニー株式会社 | 情報処理装置、および情報処理方法、並びにプログラム |
CN113841426A (zh) * | 2019-05-31 | 2021-12-24 | 微软技术许可有限责任公司 | 使用应用位置信息向各种通道发送音频 |
Also Published As
Publication number | Publication date |
---|---|
AU756265B2 (en) | 2003-01-09 |
EP1035732A1 (en) | 2000-09-13 |
CA2311817A1 (en) | 2000-03-30 |
AU9185398A (en) | 2000-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2000018112A1 (en) | Apparatus and method for presenting sound and image | |
JP4674505B2 (ja) | 音声信号処理方法、音場再現システム | |
US5812688A (en) | Method and apparatus for using visual images to mix sound | |
JP4735108B2 (ja) | 音声信号処理方法、音場再現システム | |
Zvonar | A history of spatial music | |
CN103733249B (zh) | 信息系统、信息再现装置、信息生成方法及记录介质 | |
JP5168373B2 (ja) | 音声信号処理方法、音場再現システム | |
CN110447071A (zh) | 信息处理装置、信息处理方法和程序 | |
JP7003924B2 (ja) | 情報処理装置と情報処理方法およびプログラム | |
JP4883197B2 (ja) | 音声信号処理方法、音場再現システム | |
KR100677156B1 (ko) | 음원 관리 방법 및 그 장치 | |
JP2019080188A (ja) | オーディオシステム及び車両 | |
Mulder | Making things louder: Amplified music and multimodality | |
JP2016102982A (ja) | カラオケシステム、プログラム、カラオケ音声再生方法及び音声入力処理装置 | |
JP6220576B2 (ja) | 複数人による通信デュエットに特徴を有する通信カラオケシステム | |
JP2017092832A (ja) | 再生方法および再生装置 | |
Sharma et al. | Are Loudspeaker Arrays Musical Instruments | |
Williams | 'You never been on a ride like this befo': Los Angeles, automotive listening, and Dr. Dre's' G-Funk'. | |
JP6920489B1 (ja) | カラオケ装置 | |
JPH1064198A (ja) | ディスク及びディスク再生装置及びディスク記録再生装置 | |
Mikkonen | Lost in Space: Three Case Studies in Music Production Using Immersive Audio | |
JPH10240281A (ja) | カラオケ装置 | |
Pottier et al. | Interpretation and space | |
Austin et al. | Computer Music for Compact Disc: Composition, Production, Audience | |
JP2002223409A (ja) | 記録映画コンテンツ又はフィクション性コンテンツの場面展開システム及び記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AU CA JP US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09554792 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2311817 Country of ref document: CA Ref country code: CA Ref document number: 2311817 Kind code of ref document: A Format of ref document f/p: F |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 91853/98 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1998944238 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1998944238 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 91853/98 Country of ref document: AU |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1998944238 Country of ref document: EP |