WO2018139117A1 - Information processing device, information processing method, and program therefor - Google Patents
- Publication number
- WO2018139117A1 (international application PCT/JP2017/045619)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information processing
- information
- processing apparatus
- image
- visual effect
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/205—3D [Three Dimensional] animation driven by audio data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/687—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
Definitions
- This technology relates mainly to an information processing apparatus using AR (Augmented Reality), and to a corresponding information processing method and program.
- the image processing apparatus described in Patent Literature 1 acquires an input image showing the real world and executes predetermined processing on it. For example, among the objects included in a three-dimensional model of the real world appearing in the input image, the apparatus processes the partial image corresponding to an object specified by the user (for example, by a search) and emphasizes it (see paragraphs [0044] and [0058] of the specification).
- Patent Document 2 notes that there are applications that, when reproducing music captured on a personal computer or the like, display an image matching the music on a monitor (see paragraph [0002] of the specification). The reproduction control device described in Patent Document 2 acquires image data carrying the same time code as the audio data instructed to be reproduced, and outputs the audio data to a speaker. The device then displays the acquired image data with an effect matching the sound pattern of the audio data (see paragraph [0023] of the specification).
- An object of the present disclosure is to provide an information processing apparatus, an information processing method, and a program therefor that can enhance the user's enjoyment.
- an information processing apparatus includes a recognition unit and a processing unit.
- the recognition unit is configured to recognize an object in real space.
- the processing unit is configured to perform visual effect processing on a target object image, which is an image of the object recognized by the recognition unit, according to a music feature amount.
- the user can enjoy watching an image in which an object in the real space is displayed in a manner linked to the music, which enhances the user's enjoyment.
- the processing unit may be configured to execute a visual effect process associated with the type of the object.
- the processing unit may be configured to acquire frequency bands of the music as the feature amount, assign a plurality of target object images corresponding to a plurality of objects to respective frequency bands, and execute the visual effect processing on the plurality of target object images. The user can thereby experience a visual effect for each target object image assigned to each frequency band.
- the processing unit may be configured to acquire position information of the sound sources of the music as the feature amount, assign a plurality of target object images corresponding to a plurality of objects to respective sound source positions, and execute the visual effect processing on the plurality of target object images. The user can thereby experience a visual effect for each target object image assigned to each sound source position.
- the processing unit may be configured to execute a plurality of different visual effect processes on the plurality of target object images, respectively. Thereby, the user can experience different visual effects for each frequency band or for each position of the sound source.
- the processing unit may be configured to acquire information on the tempo of the music as the feature amount and to execute the visual effect processing according to the tempo.
- the processing unit may be configured to acquire key information of the music as the feature amount and to execute the visual effect processing according to the key.
- the processing unit may be configured to acquire meta information attached to the music data and to execute the visual effect processing based on the meta information.
- the meta information may include visual effect setting information that is setting information related to the visual effect processing.
- the information processing apparatus may further include a feature amount extraction unit that extracts the feature amount from the music data. That is, this information processing apparatus can dynamically extract a feature amount from music data and execute visual effect processing.
- the information processing apparatus may further include a setting unit configured to execute processing for causing a user to set at least one of the feature amount, the object, and the content of the visual effect processing.
- the information processing apparatus may further include a surrounding environment information acquisition unit configured to acquire information about the surrounding environment of the information processing apparatus.
- the processing unit may be configured to further execute the visual effect processing based on the information on the surrounding environment.
- the information processing apparatus can display a visual effect according to the environment around the information processing apparatus.
- the surrounding environment information acquisition unit may be configured to acquire, as the information on the surrounding environment, position information of the information processing apparatus, information on the natural environment in which the information processing apparatus is placed, or biological information of the user.
- An information processing apparatus according to another embodiment includes a recognition unit and a processing unit.
- the processing unit is configured to execute a visual effect process on a target object image, which is an image of the object recognized by the recognition unit, according to meta information attached to music data.
- An information processing method includes recognizing an object in real space.
- a visual effect process is performed on the target object image, which is an image of the recognized object, according to the feature amount of music.
- An information processing method includes recognizing an object in real space.
- a visual effect process is executed on the target object image, which is an image of the recognized object, in accordance with the meta information attached to the music data.
- a program causes an information processing apparatus (computer) to execute the information processing method.
- the interest of the user can be improved.
- FIG. 1 is a block diagram illustrating a configuration of the information processing apparatus according to the first embodiment of the present technology.
- FIG. 2 is a flowchart showing the operation of the information processing apparatus shown in FIG.
- FIG. 3 shows Example 1 of a composite image produced by the visual effect processing in the first embodiment.
- FIG. 4 shows Example 2 of a composite image produced by the visual effect processing in the first embodiment.
- FIG. 5 shows Example 3 of a composite image produced by the visual effect processing in the first embodiment.
- FIG. 6 shows Example 4 of a composite image produced by the visual effect processing in the first embodiment.
- FIG. 7 is a block diagram illustrating a configuration of the information processing apparatus according to the second embodiment of the present technology.
- FIG. 8 is a flowchart showing the operation of the information processing apparatus shown in FIG.
- FIG. 9 is a block diagram illustrating a configuration of an information processing device according to the third embodiment of the present technology.
- FIG. 10 is a flowchart showing the operation of the information processing apparatus shown in FIG.
- FIG. 1 is a block diagram illustrating a configuration of the information processing device according to the first embodiment of the present technology.
- the information processing apparatus 100 is, for example, a smartphone, a tablet computer, a head mounted display device, or other portable, wearable, or non-portable computer.
- the information processing apparatus 100 may be a dedicated device optimized for the present technology.
- the information processing apparatus 100 includes, for example, a camera 10, an image recognition unit 11, a music data storage unit 15, a reproduction processing unit 16, a feature amount extraction unit 17, a processing unit 13, a display unit 14, a speaker 18, and an operation unit 19.
- the image recognition unit (recognition unit) 11 has a function of analyzing an image captured in real time by the camera 10, or an image captured in the past, that is, an image showing the real space (hereinafter referred to as a real space image), and recognizing an object in the real space.
- the real space image captured by the camera 10 may be either a still image or a moving image.
- the image recognition unit 11 identifies and recognizes an object in the real space image by processing and analyzing the real space image with a known algorithm.
- Known algorithms include, for example, block processing, filter processing, contrast processing, segmentation, Fourier transform, discrete cosine transform, object analysis, texture analysis, and the like.
- the image recognition unit 11 has a function of classifying the analyzed objects by object type and identifying them.
- object types include buildings, bridges, streetlights, light sources, vehicles, humans, mountains, rivers, seas, flowers, desks, chairs, books, pens, cups, plates, and the like. An identification number (ID) corresponds to each object type.
- this is referred to as an object ID.
- This object ID may be stored in advance in a memory (not shown) of the information processing apparatus 100, or may be stored in a server on the cloud accessible by the information processing apparatus 100.
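As a minimal sketch, such an object-ID table could be a simple mapping from object type to ID. The type names and ID values below are hypothetical placeholders, not values from the publication:

```python
# Hypothetical object-ID table mapping recognized object types to IDs.
# Whether this lives in local memory or on a cloud server is an
# implementation choice; these entries are illustrative only.
OBJECT_IDS = {
    "building": 1,
    "streetlight": 2,
    "light_source": 3,
    "vehicle": 4,
    "human": 5,
    "flower": 6,
}

def object_id_for(object_type):
    """Return the object ID for a recognized object type, or None if unknown."""
    return OBJECT_IDS.get(object_type)
```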
- the music data storage unit 15 has a function of storing music (song) data.
- the reproduction processing unit 16 has a function of reproducing music data stored in the music data storage unit 15 and outputting it to the speaker 18.
- the reproduction processing unit 16 includes a decoding unit 161 and a DAC (DA conversion unit) 163 (not shown).
- the decoding unit 161 decodes music data encoded by a predetermined codec, and outputs an analog signal to the speaker 18 via the DAC 163.
- the information processing apparatus 100 may include an audio output terminal instead of or in addition to the speaker 18. Headphones and earphones can be connected to the audio output terminal.
- the feature amount extraction unit 17 has a function of extracting music feature amounts from the decoded music data.
- the feature amount includes, for example, a frequency band (or a signal level for each frequency band), a sound source position, a tempo, or a key (e.g., a major or minor key).
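One of these feature amounts, a signal level per frequency band, can be sketched with an FFT over a mono PCM frame. The band edges and frame handling below are illustrative assumptions, not values specified in the publication:

```python
import numpy as np

def band_levels(samples, sample_rate, bands=((0, 250), (250, 2000), (2000, 8000))):
    """Estimate one signal level per frequency band from a mono PCM frame.

    `bands` gives (low_hz, high_hz) edges for the low, middle, and high
    ranges; the exact edges are an illustrative assumption.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    levels = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        levels.append(float(spectrum[mask].sum()))
    return levels

# A 440 Hz tone should concentrate its energy in the middle band.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
low, mid, high = band_levels(tone, sr)
```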
- the processing unit 13 has a function of executing visual effect processing on a target object image that is an image of an object recognized by the image recognition unit 11 in accordance with the music feature amount extracted by the feature amount extraction unit 17.
- the processing unit 13 includes, for example, an effect image generation unit 131 and a superimposition unit 133.
- the effect image generation unit 131 generates an effect image for visual effect processing based on the target object image recognized by the image recognition unit 11 (corresponding object ID).
- the effect image may be either a still image or a moving image.
- the superimposing unit 133 superimposes the effect image generated by the effect image generating unit 131 on the real space image, and generates a composite image obtained thereby.
- the recognized object ID and the type of effect image may be associated in advance.
- the information processing apparatus 100 may download a table indicating the association between the object ID and the effect image type from a server on the cloud.
- the effect image generation unit 131 can generate an effect image using a known AR algorithm based on the form (shape, size, color, etc.) of the target object image (object).
- the display unit 14 displays the composite image generated by the processing unit 13.
- the operation unit 19 has a function of receiving operation information from the user.
- the operation unit 19 may be integrated with the display unit 14 like a touch panel, or may be separate from the display unit 14.
- the information processing apparatus 100 includes hardware (not shown) such as a CPU (Central Processing Unit), a RAM (Random Access Memory), and a ROM (Read Only Memory).
- the information processing apparatus 100 may include other hardware such as PLD (Programmable Logic Device) such as FPGA (Field Programmable Gate Array), DSP (Digital Signal Processor), GPU (Graphics Processing Unit).
- FIG. 2 is a flowchart showing the operation of the information processing apparatus 100.
- music data is selected from the music data storage unit 15 by a user operation, and is played back by the playback processing unit 16 (step 101).
- a feature quantity of music is extracted by the feature quantity extraction unit 17 (step 102).
- a real space image is acquired (or acquisition is started) through imaging (or the start of imaging) by the camera 10 (step 103).
- an object in the real space image is recognized by the image recognition unit 11 (step 104).
- This object image is set as a target object image to be subjected to visual effect processing.
- the order of steps 101-102 and steps 103-104 may be reversed, or the two pairs may be performed in parallel; that is, step 103 may be performed before step 101, or both may be performed simultaneously.
- the processing unit 13 generates an effect image according to the extracted feature amount, superimposes the generated effect image on the real space image including the target object image to generate a composite image, and displays it (step 105).
- the processing unit 13 assigns a plurality of target object images for each feature value, and executes visual effect processing on the plurality of target object images.
- the feature amount extraction unit 17 typically extracts the feature amount in real time during reproduction of music data.
- the information processing apparatus 100 may have a function of storing feature values extracted once for the music data in a storage. Thereby, when the music data is reproduced for the second time or later, the feature amount extraction processing can be omitted.
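The feature-value cache described above can be sketched as follows: features are extracted once per track and reused on later playbacks. The cache key and the stand-in extractor are illustrative assumptions:

```python
# Minimal sketch of caching extracted feature values per track so that
# extraction can be skipped from the second playback onward.
class FeatureCache:
    def __init__(self, extractor):
        self._extractor = extractor
        self._store = {}
        self.extractions = 0  # counts how many times real extraction ran

    def features_for(self, track_id, music_data):
        """Return cached features, running the extractor only on first use."""
        if track_id not in self._store:
            self._store[track_id] = self._extractor(music_data)
            self.extractions += 1
        return self._store[track_id]

# Illustrative extractor that would normally analyze the PCM data.
cache = FeatureCache(extractor=lambda data: {"tempo": 120})
first = cache.features_for("song-1", b"...pcm...")
second = cache.features_for("song-1", b"...pcm...")
```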
- FIG. 3 shows Example 1 of a composite image by visual effect processing.
- the real space image according to Example 1 is a night street scene.
- the feature amount of music is, for example, a signal level for each frequency band.
- the image recognition unit 11 recognizes an image of a streetlight or illumination (or a light source 70 having a predetermined area or more) as a target object image.
- the processing unit 13 superimposes a ring 72 as an effect image around the light source 70 in accordance with the frequency band of the music being played. That is, the processing unit 13 assigns a plurality of different light sources (target object images) 70a, 70b, and 70c for each frequency band, and executes visual effect processing on the plurality of target object images.
- In Example 1, when the area of the light source is small (equal to or less than a first threshold), one ring 72 is generated as the effect image corresponding to the high frequency range (first frequency band) (see light source 70a). When the area is medium (exceeding the first threshold but not exceeding a second threshold), two rings 72 are generated as the effect image corresponding to the middle frequency range (second frequency band) (see light source 70b). When the area is large (exceeding the second threshold), three rings 72 are generated as the effect image corresponding to the low frequency range (third frequency band) (see light source 70c).
- the effect image formed by these rings 72 is, for example, an animation in which the single ring 72 around a high-range light source blinks, or in which the plural concentric rings 72 around a middle-range or low-range light source light up sequentially from the inside.
- an effect image may be generated in which the number, size, color density, and the like of the ring 72 change according to the signal level for each frequency band.
- the effect image is not limited to a ring, and may be a filled circle or other forms.
- as the criterion for recognizing a light source as a target object image, the image recognition unit 11 may use the luminance or brightness of the light source instead of, or in addition to, its area.
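The two-threshold rule of Example 1 can be sketched as a small function. The threshold values are hypothetical placeholders, not values from the publication:

```python
def rings_for_light_source(area, first_threshold=100, second_threshold=400):
    """Map a light-source area to (frequency range, ring count) per Example 1.

    Small sources get one ring for the high range, medium sources two rings
    for the middle range, and large sources three rings for the low range.
    Threshold values are illustrative assumptions.
    """
    if area <= first_threshold:
        return ("high", 1)
    elif area <= second_threshold:
        return ("middle", 2)
    return ("low", 3)
```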
- the information processing apparatus 100 can present to the user a composite image including a rhythmic effect image linked to the music the user is currently listening to.
- the user can watch and enjoy the composite image, which enhances the user's enjoyment.
- the information processing apparatus 100 includes the feature amount extraction unit 17, it is possible to dynamically extract the feature amount from the music data to be played back and execute the visual effect processing.
- the user can experience the visual effect of each target object image assigned for each frequency band as a feature amount.
- FIG. 4 shows Example 2 of a composite image by visual effect processing.
- the real space image according to Example 2 is a landscape of a group of buildings.
- the feature amount of music is, for example, a signal level for each frequency band.
- the image recognition unit 11 recognizes the building 75.
- the processing unit 13 superimposes an image 77 having an outer shape of the building 75 or an outer shape similar thereto as an effect image according to the frequency band of the music being played.
- the effect image is composed of an animation image that expands and contracts vertically, for example.
- In Example 2, for example, the image of a building 75a with a small footprint (a target object image) is assigned to the high frequency range and the image of a building 75b with a large footprint is assigned to the low frequency range, and an effect image is superimposed on each of these building images.
- the processing unit 13 may execute visual effect processing that lowers the visibility of the target object image of the building 75 in addition to the effect image 77. Thereby, the visibility of the effect image 77 is relatively increased.
- FIG. 5 shows Example 3 of a composite image by visual effect processing.
- the real space image according to Example 3 is mainly a night sky landscape.
- the image recognition unit 11 recognizes the night sky based on the luminance (or brightness), color, and other conditions of the real space image.
- the processing unit 13 superimposes an animation of the fireworks 80 of different sizes on the night sky image (target object image) as an effect image according to the frequency band of the music being played. For example, an effect image of large fireworks for low sounds and small fireworks for high sounds is generated.
- Visual effect processing that moves in conjunction with music may also be performed on the target object image of the audience watching the fireworks 80.
- when the brightness of the sky exceeds a threshold (for example, in bright daylight), the processing unit 13 may generate an effect image that reduces the lightness of the sky, that is, turns the daytime sky into a night sky. The processing unit may then superimpose the effect image of the fireworks 80 on that night sky. The user can thereby enjoy the visual effect of fireworks even in the daytime.
- FIG. 6 shows an example 4 of a composite image by visual effect processing.
- the real space image according to Example 4 includes the image of the light source 70 as the target object image, as in FIG.
- the feature quantity extraction unit 17 extracts the position of the sound source as the music feature quantity.
- the processing unit 13 assigns an image (target object image) of the light source 70 for each position of the sound source, and executes visual effect processing.
- effect images (rings 72) are superimposed on the light sources 70a and 70b arranged on the left and right sides of the real space image, in accordance with the left and right positions of the sound sources. For example, when a sound source is positioned only on the right, the effect image is superimposed only on the right light source 70b.
- the feature quantity extraction unit 17 may also extract signal levels for each position of the sound source, and the processing unit 13 may execute visual effect processing on the target object image according to those signal levels.
- the processing unit 13 can change the number of the respective rings 72 according to the signal levels.
- the processing unit 13 may generate an effect image that increases the number of rings 72 as the signal level increases.
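The level-dependent ring count just described can be sketched as a simple scaling rule. The scaling constants are illustrative assumptions:

```python
def ring_count(signal_level, max_rings=5, level_per_ring=0.2):
    """Return how many concentric rings to draw for one sound-source channel.

    More rings appear as the signal level rises, capped at max_rings;
    the constants are illustrative assumptions, not values from the text.
    """
    return min(max_rings, max(1, int(signal_level / level_per_ring)))
```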
- the processing unit 13 may further assign sound source positions within the three-dimensional space of the real space image, including depth. Visual effect processing linked to the music can thereby be executed on target object images arranged at respective positions in that three-dimensional space, including the depth direction.
- the feature quantity extraction unit 17 may extract a tempo (speed) as the music feature quantity. Taking a real space image including light sources 70 as in FIG. 3 as an example, for a slow-tempo song the processing unit 13 superimposes the ring 72 on a light source 70c with a large area (or high luminance), whereas for an up-tempo song it superimposes the ring 72 on the light sources 70 regardless of their size (or luminance).
- for a slow-tempo song, an effect image with a slow animation may be generated; for an up-tempo song, an effect image with a fast animation may be generated.
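Tempo-synchronized animation speed can be sketched by deriving a frame interval from beats per minute. The frames-per-beat value is an illustrative assumption:

```python
def animation_interval_ms(tempo_bpm, frames_per_beat=8):
    """Frame interval (ms) for an effect animation synchronized to the tempo.

    Up-tempo songs get shorter intervals (faster animation);
    frames_per_beat is an illustrative assumption.
    """
    ms_per_beat = 60_000 / tempo_bpm
    return ms_per_beat / frames_per_beat

slow = animation_interval_ms(60)    # slow tempo: longer frame interval
fast = animation_interval_ms(180)   # up-tempo: shorter frame interval
```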
- FIG. 7 is a block diagram illustrating a configuration of the information processing apparatus according to the second embodiment of the present technology.
- elements substantially the same as those of the information processing apparatus 100 according to the embodiment shown in FIG. 1 and the like are denoted by the same reference numerals, and their description is simplified or omitted; the following description focuses on the differences.
- the information processing apparatus 200 includes a meta information storage unit 20 that stores meta information.
- the meta information storage unit 20 stores, for example, meta information attached to music data. Examples of meta-information accompanying music data include bibliographic information such as song titles, lyrics, and singers. Alternatively, as meta information, an object ID associated in advance with the music data can be cited.
- the meta information storage unit 20 can store visual effect setting information for setting the visual effect processing as meta information.
- the processing unit 13 is configured to acquire the meta information stored in the meta information storage unit 20 and execute a visual effect process based on the acquired meta information.
- FIG. 8 is a flowchart showing the operation of the information processing apparatus 200 according to the second embodiment. Steps 201 to 204 are the same as steps 101 to 104 shown in FIG. 2.
- the processing unit 13 acquires meta information (step 205). Based on the meta information, the processing unit 13 superimposes an effect image on the real space image including the target object image in accordance with the feature amount of the music being played, generates a composite image, and displays it (step 206). The processing of step 206 is described below with several examples.
- Operation example 1: suppose the processing unit 13 acquires lyrics or a title as the meta information. The processing unit 13 determines whether a predetermined keyword appears in the words of the lyrics or the title; if so, it generates an effect image corresponding to the keyword. For example, when the keyword is "flower", an effect image of a predetermined flower is generated. The processing unit 13 superimposes and displays the flower effect image on an arbitrary real space image.
- Operation example 2: suppose the processing unit 13 acquires lyrics or a title as the meta information, as in Operation example 1, together with an object ID. The processing unit 13 determines whether a predetermined keyword appears in the words of the lyrics or the title, and also whether the object type of the target object image recognized by the image recognition unit 11 matches the acquired object ID. When the keyword is present and the object type matches the object ID, the processing unit 13 generates an effect image corresponding to the keyword and superimposes it on the real space image including the target object image for display.
- for example, when the keyword is "flower" and the image recognition unit 11 recognizes a flower, the processing unit 13 applies the flower-related visual effect processing to the target object image of that flower.
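Operation example 2 can be sketched as follows: a keyword found in the lyrics or title triggers an effect only when the recognized object's ID matches the object ID attached to the track. The keywords, IDs, and effect names are illustrative assumptions:

```python
# Hypothetical keyword-to-effect table; not taken from the publication.
EFFECTS_BY_KEYWORD = {"flower": "flower_petals", "rain": "raindrops"}

def choose_effect(lyrics, title, track_object_id, recognized_object_id):
    """Return an effect name when a keyword matches AND object IDs agree."""
    if track_object_id != recognized_object_id:
        return None
    text = f"{title} {lyrics}".lower()
    for keyword, effect in EFFECTS_BY_KEYWORD.items():
        if keyword in text:
            return effect
    return None
```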
- the processing unit 13 acquires meta information including setting information (visual effect setting information) related to visual effect processing in addition to information related to music data such as lyrics and titles.
- the visual effect setting information is information for setting visual effect processing, such as visual effect intensity (display size and area), display speed, display frequency, display color, and the like.
- the processing unit 13 executes the visual effect processing according to the visual effect setting information.
- the processing unit 13 may acquire, as the visual effect setting information, for example, information indicating an effect image (what kind of effect image is used) used for each time-series part of a song.
- for example, the visual effect setting information is information indicating the effect images respectively used for the intro, the first verse, the second verse, and the chorus of one song.
- the visual effect setting information may also indicate that the visual effect processing is stopped for certain parts.
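Per-part visual effect setting information can be sketched as a time-indexed table mapping each part of a song to an effect image (or to none, where effects are stopped). The part boundaries and effect names are illustrative assumptions:

```python
# Hypothetical per-part setting table: (start_s, end_s, effect or None).
PART_EFFECTS = [
    (0, 15, "sparkle"),   # intro
    (15, 45, "rings"),    # first verse
    (45, 75, "rings"),    # second verse
    (75, 105, None),      # a part where effects are stopped, per the text
]

def effect_at(seconds):
    """Return the effect image name active at a playback position, if any."""
    for start, end, effect in PART_EFFECTS:
        if start <= seconds < end:
            return effect
    return None
```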
- the use of the object ID is not an essential element as described in the first embodiment.
- for example, the image recognition unit 11 may recognize a light source region in the real space image according to the keyword.
- the information processing apparatus 200 can execute various visual effect processing based on meta information in conjunction with music by using meta information.
- FIG. 9 is a block diagram illustrating a configuration of the information processing apparatus according to the third embodiment of the present technology.
- the information processing apparatus 300 does not include the feature amount extraction unit 17 (see FIGS. 1 and 7).
- the information processing apparatus 300 includes the meta information storage unit 20 as in the information processing apparatus 200 illustrated in FIG.
- FIG. 10 is a flowchart showing the operation of the information processing apparatus 300.
- Steps 301 to 304 are the same as steps 201 and 203 to 205 shown in FIG. 8.
- based on the meta information, the processing unit 13 superimposes the effect image on the real space image including the target object image, generates a composite image, and displays it (step 305).
- the processing unit 13 may execute the visual effect process regardless of the feature amount, for example, while music is being played or in conjunction with the playback volume of the music.
- the information processing apparatus includes a setting function (setting unit) for the user to perform an operation input via the operation unit 19 (see FIG. 1 and the like), for example.
- the setting contents are, for example, the type of music feature quantity, the object (object ID), and / or the contents of the visual effect process.
- the setting unit displays a setting screen (not shown) on the display unit 14.
- on the setting screen, the type of music feature amount, the object, and/or the contents of the visual effect processing are selected and set; that is, the feature amount, object, and/or visual effect processing contents desired by the user are set.
- the visual effect processing content is, for example, what kind of effect image is used and / or the above-described visual effect setting information.
- the user can select an object by tapping the object image displayed on the touch panel display unit 14.
- the user selects, for example, one or more effect images from a plurality of types of effect images corresponding to one object. The same applies to the visual effect setting information described above.
- the user can enjoy the visual effect that the user likes.
- the user can set a modest effect image with little movement or an effect image with a large movement depending on his / her personality and preference.
- the information processing apparatus further includes a surrounding environment information acquisition unit configured to acquire information about the surrounding environment.
- The information processing apparatus is configured to preferentially present to the user, for example, one or more visual effect processing contents set based on the information on the surrounding environment. The user can select one or more of those visual effect processing contents via the operation unit 19.
- the peripheral environment information is, for example, position information of the information processing apparatus, natural environment information in which the information processing apparatus is placed, or user biometric information.
- The position information is not limited to a two-dimensional position on a map, and may include a three-dimensional position including altitude, as well as direction information.
- Examples of natural environment information include weather, atmospheric pressure, pollen amount, and direction.
- Examples of user biometric information include body temperature, blood pressure, heart rate, running speed, and the like.
- The information processing apparatus may include a sensor that detects the natural environment information or the user biometric information. Alternatively, information such as the weather and the amount of pollen may be acquired from a server.
- the user can enjoy an effective visual effect suitable for the surrounding environment or his / her own biological information.
- the present technology is not limited to the embodiment described above, and other various embodiments can be realized.
- the use of music feature values is not essential.
- the visual effect process can be executed while music is being played or in conjunction with the playback volume of the music.
- the information processing apparatus may acquire information on a user's movement as the surrounding environment described above, and may execute a visual effect process according to the movement information and / or the feature amount.
- Examples of user movement information include heart rate, arm swing, and running speed.
- Depending on the movement information, for example, a warm (for example, red) effect image or a cold (for example, blue) effect image is generated.
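The warm/cold behavior described above could be realized with a simple threshold rule. A sketch, assuming a made-up 120 bpm heart-rate threshold and illustrative RGB values (neither is specified in the publication):

```python
def effect_color(heart_rate_bpm: float) -> tuple:
    """Pick a warm (reddish) or cold (bluish) effect color from the
    user's heart rate. The threshold and the RGB values are
    illustrative assumptions."""
    WARM = (255, 64, 32)   # reddish RGB
    COLD = (32, 96, 255)   # bluish RGB
    return WARM if heart_rate_bpm >= 120.0 else COLD
```

The same pattern extends to other movement information, such as arm swing or running speed, by swapping the input and threshold.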
- Example 2: The device to which the information processing apparatus is applied is not limited to the smartphone described above; it may be a projector. With the projector, visual effect processing such as projection mapping onto windows and doors can be executed.
- Example 3: In a place where the user tends to listen to music using the information processing apparatus, preset visual effect processing may be executed. The place itself may also be set in advance. That is, when the user listens to music with this information processing apparatus at a predetermined place, predetermined visual effect processing is executed.
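One way to realize such place-triggered presets is a registry keyed by location, queried against the device's current position. A sketch under invented assumptions — the registry contents, the 50 m trigger radius, and the coordinates are all illustrative:

```python
import math

SAMPLE_REGISTRY = {  # (lat, lon) -> preset effect name; illustrative data
    (35.6586, 139.7454): "tower_sparkle",
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def preset_effect_for(lat, lon, registry, radius_m=50.0):
    """Return the preset effect registered closest to (lat, lon),
    or None if no registered place lies within radius_m."""
    best, best_d = None, radius_m
    for (plat, plon), effect in registry.items():
        d = haversine_m(lat, lon, plat, plon)
        if d <= best_d:
            best, best_d = effect, d
    return best
```

Sharing with other users, as described next, would amount to keeping this registry on a server rather than on the device.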
- a system may be constructed in which a user can register information on the contents of visual effects processing effective at the place or share it with other users.
- As an example of this system, a user can register (store) information on the content of visual effect processing in the information processing apparatus or a server in association with the position of a store on a map.
- the information is not limited to the store position on the map, and the information on the visual effect processing content may be associated with the target object image in the store.
- Example 4: The present technology can also be applied to digital signage used for advertisements and the like.
- the display unit 14 of the information processing apparatus is a display unit 14 used for digital signage.
- the processing unit 13 executes visual effect processing based on music on the real space image shown in the display unit 14.
- the music data may be, for example, music provided by the advertiser or the store, or music detected with a microphone from around the display of the digital signage.
- the music may be a user's singing voice.
- the information processing apparatus includes a microphone that detects a user's singing voice and a storage unit that stores the data as music data.
- the storage unit may be in a server on the cloud.
- the information processing apparatus may include an analysis unit that analyzes the lyrics contents and title contents of music.
- The analysis unit is configured to generate a story summary and keywords based on that analysis. For example, when "light" is included in the generated summary or keywords, and an image of a light source is included in the real space image as the target object image, the processing unit 13 can execute visual effect processing on the image of that light source.
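The keyword-to-object matching described above can be sketched as a lookup from analysis keywords to recognizable object labels. The mapping table and the label names here are invented for illustration; the publication does not specify them:

```python
KEYWORD_TO_OBJECT = {   # illustrative mapping, not from the publication
    "light": "light_source",
    "flower": "flower",
}

def effects_to_apply(summary_keywords, recognized_objects):
    """Given keywords produced by the lyric/title analysis and the
    object labels the recognition unit found in the real-space image,
    return the labels that should receive a visual effect."""
    targets = set()
    for kw in summary_keywords:
        obj = KEYWORD_TO_OBJECT.get(kw.lower())
        if obj is not None and obj in recognized_objects:
            targets.add(obj)
    return targets
```

Only objects that are both named by the analysis and actually present in the recognized scene are decorated, matching the conjunctive condition in the text.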
- At least one of the image recognition unit 11, the music data storage unit 15, the decode unit 161, the feature amount extraction unit 17, the processing unit 13, the meta information storage unit 20, and the analysis unit described in Example 6 may be a function of a server on the cloud accessible by the information processing apparatus.
- the processing unit 13 is configured to download music feature amount data from the server.
- The information processing apparatus 100 transmits, to the server, identification information for identifying the individual music data selected by the user; the server extracts the feature amount of the music corresponding to the identification information and transmits it to the information processing apparatus.
- the server may have the function of the music data storage unit 15 and store the music data and its identification information in association with each other.
- the processing unit 13 is configured to download meta information from the server.
- the information processing apparatus transmits identification information for identifying the music data selected by the user to the server, and the server transmits meta information corresponding to the identification information to the information processing apparatus.
- the server may have the function of the music data storage unit 15 and store the music data, the identification information thereof, and the meta information in association with each other.
- The processing unit 13 executes the visual effect process according to one type of feature amount in the above examples, but may execute the process according to a plurality of types of feature amount. That is, the processing unit 13 may execute processing according to a combination of at least two of the frequency band, the position of the sound source, the tempo, and the key.
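Combining two feature types might look like the following sketch, which derives per-band energies from one audio frame with a real FFT and folds in the tempo. The band edges, parameter names, and 120 bpm normalization are assumptions made for this example only:

```python
import numpy as np

def band_energies(frame, sr, edges=(0, 250, 2000, 8000)):
    """Split one audio frame's power spectrum into low/mid/high bands.
    Band edges in Hz are illustrative."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return [float(spec[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in zip(edges, edges[1:])]

def effect_params(frame, sr, tempo_bpm):
    """Combine two feature types (per-band energy and tempo) into one
    set of effect parameters, as the text suggests."""
    low, mid, high = band_energies(frame, sr)
    total = (low + mid + high) or 1.0
    return {
        "ring_alpha": low / total,             # bass drives ring opacity
        "sparkle_rate": high / total,          # treble drives sparkle density
        "animation_speed": tempo_bpm / 120.0,  # normalized to 120 bpm
    }
```

A single parameter dictionary like this lets the rendering side react to several feature types at once instead of to one in isolation.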
- the recognition unit (image recognition unit) in each of the above embodiments is configured to recognize an object in an image showing a real space.
- the recognition unit may be configured to recognize an object by measuring the real space.
- the recognition unit can recognize an object using laser, radio waves, and / or ultrasonic waves.
- the recognition unit may perform both object recognition based on measurement in real space and object recognition based on image recognition.
- An information processing apparatus comprising: a recognition unit configured to recognize an object in real space; and a processing unit configured to execute visual effect processing, according to a feature amount of music, on a target object image that is an image of the object recognized by the recognition unit.
- The information processing apparatus according to (1), wherein the processing unit is configured to execute a visual effect process associated with the type of the object.
- The information processing apparatus according to (1) or (2), wherein the processing unit is configured to acquire a frequency band of the music as the feature amount, assign a plurality of target object images corresponding to a plurality of objects to each frequency band, and execute the visual effect processing on the plurality of target object images.
- The information processing apparatus according to (1) or (2), wherein the processing unit is configured to acquire position information of the sound source of the music as the feature amount, assign a plurality of target object images corresponding to a plurality of objects to each position of the sound source, and execute the visual effect processing on the plurality of target object images.
- The information processing apparatus according to (3) or (4), wherein the processing unit is configured to execute a plurality of different visual effect processes on the plurality of target object images, respectively.
- The information processing apparatus according to (1) or (2), wherein the processing unit is configured to acquire tempo information of the music as the feature amount and execute the visual effect process according to the tempo.
- The information processing apparatus according to (1) or (2), wherein the processing unit is configured to acquire key information of the music as the feature amount and execute the visual effect processing according to the key.
- The information processing apparatus according to any one of (1) to (9), further comprising a feature amount extraction unit that extracts the feature amount from the music data.
- The information processing apparatus according to any one of (1) to (10), further comprising a setting unit configured to execute processing for causing a user to set at least one of the feature amount, the object, and the content of the visual effect process.
- The information processing apparatus according to any one of (1) to (10), further comprising a surrounding environment information acquisition unit configured to acquire information on the surrounding environment of the information processing apparatus, wherein the processing unit is configured to execute the visual effect processing further based on the information on the surrounding environment.
- The information processing apparatus according to (12), wherein the surrounding environment acquisition unit is configured to acquire, as the information on the surrounding environment, position information of the information processing apparatus, information on the natural environment in which the information processing apparatus is placed, or biometric information of the user.
- An information processing apparatus comprising: a recognition unit configured to recognize an object in real space; and a processing unit configured to execute visual effect processing, according to meta information attached to music data, on a target object image that is an image of the object recognized by the recognition unit.
- An information processing method comprising: recognizing an object in real space; and executing visual effect processing, according to a feature amount of music, on a target object image that is an image of the recognized object.
- An information processing method comprising: recognizing an object in real space; and executing visual effect processing, according to meta information attached to music data, on a target object image that is an image of the recognized object.
- A program that causes an information processing apparatus to: recognize an object in real space; and execute visual effect processing, according to a feature amount of music, on a target object image that is an image of the recognized object.
- A program that causes an information processing apparatus to: recognize an object in real space; and execute visual effect processing, according to meta information attached to music data, on a target object image that is an image of the recognized object.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Library & Information Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Architecture (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
- Reverberation, Karaoke And Other Acoustics (AREA)
Abstract
Description
The recognition unit is configured to recognize an object in real space.
The processing unit is configured to execute visual effect processing, according to a feature amount of music, on a target object image that is an image of the object recognized by the recognition unit.
The processing unit is configured to execute visual effect processing, according to meta information attached to music data, on a target object image that is an image of the object recognized by the recognition unit.
Visual effect processing is executed, according to a feature amount of music, on a target object image that is an image of the recognized object.
Visual effect processing is executed, according to meta information attached to music data, on a target object image that is an image of the recognized object.
FIG. 1 is a block diagram showing the configuration of an information processing apparatus according to Embodiment 1 of the present technology. The information processing apparatus 100 is, for example, a smartphone, a tablet computer, a head-mounted display device, or another portable, wearable, or non-portable computer. Alternatively, the information processing apparatus 100 may be a dedicated device optimized for the present technology.
FIG. 3 shows Example 1 of a composite image produced by visual effect processing. The real space image in Example 1 is a night-time street scene. The music feature amount is, for example, the signal level per frequency band. The image recognition unit 11 recognizes images of street lamps and lights (or light sources 70 of at least a predetermined area) as target object images. The processing unit 13 superimposes a ring 72 as an effect image around a light source 70 according to the frequency band of the music being played. That is, the processing unit 13 assigns a plurality of different light sources (target object images) 70a, 70b, and 70c to the respective frequency bands and executes visual effect processing on those target object images.
FIG. 4 shows Example 2 of a composite image produced by visual effect processing. The real space image in Example 2 is a view of a group of buildings. The music feature amount is, for example, the signal level per frequency band. The image recognition unit 11 recognizes a building 75. The processing unit 13 superimposes on the building, as an effect image, an image 77 having the outline of the building 75 or a similar outline, according to the frequency band of the music being played. The effect image is composed of, for example, an animation that stretches and shrinks vertically.
FIG. 5 shows Example 3 of a composite image produced by visual effect processing. The real space image in Example 3 is mainly a night-sky scene. The image recognition unit 11 recognizes the night sky based on the luminance (or brightness), color, and other conditions of the real space image. The processing unit 13 superimposes, as effect images, animations of fireworks 80 of different sizes on the night-sky image (target object image) according to the frequency band of the music being played. For example, a large firework effect image is generated for low tones and a small one for high tones. Visual effect processing that moves in time with the music may also be executed on target object images of spectators watching the fireworks 80.
FIG. 6 shows Example 4 of a composite image produced by visual effect processing. As in FIG. 3, the real space image in Example 4 includes images of light sources 70 as target object images. The feature amount extraction unit 17 extracts the position of a sound source as the music feature amount. The processing unit 13 assigns an image of a light source 70 (target object image) to each sound-source position and executes visual effect processing.
The feature amount extraction unit 17 may extract the tempo (speed) as the music feature amount. Taking the real space image including light sources 70 in FIG. 3 as an example, in the case of a slow-tempo song, the processing unit 13 superimposes a ring 72 on the light source 70c having a large area (or high luminance). In the case of an up-tempo song, on the other hand, the processing unit 13 superimposes rings 72 on the light sources 70 regardless of their area (or luminance).
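The assignment of light sources to frequency bands (as in FIG. 3) and the tempo-dependent selection described above can be sketched together. The tempo threshold, area cutoff, round-robin pairing, and radius formula are assumptions invented for this illustration:

```python
def assign_rings(light_sources, band_levels, tempo_bpm,
                 slow_tempo_max=100.0, min_area_slow=500.0):
    """Assign one recognized light source to each frequency band and
    decide which of them receive a ring overlay. For slow songs only
    large light sources are decorated; for up-tempo songs all are.
    All numeric thresholds are illustrative."""
    rings = []
    for i, level in enumerate(band_levels):
        src = light_sources[i % len(light_sources)]  # round-robin pairing
        if tempo_bpm <= slow_tempo_max and src["area"] < min_area_slow:
            continue  # slow song: skip small light sources
        rings.append({"source": src["id"], "radius": 10.0 + 30.0 * level})
    return rings
```

Each returned entry names a light source and a ring radius driven by that band's signal level, which a renderer could draw over the real space image.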
Suppose that the processing unit 13 acquires lyrics or a title as meta information. The processing unit 13 determines whether the lyrics or title contain a predetermined keyword. If a keyword is found, the processing unit 13 generates the effect image corresponding to that keyword. For example, if the keyword "flower" is present, a predetermined flower effect image is generated. The processing unit 13 superimposes and displays the flower effect image on an arbitrary real space image.
Suppose that, as in Processing Example 1 above, the processing unit 13 acquires lyrics or a title as meta information and also acquires an object ID. The processing unit 13 determines whether the lyrics or title contain a predetermined keyword, and also determines whether the object type of the target object image recognized by the image recognition unit 11 matches the acquired object ID. If a keyword is present in the lyrics or title and the object type of the target object image matches the object ID, the processing unit 13 generates the effect image corresponding to the keyword and superimposes and displays it on the real space image that includes the target object image.
The processing unit 13 acquires meta information that includes, in addition to information about the music data such as the lyrics and title, setting information about the visual effect processing (visual effect setting information). The visual effect setting information is information for configuring the visual effect processing, for example the intensity (display size or area), display speed, display frequency, and display color of the visual effect.
For example, when the keyword "light" is included in the meta information, the image recognition unit 11 may recognize a light-source region in the real space image according to that keyword.
The information processing apparatus may acquire information on the user's movement as the surrounding environment described above, and may execute visual effect processing according to the movement information and/or the feature amount. Examples of user movement information include heart rate, arm swing, and running speed.
The device to which the information processing apparatus is applied is not limited to the smartphone and the like described above; it may be a projector. With the projector, visual effect processing such as projection mapping onto windows and doors can be executed.
In a place where the user tends to listen to music using the information processing apparatus, preset visual effect processing may be executed. The place itself may also be set in advance. That is, when the user listens to music with this information processing apparatus at a predetermined place, predetermined visual effect processing is executed.
For example, the present technology can also be applied to digital signage used for advertisements and the like. In this case, the display unit 14 of the information processing apparatus is a display unit 14 used for digital signage. The processing unit 13 executes visual effect processing based on music on the real space image shown on that display unit 14. The music data may be, for example, music provided by the advertiser or the store, or music detected by a microphone from around the digital-signage display.
For example, the music may be the user's singing voice. In this case, the information processing apparatus includes a microphone that detects the user's singing voice and a storage unit that stores it as music data. The storage unit may be on a server on the cloud.
The information processing apparatus may include an analysis unit that analyzes the lyric content and title content of the music. The analysis unit is configured to generate a story summary and keywords based on that analysis. For example, when "light" is included in the generated summary or keywords, and an image of a light source is included in the real space image as a target object image, the processing unit 13 can execute visual effect processing on the image of that light source.
In the information processing apparatus according to each of the above embodiments, at least one of, for example, the image recognition unit 11, the music data storage unit 15, the decode unit 161, the feature amount extraction unit 17, the processing unit 13, the meta information storage unit 20, and the analysis unit described in 6.6) Example 6 above may be a function of a server on the cloud accessible by the information processing apparatus.
(1)
An information processing apparatus comprising:
a recognition unit configured to recognize an object in real space; and
a processing unit configured to execute visual effect processing, according to a feature amount of music, on a target object image that is an image of the object recognized by the recognition unit.
(2)
The information processing apparatus according to (1), wherein
the processing unit is configured to execute visual effect processing associated with the type of the object.
(3)
The information processing apparatus according to (1) or (2), wherein
the processing unit is configured to acquire a frequency band of the music as the feature amount, assign a plurality of target object images corresponding to a plurality of objects to each frequency band, and execute the visual effect processing on the plurality of target object images.
(4)
The information processing apparatus according to (1) or (2), wherein
the processing unit is configured to acquire position information of the sound source of the music as the feature amount, assign a plurality of target object images corresponding to a plurality of objects to each position of the sound source, and execute the visual effect processing on the plurality of target object images.
(5)
The information processing apparatus according to (3) or (4), wherein
the processing unit is configured to execute a plurality of different visual effect processes on the plurality of target object images, respectively.
(6)
The information processing apparatus according to (1) or (2), wherein
the processing unit is configured to acquire tempo information of the music as the feature amount and execute the visual effect processing according to the tempo.
(7)
The information processing apparatus according to (1) or (2), wherein
the processing unit is configured to acquire key information of the music as the feature amount and execute the visual effect processing according to the key.
(8)
The information processing apparatus according to any one of (1) to (7), wherein
the processing unit is configured to acquire meta information attached to the music data and execute the visual effect processing based on the meta information.
(9)
The information processing apparatus according to (8), wherein
the meta information includes visual effect setting information, which is setting information related to the visual effect processing.
(10)
The information processing apparatus according to any one of (1) to (9),
further comprising a feature amount extraction unit that extracts the feature amount from the music data.
(11)
The information processing apparatus according to any one of (1) to (10),
further comprising a setting unit configured to execute processing for causing a user to set at least one of the feature amount, the object, and the content of the visual effect processing.
(12)
The information processing apparatus according to any one of (1) to (10),
further comprising a surrounding environment information acquisition unit configured to acquire information on the surrounding environment of the information processing apparatus, wherein
the processing unit is configured to execute the visual effect processing further based on the information on the surrounding environment.
(13)
The information processing apparatus according to (12), wherein
the surrounding environment acquisition unit is configured to acquire, as the information on the surrounding environment, position information of the information processing apparatus, information on the natural environment in which the information processing apparatus is placed, or biometric information of the user.
(14)
An information processing apparatus comprising:
a recognition unit configured to recognize an object in real space; and
a processing unit configured to execute visual effect processing, according to meta information attached to music data, on a target object image that is an image of the object recognized by the recognition unit.
(15)
An information processing method comprising:
recognizing an object in real space; and
executing visual effect processing, according to a feature amount of music, on a target object image that is an image of the recognized object.
(16)
An information processing method comprising:
recognizing an object in real space; and
executing visual effect processing, according to meta information attached to music data, on a target object image that is an image of the recognized object.
(17)
A program that causes an information processing apparatus to:
recognize an object in real space; and
execute visual effect processing, according to a feature amount of music, on a target object image that is an image of the recognized object.
(18)
A program that causes an information processing apparatus to:
recognize an object in real space; and
execute visual effect processing, according to meta information attached to music data, on a target object image that is an image of the recognized object.
11… Image recognition unit
13… Processing unit
14… Display unit
15… Music data storage unit
16… Reproduction processing unit
17… Feature amount extraction unit
18… Speaker
19… Operation unit
20… Meta information storage unit
100, 200, 300… Information processing apparatus
Claims (15)
- An information processing apparatus comprising: a recognition unit configured to recognize an object in real space; and a processing unit configured to execute visual effect processing, according to a feature amount of music, on a target object image that is an image of the object recognized by the recognition unit.
- The information processing apparatus according to claim 1, wherein the processing unit is configured to execute visual effect processing associated with the type of the object.
- The information processing apparatus according to claim 1, wherein the processing unit is configured to acquire a frequency band of the music as the feature amount, assign a plurality of target object images corresponding to a plurality of objects to each frequency band, and execute the visual effect processing on the plurality of target object images.
- The information processing apparatus according to claim 1, wherein the processing unit is configured to acquire position information of the sound source of the music as the feature amount, assign a plurality of target object images corresponding to a plurality of objects to each position of the sound source, and execute the visual effect processing on the plurality of target object images.
- The information processing apparatus according to claim 3, wherein the processing unit is configured to execute a plurality of different visual effect processes on the plurality of target object images, respectively.
- The information processing apparatus according to claim 1, wherein the processing unit is configured to acquire tempo information of the music as the feature amount and execute the visual effect processing according to the tempo.
- The information processing apparatus according to claim 1, wherein the processing unit is configured to acquire key information of the music as the feature amount and execute the visual effect processing according to the key.
- The information processing apparatus according to claim 1, wherein the processing unit is configured to acquire meta information attached to the music data and execute the visual effect processing based on the meta information.
- The information processing apparatus according to claim 8, wherein the meta information includes visual effect setting information, which is setting information related to the visual effect processing.
- The information processing apparatus according to claim 1, further comprising a feature amount extraction unit that extracts the feature amount from the music data.
- The information processing apparatus according to claim 1, further comprising a setting unit configured to execute processing for causing a user to set at least one of the feature amount, the object, and the content of the visual effect processing.
- The information processing apparatus according to claim 1, further comprising a surrounding environment information acquisition unit configured to acquire information on the surrounding environment of the information processing apparatus, wherein the processing unit is configured to execute the visual effect processing further based on the information on the surrounding environment.
- The information processing apparatus according to claim 12, wherein the surrounding environment acquisition unit is configured to acquire, as the information on the surrounding environment, position information of the information processing apparatus, information on the natural environment in which the information processing apparatus is placed, or biometric information of the user.
- An information processing method comprising: recognizing an object in real space; and executing visual effect processing, according to a feature amount of music, on a target object image that is an image of the recognized object.
- A program that causes an information processing apparatus to: recognize an object in real space; and execute visual effect processing, according to a feature amount of music, on a target object image that is an image of the recognized object.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018564169A JP6930547B2 (ja) | 2017-01-27 | 2017-12-20 | 情報処理装置、情報処理方法およびそのプログラム |
US16/478,272 US11037370B2 (en) | 2017-01-27 | 2017-12-20 | Information processing apparatus, and information processing method and program therefor |
CN201780084009.7A CN110214343B (zh) | 2017-01-27 | 2017-12-20 | 信息处理装置、信息处理方法及其程序 |
KR1020197020671A KR102410840B1 (ko) | 2017-01-27 | 2017-12-20 | 정보 처리 장치, 정보 처리 방법 및 그 프로그램 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017-013494 | 2017-01-27 | ||
JP2017013494 | 2017-01-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018139117A1 true WO2018139117A1 (ja) | 2018-08-02 |
Family
ID=62978555
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/045619 WO2018139117A1 (ja) | 2017-01-27 | 2017-12-20 | 情報処理装置、情報処理方法およびそのプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US11037370B2 (ja) |
JP (1) | JP6930547B2 (ja) |
KR (1) | KR102410840B1 (ja) |
CN (1) | CN110214343B (ja) |
WO (1) | WO2018139117A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2022069007A (ja) * | 2020-10-23 | 2022-05-11 | 株式会社アフェクション | 情報処理システム、情報処理方法および情報処理プログラム |
JP7456232B2 (ja) | 2020-03-26 | 2024-03-27 | 大日本印刷株式会社 | フォトムービー生成システム、フォトムービー生成装置、ユーザ端末、フォトムービー生成方法、及びプログラム |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111540032B (zh) * | 2020-05-27 | 2024-03-15 | 网易(杭州)网络有限公司 | 基于音频的模型控制方法、装置、介质及电子设备 |
CN111833460A (zh) * | 2020-07-10 | 2020-10-27 | 北京字节跳动网络技术有限公司 | 增强现实的图像处理方法、装置、电子设备及存储介质 |
CN113192152A (zh) * | 2021-05-24 | 2021-07-30 | 腾讯音乐娱乐科技(深圳)有限公司 | 基于音频的图像生成方法、电子设备及存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000270203A (ja) * | 1999-03-18 | 2000-09-29 | Sanyo Electric Co Ltd | 撮像装置及び画像合成装置並びに方法 |
JP2004271901A (ja) * | 2003-03-07 | 2004-09-30 | Matsushita Electric Ind Co Ltd | 地図表示装置 |
JP2010237516A (ja) * | 2009-03-31 | 2010-10-21 | Nikon Corp | 再生演出プログラムおよび再生演出装置 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4221308B2 (ja) * | 2004-01-15 | 2009-02-12 | パナソニック株式会社 | 静止画再生装置、静止画再生方法及びプログラム |
JP4978765B2 (ja) | 2005-07-25 | 2012-07-18 | ソニー株式会社 | 再生制御装置および方法、並びにプログラム |
CN101577114B (zh) * | 2009-06-18 | 2012-01-25 | 无锡中星微电子有限公司 | 一种音频可视化实现方法及装置 |
JP5652097B2 (ja) | 2010-10-01 | 2015-01-14 | ソニー株式会社 | 画像処理装置、プログラム及び画像処理方法 |
KR101343609B1 (ko) * | 2011-08-24 | 2014-02-07 | 주식회사 팬택 | 증강 현실 데이터를 이용할 수 있는 어플리케이션 자동 추천 장치 및 방법 |
WO2014199453A1 (ja) * | 2013-06-11 | 2014-12-18 | Toa株式会社 | マイクロホンアレイ制御装置 |
CN105513583B (zh) * | 2015-11-25 | 2019-12-17 | 福建星网视易信息系统有限公司 | 一种歌曲节奏的显示方法及其系统 |
CN105872838A (zh) * | 2016-04-28 | 2016-08-17 | 徐文波 | 即时视频的媒体特效发送方法和装置 |
- 2017-12-20 KR KR1020197020671A patent/KR102410840B1/ko active IP Right Grant
- 2017-12-20 CN CN201780084009.7A patent/CN110214343B/zh active Active
- 2017-12-20 WO PCT/JP2017/045619 patent/WO2018139117A1/ja active Application Filing
- 2017-12-20 JP JP2018564169A patent/JP6930547B2/ja active Active
- 2017-12-20 US US16/478,272 patent/US11037370B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110214343A (zh) | 2019-09-06 |
JP6930547B2 (ja) | 2021-09-01 |
JPWO2018139117A1 (ja) | 2019-11-14 |
KR102410840B1 (ko) | 2022-06-21 |
US11037370B2 (en) | 2021-06-15 |
CN110214343B (zh) | 2023-02-03 |
KR20190109410A (ko) | 2019-09-25 |
US20200134921A1 (en) | 2020-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018139117A1 (ja) | 情報処理装置、情報処理方法およびそのプログラム | |
US10819969B2 (en) | Method and apparatus for generating media presentation content with environmentally modified audio components | |
US20200221247A1 (en) | Method of providing to user 3d sound in virtual environment | |
KR102609668B1 (ko) | 가상, 증강, 및 혼합 현실 | |
Smith | The sound of intensified continuity | |
US10798518B2 (en) | Apparatus and associated methods | |
CN111916039B (zh) | 音乐文件的处理方法、装置、终端及存储介质 | |
EP3465679A1 (en) | Method and apparatus for generating virtual or augmented reality presentations with 3d audio positioning | |
CN110244998A (zh) | 页面背景、直播页面背景的设置方法、装置及存储介质 | |
JP2020520576A (ja) | 空間オーディオの提示のための装置および関連する方法 | |
KR20120081874A (ko) | 증강 현실을 이용한 노래방 시스템 및 장치, 이의 노래방 서비스 방법 | |
WO2021143574A1 (zh) | 增强现实眼镜、基于增强现实眼镜的ktv实现方法与介质 | |
TWI672948B (zh) | 影像製作系統及方法 | |
CN109002275B (zh) | Ar背景音频处理方法、装置、ar设备和可读存储介质 | |
JPWO2017061278A1 (ja) | 信号処理装置、信号処理方法及びコンピュータプログラム | |
EP4080907A1 (en) | Information processing device and information processing method | |
US11696088B1 (en) | Method and apparatus to generate a six dimensional audio dataset | |
US20220036659A1 (en) | System and Method for Simulating an Immersive Three-Dimensional Virtual Reality Experience | |
WO2022102446A1 (ja) | 情報処理装置、情報処理方法、情報処理システム、及びデータ生成方法 | |
JP2015211338A (ja) | 画像処理装置、それを備える撮像装置、画像処理方法及び画像処理プログラム | |
US11445322B1 (en) | Method and apparatus to generate a six dimensional audio dataset | |
US11726551B1 (en) | Presenting content based on activity | |
US11842729B1 (en) | Method and device for presenting a CGR environment based on audio data and lyric data | |
Filimowicz | An audiovisual colocation display system | |
Gillies | Composing with Frames and Spaces: Cinematic Virtual Reality as an Audiovisual Compositional Practice |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17893986 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2018564169 Country of ref document: JP Kind code of ref document: A Ref document number: 20197020671 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17893986 Country of ref document: EP Kind code of ref document: A1 |