WO2023281803A1 - Information processing device, information processing method, and storage medium - Google Patents

Information processing device, information processing method, and storage medium

Info

Publication number
WO2023281803A1
WO2023281803A1 (PCT/JP2022/007217)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual space
information
virtual
information processing
image
Prior art date
Application number
PCT/JP2022/007217
Other languages
French (fr)
Japanese (ja)
Inventor
孝悌 清水
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2023281803A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and a storage medium.
  • VR (Virtual Reality) applications, which have become popular in recent years, allow users to view a virtual space in which 3D models are placed from any viewpoint. Such a VR world can be provided mainly by using a non-transmissive HMD (Head Mounted Display) that covers the user's field of vision with a display unit.
  • Patent Literature 1 listed below discloses a technique for arranging a virtual object such as a pillar in the vicinity of the boundary between a live video displayed in the virtual space and the virtual space.
  • In the technology of Patent Literature 1, however, the virtual object merely hides the boundary between the live video and the virtual space.
  • the present disclosure proposes a new and improved information processing device, information processing method, and storage medium that can further improve entertainment in virtual space.
  • According to the present disclosure, there is provided an information processing device including: an acquisition unit that acquires feature information from content included in a first virtual object placed in a virtual space; and a video generation unit that controls display of a second virtual object around at least the content placed in the virtual space based on the feature information acquired by the acquisition unit.
  • Also according to the present disclosure, there is provided a computer-implemented information processing method including: acquiring feature information from content included in a first virtual object placed in a virtual space; and controlling display of a second virtual object around at least the content placed in the virtual space based on the acquired feature information.
  • Further according to the present disclosure, there is provided a storage medium that non-transitorily stores a computer-executable program, the program implementing: an acquisition function that acquires feature information from content included in a first virtual object placed in a virtual space; and a display control function that controls display of a second virtual object around at least the content placed in the virtual space based on the feature information acquired by the acquisition function.
  • FIG. 1 is an explanatory diagram for explaining a configuration example of an information processing system according to the present disclosure.
  • FIG. 2 is an explanatory diagram for explaining an example of a virtual space V in which an XR live event is distributed.
  • FIG. 3 is an explanatory diagram for explaining a functional configuration example of the distribution system 10 according to the present disclosure.
  • FIG. 4 is an explanatory diagram for explaining a functional configuration example of the HMD 40 according to the present disclosure.
  • FIG. 5 is an explanatory diagram for explaining an image example of the first embodiment according to the present disclosure.
  • FIG. 6 is an explanatory diagram for explaining an operation processing example of the first embodiment according to the present disclosure.
  • FIG. 7 is an explanatory diagram for explaining an image example of the second embodiment according to the present disclosure.
  • FIG. 8 is an explanatory diagram for explaining an image example of the second embodiment according to the present disclosure.
  • FIG. 9 is an explanatory diagram for explaining an operation processing example of the second embodiment according to the present disclosure.
  • FIG. 10 is an explanatory diagram for explaining an image example of the third embodiment according to the present disclosure.
  • FIG. 11 is an explanatory diagram for explaining an operation processing example of the third embodiment according to the present disclosure.
  • FIG. 12 is an explanatory diagram for explaining a modification according to the present disclosure.
  • FIG. 13 is an explanatory diagram showing the hardware configuration of the HMD 40 according to the present disclosure.
  • FIG. 1 is an explanatory diagram for explaining a configuration example of an information processing system according to the present disclosure.
  • the information processing system according to the present disclosure includes a network 1, a distribution system 10, and an HMD 40.
  • The network 1 is a wired or wireless transmission path for information transmitted from devices connected to the network 1.
  • the network 1 may include a public line network such as the Internet, a telephone line network, a satellite communication network, various LANs (Local Area Networks) including Ethernet (registered trademark), WANs (Wide Area Networks), and the like.
  • the network 1 may also include a dedicated line network such as IP-VPN (Internet Protocol-Virtual Private Network).
  • the distribution system 10 and the HMD 40 are connected via the network 1.
  • The HMD 40A used by one user and the HMD 40B used by another user are also connected to each other via the network 1.
  • The distribution system 10 is a system used by a distributor-side user who distributes various types of event information such as live video and live audio (hereinafter sometimes referred to as a distribution user). For example, the distribution system 10 transmits to the HMD 40 a virtual space including live video captured by a camera and live audio acquired by a sensor, both described later. In the following description, an event in which live video and live audio are distributed in a virtual space may be collectively referred to as an XR (Extended Reality) live event.
  • the HMD 40 is an example of an information processing device, and is a terminal used by a user who views live video and live audio in virtual space.
  • the HMD 40 is, for example, a head-mounted display to which applications such as VR (Virtual Reality) or AR (Augmented Reality) are applied.
  • a user wearing the HMD 40 can use an input device such as a hand controller to move the avatar, which is the alter ego of the user, in the virtual space.
  • the user wearing the HMD 40 can view the live video from the first-person viewpoint based on the eyes of the avatar from the display of the HMD 40 .
  • the HMD 40 according to the present disclosure acquires feature information from content included in virtual objects placed in the virtual space.
  • the HMD 40 according to the present disclosure controls display of images representing the virtual space based on the acquired feature information.
  • display control includes displaying a virtual object in the virtual space, dynamically displaying the virtual object in the virtual space, and the like. Various details such as images representing virtual objects, content, and virtual space will be described later.
  • FIG. 2 is an explanatory diagram for explaining an example of the virtual space V in which an XR live event is distributed.
  • A virtual space V in which a certain XR live event is distributed includes, for example, a two-dimensional screen VC1, a two-dimensional video CT1, and avatars U participating in the virtual space.
  • the two-dimensional screen VC1 is the first virtual object arranged to output the two-dimensional image CT1.
  • the two-dimensional image CT1 is an example of content, and is a live image captured by a camera.
  • the two-dimensional image CT1 may include images of equipment and facilities such as the performer P1 and the speaker C1 as shown in FIG.
  • the two-dimensional image CT1 is inserted on the two-dimensional screen VC1 arranged in the virtual space V.
  • the two-dimensional image CT1 may be subjected to image processing according to the size and shape of the two-dimensional screen VC1.
  • the content includes the above-described two-dimensional image CT1 and various types of information such as sound information output in conjunction with the two-dimensional image CT1.
  • The avatar U is the virtual stand-in of a user who participates in the virtual space V where the XR live event is distributed. For example, when a user joins the virtual space V where the XR live event is distributed, that user's avatar U1 is displayed in the virtual space V. In addition, when other users (for example, two users) also join the virtual space V, a number of avatars U2 corresponding to the number of those other users is displayed in the virtual space V. Each avatar can move and act based on the corresponding user's operation.
  • The virtual space V in which an XR live event is distributed has been described above, but the virtual space V according to the present disclosure is not limited to this example.
  • For example, virtual avatars such as NPCs (Non Player Characters) may be placed in the virtual space V, and various virtual objects such as buildings and moving objects may also be placed in the virtual space V.
  • FIG. 3 is an explanatory diagram for explaining a functional configuration example of the distribution system 10 according to the present disclosure.
  • distribution system 10 according to the present disclosure includes camera 20 , sensor 25 , and distribution server 30 .
  • The camera 20 captures a subject to obtain video.
  • the camera 20 captures a live venue and acquires live video including performers and equipment or facilities at the live venue.
  • the video captured by the camera 20 is transmitted to the distribution server 30 by any communication method.
  • the sensor 25 is a sensor that acquires live sound at a live venue. Further, the sensor 25 may be a sensor that acquires body position information of the performer. Various information (for example, live sound) acquired by the sensor 25 is transmitted to the distribution server 30 by any communication method.
  • the distribution server 30 is a server used by the distributor.
  • The distribution server 30 includes a control unit 310 and a communication unit 320, as shown in FIG. 3.
  • control unit 310 controls overall operations of the distribution server 30 .
  • the control unit 310 synchronizes live video and live audio received from the camera 20 and the sensor 25 and causes the communication unit 320 to transmit them to the HMD 40 used by the user.
  • Under the control of the control unit 310, the communication unit 320 performs various communications with the HMDs 40 participating in the virtual space where the XR live event is distributed. For example, the communication unit 320 streams to the HMD 40 a virtual space including the live video captured by the camera 20 and the live audio acquired by the sensor 25.
  • FIG. 4 is an explanatory diagram for explaining a functional configuration example of the HMD 40 according to the present disclosure.
  • The HMD 40 according to the present disclosure includes a communication unit 410, a storage unit 420, a display unit 430, an operation unit 440, and a control unit 450.
  • the communication unit 410 performs various communications with the distribution server 30 and other HMDs 40 under the control of the control unit 450 .
  • a plurality of HMDs 40 participating in the virtual space V to which the same XR live is distributed are bi-directionally connected via the network 1 .
  • Storage unit 420 holds software and various data.
  • For example, the storage unit 420 holds learning data obtained by training with pairs of object types and object feature information as training data.
  • The feature information of an object may include information about the shape of the object and information about the color of the object.
  • The target learned from training data is not limited to physical objects.
  • For example, the storage unit 420 may also hold learning data obtained by training with pairs of various targets, such as performers and lights, and the feature information of those targets.
  • the storage unit 420 holds three-dimensional data representing images representing the virtual space, such as virtual objects and particles. Specific examples of virtual objects and particles will be described later.
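  • To make the role of this learning data concrete, here is a minimal sketch in Python of how stored (object type, feature) pairs might be matched at run time. The LearnedObject/LearningStore names, the feature layout, and the cosine-similarity matching are illustrative assumptions; the patent does not specify a recognition algorithm.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LearnedObject:
    object_type: str      # e.g. "speaker", "light", "performer"
    features: np.ndarray  # concatenated shape/color feature vector

class LearningStore:
    """Toy stand-in for the learning data held by the storage unit 420."""

    def __init__(self, entries: list):
        self.entries = entries

    def recognize(self, features: np.ndarray):
        """Return the best-matching object type and its similarity score."""
        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        return max(((e.object_type, cosine(features, e.features))
                    for e in self.entries), key=lambda t: t[1])
```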
  • the display unit 430 displays an image in the virtual space from the first-person viewpoint based on the eyes of the avatar. Note that the first-person viewpoint image based on the avatar's eyes is generated based on sensor data acquired by the orientation sensor mounted on the HMD 40 .
  • the functions of the display unit 430 are implemented by, for example, a CRT (Cathode Ray Tube) display device, a liquid crystal display (LCD) device, or an OLED (Organic Light Emitting Diode) device.
  • the display unit 430 may display various images such as a screen for selecting a virtual space to participate in and an image from a bird's-eye view of the virtual space in which the participant participates.
  • the operation unit 440 controls movement and actions of the avatar. Functions of the operation unit 440 are implemented by a device such as a hand controller. Note that the avatar's action includes, for example, various actions of the avatar such as waving and jumping.
  • The control unit 450 controls overall operations of the HMD 40 according to the present disclosure. As shown in FIG. 4, the control unit 450 includes a color information detection unit 451, an object/body recognition unit 455, a correction processing unit 459, a correlation processing unit 463, a music analysis processing unit 467, and a space drawing processing unit 471.
  • the color information detection unit 451 is an example of an acquisition unit and detects color information of a 2D image. For example, the color information detection unit acquires color information from the background image of the two-dimensional image. More specifically, the color information detection unit 451 acquires color information near the boundary between the 2D video and the virtual space in the background video included in the 2D video. It should be noted that the background image here indicates an area in which the performer and the object included in the two-dimensional image do not exist.
  • the color information detection unit 451 may detect color information of an object included in the two-dimensional image.
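  • As a rough illustration of the boundary-color sampling described above, the following Python sketch averages pixel colors in thin strips at the left and right edges of a video frame. The strip width is an assumed parameter, and a real implementation would first mask out regions where performers or objects appear, per the definition of the background image above.

```python
import numpy as np

def boundary_colors(frame: np.ndarray, strip_frac: float = 0.05):
    """Mean RGB color in thin strips at the left and right edges of an
    H x W x 3 frame, approximating the color information near the boundary
    between the two-dimensional video and the virtual space."""
    w = frame.shape[1]
    strip = max(1, int(w * strip_frac))
    left = frame[:, :strip].reshape(-1, 3).mean(axis=0)
    right = frame[:, -strip:].reshape(-1, 3).mean(axis=0)
    return left, right
```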
  • the object/body recognition unit 455 is an example of an acquisition unit, and acquires feature information from a two-dimensional image. For example, the object/body recognition unit 455 recognizes objects and performers included in the two-dimensional video. The object/body recognition unit 455 also acquires feature information from objects and performers included in the two-dimensional video, and recognizes the objects and performers based on the feature information and learning data held in the storage unit 420 .
  • the correction processing unit 459 performs correction processing such as shape interpolation and noise removal on the images of the objects and performers recognized by the object/body recognition unit 455 .
  • the correlation processing unit 463 calculates the degree of correlation between the image of the object or the performer corrected by the correction processing unit 459 and the three-dimensional data of the virtual object held by the storage unit 420 .
  • the music analysis processing unit 467 is an example of an acquisition unit, and acquires sound information of live audio included in the two-dimensional video output in the virtual space.
  • the music analysis processing unit 467 analyzes the acquired sound information and acquires various kinds of information such as rhythm, tempo, volume change (crescendo and decrescendo) of the live sound.
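  • As one possible (and deliberately crude) illustration of such analysis, the sketch below estimates volume and a rough tempo from mono PCM samples using only NumPy. The frame size, thresholding, and beat-interval heuristic are assumptions, not the patent's method; a production system would use a proper onset detector.

```python
import numpy as np

def analyze_audio(samples: np.ndarray, sr: int, frame: int = 1024):
    """Frame-wise RMS for volume (crescendo/decrescendo) plus a crude BPM
    estimate from peaks in the energy envelope."""
    n = len(samples) // frame
    rms = np.sqrt((samples[: n * frame].reshape(n, frame) ** 2).mean(axis=1))
    onset = np.clip(np.diff(rms), 0, None)          # energy increases only
    peaks = np.where(onset > onset.mean() + onset.std())[0]
    if len(peaks) > 1:
        beat_period = np.median(np.diff(peaks)) * frame / sr  # seconds
        bpm = 60.0 / beat_period if beat_period > 0 else 0.0
    else:
        bpm = 0.0
    return {"bpm": bpm, "rms": rms}
```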
  • the space drawing processing unit 471 is an example of an image generation unit, and controls display of an image representing a virtual space based on the feature information acquired by the object/body recognition unit 455 .
  • a specific example of the video representing the virtual space will be described later.
  • the space drawing processing unit 471 may control the display of the image representing the virtual space based on the color information of the background image detected by the color information detection unit 451 .
  • the space drawing processing unit 471 may control the display of the video representing the virtual space based on the sound information acquired by the music analysis processing unit 467 .
  • the space drawing processing unit 471 controls the display of an image that expresses a virtual space with three-dimensional data of a virtual object similar to an object displayed on a two-dimensional image.
  • FIG. 5 is an explanatory diagram for explaining a video example of the first embodiment according to the present disclosure.
  • the object/body recognition unit 455 recognizes an object included in the two-dimensional image CT1.
  • the object/body recognition unit 455 recognizes the speaker C1 installed at the live venue.
  • The correlation processing unit 463 then calculates the degree of correlation between the speaker C1 and each of the virtual objects held by the storage unit 420.
  • the space drawing processing unit 471 controls the display of the three-dimensional data of the virtual object whose degree of correlation satisfies a predetermined criterion by the correlation processing unit 463 as the image of the second virtual object, and arranges it in the virtual space V.
  • the space drawing processing unit 471 controls the display of the speaker model VC2 having a high degree of correlation with the speaker C1 as the second virtual object. This allows the user to feel more like they are participating in a live performance.
  • a virtual object whose degree of correlation satisfies a predetermined criterion may be, for example, a virtual object whose degree of correlation is equal to or greater than a predetermined value.
  • the space drawing processing unit 471 may control the display of the video of the speaker model VC2 in an area that does not interfere with other virtual objects placed in the virtual space V.
  • The number of speaker models VC2 whose display is controlled may be one or more.
  • Objects included in the two-dimensional image CT1 may change over time. When the speaker C1 disappears from the two-dimensional image CT1, the space drawing processing unit 471 may erase the once-displayed speaker model VC2 from the virtual space, or may leave it displayed. Further, the space drawing processing unit 471 may erase the image of the speaker model VC2 when the speaker C1 has not appeared in the two-dimensional image CT1 for a certain period of time.
  • FIG. 6 is an explanatory diagram for explaining an operation processing example of the first embodiment according to the present disclosure.
  • the object/body recognition unit 455 recognizes an object on a two-dimensional image (S101). For example, when a speaker is included in the two-dimensional image, the object/body recognition unit 455 recognizes the object type of the speaker on the two-dimensional image as “speaker”.
  • the color information detection unit 451 acquires the color information of the object (S105).
  • The correlation processing unit 463 calculates the degree of correlation between the type and color information of the recognized object and each virtual object held by the storage unit 420, together with that virtual object's color information (S109). At this time, the correlation processing unit 463 may determine a virtual object that has a high degree of similarity to the object based on the degree of correlation between the type of the object and the color information of the object. More specifically, when determining the degree of similarity, the weighting parameter for the type of the object may be set larger than the weighting parameter for the color information of the object. As a result, it is possible to improve the accuracy of determining the type of the virtual object, which has a particularly large impact on the user's visual recognition.
  • The control unit 450 then determines whether or not there is a virtual object similar to the recognized object (S113). If it is determined that there is a similar virtual object (S113/Yes), the process proceeds to S117; if it is determined that there is no similar virtual object (S113/No), the control unit 450 according to the present disclosure ends the processing.
  • control unit 450 acquires position information of each virtual object in the virtual space (S117).
  • the space drawing processing unit 471 arranges the virtual objects determined to be similar in S113 at positions that do not interfere with each virtual object (S121), and the control unit 450 according to the present disclosure ends the processing.
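  • Gathering S101 through S121 into code, the first embodiment's flow might look like the sketch below. The weight values, the similarity threshold, and the scene API (find_free_position, spawn) are hypothetical; the patent only requires that the weighting for object type exceed the weighting for color information.

```python
import numpy as np

W_TYPE, W_COLOR = 0.7, 0.3   # assumed values; only W_TYPE > W_COLOR is given
SIMILARITY_THRESHOLD = 0.8   # assumed "predetermined value"

def similarity(obj_type, obj_color, vobj):
    """Weighted correlation of object type and color (S109)."""
    type_score = 1.0 if obj_type == vobj.object_type else 0.0
    color_score = 1.0 / (1.0 + np.linalg.norm(
        np.asarray(obj_color, float) - np.asarray(vobj.color, float)))
    return W_TYPE * type_score + W_COLOR * color_score

def place_similar_virtual_object(obj_type, obj_color, catalog, scene):
    if not catalog:
        return None
    best = max(catalog, key=lambda v: similarity(obj_type, obj_color, v))
    if similarity(obj_type, obj_color, best) < SIMILARITY_THRESHOLD:
        return None                                   # S113/No: end processing
    position = scene.find_free_position(best.bounds)  # S117: avoid interference
    scene.spawn(best, position)                       # S121: arrange in space
    return best
```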
  • The first embodiment according to the present disclosure has been described above. Next, a second embodiment according to the present disclosure will be described with reference to FIGS. 7 to 9.
  • In the second embodiment, the space drawing processing unit 471 controls, as video expressing the virtual space, the display of virtual objects (for example, lighting) or particles that have a color similar to the color indicated by the color information of the background image of the dynamically changing two-dimensional image.
  • FIG. 7 is an explanatory diagram for explaining a video example of the second embodiment according to the present disclosure.
  • the color information detection unit 451 detects color information from the background image of the two-dimensional image CT1.
  • the space drawing processing unit 471 controls the display of the video representing the virtual space. More specifically, the space drawing processing unit 471 controls display of an image having a similar color to the color indicated by the color information acquired by the color information detection unit 451 .
  • The color information detection unit 451 acquires color information near the boundary between the two-dimensional image CT1 and the virtual space V. Then, the space drawing processing unit 471 controls, as a third virtual object, the display of an image VC3 of a light that emits a color similar to the color indicated by the color information. As a result, the visibility of the boundary between the two-dimensional image CT1 and the virtual space V is reduced, and the user can feel more immersed in the virtual space V.
  • the space drawing processing unit 471 may control the display of the light image VC3 so as to hide the boundaries with the virtual space V at both ends of the two-dimensional screen VC1, as shown in FIG.
  • The space drawing processing unit 471 controls the display of an image based on the color information near the left-edge boundary between the two-dimensional image CT1 and the virtual space V so as to hide the boundary at the left edge of the two-dimensional screen VC1.
  • Likewise, the space drawing processing unit 471 controls the display of an image based on the color information near the right-edge boundary so as to hide the boundary at the right edge of the two-dimensional screen VC1.
  • Thus, even when the left and right sides of the two-dimensional screen VC1 have different background colors, the visibility of the boundary between the two-dimensional image CT1 and the virtual space V is reduced, and the user can feel more immersed in the virtual space V.
  • The image displayed based on the color information acquired by the color information detection unit 451 is not limited to the image of a virtual object such as a light. For example, it may be an image of particle effects such as light particles.
  • FIG. 8 is an explanatory diagram for explaining a video example of the second embodiment according to the present disclosure.
  • the space drawing processing unit 471 may control the display of the particle image E1 so as to hide the boundaries with the virtual space V at both ends of the two-dimensional screen VC1.
  • the space drawing processing unit 471 may control display of an image obtained by combining the light image VC3 and the particle image E1 described above.
  • FIG. 9 is an explanatory diagram for explaining an operation processing example of the second embodiment according to the present disclosure.
  • the color information detection unit 451 acquires color information from the background image included in the two-dimensional image (S201).
  • The space drawing processing unit 471 controls the display of lights that emit a color similar to the color indicated by the color information so as to hide the boundaries at both ends of the two-dimensional image (S205).
  • control unit 450 determines whether or not the degree of similarity between the background color of the 2D video included in the 2D screen and the background color of the virtual space is equal to or greater than a threshold (S209). If the degree of similarity is greater than or equal to the threshold (S209/Yes), the process ends, and if the degree of similarity is less than the threshold (S209/No), the process proceeds to S213.
  • the space drawing processing unit 471 controls the display of the image of the light with the expanded light irradiation range (S213). Then, the processing of S209 to S213 is repeated until it is determined in S209 that the degree of similarity is equal to or greater than the threshold.
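  • The S201 to S213 loop could be sketched as follows, reusing boundary_colors from the earlier sketch. The scene and light objects and the render call are hypothetical placeholders; the key point is that the irradiation range keeps widening until the similarity test of S209 passes.

```python
import numpy as np

def color_blend_similarity(composite: np.ndarray, bg_color) -> float:
    """Assumed S209 test: inverse distance between the rendered view's mean
    edge color and the virtual space's background color."""
    left, right = boundary_colors(composite)
    edge = (np.asarray(left) + np.asarray(right)) / 2.0
    return 1.0 / (1.0 + np.linalg.norm(edge - np.asarray(bg_color, float)))

def hide_boundaries(frame, bg_color, scene, threshold=0.9, max_steps=10):
    left_color, right_color = boundary_colors(frame)             # S201
    lights = [scene.spawn_light(color=left_color, edge="left"),  # S205
              scene.spawn_light(color=right_color, edge="right")]
    for _ in range(max_steps):
        composite = scene.render()       # rendered view including the lights
        if color_blend_similarity(composite, bg_color) >= threshold:
            break                        # S209/Yes: colors blend, stop
        for light in lights:
            light.widen()                # S213: expand the irradiation range
```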
  • The second embodiment according to the present disclosure has been described above. Next, a third embodiment according to the present disclosure will be described with reference to FIGS. 10 and 11.
  • In the third embodiment, based on the sound information acquired by the music analysis processing unit 467, the space drawing processing unit 471 controls the display of images such as NPC actions, virtual object actions, and particles as video expressing the virtual space.
  • FIG. 10 is an explanatory diagram for explaining a video example of the third embodiment according to the present disclosure.
  • the music analysis processing unit 467 analyzes the sound information M1 of the live audio included in the two-dimensional image CT1 output in the virtual space V, and acquires various information such as the rhythm of the live audio.
  • In FIG. 10, the sound information M1 is illustrated with a musical note mark, but since the sound information M1 is actually contained in the audio, it is not displayed in the video.
  • the space drawing processing unit 471 may control the display of the video of the virtual object that moves in response to the rhythm of the live sound as the video of the fourth virtual object.
  • The image of a virtual object that moves in response to the rhythm of the sound may be, for example, an image in which the glow stick (psyllium) VC4 shown in FIG. 10 is swung from side to side.
  • The space drawing processing unit 471 may also control, as video representing the virtual space V, the display of an image E2 of particle effects, such as light particles, that move in response to the rhythm of the sound.
  • FIG. 11 is an explanatory diagram for explaining an operation processing example of the third embodiment according to the present disclosure.
  • the music analysis processing unit 467 acquires sound information output in conjunction with a two-dimensional image (S301).
  • the music analysis processing unit 467 analyzes the rhythm, volume, etc. from the acquired sound information (S305).
  • the space drawing processing unit 471 displays the virtual object corresponding to the analysis result in S305 in the virtual space (S309), and the control unit 450 according to the present disclosure ends the processing.
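  • Driving such motion from the analysis result can be as simple as the following sketch, which swings a glow-stick model from side to side once per beat using a tempo estimate such as the bpm value from the earlier audio sketch. The angle range and the per-frame rotation hook are assumptions.

```python
import math

def psyllium_angle(t: float, bpm: float, max_deg: float = 45.0) -> float:
    """Swing angle (degrees) at time t, oscillating once per beat."""
    if bpm <= 0:
        return 0.0
    beat_period = 60.0 / bpm
    phase = (t % beat_period) / beat_period      # position within the beat
    return max_deg * math.sin(2.0 * math.pi * phase)

# Per rendered frame (hypothetical scene API):
#   vobj.set_rotation_z(psyllium_angle(now, analysis["bpm"]))
```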
  • In the examples above, the two-dimensional screen is a planar screen, but the two-dimensional screen according to the present disclosure is not limited to such an example.
  • a modification according to the present disclosure will be described with reference to FIG. 12 .
  • FIG. 12 is an explanatory diagram for explaining a modification according to the present disclosure.
  • the two-dimensional screen VC1 may have a curved shape as shown in FIG.
  • the two-dimensional screen VC1 may have a curved shape with a curvature radius corresponding to a predetermined area range on the virtual space V.
  • The predetermined area range may be, for example, the crowd area PA1 shown in FIG. 12. This makes it possible to reduce the effects of video distortion that may occur depending on the viewing position within the crowd area PA1, giving the user the feeling of participating in an actual live performance.
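  • One plausible geometric reading of such a radius of curvature (an assumption; the patent gives no formula) is to center the screen's arc on the crowd area, so every point of the screen sits at roughly the same viewing distance from the area's center, reducing apparent distortion:

```python
import math

def screen_curvature(crowd_center, screen_center, screen_width):
    """Radius of an arc centered on the crowd area, plus the half-angle the
    screen (treated as an arc of length screen_width) subtends."""
    dx = screen_center[0] - crowd_center[0]
    dz = screen_center[1] - crowd_center[1]
    radius = math.hypot(dx, dz)                  # crowd center -> screen
    half_angle = screen_width / (2.0 * radius)   # radians, arc = r * angle
    return radius, half_angle
```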
  • <<5. Hardware configuration example>> The embodiments according to the present disclosure have been described above. The information processing described above is realized by cooperation between software and the hardware of the HMD 40 described below. Note that the hardware configuration described below can also be applied to the camera 20, the sensor 25, or the distribution server 30 included in the distribution system 10.
  • FIG. 13 is an explanatory diagram showing the hardware configuration of the HMD 40 according to the present disclosure.
  • The HMD 40 includes a CPU (Central Processing Unit) 4201, a ROM (Read Only Memory) 4202, a RAM (Random Access Memory) 4203, an input device 4208, an output device 4210, a storage device 4211, a drive 4212, an imaging device 4213, and a communication device 4215.
  • the CPU 4201 functions as an arithmetic processing device and a control device, and controls the overall operation of the HMD 40 according to various programs.
  • the CPU 4201 may be a microprocessor.
  • the ROM 4202 stores programs and calculation parameters used by the CPU 4201 .
  • the RAM 4203 temporarily stores programs used in the execution of the CPU 4201, parameters that change as appropriate during the execution, and the like. These are interconnected by a host bus comprising a CPU bus or the like.
  • The input device 4208 includes input means for the user to input information, such as a hand controller, mouse, keyboard, touch panel, buttons, microphone, switches, and levers, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 4201.
  • a user using the HMD 40 can input various data to the HMD 40 and instruct processing operations by operating the input device 4208 .
  • Output devices 4210 include display devices such as liquid crystal display (LCD) devices, OLED devices, and lamps, for example.
  • output device 4210 includes audio output devices such as speakers and headphones.
  • the display device displays an image of an avatar viewpoint in virtual space.
  • the audio output device converts audio data including live audio into audio and outputs the audio.
  • the storage device 4211 is a data storage device configured as an example of the storage unit 420 of the HMD 40 according to the present disclosure.
  • the storage device 4211 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like.
  • the storage device 4211 stores programs executed by the CPU 4201 and various data.
  • a drive 4212 is a reader/writer for storage media, and is built in or externally attached to the HMD 40 .
  • the drive 4212 reads information recorded in the removable storage medium 44 such as a semiconductor memory and outputs it to the RAM 4203 .
  • Drive 4212 can also write information to removable storage medium 44 .
  • the imaging device 4213 includes an imaging optical system such as an imaging lens and a zoom lens for condensing light, and a signal conversion element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
  • the imaging optical system collects light emitted from a subject to form a subject image in a signal conversion section, and the signal conversion element converts the formed subject image into an electrical image signal.
  • The communication device 4215 is, for example, a communication interface including a communication device for connecting to the network 1. The communication device 4215 may be a wireless-LAN-compatible communication device, an LTE-compatible communication device, or a wired communication device that performs wired communication.
  • Note that some of the functions of the HMD 40 described above may be implemented in an information processing device such as a server configured separately from the HMD 40.
  • For example, the HMD 40 may transmit information about the two-dimensional video and audio in the virtual space to the server, and the server may execute various processes such as recognizing objects included in the two-dimensional video and analyzing rhythms included in the sound information. This can reduce the processing load on the HMD 40.
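  • A minimal sketch of this offloading split (the message format and endpoint are hypothetical; the patent does not define a protocol):

```python
import json
import urllib.request

def offload_analysis(server_url: str, frame_bytes: bytes, audio_bytes: bytes):
    """Send a 2D video frame and an audio chunk to a server and receive the
    recognition/rhythm results, so the HMD only has to render."""
    payload = json.dumps({
        "frame": frame_bytes.hex(),   # toy encoding for illustration
        "audio": audio_bytes.hex(),
    }).encode()
    req = urllib.request.Request(server_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"objects": [...], "bpm": 120.0}
```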
  • the object/body recognition unit 455 may recognize the audience. Then, the space drawing processing unit 471 may control display of images such as particles that move according to the amount of movement of the spectator.
  • each step in the processing of the HMD 40 in this specification does not necessarily have to be processed in chronological order according to the order described as the flowchart.
  • each step in the processing of the HMD 40 may be processed in an order different from the order described as the flowchart or in parallel.
  • It is also possible to write a computer program for causing hardware such as the CPU, ROM, and RAM built into the HMD 40, the camera 20, the sensor 25, and the distribution server 30 to exhibit functions equivalent to each configuration of the HMD 40, the camera 20, the sensor 25, and the distribution server 30 described above. A storage medium storing the computer program is also provided.
  • (1) An information processing device including: an acquisition unit that acquires feature information from content included in a first virtual object placed in a virtual space; and a video generation unit that controls display of a second virtual object around at least the content placed in the virtual space based on the feature information acquired by the acquisition unit.
  • (2) The information processing device according to (1), wherein the acquisition unit acquires the feature information from an object included in the content.
  • (3) The information processing device according to (2), wherein the acquisition unit recognizes an object based on the acquired feature information, and the video generation unit controls, as an image of the second virtual object, display of a virtual object whose degree of correlation with the object recognized by the acquisition unit satisfies a predetermined criterion.
  • (4) The information processing device according to (3), wherein a virtual object whose degree of correlation satisfies a predetermined criterion is a virtual object whose degree of correlation is equal to or greater than a predetermined value.
  • (5) The information processing device according to any one of (2) to (4), wherein the video generation unit controls display of the image of the second virtual object in an area that does not interfere with other virtual objects placed in the virtual space.
  • (6) The information processing device according to any one of (1) to (5), wherein the acquisition unit acquires color information from a background video included in the content, and the video generation unit controls display of an image representing the virtual space based on the color information of the background video acquired by the acquisition unit.
  • (7) The information processing device according to (6), wherein the video generation unit controls display of an image having a color similar to the color indicated by the color information of the background video acquired by the acquisition unit.
  • (8) The information processing device according to (7), wherein the video generation unit controls, based on the color information of the background video acquired by the acquisition unit, display of an image of a third virtual object emitting light of a color similar to the color indicated by the color information of the background video.
  • (9) The information processing device according to any one of (6) to (8), wherein the video generation unit controls display of an image based on the color information of the background video acquired by the acquisition unit so as to hide the boundaries with the virtual space at both ends of the first virtual object.
  • (10) The information processing device according to any one of (6) to (9), wherein the color information of the background video indicates a color near the boundary between the content and the virtual space.
  • (11) The information processing device according to (10), wherein the video generation unit controls display of an image based on color information near the left-edge boundary between the content and the virtual space so as to hide the boundary with the virtual space at the left edge of the first virtual object, and controls display of an image based on color information near the right-edge boundary between the content and the virtual space so as to hide the boundary with the virtual space at the right edge of the first virtual object.
  • (12) The information processing device according to any one of (1) to (11), wherein the acquisition unit acquires sound information included in the content output in the virtual space, and the video generation unit controls display of an image representing the virtual space based on the sound information acquired by the acquisition unit.
  • (13) The information processing device according to (12), wherein the video generation unit controls display of a video of a fourth virtual object representing the virtual space based on the sound information acquired by the acquisition unit.
  • (14) The information processing device according to (13), wherein the video generation unit controls display of a video that moves in response to the rhythm of the sound acquired by the acquisition unit.
  • (15) The information processing device according to any one of (1) to (14), wherein the first virtual object is a screen having a curved shape.
  • (16) The information processing device according to (15), wherein the curved shape has a radius of curvature corresponding to a predetermined area range in the virtual space.
  • (17) A computer-implemented information processing method including: acquiring feature information from content included in a first virtual object placed in a virtual space; and controlling display of a second virtual object around at least the content placed in the virtual space based on the acquired feature information.
  • (18) A storage medium that non-transitorily stores a computer-executable program, the program implementing: an acquisition function that acquires feature information from content included in a first virtual object placed in a virtual space; and a display control function that controls display of a second virtual object around at least the content placed in the virtual space based on the feature information acquired by the acquisition function.

Abstract

[Problem] To further improve entertainment properties in a virtual space. [Solution] Provided is an information processing device comprising: an acquisition unit for acquiring feature information from content included in a first virtual object arranged in a virtual space; and a video generation unit for, on the basis of the feature information acquired by the acquisition unit, controlling display of a second virtual object in at least the surroundings of the content arranged in the virtual space.

Description

Information processing device, information processing method, and storage medium
 The present disclosure relates to an information processing device, an information processing method, and a storage medium.
 VR (Virtual Reality) applications, which have become popular in recent years, allow users to view a virtual space in which 3D models are placed from any viewpoint. Such a VR world can be provided mainly by using a non-transmissive HMD (Head Mounted Display) that covers the user's field of vision with a display unit.
 In addition, regarding technology for providing a virtual space, technology has been developed for displaying, in a virtual space, live video obtained by capturing an event in real space. For example, Patent Literature 1 below discloses a technique for arranging a virtual object such as a pillar near the boundary between a live video displayed in the virtual space and the virtual space.
 Patent Literature 1: International Publication No. WO 2020/129115
 However, with the technology described in Patent Literature 1, the virtual object merely hides the boundary between the live video and the virtual space.
 Therefore, the present disclosure proposes a new and improved information processing device, information processing method, and storage medium capable of further improving the entertainment quality of a virtual space.
 According to the present disclosure, there is provided an information processing device including: an acquisition unit that acquires feature information from content included in a first virtual object placed in a virtual space; and a video generation unit that controls display of a second virtual object around at least the content placed in the virtual space based on the feature information acquired by the acquisition unit.
 Further, according to the present disclosure, there is provided a computer-implemented information processing method including: acquiring feature information from content included in a first virtual object placed in a virtual space; and controlling display of a second virtual object around at least the content placed in the virtual space based on the acquired feature information.
 Further, according to the present disclosure, there is provided a storage medium that non-transitorily stores a computer-executable program, the program implementing: an acquisition function that acquires feature information from content included in a first virtual object placed in a virtual space; and a display control function that controls display of a second virtual object around at least the content placed in the virtual space based on the feature information acquired by the acquisition function.
 FIG. 1 is an explanatory diagram for explaining a configuration example of an information processing system according to the present disclosure. FIG. 2 is an explanatory diagram for explaining an example of a virtual space V in which an XR live event is distributed. FIG. 3 is an explanatory diagram for explaining a functional configuration example of the distribution system 10 according to the present disclosure. FIG. 4 is an explanatory diagram for explaining a functional configuration example of the HMD 40 according to the present disclosure. FIG. 5 is an explanatory diagram for explaining an image example of the first embodiment according to the present disclosure. FIG. 6 is an explanatory diagram for explaining an operation processing example of the first embodiment according to the present disclosure. FIG. 7 is an explanatory diagram for explaining an image example of the second embodiment according to the present disclosure. FIG. 8 is an explanatory diagram for explaining an image example of the second embodiment according to the present disclosure. FIG. 9 is an explanatory diagram for explaining an operation processing example of the second embodiment according to the present disclosure. FIG. 10 is an explanatory diagram for explaining an image example of the third embodiment according to the present disclosure. FIG. 11 is an explanatory diagram for explaining an operation processing example of the third embodiment according to the present disclosure. FIG. 12 is an explanatory diagram for explaining a modification according to the present disclosure. FIG. 13 is an explanatory diagram showing the hardware configuration of the HMD 40 according to the present disclosure.
 Preferred embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. In this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
 The "Description of Embodiments" will be given in the following order of items.
 1. Overview
  1.1. Outline of information processing system
  1.2. Example of virtual space where XR live is distributed
  1.3. Functional configuration example of distribution system 10
 2. Functional configuration example of HMD 40
 3. Embodiments
  3.1. First embodiment
  3.2. Second embodiment
  3.3. Third embodiment
 4. Modification
 5. Hardware configuration example
 6. Supplement
 <<1. Overview>>
 As an embodiment of the present disclosure, a mechanism for further improving the entertainment quality of a virtual space will be described.
 <1.1. Outline of information processing system>
 FIG. 1 is an explanatory diagram for explaining a configuration example of an information processing system according to the present disclosure. As shown in FIG. 1, the information processing system according to the present disclosure includes a network 1, a distribution system 10, and an HMD 40.
 (Network 1)
 The network 1 is a wired or wireless transmission path for information transmitted from devices connected to the network 1. For example, the network 1 may include public networks such as the Internet, telephone networks, and satellite communication networks, as well as various LANs (Local Area Networks) including Ethernet (registered trademark) and WANs (Wide Area Networks). The network 1 may also include dedicated networks such as an IP-VPN (Internet Protocol-Virtual Private Network).
 The distribution system 10 and the HMD 40 are connected via the network 1. The HMD 40A used by one user and the HMD 40B used by another user are also connected to each other via the network 1.
 (Distribution system 10)
 The distribution system 10 is a system used by a distributor-side user who distributes various types of event information such as live video and live audio (hereinafter sometimes referred to as a distribution user). For example, the distribution system 10 transmits to the HMD 40 a virtual space including live video captured by a camera and live audio acquired by a sensor, both described later. In the following description, an event in which live video and live audio are distributed in a virtual space may be collectively referred to as an XR (Extended Reality) live event.
 (HMD 40)
 The HMD 40 is an example of an information processing device, and is a terminal used by a user who views live video and live audio in a virtual space. The HMD 40 is, for example, a head-mounted display running applications such as VR (Virtual Reality) or AR (Augmented Reality).
 A user wearing the HMD 40 can use an input device such as a hand controller to move an avatar, which is the user's virtual stand-in, in the virtual space.
 Also, the user wearing the HMD 40 can view live video from a first-person viewpoint based on the avatar's eyes on the display of the HMD 40.
 Further, the HMD 40 according to the present disclosure acquires feature information from content included in a virtual object placed in the virtual space, and controls display of images representing the virtual space based on the acquired feature information. Display control includes displaying a virtual object in the virtual space, dynamically animating a virtual object in the virtual space, and the like. Details of virtual objects, content, and images representing the virtual space will be described later.
 Next, with reference to FIG. 2, an example of a virtual space in which an XR live event is distributed will be described.
 <1.2. Example of virtual space where XR live is distributed>
 FIG. 2 is an explanatory diagram for explaining an example of the virtual space V in which an XR live event is distributed. A virtual space V in which a certain XR live event is distributed includes, for example, a two-dimensional screen VC1, a two-dimensional video CT1, and avatars U participating in the virtual space.
 The two-dimensional screen VC1 is a first virtual object arranged to output the two-dimensional video CT1. The two-dimensional video CT1 is an example of content and is live video captured by a camera. The two-dimensional video CT1 may include images of performers such as the performer P1 and of equipment and facilities such as the speaker C1, as shown in FIG. 2. The two-dimensional video CT1 is inserted into the two-dimensional screen VC1 arranged in the virtual space V. At this time, the two-dimensional video CT1 may be subjected to image processing according to the size and shape of the two-dimensional screen VC1. Note that the content includes the two-dimensional video CT1 described above and various types of information such as sound information output in conjunction with the two-dimensional video CT1.
 The avatar U is the virtual stand-in of a user who participates in the virtual space V where the XR live event is distributed. For example, when a user joins the virtual space V where the XR live event is distributed, that user's avatar U1 is displayed in the virtual space V. In addition, when other users (for example, two users) also join the virtual space V, a number of avatars U2 corresponding to the number of those other users is displayed in the virtual space V. Each avatar can move and act based on the corresponding user's operation.
 An example of the virtual space V in which an XR live event is distributed has been described above, but the virtual space V according to the present disclosure is not limited to this example. For example, virtual avatars such as NPCs (Non Player Characters) may be placed in the virtual space V, and various virtual objects such as buildings and moving objects may also be placed in the virtual space V.
 The outline of the information processing system according to the present disclosure has been described above. Next, a functional configuration example of the distribution system 10 according to the present disclosure will be described with reference to FIG. 3.
 <1.3. Functional configuration example of distribution system 10>
 FIG. 3 is an explanatory diagram for explaining a functional configuration example of the distribution system 10 according to the present disclosure. As shown in FIG. 3, the distribution system 10 according to the present disclosure includes a camera 20, a sensor 25, and a distribution server 30.
(Camera 20)
The camera 20 captures a subject and acquires video. For example, the camera 20 captures a live venue and acquires live video including the performers, equipment, and facilities at the venue. The video acquired by the camera 20 is transmitted to the distribution server 30 by an arbitrary communication method.
(Sensor 25)
The sensor 25 is a sensor that acquires the live audio at the live venue. The sensor 25 may also be a sensor that acquires information such as the performer's body position. The various kinds of information acquired by the sensor 25 (for example, the live audio) are transmitted to the distribution server 30 by an arbitrary communication method.
(Distribution Server 30)
The distribution server 30 is a server used on the distributor side. As shown in FIG. 3, the distribution server 30 includes a control unit 310 and a communication unit 320.
{Control Unit 310}
The control unit 310 controls the overall operation of the distribution server 30. For example, the control unit 310 synchronizes the live video and live audio received from the camera 20 and the sensor 25, and causes the communication unit 320 to transmit them to the HMDs 40 used on the user side.
{Communication Unit 320}
Under the control of the control unit 310, the communication unit 320 performs various kinds of communication with the HMDs 40 participating in the virtual space in which the XR live event is distributed. For example, the communication unit 320 streams, to the HMDs 40, the virtual space including the live video captured by the camera 20 and the live audio acquired by the sensor 25.
The overview according to the present disclosure has been described above. Next, a functional configuration example of the HMD 40 according to the present disclosure will be described with reference to FIG. 4.
<<2. Functional Configuration Example of the HMD 40>>
FIG. 4 is an explanatory diagram for explaining a functional configuration example of the HMD 40 according to the present disclosure. As shown in FIG. 4, the HMD 40 according to the present disclosure includes a communication unit 410, a storage unit 420, a display unit 430, an operation unit 440, and a control unit 450.
(Communication Unit 410)
The communication unit 410 performs various kinds of communication with the distribution server 30 and other HMDs 40 under the control of the control unit 450. The plurality of HMDs 40 participating in the virtual space V to which the same XR live event is distributed are bidirectionally connected via the network 1.
(Storage Unit 420)
The storage unit 420 holds software and various kinds of data. For example, the storage unit 420 holds learning data obtained by training with pairs of an object type and that object's feature information as teacher data. The feature information of an object may include information about the object's shape and information about its color. The targets learned as teacher data are not limited to objects; for example, the storage unit 420 may hold learning data obtained by training with pairs of various targets, such as performers and lights, and the feature information of those targets as teacher data.
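Purely as an illustration of this kind of teacher data, and not as part of the disclosed configuration, the following Python sketch builds a toy classifier from (feature information, object type) pairs, assuming scikit-learn; the feature layout (a shape descriptor concatenated with a mean RGB color) and the sample values are assumptions made for this sketch. The fitted model plays the role of the learning data held in the storage unit 420 and consulted by the object/body recognition unit 455.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Teacher data: each row pairs feature information (shape descriptor + mean RGB)
# with an object-type label; the values are illustrative only.
features = np.array([
    [0.9, 0.1, 0.2, 30, 30, 30],    # boxy, dark object -> "speaker"
    [0.2, 0.8, 0.5, 200, 180, 50],  # slender, warm-colored object -> "light"
])
labels = ["speaker", "light"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(features, labels)  # the fitted model stands in for the stored learning data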
The storage unit 420 also holds three-dimensional data representing video that expresses the virtual space, such as virtual objects and particles. Specific examples of the virtual objects and particles will be described later.
(Display Unit 430)
The display unit 430 displays video of the virtual space from a first-person viewpoint based on the avatar's eyes. The first-person video based on the avatar's eyes is generated from sensor data acquired by a posture sensor mounted on the HMD 40. The functions of the display unit 430 are implemented by, for example, a CRT (Cathode Ray Tube) display device, a liquid crystal display (LCD) device, or an OLED (Organic Light Emitting Diode) device. The display unit 430 may also display various other video, such as a screen for selecting a virtual space to join and a bird's-eye view of the virtual space in which the user is participating.
(Operation Unit 440)
The operation unit 440 controls the movement and actions of the avatar. The functions of the operation unit 440 are implemented by a device such as a hand controller. The avatar's actions include, for example, various motions such as waving a hand and jumping.
(Control Unit 450)
The control unit 450 controls the overall operation of the HMD 40 according to the present disclosure. As shown in FIG. 4, the control unit 450 includes a color information detection unit 451, an object/body recognition unit 455, a correction processing unit 459, a correlation processing unit 463, a music analysis processing unit 467, and a space drawing processing unit 471.
{Color Information Detection Unit 451}
The color information detection unit 451 is an example of an acquisition unit, and detects the color information of the two-dimensional video. For example, the color information detection unit 451 acquires color information from the background video of the two-dimensional video. More specifically, the color information detection unit 451 acquires the color information near the boundary between the two-dimensional video and the virtual space within the background video included in the two-dimensional video. Here, the background video refers to the region of the two-dimensional video in which no performer or object is present.
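As a non-limiting sketch of this detection, the following Python function samples the mean color of narrow strips along the left and right edges of a frame, assuming H x W x 3 NumPy frames of the kind OpenCV produces; the 16-pixel margin is an assumption, and masking out performer and object regions is omitted for brevity.

import numpy as np

def boundary_colors(frame: np.ndarray, margin: int = 16):
    """Mean colors of the left and right edge strips of a video frame.

    A faithful implementation would first mask out regions containing the
    performer or objects; that step is omitted here.
    """
    left = frame[:, :margin].reshape(-1, 3).mean(axis=0)
    right = frame[:, -margin:].reshape(-1, 3).mean(axis=0)
    return left, right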
The color information detection unit 451 may also detect the color information of an object included in the two-dimensional video.
{Object/Body Recognition Unit 455}
The object/body recognition unit 455 is an example of an acquisition unit, and acquires feature information from the two-dimensional video. For example, the object/body recognition unit 455 recognizes objects and performers included in the two-dimensional video. The object/body recognition unit 455 acquires feature information from the objects and performers included in the two-dimensional video, and recognizes them based on that feature information and the learning data held in the storage unit 420.
{Correction Processing Unit 459}
The correction processing unit 459 performs correction processing, such as shape interpolation and noise removal, on the video of the objects and performers recognized by the object/body recognition unit 455.
{Correlation Processing Unit 463}
The correlation processing unit 463 calculates the degree of correlation between the video of an object or performer corrected by the correction processing unit 459 and the three-dimensional data of the virtual objects held in the storage unit 420.
{Music Analysis Processing Unit 467}
The music analysis processing unit 467 is an example of an acquisition unit, and acquires the sound information of the live audio included in the two-dimensional video output in the virtual space. For example, the music analysis processing unit 467 analyzes the acquired sound information and obtains various kinds of information such as the rhythm, tempo, and volume changes (crescendo and decrescendo) of the live audio.
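As an illustrative sketch only, the following Python code shows the kind of offline analysis involved, assuming the librosa library; a live implementation would need an incremental, streaming variant, and the input file name is hypothetical.

import librosa
import numpy as np

y, sr = librosa.load("live_audio.wav")            # hypothetical recorded input
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)  # rhythm: beat instants
rms = librosa.feature.rms(y=y)[0]                 # frame-wise volume envelope
volume_trend = np.gradient(rms)                   # rising values suggest a crescendo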
{Space Drawing Processing Unit 471}
The space drawing processing unit 471 is an example of a video generation unit, and controls the display of video expressing the virtual space based on the feature information acquired by the object/body recognition unit 455. Specific examples of video expressing the virtual space will be described later.
The space drawing processing unit 471 may also control the display of video expressing the virtual space based on the color information of the background video detected by the color information detection unit 451.
The space drawing processing unit 471 may also control the display of video expressing the virtual space based on the sound information acquired by the music analysis processing unit 467.
The functional configuration example of the HMD 40 according to the present disclosure has been described above. Next, specific examples of the "video expressing the virtual space" whose display is controlled by the space drawing processing unit 471 will be described in turn.
<<3. Examples>>
<3.1. First Example>
In the first example according to the present disclosure, the space drawing processing unit 471 controls the display of the three-dimensional data of a virtual object similar to an object shown in the two-dimensional video, as video expressing the virtual space.
(Video Example)
FIG. 5 is an explanatory diagram for explaining a video example of the first example according to the present disclosure. First, the object/body recognition unit 455 recognizes an object included in the two-dimensional video CT1. In the example shown in FIG. 5, the object/body recognition unit 455 recognizes the speaker C1 installed at the live venue. The correlation processing unit 463 then calculates the degree of correlation between the speaker C1 and each of the virtual objects held in the storage unit 420.
The space drawing processing unit 471 then controls the display of the three-dimensional data of a virtual object whose degree of correlation, calculated by the correlation processing unit 463, satisfies a predetermined criterion, as the video of a second virtual object, and places it in the virtual space V. In the example shown in FIG. 5, the space drawing processing unit 471 controls the display of a speaker model VC2 having a high degree of correlation with the speaker C1 as the second virtual object. This can make the user feel more as if they are attending the live performance. The virtual object whose degree of correlation satisfies the predetermined criterion may be, for example, a virtual object whose degree of correlation is equal to or greater than a predetermined value.
The space drawing processing unit 471 may also control the display of the video of the speaker model VC2 in an area that does not interfere with the other virtual objects placed in the virtual space V. The number of speaker models VC2 whose display is controlled may be one or more.
The objects included in the two-dimensional video CT1 can change over time. When the speaker C1 is no longer shown in the two-dimensional video CT1, the space drawing processing unit 471 may either remove the once-displayed speaker model VC2 from the virtual space or leave it in place. The space drawing processing unit 471 may also remove the video of the speaker model VC2 when the speaker C1 has not appeared in the two-dimensional video CT1 for a certain period of time.
(Operation Example)
FIG. 6 is an explanatory diagram for explaining an operation processing example of the first example according to the present disclosure. First, the object/body recognition unit 455 recognizes an object in the two-dimensional video (S101). For example, when a speaker appears in the two-dimensional video, the object/body recognition unit 455 recognizes the type of that object as "speaker".
Next, the color information detection unit 451 acquires the color information of the object (S105).
The correlation processing unit 463 then calculates the degree of correlation between each virtual object held in the storage unit 420, together with its color information, and the type and color information of the recognized object (S109). At this time, the correlation processing unit 463 determines which virtual object has a high degree of similarity to the object based on the respective degrees of correlation of the object type and the object color information; for example, weight parameters may be provided for the similarity judgment. More specifically, the weight parameter for the object type may be set larger than the weight parameter for the object color information. This can improve the accuracy of determining the object type of the virtual object, which has a particularly large influence on what the user sees.
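A minimal sketch of such a weighted similarity judgment follows; the concrete weight values and the RGB distance measure are assumptions for illustration, not values prescribed by the present disclosure.

import numpy as np

W_TYPE, W_COLOR = 0.7, 0.3  # type weight set larger than the color weight

def similarity(obj_type, obj_color, vobj_type, vobj_color):
    type_score = 1.0 if obj_type == vobj_type else 0.0
    # Color closeness normalized by the maximum RGB distance (~441.7).
    color_score = 1.0 - np.linalg.norm(np.subtract(obj_color, vobj_color)) / 441.7
    return W_TYPE * type_score + W_COLOR * max(color_score, 0.0)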
Based on the degree of correlation calculated in S109, the control unit 450 then determines whether there is a virtual object similar to the recognized object (S113). If it is determined that there is a similar virtual object (S113/Yes), the processing proceeds to S117; if it is determined that there is no similar virtual object (S113/No), the control unit 450 according to the present disclosure ends the processing.
If it is determined that there is a similar virtual object (S113/Yes), the control unit 450 acquires the position information of each virtual object in the virtual space (S117).
The space drawing processing unit 471 then places the virtual object determined to be similar in S113 at a position that does not interfere with the other virtual objects (S121), and the control unit 450 according to the present disclosure ends the processing.
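A minimal sketch of this placement step follows; testing interference by a minimum distance between object centers is an illustrative simplification of whatever collision test the renderer actually uses.

import numpy as np

def find_free_position(candidates, placed, min_dist=2.0):
    """Return the first candidate position far enough from all placed objects."""
    for pos in candidates:
        if all(np.linalg.norm(np.subtract(pos, p)) >= min_dist for p in placed):
            return pos
    return None  # no interference-free area found; skip the placement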
The first example according to the present disclosure has been described above. Next, a second example according to the present disclosure will be described with reference to FIGS. 7 to 9.
<3.2. Second Example>
In the second example according to the present disclosure, the space drawing processing unit 471 controls the display of a virtual object (for example, lighting) or particles whose color is similar to the color indicated by the color information of the background video of the two-dimensional video, which can change dynamically, as video expressing the virtual space.
(Video Example)
FIG. 7 is an explanatory diagram for explaining a video example of the second example according to the present disclosure. First, the color information detection unit 451 detects color information from the background video of the two-dimensional video CT1.
The space drawing processing unit 471 then controls the display of video expressing the virtual space based on the color information acquired by the color information detection unit 451. More specifically, the space drawing processing unit 471 controls the display of video whose color is similar to the color indicated by the acquired color information.
For example, as shown in FIG. 7, the color information detection unit 451 acquires the color information near the boundary between the two-dimensional video CT1 and the virtual space V. The space drawing processing unit 471 then controls the display of a light video VC3, which emits light of a color similar to the color indicated by that color information, as a third virtual object. This lowers the visibility of the boundary between the two-dimensional video CT1 and the virtual space V, and can improve the user's sense of immersion in the virtual space V.
As shown in FIG. 7, the space drawing processing unit 471 may also control the display of the light video VC3 so as to hide the boundaries with the virtual space V at both ends of the two-dimensional screen VC1. For example, the space drawing processing unit 471 controls the display of video based on the color information near the left-edge boundary between the two-dimensional video CT1 and the virtual space V so as to hide the boundary with the virtual space V at the left edge of the two-dimensional screen VC1, and likewise controls the display of video based on the color information near the right-edge boundary so as to hide the boundary at the right edge of the two-dimensional screen VC1. As a result, even when the background color differs between the left and right of the two-dimensional screen VC1, the visibility of the boundary between the two-dimensional video CT1 and the virtual space V is lowered, and the user's sense of immersion in the virtual space V can be improved.
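Building on the boundary_colors sketch above, the following illustrative code derives the two boundary-hiding lights; the Light structure is a hypothetical stand-in for the renderer's own light object, and its radius field anticipates the widening loop of the operation example described below.

from dataclasses import dataclass

@dataclass
class Light:
    position: tuple       # world-space anchor, e.g. an edge of the 2D screen VC1
    color: tuple          # same color family as the adjacent background video
    radius: float = 1.0   # irradiation range, widened in the S209-S213 loop

def boundary_lights(left_color, right_color, left_edge, right_edge):
    # One light per screen edge, each tinted to match the adjacent background.
    return [Light(left_edge, tuple(left_color)),
            Light(right_edge, tuple(right_color))]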
The video based on the color information acquired by the color information detection unit 451 is not limited to the video of a virtual object such as a light. For example, it may be video of particles such as light specks or grains.
FIG. 8 is an explanatory diagram for explaining another video example of the second example according to the present disclosure. As shown in FIG. 8, the space drawing processing unit 471 may control the display of a particle video E1 so as to hide the boundaries with the virtual space V at both ends of the two-dimensional screen VC1.
The space drawing processing unit 471 may also control the display of video that combines the light video VC3 and the particle video E1 described above.
(Operation Example)
FIG. 9 is an explanatory diagram for explaining an operation processing example of the second example according to the present disclosure. First, the color information detection unit 451 acquires color information from the background video included in the two-dimensional video (S201).
Next, the space drawing processing unit 471 controls the display of lights that emit light of a color similar to the color indicated by the color information, so as to hide the boundaries at both ends of the two-dimensional video (S205).
The control unit 450 then determines whether the degree of similarity between the background color of the two-dimensional video on the two-dimensional screen and the background color of the virtual space is equal to or greater than a threshold (S209). If the similarity is equal to or greater than the threshold (S209/Yes), the processing ends; if the similarity is less than the threshold (S209/No), the processing proceeds to S213.
If the similarity is less than the threshold (S209/No), the space drawing processing unit 471 controls the display of the light video with an expanded irradiation range (S213). The processing of S209 to S213 is then repeated until it is determined in S209 that the similarity is equal to or greater than the threshold.
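A minimal sketch of this S209-S213 loop follows, reusing the Light structure sketched above; the similarity measure, the expansion step, and the sample_space_bg callback (which would re-render and re-sample the virtual-space background color after each change) are assumptions.

import numpy as np

def color_similarity(c1, c2):
    # 1.0 for identical RGB colors, falling toward 0.0 as they diverge.
    return 1.0 - np.linalg.norm(np.subtract(c1, c2)) / 441.7

def blend_boundary(light, video_bg, sample_space_bg,
                   threshold=0.9, step=0.5, max_radius=20.0):
    # S209: compare the two backgrounds; S213: widen the irradiated range and re-check.
    while color_similarity(video_bg, sample_space_bg(light)) < threshold:
        if light.radius >= max_radius:
            break  # stop expanding rather than loop forever
        light.radius += step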
The second example according to the present disclosure has been described above. Next, a third example according to the present disclosure will be described with reference to FIGS. 10 and 11.
<3.3. Third Example>
In the third example according to the present disclosure, the space drawing processing unit 471 controls the display of video such as NPC motions, virtual object motions, and particles, as video expressing the virtual space, based on the sound information acquired by the music analysis processing unit 467.
(Video Example)
FIG. 10 is an explanatory diagram for explaining a video example of the third example according to the present disclosure. First, the music analysis processing unit 467 analyzes the sound information M1 of the live audio included in the two-dimensional video CT1 output in the virtual space V, and acquires various kinds of information such as the rhythm of the live audio. In FIG. 10, the sound information M1 is drawn as a musical-note mark; in practice, however, the sound information M1 is information contained in the audio and is therefore not displayed in the video.
The space drawing processing unit 471 may then control the display of the video of a virtual object that moves in response to the rhythm of the live audio, as the video of a fourth virtual object. The video of a virtual object moving in response to the rhythm of the sound may be, for example, video in which a glow stick VC4 is swung from side to side, as shown in FIG. 10.
The space drawing processing unit 471 may also control the display of a video E2 of particles, such as light specks or grains, that move in response to the rhythm of the sound, as video expressing the virtual space V. By displaying such video linked to the live audio in the virtual space V, the user's sense of participating in the live performance and of unity with the surrounding audience can be improved.
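As an illustrative sketch of such rhythm-responsive motion, the following function maps the current playback time to a sway angle for the glow stick VC4, using the beat times from the music-analysis sketch above; the swing amplitude and the sine mapping are assumptions.

import math

def sway_angle(t, beat_times, max_deg=30.0):
    """Stick angle at time t, swinging left and right once per beat interval."""
    prev = max((b for b in beat_times if b <= t), default=0.0)
    nxt = min((b for b in beat_times if b > t), default=prev + 0.5)
    phase = (t - prev) / max(nxt - prev, 1e-6)        # 0..1 within the beat
    return max_deg * math.sin(2.0 * math.pi * phase)  # smooth side-to-side swing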
(Operation Example)
FIG. 11 is an explanatory diagram for explaining an operation processing example of the third example according to the present disclosure. First, the music analysis processing unit 467 acquires the sound information output in conjunction with the two-dimensional video (S301).
Next, the music analysis processing unit 467 analyzes the rhythm, volume, and other properties from the acquired sound information (S305).
The space drawing processing unit 471 then displays in the virtual space a virtual object corresponding to the result of the analysis in S305 (S309), and the control unit 450 according to the present disclosure ends the processing.
In the examples described above, the two-dimensional screen is a flat screen, but the two-dimensional screen according to the present disclosure is not limited to this. Next, a modification according to the present disclosure will be described with reference to FIG. 12.
<<4. Modification>>
FIG. 12 is an explanatory diagram for explaining a modification according to the present disclosure. For example, the two-dimensional screen VC1 may have a curved shape, as shown in FIG. 12. For example, the two-dimensional screen VC1 may have a curved shape with a radius of curvature corresponding to a predetermined area range in the virtual space V.
The predetermined area range may be a crowd area PA1 as shown in FIG. 12. This makes it possible to reduce the effect of the video distortion that can arise depending on the viewing position within the crowd area PA1, and can give the user the sensation of attending an actual live performance.
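One illustrative way to pick such a radius of curvature, under the assumption that the screen should appear roughly equidistant from viewers near the center of the crowd area, is sketched below; this geometric choice is an assumption, not a method prescribed by the present disclosure.

import math

def curvature_radius(crowd_center, screen_center):
    # Centering the screen's arc on the crowd and using the crowd-to-screen
    # distance as the radius keeps every screen point roughly equidistant
    # from a viewer at the crowd center, reducing position-dependent distortion.
    return math.dist(crowd_center, screen_center)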
The modification according to the present disclosure has been described above. Next, a hardware configuration example of the HMD 40 according to the present disclosure will be described with reference to FIG. 13.
<<5. Hardware Configuration Example>>
The embodiments according to the present disclosure have been described above. The information processing described above is realized by the cooperation of software and the hardware of the HMD 40 described below. The hardware configuration described below is also applicable to the camera 20, the sensor 25, and the distribution server 30 included in the distribution system 10.
FIG. 13 is an explanatory diagram showing the hardware configuration of the HMD 40 according to the present disclosure. As shown in FIG. 13, the HMD 40 includes a CPU (Central Processing Unit) 4201, a ROM (Read Only Memory) 4202, a RAM (Random Access Memory) 4203, an input device 4208, an output device 4210, a storage device 4211, a drive 4212, an imaging device 4213, and a communication device 4215.
The CPU 4201 functions as an arithmetic processing device and a control device, and controls the overall operation of the HMD 40 according to various programs. The CPU 4201 may also be a microprocessor. The ROM 4202 stores the programs, calculation parameters, and the like used by the CPU 4201. The RAM 4203 temporarily stores the programs used in the execution of the CPU 4201, the parameters that change as appropriate during that execution, and the like. These components are interconnected by a host bus composed of a CPU bus or the like.
The input device 4208 includes input means for the user to input information, such as a hand controller, a mouse, a keyboard, a touch panel, buttons, a microphone, switches, and levers, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 4201. By operating the input device 4208, the user of the HMD 40 can input various kinds of data to the HMD 40 and instruct it to perform processing operations.
The output device 4210 includes display devices such as a liquid crystal display (LCD) device, an OLED device, and lamps. The output device 4210 further includes audio output devices such as speakers and headphones. For example, the display device displays video from the avatar's viewpoint in the virtual space, and the audio output device converts audio data, including the live audio, into sound and outputs it.
The storage device 4211 is a device for data storage configured as an example of the storage unit 420 of the HMD 40 according to the present disclosure. The storage device 4211 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like. The storage device 4211 stores the programs executed by the CPU 4201 and various kinds of data.
The drive 4212 is a reader/writer for storage media, and is built into or externally attached to the HMD 40. The drive 4212 reads information recorded on a removable storage medium 44, such as a semiconductor memory, and outputs it to the RAM 4203. The drive 4212 can also write information to the removable storage medium 44.
The imaging device 4213 includes an imaging optical system, such as an imaging lens and a zoom lens for condensing light, and a signal conversion element such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor. The imaging optical system condenses the light emitted from a subject to form a subject image on the signal conversion element, and the signal conversion element converts the formed subject image into an electrical image signal.
The communication device 4215 is a communication interface composed of, for example, a communication device for connecting to the network 1. The communication device 4215 may be a wireless-LAN-compatible communication device, an LTE-compatible communication device, or a wired communication device that performs wired communication.
The hardware configuration example of the HMD 40 according to the present disclosure has been described above. Next, supplementary remarks concerning the present disclosure will be described.
<<6. Supplement>>
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to these examples. It is clear that a person with ordinary knowledge in the technical field to which the present disclosure belongs can conceive of various alterations and modifications within the scope of the technical ideas described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure.
For example, some of the functions included in the HMD 40 may be configured separately from the HMD 40. For example, the HMD 40 may transmit information about the two-dimensional video and audio in the virtual space to a server, and that server may execute various processes such as recognizing the objects included in the two-dimensional video and analyzing the rhythm contained in the sound information. This can reduce the load placed on the HMD 40.
When the audience at the live venue appears in the two-dimensional video, the object/body recognition unit 455 may recognize that audience. The space drawing processing unit 471 may then control the display of video, such as particles, that moves according to the amount of motion of that audience.
The steps in the processing of the HMD 40 in this specification do not necessarily have to be processed chronologically in the order described in the flowcharts. For example, the steps in the processing of the HMD 40 may be processed in an order different from the order described in the flowcharts, or in parallel.
It is also possible to create a computer program for causing hardware such as the CPU, ROM, and RAM built into the HMD 40, the camera 20, the sensor 25, and the distribution server 30 to exhibit functions equivalent to the respective configurations of the HMD 40, the camera 20, the sensor 25, and the distribution server 30 described above. A storage medium storing that computer program is also provided.
The effects described in this specification are merely explanatory or illustrative, and are not limiting. In other words, the technology according to the present disclosure may achieve other effects that are obvious to those skilled in the art from the description of this specification, in addition to or instead of the effects described above.
Note that the following configurations also belong to the technical scope of the present disclosure.
(1)
An information processing device comprising:
an acquisition unit that acquires feature information from content included in a first virtual object placed in a virtual space; and
a video generation unit that controls display of a second virtual object at least around the content placed in the virtual space, based on the feature information acquired by the acquisition unit.
(2)
The information processing device according to (1), wherein the acquisition unit acquires the feature information from an object included in the content.
(3)
The information processing device according to (2), wherein the acquisition unit recognizes an object based on the acquired feature information, and the video generation unit controls display, as the video of the second virtual object, of a virtual object whose degree of correlation with the object recognized by the acquisition unit satisfies a predetermined criterion.
(4)
The information processing device according to (3), wherein the virtual object whose degree of correlation satisfies the predetermined criterion is a virtual object whose degree of correlation is equal to or greater than a predetermined value.
(5)
The information processing device according to any one of (2) to (4), wherein the video generation unit controls display of the video of the second virtual object in an area that does not interfere with other virtual objects placed in the virtual space.
(6)
The information processing device according to any one of (1) to (5), wherein the acquisition unit acquires color information from a background video included in the content, and the video generation unit controls display of video expressing the virtual space based on the color information of the background video acquired by the acquisition unit.
(7)
The information processing device according to (6), wherein the video generation unit controls display of video whose color is similar to the color indicated by the color information of the background video acquired by the acquisition unit.
(8)
The information processing device according to (7), wherein the video generation unit controls, based on the color information of the background video acquired by the acquisition unit, display of the video of a third virtual object that emits light of a color similar to the color indicated by the color information of the background video.
(9)
The information processing device according to any one of (6) to (8), wherein the video generation unit controls display of video based on the color information of the background video acquired by the acquisition unit so as to hide the boundaries with the virtual space at both ends of the first virtual object.
(10)
The information processing device according to any one of (6) to (9), wherein the color information of the background video indicates a color near the boundary between the content and the virtual space.
(11)
The information processing device according to (10), wherein the video generation unit controls display of video based on color information near the left-edge boundary between the content and the virtual space so as to hide the boundary with the virtual space at the left edge of the first virtual object, and controls display of video based on color information near the right-edge boundary between the content and the virtual space so as to hide the boundary with the virtual space at the right edge of the first virtual object.
(12)
The information processing device according to any one of (1) to (11), wherein the acquisition unit acquires sound information included in the content output in the virtual space, and the video generation unit controls display of video expressing the virtual space based on the sound information acquired by the acquisition unit.
(13)
The information processing device according to (12), wherein the video generation unit controls display of the video of a fourth virtual object expressing the virtual space based on the sound information acquired by the acquisition unit.
(14)
The information processing device according to (12) or (13), wherein the video generation unit controls display of video that moves in response to the rhythm of the sound acquired by the acquisition unit.
(15)
The information processing device according to any one of (1) to (14), wherein the first virtual object is a screen having a curved shape.
(16)
The information processing device according to (15), wherein the curved shape has a radius of curvature corresponding to a predetermined area range in the virtual space.
(17)
An information processing method executed by a computer, comprising:
acquiring feature information from content included in a first virtual object placed in a virtual space; and
controlling display of a second virtual object at least around the content placed in the virtual space, based on the acquired feature information.
(18)
A storage medium that non-transitorily stores a computer-executable program, the program realizing:
an acquisition function that acquires feature information from content included in a first virtual object placed in a virtual space; and
a display control function that controls display of a second virtual object at least around the content placed in the virtual space, based on the feature information acquired by the acquisition function.
1 Network
10 Distribution system
20 Camera
25 Sensor
30 Distribution server
310 Control unit
320 Communication unit
40 HMD
410 Communication unit
420 Storage unit
430 Display unit
440 Operation unit
450 Control unit
451 Color information detection unit
455 Object/body recognition unit
459 Correction processing unit
463 Correlation processing unit
467 Music analysis processing unit
471 Space drawing processing unit

Claims (18)

1. An information processing device comprising:
an acquisition unit that acquires feature information from content included in a first virtual object placed in a virtual space; and
a video generation unit that controls display of a second virtual object at least around the content placed in the virtual space, based on the feature information acquired by the acquisition unit.
2. The information processing device according to claim 1, wherein the acquisition unit acquires the feature information from an object included in the content.
3. The information processing device according to claim 2, wherein
the acquisition unit recognizes an object based on the acquired feature information, and
the video generation unit controls display, as the video of the second virtual object, of a virtual object whose degree of correlation with the object recognized by the acquisition unit satisfies a predetermined criterion.
4. The information processing device according to claim 3, wherein the virtual object whose degree of correlation satisfies the predetermined criterion is a virtual object whose degree of correlation is equal to or greater than a predetermined value.
5. The information processing device according to claim 4, wherein the video generation unit controls display of the video of the second virtual object in an area that does not interfere with other virtual objects placed in the virtual space.
6. The information processing device according to claim 5, wherein
the acquisition unit acquires color information from a background video included in the content, and
the video generation unit controls display of video expressing the virtual space based on the color information of the background video acquired by the acquisition unit.
7. The information processing device according to claim 6, wherein the video generation unit controls display of video whose color is similar to the color indicated by the color information of the background video acquired by the acquisition unit.
8. The information processing device according to claim 7, wherein the video generation unit controls, based on the color information of the background video acquired by the acquisition unit, display of the video of a third virtual object that emits light of a color similar to the color indicated by the color information of the background video.
9. The information processing device according to claim 8, wherein the video generation unit controls display of video based on the color information of the background video acquired by the acquisition unit so as to hide the boundaries with the virtual space at both ends of the first virtual object.
10. The information processing device according to claim 9, wherein the color information of the background video indicates a color near the boundary between the content and the virtual space.
11. The information processing device according to claim 10, wherein the video generation unit controls display of video based on color information near the left-edge boundary between the content and the virtual space so as to hide the boundary with the virtual space at the left edge of the first virtual object, and controls display of video based on color information near the right-edge boundary between the content and the virtual space so as to hide the boundary with the virtual space at the right edge of the first virtual object.
12. The information processing device according to claim 11, wherein
the acquisition unit acquires sound information included in the content output in the virtual space, and
the video generation unit controls display of video expressing the virtual space based on the sound information acquired by the acquisition unit.
13. The information processing device according to claim 12, wherein the video generation unit controls display of a fourth virtual object expressing the virtual space based on the sound information acquired by the acquisition unit.
14. The information processing device according to claim 13, wherein the video generation unit controls display of video that moves in response to the rhythm of the sound acquired by the acquisition unit.
15. The information processing device according to claim 14, wherein the first virtual object is a screen having a curved shape.
16. The information processing device according to claim 15, wherein the curved shape has a radius of curvature corresponding to a predetermined area range in the virtual space.
17. An information processing method executed by a computer, comprising:
acquiring feature information from content included in a first virtual object placed in a virtual space; and
controlling display of a second virtual object at least around the content placed in the virtual space, based on the acquired feature information.
18. A storage medium that non-transitorily stores a computer-executable program, the program realizing:
an acquisition function that acquires feature information from content included in a first virtual object placed in a virtual space; and
a display control function that controls display of a second virtual object at least around the content placed in the virtual space, based on the feature information acquired by the acquisition function.
PCT/JP2022/007217 2021-07-08 2022-02-22 Information processing device, information processing method, and storage medium WO2023281803A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021113660 2021-07-08
JP2021-113660 2021-07-08

Publications (1)

Publication Number Publication Date
WO2023281803A1 (en)

Family

ID=84801708

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/007217 WO2023281803A1 (en) 2021-07-08 2022-02-22 Information processing device, information processing method, and storage medium

Country Status (1)

Country Link
WO (1) WO2023281803A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019536339A (en) * 2016-10-25 2019-12-12 株式会社ソニー・インタラクティブエンタテインメント Method and apparatus for synchronizing video content
WO2018122167A1 (en) * 2016-12-30 2018-07-05 Thomson Licensing Device and method for generating flexible dynamic virtual contents in mixed reality
JP2020017176A (en) * 2018-07-27 2020-01-30 株式会社Nttドコモ Information processing device
WO2020129115A1 (en) * 2018-12-17 2020-06-25 株式会社ソニー・インタラクティブエンタテインメント Information processing system, information processing method and computer program

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: ""Jumping TV" where characters in the TV jump out of the screen and move around", GIGAZINE, 30 May 2014 (2014-05-30), XP055897657, Retrieved from the Internet <URL:https://gigazine.net/news/20140530-ar-spring-tv> [retrieved on 20220304] *
EYECANDYLAB: "augmen.tv by eyecandylab // The AR Lens for Video", YOUTUBE, XP093022663, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=QZVsi7Y5YZ0> [retrieved on 20230209] *
SHOJI RYOICHI: ""MR CM" which people and characters pop out of commercial images. Demonstration experiment on BS Nippon Television program; in particular, drawings "Image of MR CM"", 5 March 2019 (2019-03-05), XP093022662, Retrieved from the Internet <URL:https://av.watch.impress.co.jp/docs/news/1173046.html> [retrieved on 20230209] *
SUGIHARA KENJI: "Keywords you should know Second Screen; in particular "How the TV and second screen integration works"", 1 January 2013 (2013-01-01), pages 409 - 412, XP093022661, Retrieved from the Internet <URL:https://www.ite.or.jp/contents/keywords/FILE-20160413114522.pdf> [retrieved on 20230209] *
YAMAMOTO SUSUMU, TSUTSUGUCHI KEN, HIDENORI TANAKA, SHINGO ANDO, ATSUSHI KATAYAMA: "Visual SyncAR: Augmented Reality which Synchronizes Video and Overlaid Information", THE JOURNAL OF THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN, vol. 43, no. 3, 1 January 2014 (2014-01-01), pages 397 - 403, XP093022660, DOI: 10.11371/iieej.43.397 *

Similar Documents

Publication Publication Date Title
JP6646620B2 (en) Wide-ranging simultaneous remote digital presentation world
KR102581453B1 (en) Image processing for Head mounted display devices
KR102503413B1 (en) Animation interaction method, device, equipment and storage medium
US10922865B2 (en) Information processing apparatus, information processing method, and program
US8294557B1 (en) Synchronous interpersonal haptic communication system
JP4921550B2 (en) How to give emotional features to computer-generated avatars during gameplay
JP5208810B2 (en) Information processing apparatus, information processing method, information processing program, and network conference system
US9071808B2 (en) Storage medium having stored information processing program therein, information processing apparatus, information processing method, and information processing system
US9424678B1 (en) Method for teleconferencing using 3-D avatar
TWI647593B (en) System and method for providing simulated environment
US11048326B2 (en) Information processing system, information processing method, and program
TW201210663A (en) Natural user input for driving interactive stories
CN106730815A (en) The body-sensing interactive approach and system of a kind of easy realization
US11321892B2 (en) Interactive virtual reality broadcast systems and methods
TW201523509A (en) Method for rhythm visualization, system, and computer-readable memory
WO2023035897A1 (en) Video data generation method and apparatus, electronic device, and readable storage medium
WO2008087621A1 (en) An apparatus and method for animating emotionally driven virtual objects
JP2014187559A (en) Virtual reality presentation system and virtual reality presentation method
JP2016045814A (en) Virtual reality service providing system and virtual reality service providing method
JP2023036740A (en) Video distribution system, video distribution method, and video distribution program
JP2014164537A (en) Virtual reality service providing system and virtual reality service providing method
JP6688378B1 (en) Content distribution system, distribution device, reception device, and program
WO2023281803A1 (en) Information processing device, information processing method, and storage medium
CN111510769A (en) Video image processing method and device and electronic equipment
WO2021065694A1 (en) Information processing system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22837225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE