WO2017199848A1 - Method for providing a virtual space, program, and recording medium - Google Patents

Method for providing a virtual space, program, and recording medium

Info

Publication number
WO2017199848A1
WO2017199848A1 PCT/JP2017/017878 JP2017017878W
Authority
WO
WIPO (PCT)
Prior art keywords
content
virtual space
user
hmd
sight
Prior art date
Application number
PCT/JP2017/017878
Other languages
English (en)
Japanese (ja)
Inventor
健登 中島
裕一郎 新井
功淳 馬場
Original Assignee
株式会社コロプラ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2016099088A external-priority patent/JP6262283B2/ja
Priority claimed from JP2016099108A external-priority patent/JP6126271B1/ja
Priority claimed from JP2016099119A external-priority patent/JP6126272B1/ja
Application filed by 株式会社コロプラ filed Critical 株式会社コロプラ
Publication of WO2017199848A1 publication Critical patent/WO2017199848A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region

Definitions

  • the present disclosure relates to a method, a program, and a recording medium that provide a virtual space.
  • Patent Document 1 describes a method of displaying objects such as a signboard and a bulletin board in a virtual space.
  • Patent Document 2 describes a system for viewing content using a head-mounted display.
  • Patent Document 3 describes a system for viewing content using a head-mounted display.
  • Patent Document 4 discloses a technique for allowing a user to visually recognize content played back in a virtual space through a head-mounted display.
  • Patent Document 1: JP 2003-248844 A; Patent Document 2: Japanese Patent No. 5882517; Patent Document 3: JP 2013-258614 A; Patent Document 4: JP 2009-145883 A
  • the present disclosure has been made in view of such circumstances, and provides a method of displaying other content, such as advertisements, while reproducing moving image content on a head-mounted display.
  • FIG. 1 is a diagram showing a configuration of the HMD system 100.
  • the HMD system 100 includes an HMD 110, an HMD sensor 120, a control circuit unit 200, and a controller 300.
  • the HMD 110 is worn on the user's head.
  • the HMD 110 includes a display 112 which is a non-transmissive display device, a sensor 114, and a gaze sensor 130.
  • the HMD 110 displays a right-eye image and a left-eye image on the display 112, allowing the user to perceive a stereoscopic three-dimensional image based on the parallax between both eyes.
  • This provides a virtual space to the user. Since the display 112 is disposed in front of the user's eyes, the user can become immersed in the virtual space through the images displayed on the display 112.
  • the virtual space may include a background, various objects that can be operated by the user, menu images, and the like.
  • the display 112 may include a right-eye sub-display that displays a right-eye image and a left-eye sub-display that displays a left-eye image.
  • the display 112 may be a single display device that displays the right-eye image and the left-eye image on a common screen.
  • Such a display device may be, for example, one that alternately and independently displays the right-eye image and the left-eye image by switching a shutter so that each displayed image is recognized by only one eye.
  • FIG. 2 is a diagram illustrating a hardware configuration of the control circuit unit 200.
  • the control circuit unit 200 is a computer for causing the HMD 110 to provide a virtual space.
  • the control circuit unit 200 includes a processor, a memory, a storage, an input / output interface, and a communication interface. These are connected to each other in the control circuit unit 200 through a bus as a data transmission path.
  • the memory functions as main memory.
  • the memory stores a program processed by the processor and control data (such as calculation parameters).
  • the memory may be configured to include a ROM (Read Only Memory) and a RAM (Random Access Memory).
  • the storage functions as auxiliary storage and stores programs for controlling the operation of the entire HMD system 100, various simulation programs, a user authentication program, and various data (images, objects, etc.) for defining the virtual space. Furthermore, a database including tables for managing the various data may be constructed in the storage.
  • the storage can be configured to include a flash memory or an HDD (Hard Disk Drive).
  • the input/output interface includes various wired connection terminals such as USB (Universal Serial Bus), DVI (Digital Visual Interface), and HDMI (registered trademark) (High-Definition Multimedia Interface) terminals, and various processing circuits for wireless connection.
  • the input / output interface connects the HMD 110, various sensors including the HMD sensor 120, and the controller 300 to each other.
  • the communication interface includes various wired connection terminals for communicating with an external device via the network NW and various processing circuits for wireless connection.
  • the communication interface is configured to conform to various communication standards and protocols for communication via a LAN (Local Area Network) or the Internet.
  • the control circuit unit 200 provides a virtual space to the user by loading a predetermined application program stored in the storage into the memory and executing it.
  • the memory and the storage store various programs for operating various objects arranged in the virtual space and displaying and controlling various menu images and the like.
  • the control circuit unit 200 may or may not be mounted on the HMD 110. That is, the control circuit unit 200 may be hardware separate from the HMD 110 (for example, a personal computer, or a server device that can communicate with the HMD 110 through a network).
  • the control circuit unit 200 may be a device in which one or more functions are implemented by the cooperation of multiple pieces of hardware. Alternatively, only some of the functions of the control circuit unit 200 may be mounted on the HMD 110, with the remaining functions mounted on separate hardware.
  • a global coordinate system (reference coordinate system, xyz coordinate system) is set in advance for each element such as the HMD 110 constituting the HMD system 100.
  • This global coordinate system has three reference directions (axes) parallel to the vertical direction, the horizontal direction orthogonal to the vertical direction, and the front-rear direction orthogonal to both the vertical direction and the horizontal direction in the real space.
  • the horizontal direction, the up-down direction (vertical direction), and the front-rear direction in the global coordinate system are set as the x-axis, the y-axis, and the z-axis, respectively.
  • the x-axis of the global coordinate system is parallel to the horizontal direction of the real space
  • the y-axis is parallel to the vertical direction of the real space
  • the z-axis is parallel to the front-rear direction of the real space.
  • the HMD sensor 120 has a position tracking function for detecting the movement of the HMD 110. With this function, the HMD sensor 120 detects the position and inclination of the HMD 110 in the real space.
  • the HMD 110 includes a plurality of light sources (not shown). Each light source is, for example, an LED that emits infrared rays.
  • the HMD sensor 120 includes, for example, an infrared sensor. The HMD sensor 120 detects detection points of the HMD 110 by detecting, with the infrared sensor, the infrared rays emitted from the light sources of the HMD 110.
  • the HMD sensor 120 may include an optical camera. In this case, the HMD sensor 120 detects the position and inclination of the HMD 110 based on the image information of the HMD 110 obtained by the optical camera.
  • the HMD 110 may detect its own position and inclination using the sensor 114.
  • the sensor 114 may be an angular velocity sensor, a geomagnetic sensor, an acceleration sensor, or a gyro sensor, for example.
  • the HMD 110 uses at least one of these.
  • the sensor 114 is an angular velocity sensor
  • the sensor 114 detects the angular velocity around the three axes in the real space of the HMD 110 over time according to the movement of the HMD 110.
  • the HMD 110 can determine the temporal change of the angle around the three axes of the HMD 110 based on the detected value of the angular velocity, and can detect the inclination of the HMD 110 based on the temporal change of the angle.
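  • As a concrete illustration of this angular-velocity path, the following minimal Python sketch integrates the detected angular velocity over time to track the tilt angles; the function and variable names are illustrative, not taken from the disclosure.

```python
import numpy as np

def update_tilt(angles, angular_velocity, dt):
    """Integrate the angular velocity (rad/s) detected around the three
    axes over one time step dt (s) to track the HMD's tilt angles."""
    return angles + angular_velocity * dt

# Example: a steady 0.5 rad/s yaw rotation sampled at 100 Hz for 1 s.
angles = np.zeros(3)                       # (pitch, yaw, roll)
for _ in range(100):
    angles = update_tilt(angles, np.array([0.0, 0.5, 0.0]), 0.01)
print(angles)                              # -> [0.  0.5 0. ]
```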
  • When the HMD 110 itself detects its position and inclination based on the detection values of the sensor 114, the HMD sensor 120 is not necessary for the HMD system 100. Conversely, when the HMD sensor 120, arranged at a position away from the HMD 110, detects the position and inclination of the HMD 110, the sensor 114 is not necessary for the HMD 110.
  • each inclination of the HMD 110 detected by the HMD sensor 120 corresponds to each inclination around the three axes of the HMD 110 in the global coordinate system.
  • the HMD sensor 120 sets the uvw visual field coordinate system in the HMD 110 based on the detected value of the inclination of the HMD 110 in the global coordinate system.
  • the uvw visual field coordinate system set in the HMD 110 corresponds to a viewpoint coordinate system when a user wearing the HMD 110 views an object.
  • FIG. 3 is a diagram illustrating the uvw visual field coordinate system set in the HMD 110.
  • the HMD sensor 120 detects the position and inclination of the HMD 110 in the global coordinate system when the HMD 110 is activated, and then sets, in the HMD 110, a three-dimensional uvw visual field coordinate system based on the detected inclination. As shown in FIG. 3, the HMD sensor 120 sets the uvw visual field coordinate system in the HMD 110 with the head of the user wearing the HMD 110 as the center (origin).
  • Three new directions, obtained by tilting the horizontal, vertical, and front-rear directions (x-axis, y-axis, z-axis) that define the global coordinate system around the respective axes by the inclination of the HMD 110 around each axis in the global coordinate system, are set as the pitch direction (u-axis), yaw direction (v-axis), and roll direction (w-axis) of the uvw visual field coordinate system in the HMD 110.
  • the HMD sensor 120 sets the uvw visual field coordinate system parallel to the global coordinate system to the HMD 110 when the user wearing the HMD 110 stands upright and visually recognizes the front.
  • In this case, the horizontal direction (x-axis), vertical direction (y-axis), and front-rear direction (z-axis) of the global coordinate system coincide with the pitch direction (u-axis), yaw direction (v-axis), and roll direction (w-axis) of the uvw visual field coordinate system in the HMD 110.
  • the HMD sensor 120 can detect the inclination (amount of change in inclination) of the HMD 110 in the currently set uvw visual field coordinate system according to the movement of the HMD 110 after setting the uvw visual field coordinate system in the HMD 110.
  • the HMD sensor 120 detects the pitch angle (θu), yaw angle (θv), and roll angle (θw) of the HMD 110 in the currently set uvw visual field coordinate system as the inclination of the HMD 110.
  • the pitch angle (θu) is the inclination angle of the HMD 110 around the pitch direction in the uvw visual field coordinate system.
  • the yaw angle (θv) is the inclination angle of the HMD 110 around the yaw direction in the uvw visual field coordinate system.
  • the roll angle (θw) is the inclination angle of the HMD 110 around the roll direction in the uvw visual field coordinate system.
  • the HMD sensor 120 newly sets a uvw visual field coordinate system in the HMD 110 after its movement, based on the detected value of the inclination of the HMD 110.
  • the relationship between the HMD 110 and the uvw visual field coordinate system of the HMD 110 is always constant regardless of the position and inclination of the HMD 110.
  • When the position and inclination of the HMD 110 change, the position and inclination of the uvw visual field coordinate system of the HMD 110 in the global coordinate system change in conjunction with that change.
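  • The relationship between the global axes and the uvw axes described above can be sketched as a rotation of the global basis by the detected pitch, yaw, and roll angles. The yaw-pitch-roll composition order below is one common convention and an assumption, since the disclosure does not fix the order.

```python
import numpy as np

def rot_x(a):  # rotation around the horizontal (pitch) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # rotation around the vertical (yaw) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # rotation around the front-rear (roll) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def uvw_axes(pitch, yaw, roll):
    """Tilt the global x, y, z axes by the detected angles to obtain the
    u (pitch), v (yaw), and w (roll) axes of the uvw visual field
    coordinate system; returned as three unit column vectors."""
    r = rot_y(yaw) @ rot_x(pitch) @ rot_z(roll)
    return r[:, 0], r[:, 1], r[:, 2]

u, v, w = uvw_axes(0.0, 0.0, 0.0)   # upright, facing front:
print(u, v, w)                      # uvw coincides with x, y, z
```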
  • the HMD sensor 120 may specify the position of the HMD 110 in the real space as a position relative to the HMD sensor 120, based on the infrared light intensity acquired by the infrared sensor and the relative positional relationship (distance between detection points) among the plurality of detection points.
  • the origin of the uvw visual field coordinate system of the HMD 110 in the real space (global coordinate system) may be determined based on the specified relative position.
  • the HMD sensor 120 may also detect the inclination of the HMD 110 in the real space based on the relative positional relationship among the plurality of detection points and, based on the detected value, determine the orientation of the uvw visual field coordinate system of the HMD 110 in the real space (global coordinate system).
  • FIG. 4 is a diagram illustrating an outline of the virtual space 2 provided to the user. As shown in this figure, the virtual space 2 has a spherical structure covering the entire 360° direction around the center 21. FIG. 4 illustrates only the upper half of the celestial sphere of the virtual space 2. A plurality of substantially square or substantially rectangular meshes is associated with the virtual space 2. The position of each mesh in the virtual space 2 is defined in advance as coordinates in a space coordinate system (XYZ coordinate system) defined in the virtual space 2.
  • the control circuit unit 200 provides the user with the virtual space 2 in which the virtual space image 22 visually recognizable by the user is developed, by associating each partial image constituting the content (still image, moving image, etc.) that can be developed in the virtual space 2 with the corresponding mesh in the virtual space 2.
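  • A minimal sketch of this mesh association, assuming the omnidirectional frame is stored in equirectangular form (an assumption; the disclosure only says the content consists of partial images):

```python
import numpy as np

def partial_image_for_mesh(frame, n_rows, n_cols, i, j):
    """Cut out of an omnidirectional frame the partial image for mesh
    (i, j) of the celestial sphere: row bands map to polar angle,
    column bands to azimuth (equirectangular assumption)."""
    h, w = frame.shape[:2]
    top, bottom = i * h // n_rows, (i + 1) * h // n_rows
    left, right = j * w // n_cols, (j + 1) * w // n_cols
    return frame[top:bottom, left:right]

# Associate every mesh of the sphere with its partial image.
frame = np.zeros((1024, 2048, 3), dtype=np.uint8)  # dummy frame
mesh_images = {(i, j): partial_image_for_mesh(frame, 16, 32, i, j)
               for i in range(16) for j in range(32)}
```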
  • the virtual space 2 defines an XYZ space coordinate system with the center 21 as the origin.
  • the XYZ coordinate system is, for example, parallel to the global coordinate system. Since the XYZ coordinate system is a kind of viewpoint coordinate system, the horizontal direction, the up-down direction (vertical direction), and the front-rear direction in the XYZ coordinate system are defined as the X axis, the Y axis, and the Z axis, respectively.
  • the X axis (horizontal direction) of the XYZ coordinate system is parallel to the x axis of the global coordinate system
  • the Y axis (vertical direction) of the XYZ coordinate system is parallel to the y axis of the global coordinate system
  • the Z axis (front-rear direction) is parallel to the z axis of the global coordinate system.
  • When the HMD 110 is activated (initial state), the virtual camera 1 is arranged at the center 21 of the virtual space 2.
  • the virtual camera 1 moves similarly in the virtual space 2 in conjunction with the movement of the HMD 110 in the real space. Thereby, changes in the position and orientation of the HMD 110 in the real space are similarly reproduced in the virtual space 2.
  • the uvw visual field coordinate system is defined for the virtual camera 1 similarly to the HMD 110.
  • the uvw visual field coordinate system of the virtual camera 1 in the virtual space 2 is defined to change in conjunction with the uvw visual field coordinate system of the HMD 110 in the real space (global coordinate system). Therefore, when the inclination of the HMD 110 changes, the inclination of the virtual camera 1 changes accordingly.
  • the virtual camera 1 can also move in the virtual space 2 in conjunction with the movement of the user wearing the HMD 110 in the real space.
  • the orientation of the virtual camera 1 in the virtual space 2 is determined according to the position and inclination of the virtual camera 1 in the virtual space 2.
  • the orientation of the virtual camera 1 serves as the user's line of sight (reference line of sight 5) in the virtual space 2.
  • the control circuit unit 200 determines the visual field region 23 in the virtual space 2 based on the reference visual line 5.
  • the visual field area 23 is an area corresponding to the visual field of the user wearing the HMD 110 in the virtual space 2.
  • FIG. 5 is a diagram showing a cross section of the visual field region 23.
  • FIG. 5A shows a YZ cross section of the visual field region 23 viewed from the X direction in the virtual space 2.
  • FIG. 5B shows an XZ cross-section of the visual field region 23 viewed from the Y direction in the virtual space 2.
  • the visual field region 23 is defined by the first region 24 (see FIG. 5A), which is a range defined by the reference line of sight 5 and the YZ cross section of the virtual space 2, and the second region 25 (see FIG. 5B), which is a range defined by the reference line of sight 5 and the XZ cross section of the virtual space 2.
  • the control circuit unit 200 sets, as the first region 24, a range including the polar angle α centered on the reference line of sight 5 in the virtual space 2.
  • a range including the azimuth angle β centered on the reference line of sight 5 in the virtual space 2 is set as the second region 25.
  • the control circuit unit 200 may move the virtual camera 1 in the virtual space 2 in conjunction with the movement of the user wearing the HMD 110 in the real space. In this case, the control circuit unit 200 identifies the visual field region 23 that is visually recognized by the user by being projected on the display 112 of the HMD 110 in the virtual space 2 based on the position and orientation of the virtual camera 1 in the virtual space 2.
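  • One simple way to test whether a direction falls inside the visual field region 23, assuming α and β denote the full vertical and horizontal extents centered on the reference line of sight 5 (the disclosure does not fix this convention):

```python
import numpy as np

def _wrap(angle):
    # normalize an angle difference into [-pi, pi)
    return (angle + np.pi) % (2 * np.pi) - np.pi

def in_view_region(direction, reference, alpha, beta):
    """Rough test of whether a direction vector lies inside the visual
    field region 23: within the polar angle alpha (vertical, YZ cross
    section) and azimuth beta (horizontal, XZ cross section) centered
    on the reference line of sight 5. Angles are in radians."""
    polar = _wrap(np.arctan2(direction[1], direction[2])
                  - np.arctan2(reference[1], reference[2]))
    azimuth = _wrap(np.arctan2(direction[0], direction[2])
                    - np.arctan2(reference[0], reference[2]))
    return abs(polar) <= alpha / 2 and abs(azimuth) <= beta / 2

print(in_view_region([0.1, 0.0, 1.0], [0.0, 0.0, 1.0],
                     np.radians(60), np.radians(90)))  # True
```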
  • the gaze sensor 130 has an eye tracking function that detects the direction (gaze direction) in which the line of sight of the user's right eye and left eye is directed.
  • the gaze sensor 130 a known sensor having an eye tracking function can be employed.
  • the gaze sensor 130 preferably includes a right eye sensor and a left eye sensor.
  • the gaze sensor 130 may be, for example, a sensor that detects the rotation angle of each eyeball by irradiating the user's right eye and left eye with infrared light and receiving reflected light from the cornea and iris with respect to the irradiated light.
  • the gaze sensor 130 can detect the direction of the user's line of sight based on each detected rotation angle.
  • the user's line-of-sight direction detected by the gaze sensor 130 is a direction in the viewpoint coordinate system when the user visually recognizes the object.
  • the uvw visual field coordinate system of the HMD 110 is equal to the viewpoint coordinate system when the user visually recognizes the display 112.
  • the uvw visual field coordinate system of the virtual camera 1 is linked to the uvw visual field coordinate system of the HMD 110. Therefore, in the HMD system 100, the user's line-of-sight direction detected by the gaze sensor 130 can be regarded as the user's line-of-sight direction in the uvw visual field coordinate system of the virtual camera 1.
  • the line-of-sight direction N0 is a direction in which the user U actually points the line of sight with both eyes.
  • the line-of-sight direction N0 is also a direction in which the user U actually directs his / her line of sight with respect to the field-of-view area 23.
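  • A sketch of how N0 could be derived from the per-eye detections, approximating the point of gaze as the closest approach of the two gaze lines; this particular construction is an assumption, not stated in the disclosure:

```python
import numpy as np

def gaze_direction_n0(r_origin, r_dir, l_origin, l_dir):
    """Estimate N0 as the direction from the midpoint between both eyes
    toward the point of gaze, taken as the closest approach of the two
    detected gaze lines (all inputs are 3-vectors in the viewpoint
    coordinate system)."""
    w0 = r_origin - l_origin
    a, b, c = r_dir @ r_dir, r_dir @ l_dir, l_dir @ l_dir
    d, e = r_dir @ w0, l_dir @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                     # nearly parallel gaze lines
        n0 = (r_dir + l_dir) / 2
        return n0 / np.linalg.norm(n0)
    t = (b * e - c * d) / denom               # parameter on the right line
    s = (a * e - b * d) / denom               # parameter on the left line
    gaze_point = ((r_origin + t * r_dir) + (l_origin + s * l_dir)) / 2
    n0 = gaze_point - (r_origin + l_origin) / 2
    return n0 / np.linalg.norm(n0)

# Both eyes converge on a point 1 m straight ahead -> N0 points forward.
print(gaze_direction_n0(np.array([0.03, 0.0, 0.0]), np.array([-0.03, 0.0, 1.0]),
                        np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 1.0])))
```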
  • the HMD system 100 may include a microphone and a speaker in any of its constituent elements. This allows the user to give voice instructions in the virtual space 2. The HMD system 100 may also include a television receiver so that the user can watch a television broadcast on a virtual television in the virtual space, and may include a communication function for displaying e-mail or the like acquired by the user.
  • the controller 300 is a device that can transmit various commands based on user operations to the control circuit unit 200.
  • the controller 300 can be a portable terminal capable of wired or wireless communication.
  • the controller 300 may be, for example, a smartphone, a PDA (Personal Digital Assistant), a tablet computer, a game console, a general-purpose PC (Personal Computer), or the like.
  • the controller 300 is preferably a device including a touch panel, and any terminal including a processor, a memory, a storage, a communication unit, and a touch panel in which a display unit and an input unit are integrated with each other may be employed.
  • the user can affect various objects and UIs (User Interfaces) arranged in the virtual space 2 by inputting various touch operations, including tap, swipe, and hold, on the touch panel of the controller 300.
  • FIG. 7 is a block diagram showing a functional configuration of the control circuit unit 200.
  • the control circuit unit 200 controls the virtual space 2 provided to the user by using various data received from the HMD sensor 120, the gaze sensor 130, and the controller 300, and controls image display on the display 112 of the HMD 110.
  • As illustrated in FIG. 7, the control circuit unit 200 includes a detection unit 210, a display control unit 220, a virtual space control unit 230, a storage unit 240, and a communication unit 250.
  • the control circuit unit 200 functions as a detection unit 210, a display control unit 220, a virtual space control unit 230, a storage unit 240, and a communication unit 250 by the cooperation of the hardware illustrated in FIG.
  • the functions of the detection unit 210, the display control unit 220, and the virtual space control unit 230 can be realized mainly by the cooperation of the processor and the memory.
  • the function of the storage unit 240 can be realized mainly by the cooperation of the memory and the storage.
  • the function of the communication unit 250 can be realized mainly by the cooperation of the processor and the communication interface.
  • the detection unit 210 receives detection values from the various sensors (such as the HMD sensor 120) connected to the control circuit unit 200 and, as needed, performs predetermined processing using the received detection values.
  • the detection unit 210 includes an HMD detection unit 211, a line-of-sight detection unit 212, and an operation reception unit 213.
  • the HMD detection unit 211 receives detection values from the HMD 110 and the HMD sensor 120, respectively.
  • the line-of-sight detection unit 212 receives the detection value from the gaze sensor 130.
  • the operation reception unit 213 receives user operations by receiving commands transmitted in response to operations on the controller 300.
  • the display control unit 220 controls image display on the display 112 of the HMD 110.
  • the display control unit 220 includes a virtual camera control unit 221, a visual field region determination unit 222, and a visual field image generation unit 223.
  • the virtual camera control unit 221 arranges the virtual camera 1 in the virtual space 2 and controls the behavior of the virtual camera 1 in the virtual space 2.
  • the view area determination unit 222 determines the view area 23.
  • the view image generation unit 223 generates a view image 26 displayed on the display 112 based on the determined view area 23.
  • the virtual space control unit 230 controls the virtual space 2 provided to the user.
  • the virtual space control unit 230 includes a virtual space defining unit 231, a line-of-sight management unit 232, a content specification unit 233, a content management unit 234, an inclination correction value specification unit 235, an operation object control unit 236, and a rotation control unit 237.
  • the virtual space defining unit 231 defines the virtual space 2 in the HMD system 100 by generating virtual space data representing the virtual space 2 provided to the user.
  • the line-of-sight management unit 232 manages the user's line of sight in the virtual space 2.
  • the content specifying unit 233 specifies the content to be reproduced in the virtual space 2.
  • the content management unit 234 synthesizes the advertising content with the moving image content.
  • the content management unit 234 identifies content to be played back in the virtual space 2.
  • the content management unit 234 recognizes the temporal position of the content being reproduced in the virtual space 2 and determines whether or not it is time for the advertisement to be displayed.
  • the inclination correction value specifying unit 235 specifies an inclination correction value defined in advance for content to be reproduced in the virtual space 2.
  • the operation object control unit 236 controls the operation object 28 in the virtual space 2.
  • the rotation control unit 237 controls the rotation of the virtual camera 1 or the virtual space 2 based on the operation on the operation object 28.
  • the template data is data representing a template of the virtual space 2.
  • the template data has spatial structure data that defines the spatial structure of the virtual space 2.
  • the spatial structure data is data that defines the spatial structure of a 360-degree celestial sphere centered on the center 21, for example.
  • the template data further includes data defining the XYZ coordinate system of the virtual space 2.
  • the template data further includes coordinate data for specifying the position of each mesh constituting the celestial sphere in the XYZ coordinate system.
  • the template data further includes a flag indicating whether or not an object can be arranged in the virtual space 2.
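  • Gathered into a single hypothetical structure, the template data described above might look as follows (the field names are illustrative, not from the disclosure):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TemplateData:
    """Hypothetical layout of the template data described above."""
    spatial_structure: str                              # e.g. "celestial_sphere_360"
    xyz_origin: Tuple[float, float, float]              # center 21 of the virtual space
    mesh_coordinates: List[Tuple[float, float, float]]  # XYZ position of each mesh
    objects_allowed: bool                               # flag: may objects be placed?
```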
  • Content is content that can be played back in the virtual space 2.
  • Examples of content include platform content and viewing content.
  • Platform content is content related to an environment (platform) for allowing the user to select viewing content that the user wants to view in the virtual space 2. By reproducing the platform content in the virtual space 2, a platform for content selection is provided to the user.
  • the platform content has at least a background image and data defining an object.
  • the viewing content is, for example, still image content or moving image content.
  • the still image content has a background image.
  • the moving image content includes at least an image (still image) of each frame.
  • the moving image content may further include audio data.
  • the moving image content is, for example, content generated by an omnidirectional camera.
  • An omnidirectional camera is a camera that can generate images in all directions at once by photographing all directions in a real space around the lens of the camera at once.
  • Each image constituting the moving image content obtained by the omnidirectional camera is distorted, but when the moving image content is reproduced in the virtual space 2, the distortion of each image is canceled by the lenses constituting the display 112 of the HMD 110. Therefore, at the time of reproduction, the user can visually recognize a natural, distortion-free image in the virtual space 2.
  • In the content, an initial direction, facing the image to be shown to the user in the initial state (at startup) of the HMD 110, is defined in advance.
  • the initial direction defined in the moving image content generated by the omnidirectional camera usually matches the predetermined shooting direction defined in the omnidirectional camera used for shooting the moving image content.
  • the initial direction may be changed to a direction different from the shooting direction.
  • In that case, the obtained moving image content may be edited as appropriate so that a direction deviated from the shooting direction is defined as the initial direction.
  • Each content may have an inclination correction value defined in advance as required.
  • the tilt correction value is defined for the content in accordance with the attitude that is assumed when the user views the content. For example, an inclination correction value corresponding to a sitting posture is defined for content on the assumption that the user views in a sitting posture. On the other hand, an inclination correction value corresponding to the supine posture is defined for content on the assumption that the user views in the supine posture.
  • the tilt correction value is used to correct the tilt of the virtual camera 1 with respect to the HMD 110 in the global coordinate system. Alternatively, it is used to correct the inclination of the XYZ coordinate system of the virtual space 2 with respect to the global coordinate system.
  • For example, for content premised on the user viewing in a sitting posture, the tilt correction value defined for the content is 0°. Many contents are based on this premise, and the user views this type of content while actually sitting or standing in order to enhance the reality of the virtual space 2 in which it is reproduced.
  • For content premised on the user viewing in a supine posture, the tilt correction value defined for the content is, for example, 60°.
  • An example of this type of content is moving image content that presents the video a user would visually recognize while resting his or her head on another person's lap.
  • the initial direction defined in advance for such moving image content points to the portion of the omnidirectional image viewed with the user lying down, and the user views this type of moving image content while actually lying down in order to enhance the reality of the virtual space 2 in which it is reproduced.
  • the tilt correction value described above is merely an example.
  • An arbitrary value corresponding to the user's posture assumed when viewing the content can be set as the tilt correction value. For example, for content premised on the user viewing while facing vertically upward, 90° is defined as the tilt correction value; for content premised on the user viewing while facing vertically downward, −90° is defined.
  • Tilt correction values need not be defined in advance for all content. If no tilt correction value is defined for the content, the tilt correction value specifying unit 235 treats the content as if a tilt correction value of 0° were defined.
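  • A minimal sketch of applying the tilt correction, assuming the correction is subtracted from the HMD pitch so that a user lying down sees supine-posture content upright; the sign convention is an assumption:

```python
def corrected_pitch(hmd_pitch_deg, tilt_correction_deg):
    """Subtract the content's tilt correction value from the HMD pitch
    so the virtual camera shows the content as intended for the
    assumed viewing posture."""
    return hmd_pitch_deg - tilt_correction_deg

# Supine content (correction 60°) viewed actually lying back 60°, and
# sitting content (correction 0°) viewed sitting, both come out level:
print(corrected_pitch(60.0, 60.0))  # 0.0
print(corrected_pitch(0.0, 0.0))    # 0.0
```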
  • the communication unit 250 transmits / receives data to / from an external device 400 (for example, a server) via the network NW.
  • the gaze sensor 130 detects the right eye line and the left eye line of the user in S25, and transmits each detected value to the control circuit unit 200 in S26.
  • the line-of-sight detection unit 212 receives this detection value.
  • the line-of-sight detection unit 212 identifies the user's line-of-sight direction N0 in the uvw visual field coordinate system of the virtual camera 1 using the received detection values.
  • Based on the line-of-sight direction N0 and each thumbnail included in the visual field region 23, the line-of-sight management unit 232 determines whether the user's line of sight has stayed on a specific thumbnail in the view image 26 for a predetermined time or longer. More specifically, it determines whether the point at which the line-of-sight direction N0 intersects the visual field region 23 is included in the display range (arrangement range) of a specific thumbnail. If so, it determines that the line of sight has hit that thumbnail; if not, it determines that the line of sight has not hit the thumbnail.
  • If the determination is NO, the process of FIG. 9 returns to a point immediately before S25. Thereafter, S25 to S28 are repeated until the determination in S28 becomes YES.
  • the content specifying unit 233 specifies the content corresponding to the thumbnail on which the line of sight has been determined to have stayed for the specified time or longer. For example, when the user focuses on the thumbnail SN1 for the predetermined time or longer, the moving image content associated with the management data of the thumbnail SN1 is specified as the content corresponding to the thumbnail SN1.
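  • The S25-S28 dwell loop described above can be sketched as follows; the two-second threshold and the `contains` helper are illustrative stand-ins for "a predetermined time" and the thumbnail's display-range test:

```python
import time

DWELL_SECONDS = 2.0  # stand-in for "a predetermined time"

def select_by_gaze(thumbnails, gaze_point):
    """S25-S28 loop sketch: keep reading the gaze point until it has
    stayed inside one thumbnail's display range for the dwell threshold,
    then return the content associated with that thumbnail.
    `gaze_point` is a callable returning where N0 intersects the visual
    field region; `thumbnail.contains` is a hypothetical range test."""
    focused, since = None, None
    while True:
        p = gaze_point()
        hit = next((t for t in thumbnails if t.contains(p)), None)
        if hit is not focused:
            focused, since = hit, time.monotonic()   # gaze moved elsewhere
        elif focused is not None and time.monotonic() - since >= DWELL_SECONDS:
            return focused.content                   # like S29: content specified
```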
  • the virtual space defining unit 231 defines the virtual space 2 for reproducing the moving image content by generating virtual space data for reproducing the specified moving image content.
  • the generation procedure is as follows. First, the virtual space defining unit 231 acquires the template data of the virtual space 2 corresponding to the moving image content from the template storage unit 241.
  • the virtual space defining unit 231 acquires the moving image content specified by the content specifying unit 233 from the content storage unit 242.
  • the virtual space defining unit 231 generates virtual space data that defines the virtual space 2 for reproducing moving image content by adapting the acquired moving image content to the acquired template data.
  • the virtual space defining unit 231 appropriately associates each partial image constituting the first frame image included in the moving image content with the management data of each mesh constituting the celestial sphere of the virtual space 2 in the virtual space data.
  • the virtual space defining unit 231 generates virtual space data that does not include object management data.
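  • A sketch of this S30 adaptation step, with `split_fn` standing in for a partial-image splitter such as the one sketched earlier; the dictionary layout is an assumption:

```python
def define_virtual_space(template, first_frame, split_fn):
    """Adapt the acquired moving image content to the acquired template
    by associating each partial image of the first frame with the
    management data of the corresponding mesh; plain movie playback
    includes no object management data."""
    partials = split_fn(first_frame, len(template["mesh_ids"]))
    return {
        "template": template,
        "mesh_images": dict(zip(template["mesh_ids"], partials)),
        "objects": None,   # virtual space data without object management data
    }
```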
  • After generating the virtual space data representing the virtual space 2 for reproducing the moving image content, the view image generation unit 223 generates, in S31, a view image 26 based on the user's reference line of sight 5. The generation method is the same as the method described above; here, a view image 26 of the moving image content is generated.
  • the visual field image generation unit 223 outputs the generated visual field image 26 to the HMD 110.
  • the HMD 110 updates the view image 26 by displaying the received view image 26 on the display 112. Thereby, the reproduction of the moving image content in the virtual space 2 is started, and the user visually recognizes the view image 26 of the moving image content.
  • When the HMD 110 moves, the view image 26 is updated in conjunction with the movement. Therefore, by moving the HMD 110 appropriately, the user can visually recognize a partial image (view image 26) at a desired position within the omnidirectional image of each frame constituting the moving image content.
  • FIG. 10C shows an example of the visual field image 26 of the moving image content.
  • the view image 26 shown in this figure is the view image 26 of the moving image content corresponding to the thumbnail SN1 selected by the user.
  • When the user selects the thumbnail SN1 by moving his or her line of sight in the virtual space 2, the platform view image 26 displayed on the display 112 is updated to the view image 26 of the corresponding moving image content, which the user can then view in the virtual space 2.
  • the user can select the moving image content to be viewed in the virtual space 2 through the moving image content selection platform in the virtual space 2. Therefore, the user does not need to select the moving image content that the user wants to view in the virtual space 2 while visually recognizing another general display connected to the control circuit unit 200 in the real space before wearing the HMD 110. Thereby, a user's immersion feeling with respect to the virtual space 2 can be improved further.
  • the control circuit unit 200 provides the user with the virtual space 2 in which the moving image content is reproduced. Then, again, the virtual space 2 in which the platform for selecting moving image contents is developed is provided to the user. Thus, the user can view other moving image content in the virtual space 2 by selecting another thumbnail. Since the user does not need to remove the HMD 110 when switching the moving image content that the user wants to view, the user's immersion in the virtual space 2 can be further enhanced.
  • the combined content obtained by combining the moving image content and the advertising content SC1 is reproduced so that the content of the advertising content SC1 is displayed in the specified display area in the moving image content.
  • the display area is defined by information including at least a temporal position and a spatial position of moving image content.
  • the temporal position can be said to be information indicating which frame is displayed.
  • the spatial position indicates a place in one frame.
  • the information indicating the user's viewing direction used in the processing of the control circuit unit 200, such as by the line-of-sight management unit 232, may be information indicating the line-of-sight direction N0 or information indicating the visual field direction.
  • FIG. 11 shows, among the view images displayed by the view image generation unit 223 reproducing the composite content, the view images of the scenes displaying the contents of the advertising content SC1 or SC2. That is, the view image generation unit 223 displays a view image in which the advertising content SC1 is combined into the display area F1 of the moving image content in scene C1′, and a view image in which the advertising content SC2 is combined into the display area F2 in scene C1′′. In other words, the view image generation unit 223 combines the moving image content and the advertising content SC1 so that the advertising content SC1 is displayed in the display area F1 in scene C1′, and combines the moving image content and the advertising content SC2 so that the advertising content SC2 is displayed in the display area F2 in scene C1′′.
  • the advertisement contents SC1 and SC2 may be moving images or still images.
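  • A minimal sketch of the composition described above, assuming frames are numpy arrays and the advertisement image is pre-scaled to the display area:

```python
import numpy as np

def composite_advertisement(frames, ad_image, start_frame, end_frame, xy):
    """Paste the advertisement image into the display area of the
    moving image content: the area is given by a temporal position
    (start_frame..end_frame) and a spatial position xy = (x, y) inside
    each frame."""
    x, y = xy
    h, w = ad_image.shape[:2]
    for t in range(start_frame, end_frame):
        frames[t][y:y + h, x:x + w] = ad_image
    return frames

# Example: a 40x80 advertisement shown in frames 120-150 at (x=300, y=200).
frames = [np.zeros((1024, 2048, 3), np.uint8) for _ in range(200)]
ad = np.full((40, 80, 3), 255, np.uint8)
composite_advertisement(frames, ad, 120, 150, (300, 200))
```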
  • FIG. 12 is a flowchart showing a flow of processing in which the control circuit unit 200 synthesizes the moving image content and the advertising content SC1.
  • the communication unit 250 receives moving image content.
  • the timing at which the communication unit 250 acquires the moving image content is not particularly limited.
  • For example, the communication unit 250 may acquire the moving image content when the operation receiving unit 213 receives an operation, performed by the user using the controller 300, instructing acquisition of the moving image content.
  • Alternatively, the communication unit 250 may refer to the external device 400 at a predetermined time, determine whether moving image content not yet stored in the storage unit 240 is stored in the external device 400, and automatically acquire the unstored moving image content.
  • A user who views moving image content on a head-mounted display is more deeply immersed in the content than a user who views it on a stationary television or the like. Therefore, if an advertisement can be displayed while moving image content is reproduced on the head-mounted display, a high advertising effect can be expected. However, if the advertisement is displayed at a position that easily enters the user's field of view, the sense of immersion in the virtual space may be impaired.
  • This disclosure provides a method of displaying other content, such as advertisements, in a way that has little effect on the user's sense of immersion when reproducing moving image content on a head-mounted display.
  • FIG. 13 is a diagram for explaining the relationship between the view image displaying the advertising content SC1 (first sub-content), the view image displaying the advertising content SC2 (second sub-content) that guides the line of sight to the advertising content SC1, and their positions in the virtual space 2.
  • the visual field image v1 is a part of the virtual space image 22 generated during the reproduction of the moving image content.
  • the visual field image v1 is a part of the visual field image displayed based on the reference line of sight when the user is facing forward in the virtual space 2.
  • Here, the “front” may be, for example, the plus direction in the horizontal direction with respect to the initial direction (−90° or more and +90° or less), and is preferably the vicinity of the initial direction.
  • The “rear” is the minus direction in the horizontal direction with respect to the initial direction (the range greater than +90° and less than or equal to +180°, or less than −90° and greater than or equal to −180°).
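  • A minimal sketch of this front/rear classification, with the yaw angle measured in degrees relative to the initial direction:

```python
def facing(yaw_deg):
    """Classify the horizontal viewing direction relative to the
    initial direction (0°): front is -90°..+90°, rear is the rest."""
    a = (yaw_deg + 180.0) % 360.0 - 180.0   # normalize into [-180, 180)
    return "front" if -90.0 <= a <= 90.0 else "rear"

print(facing(30.0))    # front
print(facing(150.0))   # rear
print(facing(-170.0))  # rear
```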
  • the visual field image generation unit 223 displays a part of the bookshelf in the advertising content SC1 on the mirror m1 in the visual field image v1 (second sub content display step).
  • the image of part of the bookshelf is the advertising content SC2 for guiding the line of sight to the advertising content SC1, which is located on the rear side as seen from a user viewing the moving image content while facing the front side.
  • the advertisement content SC2 is content for informing the user that there is content to be noted on the rear side with respect to the initial direction. While the moving image content is being reproduced, the user tends to mainly view the front side near the initial direction. Therefore, if the control circuit unit 200 displays advertising content on the rear side with respect to the initial direction, the user's immersive feeling is not impaired.
  • the display method of the advertising content SC2 is not limited to the form of this embodiment, in which a part of the advertising content SC1 is shown in a mirror. It is sufficient that a part of the content on the rear side can be reflected on the front side of the virtual space image 22 without looking unnatural. For example, if an image having some reflective surface is displayed on the front side of the virtual space image 22, it is preferable to display an image showing at least a part of the advertising content SC1 in that portion. Thus, when a person appearing in the moving image content opens a window, a part of the advertising content SC1 may be drawn in the window, as if the advertising content SC1 behind the user were reflected in it.
  • the visual field image generation unit 223 may always display the advertising content SC2, or may display it only at a predetermined temporal position.
  • the visual field image generation unit 223 may display the advertising content SC2 when the main character in the moving image content looks into a mirror or opens a curtain on the window.
  • In the present embodiment, a case where a still image is used as the content for line-of-sight guidance is described, but a moving image or audio may also be used.
  • the visual field image generation unit 223 displays the visual field image v2 when recognizing that the user is facing backward. That is, the visual field image generation unit 223 generates the visual field image v2 by updating the visual field image based on the orientation of the HMD 110.
  • the visual field image v2 is a part of the virtual space image 22 generated during the reproduction of the moving image content.
  • the visual field image v2 is a part of the visual field image displayed based on the reference line of sight when the user is facing backward in the virtual space 2.
  • the visual field image v2 includes advertisement content SC1.
  • the advertisement content SC1 includes advertisement images p1 and p2 and an advertisement background image b1.
  • the advertisement background image b1 hides a joint area S described later.
  • the advertisement images p1 and p2 are displayed on the advertisement background image b1.
  • the advertising content SC1 may be a moving image or a still image.
  • the joint region S is a place where the joints that cause inconsistencies between images, among the joints of the plurality of images, are gathered.
  • the joint region S is positioned on the negative direction (backward) side of the initial direction.
  • the moving image content is taken by an omnidirectional camera.
  • An image of the content may be generated by connecting separated images.
  • The separated images may be a plurality of separate images, or separated parts of a single image.
  • the communication unit 250 When the communication unit 250 acquires the moving image content, the communication unit 250 stores it in the storage unit 240.
  • the content management unit 234 recognizes that new moving image content is stored in the storage unit 240.
  • the content management unit 234 acquires line-of-sight guidance frame information (frame information) from the moving image content (second sub-content frame information acquisition step).
  • the line-of-sight guidance frame information includes information indicating the temporal position of the scene including the display area of the advertisement content SC2 and the spatial position of the display area.
  • the creator of the moving image content associates the line-of-sight guidance frame information with the content when creating the content.
  • the creator of the moving image content can specify portions of the video, such as the mirror described above, where displaying the advertising content SC2 would not look unnatural. This makes it possible to display an image for guiding the line of sight without impairing the user's sense of immersion.
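  • The line-of-sight guidance frame information might be represented as follows (the field names are hypothetical; the disclosure only requires a temporal position and a spatial position):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GazeGuidanceFrameInfo:
    """Line-of-sight guidance frame information attached to the movie
    by its creator; field names are hypothetical."""
    start_frame: int                  # temporal position: scene with the SC2 area begins
    end_frame: int                    # temporal position: scene ends
    rect: Tuple[int, int, int, int]   # spatial position of the SC2 display area (x, y, w, h)
```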
  • the content management unit 234 synthesizes the advertisement background image b1 and the virtual space image 22 of the moving image content so as to have continuity.
  • For example, the content management unit 234 synthesizes the wall and floor in the advertisement background image b1 so as to be continuous with the wall and floor near the joint region S of the virtual space image 22.
  • Advertisement images p1 and p2 are images showing the contents of the advertisement.
  • the advertisement background image b1 includes a bookshelf and a poster stand so that the contents of the advertisement can be displayed naturally.
  • the content management unit 234 combines the advertisement images p1 and p2 on the shelves and poster stands in the bookshelf. As a result, the advertisement images p1 and p2 are displayed as if they were originally present in the bookshelf or the posters themselves, so that the user's immersive feeling is not impaired.
  • The method for combining the advertising contents SC1 and SC2 with the moving image content is not particularly limited. For example, a process of pasting an image such as the advertising content onto a frame image of the moving image content is performed, and the size and shape of the image may be adjusted as appropriate: if the object to which the image is pasted, such as a bookshelf or mirror, appears far away, each image is reduced in size, and if the object is seen from an oblique direction, each image may be processed into a trapezoidal shape.
  • the present invention is not limited to such a mode.
  • For example, when the visual field image generation unit 223 displays the moving image content on the HMD 110, the composite content may be generated by generating the view image so that the advertising content SC1 and the advertising content SC2 are superimposed on their predetermined display areas.
  • the HMD system 100 may also acquire the composite content from the external device 400, such as a server.
  • the external device 400 may perform a process of hiding the joint area S with an image corresponding to the advertisement background image, and the HMD system 100 may perform a process of synthesizing the advertisement image on the advertisement background image.
  • FIG. 14 is a schematic diagram for explaining the relationship between the lattice and the position of the advertisement content.
  • a lattice G is associated with the virtual space data generated when the control circuit unit 200 reproduces the moving image content. More specifically, the grid G is associated with the template data that is a part of the virtual space data.
  • the line-of-sight information collection section FE is one of the sections formed by the lattice G, and is set at a specified location in the celestial sphere constituting the virtual space 2.
  • the line-of-sight information collection section FE is assigned a function for determining whether or not the user's line of sight (view direction or line-of-sight direction N0) intersects the line-of-sight information collection section FE. More specifically, the line-of-sight information collection section FE is associated with template data for generating virtual space data.
  • a line-of-sight information collection section FE is set in the virtual space 2 by generating virtual space data using the template data.
  • When a virtual space is provided by arranging objects, as in a game or the like, it is easy to collect line-of-sight information because it is sufficient to give the objects a function of collecting it. In the case of moving image content that does not use objects, however, collecting line-of-sight information is difficult; it becomes possible by giving a specified section a line-of-sight information collection function, as in the present embodiment.
  • Template data may be created so that the line-of-sight information collection section FE overlaps the joint region S, and virtual space data for a plurality of moving image contents may be created using that template data. Since there is no need to change the position of the line-of-sight information collection section for each moving image content, line-of-sight information for advertisements can be collected efficiently even as the number of moving image contents grows. Even for content that does not use objects, or uses only a limited number of them, it becomes easy to efficiently collect information indicating attention to advertisements across many moving image contents.
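  • A sketch of such a shared template, where the FE sections are the grid sections overlapping the joint region S and are reused unchanged across moving image contents; the dictionary layout is an assumption:

```python
def build_template_with_fe(mesh_ids, joint_section_ids):
    """Shared template sketch: the line-of-sight information collection
    section FE is the set of grid sections overlapping the joint region
    S, so every movie rendered from this template collects gaze data at
    the same rear-side spot."""
    return {"mesh_ids": list(mesh_ids), "fe_sections": set(joint_section_ids)}

def gaze_hits_fe(template, hit_section_id):
    """True when the grid section the user's line of sight intersects
    is part of the FE section."""
    return hit_section_id in template["fe_sections"]

template = build_template_with_fe(range(512), {500, 501})
print(gaze_hits_fe(template, 500))  # True
print(gaze_hits_fe(template, 10))   # False
```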
  • FIG. 15 is a diagram showing a flow of processing for collecting line-of-sight information.
  • the content management unit 234 determines whether it is time to display the advertising content SC1. Specifically, it makes this determination by referring to the information indicating the temporal position in the advertisement space information and comparing it with the temporal position of the moving image content being reproduced.
  • If S301 is NO, that is, if it is not yet time for the advertising content SC1 to be displayed, the determination in S301 is repeated until it becomes YES.
  • the reproduction of the moving image content may be started by the user selecting the moving image content from the platform described above.
  • the line-of-sight management unit 232 determines whether or not the user's line of sight intersects with the line-of-sight information collection section FE corresponding to the location displaying the advertisement content SC1. In the case of NO in S302, the determination of whether or not the line of sight intersects the line-of-sight information collection section FE is continued. In the case of YES in S302, in S303, the line-of-sight management unit 232 measures the time when the line of sight intersects the line-of-sight information collection section FE (line-of-sight information collection step).
  • When the line of sight no longer intersects the line-of-sight information collection section FE, the line-of-sight management unit 232 finishes the measurement.
  • the line-of-sight management unit 232 creates the line-of-sight information by associating the temporal position at which the measurement started and the temporal position at which it ended with, when a plurality of line-of-sight information collection sections FE are set, information indicating which section FE was the measurement target.
  • the line-of-sight management unit 232 also associates the line-of-sight information with information indicating which advertising content SC1 is combined at the corresponding position of the line-of-sight information collection section FE. Note that this information may be transmitted by the communication unit 250 to the external device 400 at any timing after the moving image content and the advertising content SC1 are combined in S203 described above.
  • the communication unit 250 transmits line-of-sight information to the external device 400.
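  • The S302-S305 measurement described above can be sketched as follows; `gaze_hits_fe_at` and `send` stand in for the line-of-sight test and the communication unit 250:

```python
def collect_gaze_info(frame_times, gaze_hits_fe_at, fe_id, ad_id, send):
    """While the advertisement is displayed, measure the spans during
    which the line of sight intersects section FE and transmit each
    span, tagged with the FE and advertisement IDs, to the external
    device 400."""
    record = None
    for t in frame_times:
        if gaze_hits_fe_at(t):
            if record is None:                      # measurement starts (S303)
                record = {"fe": fe_id, "ad": ad_id, "start": t, "end": t}
            else:
                record["end"] = t
        elif record is not None:                    # measurement ends (S304)
            send(record)                            # transmit line-of-sight info (S305)
            record = None
    if record is not None:
        send(record)

# Example: gaze intersects FE from t=3.0s to t=4.5s during the ad scene.
times = [i * 0.5 for i in range(12)]
collect_gaze_info(times, lambda t: 3.0 <= t <= 4.5,
                  fe_id="FE-1", ad_id="SC1-a", send=print)
```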
  • the administrator of the external device 400 can evaluate, based on the line-of-sight information, how much attention each advertisement attracted. The degree of interest the user showed in the advertisement can also be evaluated according to the content of the moving image content: for example, if the scene in which the user saw the advertisement was a climax of the moving image content or an exciting scene, it can be evaluated that the user showed high interest in the advertisement.
  • Conversely, if the scene in which the user saw the advertisement was neither a climax nor exciting, it can be evaluated that the interest the user showed in the advertisement was not high.
  • Temporal positions in the moving image content may be associated with information indicating the nature of the scene, such as whether it is a climax, which facilitates the analysis of the user's interest described above. As described above, according to the present embodiment, the effect of advertisements displayed in moving image content reproduced on a head-mounted display can be grasped efficiently.
  • The information indicating which advertisement content SC1 was combined with the line-of-sight information collection section FE may be transmitted by the communication unit 250 to the external device 400 at any timing after the moving image content and the advertisement content SC1 are combined in S203 described above. Alternatively, which advertisement content SC1 is combined with which moving image content may be managed by the external device 400, and the control circuit unit 200 may combine the contents in accordance with an instruction from the external device 400. In this case, the line-of-sight information does not need to be associated with information indicating which advertisement content SC1 was combined at the corresponding position of the line-of-sight information collection section FE, and the external device 400 can still evaluate how much interest the user has in which advertisement.
  • The content management unit 234 may combine the moving image content and the advertisement content so that a plurality of advertisement contents are associated with the line-of-sight information collection section FE at the same location and the displayed advertisement content changes as time passes.
  • the content management unit 234 may associate information indicating the advertising content corresponding to the temporal position of the video content being played back with the line-of-sight information.
  • A single line-of-sight information collection section can thus be used to evaluate the degree of user interest in a plurality of advertisements.
  • Information indicating which advertisement content is displayed at which time position may be stored in the external device 400.
  • the line-of-sight information only needs to include the temporal position at which the line of sight intersects the line-of-sight information collection section FE.
  • The external device 400 can then evaluate how much the line of sight was directed to which advertisement.
  • the mode of collecting line-of-sight information using one line-of-sight information collection section FE in one video content has been described, but the present invention is not limited to such a form.
  • the template data having the line-of-sight information collection section FE may be commonly used for virtual space data when a plurality of moving image contents are reproduced.
  • A spatial position at which displaying an advertisement does not look unnatural may be set at the same location in each content. There is then no need to change the position of the line-of-sight information collection section for each video content. Therefore, even if there are a large number of moving image contents, the degree of user interest in the advertisement can be evaluated efficiently.
  • a plurality of line-of-sight information collection sections FE may be set in one template data, and the size may be different for each line-of-sight information collection section FE.
  • The information of the line-of-sight information collection sections FE is provided to the creator of the moving image content, and the creator can appropriately select in which line-of-sight information collection section FE and at what time an advertisement is displayed.
  • Information indicating at which temporal position an advertisement is to be displayed in the line-of-sight information collection section FE at which spatial position may be associated with the moving image content as advertisement frame information.
  • The size of the line-of-sight information collection section FE may be set as appropriate, but it is preferably larger than the display area F1 in both the vertical and horizontal directions, and more preferably the difference in horizontal width between the line-of-sight information collection section FE and the display area F1 is set smaller than the difference in vertical width. This reduces the need to adjust the initial vertical direction in the virtual space 2. For example, when moving image content is shot with an omnidirectional camera, the initial direction (front) and the initial image in the virtual space are automatically determined based on the shooting direction of the omnidirectional camera. However, the degree of freedom of the spatial position of the target object for displaying the advertisement is limited by the target object and the shooting environment.
  • the initial direction may be edited when editing the moving image content.
  • If the initial direction is adjusted in the vertical direction, the ground on which the user stands in the virtual space appears inclined, which may cause the user to suffer motion sickness (so-called VR sickness).
  • If the vertical width of the line-of-sight information collection section FE is increased in advance, the need to adjust the initial direction vertically can be reduced.
  • On the other hand, it is preferable that the difference in size between the line-of-sight information collection section FE and the display area F1 is small. By making the width difference in the horizontal direction smaller than the width difference in the vertical direction, the difference in size between the line-of-sight information collection section FE and the display area F1 is kept small while the need to adjust the initial direction vertically is still reduced. A small sizing sketch follows.
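  • The sizing rule above can be made concrete with a small numerical sketch; all dimensions are invented for illustration.

```python
# Display area F1 of the advertisement (arbitrary units).
f1_w, f1_h = 1.0, 0.6
# The collection section FE exceeds F1 in both directions, but the
# vertical margin is deliberately the larger of the two.
margin_h, margin_v = 0.05, 0.25
fe_w, fe_h = f1_w + 2 * margin_h, f1_h + 2 * margin_v

assert fe_w > f1_w and fe_h > f1_h      # FE is larger in both directions
assert (fe_w - f1_w) < (fe_h - f1_h)    # horizontal difference is smaller
```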
  • the prior art is configured to play back content that is assumed to be viewed by the user in a sitting position.
  • some content played back in the virtual space is premised on the user viewing in a supine posture.
  • The prior art described above does not take into account that the user actually views such content in a supine posture. Therefore, when such content is reproduced and the user takes a supine posture, there arises a problem that the user views an unnatural image.
  • The present disclosure therefore suitably reproduces, in accordance with the user's posture, various contents that are premised on being viewed in different postures in the virtual space.
  • FIG. 16 is a sequence diagram showing a flow of processing in which the HMD system 100 provides the user with the virtual space 2 in which the tilt of the virtual camera 1 is corrected.
  • the processing shown in this figure is processing at the start of reproduction of the corresponding video content after the user selects any thumbnail through the platform.
  • After the moving image content corresponding to the thumbnail selected by the user is specified, in S41 the virtual space defining unit 231 generates virtual space data for reproducing the moving image content. The details are the same as those of S21 of FIG. Therefore, the virtual space defining unit 231 generates the virtual space data without considering the inclination correction value defined for the moving image content to be reproduced. That is, in the example of FIG. 10, the XYZ coordinate system of the virtual space 2 defined by the generated virtual space data is not corrected by the content-defined inclination correction value and is always parallel to the global coordinate system.
  • After generating the virtual space data, the virtual camera control unit 221 initializes the virtual camera 1 in the virtual space 2. Unlike the initialization procedure described with reference to FIG. 8, the initialization procedure performed at this time takes the content's inclination correction value into account.
  • the initialization procedure is as follows. First, in S42, the virtual camera control unit 221 places the virtual camera 1 at an initial position in the virtual space 2 (eg, the center 21 in FIG. 4). Next, in S43, the HMD sensor 120 detects the current position and inclination of the HMD 110, and outputs these detected values to the control circuit unit 200 in S44. The HMD detection unit 211 receives each detection value.
  • the tilt correction value specifying unit 235 specifies the tilt correction value defined for the video content to be played back (the video content specified by the content specifying unit 233) from the video content. If the tilt correction value is not defined for the moving image content, the tilt correction value specifying unit 235 specifies 0 ° as the tilt correction value.
  • FIG. 19B shows an example of the view image 26 including the advertisement A2.
  • the view image 26 including the advertisement A2 is displayed on the display 112 as the initial view image 26.
  • the position of the advertisement A2 in the virtual space 2 is superimposed on the functional section 27 on the celestial sphere. Therefore, the line-of-sight management unit 232 can detect that the user has applied the line of sight to the advertisement A2 by detecting that the line of sight has hit the functional section 27.
  • the HMD system 100 can measure the degree of the user's interest in the advertisement A2.
  • Moreover, the position of the functional section 27 for the advertisement can easily be unified across contents.
  • After arranging the virtual camera 1 in the virtual space 2, the virtual camera control unit 221 matches the inclination of the virtual camera 1 in the global coordinate system with the inclination of the HMD 110, thereby setting the orientation of the virtual camera 1 in the virtual space 2.
  • the inclination of the virtual camera 1 with respect to the HMD 110 is not corrected.
  • The virtual camera control unit 221 sets the orientation of the virtual camera 1 in the virtual space 2 so that the orientation becomes parallel to the XZ plane when the tilt of the HMD 110 with respect to the horizontal direction in the global coordinate system matches the tilt correction value defined for the moving image content.
  • When the tilt correction value is 0°, the XYZ coordinate system is parallel to the global coordinate system, and the tilt of the virtual camera 1 is linked to the tilt of the HMD 110. When the user views the content in a sitting posture facing the horizontal direction, the orientation of the virtual camera 1 is parallel to the XZ plane of the virtual space 2. As a result, the field-of-view image 26 corresponding to the reference line of sight 5 parallel to the XZ plane is displayed on the display 112, so that the user can view a natural image in a sitting posture.
  • When the tilt correction value is 60°, the XYZ coordinate system of the virtual space 2 is inclined by 60° with respect to the global coordinate system, and the inclination of the virtual camera 1 in the global coordinate system is linked to the inclination of the HMD 110. Therefore, when the user is lying down and looking 60° obliquely upward (the HMD 110 is inclined 60° from the horizontal direction), the orientation of the virtual camera 1 is parallel to the XZ plane of the virtual space 2. As a result, the field-of-view image 26 corresponding to the reference line of sight 5 parallel to the XZ plane is displayed on the display 112, and the user can view a natural image while lying down. A numerical sketch of this correction follows.
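  • The effect of the tilt correction value can be sketched numerically as follows; the subtraction formula is an assumption consistent with the two cases above, not a formula stated in the patent.

```python
def camera_pitch_in_virtual_space(hmd_pitch_deg, tilt_correction_deg=0.0):
    """Pitch of the virtual camera 1 relative to the XZ plane of the
    virtual space 2: the camera looks parallel to the XZ plane exactly
    when the HMD tilt equals the tilt correction value."""
    return hmd_pitch_deg - tilt_correction_deg

# Sitting content (correction 0°): an upright user sees a level image.
assert camera_pitch_in_virtual_space(0.0, 0.0) == 0.0
# Lying content (correction 60°): looking 60° obliquely upward while
# lying down also yields a level (natural) image.
assert camera_pitch_in_virtual_space(60.0, 60.0) == 0.0
```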
  • the user's visual recognition location in the virtual space can be changed, so that the user's sense of immersion in the virtual space can be enhanced.
  • There is thus a demand for a scheme that improves operability in the virtual space so that the user can more easily view an arbitrary place in the virtual space while the sense of immersion in the virtual space is increased.
  • the present disclosure further improves operability in the virtual space.
  • FIG. 20A shows an example of a virtual space 2 for providing a platform.
  • This figure shows a virtual space 2 for providing a platform in which one operation object 28 and four thumbnails SN1 to SN4 are arranged.
  • Each of these thumbnails SN1 to SN4 is an object associated with a summary image (thumbnail) of the corresponding moving image content.
  • the operation object 28 is used for rotating the virtual camera 1 or the virtual space 2.
  • the shape of the operation object 28 is basically circular. As will be described in detail later, the operation object 28 elastically deforms (extends) in response to an operation on the operation object 28 by the user.
  • the visual field image generation unit 223 determines the position of the gazing point 29 in the visual field image 26 based on the reference visual line 5 and the visual line direction N0.
  • When generating the visual field image 26 based on the reference line of sight 5, the visual field image generation unit 223 places the gazing point 29 at the center of the visual field image 26.
  • While the reference line of sight 5 coincides with the line-of-sight direction N0, the gazing point 29 is always placed at the center of the visual field image 26.
  • When the line-of-sight direction N0 deviates from the reference line of sight 5 because the user moves the line of sight, the visual field image generation unit 223 determines the position of the gazing point 29 in the visual field image 26 based on the current line-of-sight direction N0. Specifically, the intersection point at which the line-of-sight direction N0 intersects the visual field region 23 is specified, and then a visual field image 26 in which the gazing point 29 is placed at the position corresponding to that intersection point is generated.
  • the position of the gazing point 29 in the view field image 26 is changed following the movement of the line of sight.
  • the view field image 26 is updated so that the gazing point 29 is displayed at the position where the user gazed in the view field image 26.
  • The user can freely move the gazing point 29 in the view image 26 by moving the line of sight. Therefore, by checking the gazing point 29, the user can accurately grasp which part of the view image 26 his or her line of sight is directed at. A minimal sketch of this placement follows.
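  • A minimal sketch of the gazing point placement, assuming the gaze direction N0 has already been projected into image-plane offsets from the reference line of sight 5 (hypothetical inputs):

```python
def place_gaze_point(view_w, view_h, offset_x=0.0, offset_y=0.0):
    """Return the position of the gazing point 29 in the view image 26.
    With no eye movement (both offsets 0) the point sits at the center;
    otherwise it follows the intersection of N0 with the view plane."""
    return view_w / 2 + offset_x, view_h / 2 + offset_y
```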
  • When the user moves the HMD 110, the view image 26 is updated in conjunction with the movement. For example, when the user moves the HMD 110 and the position of the visual field area 23 changes to a position including the thumbnails SN1 and SN3, the visual field image 26 including the thumbnails SN1 and SN3 is displayed on the display 112. Therefore, the user can bring the thumbnail of the moving image content that he or she wants to watch into the field of view by moving the HMD 110 as appropriate.
  • The line-of-sight management unit 232 determines, based on the line-of-sight direction N0 and each thumbnail included in the field-of-view area 23, whether or not the user's line of sight (gazing point 29) has hit a specific thumbnail included in the field-of-view image 26. More specifically, the line-of-sight management unit 232 determines whether or not the point at which the line-of-sight direction N0 intersects the visual field region 23 is included in the display range (arrangement range) of a specific thumbnail included in the visual field region 23. If the determination result is YES, it is determined that the line of sight has hit that thumbnail; if NO, it is determined that the line of sight has not hit it.
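  • The hit test can be sketched as a simple containment check; the rectangle model of a thumbnail's display range is an assumption made for illustration.

```python
def hit_thumbnail(gaze_x, gaze_y, thumbnails):
    """Return the name of the thumbnail whose display range contains the
    point where the gaze direction N0 intersects the view area 23.
    `thumbnails` maps a name such as "SN1" to an (x, y, w, h) rectangle
    in view-area coordinates."""
    for name, (x, y, w, h) in thumbnails.items():
        if x <= gaze_x <= x + w and y <= gaze_y <= y + h:
            return name  # the line of sight has hit this thumbnail
    return None          # the line of sight has not hit any thumbnail
```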
  • the virtual space defining unit 231 defines the virtual space 2 for reproducing the moving image content by generating virtual space data for reproducing the specified moving image content.
  • the generation procedure is as follows. First, the virtual space defining unit 231 acquires the template data of the virtual space 2 corresponding to the moving image content from the template storage unit 241.
  • the virtual space defining unit 231 acquires the moving image content specified by the content specifying unit 233 from the content storage unit 242.
  • the virtual space defining unit 231 generates virtual space data that defines the virtual space 2 for reproducing moving image content by adapting the acquired moving image content to the acquired template data.
  • the virtual space defining unit 231 appropriately associates each partial image constituting the first frame image included in the moving image content with the management data of each mesh constituting the celestial sphere of the virtual space 2 in the virtual space data.
  • the virtual space defining unit 231 generates virtual space data that does not include object management data.
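  • A minimal sketch of this generation step, assuming meshes and partial images are supplied as parallel lists (a simplification of the patent's management data):

```python
def build_virtual_space_data(template_meshes, first_frame_partials):
    """Pair each mesh of the celestial-sphere template with the partial
    image of the first frame that covers it; moving image playback here
    uses no object management data."""
    if len(template_meshes) != len(first_frame_partials):
        raise ValueError("template tiling and frame tiling must agree")
    meshes = [{"mesh": m, "texture": p}
              for m, p in zip(template_meshes, first_frame_partials)]
    return {"meshes": meshes, "objects": []}
```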
  • After generating the virtual space data representing the virtual space 2 for reproducing the moving image content, the view image generation unit 223 generates a view image 26 based on the user's reference line of sight 5 in S31. This generation method is the same as the method described with reference to FIG. Here, a view image 26 of the moving image content is generated.
  • the visual field image generation unit 223 outputs the generated visual field image 26 to the HMD 110.
  • the HMD 110 updates the view image 26 by displaying the received view image 26 on the display 112. Thereby, the reproduction of the moving image content in the virtual space 2 is started, and the user visually recognizes the view image 26 of the moving image content.
  • When the user moves the HMD 110, the view image 26 is updated in conjunction with the movement. Therefore, the user can view a partial image (view image 26) at a desired position in the omnidirectional image of each frame constituting the moving image content by appropriately moving the HMD 110.
  • FIG. 20 (c) shows an example of the field-of-view image 26 of the moving image content.
  • the view image 26 shown in this figure is the view image 26 of the moving image content corresponding to the thumbnail SN1 selected by the user.
  • When the user selects the thumbnail SN1 in the platform view image 26 displayed on the display 112, the display 112 is updated to the view image 26 of the moving image content shown in FIG. That is, the user can view the corresponding moving image content in the virtual space 2 by moving the line of sight in the virtual space 2 to select the thumbnail SN1.
  • the user can select the moving image content to be viewed in the virtual space 2 through the moving image content selection platform in the virtual space 2. Therefore, the user does not need to select the moving image content that the user wants to view in the virtual space 2 while visually recognizing another general display connected to the control circuit unit 200 in the real space before wearing the HMD 110. Thereby, a user's immersion feeling with respect to the virtual space 2 can be improved further.
  • After the thumbnail is selected, the control circuit unit 200 provides the user with the virtual space 2 in which the moving image content is reproduced. When the reproduction ends, the virtual space 2 in which the platform for selecting moving image contents is developed is provided to the user again. Thus, the user can view other moving image content in the virtual space 2 by selecting another thumbnail. Since the user does not need to remove the HMD 110 when switching the moving image content to view, the user's immersion in the virtual space 2 can be further enhanced.
  • the control circuit unit 200 does not control the tilt of the virtual camera 1 in the virtual space 2 in conjunction with the tilt of the HMD 110. Therefore, even if the user tilts the HMD 110 in the rotation mode, the virtual camera 1 does not tilt in conjunction with the tilt. Instead, the control circuit unit 200 rotates the virtual camera 1 in the virtual space 2 according to the operation of the operation object 28 by the user. In the present embodiment, the user can operate the operation object 28 by the movement of the HMD 110. Therefore, the control circuit unit 200 rotates the virtual camera 1 according to the inclination of the HMD 110 in the rotation mode.
  • the rotation mode even if the user stops the movement of the HMD 110, the rotation of the virtual camera 1 does not stop. That is, as long as the user continues to maintain the state in which the HMD 110 is further tilted from the state at the time of starting rotation, the virtual camera 1 continues to rotate. Thereby, the view image 26 is also continuously updated.
  • FIG. 21 is a sequence diagram showing the flow of processing when the HMD system 100 shifts to the rotation mode.
  • A case where the virtual space 2 in which the platform is developed is provided to the user will be described as an example.
  • the HMD sensor 120 detects the position and inclination of the HMD 110 in S41-1, and outputs the detected value to the control circuit unit 200 in S42-1.
  • the HMD detection unit 211 receives this detection value.
  • the virtual camera control unit 221 specifies the reference line of sight 5 according to the above-described procedure based on the detected values of the position and inclination of the HMD 110.
  • the virtual camera control unit 221 controls the virtual camera 1 based on the identified reference line of sight 5.
  • the operation object control unit 236 determines whether or not the reference line of sight 5 has hit the operation object 28 in the virtual space 2 for a predetermined time or more.
  • the visual field region determination unit 222 determines the visual field region 23 in the virtual space 2 based on the identified reference visual line 5.
  • the visual field image generation unit 223 generates the visual field image 26, and outputs it to the HMD 110 in S45-1.
  • the HMD 110 updates the view image 26 by displaying the received view image 26 on the display 112. Thereby, if the user moves the HMD 110 in the normal mode, the view image 26 is updated in conjunction with the movement.
  • Even when the user moves the HMD 110 and the field-of-view area 23 changes, the operation object 28 is kept within the field-of-view area 23. The procedure for realizing this is as follows. First, the operation object control unit 236 detects that at least a part of the operation object 28 is located outside the field-of-view area 23. Next, when this is detected, the operation object control unit 236 moves the operation object 28 into the field-of-view area 23. Specifically, the position (coordinates) of the operation object 28 in the data defining the operation object 28 is updated to coordinates within the current field-of-view area 23.
  • The position to which the operation object 28 returns may be any position within the field-of-view area 23, but it is preferably the position that minimizes the amount of movement of the operation object 28. For example, when the operation object 28 has come to be positioned to the right of the field-of-view area 23, the operation object control unit 236 preferably moves the operation object 28 to the right end of the field-of-view area 23.
  • Since the operation object 28 is thus always included in the field-of-view area 23, it is always included in the view image 26 generated based on the field-of-view area 23. Therefore, when the user wants to select the operation object 28, the user can easily find it. Further, as will be described later, when the operation object 28 is selected with the reference line of sight 5, the movement of the HMD 110 required to bring the reference line of sight 5 onto the operation object 28 can be minimized. A minimal sketch of this repositioning follows.
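  • A minimal sketch of the repositioning, flattening the spherical field-of-view area 23 to a rectangle for illustration; clamping to the rectangle is what yields the smallest movement:

```python
def keep_in_view(obj_x, obj_y, view_rect):
    """Clamp the operation object 28 back into the field-of-view area 23
    (given as an (x, y, w, h) rectangle). An object that drifted off the
    right side, for example, is returned to the right edge."""
    x, y, w, h = view_rect
    return min(max(obj_x, x), x + w), min(max(obj_y, y), y + h)
```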
  • the operation object control unit 236 detects that the operation object 28 has been selected by the user in S47-1. As described above, the user can select the operation object 28 by applying a line of sight to the operation object 28. Using the selection of the operation object 28 as a trigger, the control circuit unit 200 shifts to the rotation mode in S48-1. Thereby, the user can shift the control circuit unit 200 to the rotation mode by his / her own intention. After shifting to the rotation mode, the rotation of the virtual camera 1 is started in response to a user operation on the operation object 28.
  • FIG. 22 is a sequence diagram showing a flow of processing when the control circuit unit 200 starts to rotate the virtual camera 1 in the rotation mode.
  • After the shift to the rotation mode, the HMD sensor 120 detects the position and inclination of the HMD 110 in S51, and outputs the detected values to the control circuit unit 200 in S52.
  • the HMD detection unit 211 receives this detection value.
  • the operation object control unit 236 determines whether or not the inclination of the HMD 110 has changed. If NO in S53, the process in FIG. 22 returns to before S51. Therefore, after the operation object 28 is selected, the steps S51 to S53 are repeated until the user further tilts the HMD 110.
  • the operation object control unit 236 specifies the amount of change in the inclination of the HMD 110 in S54.
  • the operation object control unit 236 specifies the extension direction (predetermined direction) and the extension amount (predetermined amount) of the operation object 28 designated by the user based on the specified change amount of inclination.
  • the operation object control unit 236 expands the operation object 28 by the specified extension amount in the specified extension direction. At this time, the operation object control unit 236 expands the operation object 28 so that the operation object 28 gradually becomes narrower from the root of the operation object 28 toward the end portion in the expansion direction. Further, the display state of the operation object 28 is changed so that the reference line of sight 5 is superimposed on the end portion of the operation object 28 in the extension direction.
  • the rotation control unit 237 determines the rotation direction of the virtual camera 1 based on the expansion direction of the operation object 28, and determines the rotation speed of the virtual camera 1 based on the expansion amount of the operation object 28.
  • At this time, the operation object control unit 236 preferably specifies a direction parallel to the horizontal component of the extension direction as the rotation direction of the virtual camera 1.
  • the virtual camera 1 rotates only around the Y axis in the virtual space 2.
  • the virtual camera 1 can be prevented from rotating around the Z axis in the virtual space 2.
  • the virtual camera control unit 221 starts rotating the virtual camera 1 at the determined rotation speed in the determined rotation direction.
  • the roll direction of the virtual camera 1 in the virtual space 2 changes, so that the field-of-view area 23 is changed.
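  • The mapping from the stretch of the operation object 28 to the camera rotation can be sketched as follows; the linear speed gain and the sign convention (leftward stretch → counterclockwise) are assumptions, since the patent only states that direction comes from the horizontal component and speed from the extension amount.

```python
import math

def yaw_speed_from_extension(ext_dx, ext_dy, gain=1.0):
    """Rotation of the virtual camera 1 about the Y axis only: the sign
    comes from the horizontal component of the extension, the magnitude
    from the total extension amount. No Z-axis roll is introduced."""
    direction = -1.0 if ext_dx < 0 else 1.0
    return direction * gain * math.hypot(ext_dx, ext_dy)
```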
  • the operation object control unit 236 controls the operation object 28 so that the operation object 28 is always arranged at the center of the visual field area 23 when the virtual camera 1 is rotated.
  • the coordinates of the operation object 28 are updated to the coordinates of the center of the visual field area 23.
  • the visual field area determination unit 222 determines the visual field area 23 based on the direction (roll direction) of the virtual camera 1 after rotation for a certain time.
  • the visual field image generation unit 223 generates a visual field image 26 based on the determined visual field region 23, and transmits the visual field image 26 to the HMD 110 in step S60.
  • the HMD 110 updates the view image 26 by displaying the received view image 26 on the display 112. Thereby, the view image 26 before the rotation of the virtual camera 1 is updated to the view image 26 after the rotation of the virtual camera 1.
  • The view image 26 based on the field-of-view area 23 after the rotation of the virtual camera 1 includes the thumbnails SN1 and SN2. Since the positions of the thumbnails SN1 and SN2 in the field-of-view area 23 have shifted to the right compared with before the rotation of the virtual camera 1 started, the positions of the thumbnails SN1 and SN2 in the view image 26 have correspondingly shifted to the right as well. Therefore, the user can grasp that the virtual camera 1 has started to rotate counterclockwise.
  • After the virtual camera 1 starts to rotate, it continues to rotate counterclockwise while the user keeps twisting his or her neck to the left. On the other hand, when the user stops twisting the neck and turns the face to the front, the rotation of the virtual camera 1 is stopped.
  • the flow of these processes will be described.
  • the visual field area determination unit 222 determines the visual field area 23 based on the direction (roll direction) of the virtual camera 1 after rotation for a predetermined time.
  • the visual field image generation unit 223 generates the visual field image 26 based on the determined visual field region 23, and transmits the visual field image 26 to the HMD 110 in step S76.
  • the HMD 110 updates the view image 26 by displaying the received view image 26 on the display 112. Thereby, the view image 26 after rotating the virtual camera 1 for a certain time is updated to the view image 26 after further rotating the virtual camera 1.
  • FIG. 25 is a diagram showing the virtual space 2 and the view field image 26 after the virtual camera 1 is further rotated.
  • FIG. 25A shows the virtual space 2
  • FIG. 25B shows the field-of-view image 26.
  • the virtual camera 1 further rotates counterclockwise, the position of the visual field region 23 is further shifted counterclockwise in the virtual space 2.
  • the visual field region 23 is arranged at a position including the thumbnail SN1, the thumbnail SN3, and the thumbnail SN4 in the virtual space 2. Since the virtual camera 1 is rotating, the operation object 28 is still stretched.
  • The view image 26 based on the field-of-view area 23 after the virtual camera 1 has further rotated includes the thumbnails SN1, SN3, and SN4. Therefore, the user can grasp that the thumbnail SN4 has newly entered the field of view while the virtual camera 1 is rotating.
  • A case where an operation for stopping the rotation of the virtual camera 1 is performed on the operation object 28 at this point will now be described.
  • the operation object control unit 236 detects that the designation of the extension direction and extension amount to the operation object 28 by the user has been cancelled. As a result, the operation object control unit 236 returns the shape of the operation object 28 to the original circle in S79. In step S80, the operation object control unit 236 cancels the selection of the operation object 28.
  • the virtual camera control unit 221 stops the rotation of the virtual camera 1.
  • the control circuit unit 200 shifts to the normal mode.
  • the visual field region determination unit 222 identifies the visual field region 23 based on the orientation of the virtual camera 1 at the time when the rotation of the virtual camera 1 stops.
  • the view image generation unit 223 generates a view image 26 based on the view region 23 in S83, and outputs the view image 26 to the HMD 110 in S84.
  • the HMD 110 updates the view image 26 by displaying the received view image 26 on the display 112.
  • FIG. 26 is a diagram illustrating the virtual space 2 and the view image 26 after the rotation of the virtual camera 1 is stopped.
  • FIG. 26A shows the virtual space 2
  • FIG. 26B shows the view field image 26.
  • FIG. 26 shows an example in which the rotation of the virtual camera 1 is stopped immediately after the virtual camera 1 is rotated to the state shown in FIG. After the rotation of the virtual camera 1 is stopped, the shape of the operation object 28 in the virtual space 2 returns to the original circle. Correspondingly, the shape of the operation object 28 included in the view field image 26 is also returned to the original circle.
  • the operation object 28 is not selected, and the control circuit unit 200 operates in the normal mode. Therefore, the user can select a desired thumbnail in the view field image 26 by moving the line of sight as usual.
  • the field-of-view image 26 shown in (b) of FIG. 26 includes a thumbnail SN4 that is located away from the thumbnail SN1 in the virtual space 2. Therefore, the user can easily put the thumbnail SN4 that could not be visually recognized before the rotation of the virtual camera 1 into his field of view by a simple operation. Further, the user can start viewing the moving image content corresponding to the thumbnail SN4 by gazing at the thumbnail SN4.
  • the HMD system 100 can enhance the user's immersion in the virtual space 2 in the normal mode. Further, in the rotation mode, the user can visually recognize a desired location in the virtual space 2 with a simpler operation than in the normal mode. Since the HMD system 100 supports both the normal mode and the rotation mode, the operability in the virtual space 2 can be further improved while increasing the user's immersion in the virtual space 2. Further, since each thumbnail can be arranged in a wide range in the virtual space 2, the virtual space 2 can be used more effectively.
  • the virtual space defining unit 231 rotates the virtual space 2 by rotating the XYZ coordinate system of the virtual space 2 in the global coordinate system.
  • Specifically, the data defining the XYZ coordinate system is updated to data defining the rotated XYZ coordinate system. Since the position of each mesh in the virtual space 2 is defined in the management data of that mesh as coordinates in the XYZ coordinate system, each mesh likewise rotates in the global coordinate system when the XYZ coordinate system rotates in the global coordinate system.
  • the positions of the thumbnails SN1 to SN4 arranged in the virtual space 2 are also defined in the management data of each thumbnail as coordinates in the XYZ coordinate system. Therefore, when the XYZ coordinate system rotates in the global coordinate system, the thumbnails SN1 to SN4 rotate in the same manner in the global coordinate system.
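  • Because every mesh and thumbnail position is stored as XYZ coordinates, rotating the XYZ coordinate system rotates them all together; a minimal sketch with points as (x, y, z) tuples:

```python
import math

def rotate_about_global_y(points, angle_rad):
    """Rotate the virtual space 2 by rotating its XYZ coordinate system
    about the global Y axis; meshes and thumbnails ride along."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]

meshes_and_thumbs = [(1.0, 0.0, 0.0), (0.0, 0.5, 1.0)]
rotated = rotate_about_global_y(meshes_and_thumbs, math.radians(90))
```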
  • the controller 300 may include a right controller that the user has with the right hand and a left controller that the user has with the left hand.
  • The control circuit unit 200 can generate the user's virtual right hand in the virtual space 2 based on the detected values of the position and inclination of the right controller and the detection result of the user's pressing operations on the buttons of the right controller.
  • Similarly, the control circuit unit 200 can generate the user's virtual left hand in the virtual space 2 based on the detected values of the position and inclination of the left controller and the detection result of the user's pressing operations on the buttons of the left controller.
  • the virtual right hand and the virtual left hand are both objects.
  • the user can select or operate the operation object 28 by acting on the operation object 28 with the virtual right hand or the virtual left hand.
  • the operation object control unit 236 detects that the operation object 28 has been selected by the user. Thereafter, the user performs an operation for pinching the operation object 28 with the virtual right hand on the controller 300 (for example, pressing any button of the right controller) and drags the operation object 28 in the predetermined direction with the virtual right hand.
  • the operation object control unit 236 specifies the extension direction and the extension amount of the operation object 28 based on the detected values of the position of the right controller before and after the right hand is moved.
  • the extension method of the operation object 28 and the rotation method of the virtual camera 1 (or the virtual space 2) after the extension direction and the extension amount are determined are the same as in the above-described example.
  • the user can rotate the virtual camera 1 or the virtual space 2 in a desired direction by operating the right controller without moving the HMD 110 and without changing the line of sight.
  • Depending on the direction of the drag, the virtual camera 1 can be rotated clockwise (the virtual space 2 counterclockwise) or counterclockwise (the virtual space 2 clockwise). In these cases, the user can rotate the virtual camera 1 or the virtual space 2 more intuitively.
  • a functional partition may be defined at any position on the celestial sphere constituting the virtual space 2 where the platform is developed.
  • the functional partition is an area to which a predetermined function can be assigned.
  • a function for detecting whether or not the user's line of sight (viewing direction or line-of-sight direction N0) hits the functional section is assigned to the functional section.
  • a function for detecting the time when the user's line of sight hits the function section is also assigned to the function section.
  • When the line-of-sight management unit 232 detects that the user's line of sight (the reference line of sight 5 or the line-of-sight direction N0) has hit the functional section in the normal mode for a predetermined time or more, it detects that the user has selected the functional section. Thereby, the control circuit unit 200 shifts to the rotation mode. Thereafter, the control circuit unit 200 starts to rotate the virtual camera 1 or the virtual space 2 based on some operation by the user. For example, the control circuit unit 200 determines whether or not the user's line of sight has deviated from the functional section that the user selected by gazing at it for the specified time or longer.
  • If the line of sight has deviated, the control circuit unit 200 identifies the direction in which it deviated, specifies that direction as the movement direction of the line of sight, and rotates the virtual camera 1 or the virtual space 2 in a rotation direction based on the specified movement direction. If the functional partition is used in this way, the virtual camera 1 or the virtual space 2 can be rotated based on the user's operation even when the virtual space 2 provided to the user is one in which the operation object 28 cannot be defined. A minimal sketch of this dwell-based selection follows.
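  • A minimal sketch of the dwell-based selection and of deriving a rotation direction from the gaze's departure; the rectangle model and the 2-second threshold are illustrative assumptions only.

```python
def update_dwell(inside_partition, dwell, dt, threshold=2.0):
    """Accumulate gaze dwell time on the functional partition; once it
    reaches the threshold, the partition counts as selected (shift to
    the rotation mode)."""
    dwell = dwell + dt if inside_partition else 0.0
    return dwell, dwell >= threshold

def departure_direction(gaze_x, partition_rect):
    """After selection, the side on which the line of sight leaves the
    partition (reduced here to left/right of its center) gives the
    rotation direction."""
    x, _, w, _ = partition_rect
    return "left" if gaze_x < x + w / 2 else "right"
```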
  • The control blocks of the control circuit unit 200 (the detection unit 210, the display control unit 220, the virtual space control unit 230, the storage unit 240, and the communication unit 250) may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or by software using a CPU (Central Processing Unit).
  • In the latter case, the control blocks comprise a CPU that executes the instructions of a program, which is software realizing each function, a ROM (Read Only Memory) or storage device (these are referred to as "recording media") in which the program and various data are recorded so as to be readable by the computer (or CPU), and a RAM (Random Access Memory) into which the program is expanded.
  • The object of the present invention is achieved when a computer (or CPU) reads the program from the recording medium and executes it.
  • As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
  • the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) that can transmit the program.
  • The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • (Item 1) A method for providing a virtual space to a user wearing a head-mounted display (hereinafter, HMD), comprising the steps of: defining the virtual space; generating composite content by combining moving image content to be reproduced in the virtual space with sub-content to be displayed in a part of the display area of the moving image content; adapting the composite content to the virtual space; identifying the user's line of sight; identifying a visual field area based on the line of sight; and generating a visual field image corresponding to the visual field area from the composite content and outputting the visual field image to the HMD.
  • (Item 3) The method further including a step of acquiring advertising space information indicating a spatial position and a temporal position of the section associated with the moving image content, wherein the moving image content and the sub-content are combined based on the advertising space information. The creator of the video content can specify a spatial position and a temporal position at which displaying an advertisement would not look unnatural in the video displayed in the video content, so that advertisements can be displayed without impairing the user's sense of immersion.
  • (Item 5) The method according to item 4, further comprising the steps of: displaying, in conformity with the virtual space, a content list indicating the contents of a plurality of candidate moving image contents; and specifying the candidate moving image content selected by the user from the content list as the moving image content, wherein, in two or more of the candidate moving image contents included in the plurality of candidate moving image contents, the sub-content is combined so as to be included at the same spatial position defined when the moving image content is specified. Even if there are a plurality of moving image contents, the degree of user interest in the advertisement can be evaluated efficiently.
  • (Item 7) The method further comprising the steps of: displaying, in conformity with the virtual space, a content list indicating the contents of a plurality of candidate moving image contents; and specifying the candidate moving image content selected by the user from the content list as the moving image content. The user can select desired moving image content from among a plurality of moving image contents.
  • The method of item 2 or 3, wherein the moving image content is associated with frame information including a spatial position and a temporal position for displaying the second sub-content, the method further comprising a step of acquiring the frame information, and the second sub-content is displayed based on the frame information. The creator of the video content can specify a portion at which displaying the second sub-content would not look unnatural in the video, so that content for guiding the line of sight to the advertisement can be displayed without impairing the user's sense of immersion.
  • (Item 2) The method comprising the steps of: defining the virtual space having a spatial coordinate system parallel to the reference coordinate system; arranging the virtual camera in the virtual space; and setting the orientation of the virtual camera in the virtual space by tilting the virtual camera relative to the HMD in the reference coordinate system based on the inclination correction value defined in advance for the content. There is no need to tilt the spatial coordinate system of the virtual space with respect to the reference coordinate system.
  • (Item 3) The method of item 1, comprising the steps of: defining the virtual space having a spatial coordinate system tilted relative to the reference coordinate system based on the tilt correction value defined in advance for the content; arranging the virtual camera in the virtual space; and setting the orientation of the virtual camera in the virtual space by matching the inclination of the virtual camera in the reference coordinate system with the inclination of the HMD. There is no need to tilt the virtual camera relative to the HMD.
  • (Item 2) In the step of displaying the view image on the HMD, the view image including the operation object arranged in the virtual space is displayed on the HMD, and the operation object is selected by the user in the first mode.
  • (Item 4) The method according to item 3, wherein, in the step of determining the rotation direction, the rotation direction is determined based on the horizontal component of the predetermined direction. This can prevent the user from suffering motion sickness in the virtual space.
  • The method further includes a step of specifying the amount of change in the inclination of the HMD in the reference coordinate system, and in the step of specifying the predetermined amount, the predetermined amount is specified based on the amount of change in the inclination. The user can adjust the rotation speed of the virtual camera or the virtual space by changing the degree to which the HMD is tilted.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to the present invention, when video content is reproduced using a head-mounted display, other content such as an advertisement is also displayed. This method for providing a virtual space comprises the steps of: defining a virtual space; generating combined content by combining video content to be reproduced in the virtual space with sub-content to be displayed in a part of a display region of the video content; adapting the combined content to the virtual space; identifying a line of sight of a user; identifying a field-of-view region on the basis of the line of sight; and generating a field-of-view image corresponding to the field-of-view region from the combined content and outputting the generated field-of-view image to the head-mounted display (HMD).
PCT/JP2017/017878 2016-05-17 2017-05-11 Procédé destiné à fournir un espace virtuel, un programme et un support d'enregistrement WO2017199848A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2016099088A JP6262283B2 (ja) 2016-05-17 2016-05-17 仮想空間を提供する方法、プログラム、および記録媒体
JP2016-099108 2016-05-17
JP2016099108A JP6126271B1 (ja) 2016-05-17 2016-05-17 仮想空間を提供する方法、プログラム及び記録媒体
JP2016-099088 2016-05-17
JP2016-099119 2016-05-17
JP2016099119A JP6126272B1 (ja) 2016-05-17 2016-05-17 仮想空間を提供する方法、プログラム及び記録媒体

Publications (1)

Publication Number Publication Date
WO2017199848A1 true WO2017199848A1 (fr) 2017-11-23

Family

ID=60326322

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/017878 WO2017199848A1 (fr) 2016-05-17 2017-05-11 Procédé destiné à fournir un espace virtuel, un programme et un support d'enregistrement

Country Status (1)

Country Link
WO (1) WO2017199848A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020054456A1 (fr) * 2018-09-14 2020-03-19 ソニー株式会社 Dispositif de commande d'affichage et procédé de commande d'affichage, et programme

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002271693A (ja) * 2001-03-13 2002-09-20 Canon Inc 画像処理装置、画像処理方法、及び制御プログラム
JP2005173042A (ja) * 2003-12-09 2005-06-30 Canon Inc 携帯型情報装置及びその広告表示方法
JP2012048597A (ja) * 2010-08-30 2012-03-08 Univ Of Tokyo 複合現実感表示システム、画像提供画像提供サーバ、表示装置及び表示プログラム
JP2012168798A (ja) * 2011-02-15 2012-09-06 Sony Corp 情報処理装置、オーサリング方法及びプログラム
JP5777185B1 (ja) * 2014-05-16 2015-09-09 株式会社ユニモト 全周動画配信システム、全周動画配信方法、通信端末装置およびそれらの制御方法と制御プログラム
JP5914739B1 (ja) * 2015-08-27 2016-05-11 株式会社コロプラ ヘッドマウントディスプレイシステムを制御するプログラム

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002271693A (ja) * 2001-03-13 2002-09-20 Canon Inc 画像処理装置、画像処理方法、及び制御プログラム
JP2005173042A (ja) * 2003-12-09 2005-06-30 Canon Inc 携帯型情報装置及びその広告表示方法
JP2012048597A (ja) * 2010-08-30 2012-03-08 Univ Of Tokyo 複合現実感表示システム、画像提供画像提供サーバ、表示装置及び表示プログラム
JP2012168798A (ja) * 2011-02-15 2012-09-06 Sony Corp 情報処理装置、オーサリング方法及びプログラム
JP5777185B1 (ja) * 2014-05-16 2015-09-09 株式会社ユニモト 全周動画配信システム、全周動画配信方法、通信端末装置およびそれらの制御方法と制御プログラム
JP5914739B1 (ja) * 2015-08-27 2016-05-11 株式会社コロプラ ヘッドマウントディスプレイシステムを制御するプログラム

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020054456A1 (fr) * 2018-09-14 2020-03-19 ソニー株式会社 Dispositif de commande d'affichage et procédé de commande d'affichage, et programme

Similar Documents

Publication Publication Date Title
US10539797B2 (en) Method of providing virtual space, program therefor, and recording medium
JP6126271B1 (ja) 仮想空間を提供する方法、プログラム及び記録媒体
JP6511386B2 (ja) 情報処理装置および画像生成方法
JP7042644B2 (ja) 情報処理装置、画像生成方法およびコンピュータプログラム
US20220006973A1 (en) Placement of virtual content in environments with a plurality of physical participants
CN111684393A (zh) 在虚拟、增强或混合现实环境中生成和显示3d视频的方法和系统
US12022357B1 (en) Content presentation and layering across multiple devices
JP6523493B1 (ja) プログラム、情報処理装置、及び情報処理方法
JP6130478B1 (ja) プログラム及びコンピュータ
JP6126272B1 (ja) 仮想空間を提供する方法、プログラム及び記録媒体
JP2019139673A (ja) 情報処理装置、情報処理方法およびコンピュータプログラム
JP6306083B2 (ja) 仮想空間を提供する方法、プログラム、および記録媒体
US20230412897A1 (en) Video distribution system for live distributing video containing animation of character object generated based on motion of actors
JP6262283B2 (ja) 仮想空間を提供する方法、プログラム、および記録媒体
JP2017208808A (ja) 仮想空間を提供する方法、プログラム及び記録媒体
JP2017121082A (ja) プログラム及びコンピュータ
JP6223614B1 (ja) 情報処理方法、情報処理プログラム、情報処理システム及び情報処理装置
CN113574849A (zh) 用于后续对象检测的对象扫描
WO2017199848A1 (fr) Procédé destiné à fournir un espace virtuel, un programme et un support d'enregistrement
JP2017228322A (ja) 仮想空間を提供する方法、プログラム、および記録媒体
JP2023065528A (ja) ヘッドマウント情報処理装置およびヘッドマウントディスプレイシステム
JP2017208809A (ja) 仮想空間を提供する方法、プログラム及び記録媒体
JP2017201524A (ja) 仮想空間を提供する方法、プログラム、および記録媒体
JP7030075B2 (ja) プログラム、情報処理装置、及び情報処理方法
GB2566758A (en) Motion signal generation

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17799272

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17799272

Country of ref document: EP

Kind code of ref document: A1