WO2019230437A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2019230437A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
sight
line
viewpoint
wraparound
Prior art date
Application number
PCT/JP2019/019622
Other languages
French (fr)
Japanese (ja)
Inventor
安田 亮平
石川 毅
高橋 慧
孝悌 清水
惇一 清水
Original Assignee
ソニー株式会社
Priority date
Filing date
Publication date
Application filed by ソニー株式会社
Publication of WO2019230437A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory

Definitions

  • the present technology relates to an information processing device, an information processing method, and a program, and more particularly, to an information processing device, an information processing method, and a program that enable smoother movement in a virtual space.
  • Patent Document 1 discloses an image generation apparatus that can move a user's viewpoint with respect to a wide viewing angle image such as a panoramic image when the user moves through a panoramic space (virtual space) with a head-mounted display.
  • This technology has been made in view of such a situation, and makes it possible to move through the virtual space more smoothly.
  • An information processing apparatus includes a movement information acquisition unit that acquires movement information that moves a user's viewpoint in a virtual space, and a direction information acquisition unit that acquires direction information corresponding to the direction of the user's line of sight.
  • the information processing method and program according to one aspect of the present technology are an information processing method and program corresponding to the information processing apparatus according to one aspect of the present technology described above.
  • in one aspect of the present technology, movement information for moving the viewpoint of the user in the virtual space is acquired, direction information corresponding to the direction of the line of sight of the user is acquired, the user's viewpoint is moved in the direction of the line of sight based on at least one of the acquired movement information and direction information, and the display device is controlled so as to present the user with an image visually recognized by the user by changing the direction of the line of sight.
  • the information processing apparatus may be an independent apparatus, or may be an internal block constituting one apparatus.
  • as a result, movement in the virtual space can be performed more smoothly.
  • FIG. 1 is a diagram illustrating an example of the appearance of a head mounted display to which the present technology is applied.
  • the head-mounted display 1 is an information processing apparatus that is worn on the head so as to cover both eyes of the user and that can be used for viewing still images, moving images, and the like displayed on a display screen provided in front of the user's eyes.
  • the head mounted display 1 has a main body portion 11 and a head contact portion 12.
  • the main body 11 is provided with a display, various sensors, a communication module, and the like.
  • the head contact portion 12 fixes the main body portion 11 so as to contact the user's head and cover both eyes.
  • FIG. 2 is a block diagram illustrating an example of a configuration of an embodiment of a head mounted display to which the present technology is applied.
  • the head mounted display 1 includes a control unit 100, a sensor unit 101, a storage unit 102, a display unit 103, an audio output unit 104, an input terminal 105, an output terminal 106, and a communication unit 107.
  • the control unit 100 includes, for example, a CPU (Central Processing Unit).
  • the control unit 100 is a central processing device that controls the operation of each unit and performs various arithmetic processes.
  • a dedicated processor such as a GPU (Graphics Processing Unit) may be provided.
  • the sensor unit 101 includes, for example, various sensor devices.
  • the sensor unit 101 performs sensing of the user and its surroundings, and supplies sensor data corresponding to the sensing result to the control unit 100.
  • the sensor unit 101 includes, for example, a magnetic sensor that detects the magnitude and direction of a magnetic field, an acceleration sensor that detects acceleration, a gyro sensor that detects angle (attitude), angular velocity, and angular acceleration, and a proximity sensor that detects a nearby object.
  • the user's gesture or the like may be detected by an inertial measurement unit (IMU) that obtains three-dimensional angular velocity and acceleration.
  • a camera having an image sensor may be provided as the sensor unit 101, and image data obtained by imaging a subject may be supplied to the control unit 100.
  • the sensor unit 101 can also include sensors for measuring the surrounding environment, such as a temperature sensor that detects temperature, a humidity sensor that detects humidity, and an ambient light sensor that detects ambient brightness, a biosensor that detects biometric information such as breathing, pulse, fingerprint, and iris, and a sensor that detects position information such as a GPS (Global Positioning System) signal.
  • the storage unit 102 includes, for example, a semiconductor memory including a nonvolatile memory and a volatile memory.
  • the storage unit 102 stores various data according to control from the control unit 100.
  • the display unit 103 includes, for example, a display device such as a liquid crystal display (LCD) or an organic EL display (OLED: Organic Light Emitting Diode).
  • the display unit 103 displays a video (or image) corresponding to the video data (or image data) supplied from the control unit 100.
  • the audio output unit 104 includes an audio output device such as a speaker.
  • the sound output unit 104 outputs sound corresponding to the sound data supplied from the control unit 100.
  • the input terminal 105 includes, for example, an input interface circuit, and is connected to an electronic device via a predetermined cable.
  • the input terminal 105 supplies the control unit 100 with video data, audio data, commands, and the like input from devices such as a game machine (dedicated console), a personal computer, and a playback machine, for example.
  • the output terminal 106 includes, for example, an output interface circuit and the like, and is connected to an electronic device via a predetermined cable.
  • the output terminal 106 outputs audio data supplied thereto to a device such as an earphone or a headphone via a cable.
  • the communication unit 107 is configured as a communication module that supports wireless communication such as Bluetooth (registered trademark), wireless LAN (Local Area Network), or cellular communication (for example, LTE-Advanced or 5G), or wired communication.
  • the communication unit 107 communicates with an external device according to a predetermined communication method, and exchanges various data (for example, video data, audio data, commands, etc.).
  • the external devices here include, for example, a game machine (dedicated console), a personal computer, a server, a player, a dedicated controller, a remote controller, and the like.
  • FIG. 3 is a block diagram illustrating an example of a functional configuration of the control unit 100 of FIG.
  • the control unit 100 includes a movement information acquisition unit 151, a direction information acquisition unit 152, a wraparound destination control unit 153, a viewpoint camera position / posture control unit 154, and a display control unit 155.
  • the movement information acquisition unit 151 acquires movement information based on data supplied from the sensor unit 101, the input terminal 105, the communication unit 107, and the like, and supplies the movement information to the wraparound destination control unit 153.
  • the movement information is information for moving the viewpoint of the user in the virtual space. That is, the movement information is obtained based on some input for determining the movement of the user in the virtual space, and includes, for example, the user's operation (for example, a controller operation) or action (for example, walking or a gesture), or the detection result of the dwell time of the user's line of sight (for example, the gaze time on a predetermined area).
  • the movement information may include information indicating that the user has stopped.
  • the direction information acquisition unit 152 acquires direction information based on data supplied from the sensor unit 101, the input terminal 105, the communication unit 107, or the like, and supplies the direction information to the wraparound destination control unit 153.
  • the direction information is information corresponding to the direction of the user's line of sight.
  • the direction information is obtained based on some input for determining the user's line of sight, for example, the user's line of sight or face orientation, the user's operation (for example, an operation of a pointing device such as a dedicated controller or remote controller), or the user's action (for example, a gesture such as pointing).
  • the wraparound destination control unit 153 controls the wraparound destination of the user's viewpoint (position) with respect to the obstacle object based on at least one of the movement information from the movement information acquisition unit 151 and the direction information from the direction information acquisition unit 152.
  • the viewpoint camera position / posture control unit 154 controls the position and posture of the viewpoint camera according to the control of the wraparound destination by the wraparound destination control unit 153.
  • the display control unit 155 controls the display of the video (or image) and displays it on the display unit 103 according to the control of the position and orientation of the viewpoint camera by the viewpoint camera position / orientation control unit 154.
  • the control unit 100 is configured as one or a plurality of CPUs, and at least one CPU has the functions of the movement information acquisition unit 151, the direction information acquisition unit 152, the wraparound destination control unit 153, the viewpoint camera position / posture control unit 154, and the display control unit 155. These functions may be realized in a single device, or may be realized by a plurality of devices (CPUs) in cooperation rather than being closed within a single device.
  • the present technology moves the user's viewpoint so that the area around the obstacle object is easy to see, and adjusts the rotation accordingly.
  • the gaze area 200 is a peripheral area of the obstacle object 30 at the position of the line of sight L S of the user 20.
  • here, a case where the edge of the obstacle object 30 is included in the peripheral area is illustrated, but the edge may not be included.
  • the position (viewpoint) of the user 20 moves to the position around the obstacle object 30 (instantaneous movement) as shown in FIG.
  • adjustment may be made so that the direction of the line of sight of the user 20 is parallel to the front surface (the surface on the back side of B in FIG. 5) (B in FIG. 5).
  • when the obstacle object 30 is like a thin plate as shown in FIG. 5B, there is a great advantage in moving the position (viewpoint) of the user 20 around to the back surface of the obstacle object 30.
  • the position (viewpoint) of the user 20 can be adjusted to be a position separated by a predetermined distance from the parallel plane, for example.
  • when the position of the line of sight L S of the user 20 is in the peripheral area (for example, an edge) of the obstacle object 30 arranged in the virtual space, the position (viewpoint) of the user 20 is moved in the direction of the line of sight L S and the direction of the line of sight is changed, so that an image visually recognized by the user 20 is presented to the user 20.
  • for example, by moving the position (viewpoint) of the user 20 to a position in the vicinity of the side surface (A in FIG. 5) or the back surface (B in FIG. 5) with respect to the user-side surface of the obstacle object 30, the viewpoint is moved to a position where the subsequent movement of the viewpoint of the user 20 can be reduced, so that movement in the virtual space can be performed more smoothly.
  • the position (viewpoint) of the user 20 after movement when the conventional method is used is indicated by a dotted line; in the conventional method, the viewpoint of the user 20 does not wrap around the obstacle object 30, so after moving once near the edge of the obstacle object 30, the user 20 needs to perform further operations to move and change direction.
  • in contrast, since the position (viewpoint) of the user 20 is moved to a position where the movement of the viewpoint of the user 20 can be reduced, the moving operation for turning around the obstacle object 30 is reduced compared with the conventional method, and movement in the virtual space can be performed more smoothly.
  • position of user 20 includes the meaning of “viewpoint of user 20” in the virtual space.
  • the virtual space includes a virtual space imitating reality constructed on a computer and a virtual space in reality.
  • when the position of the line of sight L S of the user 20 is on another object (for example, an effect object) that emphasizes a specific position in the virtual space, the position (viewpoint) of the user 20 is moved in the direction of the line of sight L S , and the image visually recognized by the user 20 while maintaining the direction of the line of sight can be presented to the user 20.
  • the effect object is not the obstacle object 30 such as the wall described above, but an object floating in the space, for example.
  • This effect object is presented, for example, when the position of the line of sight L S of the user 20 is not in the peripheral area (for example, edge) of the obstacle object 30 arranged in the virtual space.
  • step S11 the movement information acquisition unit 151 acquires movement information.
  • step S12 the direction information acquisition unit 152 acquires direction information.
  • when acquiring the direction information, the direction information is generated based on data from the sensor unit 101, for example, a sensor that detects the line of sight of the user 20, a pointing input from a dedicated controller, or various sensors that detect a gesture of the user 20.
  • as a method for detecting the line of sight of the user 20, for example, line-of-sight estimation that captures an image of the eyeball and acquires image data of a Purkinje image, or a method of estimating the direction of the apparatus as the direction of the line of sight can be used.
  • step S13 the wraparound destination control unit 153 determines whether there is a moving operation based on the acquired movement information.
  • if it is determined in step S13 that there is no moving operation, the process returns to step S11, and the processes of steps S11 to S13 are repeated. On the other hand, if it is determined that there is a moving operation, the process proceeds to step S14.
  • step S14 the wraparound destination control unit 153 determines whether or not the end (edge) of the obstacle object 30 exists in the vicinity of the gazing point by the user 20 based on the acquired direction information.
  • if it is determined in the determination process in step S14 that only one edge exists in the vicinity of the gazing point, the process proceeds to step S15.
  • step S15 the wraparound destination control unit 153 executes a single edge wraparound destination determination process.
  • in this single edge wraparound destination determination process, a process for determining the wraparound destination of the position (viewpoint) of the user 20 with respect to the obstacle object 30 when one edge exists in the vicinity of the gazing point is executed. Details of the single edge wraparound destination determination process will be described later with reference to FIGS.
  • if it is determined in step S14 that there is no edge near the gazing point, the process proceeds to step S16.
  • step S16 the wraparound destination control unit 153 performs a wraparound destination determination process from the obstacle upper end point.
  • when it is determined in step S14 that there are a plurality of edges in the vicinity of the gazing point, the process proceeds to step S17.
  • step S17 the wraparound destination control unit 153 performs a wraparound destination determination process for multiple edges.
  • in this multiple edge wraparound destination determination process, a process for determining the wraparound destination of the position (viewpoint) of the user 20 with respect to the obstacle object 30 when there are a plurality of edges near the gazing point is executed.
  • the details of the multi-edge wraparound destination determination process will be described later with reference to FIGS. 32 to 38.
  • when the process in any of steps S15 to S17 ends, the process proceeds to step S18.
  • step S18 the viewpoint camera position / posture control unit 154 moves the position (viewpoint) of the user 20 to the wraparound position and changes its direction based on the result of the wraparound destination determination process by the wraparound destination control unit 153 (the result of any one of steps S15 to S17).
  • step S19 the display control unit 155 displays the wraparound video on the display unit 103 based on the result of the process in step S18. That is, when the position of the line of sight L S of the user 20 is in a peripheral area (for example, an edge) of the obstacle object 30 arranged in the virtual space, the display control unit 155 moves the position (viewpoint) of the user 20 in the direction of the line of sight L S based on at least one of the movement information and the direction information, and presents the user 20 with an image visually recognized by changing the direction of the line of sight.
  • when the process in step S19 is completed, the process returns to step S11, and the subsequent processes are repeated.
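  • The flow of steps S11 to S19 above can be read as the following loop. The sketch below is only a schematic reading of the flowchart in Python; every function name, the edge-counting helper, and the returned data shapes are assumptions for illustration and do not appear in the disclosure.

```python
def wraparound_loop(acquire_movement, acquire_direction, find_edges_near_gaze,
                    single_edge_dest, top_point_dest, multi_edge_dest,
                    move_viewpoint, render):
    """Schematic of steps S11-S19; all callables are assumed helpers, not from the disclosure."""
    while True:
        movement = acquire_movement()            # S11: movement information (controller, walking, dwell time, ...)
        direction = acquire_direction()          # S12: direction information (line of sight, face orientation, ...)

        if not movement.is_moving:               # S13: no moving operation, keep polling
            continue

        edges = find_edges_near_gaze(direction)  # S14: edges near the gazing point
        if len(edges) == 1:
            dest = single_edge_dest(movement, direction, edges[0])   # S15: single edge process
        elif len(edges) == 0:
            dest = top_point_dest(movement, direction)               # S16: process from the obstacle upper end point
        else:
            dest = multi_edge_dest(movement, direction, edges)       # S17: multiple edge process

        move_viewpoint(dest.position, dest.direction)                # S18: move and re-orient the viewpoint camera
        render()                                                     # S19: display the wraparound video
```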
  • FIG. 7 is a flowchart for explaining the flow of processing when determining the movement destination according to the quadrant including the current user position as a first example of the single edge wraparound destination determination processing.
  • step S101 the wraparound destination control unit 153 generates an edge gaze detection virtual object.
  • This edge gaze detection virtual object is a virtual object generated to detect the area being watched by the user 20 when the line of sight of the user 20 faces the end (edge) of the obstacle object 30 such as a wall.
  • step S102 the wraparound destination control unit 153 identifies which quadrant on the XZ plane (first to fourth quadrant on the XZ plane) of the generated edge gaze detection virtual object contains the current position of the user 20.
  • step S103 the wraparound destination control unit 153 determines the position and direction of the movement destination of the user 20 according to the quadrant (one of the first to fourth quadrants on the XZ plane) that contains the identified current position of the user 20. The position (viewpoint) of the movement destination determined in this way is set as a position wrapping around the obstacle object 30 such as a wall.
  • when the process in step S103 is completed, the process returns to step S15 in FIG. 6, and the subsequent processes are executed.
  • FIG. 8 is a diagram showing the relationship between the position before movement of the user 20 and the position after movement when the edge of the obstacle object 30 such as a wall is used as a reference.
  • the edge gaze detection virtual object 300 is virtually generated when the line of sight of the user 20 faces the edge of the obstacle object 30, and a 3D coordinate system (left-handed coordinate system) whose XZ plane contains the object is defined.
  • when the position of the user 20 is in the first quadrant, the movement destination position P M is on the negative X axis, and the direction D M is an obliquely lower-left direction.
  • when the position of the user 20 is in the second quadrant, the movement destination position P M is on the positive X axis, and the direction D M is an obliquely lower-right direction.
  • when the position of the user 20 is in the third quadrant, the movement destination position P M is on the positive Z axis, and the direction D M is an obliquely upper-right direction.
  • when the position of the user 20 is in the fourth quadrant, the movement destination position P M is on the positive Z axis, and the direction D M is an obliquely upper-left direction.
  • in this way, the movement destination position P M and its direction D M are determined depending on which quadrant on the XZ plane the user 20 is located in.
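  • A minimal sketch of this quadrant-based selection is shown below. The coordinate convention, the offset distance, and the angle values are assumptions chosen only to mirror the four cases above; they are not values from the disclosure.

```python
import math

def quadrant_destination(user_xz, edge_xz, offset=1.0):
    """Pick a wraparound destination from the user's quadrant on the XZ plane.

    user_xz, edge_xz: (x, z) positions, with the edge gaze detection virtual object
    taken as the local origin. offset is an assumed distance from the edge.
    Returns (destination_xz, direction_angle_rad)."""
    dx = user_xz[0] - edge_xz[0]
    dz = user_xz[1] - edge_xz[1]

    if dx >= 0 and dz >= 0:      # first quadrant: destination on the negative X axis
        dest = (edge_xz[0] - offset, edge_xz[1])
        angle = math.radians(225)              # obliquely lower-left
    elif dx < 0 and dz >= 0:     # second quadrant: destination on the positive X axis
        dest = (edge_xz[0] + offset, edge_xz[1])
        angle = math.radians(315)              # obliquely lower-right
    elif dx < 0 and dz < 0:      # third quadrant: destination on the positive Z axis
        dest = (edge_xz[0], edge_xz[1] + offset)
        angle = math.radians(45)               # obliquely upper-right
    else:                        # fourth quadrant: destination on the positive Z axis
        dest = (edge_xz[0], edge_xz[1] + offset)
        angle = math.radians(135)              # obliquely upper-left
    return dest, angle

# example: a user standing in the third quadrant relative to the gazed edge
print(quadrant_destination(user_xz=(-2.0, -1.0), edge_xz=(0.0, 0.0)))
```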
  • for example, when the position of the user 20 with respect to the edge gaze detection virtual object 300 exists in the third quadrant on the XZ plane (S102 in FIG. 7), the movement destination position P M of the user 20 (the wraparound position) is determined to be on the positive Z axis, and the direction D M is an obliquely upper-right direction (S103 in FIG. 7). An example in which actual video is displayed using this wraparound destination determination method is shown in FIGS. 10 to 13.
  • FIGS. 10 to 13 show a CG image of a virtual model room as an example of an image displayed on the display unit 103 of the head mounted display 1 mounted on the head of the user 20.
  • in this virtual model room, the user can walk around in the virtual space created by CG (Computer Graphics) and feel as if actually being on the spot.
  • the walls that partition the rooms of the model room correspond to the obstacle object 30.
  • FIG. 10 is a diagram illustrating a first example of a CG image of a model room.
  • FIG. 10A shows a state where the user 20 is watching the edge of the wall separating the kitchen and the hallway when the user 20 is in the kitchen.
  • then, as shown in B of FIG. 10, the position of the user 20 is moved to a position around the wall, and an image is displayed in which the direction of the line of sight is directed toward the end of the corridor after wrapping around. That is, when the gaze area 200 is directed to the edge of the wall by the user 20, the user 20 automatically moves (instantaneously moves) from the kitchen to a position around the wall.
  • FIG. 11 is a diagram showing a second example of the CG video of the model room.
  • the hallway shown in FIG. 11 is the same as the hallway shown in FIG.
  • FIG. 11A shows a state in which, when the user 20 is in the hallway, the edge of the wall separating the hallway and the bathroom is watched, and the edge becomes the gaze area 200.
  • in FIG. 11B, the position of the user 20 is moved to a position around the wall, and an image in which the direction of the line of sight is directed to the bathroom beyond is displayed. That is, the user 20 automatically moves (instantaneously moves) from the hallway to a position around the wall.
  • FIG. 12 is a diagram showing a third example of the CG video of the model room.
  • FIG. 12A shows a state in which, when the user 20 is in the kitchen, the edge of the wall separating the hallway and the adjacent room is watched, and the edge becomes the watched area 200.
  • in FIG. 12B, the position of the user 20 is moved to the position around the wall, and an image in which the direction of the line of sight is directed to the adjacent room is displayed. That is, the user 20 automatically moves (instantaneously moves) from the kitchen to the position around the wall.
  • FIG. 13 is a diagram showing a fourth example of the CG video of the model room.
  • the adjacent room shown in FIG. 13 is the same as the adjacent room shown in FIG.
  • FIG. 13A shows a state in which, when the user 20 is in the next room, the edge of the wall partitioning the next room and the bedroom is watched, and the edge becomes the gaze area 200.
  • in FIG. 13B, the position of the user 20 is moved to a position around the wall, and an image in which the direction of the line of sight is directed to the bedroom is displayed. That is, the user 20 moves instantaneously from the adjacent room to the position around the wall.
  • in this way, movement in the virtual space can be performed more smoothly in a situation where it is desired to move around the obstacle object 30.
  • in particular, the present technology can move the user 20 directly from the current position to a position that wraps around the obstacle object 30.
  • FIG. 14 is a flowchart for explaining the flow of processing when determining the position of the movement destination by moving the virtual viewpoint object along the obstacle object, as a second example of the single edge wraparound destination determination processing.
  • step S121 the wraparound destination control unit 153 specifies a gazing point on the obstacle object 30 according to the line of sight of the user 20.
  • the gazing point on the obstacle object 30 includes a portion that is not an edge.
  • step S122 the wraparound destination control unit 153 generates a virtual viewpoint object with the identified gazing point as a contact point.
  • This virtual viewpoint object is a virtual object that is generated to determine the position of the movement destination when the line of sight of the user 20 is not facing the edge of the obstacle object 30 such as a wall.
  • step S123 the wraparound destination control unit 153 calculates a vector along the obstacle object by applying a predetermined arithmetic expression to the normal line from the specified gazing point.
  • a vector along the wall is obtained.
  • step S124 the wraparound destination control unit 153 moves the generated virtual viewpoint object while contacting the obstacle object 30 based on the calculated vector.
  • the virtual viewpoint object is moved while being in contact with the wall based on the vector along the wall, with the position in contact with the specified gazing point as the start point of the movement trajectory.
  • step S125 the wraparound destination control unit 153 determines whether the length of the movement trajectory of the virtual viewpoint object has reached a predetermined value.
  • if it is determined in step S125 that the length of the movement locus has not reached the predetermined value, the process returns to step S124, and the above-described process is repeated. If it is determined that the length of the movement trajectory has reached the predetermined value by moving the virtual viewpoint object while contacting the obstacle object 30, the process proceeds to step S126.
  • step S126 the wraparound destination control unit 153 determines the movement point at which the length of the movement locus has become a predetermined value as the position of the movement destination of the user 20.
  • the position (viewpoint) of the moving destination determined in this way is set as a position (around position) around the obstacle object 30 such as a wall.
  • when the process in step S126 is completed, the process returns to step S15 in FIG. 6, and the subsequent processes are executed.
  • FIG. 15 shows an example of a virtual viewpoint object 310 that is virtually generated for the obstacle object 30 that is a wall.
  • the along-wall vector w is calculated by performing the calculation shown in FIG. 16 (S123 in FIG. 14). That is, in FIG. 16, the starting point of the along-wall vector w is positioned at the base (starting point) of the traveling vector f, and the normal n at the gazing point P G on the surface of the obstacle object 30 is arranged so as to start from the tip (end point) of the traveling vector f.
  • the along-wall vector w can be obtained by the following equation (1): w = f + a n ... (1)
  • here, the normal n is normalized (|n| = 1).
  • the coefficient a is the length obtained when the inverse vector of the traveling vector f is projected onto the normal n. That is, the coefficient a can be obtained by the following equation (2): a = -(f · n) ... (2)
  • from the relationship between equations (1) and (2), the along-wall vector w can be obtained by the following equation (3): w = f - (f · n) n ... (3)
  • the virtual viewpoint object 310 is moved in contact with the obstacle object 30 (wall) based on the along-wall vector w (S124 in FIG. 14).
  • FIG. 17 shows an example of movement of the virtual viewpoint object 310.
  • the virtual viewpoint object 310 is moved while touching the obstacle object 30 (wall), with the position in contact with the gazing point P G as the start point P S of the movement locus, and when the length L M of the movement locus reaches a predetermined value ("YES" in S125 of FIG. 14), the movement point P M is determined as the movement destination position of the user 20 (S126 of FIG. 14).
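  • The along-wall vector of equation (3) and the sliding of the virtual viewpoint object can be sketched as follows for a flat wall. The step size, the trajectory-length threshold, and the function names are assumptions; a real implementation would keep the object in contact with an arbitrarily shaped obstacle surface.

```python
import numpy as np

def along_wall_vector(f, n):
    """w = f - (f . n) n: the component of the traveling vector f tangent to the wall,
    using the unit normal n at the gazing point (equation (3))."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    f = np.asarray(f, dtype=float)
    return f - np.dot(f, n) * n

def slide_to_destination(gaze_point, f, n, trajectory_length=2.0, step=0.05):
    """Move a virtual viewpoint object along the wall from the gazing point P_G until
    the movement trajectory reaches trajectory_length; the resulting point is P_M."""
    w = along_wall_vector(f, n)
    norm = np.linalg.norm(w)
    if norm < 1e-6:
        return np.asarray(gaze_point, dtype=float)   # f is head-on to the wall; no tangent direction
    w_unit = w / norm
    p = np.asarray(gaze_point, dtype=float)
    travelled = 0.0
    while travelled < trajectory_length:              # S124 / S125: move while in contact, check the length
        p = p + step * w_unit
        travelled += step
    return p                                          # S126: movement destination P_M

# example: the user advances toward a wall whose unit normal is +X, drifting toward +Z
print(slide_to_destination(gaze_point=[0.0, 1.5, 0.0], f=[0.8, 0.0, 0.6], n=[1.0, 0.0, 0.0]))
```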
  • in this way, when the user 20 is looking at (including gazing at) a certain part of the obstacle object 30 (a part other than the edge), the user 20 is moved (instantaneously) to a position around the obstacle object 30, so that movement in the virtual space can be performed more smoothly in a situation where it is desired to go around the obstacle object 30.
  • next, with reference to FIGS. 18 to 21, as a third example of the single edge wraparound destination determination process, a case will be described in which a method of changing the wraparound according to the attribute of the space behind the obstacle object 30 is used when determining the position of the movement destination.
  • the walls that partition the rooms of the model room correspond to the obstacle object 30.
  • FIG. 18 is a flowchart for explaining the flow of processing when changing the wraparound method according to the space attribute on the back side of the obstacle object, as a third example of the wraparound destination determination processing for single edge.
  • step S141 the wraparound destination control unit 153 determines a preset space attribute on the back side of the wall.
  • "room" and "corridor" in the model room are assigned as examples of the space attribute.
  • if it is determined in step S141 that the space attribute is "room", the process proceeds to step S142.
  • step S142 the wraparound destination control unit 153 adjusts the position and direction of the movement destination when the space attribute is "room" to a position wrapping around to the side surface of the wall (near the entrance of the space) and a direction parallel to the side surface.
  • if it is determined in step S141 that the space attribute is "corridor", the process proceeds to step S143.
  • step S143 the wraparound destination control unit 153 adjusts the position and direction of the movement destination when the space attribute is "corridor" to a position wrapping around to the back side of the wall and a direction parallel to the wall.
  • when the processing in step S142 or S143 is completed, the processing returns to step S15 in FIG. 6 and the subsequent processing is executed.
  • the position and direction of the movement destination adjusted in the process of step S142 or S143 can be determined by the process of the first example or the second example described above, for example.
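  • The adjustment of steps S141 to S143 can be sketched as a simple lookup on the space attribute, as below. The dictionary keys describing the wall geometry are assumptions introduced only for this illustration.

```python
def adjust_destination(space_attribute, wall):
    """Adjust the wraparound position/direction according to the space attribute of the
    area behind the wall (S141-S143). `wall` is an assumed dict describing the wall:
    its entrance-side point, side-surface direction, and back-side point."""
    if space_attribute == "room":
        # S142: wrap around to the side surface of the wall, near the entrance of the space,
        # facing parallel to the side surface
        return wall["entrance_side_point"], wall["side_surface_direction"]
    elif space_attribute == "corridor":
        # S143: wrap around to the back side of the wall, facing parallel to the wall
        return wall["back_side_point"], wall["wall_direction"]
    else:
        # other attributes (washroom, Japanese-style room, ...) would have their own
        # rules; here we simply fall back to the unadjusted single-edge result
        return wall["default_point"], wall["default_direction"]

# example with made-up geometry
wall = {
    "entrance_side_point": (1.0, 0.0, 2.0), "side_surface_direction": (0.0, 0.0, 1.0),
    "back_side_point": (0.0, 0.0, 3.0),     "wall_direction": (1.0, 0.0, 0.0),
    "default_point": (0.5, 0.0, 2.0),       "default_direction": (0.0, 0.0, 1.0),
}
print(adjust_destination("corridor", wall))
```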
  • FIG. 19 shows an example of the attributed space in the model room.
  • the same first space attribute is assigned to the LDK and the two Western rooms.
  • the second space attribute is assigned to the Japanese-style room, and the third space attribute is assigned to the washroom, bathroom, and toilet. Furthermore, a fourth space attribute is assigned to the hallway.
  • in this case, the position and direction of the movement destination are determined as a position that wraps around to the side surface of the wall (near the entrance of the space) and a direction parallel to the side surface (S142 in FIG. 18).
  • FIG. 20 shows a state in which the user 20 looks at the edge of the wall that partitions the room as the obstacle object 30 and the edge becomes the gaze area 200.
  • in this case, the position (viewpoint) of the user 20 is moved to a position wrapping around to the side surface of the obstacle object 30 (wall) based on the first space attribute, "room", and the direction of the line of sight is made parallel to the side surface of the obstacle object 30 (wall) (arrow 320 in the figure). At this time, the head-mounted display 1 displays an image as if the user had instantaneously moved to near the entrance of the room (next to the wall).
  • on the other hand, when the space attribute is "corridor", the position and direction of the movement destination are determined as a position that wraps around to the back side of the wall and a direction parallel to the wall (S143 in FIG. 18).
  • FIG. 21 shows a state in which the user 20 looks at the edge of the wall that partitions the corridor as the obstacle object 30 and the edge becomes the gaze area 200.
  • in this case, the position (viewpoint) of the user 20 is moved to a position wrapping around to the back side of the obstacle object 30 (wall) based on the fourth space attribute, "corridor", and the direction of the line of sight is made parallel to the obstacle object 30 (wall) (arrow 320 in the figure). At this time, the head-mounted display 1 displays an image as if the user had instantaneously moved into the corridor (a place that has gone around to the back side of the wall).
  • although the space attributes of the room and the corridor have been illustrated here, processing for determining the position of the movement destination and the direction of the line of sight is similarly executed for the spaces to which the second space attribute and the third space attribute are assigned, such as the Japanese-style room, the washroom, the bathroom, and the toilet.
  • the space attribute can be set for each room in advance.
  • both the position (viewpoint) of the user 20 and the direction of the line of sight are adjusted according to the space attribute, but at least one of them may be adjusted.
  • next, a case will be described in which the amount of wraparound is adjusted when the user 20 is looking at (gazing at) a certain part of the obstacle object 30.
  • FIG. 22 is a flowchart for explaining the flow of processing when adjusting the amount of wraparound according to the amount of inclination of the user's body as a fourth example of the wraparound destination determination processing for single edge.
  • step S161 the wraparound destination control unit 153 calculates the amount of inclination of the body of the user 20 who is gazing at the edge of the obstacle object 30 based on the sensor data detected by the sensor unit 101.
  • the tilt amount of the body of the user 20 can be detected based on sensor data detected by an acceleration sensor or a gyro sensor.
  • the amount of inclination of the body of the user 20 in addition to the amount of inclination of the entire body, for example, the amount of inclination of a part of the body such as the head may be used.
  • step S162 the wraparound destination control unit 153 compares the calculated body tilt amount with a preset threshold value to determine whether the body tilt amount is larger than the threshold value.
  • if it is determined in step S162 that the amount of body tilt is greater than the threshold, the process proceeds to step S163.
  • step S163 the wraparound destination control unit 153 adjusts the wraparound amount according to the inclination amount of the body of the user 20 when determining the position and direction of the movement destination.
  • for example, the amount of wrapping around the obstacle object 30 can be increased as the inclination of the body of the user 20 increases.
  • the position of the movement destination adjusted in the process of step S163 can be determined by the processes of the first example and the second example described above, for example.
  • if it is determined in step S162 that the amount of body tilt is smaller than the threshold value, the process in step S163 is skipped. That is, in this case, when the position of the movement destination is determined, adjustment according to the amount of body tilt is not performed.
  • when the process in step S163 ends or is skipped, the process returns to step S15 of FIG. 6, and the subsequent processes are executed.
  • FIG. 23 shows an example of adjustment of the amount of wraparound according to the amount of inclination of the body of the user 20.
  • FIG. 23A shows a state where the user 20 has watched the edge of the obstacle object 30 and the edge has become the gaze area 200. At this time, since the body (for example, the head) of the user 20 is not tilted (it is determined that the body tilt amount is smaller than the threshold ("NO" in S162 in FIG. 22)), the amount of wraparound is not adjusted.
  • FIG. 23B shows a state where the body of the user 20 is tilted when the edge of the obstacle object 30 becomes the gaze area 200. Since it is determined that the amount of body tilt is greater than the threshold ("YES" in S162 in FIG. 22), the amount of wraparound is adjusted according to the amount of body tilt (S163 in FIG. 22). As a result, an image that matches the psychology of the user 20, who is peering at the back side of the obstacle object 30 while tilting the body, is displayed.
  • an arrow 320 corresponding to the amount of wraparound is illustrated here; by presenting this arrow 320 (annotation display), the user 20 can predict in advance, immediately before moving to the back side of the obstacle object 30, how far the viewpoint will wrap around. In other words, the arrow 320 can be said to be specific information for specifying at least one of the position (viewpoint) of the user and the direction of the line of sight before the position of the user 20 is moved. Note that the arrow 320 may be hidden.
  • the parameters for adjusting the amount of wraparound are not limited to the amount of inclination of the body of the user 20; other parameters, such as the gaze time of the edge portion of the obstacle object 30 by the user 20 and the gesture of the user 20, may be used.
  • next, a case where the gaze time of the edge portion is used as an example of another parameter will be described.
  • FIG. 24 is a flowchart for explaining the flow of processing when the wraparound amount is adjusted according to the gaze time of the edge portion as a fourth example of the single edge wraparound destination determination processing.
  • step S181 the wraparound destination control unit 153 calculates the gaze time of the edge portion of the obstacle object 30 by the user 20 based on the sensor data detected by the sensor unit 101.
  • step S182 the wraparound destination control unit 153 compares the calculated gaze time with a preset threshold value and determines whether or not the gaze time is greater than the threshold value.
  • if it is determined in step S182 that the gaze time is greater than the threshold, the process proceeds to step S183.
  • step S183 the wraparound destination control unit 153 adjusts the wraparound amount according to the gaze time when determining the position and direction of the movement destination.
  • the position of the movement destination adjusted in the process of step S183 can be determined by the process of the first example or the second example described above, for example.
  • if it is determined in step S182 that the gaze time is smaller than the threshold value, the process in step S183 is skipped. That is, in this case, when determining the position of the movement destination, adjustment according to the gaze time is not performed.
  • when the process of step S183 ends or is skipped, the process returns to step S15 of FIG. 6 and the subsequent processes are executed.
  • the processing when the amount of inclination of the body of the user 20 and the gaze time of the edge portion of the obstacle object 30 are used as parameters for adjusting the amount of wraparound has been described as separate processing; however, processing using these parameters simultaneously may be executed. For example, adjustment can be made so that the amount of wrapping around the obstacle object 30 increases as the amount of inclination of the body of the user 20 becomes larger and the gaze time of the edge portion becomes longer.
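  • Combining the two parameters can be sketched as a simple scaling of the wraparound amount. The thresholds, gains, and clamping value below are assumptions, not values from the disclosure.

```python
def wraparound_amount(base_amount, body_tilt_deg, gaze_time_s,
                      tilt_threshold_deg=5.0, gaze_threshold_s=1.0,
                      tilt_gain=0.05, gaze_gain=0.3, max_scale=2.0):
    """Scale the wraparound amount by body tilt (S161-S163) and edge gaze time
    (S181-S183). A parameter below its threshold contributes nothing."""
    scale = 1.0
    if body_tilt_deg > tilt_threshold_deg:
        scale += tilt_gain * (body_tilt_deg - tilt_threshold_deg)
    if gaze_time_s > gaze_threshold_s:
        scale += gaze_gain * (gaze_time_s - gaze_threshold_s)
    return base_amount * min(scale, max_scale)

# example: a strongly tilted user who has stared at the edge for a while
print(wraparound_amount(base_amount=1.0, body_tilt_deg=15.0, gaze_time_s=2.5))
```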
  • in this way, when the user 20 is looking at (gazing at) a portion where the obstacle object 30 is present, the viewpoint is moved (instantaneously) to a position wrapping around the obstacle object 30 with an amount of wraparound according to the parameter, so that movement in the virtual space that matches the psychology of the user 20 can be performed.
  • FIG. 25 shows an example in which the user 20 opens the door when gazing around the door.
  • a to C in FIG. 25 are arranged in time series in that order.
  • FIG. 25A shows a state where the user 20 in the hallway in the virtual space is looking at the closed door on the right side.
  • the right door viewed by the user 20 is automatically opened (B in FIG. 25).
  • for example, when the control unit 100 determines, based on the sensor data detected by the sensor unit 101, that the gaze time of the right door by the user 20 is longer than the threshold, the door is opened.
  • B in FIG. 25 shows a state in which the user 20 gazes near the entrance of the opened right door (near the edge of the wall) and the edge becomes the gaze area 200.
  • then, the position (viewpoint) of the user 20 is moved to a position around the wall (near the entrance of the opened right door), and the direction of the line of sight is directed into the room ahead.
  • in this way, by automatically opening the door when the user 20 is gazing around the door, the user 20 can move to a position around the wall simply by directing the line of sight L S , without performing an operation of opening the door.
  • here, the door has been described as an example of the obstacle object, but an object other than a door may be removed as the preceding process for determining the position of the movement destination.
  • FIG. 26 shows an example of a case where the user 20 displays an image of an overhead view when watching the vicinity of the boundary between the wall and the ceiling.
  • FIG. 26A shows a state in which the user 20 in the room in the virtual space gazes near the edge of the boundary between the ceiling and the wall, and the edge becomes the gaze area 200.
  • the viewpoint of the user 20 is switched to the overhead viewpoint, and an image of the model room including the room in which the user 20 is present viewed from obliquely above is displayed.
  • the user 20 can grasp the entire image of the model room at an arbitrary timing.
  • the vicinity of the boundary between the wall and the ceiling is merely an example, and the overhead view video may be displayed when another predetermined area is gazed at.
  • the model room is also an example, and an image of an overhead view can be displayed when a specific area is watched even in another virtual space.
  • FIG. 27 shows an example of displaying, in advance, a video of the wraparound destination around the obstacle object 30.
  • the user 20 can check the video of the wraparound destination and determine whether to move to the wraparound destination of the obstacle object 30. If it is determined to move to the destination, the above-described processing is performed, so that the position (viewpoint) of the user 20 is moved to the position around the obstacle object 30.
  • the obstacle object 30 can include, for example, display shelves of retail stores such as department stores and supermarkets, in addition to the walls that partition the model room.
  • the small screen display area 400 pops up, and the image of the point around the shelf is displayed.
  • in this way, when the position of the user's line of sight L S is in a peripheral area (for example, an edge) of the obstacle object 30 arranged in the virtual space, the image visually recognized by the user 20 after moving the position of the user 20 in the direction of the line of sight L S can be presented not only on the entire visual field area of the user 20 but also on only a part of the visual field area.
  • that is, when the edge of the obstacle object 30 becomes the gaze area 200, the position (viewpoint) and direction of the movement destination are determined, and when the image visually recognized by the user 20 is presented, there are cases where the viewpoint of the user 20 is actually moved and cases where it is not moved.
  • in this way, since the user 20 can determine whether or not to move to the wraparound destination of the obstacle object 30, a more appropriate judgment can be made in situations where it is desired to go around the obstacle object 30.
  • the first to seventh examples of the single edge wraparound destination determination process described above are merely examples; as the single edge wraparound destination determination process, another process for determining the position of the wraparound destination with respect to the obstacle object 30 when one edge exists in the vicinity of the gazing point may be executed.
  • FIG. 28 is a flowchart for explaining the flow of the wraparound destination determination process from the obstacle upper end point.
  • step S201 the wraparound destination control unit 153 determines a wraparound point for an obstacle object having no edge.
  • a wraparound point can be determined using a ray casting method.
  • step S202 the wraparound destination control unit 153 determines the position (wraparound position) and direction of the movement destination according to the determined wraparound point.
  • when the process in step S202 is completed, the process returns to step S16 in FIG. 6, and the subsequent processes are executed.
  • FIG. 29 shows an example of a method for determining a wraparound point for an obstacle object 31 having no edge.
  • the point on the obstacle object 31 is the end point of the line-of-sight vector s corresponding to the direction of the line of sight.
  • the ray casting method is a method in which a ray (ray) is emitted (cast) from the viewpoint of the user 20 and a distance to the closest object (object) is measured.
  • a ray casting range R RC corresponding to a predetermined range in the front direction of the face such as the viewing angle of the user 20 is set, and a plurality of ray castings are performed within the range of the ray casting range R RC .
  • in the figure, five rays Ray (solid-line and dotted-line arrows) are illustrated; the outermost rays hitting the obstacle object 31 on the left and right sides (the rays indicated by solid-line arrows) are taken as the rays Ray-A and Ray-B, and the points they hit are the end points EP A and EP B on the obstacle object 31, respectively.
  • since the obstacle object 31 is searched in the direction of the along-wall vector w, the end point that is found first among the end points EP A and EP B is used.
  • that is, the ray Ray having the smaller angle is selected based on the angles θ A and θ B formed by the ray Ray-A and the ray Ray-B, respectively, and in this example the end point EP B by the ray Ray-B is selected.
  • thus, the end point EP B by the ray Ray-B is determined as the wraparound point with respect to the obstacle object 31 having no edge (S201 in FIG. 28).
  • FIG. 30 shows an example of a method for determining the position (viewpoint) of the movement destination as the wraparound position.
  • the end point EP A based on the ray Ray-A is not used because the end point EP B based on the ray Ray-B has been determined as the wraparound point by the method for determining the wraparound point described above.
  • the end point EP B by the ray Ray-B is used to determine the position to which the user 20 moves; specifically, a position away from the object surface at the end point EP B , that is, the collision point between the obstacle object 31 and the ray Ray-B, by a predetermined distance in the normal direction can be set as the movement destination position P M (wraparound position) of the user 20.
  • the direction of the line of sight of the user 20 after the movement may be the same direction as the line of sight before the movement, or may be directed in another direction, for example, the direction of the ray Ray-B.
  • in this way, a position away from the determined wraparound point (end point EP B ) by a predetermined distance is determined as the movement destination position P M (wraparound position) of the user 20 (S202 in FIG. 28).
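  • The ray-casting selection of a wraparound point for an edgeless obstacle can be sketched as below. Purely for illustration, the obstacle is modeled as a circle on the XZ plane, the smaller angle is measured against the line of sight, and every numeric value is an assumption; the disclosure itself only describes casting a fan of rays, taking the outermost left and right hits as EP A and EP B, choosing the ray with the smaller angle, and backing the chosen point off the surface along the normal.

```python
import math

def ray_hits_circle(origin, direction, center, radius):
    """Return the nearest intersection of a 2D ray with a circle, or None."""
    ox, oz = origin[0] - center[0], origin[1] - center[1]
    dx, dz = direction
    a = dx * dx + dz * dz
    b = 2.0 * (ox * dx + oz * dz)
    c = ox * ox + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    if t < 0.0:
        return None
    return (origin[0] + t * dx, origin[1] + t * dz)

def wraparound_point(user_pos, gaze_angle_rad, obstacle_center, obstacle_radius,
                     half_fov_rad=math.radians(30), n_rays=9, offset=0.5):
    """Cast a fan of rays inside the ray casting range, keep the outermost left/right
    hits (end points EP_A / EP_B), pick the one with the smaller angle, and back it
    off from the surface along the outward normal (wraparound position)."""
    hits = []   # (signed angle from the line of sight, hit point)
    for i in range(n_rays):
        ang = gaze_angle_rad - half_fov_rad + 2.0 * half_fov_rad * i / (n_rays - 1)
        p = ray_hits_circle(user_pos, (math.cos(ang), math.sin(ang)),
                            obstacle_center, obstacle_radius)
        if p is not None:
            hits.append((ang - gaze_angle_rad, p))
    if not hits:
        return None
    leftmost, rightmost = min(hits), max(hits)                             # outermost hitting rays
    angle, end_point = min((abs(a), p) for a, p in (leftmost, rightmost))  # smaller angle wins
    nx = end_point[0] - obstacle_center[0]
    nz = end_point[1] - obstacle_center[1]
    norm = math.hypot(nx, nz)
    return (end_point[0] + offset * nx / norm, end_point[1] + offset * nz / norm)

# example: user at the origin looking along +X toward a round obstacle
print(wraparound_point(user_pos=(0.0, 0.0), gaze_angle_rad=0.0,
                       obstacle_center=(5.0, 1.0), obstacle_radius=1.5))
```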
  • as a result, even for the obstacle object 31 having no edge, movement in the virtual space can be performed more smoothly in a situation where it is desired to go around the obstacle object 31.
  • clustering allows objects close to each other to be handled as one large block of obstacle objects.
  • for example, a collection of a plurality of trees is handled as one obstacle object 32-1 or 32-2.
  • the above-described method can be applied to such large clustered obstacle objects 32-1 and 32-2 as obstacle objects having no edge.
  • as above, the details of the wraparound destination determination process from the obstacle upper end point have been described.
  • the wraparound destination determination process from the obstacle upper end point described above is an example; as this process, another process for determining the position of the wraparound destination with respect to the obstacle object 31 when no edge exists near the gazing point may be executed.
  • next, with reference to FIGS. 32 to 38, details of the multi-edge wraparound destination determination process corresponding to the process of step S17 of FIG. 6 will be described. Here, three processes, the first to third examples, are shown as the wraparound destination determination process for multiple edges.
  • FIG. 32 is a flowchart for explaining the flow of processing when a wraparound destination candidate is displayed as a first example of the wraparound destination determination process for multiple edges.
  • step S301 the wraparound destination control unit 153 calculates the distance from the gazing point to each edge of the obstacle object 30.
  • step S302 the wraparound destination control unit 153 selects one edge as the target edge from among the edges for which the distance from the gazing point has been calculated.
  • step S303 the wraparound destination control unit 153 determines whether the selected target edge is the edge closest to the gazing point based on the calculated distance from the gazing point of each edge.
  • if it is determined in step S303 that the target edge is closest to the gazing point, the process proceeds to step S304.
  • step S304 the wraparound destination control unit 153 highlights (for example, highlights) an arrow indicating the wraparound destination of the target edge.
  • if it is determined in step S303 that the target edge is not closest to the gazing point, the process proceeds to step S305.
  • step S305 the wraparound destination control unit 153 non-highlights (for example, displays lightly) an arrow indicating the wraparound destination of the target edge.
  • when the process in step S304 or S305 is completed, the process proceeds to step S306.
  • step S306 the wraparound destination control unit 153 determines whether all edges have been selected.
  • if it is determined in step S306 that not all edges have been selected, the process returns to step S302, and the processes of steps S302 to S306 are repeated. That is, an unselected edge (another edge) is selected as the target edge (S302); when the selected edge is closest to the gazing point, its arrow is highlighted (S304), and otherwise its arrow is not highlighted (S305).
  • note that highlighting is an example of an emphasized display form, and any other emphasized display form can be used.
  • likewise, light (thin) display is an example of a non-emphasized display form, and other display forms may be used.
  • if it is determined in step S306 that all edges have been selected, the process proceeds to step S307.
  • step S307 the wraparound destination control unit 153 determines whether or not the wraparound destination candidate indicated by the highlighted arrow matches the user's intention.
  • if it is determined in step S307 that the wraparound destination candidate matches the intention of the user 20, the process proceeds to step S308.
  • step S308 the wraparound destination control unit 153 determines the position corresponding to the wraparound destination candidate as the position of the movement destination.
  • on the other hand, if it is determined in step S307 that the wraparound destination candidate indicated by the highlighted arrow does not match the intention of the user 20, the process of step S308 is skipped. When the process in step S308 ends or is skipped, the process returns to step S17 in FIG. 6 and the subsequent processes are executed. Further, when the wraparound destination candidate does not match the intention of the user 20, for example, the arrow of another edge may be highlighted, and it may be determined again whether that wraparound destination candidate matches the intention of the user 20.
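  • The core of steps S301 to S305 is selecting the edge nearest the gazing point and emphasizing its arrow; a minimal sketch with assumed data structures follows.

```python
import math

def choose_wraparound_candidate(gaze_point, edges):
    """Return (nearest_edge_id, display_states) where display_states maps each edge id
    to 'highlighted' or 'dimmed' (S301-S305). `edges` is an assumed mapping from an
    edge id to its (x, y, z) position."""
    nearest = min(edges, key=lambda e: math.dist(gaze_point, edges[e]))             # S303
    states = {e: ("highlighted" if e == nearest else "dimmed") for e in edges}      # S304 / S305
    return nearest, states

# example: three edges around the gazing point, as in the three walls of FIG. 33
edges = {"30E-1": (1.0, 0.0, 2.0), "30E-2": (-1.2, 0.0, 2.5), "30E-3": (0.3, 0.0, 1.4)}
print(choose_wraparound_candidate(gaze_point=(0.0, 0.0, 1.0), edges=edges))
```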
  • FIG. 33 shows an example of the display of the wraparound candidate.
  • in FIG. 33, the line of sight L S of the user 20 is directed between the obstacle objects 30-1 to 30-3.
  • the area indicated by the dotted circle in the figure is the gazing point vicinity area 340.
  • the edge 30E-1 of the obstacle object 30-1, the edge 30E-2 of the obstacle object 30-2, and the edge 30E-3 of the obstacle object 30-3 are within the gazing point vicinity region 340. Since it exists, the user 20 is gazing at one of the edges 30E.
  • the distance from the gazing point of each edge is calculated, and since the edge 30E-3 is determined to be the closest edge from the gazing point among the edges 30E-1 to 30E-3, the obstacle object 30-
  • the arrow 320-3 indicating the wraparound destination of 3 is highlighted (for example, highlighted) (S304 in FIG. 32), and the arrows 320-1 and 320-2 indicating the wraparound destination of the obstacle objects 30-1 and 30-2 are displayed. Is not highlighted (for example, lightly displayed) (S305 in FIG. 32).
  • the arrow 320 is identification information for identifying the edge 30E closest to the line of sight of the user 20.
  • arrows 320-1 and 320-2 are presented together with the highlighted arrow 320-3, but the arrows 320-1 and 320-2 may be hidden.
  • the position corresponding to the wraparound destination candidate is determined as the movement destination position (S308 in FIG. 32), and the user 20 Is moved (instantaneous movement) to a position around the obstacle object 30-3.
  • FIG. 34 is a flowchart for explaining the flow of processing when a specific obstacle object is deformed as a second example of the wraparound destination determination processing for multiple edges.
  • In step S321, the wraparound destination control unit 153 identifies the obstacle object 30 to be deformed.
  • Here, an obstacle object 30 is identified whose shape or position, when deformed, makes it easy to determine the edge that the user 20 is gazing at.
  • In step S322, the wraparound destination control unit 153 deforms the identified obstacle object 30.
  • In the deformation process, for example, the shape or position of the identified obstacle object 30 is changed so that the edge that the user 20 is gazing at can be easily determined.
  • In step S323, the wraparound destination control unit 153 determines whether the gaze area 200 has been determined. If it is determined in step S323 that the gaze area 200 has not been determined, the determination process in step S323 is repeated.
  • If it is determined in step S323 that the gaze area 200 has been determined, the process proceeds to step S324.
  • In step S324, the wraparound destination control unit 153 determines the wraparound destination position corresponding to the gaze area 200 as the movement destination position.
  • When the process in step S324 is completed, the process returns to step S17 in FIG. 6, and the subsequent processes are executed.
  • FIG. 35 shows an example of a deformed display of a specific obstacle object 30.
  • In FIG. 35, the line of sight L S of the user 20 is directed between the obstacle objects 30-1 to 30-3.
  • The space between them is the gazing point vicinity region 340.
  • In this example, the lateral lengths of the obstacle objects 30-1 and 30-3 are shortened (on the right or left side).
  • As a result, the space between the obstacle objects 30-1 to 30-3 is widened, and the edge 30E at which the user 20 is gazing can be easily identified.
  • In the example of FIG. 35, the back edge of the obstacle object 30-3 is set as the gaze area 200. A sketch of such a deformation is given below.
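One way to picture the deformation in step S322 is the following hypothetical Python fragment (the Obstacle class, deform_near_gaze function, and the shrink ratio are illustrative assumptions): obstacles whose edges fall inside the gazing point vicinity region have their lateral length shortened, which pulls their edges apart and makes the gazed-at edge easier to discriminate.

```python
import math
from dataclasses import dataclass

@dataclass
class Obstacle:
    center_x: float  # lateral center of the user-side face in the XZ plane
    width: float     # lateral length
    z: float         # depth of the user-side face

def edge_positions(obstacle):
    """Left and right edge positions of the user-side face."""
    half = obstacle.width / 2.0
    return [(obstacle.center_x - half, obstacle.z), (obstacle.center_x + half, obstacle.z)]

def deform_near_gaze(obstacles, gaze_x, gaze_z, radius, shrink_ratio=0.7):
    """Shorten the lateral length of obstacles whose edges fall inside the
    gazing point vicinity region, widening the gaps between edges (cf. S321-S322)."""
    for obstacle in obstacles:
        near = any(math.hypot(ex - gaze_x, ez - gaze_z) <= radius
                   for ex, ez in edge_positions(obstacle))
        if near:
            obstacle.width *= shrink_ratio
    return obstacles

# Example: three obstacles placed side by side, as in FIG. 35
obstacles = [Obstacle(-2.0, 1.8, 3.0), Obstacle(0.0, 1.8, 3.0), Obstacle(2.0, 1.8, 3.0)]
deform_near_gaze(obstacles, gaze_x=1.0, gaze_z=3.0, radius=0.5)
```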
  • FIG. 36 is a flowchart for explaining the flow of processing when determining the edge centroid position as the position of the movement destination as a third example of the wraparound destination determination processing for multiple edges.
  • In step S341, the wraparound destination control unit 153 calculates the position of the center of gravity of the edges.
  • In step S342, the wraparound destination control unit 153 determines the calculated center-of-gravity position as the movement destination position.
  • When the process in step S342 is completed, the process returns to step S17 in FIG. 6, and the subsequent processes are executed.
  • FIG. 37 shows an example in which the edge center-of-gravity position is set as the movement destination position (viewpoint).
  • In FIG. 37, the line of sight L S of the user 20 is directed between the obstacle objects 30-1 to 30-3.
  • The space between them is the gazing point vicinity region 340.
  • Since the edges 30E-1 to 30E-3 exist in the gazing point vicinity region 340, it cannot be determined which edge 30E the user 20 is gazing at, and the user 20 must not be made to wrap around in an unintended direction. Therefore, when a plurality of edges exist in the gazing point vicinity region 340 and discrimination is difficult, the position of the center of gravity of the edges 30E is calculated and set as the movement destination position.
  • The center-of-gravity position g of the edges 30E can be calculated as follows.
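The formula itself is not reproduced in this text; assuming the ordinary center of gravity of the edge positions P1, P2, and P3 referred to in FIG. 38, it can be written as:

```latex
g = \frac{P_1 + P_2 + P_3}{3},
\qquad\text{or, for } n \text{ edge positions } P_1,\dots,P_n,\qquad
g = \frac{1}{n}\sum_{i=1}^{n} P_i .
```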
  • FIG. 38 shows a state in which the space between the obstacle objects 30-1 to 30-3 is looked down on from directly above.
  • In FIG. 38, the center-of-gravity position g of the figure formed by connecting the position P1 of the edge 30E-1, the position P2 of the edge 30E-2, and the position P3 of the edge 30E-3 is calculated.
  • The center-of-gravity position g can then be determined as the movement destination position (S342 in FIG. 36). Thus, after moving to the center-of-gravity position g, the user 20 can gaze at the edge 30E in the direction that the user intends.
  • Here, the center-of-gravity position in the case where the figure formed by connecting the edge positions (the figure corresponding to the plurality of edges) is a triangle has been illustrated, but for other figures as well, the center-of-gravity position g can be similarly calculated and used as the movement destination position.
  • This prevents the user 20 from wrapping around in a direction that the user 20 does not intend, and allows the user 20 to reliably move to the wraparound destination that the user 20 intends.
  • The wraparound destination determination process for multiple edges has been described above.
  • The first to third examples of the multiple-edge wraparound destination determination process described above are merely examples, and another process that determines the wraparound destination position with respect to the obstacle object 30 when a plurality of edges exist in the vicinity of the gazing point may be executed as the multiple-edge wraparound destination determination process.
  • In the above description, the head mounted display 1 has been exemplified, but the present technology is not limited to a head mounted display in a narrow sense.
  • The present technology may be applied not only to the head mounted display 1 configured as a single device, but also to an information processing device (a virtual space experience system such as a game machine or a personal computer) connected to the head mounted display 1; that is, it may be applied to some of the devices constituting a system composed of a plurality of devices.
  • Here, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing.
  • The present technology can also be applied to two-dimensional (2D) video displayed on a display device such as a monitor.
  • In the above description, virtual reality (VR) has been exemplified, but the present technology is not limited to virtual reality (VR) and may also be applied to augmented reality (AR), which augments the real world by displaying additional information superimposed on the real space.
  • The displayed video is not limited to video in a VR (Virtual Reality) space, and may be other video such as video of a real space.
  • FIG. 39 is a block diagram illustrating an example of a hardware configuration of a computer that executes the above-described series of processes using a program.
  • In the computer 1000, a CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are connected to one another by a bus 1004.
  • An input / output interface 1005 is further connected to the bus 1004.
  • An input unit 1006, an output unit 1007, a recording unit 1008, a communication unit 1009, and a drive 1010 are connected to the input / output interface 1005.
  • the input unit 1006 includes a microphone, a keyboard, a mouse, and the like.
  • the output unit 1007 includes a speaker, a display, and the like.
  • the recording unit 1008 includes a hard disk, a nonvolatile memory, and the like.
  • the communication unit 1009 includes a network interface or the like.
  • the drive 1010 drives a removable recording medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer 1000 configured as described above, the CPU 1001 loads the program recorded in the ROM 1002 or the recording unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executes it, whereby the above-described series of processes is performed.
  • the program executed by the computer 1000 can be provided by being recorded on a removable recording medium 1011 as a package medium, for example.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the recording unit 1008 via the input / output interface 1005 by attaching the removable recording medium 1011 to the drive 1010.
  • the program can be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the recording unit 1008.
  • the program can be installed in the ROM 1002 or the recording unit 1008 in advance.
  • The processing performed by the computer in accordance with the program does not necessarily have to be performed chronologically in the order described in the flowcharts. That is, the processing performed by the computer in accordance with the program also includes processes executed in parallel or individually (for example, parallel processing or processing by objects).
  • The program may be processed by a single computer (processor) or may be processed in a distributed manner by a plurality of computers.
  • the embodiments of the present technology are not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
  • the configuration described in the above embodiment may be implemented alone, or may be implemented by combining a plurality of configurations.
  • each step of the above-described processing can be executed by one apparatus or can be executed by a plurality of apparatuses. Further, when a plurality of processes are included in one step, the plurality of processes included in the one step can be executed by being shared by a plurality of apparatuses in addition to being executed by one apparatus.
  • In addition, the present technology can have the following configurations.
  • (1) An information processing apparatus including: a movement information acquisition unit that acquires movement information for moving a user's viewpoint in a virtual space; a direction information acquisition unit that acquires direction information corresponding to the direction of the user's line of sight; and a display control unit that, when the position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, controls a display device so as to present to the user an image visually recognized by the user by moving the user's viewpoint in the direction of the line of sight based on at least one of the acquired movement information and the direction information and changing the direction of the line of sight.
  • (7) The information processing apparatus according to (6), wherein the viewpoint of the user is moved to a position in the vicinity of a side surface or a back surface of the object relative to the user-side surface.
  • The peripheral region is a region that includes the object at the position of the line of sight and includes an edge of the object.
  • The movement information includes a detection result of an operation or action of the user, or of a dwell time of the user's line of sight.
  • The direction information includes a detection result of the user's line of sight or face direction, or of an operation or action of the user.
  • At the time of moving the user's viewpoint, at least one of the user's viewpoint and the direction of the line of sight is adjusted according to an attribute of the space of the movement destination.
  • The information processing apparatus according to (5), wherein, when the object includes a plurality of edges, specific information for specifying the edge closest to the position of the line of sight is presented, one or a plurality of the objects are deformed, or the viewpoint of the user is moved to the position of the center of gravity of a figure corresponding to the plurality of edges.
  • The display control unit presents the image in all or a part of the visual field region of the user.
  • The information processing apparatus according to any one of (1) to (15), wherein, when the position of the line of sight is on another object that emphasizes a specific position in the virtual space, the display control unit presents an image visually recognized by the user by moving the user's viewpoint in the direction of the line of sight and maintaining the direction of the line of sight.
  • (17) The information processing apparatus according to any one of (1) to (16), wherein the display control unit presents specific information for specifying at least one of the user's viewpoint and the direction of the line of sight before moving the user's viewpoint. (18) The information processing apparatus according to any one of (1) to (17), configured as a head mounted display.
  • An information processing method in which an information processing device: acquires movement information for moving a user's viewpoint in a virtual space; acquires direction information corresponding to the direction of the user's line of sight; and, when the position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, controls a display device so as to present to the user an image visually recognized by the user by moving the user's viewpoint in the direction of the line of sight based on at least one of the acquired movement information and the direction information and changing the direction of the line of sight.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present feature pertains to an information processing device, an information processing method, and a program that are configured so as to enable a virtual space to be moved through more smoothly. Provided is an information processing device provided with: a movement information acquisition unit for acquiring movement information for moving a user's viewpoint in a virtual space; a direction information acquisition unit for acquiring direction information that corresponds to the direction of the user's line of sight; and a display control unit that, when the position of the user's line of sight is present in the peripheral region of an object arranged in the virtual space, moves the user's viewpoint in the direction of the line of sight on the basis of at least one of the acquired movement information and direction information and changes the direction of the line of sight, thereby controlling a display device so that an image visually recognized by the user is presented to the user. The present feature can be applied, for example, to a head-mounted display.

Description

Information processing apparatus, information processing method, and program
 The present technology relates to an information processing apparatus, an information processing method, and a program, and more particularly, to an information processing apparatus, an information processing method, and a program that make it possible to move through a virtual space more smoothly.
 In recent years, research and development of technologies for providing a virtual reality (VR) function using devices such as a head mounted display (HMD) have been actively conducted (see, for example, Patent Document 1).
 Patent Document 1 discloses an image generation apparatus capable of moving a user's viewpoint with respect to a wide-viewing-angle image such as a panoramic image when the user wearing a head mounted display moves through a panoramic space (virtual space).
Patent Document 1: JP 2016-62486 A
 When moving the user's viewpoint within a virtual space, there are situations in which the user wants to go around to the other side of an obstacle object such as a wall. At present, however, to go around to the other side of an obstacle object, the user must first move to the vicinity of the edge of the obstacle object and then perform additional operations to move and change direction. Therefore, there is a demand for a technique for moving through the virtual space smoothly.
 The present technology has been made in view of such a situation, and makes it possible to move through a virtual space more smoothly.
 An information processing apparatus according to one aspect of the present technology includes: a movement information acquisition unit that acquires movement information for moving a user's viewpoint in a virtual space; a direction information acquisition unit that acquires direction information corresponding to the direction of the user's line of sight; and a display control unit that, when the position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, controls a display device so as to present to the user an image visually recognized by the user by moving the user's viewpoint in the direction of the line of sight based on at least one of the acquired movement information and the direction information and changing the direction of the line of sight.
 An information processing method and a program according to one aspect of the present technology are an information processing method and a program corresponding to the information processing apparatus according to the above-described aspect of the present technology.
 In the information processing apparatus, the information processing method, and the program according to one aspect of the present technology, movement information for moving a user's viewpoint in a virtual space is acquired, direction information corresponding to the direction of the user's line of sight is acquired, and, when the position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, a display device is controlled so as to present to the user an image visually recognized by the user by moving the user's viewpoint in the direction of the line of sight based on at least one of the acquired movement information and the direction information and changing the direction of the line of sight.
 The information processing apparatus according to one aspect of the present technology may be an independent apparatus or an internal block constituting one apparatus.
 According to one aspect of the present technology, it is possible to move through a virtual space more smoothly.
 Note that the effects described here are not necessarily limited, and any of the effects described in the present disclosure may be obtained.
FIG. 1 is a diagram illustrating an example of the appearance of a head mounted display to which the present technology is applied.
FIG. 2 is a block diagram illustrating an example of the configuration of an embodiment of a head mounted display to which the present technology is applied.
FIG. 3 is a block diagram illustrating an example of the functional configuration of the control unit of FIG. 2.
FIG. 4 is a diagram explaining the outline of the present technology.
FIG. 5 is a diagram explaining the outline of the present technology.
FIG. 6 is a flowchart explaining the flow of processing executed by the head mounted display.
FIG. 7 is a flowchart explaining the flow of a first example of the single-edge wraparound destination determination process.
FIG. 8 is a diagram illustrating a first example of a method of determining the position and direction of the wraparound destination.
FIG. 9 is a diagram illustrating a first example of a method of determining the position and direction of the wraparound destination.
FIG. 10 is a diagram illustrating a first example of a CG image of a model room.
FIG. 11 is a diagram illustrating a second example of a CG image of a model room.
FIG. 12 is a diagram illustrating a third example of a CG image of a model room.
FIG. 13 is a diagram illustrating a fourth example of a CG image of a model room.
FIG. 14 is a flowchart explaining the flow of a second example of the single-edge wraparound destination determination process.
FIG. 15 is a diagram illustrating a second example of a method of determining the position and direction of the wraparound destination.
FIG. 16 is a diagram illustrating a second example of a method of determining the position and direction of the wraparound destination.
FIG. 17 is a diagram illustrating a second example of a method of determining the position and direction of the wraparound destination.
FIG. 18 is a flowchart explaining the flow of a third example of the single-edge wraparound destination determination process.
FIG. 19 is a diagram illustrating an example of the assignment of space attributes to each room.
FIG. 20 is a diagram illustrating an example of adjusting the position and direction of the wraparound destination according to a space attribute.
FIG. 21 is a diagram illustrating an example of adjusting the position and direction of the wraparound destination according to a space attribute.
FIG. 22 is a flowchart explaining the flow of a fourth example of the single-edge wraparound destination determination process.
FIG. 23 is a diagram illustrating an example of adjusting the wraparound amount according to the amount of body tilt.
FIG. 24 is a flowchart explaining the flow of a fourth example of the single-edge wraparound destination determination process.
FIG. 25 is a diagram illustrating an example of setting a door to an opened state when the area around the door is being gazed at.
FIG. 26 is a diagram illustrating an example of displaying a bird's-eye view image when a specific area is being gazed at.
FIG. 27 is a diagram illustrating an example of displaying an image of the wraparound destination.
FIG. 28 is a flowchart explaining the flow of the wraparound destination determination process from an obstacle upper end point.
FIG. 29 is a diagram illustrating an example of a method of determining the position and direction of the wraparound destination.
FIG. 30 is a diagram illustrating an example of a method of determining the position and direction of the wraparound destination.
FIG. 31 is a diagram illustrating an example of clustering of detailed objects.
FIG. 32 is a flowchart explaining the flow of a first example of the multiple-edge wraparound destination determination process.
FIG. 33 is a diagram illustrating an example of the display of wraparound destination candidates.
FIG. 34 is a flowchart explaining the flow of a second example of the multiple-edge wraparound destination determination process.
FIG. 35 is a diagram illustrating an example of a deformed display of a specific obstacle object.
FIG. 36 is a flowchart explaining the flow of a third example of the multiple-edge wraparound destination determination process.
FIG. 37 is a diagram illustrating an example in which the edge center-of-gravity position is used as the movement destination position.
FIG. 38 is a diagram illustrating an example in which the edge center-of-gravity position is used as the movement destination position.
FIG. 39 is a diagram illustrating an example of the configuration of a computer.
 Hereinafter, embodiments of the present technology will be described with reference to the drawings. The description will be made in the following order.
1. Embodiment of the present technology
2. Modifications
3. Configuration of the computer
<1. Embodiment of the present technology>
(Example of HMD appearance)
 FIG. 1 is a diagram illustrating an example of the appearance of a head mounted display to which the present technology is applied.
 The head mounted display 1 is an information processing apparatus that is worn on the head so as to cover both eyes of the user and is used for viewing still images, moving images, and the like displayed on a display screen provided in front of the user's eyes.
 In FIG. 1, the head mounted display 1 has a main body portion 11 and a head contact portion 12. The main body portion 11 is provided with a display, various sensors, a communication module, and the like. The head contact portion 12 fixes the main body portion 11 so that it contacts the user's head and covers both eyes.
(Example of HMD configuration)
 FIG. 2 is a block diagram illustrating an example of the configuration of an embodiment of a head mounted display to which the present technology is applied.
 In FIG. 2, the head mounted display 1 includes a control unit 100, a sensor unit 101, a storage unit 102, a display unit 103, an audio output unit 104, an input terminal 105, an output terminal 106, and a communication unit 107.
 The control unit 100 includes, for example, a CPU (Central Processing Unit). The control unit 100 is a central processing device that controls the operation of each unit and performs various kinds of arithmetic processing. A dedicated processor such as a GPU (Graphics Processing Unit) may also be provided.
 The sensor unit 101 includes, for example, various sensor devices. The sensor unit 101 senses the user and the user's surroundings and supplies sensor data corresponding to the sensing results to the control unit 100.
 Here, the sensor unit 101 can include, for example, a magnetic sensor that detects the magnitude and direction of a magnetic field, an acceleration sensor that detects acceleration, a gyro sensor that detects angle (attitude), angular velocity, and angular acceleration, and a proximity sensor that detects nearby objects. The user's gestures and the like may also be detected by an inertial measurement unit (IMU) that obtains three-dimensional angular velocity and acceleration.
 A camera having an image sensor may also be provided as the sensor unit 101, and image data obtained by imaging a subject may be supplied to the control unit 100. Furthermore, the sensor unit 101 can include sensors for measuring the surrounding environment, such as a temperature sensor that detects temperature, a humidity sensor that detects humidity, and an ambient light sensor that detects ambient brightness, as well as a biosensor that detects biological information such as respiration, pulse, fingerprints, and irises, and a sensor for detecting position information such as GPS (Global Positioning System) signals.
 The storage unit 102 includes, for example, a semiconductor memory including a nonvolatile memory or a volatile memory. The storage unit 102 stores various kinds of data under the control of the control unit 100.
 The display unit 103 includes a display device such as a liquid crystal display (LCD) or an organic EL display (OLED: Organic Light Emitting Diode). The display unit 103 displays video (or images) corresponding to the video data (or image data) supplied from the control unit 100.
 The audio output unit 104 includes an audio output device such as a speaker. The audio output unit 104 outputs sound corresponding to the audio data supplied from the control unit 100.
 The input terminal 105 includes, for example, an input interface circuit and is connected to an electronic device via a predetermined cable. For example, the input terminal 105 supplies the control unit 100 with video data, audio data, commands, and the like input from devices such as a game machine (dedicated console), a personal computer, and a playback device.
 The output terminal 106 includes, for example, an output interface circuit and is connected to an electronic device via a predetermined cable. For example, the output terminal 106 outputs the audio data supplied to it to devices such as earphones or headphones via the cable.
 The communication unit 107 is configured as a communication module that supports wireless communication such as Bluetooth (registered trademark), wireless LAN (Local Area Network), or cellular communication (for example, LTE-Advanced or 5G), or wired communication. The communication unit 107 communicates with external devices in accordance with a predetermined communication method and exchanges various kinds of data (for example, video data, audio data, and commands). The external devices here include, for example, a game machine (dedicated console), a personal computer, a server, a playback device, a dedicated controller, and a remote controller.
(Example of functional configuration of control unit)
 FIG. 3 is a block diagram illustrating an example of the functional configuration of the control unit 100 of FIG. 2.
 In FIG. 3, the control unit 100 includes a movement information acquisition unit 151, a direction information acquisition unit 152, a wraparound destination control unit 153, a viewpoint camera position/posture control unit 154, and a display control unit 155.
 The movement information acquisition unit 151 acquires movement information based on data supplied from the sensor unit 101, the input terminal 105, the communication unit 107, or the like, and supplies the movement information to the wraparound destination control unit 153.
 Here, the movement information is information for moving the user's viewpoint in the virtual space. That is, the movement information is obtained based on some input that determines the user's movement in the virtual space, and includes, for example, a detection result of a user operation (for example, a controller operation) or action (for example, walking or a gesture), or of the dwell time of the user's line of sight (for example, the gaze time on a predetermined region). The movement information may also include information indicating that the user is stationary.
 The direction information acquisition unit 152 acquires direction information based on data supplied from the sensor unit 101, the input terminal 105, the communication unit 107, or the like, and supplies the direction information to the wraparound destination control unit 153.
 Here, the direction information is information corresponding to the direction of the user's line of sight. That is, the direction information is obtained based on some input that determines the user's line of sight, and includes, for example, a detection result of the user's line of sight or face direction, or of a user operation (for example, operation of a pointing device, including a dedicated controller or a remote controller) or action (for example, a gesture such as pointing).
 The wraparound destination control unit 153 controls the wraparound destination of the user's viewpoint (position) with respect to an obstacle object based on at least one of the movement information from the movement information acquisition unit 151 and the direction information from the direction information acquisition unit 152.
 The viewpoint camera position/posture control unit 154 controls the position and posture of the viewpoint camera in accordance with the control of the wraparound destination by the wraparound destination control unit 153.
 The display control unit 155 controls the display of video (or images) on the display unit 103 in accordance with the control of the position and posture of the viewpoint camera by the viewpoint camera position/posture control unit 154.
 In FIG. 3, the control unit 100 is configured as one or a plurality of CPUs, and at least one CPU includes the functions of the movement information acquisition unit 151, the direction information acquisition unit 152, the wraparound destination control unit 153, the viewpoint camera position/posture control unit 154, and the display control unit 155. These functions may of course be realized within a single device, or may be realized by a plurality of devices (and their CPUs) operating in cooperation rather than being closed within a single device.
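To make the data flow between these functional blocks concrete, here is a minimal, hypothetical Python sketch (all class, function, and key names are illustrative assumptions, not names used by the embodiment): movement and direction information are acquired, the wraparound destination is decided, and the viewpoint camera pose is updated for display.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    x: float
    z: float
    yaw: float  # direction of the line of sight in the XZ plane (radians)

class WraparoundController:
    """Rough counterpart of the wraparound destination control unit 153."""
    def decide(self, movement_info, direction_info, current_pose):
        # A real implementation would run the edge detection and the
        # single-edge / no-edge / multi-edge determination processes here.
        if not movement_info.get("move_requested", False):
            return current_pose
        return CameraPose(direction_info["x"], direction_info["z"], direction_info["yaw"])

def update_frame(movement_info, direction_info, controller, camera_pose):
    """One update: acquisition (151, 152) -> wraparound control (153) -> camera pose (154)."""
    new_pose = controller.decide(movement_info, direction_info, camera_pose)
    # The display control unit (155) would render the image seen from new_pose here.
    return new_pose

# Example frame: the user requests a move while gazing toward (x, z) = (0, 3).
pose = update_frame({"move_requested": True},
                    {"x": 0.0, "z": 3.0, "yaw": 0.0},
                    WraparoundController(),
                    CameraPose(0.0, 0.0, 0.0))
```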
 In a device (system) that moves the user's viewpoint within a virtual space, there are situations in which the user wants to go around to the other side of an obstacle object such as a wall. However, if no means is provided for moving directly from the user's current viewpoint (position) to a position that wraps around the obstacle object, the user must first move to the vicinity of the edge of the obstacle object and then perform additional operations to move and change direction.
 Here, in the virtual space, when the user tries to go from the current location to the other side of an obstacle object, it is considered that the user often naturally looks at the vicinity of the edge of the obstacle object. Based on this idea, in the present technology, for example, the user's line of sight is detected, and when the user is looking at (gazing at) the edge of an obstacle object, the movement and rotation are adjusted so that the position wrapping around the obstacle object is easy to see.
 Specifically, as shown in FIG. 4, assume that the user 20 wearing the head mounted display 1 gazes at an edge of the obstacle object 30 in the virtual space, and that edge becomes the gaze area 200.
 Hereinafter, the area including the gazing point at which the user 20 is gazing is referred to as the gaze area 200. In other words, the gaze area 200 can also be said to be a peripheral region of the obstacle object 30 at the position of the line of sight L S of the user 20. Although the case where the peripheral region includes an edge of the obstacle object 30 is illustrated here, the peripheral region may not include an edge.
 At this time, since the edge of the obstacle object 30 has become the gaze area 200, the position (viewpoint) of the user 20 is moved (instantaneously moved) to a position that wraps around the obstacle object 30, as shown in FIG. 5. Here, the adjustment can be made so that the direction of the line of sight of the user 20 becomes parallel to the surface folded in the depth direction beyond the edge (A in FIG. 5).
 The adjustment may also be made so that the direction of the line of sight of the user 20 becomes parallel to the surface further beyond (the surface on the back side in B of FIG. 5) (B in FIG. 5). In particular, when the obstacle object 30 is like a thin plate as shown in B of FIG. 5, there is a great advantage in having the position (viewpoint) of the user 20 wrap around to the back surface of the obstacle object 30. The position (viewpoint) of the user 20 can be adjusted to be, for example, a position separated from the parallel plane by a predetermined distance.
 Thus, in the present technology, when the position of the line of sight L S of the user 20 is in a peripheral region (for example, an edge) of the obstacle object 30 arranged in the virtual space, an image visually recognized by the user 20 as a result of moving the position (viewpoint) of the user 20 in the direction of the line of sight L S and changing the direction of the line of sight is presented to the user 20.
 For example, by moving the position (viewpoint) of the user 20 to a position in the vicinity of the side surface (A in FIG. 5) or the back surface (B in FIG. 5) of the obstacle object 30 relative to the user-side surface, the position (viewpoint) of the user 20 is moved to a position where further movement of the viewpoint of the user 20 can be reduced, so that movement through the virtual space can be performed more smoothly.
 Here, in FIG. 5, the position (viewpoint) of the user 20 after movement when the conventional method is used is indicated by a dotted line. In the conventional method, the viewpoint of the user 20 moves to a position near the edge of the obstacle object 30, so that after moving once to the vicinity of the edge, the user 20 must further perform operations to move and change direction. In the present technology, on the other hand, the position (viewpoint) of the user 20 is moved to a position where the movement of the viewpoint of the user 20 can be reduced, so that, compared with the conventional method, the movement operations for going around the obstacle object 30 can be reduced and movement within the virtual space can be performed more smoothly.
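As a rough illustration of this adjustment, the hypothetical Python sketch below (the function name wraparound_pose and its parameters are assumptions, not part of the embodiment) places the new viewpoint a predetermined distance away from the surface beyond the gazed-at edge and turns the line of sight so that it runs parallel to that surface, as in A of FIG. 5.

```python
import math

def wraparound_pose(edge_x, edge_z, surface_dir, surface_normal, offset=1.0):
    """Viewpoint and line-of-sight direction after wrapping around an edge.

    edge_x, edge_z : position of the gazed-at edge in the XZ plane
    surface_dir    : unit vector along the surface folded beyond the edge
    surface_normal : unit vector pointing away from that surface, toward the
                     side on which the user's viewpoint should be placed
    offset         : predetermined distance kept from the surface
    """
    # Place the viewpoint a predetermined distance away from the surface.
    x = edge_x + surface_normal[0] * offset
    z = edge_z + surface_normal[1] * offset
    # Turn the line of sight so that it is parallel to the surface.
    yaw = math.atan2(surface_dir[0], surface_dir[1])
    return (x, z), yaw

# Example: an edge at (0, 3) whose far surface runs in the +Z direction,
# with the viewpoint placed on the -X side of that surface.
position, yaw = wraparound_pose(0.0, 3.0, surface_dir=(0.0, 1.0),
                                surface_normal=(-1.0, 0.0))
```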
 In the following description, the "position of the user 20" includes the meaning of the "viewpoint of the user 20" in the virtual space. The virtual space includes not only a virtual space imitating reality constructed on a computer but also a space that is effectively a real space.
 When the position of the line of sight L S of the user 20 is on another object (for example, an effect object) that emphasizes a specific position in the virtual space, an image visually recognized by the user 20 as a result of moving the position (viewpoint) of the user 20 in the direction of the line of sight L S while maintaining the direction of the line of sight can be presented to the user 20. Here, the effect object is not an obstacle object 30 such as the wall described above, but is, for example, an object floating in space. This effect object is presented, for example, when the position of the line of sight L S of the user 20 is not in a peripheral region (for example, an edge) of an obstacle object 30 arranged in the virtual space.
(HMD processing flow)
 Next, the flow of processing executed by the head mounted display 1 (its control unit 100) will be described with reference to the flowchart of FIG. 6.
 In step S11, the movement information acquisition unit 151 acquires movement information. In step S12, the direction information acquisition unit 152 acquires direction information.
 Here, in acquiring the direction information, the direction information is generated based on sensor data from the sensor unit 101, for example, a sensor for detecting the line of sight of the user 20, pointing by a dedicated controller, or various sensors for detecting gestures of the user 20. As a method of detecting the line of sight of the user 20, for example, gaze estimation based on capturing an image of the eyeball and acquiring image data of a Purkinje image, or a method of estimating the orientation of the device as the direction of the line of sight, can be used.
 In acquiring the movement information and the direction information, instead of detecting the line of sight of the user 20, information corresponding to the direction of the line of sight may be acquired, for example, by detecting the orientation of the head of the user 20, pointing, or the operation of a pointing device.
 In step S13, the wraparound destination control unit 153 determines whether there is a movement operation, based on the acquired movement information.
 If it is determined in step S13 that there is no movement operation, the process returns to step S11, and the processes of steps S11 to S13 are repeated. If it is determined in step S13 that there is a movement operation, the process proceeds to step S14.
 In step S14, the wraparound destination control unit 153 determines, based on the acquired direction information, whether an end (edge) of the obstacle object 30 exists in the vicinity of the gazing point of the user 20.
 If it is determined in the determination process of step S14 that only one edge exists in the vicinity of the gazing point, the process proceeds to step S15.
 In step S15, the wraparound destination control unit 153 executes the single-edge wraparound destination determination process.
 In the single-edge wraparound destination determination process, a process for determining the wraparound destination of the position (viewpoint) of the user 20 with respect to the obstacle object 30 when one edge exists in the vicinity of the gazing point is executed. Details of the single-edge wraparound destination determination process will be described later with reference to FIGS. 7 to 27.
 If it is determined in the determination process of step S14 that no edge exists in the vicinity of the gazing point, the process proceeds to step S16.
 In step S16, the wraparound destination control unit 153 executes the wraparound destination determination process from the obstacle upper end point.
 In the wraparound destination determination process from the obstacle upper end point, a process for determining the wraparound destination of the position (viewpoint) of the user 20 with respect to the obstacle object 30 when no edge exists in the vicinity of the gazing point is executed. Details of this process will be described later with reference to FIGS. 28 to 31.
 Further, if it is determined in the determination process of step S14 that a plurality of edges exist in the vicinity of the gazing point, the process proceeds to step S17.
 In step S17, the wraparound destination control unit 153 executes the multiple-edge wraparound destination determination process.
 In the multiple-edge wraparound destination determination process, a process for determining the wraparound destination of the position (viewpoint) of the user 20 with respect to the obstacle object 30 when a plurality of edges exist in the vicinity of the gazing point is executed. Details of this process will be described later with reference to FIGS. 32 to 38.
 When any of the processes in steps S15 to S17 ends, the process proceeds to step S18.
 In step S18, the viewpoint camera position/posture control unit 154 moves the position (viewpoint) of the user 20 to the wraparound destination position and changes its direction, based on the result of the wraparound destination determination process by the wraparound destination control unit 153 (the result of any one of steps S15 to S17).
 In step S19, the display control unit 155 displays the image of the wraparound destination on the display unit 103 based on the result of the process in step S18. That is, when the position of the line of sight L S of the user 20 is in a peripheral region (for example, an edge) of the obstacle object 30 arranged in the virtual space, the display control unit 155 presents to the user 20 an image visually recognized by the user 20 as a result of moving the position (viewpoint) of the user 20 in the direction of the line of sight L S based on at least one of the movement information and the direction information and changing the direction of the line of sight.
 When the process in step S19 ends, the process returns to step S11, and the subsequent processes are repeated.
 The flow of processing executed by the head mounted display 1 has been described above.
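The dispatch in steps S13 to S17 can be summarized in a short, hypothetical Python sketch (all function names are illustrative assumptions, and the destination processes are reduced to stubs); which wraparound destination determination process runs depends on how many obstacle edges lie near the gazing point.

```python
def single_edge_destination(edge, direction_info):
    # Stub: the actual process (S15) decides position and orientation from the single edge.
    return {"position": edge, "source": "single-edge"}

def destination_from_obstacle_top(direction_info):
    # Stub: the actual process (S16) wraps around from the obstacle's upper end point.
    return {"position": None, "source": "obstacle-top"}

def multi_edge_destination(edges, direction_info):
    # Stub: the actual process (S17) picks a candidate, deforms objects, or uses the centroid.
    return {"position": edges[0], "source": "multi-edge"}

def decide_wraparound(movement_info, direction_info, edges_near_gaze):
    """Dispatch corresponding to steps S13 to S17 of FIG. 6."""
    if not movement_info.get("move_requested", False):   # S13: no movement operation
        return None
    if len(edges_near_gaze) == 1:                        # S14: one edge near the gazing point
        return single_edge_destination(edges_near_gaze[0], direction_info)   # S15
    if len(edges_near_gaze) == 0:                        # S14: no edge near the gazing point
        return destination_from_obstacle_top(direction_info)                 # S16
    return multi_edge_destination(edges_near_gaze, direction_info)           # S17

# Example: one edge near the gazing point leads to the single-edge process.
result = decide_wraparound({"move_requested": True}, {"gaze": (0.0, 3.0)}, [(0.0, 3.0)])
```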
(Single edge wraparound destination determination process)
 Next, details of the single-edge wraparound destination determination process corresponding to the process of step S15 of FIG. 6 will be described with reference to FIGS. 7 to 26. Here, seven examples of the single-edge wraparound destination determination process, a first example to a seventh example, are shown.
(First example)
 First, as a first example of the single-edge wraparound destination determination process, a method of determining the position and direction of the movement destination according to the quadrant that contains the current position of the user 20 will be described with reference to FIGS. 7 to 13.
 図7は、単一エッジ用回り込み先決定処理の第1の例として、現在のユーザ位置を含む象限に応じて移動先を決定する場合の処理の流れを説明するフローチャートである。 FIG. 7 is a flowchart for explaining the flow of processing when determining the movement destination according to the quadrant including the current user position as a first example of the single edge wraparound destination determination processing.
 ステップS101において、回り込み先制御部153は、エッジ注視検出用仮想オブジェクトを生成する。 In step S101, the wraparound destination control unit 153 generates an edge gaze detection virtual object.
 このエッジ注視検出用仮想オブジェクトは、ユーザ20の視線が、壁等の障害物オブジェクト30の端(エッジ)の方向を向いているときに、ユーザ20により注視されている領域を検出するために生成される仮想的なオブジェクトである。 This edge gaze detection virtual object is generated to detect the area being watched by the user 20 when the line of sight of the user 20 faces the end (edge) of the obstacle object 30 such as a wall. Virtual object.
 In step S102, the wraparound destination control unit 153 identifies which quadrant on the XZ plane of the generated edge gaze detection virtual object (the first through fourth quadrants on the XZ plane) contains the current position of the user 20.
 In step S103, the wraparound destination control unit 153 determines the position and direction of the movement destination of the user 20 according to the identified quadrant containing the current position of the user 20 (one of the first through fourth quadrants on the XZ plane). The movement destination position (viewpoint) determined in this way is taken as the position that wraps around the obstacle object 30 such as a wall (the wraparound destination position).
 When the processing in step S103 ends, the processing returns to step S15 in FIG. 6, and the subsequent processing is executed.
 The above describes the flow of processing for determining the movement destination according to the quadrant containing the current user position. Next, a concrete example of the first example will be described with reference to FIGS. 8 and 9.
 FIG. 8 is a diagram showing the relationship between the position of the user 20 before movement and the position after movement, with the edge of an obstacle object 30 such as a wall as the reference.
 In FIG. 8, for the case where there is an obstacle object 30 near the user 20 and the line of sight of the user 20 faces the edge of the obstacle object 30, the edge gaze detection virtual object 300 is virtually generated, and the situation is represented on the XZ plane of the 3D coordinate system (left-handed coordinate system) containing this object (a view looking straight down on the space in which the obstacle object 30 is placed).
 Here, for example, of the four XZ planes shown in FIG. 9, in the upper-right XZ plane the position of the user 20 is in the first quadrant, so the movement destination position PM is on the negative X axis and its direction DM is diagonally down and to the left. In the upper-left XZ plane, the position of the user 20 is in the second quadrant, so the movement destination position PM is on the positive X axis and its direction DM is diagonally down and to the right.
 In the lower-left XZ plane, the position of the user 20 is in the third quadrant, so the movement destination position PM is on the positive Z axis and its direction DM is diagonally up and to the right. In the lower-right XZ plane, the position of the user 20 is in the fourth quadrant, so the movement destination position PM is on the positive Z axis and its direction DM is diagonally up and to the left.
 In this way, the movement destination position PM and its direction DM are determined according to which quadrant of the XZ plane the user 20 is located in. For example, in the case of the standing position of the user 20 shown in FIG. 8, the user is in the third quadrant on the XZ plane of the edge gaze detection virtual object 300 (S102 in FIG. 7), so the position PM of the user 20 after movement (the wraparound destination position) is determined to be on the positive Z axis and its direction DM to be diagonally up and to the right (S103 in FIG. 7). FIGS. 10 to 13 show examples in which actual video is displayed using this wraparound destination determination method.
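 To make the quadrant-based rule concrete, the following is a minimal sketch in Python of steps S102 and S103, assuming that the user position is expressed in the local XZ coordinates of the edge gaze detection virtual object 300 (left-handed coordinate system viewed from above); the function names and the fixed offset distance are illustrative assumptions and not part of the original description.

```python
import math

# Minimal sketch: map the user's quadrant on the edge gaze detection virtual
# object's local XZ plane to a wraparound destination position P_M and a
# facing direction D_M. The fixed 'offset' distance is an assumed parameter.

def quadrant(x: float, z: float) -> int:
    """Return the quadrant (1 to 4) of a point on the XZ plane."""
    if x >= 0 and z >= 0:
        return 1
    if x < 0 and z >= 0:
        return 2
    if x < 0 and z < 0:
        return 3
    return 4

def wraparound_destination(user_x: float, user_z: float, offset: float = 1.0):
    """Return (position, direction) following the quadrant rule in the text:
    quadrant 1 -> negative X axis, quadrant 2 -> positive X axis,
    quadrants 3 and 4 -> positive Z axis; directions are the corresponding
    diagonal unit vectors (e.g. diagonally down and to the left)."""
    d = 1.0 / math.sqrt(2.0)  # component of a 45-degree unit vector
    q = quadrant(user_x, user_z)
    if q == 1:
        return (-offset, 0.0), (-d, -d)  # on the negative X axis, facing down-left
    if q == 2:
        return (offset, 0.0), (d, -d)    # on the positive X axis, facing down-right
    if q == 3:
        return (0.0, offset), (d, d)     # on the positive Z axis, facing up-right
    return (0.0, offset), (-d, d)        # on the positive Z axis, facing up-left

# Example: a user standing in the third quadrant, as in FIG. 8.
print(wraparound_destination(-2.0, -3.0))
```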
(Example of displayed video)
 FIGS. 10 to 13 show CG video of a virtual model room as an example of the video displayed on the display unit 103 of the head mounted display 1 worn on the head of the user 20. In this virtual model room, the user can walk around freely in a virtual space created with CG (Computer Graphics) and experience it as if actually being there. Note that in FIGS. 10 to 13, the walls partitioning the rooms of the model room correspond to the obstacle object 30.
 FIG. 10 is a diagram showing a first example of the CG video of the model room.
 A of FIG. 10 shows a state in which the user 20, who is in the kitchen, is gazing at the edge of the wall that partitions the kitchen from the hallway.
 When the edge of the wall partitioning the kitchen and the hallway becomes the gaze area 200 in this way, as shown in B of FIG. 10, the position of the user 20 is moved to a position that wraps around the wall, and video is displayed in which the direction of the line of sight is directed down the hallway beyond the wall. That is, at the point when the user 20 directs the gaze area 200 at the edge of the wall, the user is automatically moved (instantaneously moved) from the kitchen to the position that wraps around the wall.
 FIG. 11 is a diagram showing a second example of the CG video of the model room. Note that the hallway shown in FIG. 11 is the same as the hallway shown in FIG. 10.
 A of FIG. 11 shows a state in which the user 20, who is in the hallway, gazes at the edge of the wall partitioning the hallway from the bathroom, and that edge becomes the gaze area 200. At this time, as shown in B of FIG. 11, the position of the user 20 is moved to a position that wraps around the wall, video is displayed in which the direction of the line of sight is directed toward the bathroom beyond the wall, and the user is instantaneously moved from the hallway to the position that wraps around the wall.
 FIG. 12 is a diagram showing a third example of the CG video of the model room.
 A of FIG. 12 shows a state in which the user 20, who is in the kitchen, gazes at the edge of the wall partitioning the hallway from the adjacent room, and that edge becomes the gaze area 200. At this time, as shown in B of FIG. 12, the position of the user 20 is moved to a position that wraps around the wall, video is displayed in which the direction of the line of sight is directed toward the adjacent room beyond the wall, and the user is instantaneously moved from the kitchen to the position that wraps around the wall.
 FIG. 13 is a diagram showing a fourth example of the CG video of the model room. Note that the adjacent room shown in FIG. 13 is the same as the adjacent room shown in FIG. 12.
 A of FIG. 13 shows a state in which the user 20, who is in the adjacent room, gazes at the edge of the wall partitioning the adjacent room from the bedroom, and that edge becomes the gaze area 200. At this time, as shown in B of FIG. 13, the position of the user 20 is moved to a position that wraps around the wall, video is displayed in which the direction of the line of sight is directed toward the bedroom beyond the wall, and the user is instantaneously moved from the adjacent room to the position that wraps around the wall.
 In this way, by using the method of determining the movement destination position and direction according to the quadrant containing the current position of the user 20, the user 20 is moved (instantaneously moved) to a position that wraps around an obstacle object 30 such as a wall while looking at (gazing at) its edge, so the virtual space can be traversed more smoothly in situations where the user wants to get around to the far side of the obstacle object 30.
 That is, with the conventional method, after moving once from the current position of the user 20 to a position near the obstacle object 30, the user 20 had to perform further movement and turning operations in order to move to a position that wraps around the obstacle object 30; with the method of the present technology, the user can move directly from the current position of the user 20 to the position that wraps around the obstacle object 30.
(Second example)
 Next, with reference to FIGS. 14 to 17, a case will be described in which, as a second example of the single-edge wraparound destination determination process, the movement destination position is determined by moving a virtually generated virtual viewpoint object along the obstacle object 30.
 FIG. 14 is a flowchart describing the flow of processing when, as the second example of the single-edge wraparound destination determination process, the movement destination position is determined by moving the virtual viewpoint object along the obstacle object.
 In step S121, the wraparound destination control unit 153 identifies the gaze point on the obstacle object 30 according to the line of sight of the user 20. Note that here the gaze point on the obstacle object 30 includes portions that are not edges.
 In step S122, the wraparound destination control unit 153 generates a virtual viewpoint object whose point of contact is the identified gaze point.
 This virtual viewpoint object is a virtual object generated in order to determine the movement destination position when the line of sight of the user 20 is not directed at an edge of an obstacle object 30 such as a wall.
 In step S123, the wraparound destination control unit 153 calculates a vector along the obstacle object by applying a predetermined arithmetic expression to the normal from the identified gaze point. Here, for example, when the obstacle object 30 is a wall, an along-wall vector is obtained.
 In step S124, the wraparound destination control unit 153 moves the generated virtual viewpoint object while keeping it in contact with the obstacle object 30, based on the calculated vector. Here, for example, when the obstacle object 30 is a wall, the virtual viewpoint object is moved while in contact with the wall based on the along-wall vector, with the position in contact with the identified gaze point as the start point of the movement trajectory.
 In step S125, the wraparound destination control unit 153 determines whether the length of the movement trajectory of the virtual viewpoint object has reached a predetermined value.
 If it is determined in step S125 that the length of the movement trajectory has not reached the predetermined value, the processing returns to step S124, and the above-described processing is repeated. Then, when it is determined that the length of the movement trajectory has reached the predetermined value as a result of the virtual viewpoint object moving while in contact with the obstacle object 30, the processing proceeds to step S126.
 In step S126, the wraparound destination control unit 153 determines the movement point at which the length of the movement trajectory reached the predetermined value as the movement destination position of the user 20. The movement destination position (viewpoint) determined in this way is taken as the position that wraps around the obstacle object 30 such as a wall (the wraparound destination position).
 When the processing in step S126 ends, the processing returns to step S15 in FIG. 6, and the subsequent processing is executed.
 The above describes the flow of processing for determining the movement destination position by moving the virtual viewpoint object along the obstacle object. Next, a concrete example of the second example will be described with reference to FIGS. 15 to 17.
 FIG. 15 shows an example of a virtual viewpoint object 310 virtually generated with respect to an obstacle object 30 that is a wall.
 In FIG. 15, when the line of sight of the user 20 is directed toward the nearby obstacle object 30, the gaze point PG on the obstacle object 30 is identified based on the line-of-sight vector s (S121 in FIG. 14), and a virtual viewpoint object 310 whose point of contact is this gaze point PG is generated (S122 in FIG. 14).
 Also, in this example, since the obstacle object 30 is a wall, the along-wall vector w is calculated, for example, by performing the calculation shown in FIG. 16 (S123 in FIG. 14). That is, in FIG. 16, the start point of the along-wall vector w is positioned at the base (start point) of the travel vector f, and the normal from the gaze point PG on the surface of the obstacle object 30 is aligned with the tip (end point) of the travel vector f.
 At this time, when the length of the normal vector n is set to the length a, the along-wall vector w can be obtained by the following equation (1).
    w = f + a · n    ... (1)
 Here, the normal is normalized, and the coefficient a is the length obtained when the inverse vector of the travel vector f is projected onto the normal. That is, the coefficient a can be obtained by the following equation (2).
    a = Dot(-f, n)    ... (2)
 Note that in equation (2), "Dot" represents the inner product.
 From the above, from the relationship between equations (1) and (2), the along-wall vector w can be obtained by the following equation (3).
    w = f + Dot(-f, n) · n    ... (3)
 When the along-wall vector w has been obtained in this way, the virtual viewpoint object 310 is moved while in contact with the obstacle object 30 (wall) based on the along-wall vector w (S124 in FIG. 14).
 FIG. 17 shows an example of the movement of the virtual viewpoint object 310.
 In FIG. 17, with the position in contact with the gaze point PG as the start point PS of the movement trajectory, the virtual viewpoint object 310 is moved while in contact with the obstacle object 30 (wall), and when the length LM of the movement trajectory reaches a predetermined value ("YES" in S125 of FIG. 14), that movement point PM is determined as the movement destination position of the user 20 (S126 in FIG. 14).
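 A minimal sketch of equations (1) to (3) and of the trajectory-length loop of steps S124 to S126 is shown below, assuming an idealized flat wall, a unit-length wall normal, and illustrative values for the step size and the predetermined trajectory length; none of these numeric values are given in the original.

```python
import numpy as np

def along_wall_vector(f: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Equation (3): w = f + Dot(-f, n) * n, with n a unit wall normal.
    This removes the component of the travel vector f that points into the
    wall, leaving the component that slides along the wall surface."""
    return f + np.dot(-f, n) * n

def wraparound_point(gaze_point: np.ndarray, f: np.ndarray, n: np.ndarray,
                     trajectory_length: float = 3.0, step: float = 0.1) -> np.ndarray:
    """Move a virtual viewpoint from the gaze point along the wall (S124)
    until the accumulated trajectory reaches trajectory_length (S125), and
    return that point as the movement destination (S126)."""
    w = along_wall_vector(f, n)
    direction = w / np.linalg.norm(w)
    point = np.array(gaze_point, dtype=float)
    travelled = 0.0
    while travelled < trajectory_length:
        point = point + direction * step  # stays on an idealized flat wall
        travelled += step
    return point

# Example: the user walks in +Z toward a wall whose outward normal is -Z.
f = np.array([0.3, 0.0, 1.0])     # travel vector f
n = np.array([0.0, 0.0, -1.0])    # unit wall normal n
gaze = np.array([0.0, 1.5, 5.0])  # gaze point P_G on the wall
print(along_wall_vector(f, n))    # -> [0.3 0.  0. ]
print(wraparound_point(gaze, f, n))
```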
 In this way, by using the method of determining the movement destination position by moving the virtual viewpoint object 310 along the obstacle object 30, the user 20 is moved (instantaneously moved) to a position that wraps around the obstacle object 30 while looking at (gazing at) some part of the obstacle object 30 (including parts other than edges), so the virtual space can be traversed more smoothly in situations where the user wants to get around to the far side of the obstacle object 30.
(Third example)
 Next, with reference to FIGS. 18 to 21, a case will be described in which, as a third example of the single-edge wraparound destination determination process, the manner of wrapping around is changed according to the attribute of the space behind the obstacle object 30 when determining the movement destination position. Note that in FIGS. 18 to 21, the walls partitioning the rooms of the model room correspond to the obstacle object 30.
 FIG. 18 is a flowchart describing the flow of processing when, as the third example of the single-edge wraparound destination determination process, the manner of wrapping around is changed according to the attribute of the space behind the obstacle object.
 In step S141, the wraparound destination control unit 153 determines the preset space attribute of the space behind the wall. Note that here it is assumed that "room" and "corridor" in the model room are assigned as examples of the space attribute.
 If it is determined in step S141 that the space attribute is "room", the processing proceeds to step S142.
 In step S142, the wraparound destination control unit 153 adjusts the movement destination position and direction for the case where the space attribute is "room" to a position that wraps around to the side face of the wall (near the entrance of the space) and an orientation parallel to that side face.
 If it is determined in step S141 that the space attribute is "corridor", the processing proceeds to step S143.
 In step S143, the wraparound destination control unit 153 adjusts the movement destination position and direction for the case where the space attribute is "corridor" to a position that doubles back around to the rear side of the wall and an orientation parallel to the wall.
 When the processing in step S142 or S143 ends, the processing returns to step S15 in FIG. 6, and the subsequent processing is executed. Note that the movement destination position and direction adjusted in the processing of step S142 or S143 can be determined, for example, by the processing of the first example or the second example described above.
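 The branch in steps S141 to S143 amounts to selecting among pre-computed wraparound placements by space attribute. The sketch below illustrates only that selection logic in Python; the enum values and parameter names are assumptions, and the candidate positions and directions are assumed to have been computed beforehand, for example by the first or second example.

```python
from enum import Enum

class SpaceAttribute(Enum):
    ROOM = "room"
    CORRIDOR = "corridor"

def adjust_for_space_attribute(attr, default_placement, wall_side_placement,
                               behind_wall_placement):
    """Sketch of steps S141 to S143: pick the wraparound placement by the
    space attribute behind the wall. Each placement is assumed to be a
    (position, direction) pair computed beforehand; only the selection
    logic is shown here."""
    if attr is SpaceAttribute.ROOM:
        # S142: stop beside the wall near the entrance, facing parallel
        # to the wall's side face.
        return wall_side_placement
    if attr is SpaceAttribute.CORRIDOR:
        # S143: double back around to the rear side of the wall, facing
        # parallel to the wall.
        return behind_wall_placement
    # Other attributes (Japanese-style room, washroom, etc.) would get
    # their own rules; fall back to the unadjusted placement here.
    return default_placement

# Example usage with placeholder placements.
room_result = adjust_for_space_attribute(
    SpaceAttribute.ROOM,
    default_placement=((0, 0), (0, 1)),
    wall_side_placement=((1, 0), (0, 1)),
    behind_wall_placement=((2, 0), (1, 0)),
)
print(room_result)  # -> ((1, 0), (0, 1))
```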
 The above describes the flow of processing when the manner of wrapping around is changed according to the attribute of the space behind the obstacle object. Next, a concrete example of the third example will be described with reference to FIGS. 19 to 21.
 FIG. 19 shows an example of the spaces in the model room divided by attribute.
 In FIG. 19, the same first space attribute is assigned to the LDK and to rooms such as the two Western-style rooms. A second space attribute is assigned to the Japanese-style room, and a third space attribute is assigned to the washroom, the bathroom, and the toilet. Furthermore, a fourth space attribute is assigned to the corridor.
 For example, for rooms such as the LDK and the Western-style rooms to which the first space attribute is assigned, a position that wraps around to the side face of the wall (near the entrance of the space) and an orientation parallel to that side face are determined as the movement destination position and direction (S142 in FIG. 18).
 FIG. 20 shows a state in which the user 20 gazes at the edge of a wall partitioning a room, serving as the obstacle object 30, and that edge becomes the gaze area 200.
 In FIG. 20, since the wraparound destination is a room, based on the first space attribute "room", the position (viewpoint) of the user 20 is moved to a position that wraps around to the side face of the obstacle object 30 (wall), and the direction of the line of sight becomes parallel to the side face of the obstacle object 30 (wall) (arrow 320 in the figure). At this time, the head mounted display 1 displays video as if the user had instantaneously moved to the vicinity of the entrance of the room (beside the wall).
 Also, for example, for the corridor to which the fourth space attribute is assigned, a position that doubles back around to the rear side of the wall and an orientation parallel to the wall are determined as the movement destination position and direction (S143 in FIG. 18).
 FIG. 21 shows a state in which the user 20 gazes at the edge of a wall partitioning the corridor, serving as the obstacle object 30, and that edge becomes the gaze area 200.
 In FIG. 21, since the wraparound destination is the corridor, based on the fourth space attribute "corridor", the position (viewpoint) of the user 20 is moved to a position that doubles back around to the rear side of the obstacle object 30 (wall), and the direction of the line of sight becomes parallel to the obstacle object 30 (wall) (arrow 320 in the figure). At this time, the head mounted display 1 displays video as if the user had instantaneously moved to the corridor (a place that wraps around to the rear side of the wall).
 Note that although the space attributes of the room and the corridor are exemplified here, processing for determining the movement destination position and the direction of the line of sight is similarly executed for each space attribute, such as the Japanese-style room to which the second space attribute is assigned and the washroom, bathroom, and toilet to which the third space attribute is assigned. Also, for example, the space attribute can be set in advance for each room. Furthermore, although both the position (viewpoint) of the user 20 and the direction of the line of sight are adjusted here according to the space attribute, it is sufficient to adjust at least one of them.
 In this way, by using the method of changing the manner of wrapping around according to the attribute of the space behind the obstacle object 30, the user 20 is moved (instantaneously moved) to a position that wraps around the obstacle object 30 in accordance with the attribute of the target space while looking at (gazing at) some part of the obstacle object 30, so the virtual space can be traversed more appropriately for each target space in situations where the user wants to get around to the far side of the obstacle object 30.
(Fourth example)
 Next, with reference to FIGS. 22 to 24, a case will be described in which, as a fourth example of the single-edge wraparound destination determination process, the wraparound amount is adjusted according to parameters such as the amount of body tilt of the user 20 and the gaze time on the edge portion when determining the movement destination position.
 FIG. 22 is a flowchart describing the flow of processing when, as the fourth example of the single-edge wraparound destination determination process, the wraparound amount is adjusted according to the amount of tilt of the user's body.
 In step S161, the wraparound destination control unit 153 calculates the amount of body tilt of the user 20 who is gazing at the edge of the obstacle object 30, based on the sensor data detected by the sensor unit 101.
 Here, for example, the amount of body tilt of the user 20 can be detected based on sensor data detected by an acceleration sensor or a gyro sensor. Note that, as the amount of body tilt of the user 20, not only the amount of tilt of the entire body but also, for example, the amount of tilt of a part of the body such as the head may be used.
 In step S162, the wraparound destination control unit 153 compares the calculated amount of body tilt with a preset threshold to determine whether the amount of body tilt is greater than the threshold.
 If it is determined in step S162 that the amount of body tilt is greater than the threshold, the processing proceeds to step S163.
 In step S163, when determining the movement destination position and direction, the wraparound destination control unit 153 adjusts the wraparound amount according to the amount of body tilt of the user 20. Here, for example, the larger the amount of body tilt of the user 20, the larger the wraparound amount with respect to the obstacle object 30 can be made. Also, the movement destination position adjusted in the processing of step S163 can be determined, for example, by the processing of the first example or the second example described above.
 Note that if it is determined in step S162 that the amount of body tilt is smaller than the threshold, the processing of step S163 is skipped. That is, in this case, no adjustment according to the amount of body tilt is performed when determining the movement destination position. When the processing of step S163 ends or is skipped, the processing returns to step S15 in FIG. 6, and the subsequent processing is executed.
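 The following is a minimal sketch of the tilt-based adjustment of steps S161 to S163, assuming the body tilt is given in degrees (for example, estimated from acceleration/gyro sensor data); the threshold and gain values are illustrative assumptions.

```python
def adjusted_wraparound_amount(base_amount: float, body_tilt_deg: float,
                               tilt_threshold: float = 10.0,
                               gain: float = 0.05) -> float:
    """Sketch of steps S161 to S163: scale the wraparound amount by body tilt.
    body_tilt_deg is assumed to be a tilt angle in degrees estimated from
    acceleration/gyro sensor data; the threshold and gain are assumed values.
    Below the threshold the base amount is used unchanged (S163 is skipped)."""
    if body_tilt_deg <= tilt_threshold:  # "NO" in S162
        return base_amount
    # S163: the larger the tilt, the further the viewpoint wraps around.
    return base_amount * (1.0 + gain * (body_tilt_deg - tilt_threshold))

print(adjusted_wraparound_amount(1.0, 5.0))   # below threshold -> 1.0
print(adjusted_wraparound_amount(1.0, 30.0))  # tilted -> 2.0
```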
 The above describes the flow of processing when the wraparound amount is adjusted according to the tilt of the user's body. Next, a concrete example of the fourth example will be described with reference to FIG. 23.
 FIG. 23 shows an example of adjusting the wraparound amount according to the amount of body tilt of the user 20.
 A of FIG. 23 shows a state in which the user 20 gazes at the edge of the obstacle object 30 and that edge becomes the gaze area 200. At this time, the body (for example, the head) of the user 20 is not tilted (it is determined that the amount of body tilt is smaller than the threshold ("NO" in S162 of FIG. 22)), so the wraparound amount of the movement destination position is left unadjusted.
 On the other hand, in B of FIG. 23, the body of the user 20 is tilted when the edge of the obstacle object 30 becomes the gaze area 200 (it is determined that the amount of body tilt is greater than the threshold ("YES" in S162 of FIG. 22)), so the wraparound amount of the movement destination position is adjusted according to the amount of body tilt (S163 of FIG. 22).
 Here, the larger the amount of body tilt of the user 20, the larger the wraparound amount of the movement destination position is made, and video as if instantaneously moved to the rear side of the obstacle object 30 is displayed; as a result, video that matches the mindset of the user 20, who is tilting the body to peer around to the rear side of the obstacle object 30, is displayed.
 Also, in FIG. 23, an arrow 320 corresponding to the wraparound amount is illustrated. By presenting this arrow 320 (annotation display), the user 20 can predict in advance how far the viewpoint will wrap around before being instantaneously moved to the rear side of the obstacle object 30. In other words, this arrow 320 can be said to be specifying information for specifying at least one of the position (viewpoint) of the user and the direction of the line of sight before the position of the user 20 is moved. Note that the arrow 320 may be hidden.
 Also, the parameter for adjusting the wraparound amount is not limited to the amount of body tilt of the user 20; other parameters may be used, such as, for example, the gaze time of the user 20 on the edge portion of the obstacle object 30 or a gesture of the user 20. As an example of another parameter, the case of using the gaze time on the edge portion is described below.
 FIG. 24 is a flowchart describing the flow of processing when, as the fourth example of the single-edge wraparound destination determination process, the wraparound amount is adjusted according to the gaze time on the edge portion.
 In step S181, the wraparound destination control unit 153 calculates the gaze time of the user 20 on the edge portion of the obstacle object 30, based on the sensor data detected by the sensor unit 101.
 In step S182, the wraparound destination control unit 153 compares the calculated gaze time with a preset threshold to determine whether the gaze time is greater than the threshold.
 If it is determined in step S182 that the gaze time is greater than the threshold, the processing proceeds to step S183.
 In step S183, when determining the movement destination position and direction, the wraparound destination control unit 153 adjusts the wraparound amount according to the gaze time. Here, for example, the longer the gaze time on the edge portion of the obstacle object 30 when the move is made, the larger the wraparound amount with respect to the obstacle object 30 can be made. Also, the movement destination position adjusted in the processing of step S183 can be determined, for example, by the processing of the first example or the second example described above.
 Note that if it is determined in step S182 that the gaze time is smaller than the threshold, the processing of step S183 is skipped. That is, in this case, no adjustment according to the gaze time is performed when determining the movement destination position. When the processing of step S183 ends or is skipped, the processing returns to step S15 in FIG. 6, and the subsequent processing is executed.
 The above describes the flow of processing when the wraparound amount is adjusted according to the gaze time on the edge portion.
 Note that in the fourth example, the processing using the amount of body tilt of the user 20 and the processing using the gaze time on the edge portion of the obstacle object 30 as the parameter for adjusting the wraparound amount were described as separate processes, but processing that uses these parameters simultaneously may also be executed. For example, when executing such processing, the adjustment can be made so that the larger the amount of body tilt of the user 20 and the longer the gaze time on the edge portion when the move is made, the larger the wraparound amount with respect to the obstacle object 30 becomes.
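 As a sketch of using both parameters at the same time, as suggested above, the wraparound amount could be scaled by the body tilt and the edge gaze time together; all thresholds and gains below are assumed values, not taken from the original.

```python
def combined_wraparound_amount(base_amount: float, body_tilt_deg: float,
                               gaze_time_s: float,
                               tilt_threshold: float = 10.0,
                               time_threshold: float = 1.5,
                               tilt_gain: float = 0.05,
                               time_gain: float = 0.3) -> float:
    """Sketch of adjusting the wraparound amount by both parameters at once:
    the larger the body tilt and the longer the edge gaze time, the larger
    the wraparound amount. All thresholds and gains are assumed values."""
    amount = base_amount
    if body_tilt_deg > tilt_threshold:
        amount *= 1.0 + tilt_gain * (body_tilt_deg - tilt_threshold)
    if gaze_time_s > time_threshold:
        amount *= 1.0 + time_gain * (gaze_time_s - time_threshold)
    return amount

print(combined_wraparound_amount(1.0, 30.0, 3.5))  # both above threshold -> 3.2
```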
 In this way, by using the method of adjusting the wraparound amount according to parameters such as the amount of body tilt or gesture of the user 20 and the gaze time on the edge portion, the user 20 is moved (instantaneously moved) to a position that wraps around the obstacle object 30 by a wraparound amount corresponding to the parameters while looking at (gazing at) some part of the obstacle object 30, so the virtual space can be traversed in a way that matches the mindset of the user 20 in situations where the user wants to get around to the far side of the obstacle object 30.
(Fifth example)
 Next, with reference to FIG. 25, a case will be described in which, as a fifth example of the single-edge wraparound destination determination process, an object that stands in the way is removed as a preceding step before the movement destination position is determined.
 FIG. 25 shows an example in which, when the user 20 is gazing at the area around a door, the door is put into an open state. Note that A to C of FIG. 25 are arranged in chronological order in that sequence.
 A of FIG. 25 shows a state in which the user 20, who is in a corridor in the virtual space, is looking at a closed door on the right side. At this time, the right door at which the user 20 is looking is automatically put into an open state (B of FIG. 25). Here, for example, when the control unit 100 determines, based on the sensor data detected by the sensor unit 101, that the gaze time of the user 20 on the right door is greater than a threshold, that door is put into an open state.
 Thereafter, B of FIG. 25 shows a state in which the user 20 gazes at the vicinity of the entrance of the opened right door (near the edge of the wall), and that edge becomes the gaze area 200. At this time, as shown in C of FIG. 25, the position (viewpoint) of the user 20 is moved to a position that wraps around the wall (near the entrance of the opened right door), and the direction of the line of sight is directed into the destination room.
 In this way, by putting the door into an open state when the user 20 is gazing at the area around the door as a preceding step before the movement destination position is determined, the user 20 can move to the position that wraps around the wall simply by directing the line of sight LS, without performing an operation to open the door. Note that although a door is described here as an example of an object that stands in the way, objects other than doors may be removed as a preceding step before the movement destination position is determined.
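 A minimal sketch of this preceding step, assuming a gaze-time value per door and an assumed threshold value:

```python
def update_door_state(door_is_open: bool, door_gaze_time_s: float,
                      open_threshold_s: float = 1.0) -> bool:
    """Sketch of the preceding step in the fifth example: if the user has
    gazed at a closed door for longer than a threshold, put the door into
    the open state so that the wall edge behind it can then be gazed at
    and wrapped around. The threshold value is an assumption."""
    if not door_is_open and door_gaze_time_s > open_threshold_s:
        return True  # open the door automatically
    return door_is_open

print(update_door_state(False, 1.4))  # -> True (the door opens)
```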
(Sixth example)
 Next, with reference to FIG. 26, a case will be described in which, as a sixth example of the single-edge wraparound destination determination process, video from an overhead viewpoint is displayed when a specific area is being gazed at.
 FIG. 26 shows an example in which video from an overhead viewpoint is displayed when the user 20 is gazing at the vicinity of the boundary between a wall and the ceiling.
 A of FIG. 26 shows a state in which the user 20, who is in a room in the virtual space, gazes at the vicinity of the edge at the boundary between the ceiling and a wall, and that edge becomes the gaze area 200. At this time, as shown in B of FIG. 26, the viewpoint of the user 20 is switched to an overhead viewpoint, and video of the model room including the room in which the user 20 is currently located, viewed diagonally from above, is displayed.
 In this way, by displaying video from an overhead viewpoint when the vicinity of the boundary between a wall and the ceiling is being gazed at, the user 20 can grasp the overall picture of the model room at an arbitrary timing. Note that although gazing at the vicinity of the boundary between a wall and the ceiling is exemplified here as the condition for displaying the overhead viewpoint video, the overhead viewpoint video may also be displayed when some other specific area is gazed at. The model room is also only an example, and in other virtual spaces as well, video from an overhead viewpoint can be displayed when a specific area is gazed at.
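 As a rough sketch, the trigger condition could be expressed as a check of whether the gaze point lies near the ceiling height; the margin value and the use of height alone are simplifying assumptions.

```python
def should_switch_to_overhead(gaze_point_y: float, ceiling_height: float,
                              margin: float = 0.2) -> bool:
    """Sketch of the sixth example's trigger: treat a gaze point whose height
    is within a small margin of the ceiling (i.e. near the wall-ceiling
    boundary) as a request to switch to the overhead viewpoint.
    The margin and the use of height alone are simplifying assumptions."""
    return abs(gaze_point_y - ceiling_height) < margin

print(should_switch_to_overhead(2.35, 2.4))  # -> True, switch to the overhead view
```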
(Seventh example)
 Finally, with reference to FIG. 27, a case will be described in which, as a seventh example of the single-edge wraparound destination determination process, video of the wraparound destination with respect to the obstacle object 30 is displayed.
 FIG. 27 shows an example in which video of the destination wrapped around the obstacle object 30 is displayed.
 In FIG. 27, the user 20, who is in a passage in a certain virtual space, gazes at an edge portion of the obstacle object 30, and that edge becomes the gaze area 200. At this time, the wraparound destination position with respect to the obstacle object 30 is in a blind spot and therefore cannot be seen by the user 20, but video of the wraparound destination is displayed in the small screen display area 400.
 The user 20 can check this video of the wraparound destination and decide whether to move to the destination that wraps around the obstacle object 30. Then, when the user decides to move to the wraparound destination, the above-described processing is performed, and the position (viewpoint) of the user 20 is moved to the position that wraps around the obstacle object 30.
 The obstacle object 30 here can include, in addition to the walls partitioning the rooms of the model room, for example, the display shelves of retail stores such as department stores and supermarkets. In this case, for example, when the user 20 gazes at an edge portion of a display shelf, the small screen display area 400 pops up and video of the destination wrapped around the shelf is displayed.
 That is, in the head mounted display 1, when the position of the line of sight LS of the user 20 is in a peripheral region (for example, an edge) of an obstacle object 30 arranged in the virtual space, the image visually recognized by the user 20 as a result of moving the position (viewpoint) of the user 20 in the direction of the line of sight LS and changing the direction of the line of sight can be presented not only over the entire visual field region of the user 20 but also in only a part of the visual field region. In other words, when the edge of the obstacle object 30 becomes the gaze area 200, the movement destination position (viewpoint) and direction are determined, and the image to be recognized by the user 20 is presented, there are cases where the viewpoint of the user 20 is moved and cases where the viewpoint of the user 20 is not moved.
 In this way, by displaying video of the wraparound destination, the user 20 can decide whether to move to the destination that wraps around the obstacle object 30, so a more appropriate decision can be made in situations where the user wants to get around to the far side of the obstacle object 30.
 The details of the single-edge wraparound destination determination process have been described above. Note that the first to seventh examples of the single-edge wraparound destination determination process described above are only examples, and other processing for determining the wraparound destination position with respect to the obstacle object 30 in the case where one edge exists in the vicinity of the gaze point may be executed as the single-edge wraparound destination determination process.
(Wraparound destination determination process from an endpoint on the obstacle)
 Next, with reference to FIGS. 28 to 31, the details of the wraparound destination determination process from an endpoint on the obstacle, which corresponds to the processing of step S16 in FIG. 6, will be described.
 FIG. 28 is a flowchart describing the flow of the wraparound destination determination process from an endpoint on the obstacle.
 In step S201, the wraparound destination control unit 153 determines a wraparound point for an obstacle object that has no edge. Here, for example, the wraparound point can be determined using a ray casting method.
 In step S202, the wraparound destination control unit 153 determines the movement destination position (wraparound position) and direction according to the determined wraparound point.
 When the processing in step S202 ends, the processing returns to step S16 in FIG. 6, and the subsequent processing is executed.
 The above describes the flow of the wraparound destination determination process from an endpoint on the obstacle. Next, a concrete example of the method of wrapping around to an endpoint of an obstacle object that has no edge will be described with reference to FIGS. 29 to 31.
 FIG. 29 shows an example of a method of determining a wraparound point for an obstacle object 31 that has no edge.
 In FIG. 29, when the line of sight of the user 20 is directed toward the obstacle object 31 that has no edge, the point on the obstacle object 31 that is the end point of the line-of-sight vector s corresponding to the direction of that line of sight is taken as the gaze point PG.
 Here, the point to be used as the wraparound position with respect to the obstacle object 31 is determined using the ray casting method. The ray casting method is a technique in which rays are emitted (cast) from the viewpoint of the user 20 and the distance to the nearest object is measured.
 For example, a ray casting range RRC corresponding to a predetermined range in front of the face, such as the viewing angle of the user 20, is set, and a plurality of ray casts are performed within this ray casting range RRC. FIG. 29 illustrates five rays Ray (solid and dotted arrows); the outermost rays that hit the obstacle object 31 in the left and right directions (the rays Ray indicated by solid arrows) are taken as rays Ray-A and Ray-B, and the points they hit are taken as the endpoints EPA and EPB on the obstacle object 31, respectively.
 As methods of selecting one endpoint EP from the two endpoints EPA and EPB obtained in this way, there are, for example, the following two methods.
 That is, the first method searches along the surface of the obstacle object 31 in the direction of the along-wall vector w, similarly to FIGS. 15 to 17 described above, and selects whichever of the endpoints EPA and EPB is found first. For example, in FIG. 29, when the search is performed toward the right, the endpoint EPB by ray Ray-B is found first, so this endpoint EPB is selected.
 The second method selects the ray Ray whose angle with the line-of-sight vector s is smaller, of the angles θA and θB formed by ray Ray-A and ray Ray-B with the line-of-sight vector s, respectively. For example, in FIG. 29, θA > θB, so the endpoint EPB by ray Ray-B is selected.
 In this way, the endpoint EPB by ray Ray-B is determined as the wraparound point for the obstacle object 31 that has no edge (S201 in FIG. 28).
 FIG. 30 shows an example of a method of determining the movement destination position (viewpoint) as the wraparound position.
 Note that in FIG. 30, the endpoint EPB by ray Ray-B has been determined as the wraparound point by the method of determining the wraparound point described above, so the endpoint EPA by ray Ray-A is not used.
 That is, in FIG. 30, the endpoint EPB by ray Ray-B is used as the movement destination position of the user 20; here, a position separated by a predetermined distance in the normal direction from the object surface at the endpoint EPB, that is, the collision point between the obstacle object 31 and ray Ray-B, can be taken as the movement destination position PM (wraparound position) of the user 20. At this time, the direction of the line of sight of the user 20 after the movement may be the same as the direction of the line of sight before the movement, or may be made to face another direction, for example, the direction of ray Ray-B.
 In this way, a position separated by a predetermined distance from the determined wraparound point, the endpoint EPB, is determined as the movement destination position PM (wraparound position) of the user 20 (S202 in FIG. 28).
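 The following Python sketch combines the second endpoint selection method (smaller angle to the line-of-sight vector) with the normal-direction stand-off of FIG. 30, assuming the ray casting itself has already produced the candidate hit points with their surface normals; the data layout and the stand-off distance are illustrative assumptions.

```python
import numpy as np

def wraparound_from_endpoint(gaze_dir, hits, standoff: float = 0.5):
    """Sketch of FIGS. 29 and 30 for an obstacle object with no edge.
    'hits' is assumed to be a list of (hit_point, surface_normal, ray_dir)
    tuples for the outermost rays that hit the obstacle within the ray
    casting range (the candidates EP_A and EP_B). The endpoint whose ray
    makes the smaller angle with the line-of-sight vector is chosen (the
    second selection method), and the destination P_M is offset from the
    surface along its normal by a predetermined stand-off distance."""
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)

    def angle_to_gaze(ray_dir):
        d = np.asarray(ray_dir, dtype=float)
        d = d / np.linalg.norm(d)
        return np.arccos(np.clip(np.dot(d, gaze), -1.0, 1.0))

    hit_point, normal, _ = min(hits, key=lambda h: angle_to_gaze(h[2]))
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    return np.asarray(hit_point, dtype=float) + standoff * normal  # P_M

# Example with two hypothetical endpoints EP_A and EP_B on a wall facing -Z.
hits = [
    ((-2.0, 0.0, 4.0), (0.0, 0.0, -1.0), (-2.0, 0.0, 4.0)),  # EP_A
    ((1.0, 0.0, 4.0), (0.0, 0.0, -1.0), (1.0, 0.0, 4.0)),    # EP_B
]
print(wraparound_from_endpoint((0.0, 0.0, 1.0), hits))  # near EP_B, just off the surface
```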
 In this way, by using the method of moving the position (viewpoint) of the user to a position corresponding to the wraparound point for the obstacle object 31 that has no edge, the user 20 is moved (instantaneously moved) to a position that wraps around the obstacle object 31 while looking at (gazing at) some part of the obstacle object 31, so the virtual space can be traversed more smoothly in situations where the user wants to get around to the far side of the obstacle object 31.
 Note that in FIG. 30, the position of the user 20 after movement in the case of using the conventional method is indicated by a dotted line. With the conventional method, the user only moves toward the position of the gaze point PG, so after moving once from the current position of the user 20 to the vicinity of the obstacle object 31, the user needs to move further to a position that wraps around the obstacle object 31. In contrast, with the method of the present technology, the user can move directly from the current position of the user 20 to the position that wraps around the obstacle object 31 (movement destination position PM).
 Also, as shown in FIG. 31, for a group of detailed objects whose arrangement the viewpoint of the user 20 cannot enter because the individual obstacle objects (for example, trees crowded together in a forest) are too fine, objects that are close to one another can be handled by clustering as a single large obstacle object. For example, in FIG. 31, collections of a plurality of trees (obstacle objects) are handled as single obstacle objects 32-1 and 32-2. In this case as well, the above-described method can be applied by treating the large clustered obstacle objects 32-1 and 32-2 as obstacle objects 32-1 and 32-2 that have no edge.
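 A minimal sketch of such clustering, using a simple greedy grouping of 2D obstacle positions by a link distance; the distance threshold is an assumed parameter and the grouping strategy is only one possible choice.

```python
import numpy as np

def cluster_fine_obstacles(positions, link_distance: float = 1.0):
    """Sketch of treating closely spaced fine obstacles (e.g. trees in a
    forest) as single large obstacle objects: a greedy grouping that puts a
    point into the first existing cluster within the link distance.
    The link distance is an assumed parameter."""
    points = [np.asarray(p, dtype=float) for p in positions]
    clusters = []
    for point in points:
        placed = False
        for cluster in clusters:
            if any(np.linalg.norm(point - q) <= link_distance for q in cluster):
                cluster.append(point)
                placed = True
                break
        if not placed:
            clusters.append([point])
    return clusters

trees = [(0.0, 0.0), (0.5, 0.2), (0.9, 0.1), (5.0, 5.0), (5.4, 5.2)]
print(len(cluster_fine_obstacles(trees)))  # -> 2 clustered obstacle objects
```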
 The details of the wraparound destination determination process from an endpoint on the obstacle have been described above. Note that the process described above is only an example, and other processing for determining the wraparound destination position with respect to the obstacle object 31 in the case where no edge exists in the vicinity of the gaze point may be executed as the wraparound destination determination process from an endpoint on the obstacle.
(Multiple-edge wraparound destination determination process)
 Next, with reference to FIGS. 32 to 38, the details of the multiple-edge wraparound destination determination process corresponding to the processing of step S17 in FIG. 6 will be described. Here, three processing examples, a first example to a third example, are shown as the multiple-edge wraparound destination determination process.
(First example)
 First, with reference to FIGS. 32 and 33, a case will be described in which, as a first example of the multiple-edge wraparound destination determination process, wraparound destination candidates are displayed.
 図32は、複数エッジ用回り込み先決定処理の第1の例として、回り込み先の候補を表示する場合の処理の流れを説明するフローチャートである。 FIG. 32 is a flowchart for explaining the flow of processing when a wraparound destination candidate is displayed as a first example of the wraparound destination determination process for multiple edges.
 ステップS301において、回り込み先制御部153は、障害物オブジェクト30の各エッジの注視点からの距離を算出する。 In step S301, the wraparound destination control unit 153 calculates the distance from the gazing point of each edge of the obstacle object 30.
 ステップS302において、回り込み先制御部153は、注視点からの距離を算出したエッジのうち、1つのエッジを対象のエッジとして選択する。 In step S <b> 302, the wraparound destination control unit 153 selects one edge as the target edge among the edges for which the distance from the gazing point is calculated.
 ステップS303において、回り込み先制御部153は、算出した各エッジの注視点からの距離に基づいて、選択した対象のエッジが、注視点に最も近いエッジであるかどうかを判定する。 In step S303, the wraparound destination control unit 153 determines whether the selected target edge is the edge closest to the gazing point based on the calculated distance from the gazing point of each edge.
 ステップS303において、注視点に最も近いエッジであると判定された場合、処理は、ステップS304に進められる。ステップS304において、回り込み先制御部153は、対象のエッジの回り込み先を示す矢印を、強調表示(例えばハイライト表示)する。 If it is determined in step S303 that the edge is closest to the gazing point, the process proceeds to step S304. In step S304, the wraparound destination control unit 153 highlights (for example, highlights) an arrow indicating the wraparound destination of the target edge.
 一方で、ステップS303において、注視点に最も近いエッジではないと判定された場合、処理は、ステップS305に進められる。ステップS305において、回り込み先制御部153は、対象のエッジの回り込み先を示す矢印を、非強調表示(例えば薄く表示)する。 On the other hand, if it is determined in step S303 that the edge is not closest to the gazing point, the process proceeds to step S305. In step S305, the wraparound destination control unit 153 non-highlights (for example, displays lightly) an arrow indicating the wraparound destination of the target edge.
 ステップS304又はS305の処理が終了すると、処理は、ステップS306に進められる。ステップS306において、回り込み先制御部153は、全てのエッジを選択したかどうかを判定する。 When the process of step S304 or S305 is completed, the process proceeds to step S306. In step S306, the wraparound destination control unit 153 determines whether all edges have been selected.
 ステップS306において、全てのエッジを選択していないと判定された場合、処理は、ステップS302に戻され、ステップS302乃至S306の処理が繰り返される。すなわち、対象のエッジとして未選択のエッジ(他のエッジ)が選択され(S302)、選択された他のエッジが注視点に最も近い場合にはその矢印が強調表示され(S304)、それ以外の場合には矢印が非強調表示される(S305)。 If it is determined in step S306 that not all edges have been selected, the process returns to step S302, and the processes of steps S302 to S306 are repeated. That is, an unselected edge (other edge) is selected as the target edge (S302), and when the selected other edge is closest to the gazing point, the arrow is highlighted (S304). In this case, the arrow is not highlighted (S305).
 なお、ここでは、対象のエッジの回り込み先を示す矢印が、ハイライト表示される場合を例示したが、強調して表示されるような表示形態であれば、他の表示形態を用いることができる。同様に、薄く表示するのは、非強調表示の表示形態の一例であって、他の表示形態を用いるようにしてもよい。 Here, the case where the arrow indicating the wraparound destination of the target edge is highlighted is shown here, but any other display form can be used as long as the display form is highlighted. . Similarly, thin display is an example of a non-highlighted display form, and other display forms may be used.
 そして、ステップS306において、全てのエッジを選択したと判定された場合、処理は、ステップS307に進められる。ステップS307において、回り込み先制御部153は、強調表示された矢印により示される回り込み先の候補が、ユーザの意図に合っているかどうかを判定する。 If it is determined in step S306 that all edges have been selected, the process proceeds to step S307. In step S307, the wraparound destination control unit 153 determines whether or not the wraparound destination candidate indicated by the highlighted arrow matches the user's intention.
 ここでは、ある1つのエッジの回り込みを示す矢印が強調表示(例えばハイライト表示)され、当該エッジを除く他のエッジの回り込みを示す矢印が非強調表示(例えば薄く表示)されている場合に、この強調表示された矢印により示される回り込み先の候補が、ユーザ20の意図に合った回り込み先であるかどうかが判定される。 Here, when the arrow indicating the wraparound of one edge is emphasized (for example, highlighted) and the arrows indicating the wraparound of the other edges are de-emphasized (for example, dimmed), it is determined whether the wraparound destination candidate indicated by the emphasized arrow is the wraparound destination that matches the intention of the user 20.
 ステップS307において、ユーザ20の意図に合った回り込み先の候補であると判定された場合、処理は、ステップS308に進められる。ステップS308において、回り込み先制御部153は、回り込み先の候補に応じた位置を、移動先の位置として決定する。 If it is determined in step S307 that the candidate is a wraparound destination candidate that matches the intention of the user 20, the process proceeds to step S308. In step S308, the wraparound destination control unit 153 determines the position corresponding to the wraparound destination candidate as the movement destination position.
 なお、ステップS307において、強調表示された矢印により示される回り込み先の候補が、ユーザ20の意図に合っていないと判定された場合、ステップS308の処理は、スキップされる。そして、ステップS308の処理が終了するか、又はスキップされると、処理は、図6のステップS17に戻り、それ以降の処理が実行される。また、回り込み先の候補がユーザ20の意図に合っていない場合には、例えば、他のエッジの矢印を強調表示して、再度、回り込み先の候補がユーザ20の意図に合っているかどうかを判定するようにしてもよい。 If it is determined in step S307 that the wraparound destination candidate indicated by the emphasized arrow does not match the intention of the user 20, the process of step S308 is skipped. When the process of step S308 ends or is skipped, the process returns to step S17 in FIG. 6, and the subsequent processes are executed. Further, when the wraparound destination candidate does not match the intention of the user 20, for example, the arrow of another edge may be emphasized and it may be determined again whether that wraparound destination candidate matches the intention of the user 20.
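 As a reference for the flow of steps S301 to S306 described above, the following Python snippet is a minimal sketch of the candidate-display logic: it measures the distance of each edge from the gazing point and marks only the closest edge's arrow for emphasized display. The Edge class, the coordinate values, and the use of a plain Euclidean point distance are illustrative assumptions and are not taken from the embodiment itself.

```python
# Minimal sketch of the first example (candidate display), assuming each edge
# is represented by a single 3D position and the gazing point is a 3D point.
import math
from dataclasses import dataclass

@dataclass
class Edge:
    name: str
    position: tuple          # (x, y, z) position of the edge (assumed representation)
    emphasized: bool = False  # True -> highlighted arrow, False -> dimmed arrow

def distance(p, q):
    # Euclidean distance between two 3D points (S301).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def mark_wraparound_candidates(edges, gaze_point):
    # For every edge, emphasize its arrow only if it is the closest to the
    # gazing point (S302 to S306); the others are shown de-emphasized.
    closest = min(edges, key=lambda e: distance(e.position, gaze_point))
    for e in edges:
        e.emphasized = (e is closest)
    return closest

# Usage example with three edges near the gazing point (hypothetical coordinates).
edges = [Edge("30E-1", (0.0, 0.0, 2.0)),
         Edge("30E-2", (1.0, 0.0, 2.5)),
         Edge("30E-3", (0.4, 0.0, 1.8))]
candidate = mark_wraparound_candidates(edges, gaze_point=(0.5, 0.0, 2.0))
print(candidate.name)  # closest edge; its arrow would be emphasized (S304)
```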
 以上、回り込み先の候補を表示する場合の処理の流れを説明した。次に、第1の例の具体例について、図33を参照して説明する。 So far, the flow of processing when displaying wraparound candidates has been described. Next, a specific example of the first example will be described with reference to FIG.
 図33は、回り込み先の候補の表示の例を示している。 FIG. 33 shows an example of displaying wraparound destination candidates.
 図33に示すように、壁等の障害物オブジェクト30-1乃至30-3を隣接して配置した仮想空間において、ユーザ20の視線LSが、障害物オブジェクト30-1乃至30-3の間の空間に向いているとき、図中の点線の円で表した領域が、注視点近傍領域340とされる。 As shown in FIG. 33, in a virtual space in which obstacle objects 30-1 to 30-3 such as walls are arranged adjacent to one another, when the line of sight LS of the user 20 is directed toward the space between the obstacle objects 30-1 to 30-3, the area indicated by the dotted circle in the figure is set as the gazing point vicinity area 340.
 このとき、障害物オブジェクト30-1のエッジ30E-1と、障害物オブジェクト30-2のエッジ30E-2と、障害物オブジェクト30-3のエッジ30E-3とが、注視点近傍領域340内に存在するため、それらのエッジ30Eのうち、いずれかのエッジ30Eをユーザ20が注視していることになる。 At this time, since the edge 30E-1 of the obstacle object 30-1, the edge 30E-2 of the obstacle object 30-2, and the edge 30E-3 of the obstacle object 30-3 are present within the gazing point vicinity area 340, the user 20 is gazing at one of these edges 30E.
 ここでは、各エッジの注視点からの距離を算出して、エッジ30E-1乃至30E-3のうち、エッジ30E-3が注視点から最も近いエッジであると判定されたため、障害物オブジェクト30-3の回り込み先を示す矢印320-3が強調表示(例えばハイライト表示)され(図32のS304)、障害物オブジェクト30-1,30-2の回り込み先を示す矢印320-1,320-2が非強調表示(例えば薄い表示)される(図32のS305)。 Here, the distance of each edge from the gazing point is calculated, and since the edge 30E-3 is determined to be the edge closest to the gazing point among the edges 30E-1 to 30E-3, the arrow 320-3 indicating the wraparound destination of the obstacle object 30-3 is emphasized (for example, highlighted) (S304 in FIG. 32), and the arrows 320-1 and 320-2 indicating the wraparound destinations of the obstacle objects 30-1 and 30-2 are de-emphasized (for example, dimmed) (S305 in FIG. 32).
 換言すれば、矢印320は、ユーザ20の視線に最も近いエッジ30Eを特定するための特定情報であるとも言える。なお、ここでは、強調表示される矢印320-3とともに、矢印320-1,320-2を提示しているが、矢印320-1,320-2は、非表示としてもよい。 In other words, it can be said that the arrow 320 is identification information for identifying the edge 30E closest to the line of sight of the user 20. Here, arrows 320-1 and 320-2 are presented together with the highlighted arrow 320-3, but the arrows 320-1 and 320-2 may be hidden.
 そして、強調表示された矢印320-3が、ユーザ20の意図に合っている場合には、その回り込み先の候補に応じた位置を移動先の位置に決定し(図32のS308)、ユーザ20の位置(視点)が、障害物オブジェクト30-3を回り込んだ位置に移動(瞬間移動)することになる。 Then, if the emphasized arrow 320-3 matches the intention of the user 20, the position corresponding to that wraparound destination candidate is determined as the movement destination position (S308 in FIG. 32), and the position (viewpoint) of the user 20 is moved (teleported) to a position that wraps around the obstacle object 30-3.
 このように、回り込み先の候補を表示(アノテーション表示)する方法を用いることで、ユーザ20は、回り込み先の候補が複数ある場合でも、強調表示された矢印によって自身の回り込み先を確認して、間違わずに移動することができる。 In this way, by using the method of displaying wraparound destination candidates (annotation display), even when there are a plurality of wraparound destination candidates, the user 20 can confirm his or her own wraparound destination from the emphasized arrow and move without error.
(第2の例)
 次に、図34及び図35を参照しながら、複数エッジ用回り込み先決定処理の第2の例として、特定の障害物オブジェクト30をデフォルメ(変形)する方法を用いた場合を説明する。
(Second example)
 Next, a case where a method of deforming (modifying the shape of) a specific obstacle object 30 is used as a second example of the multi-edge wraparound destination determination process will be described with reference to FIGS. 34 and 35.
 図34は、複数エッジ用回り込み先決定処理の第2の例として、特定の障害物オブジェクトをデフォルメする場合の処理の流れを説明するフローチャートである。 FIG. 34 is a flowchart for explaining the flow of processing when a specific obstacle object is deformed as a second example of the wraparound destination determination processing for multiple edges.
 ステップS321において、回り込み先制御部153は、デフォルメする障害物オブジェクト30を特定する。ここでは、例えば、その形状や位置などを変形することで、ユーザ20が注視しているエッジの判別が容易になるような障害物オブジェクト30が特定される。 In step S321, the wraparound destination control unit 153 identifies the obstacle object 30 to be deformed. Here, for example, an obstacle object 30 is identified such that deforming its shape, position, or the like makes it easier to determine the edge that the user 20 is gazing at.
 ステップS322において、回り込み先制御部153は、特定した障害物オブジェクト30をデフォルメする。このデフォルメ処理では、ユーザ20が注視しているエッジの判別が容易になるように、例えば、特定した障害物オブジェクト30の形状や位置などを変形するなどの処理が行われる。 In step S322, the wraparound destination control unit 153 deforms the identified obstacle object 30. In the deformation process, for example, a process such as changing the shape or position of the specified obstacle object 30 is performed so that the edge that the user 20 is gazing at can be easily determined.
 ステップS323において、回り込み先制御部153は、注視領域200が決定したかどうかを判定する。ステップS323において、注視領域200が決定していないと判定された場合、ステップS323の判定処理が繰り返される。 In step S323, the wraparound destination control unit 153 determines whether or not the gaze area 200 has been determined. If it is determined in step S323 that the gaze area 200 has not been determined, the determination process in step S323 is repeated.
 ステップS323において、注視領域200が決定したと判定された場合、処理は、ステップS324に進められる。ステップS324において、回り込み先制御部153は、注視領域200に応じた回り込み先の位置を、移動先の位置として決定する。 If it is determined in step S323 that the gaze area 200 has been determined, the process proceeds to step S324. In step S324, the wraparound destination control unit 153 determines the wraparound destination position corresponding to the gaze area 200 as the movement destination position.
 ステップS324の処理が終了すると、処理は、図6のステップS17に戻り、それ以降の処理が実行される。 When the process in step S324 is completed, the process returns to step S17 in FIG. 6, and the subsequent processes are executed.
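 Before turning to the concrete example, the following Python snippet gives a minimal sketch of this deformation approach: the obstacles other than the most likely gaze target are temporarily narrowed so that the gap between them widens, and they are restored once the viewpoint has moved. The Obstacle class, the choice of which objects to shrink, and the 0.5 shrink factor are illustrative assumptions, not details of the embodiment.

```python
# Minimal sketch of the second example (deforming specific obstacle objects),
# assuming each obstacle is described by a lateral width and that shrinking
# the neighbouring obstacles widens the gap so gaze on one edge can be resolved.
from dataclasses import dataclass, field

@dataclass
class Obstacle:
    name: str
    width: float
    original_width: float = field(init=False)

    def __post_init__(self):
        self.original_width = self.width

def deform_for_disambiguation(obstacles, keep):
    # S321/S322: shrink every obstacle except the one whose edge is the most
    # likely gaze target, so the space between the obstacles becomes wider.
    for o in obstacles:
        if o.name != keep:
            o.width = o.original_width * 0.5
    return obstacles

def restore(obstacles):
    # After the viewpoint has moved to the wraparound destination,
    # the deformed obstacles are returned to their original state.
    for o in obstacles:
        o.width = o.original_width

obstacles = [Obstacle("30-1", 2.0), Obstacle("30-2", 2.0), Obstacle("30-3", 2.0)]
deform_for_disambiguation(obstacles, keep="30-2")
# ... the gaze area 200 is decided on the widened layout (S323/S324) ...
restore(obstacles)
```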
 以上、特定の障害物オブジェクトをデフォルメする場合の処理の流れを説明した。次に、第2の例の具体例について、図35を参照して説明する。 So far, the flow of processing when deforming a specific obstacle object has been described. Next, a specific example of the second example will be described with reference to FIG.
 図35は、特定の障害物オブジェクト30のデフォルメ表示の例を示している。 FIG. 35 shows an example of deformed display of a specific obstacle object 30.
 図35のAに示すように、壁等の障害物オブジェクト30-1乃至30-3を隣接して配置した仮想空間において、ユーザ20の視線LSが、障害物オブジェクト30-1乃至30-3の間の空間を向いて、その空間が注視点近傍領域340となっている。 As shown in A of FIG. 35, in a virtual space in which obstacle objects 30-1 to 30-3 such as walls are arranged adjacent to one another, the line of sight LS of the user 20 is directed toward the space between the obstacle objects 30-1 to 30-3, and that space is the gazing point vicinity region 340.
 このとき、注視点近傍領域340内には、エッジ30E-1乃至30E-3が存在するため、ユーザ20によってどのエッジ30Eが注視されているのかを判別することができない。そのため、ここでは、ユーザ20が注視しているエッジ30Eの判別を容易に行うことができるように、特定の障害物オブジェクト30の形状や位置など変形してデフォルメする(図34のS322)。 At this time, since the edges 30E-1 to 30E-3 exist within the gazing point vicinity area 340, it cannot be determined which edge 30E is being gazed at by the user 20. Therefore, here, the shape, position, or the like of a specific obstacle object 30 is deformed so that the edge 30E at which the user 20 is gazing can be determined easily (S322 in FIG. 34).
 より具体的には、図35のBに示すように、障害物オブジェクト30-1乃至30-3のうち、障害物オブジェクト30-1,30-3の横方向の長さを短くする(又は左側の方向に移動する)ことで、障害物オブジェクト30-1乃至30-3の間の空間が広がって、ユーザ20が注視しているエッジ30Eの判別が容易になる。これにより、図35のBでは、障害物オブジェクト30-3の奥側のエッジが、注視領域200とされる。 More specifically, as shown in B of FIG. 35, by shortening the lateral lengths of the obstacle objects 30-1 and 30-3 among the obstacle objects 30-1 to 30-3 (or by moving them in the leftward direction), the space between the obstacle objects 30-1 to 30-3 is widened, and the edge 30E at which the user 20 is gazing can be determined more easily. As a result, in B of FIG. 35, the back-side edge of the obstacle object 30-3 is set as the gaze area 200.
 なお、ユーザ20の視点が回り込み先に移動した後は、デフォルメされた障害物オブジェクト30-1,30-3は、元の状態に復元されるようにする。 Note that after the viewpoint of the user 20 moves to the wraparound destination, the deformed obstacle objects 30-1 and 30-3 are restored to the original state.
 このように、特定の障害物オブジェクト30をデフォルメする方法を用いることで、ユーザ20の注視点近傍に複数の回り込みポイントが存在する場合でも、ユーザ20が注視しているエッジ30Eの判別を容易にして、回り込み先に間違わずに移動することができる。 In this way, by using the method of deforming a specific obstacle object 30, even when there are a plurality of wraparound points in the vicinity of the gazing point of the user 20, the edge 30E at which the user 20 is gazing can be determined easily, and the user can move to the wraparound destination without error.
(第3の例)
 最後に、図36乃至図38を参照しながら、複数エッジ用回り込み先決定処理の第3の例として、移動先の位置としてエッジ重心位置を決定する方法を用いた場合を説明する。
(Third example)
 Finally, with reference to FIGS. 36 to 38, a case where a method of determining the edge centroid position as the movement destination position is used will be described as a third example of the multi-edge wraparound destination determination process.
 図36は、複数エッジ用回り込み先決定処理の第3の例として、移動先の位置としてエッジ重心位置を決定する場合の処理の流れを説明するフローチャートである。 FIG. 36 is a flowchart for explaining the flow of processing when determining the edge centroid position as the position of the movement destination as a third example of the wraparound destination determination processing for multiple edges.
 ステップS341において、回り込み先制御部153は、エッジの重心位置を算出する。 In step S341, the wraparound destination control unit 153 calculates the position of the center of gravity of the edge.
 ステップS342において、回り込み先制御部153は、算出した重心位置を、移動先の位置として決定する。 In step S342, the wraparound destination control unit 153 determines the calculated center of gravity position as the position of the movement destination.
 ステップS342の処理が終了すると、処理は、図6のステップS17に戻り、それ以降の処理が実行される。 When the process in step S342 is completed, the process returns to step S17 in FIG. 6, and the subsequent processes are executed.
 以上、移動先の位置としてエッジ重心位置を決定する場合の処理の流れを説明した。次に、第3の例の具体例について、図37及び図38を参照して説明する。 In the above, the processing flow in the case of determining the edge gravity center position as the movement destination position has been described. Next, a specific example of the third example will be described with reference to FIGS.
 図37は、エッジ重心位置を、移動先の位置(視点)とする場合の例を示している。 FIG. 37 shows an example in which the edge barycenter position is set as the movement destination position (viewpoint).
 図37に示すように、壁等の障害物オブジェクト30-1乃至30-3を隣接して配置した仮想空間において、ユーザ20の視線LSが、障害物オブジェクト30-1乃至30-3の間の空間を向いて、その空間が注視点近傍領域340となっている。 As shown in FIG. 37, in a virtual space in which obstacle objects 30-1 to 30-3 such as walls are arranged adjacent to one another, the line of sight LS of the user 20 is directed toward the space between the obstacle objects 30-1 to 30-3, and that space is the gazing point vicinity region 340.
 このとき、注視点近傍領域340内には、エッジ30E-1乃至30E-3が存在するため、ユーザ20によってどのエッジ30Eが注視されているのかを判別することができず、また、ユーザ20が意図しない方向に回り込まないようにする必要がある。そのため、ここでは、注視点近傍領域340内に複数のエッジが存在して判別が困難である場合には、エッジ30Eの重心位置を算出して、その重心位置を、移動先の位置とする。 At this time, since the edges 30E-1 to 30E-3 exist within the gazing point vicinity area 340, it cannot be determined which edge 30E is being gazed at by the user 20, and it is also necessary to prevent wrapping around in a direction that the user 20 does not intend. Therefore, here, when a plurality of edges exist within the gazing point vicinity area 340 and discrimination is difficult, the centroid position of the edges 30E is calculated, and that centroid position is set as the movement destination position.
 より具体的には、図37に示すように、障害物オブジェクト30-1のエッジ30E-1と、障害物オブジェクト30-2のエッジ30E-2と、障害物オブジェクト30-3のエッジ30E-3とが、注視点近傍領域340内に存在する場合には、例えば、次のようにして、エッジ30Eの重心位置gを算出することができる。 More specifically, as shown in FIG. 37, when the edge 30E-1 of the obstacle object 30-1, the edge 30E-2 of the obstacle object 30-2, and the edge 30E-3 of the obstacle object 30-3 are present within the gazing point vicinity area 340, the centroid position g of the edges 30E can be calculated, for example, as follows.
 図38は、障害物オブジェクト30-1乃至30-3の間の空間を、真上から見下ろした様子を示している。ここでは、図38に示すように、エッジ30E-1の位置P1と、エッジ30E-2の位置P2と、エッジ30E-3の位置P3とを結ぶことで形成される図形の重心位置gを算出して(図36のS341)、この重心位置gを移動先の位置として決定する(図36のS342)ことができる。これにより、ユーザ20は、重心位置gに移動した後で、自身の意図する方向のエッジ30Eを注視することができる。 FIG. 38 shows the space between the obstacle objects 30-1 to 30-3 as viewed from directly above. Here, as shown in FIG. 38, the centroid position g of the figure formed by connecting the position P1 of the edge 30E-1, the position P2 of the edge 30E-2, and the position P3 of the edge 30E-3 can be calculated (S341 in FIG. 36), and this centroid position g can be determined as the movement destination position (S342 in FIG. 36). Thus, after moving to the centroid position g, the user 20 can gaze at the edge 30E in the direction that he or she intends.
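 The centroid computation described above can be sketched as follows; the snippet assumes each edge is reduced to a 2D position as seen from directly above, and, for a triangle such as the one formed by P1, P2, and P3, takes the centroid as the mean of the vertex positions (treating other figures the same way is a simplification assumed here).

```python
# Minimal sketch of the third example (moving to the edge centroid), assuming
# each edge is reduced to a 2D point viewed from directly above, as in FIG. 38.
def edge_centroid(points):
    # points: list of (x, z) edge positions such as P1, P2, P3 (S341).
    n = len(points)
    gx = sum(p[0] for p in points) / n
    gz = sum(p[1] for p in points) / n
    return (gx, gz)

# Usage example with three edge positions (hypothetical coordinates).
p1, p2, p3 = (0.0, 2.0), (1.0, 2.5), (0.4, 1.6)
g = edge_centroid([p1, p2, p3])
print(g)  # centroid position g, used as the movement destination (S342)
```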
 なお、ここでは、エッジ位置を結んでできる図形(複数のエッジに対応した図形)が三角形である場合の重心位置を例示したが、他の図形においても同様に重心位置gを算出して、移動先の位置とすることができる。 Note that the centroid position in the case where the figure formed by connecting the edge positions (the figure corresponding to the plurality of edges) is a triangle has been illustrated here, but for other figures as well, the centroid position g can be calculated in the same manner and set as the movement destination position.
 このように、移動先の位置としてエッジ重心位置を決定する方法を用いることで、ユーザ20の注視点近傍に複数の回り込みポイントが存在する場合に、ユーザ20が意図しない方向に回り込んでしまうことを防止し、さらに、エッジの重心位置に移動後は、確実に、ユーザ20が意図する回り込み先に移動することができる。 In this way, by using the method of determining the edge centroid position as the movement destination position, it is possible to prevent the user 20 from wrapping around in an unintended direction when there are a plurality of wraparound points in the vicinity of the gazing point of the user 20, and furthermore, after moving to the edge centroid position, the user 20 can reliably move to the wraparound destination that he or she intends.
 以上、複数エッジ用回り込み先決定処理の詳細について説明した。なお、上述した複数エッジ用回り込み先決定処理の第1の例乃至第3の例は一例であって、複数エッジ用回り込み先決定処理として、注視点近傍にエッジが複数存在する場合における、障害物オブジェクト30に対する回り込み先の位置を決定するための他の処理が実行されるようにしてもよい。 The details of the multi-edge wraparound destination determination process have been described above. Note that the first to third examples of the multi-edge wraparound destination determination process described above are merely examples, and as the multi-edge wraparound destination determination process, another process for determining the wraparound destination position with respect to the obstacle object 30 when a plurality of edges exist in the vicinity of the gazing point may be executed.
<2.変形例> <2. Modification>
 上述した説明では、ヘッドマウントディスプレイ1を例示したが、本技術は、狭義のヘッドマウントディスプレイに限らず、例えば、眼鏡、眼鏡型ディスプレイ、眼鏡型カメラ、ヘッドホン、ヘッドセット(マイクロフォン付きヘッドホン)、イヤホン、イヤリング、耳かけカメラ、帽子、カメラ付き帽子、ヘアバンドなどを装着した場合にも適用することができる。また、本技術は、単独の装置として構成されるヘッドマウントディスプレイ1に対して適用することは勿論、ヘッドマウントディスプレイ1に接続されたゲーム機やパーソナルコンピュータ等の情報処理装置(仮想空間体感システムを構成する複数の機器のうち一部の機器)に適用してもよい。なお、システムとは、複数の構成要素(装置、モジュール(部品)等)の集合を意味し、全ての構成要素が同一筐体内にあるか否かは問わない。 In the above description, the head mounted display 1 has been exemplified, but the present technology is not limited to a head mounted display in a narrow sense, and can also be applied when, for example, glasses, a glasses-type display, a glasses-type camera, headphones, a headset (headphones with a microphone), earphones, earrings, an ear-mounted camera, a hat, a hat with a camera, a hair band, or the like is worn. In addition, the present technology may of course be applied to the head mounted display 1 configured as a single device, and may also be applied to an information processing device such as a game machine or a personal computer connected to the head mounted display 1 (some of the plurality of devices constituting a virtual space experience system). Note that a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing.
 また、上述した説明では、ヘッドマウントディスプレイ1の表示部103に表示される3次元映像(3D映像)を説明したが、本技術は、例えばモニタ等の表示装置に表示される2次元映像(2D映像)についても適用することができる。さらに、上述した説明では、ヘッドマウントディスプレイ1を装着したユーザ20に対し、3次元映像を表示して、あたかもそこにいるかのような感覚を体験できるようにするバーチャルリアリティ(仮想現実)を実現しているが、本技術は、仮想現実(VR)に限らず、現実空間に付加情報を表示させて現実世界を拡張する拡張現実(AR:Augmented Reality)などに適用するようにしてもよい。また、表示される映像としては、VR(Virtual Reality)空間内の映像に限らず、例えば、実空間の映像等の他の映像であってもよい。 In the above description, the three-dimensional video (3D video) displayed on the display unit 103 of the head mounted display 1 has been described, but the present technology can also be applied to a two-dimensional video (2D video) displayed on a display device such as a monitor. Furthermore, in the above description, virtual reality is realized in which a three-dimensional video is displayed to the user 20 wearing the head mounted display 1 so that the user can experience a feeling as if he or she were there; however, the present technology is not limited to virtual reality (VR), and may also be applied to augmented reality (AR), which augments the real world by displaying additional information in a real space. In addition, the displayed video is not limited to a video in a VR (Virtual Reality) space, and may be another video such as a video of a real space.
<3.コンピュータの構成> <3. Computer configuration>
 上述した一連の処理(例えば、図6に示したHMDの処理)は、ハードウェアにより実行することもできるし、ソフトウェアにより実行することもできる。一連の処理をソフトウェアにより実行する場合には、そのソフトウェアを構成するプログラムが、各装置のコンピュータにインストールされる。図39は、上述した一連の処理をプログラムにより実行するコンピュータのハードウェアの構成の例を示すブロック図である。 The series of processes described above (for example, the HMD process shown in FIG. 6) can be executed by hardware or can be executed by software. When a series of processing is executed by software, a program constituting the software is installed in the computer of each device. FIG. 39 is a block diagram illustrating an example of a hardware configuration of a computer that executes the above-described series of processes using a program.
 コンピュータ1000において、CPU(Central Processing Unit)1001、ROM(Read Only Memory)1002、RAM(Random Access Memory)1003は、バス1004により相互に接続されている。バス1004には、さらに、入出力インターフェース1005が接続されている。入出力インターフェース1005には、入力部1006、出力部1007、記録部1008、通信部1009、及び、ドライブ1010が接続されている。 In the computer 1000, a CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are connected to each other via a bus 1004. An input / output interface 1005 is further connected to the bus 1004. An input unit 1006, an output unit 1007, a recording unit 1008, a communication unit 1009, and a drive 1010 are connected to the input / output interface 1005.
 入力部1006は、マイクロフォン、キーボード、マウスなどよりなる。出力部1007は、スピーカ、ディスプレイなどよりなる。記録部1008は、ハードディスクや不揮発性のメモリなどよりなる。通信部1009は、ネットワークインターフェースなどよりなる。ドライブ1010は、磁気ディスク、光ディスク、光磁気ディスク、又は半導体メモリなどのリムーバブル記録媒体1011を駆動する。 The input unit 1006 includes a microphone, a keyboard, a mouse, and the like. The output unit 1007 includes a speaker, a display, and the like. The recording unit 1008 includes a hard disk, a nonvolatile memory, and the like. The communication unit 1009 includes a network interface or the like. The drive 1010 drives a removable recording medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
 以上のように構成されるコンピュータ1000では、CPU1001が、ROM1002や記録部1008に記録されているプログラムを、入出力インターフェース1005及びバス1004を介して、RAM1003にロードして実行することにより、上述した一連の処理が行われる。 In the computer 1000 configured as described above, the CPU 1001 loads a program recorded in the ROM 1002 or the recording unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executes it, whereby the above-described series of processes is performed.
 コンピュータ1000(CPU1001)が実行するプログラムは、例えば、パッケージメディア等としてのリムーバブル記録媒体1011に記録して提供することができる。また、プログラムは、ローカルエリアネットワーク、インターネット、デジタル衛星放送といった、有線又は無線の伝送媒体を介して提供することができる。 The program executed by the computer 1000 (CPU 1001) can be provided by being recorded on a removable recording medium 1011 as a package medium, for example. The program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
 コンピュータ1000では、プログラムは、リムーバブル記録媒体1011をドライブ1010に装着することにより、入出力インターフェース1005を介して、記録部1008にインストールすることができる。また、プログラムは、有線又は無線の伝送媒体を介して、通信部1009で受信し、記録部1008にインストールすることができる。その他、プログラムは、ROM1002や記録部1008に、あらかじめインストールしておくことができる。 In the computer 1000, the program can be installed in the recording unit 1008 via the input / output interface 1005 by attaching the removable recording medium 1011 to the drive 1010. The program can be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the recording unit 1008. In addition, the program can be installed in the ROM 1002 or the recording unit 1008 in advance.
 ここで、本明細書において、コンピュータがプログラムに従って行う処理は、必ずしもフローチャートとして記載された順序に沿って時系列に行われる必要はない。すなわち、コンピュータがプログラムに従って行う処理は、並列的あるいは個別に実行される処理(例えば、並列処理あるいはオブジェクトによる処理)も含む。また、プログラムは、1のコンピュータ(プロセッサ)により処理されるものであってもよいし、複数のコンピュータによって分散処理されるものであってもよい。 Here, in the present specification, the processing performed by the computer according to the program does not necessarily have to be performed in chronological order in the order described as the flowchart. That is, the processing performed by the computer according to the program includes processing executed in parallel or individually (for example, parallel processing or object processing). The program may be processed by a single computer (processor) or may be distributedly processed by a plurality of computers.
 なお、本技術の実施の形態は、上述した実施の形態に限定されるものではなく、本技術の要旨を逸脱しない範囲において種々の変更が可能である。例えば、上述した実施の形態に示した構成を単独で実施することは勿論、複数の構成を組み合わせて実施するようにしてもよい。 Note that the embodiments of the present technology are not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology. For example, the configuration described in the above embodiment may be implemented alone, or may be implemented by combining a plurality of configurations.
 また、上述した処理の各ステップは、1つの装置で実行する他、複数の装置で分担して実行することができる。さらに、1つのステップに複数の処理が含まれる場合には、その1つのステップに含まれる複数の処理は、1つの装置で実行する他、複数の装置で分担して実行することができる。 Further, each step of the above-described processing can be executed by one apparatus or can be executed by a plurality of apparatuses. Further, when a plurality of processes are included in one step, the plurality of processes included in the one step can be executed by being shared by a plurality of apparatuses in addition to being executed by one apparatus.
 なお、本技術は、以下のような構成をとることができる。 In addition, this technique can take the following structures.
(1)
 仮想空間におけるユーザの視点を移動する移動情報を取得する移動情報取得部と、
 前記ユーザの視線の方向に対応する方向情報を取得する方向情報取得部と、
 前記ユーザの視線の位置が前記仮想空間に配置されたオブジェクトの周辺領域にある場合、取得した前記移動情報及び前記方向情報の少なくとも一方の情報に基づき前記視線の方向に前記ユーザの視点を移動させるとともに前記視線の方向を変更することで前記ユーザに視認される像を、前記ユーザに提示するように表示装置を制御する表示制御部と
 を備える情報処理装置。
(2)
 前記ユーザの視点は、前記ユーザの視点の移動を低減可能な位置に移動される
 前記(1)に記載の情報処理装置。
(3)
 前記ユーザの視点は、前記周辺領域に対する前記ユーザの現在位置に応じた位置、前記オブジェクトに沿った位置、又は俯瞰視点の位置に移動される
 前記(1)又は(2)に記載の情報処理装置。
(4)
 前記視線の方向は、前記オブジェクト又は前記周辺領域に応じた所定の方向に変更される
 前記(1)乃至(3)のいずれかに記載の情報処理装置。
(5)
 前記オブジェクトは、前記仮想空間に配置された障害物である
 前記(1)乃至(4)のいずれかに記載の情報処理装置。
(6)
 前記オブジェクトは、エッジを含む
 前記(5)に記載の情報処理装置。
(7)
 前記ユーザの視点は、前記オブジェクトにおけるユーザ側の面に対する側面又は裏側の面の付近の位置に移動される
 前記(6)に記載の情報処理装置。
(8)
 前記周辺領域は、前記視線の位置にある前記オブジェクトを含む領域であって、前記オブジェクトのエッジを含む
 前記(1)乃至(7)のいずれかに記載の情報処理装置。
(9)
 前記移動情報は、前記ユーザの操作若しくは動作、又は前記ユーザの視線の滞留時間の検出結果を含む
 前記(1)乃至(8)のいずれかに記載の情報処理装置。
(10)
 前記方向情報は、前記ユーザの視線若しくは顔の向き、又は前記ユーザの操作若しくは動作の検出結果を含む
 前記(1)乃至(9)のいずれかに記載の情報処理装置。
(11)
 前記ユーザの視点を移動するに際して、前記ユーザの視点、及び前記視線の方向の少なくとも一方は、その移動先の空間の属性に応じて調整される
 前記(1)乃至(10)のいずれかに記載の情報処理装置。
(12)
 前記ユーザの視点を移動するに際して、前記ユーザの視点は、その移動先の回り込み量が所定のパラメータに応じて調整される
 前記(1)乃至(11)のいずれかに記載の情報処理装置。
(13)
 前記オブジェクトがエッジを含んでいない場合に、前記ユーザの視点は、前記オブジェクト上の所定の端点に応じた位置に移動される
 前記(5)に記載の情報処理装置。
(14)
 前記オブジェクトが複数のエッジを含む場合に、前記視線の位置に最も近いエッジを特定するための特定情報を提示するか、1又は複数の前記オブジェクトを変形するか、又は前記複数のエッジに対応した図形の重心位置に前記ユーザの視点を移動する
 前記(5)に記載の情報処理装置。
(15)
 前記表示制御部は、前記像を、前記ユーザの視野領域の全体又は一部のみに提示する
 前記(1)乃至(14)のいずれかに記載の情報処理装置。
(16)
 前記表示制御部は、前記視線の位置が、前記仮想空間の特定の位置を強調する他のオブジェクトにある場合、前記視線の方向に前記ユーザの視点を移動させるとともに前記視線の方向を維持した状態で前記ユーザに視認される像を提示する
 前記(1)乃至(15)のいずれかに記載の情報処理装置。
(17)
 前記表示制御部は、前記ユーザの視点を移動する前に、前記ユーザの視点、及び前記視線の方向の少なくとも一方を特定するための特定情報を提示する
 前記(1)乃至(16)のいずれかに記載の情報処理装置。
(18)
 ヘッドマウントディスプレイとして構成される
 前記(1)乃至(17)のいずれかに記載の情報処理装置。
(19)
 情報処理装置が、
 仮想空間におけるユーザの視点を移動する移動情報を取得し、
 前記ユーザの視線の方向に対応する方向情報を取得し、
 前記ユーザの視線の位置が前記仮想空間に配置されたオブジェクトの周辺領域にある場合、取得した前記移動情報及び前記方向情報の少なくとも一方の情報に基づき前記視線の方向に前記ユーザの視点を移動させるとともに前記視線の方向を変更することで前記ユーザに視認される像を、前記ユーザに提示するように表示装置を制御する
 情報処理方法。
(20)
 コンピュータを、
 仮想空間におけるユーザの視点を移動する移動情報を取得する移動情報取得部と、
 前記ユーザの視線の方向に対応する方向情報を取得する方向情報取得部と、
 前記ユーザの視線の位置が前記仮想空間に配置されたオブジェクトの周辺領域にある場合、取得した前記移動情報及び前記方向情報の少なくとも一方の情報に基づき前記視線の方向に前記ユーザの視点を移動させるとともに前記視線の方向を変更することで前記ユーザに視認される像を、前記ユーザに提示するように表示装置を制御する表示制御部と
 して機能させるためのプログラム。
(1)
 An information processing apparatus comprising: a movement information acquisition unit configured to acquire movement information for moving a viewpoint of a user in a virtual space;
 a direction information acquisition unit configured to acquire direction information corresponding to a direction of the user's line of sight; and
 a display control unit configured to, when a position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, control a display device so as to present to the user an image that is visually recognized by the user by moving the viewpoint of the user in the direction of the line of sight and changing the direction of the line of sight, on the basis of at least one of the acquired movement information and the acquired direction information.
(2)
The information processing apparatus according to (1), wherein the viewpoint of the user is moved to a position where movement of the viewpoint of the user can be reduced.
(3)
The information processing apparatus according to (1) or (2), wherein the viewpoint of the user is moved to a position corresponding to the current position of the user with respect to the surrounding area, a position along the object, or a position of an overhead viewpoint. .
(4)
The information processing apparatus according to any one of (1) to (3), wherein the direction of the line of sight is changed to a predetermined direction according to the object or the peripheral area.
(5)
The information processing apparatus according to any one of (1) to (4), wherein the object is an obstacle arranged in the virtual space.
(6)
The information processing apparatus according to (5), wherein the object includes an edge.
(7)
The information processing apparatus according to (6), wherein the viewpoint of the user is moved to a position in the vicinity of a side surface or a back surface of the object with respect to the user-side surface.
(8)
The information processing apparatus according to any one of (1) to (7), wherein the peripheral area is an area including the object at the line of sight and includes an edge of the object.
(9)
The information processing apparatus according to any one of (1) to (8), wherein the movement information includes a detection result of an operation or action of the user or a dwell time of the user's line of sight.
(10)
The information processing apparatus according to any one of (1) to (9), wherein the direction information includes a detection result of the user's line of sight or face direction, or the user's operation or action.
(11)
 The information processing apparatus according to any one of (1) to (10), wherein, when the viewpoint of the user is moved, at least one of the viewpoint of the user and the direction of the line of sight is adjusted according to an attribute of the space of the movement destination.
(12)
The information processing apparatus according to any one of (1) to (11), wherein when the user's viewpoint is moved, the wraparound amount of the movement destination of the user is adjusted according to a predetermined parameter.
(13)
The information processing apparatus according to (5), wherein when the object does not include an edge, the viewpoint of the user is moved to a position corresponding to a predetermined end point on the object.
(14)
 The information processing apparatus according to (5), wherein, when the object includes a plurality of edges, specific information for specifying the edge closest to the position of the line of sight is presented, one or more of the objects are deformed, or the viewpoint of the user is moved to the position of the center of gravity of a figure corresponding to the plurality of edges.
(15)
 The information processing apparatus according to any one of (1) to (14), wherein the display control unit presents the image in the entire visual field region of the user or in only a part thereof.
(16)
 The information processing apparatus according to any one of (1) to (15), wherein, when the position of the line of sight is on another object that emphasizes a specific position in the virtual space, the display control unit presents an image visually recognized by the user by moving the viewpoint of the user in the direction of the line of sight while maintaining the direction of the line of sight.
(17)
 The information processing apparatus according to any one of (1) to (16), wherein the display control unit presents specific information for specifying at least one of the viewpoint of the user and the direction of the line of sight before moving the viewpoint of the user.
(18)
The information processing apparatus according to any one of (1) to (17), configured as a head mounted display.
(19)
 An information processing method, wherein an information processing device:
 acquires movement information for moving a viewpoint of a user in a virtual space;
 acquires direction information corresponding to a direction of the user's line of sight; and
 when the position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, controls a display device so as to present to the user an image that is visually recognized by the user by moving the viewpoint of the user in the direction of the line of sight and changing the direction of the line of sight, on the basis of at least one of the acquired movement information and the acquired direction information.
(20)
 A program for causing a computer to function as:
 a movement information acquisition unit configured to acquire movement information for moving a viewpoint of a user in a virtual space;
 a direction information acquisition unit configured to acquire direction information corresponding to a direction of the user's line of sight; and
 a display control unit configured to, when a position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, control a display device so as to present to the user an image that is visually recognized by the user by moving the viewpoint of the user in the direction of the line of sight and changing the direction of the line of sight, on the basis of at least one of the acquired movement information and the acquired direction information.
 1 ヘッドマウントディスプレイ, 11 本体部, 12 頭部接触部, 100 制御部, 101 センサ部, 102 記憶部, 103 表示部, 104 スピーカ, 105 入力端子, 106 出力端子, 107 通信部, 151 移動情報取得部, 152 方向情報取得部, 153 回り込み先制御部, 154 視点カメラ位置・姿勢制御部, 155 表示制御部, 1000 コンピュータ, 1001 CPU 1 head mounted display, 11 body unit, 12 head contact unit, 100 control unit, 101 sensor unit, 102 storage unit, 103 display unit, 104 speaker, 105 input terminal, 106 output terminal, 107 communication unit, 151 movement information acquisition Unit, 152 direction information acquisition unit, 153 wraparound destination control unit, 154 viewpoint camera position / posture control unit, 155 display control unit, 1000 computer, 1001 CPU

Claims (20)

  1.  仮想空間におけるユーザの視点を移動する移動情報を取得する移動情報取得部と、
     前記ユーザの視線の方向に対応する方向情報を取得する方向情報取得部と、
     前記ユーザの視線の位置が前記仮想空間に配置されたオブジェクトの周辺領域にある場合、取得した前記移動情報及び前記方向情報の少なくとも一方の情報に基づき前記視線の方向に前記ユーザの視点を移動させるとともに前記視線の方向を変更することで前記ユーザに視認される像を、前記ユーザに提示するように表示装置を制御する表示制御部と
     を備える情報処理装置。
    An information processing apparatus comprising: a movement information acquisition unit configured to acquire movement information for moving a viewpoint of a user in a virtual space;
    a direction information acquisition unit configured to acquire direction information corresponding to a direction of the user's line of sight; and
    a display control unit configured to, when a position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, control a display device so as to present to the user an image that is visually recognized by the user by moving the viewpoint of the user in the direction of the line of sight and changing the direction of the line of sight, on the basis of at least one of the acquired movement information and the acquired direction information.
  2.  前記ユーザの視点は、前記ユーザの視点の移動を低減可能な位置に移動される
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the viewpoint of the user is moved to a position where movement of the viewpoint of the user can be reduced.
  3.  前記ユーザの視点は、前記周辺領域に対する前記ユーザの現在位置に応じた位置、前記オブジェクトに沿った位置、又は俯瞰視点の位置に移動される
     請求項2に記載の情報処理装置。
    The information processing apparatus according to claim 2, wherein the viewpoint of the user is moved to a position corresponding to the current position of the user with respect to the peripheral area, a position along the object, or a position of an overhead viewpoint.
  4.  前記視線の方向は、前記オブジェクト又は前記周辺領域に応じた所定の方向に変更される
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the direction of the line of sight is changed to a predetermined direction according to the object or the peripheral area.
  5.  前記オブジェクトは、前記仮想空間に配置された障害物である
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the object is an obstacle arranged in the virtual space.
  6.  前記オブジェクトは、エッジを含む
     請求項5に記載の情報処理装置。
    The information processing apparatus according to claim 5, wherein the object includes an edge.
  7.  前記ユーザの視点は、前記オブジェクトにおけるユーザ側の面に対する側面又は裏側の面の付近の位置に移動される
     請求項6に記載の情報処理装置。
    The information processing apparatus according to claim 6, wherein the viewpoint of the user is moved to a position in the vicinity of a side surface or a back surface of the object with respect to a user-side surface.
  8.  前記周辺領域は、前記視線の位置にある前記オブジェクトを含む領域であって、前記オブジェクトのエッジを含む
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the peripheral area is an area including the object at the position of the line of sight and includes an edge of the object.
  9.  前記移動情報は、前記ユーザの操作若しくは動作、又は前記ユーザの視線の滞留時間の検出結果を含む
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the movement information includes a detection result of an operation or action of the user or a dwell time of the user's line of sight.
  10.  前記方向情報は、前記ユーザの視線若しくは顔の向き、又は前記ユーザの操作若しくは動作の検出結果を含む
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the direction information includes a detection result of the user's line of sight or face direction, or an operation or action of the user.
  11.  前記ユーザの視点を移動するに際して、前記ユーザの視点、及び前記視線の方向の少なくとも一方は、その移動先の空間の属性に応じて調整される
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein at the time of moving the user's viewpoint, at least one of the user's viewpoint and the direction of the line of sight is adjusted according to an attribute of the destination space.
  12.  前記ユーザの視点を移動するに際して、前記ユーザの視点は、その移動先の回り込み量が所定のパラメータに応じて調整される
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein when moving the user's viewpoint, the amount of wraparound of the destination of the user is adjusted according to a predetermined parameter.
  13.  前記オブジェクトがエッジを含んでいない場合に、前記ユーザの視点は、前記オブジェクト上の所定の端点に応じた位置に移動される
     請求項5に記載の情報処理装置。
    The information processing apparatus according to claim 5, wherein when the object does not include an edge, the viewpoint of the user is moved to a position corresponding to a predetermined end point on the object.
  14.  前記オブジェクトが複数のエッジを含む場合に、前記視線の位置に最も近いエッジを特定するための特定情報を提示するか、1又は複数の前記オブジェクトを変形するか、又は前記複数のエッジに対応した図形の重心位置に前記ユーザの視点を移動する
     請求項5に記載の情報処理装置。
    The information processing apparatus according to claim 5, wherein, when the object includes a plurality of edges, specific information for specifying the edge closest to the position of the line of sight is presented, one or more of the objects are deformed, or the viewpoint of the user is moved to the position of the center of gravity of a figure corresponding to the plurality of edges.
  15.  前記表示制御部は、前記像を、前記ユーザの視野領域の全体又は一部のみに提示する
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the display control unit presents the image in the entire visual field region of the user or in only a part thereof.
  16.  前記表示制御部は、前記視線の位置が、前記仮想空間の特定の位置を強調する他のオブジェクトにある場合、前記視線の方向に前記ユーザの視点を移動させるとともに前記視線の方向を維持した状態で前記ユーザに視認される像を提示する
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein, when the position of the line of sight is on another object that emphasizes a specific position in the virtual space, the display control unit presents an image visually recognized by the user by moving the viewpoint of the user in the direction of the line of sight while maintaining the direction of the line of sight.
  17.  前記表示制御部は、前記ユーザの視点を移動する前に、前記ユーザの視点、及び前記視線の方向の少なくとも一方を特定するための特定情報を提示する
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the display control unit presents specific information for specifying at least one of the viewpoint of the user and the direction of the line of sight before moving the viewpoint of the user.
  18.  ヘッドマウントディスプレイとして構成される
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the information processing apparatus is configured as a head-mounted display.
  19.  情報処理装置が、
     仮想空間におけるユーザの視点を移動する移動情報を取得し、
     前記ユーザの視線の方向に対応する方向情報を取得し、
     前記ユーザの視線の位置が前記仮想空間に配置されたオブジェクトの周辺領域にある場合、取得した前記移動情報及び前記方向情報の少なくとも一方の情報に基づき前記視線の方向に前記ユーザの視点を移動させるとともに前記視線の方向を変更することで前記ユーザに視認される像を、前記ユーザに提示するように表示装置を制御する
     情報処理方法。
    An information processing method, wherein an information processing device:
    acquires movement information for moving a viewpoint of a user in a virtual space;
    acquires direction information corresponding to a direction of the user's line of sight; and
    when the position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, controls a display device so as to present to the user an image that is visually recognized by the user by moving the viewpoint of the user in the direction of the line of sight and changing the direction of the line of sight, on the basis of at least one of the acquired movement information and the acquired direction information.
  20.  コンピュータを、
     仮想空間におけるユーザの視点を移動する移動情報を取得する移動情報取得部と、
     前記ユーザの視線の方向に対応する方向情報を取得する方向情報取得部と、
     前記ユーザの視線の位置が前記仮想空間に配置されたオブジェクトの周辺領域にある場合、取得した前記移動情報及び前記方向情報の少なくとも一方の情報に基づき前記視線の方向に前記ユーザの視点を移動させるとともに前記視線の方向を変更することで前記ユーザに視認される像を、前記ユーザに提示するように表示装置を制御する表示制御部と
     して機能させるためのプログラム。
    A program for causing a computer to function as:
    a movement information acquisition unit configured to acquire movement information for moving a viewpoint of a user in a virtual space;
    a direction information acquisition unit configured to acquire direction information corresponding to a direction of the user's line of sight; and
    a display control unit configured to, when a position of the user's line of sight is in a peripheral region of an object arranged in the virtual space, control a display device so as to present to the user an image that is visually recognized by the user by moving the viewpoint of the user in the direction of the line of sight and changing the direction of the line of sight, on the basis of at least one of the acquired movement information and the acquired direction information.
PCT/JP2019/019622 2018-05-31 2019-05-17 Information processing device, information processing method, and program WO2019230437A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018104754A JP2021140196A (en) 2018-05-31 2018-05-31 Information processing apparatus, information processing method, and program
JP2018-104754 2018-05-31

Publications (1)

Publication Number Publication Date
WO2019230437A1 true WO2019230437A1 (en) 2019-12-05

Family

ID=68696680

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/019622 WO2019230437A1 (en) 2018-05-31 2019-05-17 Information processing device, information processing method, and program

Country Status (2)

Country Link
JP (1) JP2021140196A (en)
WO (1) WO2019230437A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220076442A1 (en) * 2019-05-21 2022-03-10 Preferred Networks, Inc. Data processing apparatus, image analysis method, and recording medium
WO2022201430A1 (en) * 2021-03-25 2022-09-29 京セラ株式会社 Wearable terminal device, program, and display method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024127566A1 (en) * 2022-12-14 2024-06-20 日本電信電話株式会社 Optimal viewpoint estimating device, optimal viewpoint estimating method, and optimal viewpoint estimating program

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011048797A (en) * 2009-08-28 2011-03-10 Fujitsu Ltd Image display method, information processing device and image display program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220076442A1 (en) * 2019-05-21 2022-03-10 Preferred Networks, Inc. Data processing apparatus, image analysis method, and recording medium
WO2022201430A1 (en) * 2021-03-25 2022-09-29 京セラ株式会社 Wearable terminal device, program, and display method
JP7478902B2 (en) 2021-03-25 2024-05-07 京セラ株式会社 Wearable terminal device, program, and display method

Also Published As

Publication number Publication date
JP2021140196A (en) 2021-09-16

Similar Documents

Publication Publication Date Title
JP7164630B2 (en) Dynamic Graphics Rendering Based on Predicted Saccade Landing Points
US9824500B2 (en) Virtual object pathing
US9898865B2 (en) System and method for spawning drawing surfaces
EP3137982B1 (en) Transitions between body-locked and world-locked augmented reality
US10321258B2 (en) Emulating spatial perception using virtual echolocation
US10373392B2 (en) Transitioning views of a virtual model
EP3345074B1 (en) Augmented reality control of computing device
CN107209386B (en) Augmented reality view object follower
EP3172695B1 (en) Pupil detection
US9584915B2 (en) Spatial audio with remote speakers
EP3092546B1 (en) Target positioning with gaze tracking
US10248192B2 (en) Gaze target application launcher
US10613642B2 (en) Gesture parameter tuning
EP3161544B1 (en) Stereoscopic image display
US9767609B2 (en) Motion modeling in visual tracking
WO2019230437A1 (en) Information processing device, information processing method, and program
JPWO2018155233A1 (en) Image processing apparatus, image processing method, and image system
EP3172645A1 (en) Alignable user interface
JP2022515978A (en) Visual indicator of user attention in AR / VR environment
US20220197382A1 (en) Partial Passthrough in Virtual Reality
US20180158242A1 (en) Information processing method and program for executing the information processing method on computer
US20240362879A1 (en) Anchor Objects for Artificial Reality Environments
US20230011453A1 (en) Artificial Reality Teleportation Via Hand Gestures
Rose et al. CAPTURE SHORTCUTS FOR SMART GLASSES USING ELECTROMYOGRAPHY
WO2022140432A1 (en) Partial passthrough in virtual reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19810558

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19810558

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP