JP6278546B1 - Information processing method, apparatus, and program for causing computer to execute information processing method - Google Patents

Information processing method, apparatus, and program for causing computer to execute information processing method

Info

Publication number
JP6278546B1
Authority
JP
Japan
Prior art keywords
virtual
object
movement
user
position
Prior art date
Legal status
Active
Application number
JP2017110282A
Other languages
Japanese (ja)
Other versions
JP2018206029A (en)
Inventor
圭佑 中原
Original Assignee
株式会社コロプラ
Priority date
Filing date
Publication date
Application filed by 株式会社コロプラ
Priority to JP2017110282A
Application granted
Publication of JP6278546B1
Publication of JP2018206029A
Application status: Active

Abstract

To improve the movement of a movement target such as a widget in a virtual space, a widget W is arranged between virtual objects K1 and K2 and a character object A in the virtual space. When the character object A starts to turn to the right, the movement destination of the virtual viewpoint is predicted, the movement destination of the widget W is specified based on that prediction, and the widget W is moved to the specified destination. Here, the priority of the widget W is set higher than that of a virtual object K3, so the movement destination of the widget W is corrected so that the widget W is arranged between the character object A and the virtual object K3. [Selected drawing] FIG. 13

Description

  The present disclosure relates to a technique for displaying a virtual object in a virtual space.

  Patent Document 1 describes that when a widget in the virtual space is located outside the field of view, the widget returns to the initial position.

JP 2017-4357 A

  The technique described in Patent Document 1 leaves room for improvement in preventing the user from losing sight of the widget in the virtual space.

  Accordingly, an object of the present disclosure is to provide an information processing method and apparatus for improving movement of a movement target such as a widget in a virtual space, and a program for causing a computer to execute the information processing method.

  In order to solve the above-described problem, an information processing method of the present invention is an information processing method executed by a computer to provide a virtual space to a user via a head mounted device including a display unit, and includes the steps of: identifying virtual space data including a character object associated with the user and a movement target; identifying a virtual viewpoint associated with the character object in the virtual space; defining a visual field image based on the virtual viewpoint and displaying the visual field image on the display unit; starting to move the virtual viewpoint based on the movement of the head mounted device; predicting a first position to which the virtual viewpoint is to be moved; identifying a second position that is the destination of the movement target based on the first position; and moving the movement target to the second position.
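
Read as a control flow, the claimed steps amount to: define the virtual space, identify the virtual viewpoint tied to the character object, display the view image, and, when the head mounted device starts to move, predict the viewpoint's destination (first position), derive the movement target's destination (second position) from it, and move the target there. The following is a minimal sketch under that reading; the class names, the simple additive prediction, and the halfway placement of the target between the character and the predicted viewpoint are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of the claimed steps; all names and the placement rule are assumptions.
from dataclasses import dataclass

@dataclass
class MovementTarget:                 # e.g. a widget such as a UI screen
    position: tuple

@dataclass
class VirtualSpaceData:               # virtual space data with a character object and a movement target
    character_position: tuple
    movement_target: MovementTarget

def predict_first_position(viewpoint, predicted_shift):
    """Predicted destination of the virtual viewpoint once the head movement completes."""
    return tuple(v + s for v, s in zip(viewpoint, predicted_shift))

def identify_second_position(character_position, first_position, ratio=0.5):
    """Destination of the movement target, placed between the character and the predicted viewpoint."""
    return tuple(c + (f - c) * ratio for c, f in zip(character_position, first_position))

space = VirtualSpaceData(character_position=(0.0, 1.5, 0.0),
                         movement_target=MovementTarget(position=(0.0, 1.5, 2.0)))
viewpoint = (0.0, 1.5, 2.0)           # virtual viewpoint associated with the character object
# ... the view image is displayed, then the HMD starts to move (e.g. the head turns right) ...
first = predict_first_position(viewpoint, predicted_shift=(1.0, 0.0, -0.3))
second = identify_second_position(space.character_position, first)
space.movement_target.position = second   # move the movement target to the second position
print(first, second)
```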

  According to the present disclosure, the movement target can be easily captured in the view image, and the user can recognize the movement target, or processing based on the movement target, in the view image. Therefore, the virtual experience using the head mounted device can be improved.

FIG. 1 is a diagram showing an outline of a configuration of an HMD system 100 according to an embodiment.
FIG. 2 is a block diagram showing an example of a hardware configuration of a computer 200 according to an aspect.
FIG. 3 is a diagram conceptually showing a uvw visual field coordinate system set in an HMD device 110 according to an embodiment.
FIG. 4 is a diagram conceptually showing one aspect of expressing a virtual space 2 according to an embodiment.
FIG. 5 is a diagram showing, from above, the head of a user 190 wearing the HMD device 110 according to an embodiment.
FIG. 6 is a diagram illustrating a YZ cross section of a visual field region 23 viewed from the X direction in the virtual space 2.
FIG. 7 is a diagram illustrating an XZ cross section of the visual field region 23 viewed from the Y direction in the virtual space 2.
FIG. 8 is a diagram showing a schematic configuration of a controller 160 according to an embodiment.
FIG. 9 is a block diagram showing the computer 200 according to an embodiment as a module configuration.
FIG. 10 is a flowchart showing processing executed by the HMD system 100.
FIG. 11 is a flowchart showing a virtual viewpoint prediction process and a movement destination identification process executed by the HMD system 100.
FIG. 12 is a diagram showing the positional relationship between a character object and a movement target, and the corresponding visual field image.
FIG. 13 is a diagram showing the positional relationship with the movement target when the character object turns to the right, and the corresponding visual field image.
FIG. 14 is a diagram showing the positional relationship with the character object when the movement destination of the movement target is corrected, and the corresponding visual field image.
FIG. 15 is a diagram showing the positional relationship among the character object, another character object when the movement destination of the movement target is corrected, and the corresponding visual field image.
FIG. 16 is a diagram showing the positional relationship with the character object when the movement target is not moved to the movement destination, and the corresponding visual field image.
FIG. 17 is a graph of sensor values detected by the sensor.

  Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. In the following description, the same parts are denoted by the same reference numerals. Their names and functions are also the same. Therefore, detailed description thereof will not be repeated.

[Configuration of HMD system]
A configuration of an HMD (Head Mount Device) system 100 will be described with reference to FIG. 1. FIG. 1 is a diagram representing an outline of a configuration of the HMD system 100 according to an embodiment. In one aspect, the HMD system 100 is provided as a home system or a business system.

  The HMD system 100 includes an HMD device 110, an HMD sensor 120, a controller 160, and a computer 200. The HMD device 110 includes a display 112, a camera 116, a microphone 118, and a gaze sensor 140. The controller 160 can include a motion sensor 130.

  In one aspect, the computer 200 can be connected to the Internet and other networks 19, and can communicate with the server 150 and other computers connected to the network 19. In another aspect, the HMD device 110 may include a sensor 114 instead of the HMD sensor 120.

  The HMD device 110 may be worn on the user's head and provide a virtual space to the user during operation. More specifically, the HMD device 110 displays a right-eye image and a left-eye image on the display 112, respectively. When each eye of the user visually recognizes each image, the user can recognize the image as a three-dimensional image based on the parallax of both eyes. The display 112 may be configured integrally with the HMD device 110 or may be a separate body.

  The display 112 is realized as a non-transmissive display device, for example. In one aspect, the display 112 is disposed on the main body of the HMD device 110 so as to be positioned in front of both eyes of the user. Therefore, the user can immerse in the virtual space when viewing the three-dimensional image displayed on the display 112. In one embodiment, the virtual space includes, for example, a background, an object that can be operated by the user, and an image of a menu that can be selected by the user. In an embodiment, the display 112 may be realized as a liquid crystal display or an organic EL (Electro Luminescence) display included in a so-called smartphone or other information display terminal.

  In one aspect, the display 112 may include a sub-display for displaying an image for the right eye and a sub-display for displaying an image for the left eye. In another aspect, the display 112 may be configured to display a right-eye image and a left-eye image together. In this case, the display 112 includes a high-speed shutter. The high-speed shutter operates so that an image for the right eye and an image for the left eye can be displayed alternately so that the image is recognized only by one of the eyes.

  The camera 116 acquires a face image of the user wearing the HMD device 110. The face image acquired by the camera 116 can be used to detect the user's facial expression through image analysis processing. The camera 116 may be, for example, an infrared camera built in the main body of the HMD device 110 in order to detect pupil movement, eyelid opening / closing, eyebrow movement, and the like. Alternatively, the camera 116 may be an external camera disposed outside the HMD device 110 as shown in FIG. 1 in order to detect movements of the user's mouth, cheeks, and jaws. The camera 116 may be configured by both the infrared camera and the external camera described above.

  The microphone 118 acquires the voice uttered by the user. The voice acquired by the microphone 118 can be used to detect the user's emotion by voice analysis processing. The voice can also be used to give a voice instruction to the virtual space 2. Further, the voice may be sent to an HMD system used by another user via the network 19 and the server 150, and output from a speaker or the like connected to that HMD system. Thereby, conversation (chat) between users who share the virtual space is realized.

  The HMD sensor 120 includes a plurality of light sources (not shown). Each light source is realized by, for example, an LED (Light Emitting Diode) that emits infrared rays. The HMD sensor 120 has a position tracking function for detecting the movement of the HMD device 110. The HMD sensor 120 detects the position and inclination of the HMD device 110 in the real space using this function.

  In another aspect, HMD sensor 120 may be realized by a camera. In this case, the HMD sensor 120 can detect the position and inclination of the HMD device 110 by executing image analysis processing using image information of the HMD device 110 output from the camera.

  In another aspect, the HMD device 110 may include a sensor 114 instead of or in addition to the HMD sensor 120 as a position detector. The HMD device 110 can detect the position and inclination of the HMD device 110 itself using the sensor 114. For example, when the sensor 114 is an angular velocity sensor, a geomagnetic sensor, an acceleration sensor, a gyro sensor, or the like, the HMD device 110 can use any one of these sensors instead of the HMD sensor 120 to detect its own position and inclination. As an example, when the sensor 114 is an angular velocity sensor, the angular velocity sensor detects angular velocities around the three axes of the HMD device 110 in real space over time. The HMD device 110 calculates the temporal change of the angle around each of the three axes based on each angular velocity, and further calculates the inclination of the HMD device 110 based on the temporal change of the angles. The HMD device 110 may include a transmissive display device. In this case, the transmissive display device may be temporarily configured as a non-transmissive display device by adjusting the transmittance. The view field image may also include a configuration for presenting the real space in a part of the image constituting the virtual space. For example, an image captured by a camera mounted on the HMD device 110 may be displayed so as to be superimposed on a part of the field-of-view image, or a part of the transmissive display device may be set to a high transmittance so that the real space is visible through a part of the image.
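
As a concrete illustration of the angle calculation described above, the tilt can be obtained by accumulating the angular velocity samples over time. The sampling rate, the simple Euler integration, and the sample values below are assumptions for illustration.

```python
# Sketch: derive angles around the three axes by integrating angular velocity over time.
def integrate_angular_velocity(samples, dt):
    """samples: iterable of (wx, wy, wz) in rad/s; dt: sampling interval in seconds."""
    ax = ay = az = 0.0
    for wx, wy, wz in samples:
        ax += wx * dt
        ay += wy * dt
        az += wz * dt
    return ax, ay, az

# e.g. 100 samples at 100 Hz of a slow yaw rotation (0.5 rad/s around the vertical axis)
print(integrate_angular_velocity([(0.0, 0.5, 0.0)] * 100, dt=0.01))  # ~ (0.0, 0.5, 0.0) rad
```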

  The gaze sensor 140 detects the direction (line-of-sight direction) in which the lines of sight of the right eye and the left eye of the user 190 are directed. The detection of the direction is realized by, for example, a known eye tracking function. The gaze sensor 140 is realized by a sensor having the eye tracking function. In one aspect, the gaze sensor 140 preferably includes a right eye sensor and a left eye sensor. The gaze sensor 140 may be, for example, a sensor that irradiates the right eye and the left eye of the user 190 with infrared light and detects the rotation angle of each eyeball by receiving the light reflected from the cornea and iris. The gaze sensor 140 can detect the line-of-sight direction of the user 190 based on each detected rotation angle.

  The server 150 may send a program to the computer 200 to provide the virtual space 2 to the user.

  In another aspect, the server 150 can communicate with other computers 200 in order to provide virtual reality to HMD devices used by other users. For example, when a plurality of users play a participatory game in an amusement facility, each computer 200 communicates a signal based on each user's operation with the other computers 200, allowing the plurality of users to enjoy a common game in the same virtual space.

  The controller 160 receives input of commands from the user 190 to the computer 200. In one aspect, the controller 160 is configured to be gripped by the user 190. In another aspect, the controller 160 is configured to be attachable to the body of the user 190 or a part of clothing. In another aspect, the controller 160 may be configured to output at least one of vibration, sound, and light based on a signal sent from the computer 200. In another aspect, the controller 160 receives an operation given by the user 190 to control the position and movement of an object arranged in a space that provides virtual reality.

  In one aspect, the motion sensor 130 is attached to the user's hand and detects the movement of the user's hand. For example, the motion sensor 130 detects the rotation speed and the like of the hand. The detected signal is sent to the computer 200. The motion sensor 130 is provided in a glove-type controller 160, for example. In some embodiments, for safety in real space, it is desirable that the controller 160 be of a type, such as a glove, that is worn on the hand of the user 190 and does not fly off easily. In another aspect, a sensor that is not worn by the user 190 may detect the movement of the hand of the user 190. For example, a signal from a camera that captures the user 190 may be input to the computer 200 as a signal representing the operation of the user 190. The motion sensor 130 and the computer 200 are connected to each other by wire or wirelessly. In the case of wireless connection, the communication form is not particularly limited, and, for example, Bluetooth (registered trademark) or another known communication method is used.

[Hardware configuration]
A computer 200 according to the present embodiment will be described with reference to FIG. 2. FIG. 2 is a block diagram illustrating an example of a hardware configuration of computer 200 according to one aspect. The computer 200 includes a processor 10, a memory 11, a storage 12, an input / output interface 13, and a communication interface 14 as main components. Each component is connected to the bus 15.

  The processor 10 executes a series of instructions included in the program stored in the memory 11 or the storage 12 based on a signal given to the computer 200 or based on the establishment of a predetermined condition. In one aspect, the processor 10 is realized as a CPU (Central Processing Unit), an MPU (Micro Processor Unit), an FPGA (Field-Programmable Gate Array), or other device.

  The memory 11 temporarily stores programs and data. The program is loaded from the storage 12, for example. Data stored in the memory 11 includes data input to the computer 200 and data generated by the processor 10. In one aspect, the memory 11 is realized as a RAM (Random Access Memory) or other volatile memory.

  The storage 12 holds programs and data permanently. The storage 12 is realized as, for example, a ROM (Read-Only Memory), a hard disk device, a flash memory, and other nonvolatile storage devices. The programs stored in the storage 12 include a program for providing a virtual space in the HMD system 100, a simulation program, a game program, a user authentication program, a program for realizing communication with another computer 200, and the like. The data stored in the storage 12 includes data and objects for defining the virtual space.

  In another aspect, the storage 12 may be realized as a removable storage device such as a memory card. In still another aspect, a configuration using a program and data stored in an external storage device may be used instead of the storage 12 built in the computer 200. According to such a configuration, for example, in a scene where a plurality of HMD systems 100 are used like an amusement facility, it is possible to update programs and data in a batch.

  In some embodiments, the input / output interface 13 communicates signals with the HMD device 110, the HMD sensor 120, or the motion sensor 130. In one aspect, the input / output interface 13 is realized using a USB (Universal Serial Bus) interface, a DVI (Digital Visual Interface), an HDMI (registered trademark) (High-Definition Multimedia Interface), or other terminals. The input / output interface 13 is not limited to the above.

  In certain embodiments, the input / output interface 13 may further communicate with the controller 160. For example, the input / output interface 13 receives an input of a signal output from the motion sensor 130. In another aspect, the input / output interface 13 sends an instruction output from the processor 10 to the controller 160. The command instructs the controller 160 to vibrate, output sound, emit light, and the like. When the controller 160 receives the command, the controller 160 executes vibration, sound output, or light emission according to the command.

  The communication interface 14 is connected to the network 19 and communicates with other computers (for example, the server 150) connected to the network 19. In one aspect, the communication interface 14 is realized as, for example, a local area network (LAN) or other wired communication interface, or as a wireless communication interface such as WiFi (Wireless Fidelity), Bluetooth (registered trademark), or NFC (Near Field Communication). However, the communication interface 14 is not limited to the above.

  In one aspect, the processor 10 accesses the storage 12, loads one or more programs stored in the storage 12 into the memory 11, and executes a series of instructions included in the program. The one or more programs may include an operating system of the computer 200, an application program for providing a virtual space, game software that can be executed in the virtual space using the controller 160, and the like. The processor 10 sends a signal for providing a virtual space to the HMD device 110 via the input / output interface 13. The HMD device 110 displays an image on the display 112 based on the signal.

  The server 150 is connected to each control device of the plurality of HMD systems 100 via the network 19.

  In the example illustrated in FIG. 2, the configuration in which the computer 200 is provided outside the HMD device 110 is illustrated. However, in another aspect, the computer 200 may be incorporated in the HMD device 110. As an example, a portable information communication terminal (for example, a smartphone) including the display 112 may function as the computer 200.

  Further, the computer 200 may be configured to be used in common for the plurality of HMD devices 110. According to such a configuration, for example, the same virtual space can be provided to a plurality of users, so that each user can enjoy the same application as other users in the same virtual space. In such a case, the plurality of HMD systems 100 in this embodiment may be directly connected to the computer 200 by the input / output interface 13. In addition, each function of the server 150 in the present embodiment may be implemented in the computer 200.

  In an embodiment, in the HMD system 100, a global coordinate system is set in advance. The global coordinate system has three reference directions (axes) parallel to the vertical direction in the real space, the horizontal direction orthogonal to the vertical direction, and the front-rear direction orthogonal to both the vertical direction and the horizontal direction. In the present embodiment, the global coordinate system is one of the viewpoint coordinate systems. Therefore, the horizontal direction, the vertical direction (vertical direction), and the front-rear direction in the global coordinate system are defined as an x-axis, a y-axis, and a z-axis, respectively. More specifically, in the global coordinate system, the x axis is parallel to the horizontal direction of the real space. The y axis is parallel to the vertical direction of the real space. The z axis is parallel to the front-rear direction of the real space.

  In one aspect, HMD sensor 120 includes an infrared sensor. When the infrared sensor detects the infrared rays emitted from each light source of the HMD device 110, the presence of the HMD device 110 is detected. The HMD sensor 120 further detects the position and inclination of the HMD device 110 in the real space according to the movement of the user 190 wearing the HMD device 110, based on the value of each point (each coordinate value in the global coordinate system). More specifically, the HMD sensor 120 can detect temporal changes in the position and tilt of the HMD device 110 using each value detected over time.

  The global coordinate system is parallel to the real space coordinate system. Therefore, each inclination of the HMD device 110 detected by the HMD sensor 120 corresponds to each inclination around the three axes of the HMD device 110 in the global coordinate system. The HMD sensor 120 sets the uvw visual field coordinate system to the HMD device 110 based on the inclination of the HMD device 110 in the global coordinate system. The uvw visual field coordinate system set in the HMD device 110 corresponds to a viewpoint coordinate system when the user 190 wearing the HMD device 110 views an object in the virtual space.

[Uvw visual field coordinate system]
The uvw visual field coordinate system will be described with reference to FIG. 3. FIG. 3 is a diagram conceptually showing a uvw visual field coordinate system set in HMD device 110 according to an embodiment. The HMD sensor 120 detects the position and inclination of the HMD device 110 in the global coordinate system when the HMD device 110 is activated. The processor 10 sets the uvw visual field coordinate system in the HMD device 110 based on the detected value.

  As shown in FIG. 3, the HMD device 110 sets a three-dimensional uvw visual field coordinate system with the head of the user wearing the HMD device 110 as the center (origin). More specifically, the HMD device 110 sets, as the pitch direction (u-axis), yaw direction (v-axis), and roll direction (w-axis) of its uvw visual field coordinate system, the three new directions obtained by inclining the horizontal direction, vertical direction, and front-rear direction (x-axis, y-axis, z-axis) that define the global coordinate system around the respective axes by the inclination of the HMD device 110 in the global coordinate system.

  In one aspect, when the user 190 wearing the HMD device 110 stands upright and is viewing the front, the processor 10 sets a uvw visual field coordinate system parallel to the global coordinate system in the HMD device 110. In this case, the horizontal direction (x-axis), vertical direction (y-axis), and front-rear direction (z-axis) in the global coordinate system coincide with the pitch direction (u-axis), yaw direction (v-axis), and roll direction (w-axis) of the uvw visual field coordinate system in the HMD device 110.

  After the uvw visual field coordinate system is set in the HMD device 110, the HMD sensor 120 can detect the inclination of the HMD device 110 in the set uvw visual field coordinate system based on the movement of the HMD device 110. In this case, the HMD sensor 120 detects the pitch angle (θu), yaw angle (θv), and roll angle (θw) of the HMD device 110 in the uvw visual field coordinate system as the inclination of the HMD device 110. The pitch angle (θu) represents the tilt angle of the HMD device 110 around the pitch direction in the uvw visual field coordinate system. The yaw angle (θv) represents the tilt angle of the HMD device 110 around the yaw direction in the uvw visual field coordinate system. The roll angle (θw) represents the tilt angle of the HMD device 110 around the roll direction in the uvw visual field coordinate system.
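
The uvw axes can be obtained concretely by rotating the global x, y, z axes by the detected pitch (θu), yaw (θv), and roll (θw). The rotation order (yaw, then pitch, then roll) and the use of Rodrigues' rotation formula below are assumptions chosen for illustration; the patent does not prescribe a particular formulation.

```python
# Sketch: tilt the global x, y, z axes by the HMD's inclination to get the u, v, w axes.
import math

def rotate(v, axis, angle):
    """Rotate vector v around a unit axis by angle, using Rodrigues' rotation formula."""
    x, y, z = v
    ux, uy, uz = axis
    c, s = math.cos(angle), math.sin(angle)
    dot = ux * x + uy * y + uz * z
    cross = (uy * z - uz * y, uz * x - ux * z, ux * y - uy * x)
    return tuple(c * p + s * cr + (1 - c) * dot * u
                 for p, cr, u in zip((x, y, z), cross, (ux, uy, uz)))

def uvw_axes(theta_u, theta_v, theta_w):
    u, v, w = (1, 0, 0), (0, 1, 0), (0, 0, 1)        # start from the global x, y, z axes
    for axis, ang in (((0, 1, 0), theta_v),          # yaw around the vertical axis
                      ((1, 0, 0), theta_u),          # pitch around the horizontal axis
                      ((0, 0, 1), theta_w)):         # roll around the front-rear axis
        u, v, w = rotate(u, axis, ang), rotate(v, axis, ang), rotate(w, axis, ang)
    return u, v, w

print(uvw_axes(0.0, math.radians(30), 0.0))          # axes after a 30 degree yaw
```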

  Based on the detected tilt angle of the HMD device 110, the HMD sensor 120 sets, to the HMD device 110, the uvw visual field coordinate system of the HMD device 110 after it has moved. The relationship between the HMD device 110 and the uvw visual field coordinate system of the HMD device 110 is always constant regardless of the position and inclination of the HMD device 110. When the position and inclination of the HMD device 110 change, the position and inclination of the uvw visual field coordinate system of the HMD device 110 in the global coordinate system change in conjunction with that change.

  In one aspect, the HMD sensor 120 may specify the position of the HMD device 110 in the real space as a relative position with respect to the HMD sensor 120, based on the infrared light intensity acquired from the output of the infrared sensor and the relative positional relationship between a plurality of points (for example, the distance between the points). Further, the processor 10 may determine the origin of the uvw visual field coordinate system of the HMD device 110 in the real space (global coordinate system) based on the specified relative position.

[Virtual space]
The virtual space will be further described with reference to FIG. 4. FIG. 4 is a diagram conceptually showing one aspect of expressing the virtual space 2 according to an embodiment. The virtual space 2 has a spherical structure that covers the entire 360-degree direction around the center 21. In FIG. 4, the upper half of the celestial sphere in the virtual space 2 is illustrated in order not to complicate the description. In the virtual space 2, each mesh is defined. The position of each mesh is defined in advance as coordinate values in the XYZ coordinate system defined in the virtual space 2. The computer 200 associates each partial image constituting content (still image, moving image, etc.) that can be developed in the virtual space 2 with each corresponding mesh in the virtual space 2, thereby providing the user with a virtual space image 22 that the user can visually recognize.

  In one aspect, the virtual space 2 defines an XYZ coordinate system with the center 21 as the origin. The XYZ coordinate system is, for example, parallel to the global coordinate system. Since the XYZ coordinate system is a kind of viewpoint coordinate system, the horizontal direction, vertical direction (up-down direction), and front-rear direction in the XYZ coordinate system are defined as the X axis, Y axis, and Z axis, respectively. Therefore, the X axis (horizontal direction) of the XYZ coordinate system is parallel to the x axis of the global coordinate system, the Y axis (vertical direction) of the XYZ coordinate system is parallel to the y axis of the global coordinate system, and the Z axis (front-rear direction) of the XYZ coordinate system is parallel to the z axis of the global coordinate system.

  When the HMD device 110 is activated, that is, in the initial state of the HMD device 110, the virtual camera 1 is disposed at the center 21 of the virtual space 2. The virtual camera 1 similarly moves in the virtual space 2 in conjunction with the movement of the HMD device 110 in the real space. Thereby, changes in the position and orientation of the HMD device 110 in the real space are similarly reproduced in the virtual space 2.

  As in the case of the HMD device 110, the uvw visual field coordinate system is defined for the virtual camera 1. The uvw visual field coordinate system of the virtual camera 1 in the virtual space 2 is defined so as to be linked to the uvw visual field coordinate system of the HMD device 110 in the real space (global coordinate system). Therefore, when the inclination of the HMD device 110 changes, the inclination of the virtual camera 1 also changes accordingly. The virtual camera 1 can also move in the virtual space 2 in conjunction with the movement of the user wearing the HMD device 110 in the real space.

  Since the orientation of the virtual camera 1 is determined according to the position and inclination of the virtual camera 1, the reference line of sight (reference line of sight 5), which serves as a reference when the user visually recognizes the virtual space image 22, is determined according to the orientation of the virtual camera 1. The processor 10 of the computer 200 defines the visual field region 23 in the virtual space 2 based on the reference line of sight 5. The visual field region 23 corresponds to the field of view of the user wearing the HMD device 110 in the virtual space 2.

  The gaze direction of the user 190 detected by the gaze sensor 140 is a direction in the viewpoint coordinate system when the user 190 visually recognizes the object. The uvw visual field coordinate system of the HMD device 110 is equal to the viewpoint coordinate system when the user 190 visually recognizes the display 112. The uvw visual field coordinate system of the virtual camera 1 is linked to the uvw visual field coordinate system of the HMD device 110. Therefore, the HMD system 100 according to a certain aspect can regard the line-of-sight direction of the user 190 detected by the gaze sensor 140 as the line-of-sight direction of the user in the uvw visual field coordinate system of the virtual camera 1.

[User's line of sight]
With reference to FIG. 5, determination of the user's line-of-sight direction will be described. FIG. 5 is a diagram showing the head of user 190 wearing HMD device 110 according to an embodiment from above.

  In one aspect, gaze sensor 140 detects each line of sight of user 190's right eye and left eye. In a certain aspect, when the user 190 is looking near, the gaze sensor 140 detects the lines of sight R1 and L1. In another aspect, when the user 190 is looking far away, the gaze sensor 140 detects the lines of sight R2 and L2. In this case, the angle formed by the lines of sight R2 and L2 with respect to the roll direction w is smaller than the angle formed by the lines of sight R1 and L1 with respect to the roll direction w. The gaze sensor 140 transmits the detection result to the computer 200.

  When the computer 200 receives the detection values of the lines of sight R1 and L1 from the gaze sensor 140 as the line-of-sight detection result, the computer 200 identifies the point of sight N1 that is the intersection of the lines of sight R1 and L1 based on the detection value. On the other hand, when the detected values of the lines of sight R2 and L2 are received from the gaze sensor 140, the computer 200 specifies the intersection of the lines of sight R2 and L2 as the point of sight. The computer 200 specifies the line-of-sight direction N0 of the user 190 based on the specified position of the gazing point N1. For example, the computer 200 detects the direction in which the straight line passing through the midpoint of the straight line connecting the right eye R and the left eye L of the user 190 and the gazing point N1 extends as the line-of-sight direction N0. The line-of-sight direction N0 is a direction in which the user 190 is actually pointing the line of sight with both eyes. The line-of-sight direction N0 corresponds to the direction in which the user 190 actually directs his / her line of sight with respect to the field-of-view area 23.
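
A concrete way to compute the line-of-sight direction N0 described above is to intersect the two lines of sight to obtain the gazing point N1 and then take the direction from the midpoint between both eyes toward N1. The 2D (top-view) simplification and the sample eye positions and directions below are assumptions for illustration.

```python
# Sketch: gazing point N1 as the intersection of the right-eye and left-eye lines of sight,
# and line-of-sight direction N0 from the midpoint between both eyes toward N1 (top view).
import math

def gaze_direction(right_eye, right_dir, left_eye, left_dir):
    """Positions and direction vectors are (x, z) pairs; returns (N1, unit vector N0)."""
    (rx, rz), (lx, lz) = right_eye, left_eye
    (rdx, rdz), (ldx, ldz) = right_dir, left_dir
    # Solve right_eye + t * right_dir == left_eye + s * left_dir for t.
    denom = rdx * ldz - rdz * ldx
    t = ((lx - rx) * ldz - (lz - rz) * ldx) / denom
    n1 = (rx + t * rdx, rz + t * rdz)
    mid = ((rx + lx) / 2.0, (rz + lz) / 2.0)
    dx, dz = n1[0] - mid[0], n1[1] - mid[1]
    norm = math.hypot(dx, dz)
    return n1, (dx / norm, dz / norm)

# Eyes 6 cm apart, both converging on a point roughly 0.5 m ahead.
n1, n0 = gaze_direction((0.03, 0.0), (-0.06, 0.998), (-0.03, 0.0), (0.06, 0.998))
print(n1, n0)   # N1 near (0.0, 0.5), N0 pointing straight ahead
```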

  In another aspect, HMD system 100 may include a television broadcast receiving tuner. According to such a configuration, the HMD system 100 can display a television program in the virtual space 2.

  In still another aspect, the HMD system 100 may include a communication circuit for connecting to the Internet or a call function for connecting to a telephone line.

[Visibility area]
With reference to FIGS. 6 and 7, the visual field region 23 will be described. FIG. 6 is a diagram illustrating a YZ cross section of the visual field region 23 viewed from the X direction in the virtual space 2. FIG. 7 is a diagram illustrating an XZ cross section of the visual field region 23 viewed from the Y direction in the virtual space 2.

  As shown in FIG. 6, the visual field region 23 in the YZ cross section includes a region 24. The region 24 is defined by the reference line of sight 5 of the virtual camera 1 and the YZ cross section of the virtual space 2. The processor 10 defines a range including the polar angle α around the reference line of sight 5 in the virtual space 2 as the region 24.

  As shown in FIG. 7, the visual field region 23 in the XZ cross section includes a region 25. The region 25 is defined by the reference line of sight 5 and the XZ cross section of the virtual space 2. The processor 10 defines a range including the azimuth angle β around the reference line of sight 5 in the virtual space 2 as a region 25.
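
Taken together, the two cross sections describe a field-of-view test: a point belongs to the visual field region 23 when its vertical offset from the reference line of sight 5 is within the polar angle α and its horizontal offset is within the azimuth angle β. Treating α and β as half-angles measured from the reference line of sight, and the concrete values below, are assumptions for illustration.

```python
# Sketch: test whether a point lies inside the visual field region 23 defined by
# the polar angle alpha (vertical) and azimuth angle beta (horizontal).
import math

def in_view_region(camera, reference_dir, point, alpha, beta):
    """camera, point: (x, y, z); reference_dir: unit vector of the reference line of sight 5."""
    dx, dy, dz = (p - c for p, c in zip(point, camera))
    # Horizontal (XZ-plane) angle between the point and the reference line of sight.
    yaw_point = math.atan2(dx, dz)
    yaw_ref = math.atan2(reference_dir[0], reference_dir[2])
    horizontal = abs(math.atan2(math.sin(yaw_point - yaw_ref), math.cos(yaw_point - yaw_ref)))
    # Vertical (elevation) angle relative to the reference line of sight.
    pitch_point = math.atan2(dy, math.hypot(dx, dz))
    pitch_ref = math.atan2(reference_dir[1], math.hypot(reference_dir[0], reference_dir[2]))
    vertical = abs(pitch_point - pitch_ref)
    return horizontal <= beta and vertical <= alpha

# A widget 1 m ahead and slightly above the camera, with 45 / 60 degree half-angles.
print(in_view_region((0.0, 1.5, 0.0), (0.0, 0.0, 1.0), (0.2, 1.8, 1.0),
                     alpha=math.radians(45), beta=math.radians(60)))   # True
```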

  In one aspect, the HMD system 100 provides a virtual space to the user 190 by causing the display 112 to display a view field image based on a signal from the computer 200. The visual field image corresponds to the portion of the virtual space image 22 that is superimposed on the visual field region 23. When the user 190 moves the HMD device 110 worn on the head, the virtual camera 1 also moves in conjunction with the movement, and as a result the position of the visual field region 23 in the virtual space 2 changes. The view image displayed on the display 112 is thereby updated to the portion of the virtual space image 22 that is superimposed on the visual field region 23 in the direction in which the user faces in the virtual space 2. The user can thus visually recognize a desired direction in the virtual space 2.

  The user 190 can visually recognize only the virtual space image 22 developed in the virtual space 2 without visually recognizing the real world while wearing the HMD device 110. Therefore, the HMD system 100 can give the user a high sense of immersion in the virtual space 2.

  In one aspect, the processor 10 can move the virtual camera 1 in the virtual space 2 in conjunction with movement of the user 190 wearing the HMD device 110 in real space. In this case, the processor 10 specifies an image area (that is, the view area 23 in the virtual space 2) projected on the display 112 of the HMD device 110 based on the position and orientation of the virtual camera 1 in the virtual space 2. That is, the visual field of the user 190 in the virtual space 2 is defined by the virtual camera 1.

  According to an embodiment, the virtual camera 1 preferably includes two virtual cameras, that is, a virtual camera for providing an image for the right eye and a virtual camera for providing an image for the left eye. Moreover, it is preferable that an appropriate parallax be set for the two virtual cameras so that the user 190 can recognize the three-dimensional virtual space 2. In the present embodiment, the technical idea according to the present disclosure is illustrated on the assumption that the virtual camera 1 includes two such virtual cameras and that the roll direction (w) generated by combining the roll directions of the two virtual cameras is adapted to the roll direction (w) of the HMD device 110.

[controller]
An example of the controller 160 will be described with reference to FIG. 8. FIG. 8 is a diagram showing a schematic configuration of controller 160 according to an embodiment.

  As shown in the state (A) of FIG. 8, in one aspect, the controller 160 may include a right controller 160R and a left controller. The right controller 160R is operated with the right hand of the user 190. The left controller is operated with the left hand of the user 190. In one aspect, the right controller 160R and the left controller are configured symmetrically as separate devices. Therefore, the user 190 can freely move the right hand holding the right controller 160R and the left hand holding the left controller. In another aspect, the controller 160 may be an integrated controller that receives operations of both hands. Hereinafter, the right controller 160R will be described.

  The right controller 160R includes a grip 30, a frame 31, and a top surface 32. The grip 30 is configured to be held by the right hand of the user 190. For example, the grip 30 can be held by the palm of the right hand of the user 190 and three fingers (middle finger, ring finger, little finger).

  The grip 30 includes buttons 33 and 34 and a motion sensor 130. The button 33 is disposed on the side surface of the grip 30 and receives an operation with the middle finger of the right hand. The button 34 is disposed in front of the grip 30 and accepts an operation with the index finger of the right hand. In one aspect, the buttons 33 and 34 are configured as trigger buttons. The motion sensor 130 is built in the housing of the grip 30. Note that when the operation of the user 190 can be detected from around the user 190 by a camera or other device, the grip 30 may not include the motion sensor 130.

  The frame 31 includes a plurality of infrared LEDs 35 arranged along its circumferential direction. The infrared LEDs 35 emit infrared light in accordance with the progress of a program during the execution of the program using the controller 160. The infrared rays emitted from the infrared LEDs 35 can be used to detect the positions and postures (tilt, orientation) of the right controller 160R and the left controller. In the example shown in FIG. 8, infrared LEDs 35 arranged in two rows are shown, but the number of rows is not limited to that shown in FIG. 8; an array of one or more rows may be used.

  The top surface 32 includes buttons 36 and 37 and an analog stick 38. The buttons 36 and 37 are configured as push buttons. The buttons 36 and 37 receive an operation with the thumb of the right hand of the user 190. In one aspect, the analog stick 38 accepts an operation in an arbitrary direction of 360 degrees from the initial position (neutral position). The operation includes, for example, an operation for moving an object arranged in the virtual space 2.

  In one aspect, the right controller 160R and the left controller include a battery for driving the infrared LED 35 and other members. The battery includes, but is not limited to, a rechargeable type, a button type, a dry battery type, and the like. In another aspect, the right controller 160R and the left controller may be connected to a USB interface of the computer 200, for example. In this case, the right controller 160R and the left controller do not require batteries.

  As shown in the states (A) and (B) of FIG. 8, for example, the yaw, roll, and pitch directions are defined for the right hand 810 of the user 190. When the user 190 extends the thumb and index finger, the direction in which the thumb extends is defined as the yaw direction, the direction in which the index finger extends is defined as the roll direction, and the direction perpendicular to the plane defined by the yaw-direction axis and the roll-direction axis is defined as the pitch direction.

[Control device for HMD device]
The control device of the HMD device 110 will be described with reference to FIG. 9. In one embodiment, the control device is realized by a computer 200 having a known configuration. FIG. 9 is a block diagram showing a computer 200 according to an embodiment as a module configuration.

  As shown in FIG. 9, the computer 200 includes a display control module 220, a virtual space control module 230, a memory module 240, and a communication control module 250. The display control module 220 includes a virtual camera control module 221, a visual field region determination module 222, a visual field image generation module 223, and a reference visual line identification module 224 as submodules. The virtual space control module 230 includes a virtual space definition module 231, a virtual object control module 232, an operation object control module 233, and a prediction control module 234 as submodules.

  In an embodiment, the display control module 220 and the virtual space control module 230 are realized by the processor 10. In another embodiment, multiple processors 10 may operate as the display control module 220 and the virtual space control module 230. The memory module 240 is realized by the memory 11 or the storage 12. The communication control module 250 is realized by the communication interface 14.
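
The module composition above can be pictured as a simple object graph. The following sketch uses plain classes as stand-ins for the processor-implemented modules; the attribute names mirror the text, but the class layout itself is an illustrative assumption.

```python
# Sketch of the module configuration of computer 200; placeholders, not the patent's API.
from dataclasses import dataclass, field

@dataclass
class DisplayControlModule:                      # module 220
    virtual_camera_control: object = None        # submodule 221
    view_region_determination: object = None     # submodule 222
    view_image_generation: object = None         # submodule 223
    reference_line_of_sight: object = None       # submodule 224

@dataclass
class VirtualSpaceControlModule:                 # module 230
    virtual_space_definition: object = None      # submodule 231
    virtual_object_control: object = None        # submodule 232
    operation_object_control: object = None      # submodule 233
    prediction_control: object = None            # submodule 234

@dataclass
class Computer200:
    display_control: DisplayControlModule = field(default_factory=DisplayControlModule)
    virtual_space_control: VirtualSpaceControlModule = field(default_factory=VirtualSpaceControlModule)
    memory_module: dict = field(default_factory=dict)   # realized by the memory 11 or storage 12

computer = Computer200()
print(computer.virtual_space_control)
```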

  In one aspect, the display control module 220 controls image display on the display 112 of the HMD device 110. The virtual camera control module 221 arranges the virtual camera 1 in the virtual space 2 and controls the behavior, orientation, and the like of the virtual camera 1. The view area determination module 222 defines the view area 23 according to the orientation of the head of the user wearing the HMD device 110. The view image generation module 223 generates a view image to be displayed on the display 112 based on the determined view area 23. Further, the view image generation module 223 determines the display mode of the character object included in the view image. The reference line-of-sight identifying module 224 identifies the line of sight of the user 190 based on the signal from the gaze sensor 140.

  The virtual space control module 230 controls the virtual space 2 provided to the user 190. The virtual space definition module 231 defines the virtual space 2 in the HMD system 100 by generating virtual space data representing the virtual space 2.

The virtual object control module 232 generates virtual objects arranged in the virtual space 2. Further, the virtual object control module 232 controls the movement (movement, state change, etc.) of the virtual objects and the character object in the virtual space 2. The virtual objects may include objects such as forests, mountains, and other landscape arranged according to the progress of the game story, animals, and widgets such as UI (User Interface) screens. The character object is an object associated with the user wearing the HMD device 110 in the virtual space 2, and may be referred to as an avatar. In the present disclosure, an object including an avatar is referred to as a character object.

  Further, the movement target is a virtual object that is set so as to be visible in the view field image by moving in the virtual space according to the movement of the user (HMD device 110). For example, a widget such as a UI (User Interface) screen that can be operated by the user is preferably visible in the view field image even when the user moves. In addition to widgets such as UI screens, other movement targets include enemy character objects, character objects associated with other users, and control objects such as stage objects. The stage object is, for example, an object for defining the appearance of an object (a light or the like for defining its texture) or an object for defining the movement path of an enemy character object. The widget, the enemy character object, and the character object of another user are information that is typically displayed so as to be visible. On the other hand, the stage object is control information that is typically invisible. By moving invisible control information such as a stage object, the user can recognize control based on that information in the view field image.

  The operation object control module 233 arranges an operation object for manipulating an object arranged in the virtual space 2 in the virtual space 2. In one aspect, the operation objects include, for example, a hand object (virtual hand) corresponding to the hand of the user wearing the HMD device 110, a finger object corresponding to the user's finger, a stick object corresponding to the stick used by the user, and the like. May be included. When the operation object is a finger object, in particular, the operation object corresponds to the axis portion in the direction (axial direction) indicated by the finger.

  The virtual space control module 230 detects a collision when each of the objects arranged in the virtual space 2 collides with another object. For example, the virtual space control module 230 can detect the timing at which a certain object and another object touch each other, and performs a predetermined process when the detection is made. The virtual space control module 230 can detect the timing at which objects that have been touching separate, and performs a predetermined process when the detection is made. The virtual space control module 230 can also detect that objects are in a touching state. Specifically, when the operation object touches another object (for example, a virtual object arranged by the virtual object control module 232), the operation object control module 233 detects that the operation object has touched the other object, and performs a predetermined process.

  The memory module 240 holds data used for the computer 200 to provide the virtual space 2 to the user 190. In one aspect, the memory module 240 holds space information 241, object information 242, and user information 243. The space information 241 includes, for example, one or more templates defined for providing the virtual space 2. The object information 242 includes, for example, content reproduced in the virtual space 2, information for arranging objects used in the content, and attribute information such as drawing data and size information of the character object. The content can include, for example, content representing a game or a scene similar to the real world. Further, the object information 242 includes information indicating relative priorities regarding the placement positions of movement targets such as virtual objects, character objects associated with other users, and widgets such as menu screens. The user information 243 includes, for example, a program for causing the computer 200 to function as a control device of the HMD system 100, an application program that uses each content held in the object information 242, and the like.
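
The relative priorities held in the object information 242 are what later drive the destination correction (steps S111 to S112 in FIG. 11). A minimal sketch of such a priority table follows; the table format and the concrete values are assumptions for illustration.

```python
# Sketch: relative placement priorities held in the object information 242 (values assumed).
OBJECT_PRIORITY = {
    "widget_menu": 30,            # widgets such as menu screens
    "other_user_character": 20,   # character objects associated with other users
    "virtual_object": 10,         # ordinary virtual objects such as scenery
}

def higher_priority(a: str, b: str) -> str:
    """Return whichever object should be displayed in front (nearer to the character object)."""
    return a if OBJECT_PRIORITY.get(a, 0) >= OBJECT_PRIORITY.get(b, 0) else b

print(higher_priority("widget_menu", "virtual_object"))   # widget_menu
```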

  Data and programs stored in the memory module 240 are input by the user of the HMD device 110. Alternatively, the processor 10 downloads a program or data from a computer (for example, the server 150) operated by a provider providing the content, and stores the downloaded program or data in the memory module 240.

  The communication control module 250 can communicate with the server 150 and other information communication devices via the network 19.

  In an aspect, the display control module 220 and the virtual space control module 230 may be realized using, for example, Unity (registered trademark) provided by Unity Technologies. In another aspect, the display control module 220 and the virtual space control module 230 can also be realized as a combination of circuit elements that realize each process.

  Processing in the computer 200 is realized by hardware and by software executed by the processor 10. Such software may be stored in advance in the memory module 240 such as a hard disk. The software may be stored in a CD-ROM or other non-volatile computer-readable data recording medium and distributed as a program product. Alternatively, the software may be provided as a downloadable program product by an information provider connected to the Internet or another network. Such software is read from the data recording medium by an optical disk drive or other data reader, or downloaded from the server 150 or another computer via the communication control module 250, and then temporarily stored in the memory module 240. The software is read from the memory module 240 by the processor 10 and stored in the RAM in the form of an executable program. The processor 10 executes the program.

  The hardware configuring the computer 200 shown in FIG. 9 is general. Therefore, it can be said that the most essential part according to the present embodiment is a program stored in the computer 200. Since the hardware operation of computer 200 is well known, detailed description will not be repeated.

  The data recording medium is not limited to a CD-ROM, an FD (Flexible Disk), or a hard disk, and may be a non-volatile data recording medium that carries a program in a fixed manner, such as a magnetic tape, a cassette tape, an optical disc (MO (Magnetic Optical Disc) / MD (Mini Disc) / DVD (Digital Versatile Disc)), an IC (Integrated Circuit) card (including a memory card), an optical card, a mask ROM, an EPROM (Erasable Programmable Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), or a flash ROM.

  The program here may include not only a program directly executable by the processor 10, but also a program in a source program format, a compressed program, an encrypted program, and the like.

[Control structure]
With reference to FIG. 10, a control structure of computer 200 according to the present embodiment will be described. FIG. 10 is a flowchart showing processing executed by the HMD system 100 used by the user 190 to provide the virtual space 2 to the user 190.

  In step S1, the processor 10 of the computer 200, as the virtual space definition module 231, specifies virtual space data including the character object associated with the user and defines the virtual space 2. Further, the processor 10, as the virtual object control module 232 and the operation object control module 233, arranges virtual objects (including movement targets such as widgets) and operation objects in the virtual space 2. The virtual space definition module 231 defines these so that their operation in the virtual space 2 can be controlled.

  In step S2, the processor 10 initializes the virtual camera 1 as the virtual camera control module 221. For example, the processor 10 places the virtual camera 1 at a predetermined center point in the virtual space 2 in the work area of the memory, and directs the line of sight of the virtual camera 1 in the direction in which the user 190 is facing.

  In step S3, the processor 10 generates view image data for displaying an initial view image as the view image generation module 223. The generated view image data is sent to the HMD device 110 by the communication control module 250 via the view image generation module 223.

  In step S4, the display 112 of the HMD device 110 displays a view field image based on the signal received from the computer 200. The user 190 wearing the HMD device 110 can recognize the virtual space 2 when visually recognizing the view field image.

  In step S5, the HMD sensor 120 detects the position and inclination of the HMD device 110 based on a plurality of infrared lights transmitted from the HMD device 110. Further, the sensor 114 detects the acceleration of the movement of the HMD device 110. The detection result is sent to the computer 200 as motion detection data.

  In step S6, the processor 10 specifies the visual field direction of the user 190 wearing the HMD device 110 as the visual field region determination module 222 based on the position and inclination of the HMD device 110. Here, the processor 10 specifies the virtual viewpoint as the viewing direction. The virtual viewpoint is virtually set at a position at a preset distance from the virtual camera 1 in the direction of the line of sight from the virtual camera 1. The processor 10 executes the application program and places an object in the virtual space 2 based on instructions included in the application program.
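
The placement of the virtual viewpoint described here is a simple offset along the line of sight. A minimal sketch, assuming a hypothetical preset distance of 2 m:

```python
# Sketch: virtual viewpoint at a preset distance from the virtual camera 1 along its line of sight.
def virtual_viewpoint(camera_pos, gaze_dir, distance=2.0):
    """camera_pos: (x, y, z); gaze_dir: unit vector of the line of sight; distance: preset value."""
    return tuple(c + d * distance for c, d in zip(camera_pos, gaze_dir))

print(virtual_viewpoint((0.0, 1.5, 0.0), (0.0, 0.0, 1.0)))   # (0.0, 1.5, 2.0)
```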

  In step S7, the processor 10 generates view image data for displaying a view image corresponding to the movement of the HMD device 110 as the view image generation module 223, and outputs the generated view image data to the HMD device 110.

  In step S8, the display 112 of the HMD device 110 updates the view image based on the received view image data, and displays the updated view image.

  The processes in steps S5 to S10 described above are repeatedly executed. Therefore, the virtual viewpoint moves with the movement of the HMD device 110, and when the movement of the HMD device 110 is completed, the movement of the virtual viewpoint is completed. Furthermore, in the present embodiment, when the HMD device 110 starts to move, the movement destination (first position) of the virtual viewpoint when the movement is completed is predicted. Then, until the movement of the HMD device 110 is completed, the processor 10 specifies a movement destination (second position) based on the movement destination of the virtual viewpoint, and moves the movement target.

  Next, the virtual viewpoint prediction process (prediction of the first position) and the movement target destination identification process (identification of the second position) performed in steps S5 to S8 in FIG. 10 will be described. FIG. 11 is a flowchart showing the processing of the processor 10 operating as the prediction control module, which predicts the first position as the destination of the virtual viewpoint and identifies the second position as the destination of the movement target based on the prediction.

  In step S101, the sensor 114 detects the acceleration or the like according to the movement of the HMD device 110, and the processor 10 detects that the HMD device 110 has moved. Step S101 corresponds to step S5 in FIG.

  In step S102, the processor 10 determines whether or not the acceleration of the HMD device 110 detected by the sensor 114 is equal to or greater than a threshold value. Here, the sensor 114 detects the angular acceleration of the yaw angle of the HMD device 110.

  In step S103, the processor 10 predicts the destination of the virtual viewpoint based on the acceleration of the HMD device 110 detected by the sensor 114.

  In step S104, the processor 10 specifies the movement destination of the movement target based on the predicted destination of the virtual viewpoint.
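
Steps S102 to S104 can be pictured as: compare the yaw angular acceleration with a threshold, predict how far the head (and hence the virtual viewpoint) will turn, and derive the movement target's destination from that prediction. The sketch below is one possible reading; the threshold value, the proportional prediction model with a clamp, and the halfway placement of the target are all assumptions, since the patent does not fix a concrete formula here.

```python
# Sketch of steps S102 to S104 under assumed models; constants and formulas are illustrative.
import math

THRESHOLD = 2.0   # rad/s^2, assumed threshold for the yaw angular acceleration (step S102)

def predict_yaw_delta(yaw_acceleration, gain=0.15, max_delta=math.pi / 2):
    """Step S103 (assumed model): a larger detected yaw angular acceleration is taken to
    mean a larger head turn; the predicted turn angle is clamped to a maximum."""
    if abs(yaw_acceleration) < THRESHOLD:
        return 0.0
    return math.copysign(min(gain * abs(yaw_acceleration), max_delta), yaw_acceleration)

def first_position(camera_pos, current_yaw, yaw_delta, distance=2.0):
    """Predicted destination of the virtual viewpoint (first position)."""
    yaw = current_yaw + yaw_delta
    return (camera_pos[0] + distance * math.sin(yaw),
            camera_pos[1],
            camera_pos[2] + distance * math.cos(yaw))

def second_position(character_pos, first_pos, ratio=0.5):
    """Step S104: destination of the movement target, here placed between the character
    object and the predicted virtual viewpoint."""
    return tuple(c + (f - c) * ratio for c, f in zip(character_pos, first_pos))

p1 = first_position((0.0, 1.5, 0.0), current_yaw=0.0, yaw_delta=predict_yaw_delta(3.5))
p2 = second_position((0.0, 1.5, 0.0), p1)
print(p1, p2)
```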

  In step S105, the processor 10 determines whether or not the position of the movement target before movement is included in the view field image based on the predicted movement destination of the virtual viewpoint. If it is determined in step S105 that the position of the movement target before movement is included in the view field image (step S105: YES), in step S106 the processor 10 determines whether or not the movement target is to be arranged at a fixed arrangement position in the view field image. Note that the fixed arrangement position indicates a positional relationship set with respect to the virtual viewpoint. Therefore, for example, when a UI screen is set to be displayed at the upper left of the view field image, the UI screen object is placed at a position in the virtual space determined based on the position of the virtual viewpoint so that the UI screen is always displayed at that position. Information regarding such an arrangement position (information indicating the positional relationship between an object to be fixedly arranged and the virtual viewpoint) is stored in the object information 242 in advance.

  When the processor 10 determines that the movement target is to be arranged at a fixed arrangement position in the view field image (step S106: YES), in step S107 the processor 10 moves the movement target to the position in the virtual space corresponding to the fixed arrangement position. Note that the fixed arrangement position is the same as the movement destination specified in step S104. The same processes as steps S109 to S113 may be performed as necessary. When the processor 10 determines that the movement target is not to be arranged at a fixed arrangement position (step S106: NO), the movement process is not performed in step S108.

  On the other hand, when the processor 10 determines in step S105 that the pre-movement position of the movement target is not included in the view field image (step S105: NO), in step S109 the processor 10 determines whether another virtual object or other object is present at the destination of the movement target. More specifically, the processor 10 determines whether another virtual object or the like exists on the line connecting the character object associated with the user and the destination.
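
  The patent does not specify how this geometry is represented; the following sketch assumes, purely for illustration, that objects are approximated by circles in the horizontal plane and tests whether any of them intersects the segment from the character object to the movement destination (step S109).

```python
# Sketch of step S109: is another object on the segment between the character and the
# movement destination? Objects are approximated by 2D bounding circles for illustration.
import math


def distance_point_to_segment(p, a, b) -> float:
    """Shortest distance from point p to segment a-b (2D)."""
    ax, az = a
    bx, bz = b
    px, pz = p
    abx, abz = bx - ax, bz - az
    ab_len2 = abx * abx + abz * abz
    if ab_len2 == 0.0:
        return math.hypot(px - ax, pz - az)
    t = max(0.0, min(1.0, ((px - ax) * abx + (pz - az) * abz) / ab_len2))
    cx, cz = ax + t * abx, az + t * abz
    return math.hypot(px - cx, pz - cz)


def objects_on_line(character_pos, destination, objects):
    """Return the objects whose bounding circle intersects the character-to-destination segment."""
    return [o for o in objects
            if distance_point_to_segment(o["pos"], character_pos, destination) <= o["radius"]]


# Example: virtual object K3 sits between the character and the widget's destination P1.
k3 = {"name": "K3", "pos": (1.0, 1.0), "radius": 0.5}
print(objects_on_line((0.0, 0.0), (2.0, 2.0), [k3]))   # -> [K3]
```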

  When the processor 10 determines in step S110 that another virtual object is present (step S110: YES), in step S111 the processor 10 determines the relative priority between the movement target and the other virtual object based on the object information 242. The object information 242 stores the relative priorities of the movement target and other virtual objects in advance. In step S112, the processor 10 corrects the destination of the movement target in the virtual space so that the object with the relatively higher priority is displayed in front when the objects are displayed in the view field image.
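
  Steps S111 and S112 can be pictured with the following sketch, which assumes that "displayed in front" is achieved by re-placing the higher-priority object on the line between the character object and the occluding object; the priority values and the margin are illustrative stand-ins for what would be read from the object information 242.

```python
# Sketch of steps S111-S112: if a higher-priority movement target would be hidden behind
# another object, pull its destination to a point between the character and that object.


def correct_destination(character_pos, destination, target_priority,
                        occluder_pos, occluder_priority, margin=0.5):
    """Return a (possibly corrected) destination for the movement target."""
    if target_priority <= occluder_priority:
        return destination                      # the other object may stay in front
    # Place the target on the character->occluder line, just in front of the occluder.
    cx, cz = character_pos
    ox, oz = occluder_pos
    dx, dz = ox - cx, oz - cz
    length = (dx * dx + dz * dz) ** 0.5
    if length == 0.0:
        return destination
    t = max(0.0, (length - margin) / length)
    return (cx + t * dx, cz + t * dz)


# Example: widget W (priority 2) vs. virtual object K3 (priority 1): the destination P1
# behind K3 is corrected to a point P2 between the character and K3.
print(correct_destination((0.0, 0.0), (3.0, 3.0), 2, (1.0, 1.0), 1))
```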

  When the processor 10 determines in step S110 that there is no other virtual object (step S110: NO), the processor 10 proceeds to step S113. In step S113, the processor 10 moves the movement target to the destination.

  In steps S7 and S8 shown in FIG. 10, once the movement target has been moved to the destination based on the prediction of the destination of the virtual viewpoint, the processor 10 can generate a view field image in which the movement target has been moved to that destination and display it on the display 112.

  In FIG. 11, the processes in steps S105 to S108 are optional and may be omitted.

  Next, the positional relationship between the character object associated with the user and the widget W to be moved, together with the corresponding view field images, will be described with reference to FIGS. 12 to 16. In FIG. 12, the state (A) shows the positional relationship between the virtual objects K1 and K2, the virtual viewpoint S, the widget W, and the character object A in the virtual space. As shown in the state (A), the widget W is arranged between the virtual objects K1 and K2 and the character object A in the virtual space. The widget W here is an interface for the user to perform operations in the virtual space using the character object. The virtual viewpoint S is set at a position in the virtual space based on the line-of-sight direction of the character object A, and the view field image is generated over a range determined based on the virtual viewpoint. Furthermore, the arrangement position of the widget W is determined based on the position of the virtual viewpoint S. In the state (A), the widget W is arranged on the line of sight of the character object A.

  The state (B) shows the view field image from the character object associated with the user. As shown in the state (B), the view field image contains the widget W, the virtual objects K1 and K2, and the virtual viewpoint S. The virtual viewpoint S is drawn here for convenience and need not be displayed in the actual view field image. In the state (B), the widget W to be moved is displayed in front of the virtual objects K1 and K2 in accordance with the positional relationship shown in the state (A).

  FIG. 13 is a diagram illustrating a state when the character object A starts to turn right (begins to move) according to the movement of the HMD device 110. The state (A) is a diagram showing the positional relationship between the character object, the virtual objects K1 and K2, the widget W, and the virtual viewpoint S in the virtual space at that time.

  As shown in the state (A), when the character object A starts to turn to the right, the movement destination of the virtual viewpoint is predicted. Then, the destination of the widget W is specified based on the destination of the virtual viewpoint. In the state (A), the destination of the widget W is specified at the position indicated by the destination P1.

  Assume that a virtual object K3 exists between the destination P1 and the character object A in the virtual space. In this case, the relative priority between the virtual object K3 and the widget W is determined. When the priority of the widget W is higher, the destination of the widget W is corrected from P1 to P2 so that the widget W is arranged between the character object A and the virtual object K3.

  The state (B) shows the view field image at that time. As shown in the state (B), when the user starts to move, the destination of the virtual viewpoint has already been predicted, so the widget W is displayed at the position specified from that destination. In the state (B), the widget W is displayed on the right side of the virtual viewpoint.

  FIG. 14 shows a state and a view image when the movement of the character object A is completed. As shown in the state (A), the movement of the character object A is completed, and the widget W is arranged between the character object A and the virtual object K3.

  In the state (B), a view field image generated based on the state (A) is shown. As shown in the state (B), the widget W is displayed in front of the virtual object K3.

  FIG. 15 shows a state in which another character object B and a virtual object K4 are located between the character object A and the destination of the widget W. Here, the priority of the other character object B is set higher than those of the widget W and the virtual object K3, and the widget W is set to have a higher priority than the virtual object K3.

  In such a case, as shown in the state (A), the destination of the widget W is corrected from the destination P1 to the destination P2 so that, when viewed from the character object A, the other character object B, the widget W, and the virtual object K3 are arranged in this order.
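
  The ordering in FIG. 15 can be read as sorting the objects that share the character's line of sight by priority and spacing them out from the character object accordingly. The following sketch assumes such a scheme; it is one possible reading, and the distances and priority values are made up for illustration.

```python
# Sketch: arranging objects that share the character's line of sight so that
# higher priority means closer to the character (and thus displayed in front).


def arrange_by_priority(character_pos, direction, objects, start=1.0, spacing=1.0):
    """Place objects along 'direction' from the character, highest priority nearest."""
    dx, dz = direction
    norm = (dx * dx + dz * dz) ** 0.5
    dx, dz = dx / norm, dz / norm
    ordered = sorted(objects, key=lambda o: o["priority"], reverse=True)
    placed = {}
    for i, obj in enumerate(ordered):
        d = start + i * spacing
        placed[obj["name"]] = (character_pos[0] + d * dx, character_pos[1] + d * dz)
    return placed


# FIG. 15 example: character object B (priority 3) in front, widget W (2), virtual object K3 (1).
objects = [{"name": "K3", "priority": 1}, {"name": "W", "priority": 2}, {"name": "B", "priority": 3}]
print(arrange_by_priority((0.0, 0.0), (1.0, 0.0), objects))
# -> B nearest to the character, then W, then K3 along the gaze direction.
```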

  The state (B) shows the view field image at that time. Based on the positional relationship in the state (A), a view field image is generated in which the other character object B is displayed in front and the widget W is displayed behind it.

  FIG. 16 shows a case in which the widget W is not moved. The state (A) of FIG. 16 shows the character object A starting to turn to the right and then completing the movement. When the character object A starts to turn to the right, the destination of the virtual viewpoint S is predicted. If, at this time, the pre-movement widget W is included in the view field image based on the predicted destination of the virtual viewpoint S, the widget W is not moved. Therefore, as shown in the state (B), the position of the widget W in the virtual space does not change.
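
  The decision not to move the widget can be reduced, for illustration, to a horizontal field-of-view test: if the widget's current position lies within the view predicted from the first position, it is left in place (steps S105 and S108). The 90-degree field-of-view angle and the helper names below are assumptions.

```python
# Sketch of the 'do not move' decision (FIG. 16, step S105): a simple horizontal FOV test
# stands in for the real view-field computation.
import math


def in_predicted_view(widget_pos, viewpoint_pos, predicted_yaw, fov_deg=90.0) -> bool:
    wx, wz = widget_pos
    vx, vz = viewpoint_pos
    angle_to_widget = math.atan2(wz - vz, wx - vx)
    diff = (angle_to_widget - predicted_yaw + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= math.radians(fov_deg) / 2


widget = (1.0, 0.3)
if in_predicted_view(widget, (0.0, 0.0), predicted_yaw=math.radians(20)):
    print("widget stays at", widget)          # step S108: no movement process
else:
    print("move widget to its second position")
```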

  Next, the control algorithm of the prediction control module 234 will be described. The processor 10 operates as the prediction control module 234 based on the acceleration detected by the sensor 114. FIG. 17 is a diagram illustrating an example of a graph of the acceleration detected by the sensor 114. The sensor 114 can detect acceleration in the pitch direction (u-axis), yaw direction (v-axis), and roll direction (w-axis) of the uvw visual field coordinate system of the HMD device 110. FIG. 17 shows the acceleration in the yaw direction, but the directions may be combined.

  As shown in FIG. 17, when the user wearing the HMD device 110 turns in some direction, the detected acceleration gradually increases; after the acceleration peaks it turns to a decreasing trend, finally approaching zero.

  The processor 10 decides to perform the prediction process when the acceleration detected by the sensor 114 is equal to or greater than the reference value. When the acceleration is less than the reference value, the processor 10 does not perform the prediction process, because the movement of the HMD device 110 is regarded as mere shaking.

  On the other hand, when the acceleration detected by the sensor 114 is a negative value, this indicates that the movement of the HMD device 110 is about to be completed. The processor 10 can therefore predict the time at which the acceleration becomes zero based on the detected change in acceleration, and from this it calculates (predicts) the angle, that is, the direction in which the HMD device 110 will be facing. The processor 10 then calculates (predicts) the position of the virtual viewpoint based on the calculated angle.
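
  As one illustration of how such a prediction could be computed, the sketch below assumes that the current deceleration stays roughly constant until the angular velocity reaches zero and estimates the remaining rotation with the constant-deceleration relation v^2 / (2|a|). The patent itself only states that the stop timing and the resulting angle are predicted from the change in acceleration; the formula and the numbers are illustrative.

```python
# Sketch: predicting the yaw angle at which the head motion will stop, assuming the
# current (negative) angular acceleration stays roughly constant until the angular
# velocity reaches zero. This constant-deceleration model is an assumption.
import math


def predict_final_yaw(current_yaw: float, yaw_velocity: float, yaw_accel: float) -> float:
    """current_yaw [rad], yaw_velocity [rad/s], yaw_accel [rad/s^2] (negative while decelerating)."""
    if yaw_accel >= 0.0 or yaw_velocity == 0.0:
        return current_yaw                       # prediction only during the deceleration phase
    remaining = yaw_velocity ** 2 / (2.0 * abs(yaw_accel))   # rotation still to be covered
    return current_yaw + math.copysign(remaining, yaw_velocity)


# Example: head at 10 degrees, turning right at 1.2 rad/s, decelerating at 3 rad/s^2.
print(math.degrees(predict_final_yaw(math.radians(10), 1.2, -3.0)))  # roughly 23.8 degrees
```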

  In FIG. 17, the processor 10 can perform the prediction process when the acceleration reaches a prediction determination value of 0 or less. This prediction determination value is set in advance by the designer.

  In the present embodiment, the virtual space (VR space) in which the user 190 is immersed via the HMD device 110 has been described as an example, but a transmissive HMD device may be employed as the HMD device 110. In that case, a virtual experience in an augmented reality (AR) space or a mixed reality (MR) space may be provided to the user 190 by outputting a view field image in which a part of the image constituting the virtual space is superimposed on the real space visually recognized by the user 190 through the transmissive HMD device.

  The subject matter disclosed in this specification is indicated as, for example, the following items.

(Item 1)
An information processing method executed by a computer to provide a virtual space to a user via a head mounted device (HMD device 110) including a display unit (display 112),
A step (step S1 in FIG. 10) of identifying virtual space data including the character object A associated with the user 190 and a movement target (widget W);
Identifying a virtual viewpoint S associated with the character object in the virtual space and defining a view field image based on the virtual viewpoint (step S7 in FIG. 10);
Displaying the view field image on the display unit (step S8 in FIG. 10);
Starting to move the virtual viewpoint based on the movement of the head mounted device and predicting a first position to which the virtual viewpoint is moved (step S6 in FIG. 10 and step S103 in FIG. 11);
Identifying a second position that is a movement destination of the movement target based on the movement destination of the virtual viewpoint (step S104 in FIG. 11);
Moving the object to be moved to the second position (step S113 in FIG. 11);
An information processing method comprising:

  According to this disclosure, the moving object can be easily captured in the view field image, and the user can recognize the moving object or a process based on the moving object in the view field image. Therefore, the virtual experience using the head mounted device can be improved.

(Item 2)
The movement of the virtual viewpoint is completed in response to completion of the movement of the head mounted device (steps S5 and S6 in FIG. 10),
The movement target is moved to the second position until the movement of the virtual viewpoint is completed (step S113 in FIG. 11).
The information processing method according to item 1.

  According to this disclosure, even when the process of moving the movement target is heavy and frames are dropped, the line of sight can move first while the movement target follows afterward, so VR sickness can be prevented.

(Item 3)
The virtual space data further includes a virtual object (other character object B in FIG. 15),
The relative priority of the movement target and the virtual object is stored in advance in the storage unit (user information 243),
When the virtual object exists between the character object and the second position, the second position is corrected based on the priority (step S112 in FIG. 11).
The information processing method according to item 1 or 2.

  According to this disclosure, it is possible to prevent the movement target and another virtual object from overlapping in such a way that one of them becomes invisible or the processing using the movement target is hindered.

(Item 4)
A step (step S102 in FIG. 11) of detecting the movement of the head mounted device by an acceleration sensor mounted on the head mounted device;
Item 4. The information processing method according to any one of Items 1 to 3, wherein when the acceleration of the acceleration sensor becomes equal to or greater than a reference value, it is determined to move the moving object.

  According to this disclosure, it is possible to appropriately determine whether prediction is necessary.

(Item 5)
If the movement target before movement is located within the range of the view field image after the virtual viewpoint is moved to the first position, the movement target is not moved (step S108 in FIG. 11). The information processing method according to any one of items 1 to 4.

  According to this disclosure, the processing required for the movement process can be reduced, and VR sickness can be prevented.

(Item 6)
When the arrangement position of the movement target in the field-of-view image is determined, the second position is specified so as to move the movement target to the arrangement position (step S107 in FIG. 11).
Item 6. The information processing method according to Item 5.

  According to this disclosure, the display is easy for the user to see, and the sense of immersion in the virtual space can be improved. For example, a movement target with a determined arrangement position, such as a menu screen or a health gauge in a game, can be moved along with the user in the virtual space even when the head mounted device is moved, which makes it easy for the user to grasp its operability and state.

(Item 7)
A program that causes a computer to execute the information processing method according to any one of items 1 to 6.

(Item 8)
An apparatus comprising: at least a memory; and a processor coupled to the memory, wherein the information processing method according to any one of items 1 to 7 is executed under the control of the processor.

  DESCRIPTION OF SYMBOLS 1 ... Virtual camera, 2 ... Virtual space, 5 ... Base line of sight, 10 ... Processor, 11 ... Memory, 12 ... Storage, 13 ... Input / output interface, 14 ... Communication interface, 15 ... Bus, 19 ... Network, 21 ... Center, 22 ... Virtual space image, 23 ... Field of view, 24, 25 ... Area, 31 ... Frame, 32 ... Top surface, 33, 34, 36, 37 ... Button, 35 ... Infrared LED, 38 ... Analog stick, 100 ... HMD system 110 ... HMD device, 112 ... display, 114 ... sensor, 120 ... HMD sensor, 130 ... motion sensor, 140 ... gaze sensor, 150 ... server, 160 ... controller, 160L ... left controller, 160R ... right controller, 190 ... user , 200... Computer, 220. ... virtual camera control module, 222 ... view area determination module, 223 ... view image generation module, 224 ... reference line of sight identification module, 230 ... virtual space control module, 231 ... virtual space definition module, 232 ... virtual object control module, 233 ... Operation object control module, 234 ... Prediction control module, 240 ... Memory module, 241 ... Spatial information, 242 ... Object information, 243 ... User information, 250 ... Communication control module.

Claims (8)

  1. An information processing method executed by a computer to provide a virtual space to a user via a head mounted device including a display unit,
    Identifying virtual space data including a character object associated with the user and a movement target;
    Identifying a virtual viewpoint associated with the character object in the virtual space, and defining a visual field image based on the virtual viewpoint;
    Displaying the view image on the display unit;
    Starting to move the virtual viewpoint based on the movement of the head mounted device, and predicting a first position to which the virtual viewpoint is moved;
    Identifying a second position that is the destination of the movement target based on the first position;
    Moving the object to be moved to the second position;
    An information processing method comprising:
  2. Completing the movement of the virtual viewpoint in response to completion of the movement of the head mounted device;
    The movement target is moved to the second position until the movement of the virtual viewpoint is completed,
    The information processing method according to claim 1.
  3. The virtual space data further includes a virtual object;
    The relative priority of the moving object and the virtual object is stored in the storage unit in advance,
    Correcting the second position based on the priority when the virtual object exists between the character object and the second position;
    The information processing method according to claim 1 or 2.
  4. Detecting the movement of the head mounted device with an acceleration sensor mounted on the head mounted device;
    The information processing method according to any one of claims 1 to 3, wherein when the acceleration of the acceleration sensor becomes equal to or greater than a reference value, it is determined to move the moving object.
  5.   The movement target is not moved when the movement target before the movement is located within the range of the field-of-view image after the virtual viewpoint is moved to the first position. The information processing method according to any one of claims 1 to 4.
  6. When the arrangement position of the movement target in the field-of-view image is determined, the second position is specified so that the movement target is moved to the arrangement position.
    The information processing method according to claim 5.
  7.   A program that causes a computer to execute the information processing method according to any one of claims 1 to 6.
  8. An apparatus comprising at least a memory and a processor coupled to the memory, wherein the apparatus executes the information processing method according to any one of claims 1 to 6 under the control of the processor.
JP2017110282A 2017-06-02 2017-06-02 Information processing method, apparatus, and program for causing computer to execute information processing method Active JP6278546B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017110282A JP6278546B1 (en) 2017-06-02 2017-06-02 Information processing method, apparatus, and program for causing computer to execute information processing method

Publications (2)

Publication Number Publication Date
JP6278546B1 true JP6278546B1 (en) 2018-02-14
JP2018206029A JP2018206029A (en) 2018-12-27

Family

ID=61195748

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2017110282A Active JP6278546B1 (en) 2017-06-02 2017-06-02 Information processing method, apparatus, and program for causing computer to execute information processing method

Country Status (1)

Country Link
JP (1) JP6278546B1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000250677A (en) * 1999-03-02 2000-09-14 Toshiba Corp Device and method for multimodal interface
JP2005165776A (en) * 2003-12-03 2005-06-23 Canon Inc Image processing method and image processor
JP2015230632A (en) * 2014-06-06 2015-12-21 大日本印刷株式会社 Display terminal device, program, and display method
WO2016092698A1 (en) * 2014-12-12 2016-06-16 キヤノン株式会社 Image processing device, image processing method, and program
JP6058184B1 (en) * 2016-03-10 2017-01-11 株式会社コロプラ Method and program for controlling head mounted display system

Also Published As

Publication number Publication date
JP2018206029A (en) 2018-12-27

Legal Events

Date Code Title Description
A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20171215

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20171221

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20180112

R150 Certificate of patent or registration of utility model

Ref document number: 6278546

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150