WO2023112838A1 - Information processing device - Google Patents

Information processing device Download PDF

Info

Publication number
WO2023112838A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
user
line of sight
control unit
Prior art date
Application number
PCT/JP2022/045377
Other languages
French (fr)
Japanese (ja)
Inventor
智仁 山崎 (Tomohito Yamazaki)
Original Assignee
株式会社NTTドコモ (NTT DOCOMO, INC.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NTT DOCOMO, INC. (株式会社NTTドコモ)
Publication of WO2023112838A1 publication Critical patent/WO2023112838A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present invention relates to an information processing device.
  • In MR (Mixed Reality) technology, the real environment perceived by the user is augmented by a computer. By using this technology, it is possible, for example, to precisely superimpose a virtual space on the real space that the user sees through MR glasses worn on the head.
  • Patent Literature 1 discloses an information processing device that displays a realistic virtual reality image while ensuring the safety of a user riding in a vehicle.
  • the information processing device includes AR goggles on which images of a plurality of virtual first objects are displayed.
  • The information processing device also detects a second object that actually exists outside or inside the vehicle. Further, when it is determined that the image of the first object would be superimposed on a visibility-securing area that affects the user's visibility of the second object, the information processing device moves, for example, the position of the image of the first object so that it is not superimposed on the visibility-securing area.
  • However, the technique of Patent Document 1 defines the positional relationship between the virtual first object and the real second object; it cannot change the state of an object according to which object the user is paying attention to.
  • Accordingly, the problem to be solved by the present invention is to change the state of the other of two virtual objects arranged in a virtual space when the target of the user's attention changes from a state in which the user is focusing on one of the virtual objects.
  • An information processing apparatus according to a preferred aspect of the present invention includes: an acquisition unit that acquires line-of-sight information indicating a user's line of sight; a display control unit that displays a first virtual object and a second virtual object at positions that do not overlap each other in a virtual space as seen from the user; and an operation control unit that transitions the second virtual object from inactive to active after a transition from a first state, in which the user's line of sight indicated by the line-of-sight information intersects the first virtual object and the first virtual object is active, to a second state in which the user's line of sight does not intersect the first virtual object.
  • According to the present invention, when the target of the user's attention changes from a state in which the user is focusing on one of two virtual objects arranged in a virtual space, the state of the other virtual object can be changed.
  • FIG. 1 is a diagram showing the overall configuration of an information processing system 1 according to the first embodiment.
  • FIG. 2 is a perspective view showing the appearance of the MR glasses 20 according to the first embodiment.
  • FIG. 3 is a schematic diagram of a virtual space VS provided to a user U1 by using the MR glasses 20 according to the first embodiment.
  • FIG. 4 is a schematic diagram of a virtual space VS provided to a user U1 by using the MR glasses 20 according to the first embodiment.
  • FIG. 5 is a block diagram showing a configuration example of the MR glasses 20 according to the first embodiment.
  • FIG. 6 is a block diagram showing a configuration example of the terminal device 10 according to the first embodiment.
  • FIG. 7 is an explanatory diagram of an operation example of the display control unit 112 according to the first embodiment.
  • FIG. 8 is a block diagram showing a configuration example of the server 30 according to the first embodiment.
  • FIG. 9 is a flowchart showing operations of the server 30 according to the first embodiment.
  • FIG. 10 is an explanatory diagram of an operation example of the display control unit 112 according to a modification.
  • 1: First Embodiment. Hereinafter, the configuration of the information processing system 1 including a terminal device 10 as an information processing device according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 9.
  • FIG. 1 is a diagram showing the overall configuration of an information processing system 1 according to the first embodiment of the present invention.
  • the information processing system 1 is a system that uses MR technology to provide a virtual space to a user U1 wearing MR glasses 20, which will be described later.
  • the information processing system 1 includes a terminal device 10, MR glasses 20, and a server 30.
  • the terminal device 10 is an example of an information processing device.
  • the terminal device 10 and the server 30 are communicably connected to each other via a communication network NET.
  • the terminal device 10 and the MR glasses 20 are connected so as to be able to communicate with each other.
  • In FIG. 1, three pairs of the terminal device 10 and the MR glasses 20 are shown: the pair of the terminal device 10-1 and the MR glasses 20-1, the pair of the terminal device 10-2 and the MR glasses 20-2, and the pair of the terminal device 10-3 and the MR glasses 20-3.
  • the number of sets is merely an example, and the information processing system 1 can include any number of sets of the terminal device 10 and the MR glasses 20 .
  • the server 30 provides various data and cloud services to the terminal device 10 via the communication network NET.
  • the terminal device 10 displays virtual objects placed in the virtual space on the MR glasses 20 that the user wears on the head.
  • the virtual space is, for example, a celestial space.
  • the virtual objects are, for example, virtual objects representing data such as still images, moving images, 3DCG models, HTML files, and text files, and virtual objects representing applications. Examples of text files include memos, source codes, diaries, and recipes. Examples of applications include browsers, applications for using SNS, and applications for generating document files.
  • the terminal device 10 is preferably a mobile terminal device such as a smart phone and a tablet, for example.
  • the MR glasses 20 are a see-through wearable display worn on the user's head.
  • the MR glasses 20 are controlled by the terminal device 10 to display a virtual object on the display panel provided for each of the binocular lenses. Note that the MR glasses 20 are an example of a display device.
  • FIG. 2 is a perspective view showing the appearance of the MR glasses 20. As shown in FIG. 2, the MR glasses 20 have temples 91 and 92, a bridge 93, frames 94 and 95, and lenses 41L and 41R, like general spectacles.
  • An imaging device 26 is provided in the bridge 93 . The imaging device 26 images the outside world. The imaging device 26 also outputs imaging information indicating the captured image.
  • Each of the lenses 41L and 41R has a half mirror.
  • the frame 94 is provided with a liquid crystal panel or an organic EL panel for the left eye and an optical member for guiding light emitted from the display panel for the left eye to the lens 41L.
  • a liquid crystal panel or an organic EL panel is hereinafter generically referred to as a display panel.
  • the half mirror provided in the lens 41L transmits external light and guides it to the left eye, and reflects the light guided by the optical member to enter the left eye.
  • the frame 95 is provided with a right-eye display panel and an optical member that guides light emitted from the right-eye display panel to the lens 41R.
  • the half mirror provided in the lens 41R transmits external light and guides it to the right eye, and reflects the light guided by the optical member to enter the right eye.
  • the display 28 which will be described later, includes a lens 41L, a left-eye display panel, a left-eye optical member, and a lens 41R, a right-eye display panel, and a right-eye optical member.
  • With this configuration, the user can observe the image displayed by the display panels in a see-through state, superimposed on the outside world. Further, the image for the left eye is displayed on the display panel for the left eye and the image for the right eye is displayed on the display panel for the right eye, thereby allowing the user U1 to view the displayed image stereoscopically.
  • FIGS. 3 and 4 are schematic diagrams of the virtual space VS provided to the user U1 by using the MR glasses 20. As shown in FIG. 3, virtual objects VO1 to VO5 representing various contents such as browsers, cloud services, images, and moving images are arranged in the virtual space VS.
  • The user U1 can experience the virtual space VS as a private space within the public space by walking around the public space while wearing the MR glasses 20 on which the virtual objects VO1 to VO5 arranged in the virtual space VS are displayed.
  • the user U1 can act in the public space while receiving benefits brought by the virtual objects VO1 to VO5 placed in the virtual space VS.
  • As shown in FIG. 4, it is possible for a plurality of users U1 to U3 to share the virtual space VS.
  • In this case, the plurality of users U1 to U3 share one or more virtual objects VO, and communication among the users U1 to U3 becomes possible through the shared virtual objects VO.
  • Note that any two of the plurality of virtual objects VO1 to VO5 are displayed so as not to overlap each other in the virtual space VS as seen from the user U1. Details of the display method of the virtual objects VO will be described later.
  • FIG. 5 is a block diagram showing a configuration example of the MR glasses 20.
  • the MR glasses 20 include a processing device 21 , a storage device 22 , a line-of-sight detection device 23 , a GPS device 24 , a motion detection device 25 , an imaging device 26 , a communication device 27 , a display 28 and a speaker 29 .
  • Each element of the MR glasses 20 is interconnected by one or more buses for communicating information.
  • the term "apparatus" in this specification may be replaced with another term such as a circuit, a device, or a unit.
  • the processing device 21 is a processor that controls the MR glasses 20 as a whole.
  • the processing device 21 is configured using, for example, one or more chips.
  • The processing device 21 is configured using, for example, a central processing unit (CPU) including an interface with peripheral devices, an arithmetic device, registers, and the like. Some or all of the functions of the processing device 21 may be realized by hardware such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array). The processing device 21 executes various processes in parallel or sequentially.
  • the storage device 22 is a recording medium that can be read and written by the processing device 21 .
  • the storage device 22 also stores a plurality of programs including the control program PR1 executed by the processing device 21 .
  • the line-of-sight detection device 23 detects the line of sight of the user U1. Any method may be used to detect the line of sight by the line of sight detection device 23 .
  • the line-of-sight detection device 23 may detect line-of-sight information based on, for example, the position of the inner corner of the eye and the position of the iris.
  • the line-of-sight detection device 23 also outputs line-of-sight information indicating the line-of-sight direction of the user U1 to the processing device 21, which will be described later, based on the detection result.
  • the line-of-sight information output to the processing device 21 is output to the terminal device 10 via the communication device 27 .
  • the GPS device 24 receives radio waves from multiple satellites.
  • the GPS device 24 also generates position information from the received radio waves.
  • the positional information indicates the position of the MR glasses 20 .
  • the location information may be in any format as long as the location can be specified.
  • the position information indicates the latitude and longitude of the MR glasses 20, for example.
  • In the present embodiment, the position information is obtained from the GPS device 24; however, the MR glasses 20 may acquire the position information by any method.
  • the acquired position information is supplied to the processing device 21 .
  • the position information output to the processing device 21 is transmitted to the terminal device 10 via the communication device 27 .
  • the motion detection device 25 detects motion of the MR glasses 20 .
  • the motion detection device 25 corresponds to an inertial sensor such as an acceleration sensor that detects acceleration and a gyro sensor that detects angular acceleration.
  • the acceleration sensor detects acceleration in orthogonal X-, Y-, and Z-axes.
  • the gyro sensor detects angular acceleration around the X-, Y-, and Z-axes.
  • the motion detection device 25 can generate posture information indicating the posture of the MR glasses 20 based on the output information of the gyro sensor.
  • the motion detection device 25 supplies orientation information relating to the orientation of the MR glasses 20 to the processing device 21 .
  • the motion detection device 25 supplies motion information relating to the motion of the MR glasses 20 to the processing device 21 .
  • the motion information includes acceleration data indicating three-axis acceleration and angular acceleration data indicating three-axis angular acceleration.
  • the posture information and motion information supplied to the processing device 21 are transmitted to the terminal device 10 via the communication device 27 .
  • the imaging device 26 outputs imaging information obtained by imaging the outside world.
  • the imaging device 26 includes, for example, a lens, an imaging element, an amplifier, and an AD converter.
  • the light condensed through the lens is converted into an image pickup signal, which is an analog signal, by the image pickup device.
  • the amplifier amplifies the imaging signal and outputs it to the AD converter.
  • the AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal.
  • the converted imaging information is supplied to the processing device 21 .
  • the imaging information supplied to the processing device 21 is transmitted to the terminal device 10 via the communication device 27 .
  • the communication device 27 is hardware as a transmission/reception device for communicating with other devices.
  • the communication device 27 is also called a network device, a network controller, a network card, a communication module, etc., for example.
  • The communication device 27 may include a connector for wired connection and an interface circuit corresponding to the connector. The communication device 27 may also have a wireless communication interface. Examples of the connector and interface circuit for wired connection include products conforming to wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
  • the display 28 is a device that displays images.
  • the display 28 displays various images under the control of the processing device 21 .
  • the display 28 includes the lens 41L, the left-eye display panel, the left-eye optical member, and the lens 41R, the right-eye display panel, and the right-eye optical member, as described above.
  • Various display panels such as a liquid crystal display panel and an organic EL display panel are preferably used as the display panel.
  • the speaker 29 is a device that emits sound.
  • the speaker 29 emits various sounds under the control of the processing device 21 .
  • The audio signal corresponding to the audio data, which is a digital signal, is amplified in amplitude by an amplifier (not shown). The speaker 29 emits the sound indicated by the amplified audio signal.
  • the processing device 21 functions as an acquisition unit 211 and a display control unit 212, for example, by reading the control program PR1 from the storage device 22 and executing it.
  • the acquisition unit 211 acquires image information indicating an image displayed on the MR glasses 20 from the terminal device 10 . More specifically, the acquisition unit 211 acquires, for example, second image information, which is transmitted from the terminal device 10 and will be described later.
  • the acquisition unit 211 also receives line-of-sight information input from the line-of-sight detection device 23 , position information input from the GPS device 24 , posture information and motion information input from the motion detection device 25 , and input from the imaging device 26 . Acquire imaging information. After that, the acquisition unit 211 outputs the acquired line-of-sight information, position information, posture information, motion information, and imaging information to the communication device 27 .
  • the display control unit 212 causes the display 28 to display an image indicated by the second image information based on the second image information acquired from the terminal device 10 by the acquisition unit 211 .
  • FIG. 6 is a block diagram showing a configuration example of the terminal device 10. As shown in FIG.
  • the terminal device 10 includes a processing device 11 , a storage device 12 , a communication device 13 , a display 14 , an input device 15 and an inertial sensor 16 . Elements of the terminal device 10 are interconnected by one or more buses for communicating information.
  • the processing device 11 is a processor that controls the terminal device 10 as a whole. Also, the processing device 11 is configured using, for example, a single chip or a plurality of chips. The processing unit 11 is configured using, for example, a central processing unit (CPU) including interfaces with peripheral devices, arithmetic units, registers, and the like. A part or all of the functions of the processing device 11 may be implemented by hardware such as DSP, ASIC, PLD, and FPGA. The processing device 11 executes various processes in parallel or sequentially.
  • the storage device 12 is a recording medium readable and writable by the processing device 11 .
  • the storage device 12 also stores a plurality of programs including the control program PR2 executed by the processing device 11 .
  • the communication device 13 is hardware as a transmission/reception device for communicating with other devices.
  • the communication device 13 is also called a network device, a network controller, a network card, a communication module, etc., for example.
  • the communication device 13 may include a connector for wired connection and an interface circuit corresponding to the connector. Further, the communication device 13 may have a wireless communication interface.
  • Examples of the connector and interface circuit for wired connection include products conforming to wired LAN, IEEE 1394, and USB.
  • Examples of the wireless communication interface include products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
  • the display 14 is a device that displays images and character information.
  • the display 14 displays various images under the control of the processing device 11 .
  • various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are preferably used as the display 14 .
  • the input device 15 accepts operations from the user U1 who wears the MR glasses 20 on his head.
  • the input device 15 includes a pointing device such as a keyboard, touch pad, touch panel, or mouse.
  • the input device 15 may also serve as the display 14 .
  • the inertial sensor 16 is a sensor that detects inertial force.
  • the inertial sensor 16 includes, for example, one or more of an acceleration sensor, an angular velocity sensor, and a gyro sensor.
  • the processing device 11 detects the orientation of the terminal device 10 based on the output information from the inertial sensor 16 . Further, the processing device 11 receives selection of the virtual object VO, input of characters, and input of instructions in the celestial sphere virtual space VS based on the orientation of the terminal device 10 .
  • the user U1 directs the central axis of the terminal device 10 toward a predetermined area of the virtual space VS, and operates the input device 15 to select the virtual object VO arranged in the predetermined area.
  • the user U1's operation on the input device 15 is, for example, a double tap. By operating the terminal device 10 in this way, the user U1 can select the virtual object VO without looking at the input device 15 of the terminal device 10 .
  • the processing device 11 functions as an acquisition unit 111, a display control unit 112, a determination unit 113, and an operation control unit 114 by reading the control program PR2 from the storage device 12 and executing it.
  • the acquisition unit 111 acquires line-of-sight information indicating the line of sight of the user U1. More specifically, the acquisition unit 111 acquires line-of-sight information output from the MR glasses 20 via the communication device 13 . In addition, the acquisition unit 111 acquires first image information, which is described below and indicates an image displayed on the MR glasses 20 , from the server 30 via the communication device 13 .
  • The display control unit 112 causes the first virtual object VO1 and the second virtual object VO2 to be displayed at positions that do not overlap each other in the virtual space VS as seen from the user U1.
  • the first image information is image information indicating images of the first virtual object VO1 itself and the second virtual object VO2 itself.
  • Specifically, the display control unit 112 generates layout information such that the first virtual object VO1 and the second virtual object VO2 do not overlap each other from the viewpoint of the user U1 in the virtual space VS viewed through the MR glasses 20, and transmits second image information, which includes the first image information and the layout information, to the MR glasses 20 via the communication device 13.
  • FIG. 7 is an explanatory diagram of how the display control unit 112 displays the first virtual object VO1 and the second virtual object VO2.
  • X, Y and Z axes are orthogonal to each other in the virtual space VS.
  • the X-axis extends in the horizontal direction of user U1.
  • the right direction along the X axis is the positive direction
  • the left direction along the X axis is the negative direction.
  • the Y-axis extends in the front-rear direction of the user U1.
  • the forward direction along the Y-axis is defined as the positive direction
  • the backward direction along the Y-axis is defined as the negative direction, as viewed from the user U1.
  • a horizontal plane is formed by these X-axis and Y-axis.
  • the Z-axis is orthogonal to the XY plane and extends in the vertical direction of the user U1. Further, as viewed from the user U1, the upward direction along the Z axis is the positive direction, and the downward direction along the Z axis is the negative direction.
  • In FIG. 7, a cylinder Y is assumed as a region in which the line of sight V of the user U1 can exist while the user U1 is viewing the first virtual object VO1.
  • The cylinder Y has a central axis L1 whose first end point C1 is at the position (x0, y0, z0) of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1 and whose second end point C2 is at the position (x1, y1, z1) of the first virtual object VO1.
  • the radius R1 of the bottom surface of the cylinder Y is the length that allows the bottom surface to contain the first virtual object VO1.
  • Strictly speaking, the position of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1 may not exactly match the position of the first end point C1, and the position (x1, y1, z1) of the first virtual object VO1 may not exactly match the position of the second end point C2; here, each of these pairs of positions is treated as matching.
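As a concrete illustration of the region just described, the following Python sketch tests whether a sampled gaze point lies inside the cylinder Y defined by the pupil position (first end point C1), the position of the first virtual object VO1 (second end point C2), and the radius R1. This is not code from the patent; the function name, the use of NumPy, and the representation of the line of sight as a 3D point are assumptions made purely for illustration.

```python
import numpy as np

def inside_cylinder(p, c1, c2, radius):
    """Return True if point p lies inside the cylinder whose central axis
    runs from c1 (pupil position) to c2 (position of virtual object VO1)
    and whose base radius is `radius` (R1)."""
    p, c1, c2 = map(np.asarray, (p, c1, c2))
    axis = c2 - c1                       # central axis L1
    length = np.linalg.norm(axis)
    if length == 0.0:
        return False
    axis_dir = axis / length
    # Project p onto the axis to find its position between the two end points.
    t = np.dot(p - c1, axis_dir)
    if t < 0.0 or t > length:            # outside the end caps C1, C2
        return False
    # Distance from p to the axis must not exceed the radius R1.
    radial = np.linalg.norm((p - c1) - t * axis_dir)
    return radial <= radius

# Example: eye at the origin, VO1 two metres ahead (positive Y), gaze point near the axis.
eye = (0.0, 0.0, 0.0)
vo1 = (0.0, 2.0, 0.0)
gaze_point = (0.05, 1.0, 0.02)
print(inside_cylinder(gaze_point, eye, vo1, radius=0.3))  # True
```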
  • the display control unit 112 displays the second virtual object VO2 outside the cylinder Y in an inactive state in the virtual space VS.
  • active means that the virtual object VO is in operation, or that the virtual object VO is selected by the user U1.
  • Conversely, "inactive" means that the operation of the virtual object VO has stopped, or that the virtual object VO is not selected by the user U1.
  • For example, when the virtual object VO corresponds to a moving-image application, "the virtual object VO is in operation" means that a moving image is being played back by an operation of the user U1.
  • When the virtual object VO corresponds to an e-mail application, "the virtual object VO is in operation" means that e-mail is being sent or received, or that a text file showing the content of an e-mail is being opened, by an operation of the user U1.
  • When the virtual object VO corresponds to a music application, "the virtual object VO is in operation" means that music is being emitted from the speaker 29 by an operation of the user U1.
  • The display control unit 112 displays the second virtual object VO2 in the virtual space VS in the first state, in which the line of sight V of the user U1 intersects the first virtual object VO1 and the first virtual object VO1 is active. Specifically, in the first state, in which the entire line of sight V of the user U1 is positioned inside the cylinder Y and the first virtual object VO1 is active, the display control unit 112 displays the entire second virtual object VO2 outside the cylinder Y.
  • the display position of the second virtual object VO2 is an area within a predetermined distance from the first virtual object VO1. Specifically, the second virtual object VO2 is displayed within the field of view of the user U1 while the user U1 is viewing the first virtual object VO1. With this process, the terminal device 10 can call the attention of the user U1 to the second virtual object VO2.
  • the above "predetermined distance” is an example of the "first distance”.
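One conceivable way to realize such a placement is sketched below: candidate positions around VO1 are tried until one is found that lies within the first distance from VO1, does not overlap VO1 (approximated here with bounding spheres), and stays outside the cylinder Y. The candidate offsets, the bounding-sphere approximation, and all names are assumptions for illustration and are not taken from the patent.

```python
import numpy as np

def choose_vo2_position(vo1_pos, eye_pos, first_distance, vo1_radius, vo2_radius,
                        cylinder_radius):
    """Return a candidate position for VO2 that stays within `first_distance`
    of VO1 but does not overlap VO1 (bounding spheres) and lies outside the
    cylinder Y around the eye-to-VO1 axis. Purely illustrative."""
    vo1_pos = np.asarray(vo1_pos, dtype=float)
    eye_pos = np.asarray(eye_pos, dtype=float)
    axis_dir = (vo1_pos - eye_pos) / np.linalg.norm(vo1_pos - eye_pos)
    # Candidate offsets: right, left, up, down relative to the user,
    # scaled to stay within the first distance.
    for offset in ([1, 0, 0], [-1, 0, 0], [0, 0, 1], [0, 0, -1]):
        candidate = vo1_pos + 0.8 * first_distance * np.asarray(offset, dtype=float)
        # Must not overlap VO1 (bounding-sphere approximation).
        if np.linalg.norm(candidate - vo1_pos) < vo1_radius + vo2_radius:
            continue
        # Must lie outside the cylinder Y: radial distance from the axis
        # exceeds the cylinder radius plus the size of VO2.
        t = np.dot(candidate - eye_pos, axis_dir)
        radial = np.linalg.norm((candidate - eye_pos) - t * axis_dir)
        if radial > cylinder_radius + vo2_radius:
            return candidate
    return None  # no suitable position among the candidates

print(choose_vo2_position(vo1_pos=(0, 2, 0), eye_pos=(0, 0, 0),
                          first_distance=0.6, vo1_radius=0.2,
                          vo2_radius=0.15, cylinder_radius=0.3))
```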
  • The determination unit 113 determines whether the first state, in which the line of sight V of the user U1 intersects the first virtual object VO1 and the first virtual object VO1 is active, has transitioned to the second state, in which the line of sight V of the user U1 does not intersect the first virtual object VO1.
  • After the transition to the second state, the operation control unit 114 transitions the second virtual object VO2 from inactive to active. Specifically, in FIG. 7, after the line of sight V of the user U1 changes from being inside the cylinder Y to being outside the cylinder Y, the operation control unit 114 transitions the second virtual object VO2 from inactive to active.
  • Note that the operation control unit 114 preferably transitions the second virtual object VO2 from inactive to active after the line of sight V of the user U1 has remained stationary for a predetermined period of time. Specifically, in the second state described above, the operation control unit 114 transitions the second virtual object VO2 from inactive to active on the condition that the state in which the change per unit time in the line of sight V of the user U1 is within a predetermined range continues for a predetermined period of time. As a result, the terminal device 10 can transition the second virtual object VO2 from inactive to active after the point of gaze of the user U1 has stopped wandering and is in a nearly stationary state.
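A minimal sketch of this stationarity condition is shown below, assuming the line of sight is sampled as unit direction vectors at a fixed interval; the sampling interval, the angular-rate threshold, and the function names are illustrative assumptions, not details given in the patent.

```python
import numpy as np

def gaze_is_stationary(gaze_samples, second_range, second_time, sample_interval):
    """Return True if the change of the gaze direction per unit time stays
    within `second_range` (radians per second) for at least `second_time`
    seconds. `gaze_samples` is a chronological list of unit direction
    vectors sampled every `sample_interval` seconds."""
    needed = int(second_time / sample_interval)
    stable = 0
    for prev, cur in zip(gaze_samples, gaze_samples[1:]):
        # Angular change between consecutive samples, converted to rad/s.
        cos_angle = np.clip(np.dot(prev, cur), -1.0, 1.0)
        rate = np.arccos(cos_angle) / sample_interval
        stable = stable + 1 if rate <= second_range else 0
        if stable >= needed:
            return True
    return False

# Example: a gaze that drifts very slowly is judged stationary.
samples = [np.array([0.0, 1.0, 0.001 * i]) / np.linalg.norm([0.0, 1.0, 0.001 * i])
           for i in range(20)]
print(gaze_is_stationary(samples, second_range=0.1, second_time=0.5,
                         sample_interval=0.05))  # True
```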
  • The above "predetermined range" is an example of the "second range".
  • the above "predetermined time” is an example of the "second time”.
  • the operation control unit 114 transitions the first virtual object VO1 from active to inactive after transitioning from the first state to the second state.
  • the terminal device 10 can switch which virtual object VO to activate, the first virtual object VO1 or the second virtual object VO2, according to the position of the line of sight V of the user U1.
  • the terminal device 10 can exclusively activate the first virtual object VO1 and the second virtual object VO2.
  • the state transition of the first virtual object VO1 and the state transition of the second virtual object VO2 may be executed simultaneously, or one may precede the other.
  • FIG. 8 is a block diagram showing a configuration example of the server 30.
  • the server 30 comprises a processing device 31 , a storage device 32 , a communication device 33 , a display 34 and an input device 35 .
  • Each element of server 30 is interconnected by one or more buses for communicating information.
  • the processing device 31 is a processor that controls the server 30 as a whole. Also, the processing device 31 is configured using, for example, a single chip or a plurality of chips. The processing unit 31 is configured using, for example, a central processing unit (CPU) including interfaces with peripheral devices, arithmetic units, registers, and the like. A part or all of the functions of the processing device 31 may be implemented by hardware such as DSP, ASIC, PLD, and FPGA. The processing device 31 executes various processes in parallel or sequentially.
  • the storage device 32 is a recording medium readable and writable by the processing device 31 .
  • the storage device 32 also stores a plurality of programs including the control program PR3 executed by the processing device 31 .
  • the communication device 33 is hardware as a transmission/reception device for communicating with other devices.
  • the communication device 33 is also called a network device, a network controller, a network card, a communication module, or the like, for example.
  • The communication device 33 may include a connector for wired connection and an interface circuit corresponding to the connector. The communication device 33 may also have a wireless communication interface. Examples of the connector and interface circuit for wired connection include products conforming to wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
  • the display 34 is a device that displays images and character information.
  • the display 34 displays various images under the control of the processing device 31 .
  • various display panels such as a liquid crystal display panel and an organic EL display panel are preferably used as the display 34 .
  • the input device 35 is a device that accepts operations by the administrator of the information processing system 1 .
  • the input device 35 includes a pointing device such as a keyboard, touch pad, touch panel, or mouse.
  • the input device 35 may also serve as the display 34 .
  • the processing device 31 functions as an acquisition unit 311 and a generation unit 312, for example, by reading the control program PR3 from the storage device 32 and executing it.
  • the acquisition unit 311 acquires various data from the terminal device 10 using the communication device 33 .
  • the data includes, for example, data indicating the operation content for the virtual object VO, which is input to the terminal device 10 by the user U1 wearing the MR glasses 20 on the head.
  • the generation unit 312 generates first image information indicating an image displayed on the MR glasses 20 .
  • This first image information is transmitted to the terminal device 10 via the communication device 33 .
  • The display control unit 112 provided in the terminal device 10 causes, based on the first image information received via the communication device 13, the first virtual object VO1 and the second virtual object VO2 to be displayed at positions that do not overlap each other in the virtual space VS as seen from the user U1.
  • In step S1, the processing device 11 functions as the display control unit 112.
  • the processing device 11 displays the first virtual object VO1 in the virtual space VS.
  • In step S2, the processing device 11 functions as the acquisition unit 111.
  • the processing device 11 acquires the line of sight V of the user U1.
  • In step S3, the processing device 11 functions as the display control unit 112.
  • the processing device 11 displays the second virtual object VO2 in an inactive state so that it does not overlap the first virtual object VO1 as seen from the user U1 in the virtual space VS. It is assumed that the first virtual object VO1 is active at the stage of step S3 at the latest.
  • In step S4, the processing device 11 functions as the determination unit 113.
  • The processing device 11 determines whether the first state, in which the line of sight V of the user U1 intersects the first virtual object VO1 and the first virtual object VO1 is active, has transitioned to the second state, in which the line of sight V of the user U1 does not intersect the first virtual object VO1. If it is determined that the state has transitioned from the first state to the second state, that is, if the determination result of step S4 is affirmative, the processing device 11 executes the process of step S5. If it is not determined that the state has transitioned from the first state to the second state, that is, if the determination result of step S4 is negative, the processing device 11 executes the process of step S4 again.
  • In step S5, the processing device 11 functions as the operation control unit 114.
  • the processing device 11 transitions the second virtual object VO2 from inactive to active.
  • In step S6, the processing device 11 functions as the operation control unit 114.
  • The processing device 11 transitions the first virtual object VO1 from active to inactive. After that, the processing device 11 ends all the operations shown in FIG. 9.
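The flow of steps S1 to S6 can be summarized in the following control-loop sketch. It is a simplified, hypothetical rendering of the description above: the display, gaze_source, and determination helpers are placeholders standing in for the display control unit 112, the acquisition unit 111, and the determination unit 113, and do not reflect any implementation disclosed in the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    active: bool = False

def run_activation_flow(display, gaze_source, determination, vo1, vo2):
    """Steps S1-S6 rendered as a simple control loop: display VO1 (S1),
    acquire the line of sight (S2), display VO2 inactive and not overlapping
    VO1 (S3), wait for the transition from the first state to the second
    state (S4), then activate VO2 (S5) and deactivate VO1 (S6)."""
    display.show(vo1)                              # S1: display the first virtual object
    vo1.active = True
    gaze = gaze_source.acquire()                   # S2: acquire the line of sight V
    display.show_non_overlapping(vo2, vo1)         # S3: display VO2, inactive, not overlapping VO1
    while determination.in_first_state(gaze, vo1): # S4: repeat until the second state is reached
        gaze = gaze_source.acquire()
    vo2.active = True                              # S5: transition VO2 from inactive to active
    vo1.active = False                             # S6: transition VO1 from active to inactive
```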
  • the terminal device 10 as an information processing device includes the acquisition unit 111, the display control unit 112, and the operation control unit 114.
  • the acquisition unit 111 acquires line-of-sight information indicating the line-of-sight V of the user U1.
  • the display control unit 112 causes the first virtual object VO1 and the second virtual object VO2 to be displayed at positions not overlapping each other in the virtual space VS as seen from the user U1.
  • After the first state, in which the line of sight V of the user U1 indicated by the line-of-sight information intersects the first virtual object VO1 and the first virtual object VO1 is active, has transitioned to the second state, in which the line of sight V of the user U1 does not intersect the first virtual object VO1, the operation control unit 114 transitions the second virtual object VO2 from inactive to active.
  • the terminal device 10 can newly display the second virtual object VO2 so as not to overlap the previously displayed first virtual object VO1.
  • the terminal device 10 can switch which virtual object VO to activate between the first virtual object VO1 and the second virtual object VO2 according to the direction of the line of sight V of the user U1.
  • the operation control unit 114 transitions the first virtual object VO1 from active to inactive after transitioning from the first state to the second state.
  • As a result, the terminal device 10 can exclusively activate the first virtual object VO1 and the second virtual object VO2. For example, if the first virtual object VO1 is a first application and the second virtual object VO2 is a second application, the user U1 can stop the operation of the first application and run the second application simply by removing the line of sight V from the first virtual object VO1. That is, the user U1 can switch the application to be operated merely by removing the line of sight V from the first virtual object VO1, without inputting an instruction to the second virtual object VO2.
  • the display control unit 112 starts displaying the second virtual object VO2 in the virtual space VS in the first state.
  • the terminal device 10 can cause the second virtual object VO2 to appear in the first state.
  • the terminal device 10 can display a second application, which is the second virtual object VO2, while the first application, which is the first virtual object VO1, is running.
  • In the above-described second state, the operation control unit 114 transitions the second virtual object VO2 from inactive to active on the condition that the state in which the change per unit time of the line of sight V of the user U1 is within the second range continues for the second time.
  • As a result, the terminal device 10 can transition the second virtual object VO2 from inactive to active after the point of gaze of the user U1 has stopped wandering and is in a nearly stationary state.
  • the display control unit 112 displays the second virtual object VO2 in the area within the first distance from the first virtual object VO1.
  • the terminal device 10 can increase the degree to which the user U1's attention can be drawn to the second virtual object VO2.
  • In addition, the display control unit 112 displays the second virtual object VO2 at a position that does not overlap the first virtual object VO1 in the virtual space VS as seen from the user U1.
  • the terminal device 10 can display the first virtual object VO1 and the second virtual object VO2 at positions that do not overlap each other in the virtual space VS as seen from the user U1.
  • the display control unit 112 displays the second virtual object VO2 in the virtual space VS in the first state described above.
  • the method of displaying the second virtual object VO2 is not limited to this method.
  • the display control unit 112 may display the second virtual object VO2 at a position that intersects the line of sight V of the user U1 after the line of sight V of the user U1 remains stationary for a predetermined time in the second state.
  • Specifically, the display control unit 112 may display the second virtual object VO2 at a position that intersects the line of sight V of the user U1, on the condition that the state in which the change per unit time of the line of sight V of the user U1 is within a predetermined range continues for a predetermined time.
  • the terminal device 10 includes the display control section 112 and the operation control section 114 .
  • the display control unit 112 causes the first virtual object VO1 and the second virtual object VO2 to be displayed at positions that do not overlap each other in the virtual space VS as seen from the user U1.
  • After the first state, in which the line of sight V of the user U1 indicated by the line-of-sight information intersects the first virtual object VO1 and the first virtual object VO1 is active, has transitioned to the second state, in which the line of sight V of the user U1 does not intersect the first virtual object VO1, the operation control unit 114 transitions the second virtual object VO2 from inactive to active.
  • the server 30 may include components similar to the display control section 112 and the operation control section 114 .
  • In that case, it is preferable that the server 30 performs the operations of the display control unit 112 and the operation control unit 114 described above.
  • the server 30 is an example of an information processing device.
  • the cylinder Y was assumed as the region where the line of sight V of the user U1 may exist while the user U1 is viewing the first virtual object VO1.
  • the shape of the area is not limited to a cylinder.
  • FIG. 10 is an explanatory diagram of an operation example of the display control unit 112 according to the third modification.
  • the shape of said region may be, for example, a cone N, as shown in FIG.
  • The cone N has a central axis L2 whose first end point C1 is at the position (x0, y0, z0) of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1 and whose second end point C2 is at the position (x1, y1, z1) of the first virtual object VO1.
  • the first end point C1 is the vertex of the cone N.
  • the radius R2 of the base of the cone N is the length that allows the base to contain the first virtual object VO1.
  • Strictly speaking, the position of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1 may not exactly match the position of the first end point C1, and the position (x1, y1, z1) of the first virtual object VO1 may not exactly match the position of the second end point C2; here, as before, each of these pairs of positions is treated as matching.
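Analogous to the cylinder test sketched earlier, membership in the cone-shaped region N can be checked as below; the allowed radius growing from the apex C1 toward the base at VO1 is the only difference. Again, the function name and the point-based representation of the gaze are illustrative assumptions, not details from the patent.

```python
import numpy as np

def inside_cone(p, apex, base_center, base_radius):
    """Return True if point p lies inside the cone N whose apex C1 is at the
    pupil position, whose axis L2 runs to the base centre C2 at the position
    of VO1, and whose base radius is R2."""
    p, apex, base_center = map(np.asarray, (p, apex, base_center))
    axis = base_center - apex
    height = np.linalg.norm(axis)
    if height == 0.0:
        return False
    axis_dir = axis / height
    t = np.dot(p - apex, axis_dir)       # distance of p along the axis from the apex
    if t < 0.0 or t > height:
        return False
    # The allowed radius grows linearly from 0 at the apex to base_radius at the base.
    allowed = base_radius * (t / height)
    radial = np.linalg.norm((p - apex) - t * axis_dir)
    return radial <= allowed

print(inside_cone((0.02, 1.0, 0.0), apex=(0, 0, 0),
                  base_center=(0, 2, 0), base_radius=0.3))  # True
```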
  • the terminal device 10 and the MR glasses 20 are implemented separately.
  • the method of realizing the terminal device 10 and the MR glasses 20 in the embodiment of the present invention is not limited to this.
  • the terminal device 10 and the MR glasses 20 may be realized within a single housing by providing the MR glasses 20 with the same functions as the terminal device 10 .
  • In the first embodiment, the information processing system 1 includes the MR glasses 20. However, instead of the MR glasses 20, the information processing system 1 may include, for example, an HMD (Head Mounted Display), VR (Virtual Reality) glasses, or AR (Augmented Reality) glasses.
  • Alternatively, the information processing system 1 may include, instead of the MR glasses 20, an ordinary smartphone or tablet equipped with an imaging device.
  • the generation unit 312 provided in the server 30 generates the first image information indicating the image displayed on the MR glasses 20 .
  • the device that generates the first image information is not limited to the server 30 .
  • the terminal device 10 may generate the above first image information.
  • the information processing system 1 does not have to include the server 30 as an essential component.
  • The storage device 12, the storage device 22, and the storage device 32 have been described as ROM, RAM, and the like, but they may be flexible disks, magneto-optical disks (for example, compact discs, digital versatile discs, and Blu-ray (registered trademark) discs), smart cards, flash memory devices (for example, cards, sticks, and key drives), CD-ROMs (Compact Disc-ROMs), registers, removable disks, hard disks, floppy (registered trademark) disks, magnetic strips, databases, servers, or other suitable storage media.
  • the program may be transmitted from a network via an electric communication line. Also, the program may be transmitted from the communication network NET via an electric communication line.
  • the information, signals, etc. described may be represented using any of a variety of different technologies.
  • For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination of these.
  • input/output information and the like may be stored in a specific location (for example, memory), or may be managed using a management table. Input/output information and the like can be overwritten, updated, or appended. The output information and the like may be deleted. The entered information and the like may be transmitted to another device.
  • the determination may be made by a value (0 or 1) represented using 1 bit, or by a true/false value (Boolean: true or false). Alternatively, it may be performed by numerical comparison (for example, comparison with a predetermined value).
  • each function illustrated in FIGS. 1 to 10 is realized by any combination of at least one of hardware and software.
  • The method of realizing each functional block is not particularly limited. That is, each functional block may be realized using one device that is physically or logically coupled, or using two or more physically or logically separated devices that are connected directly or indirectly (for example, by wire or wirelessly), and may be realized using these plural devices.
  • a functional block may be implemented by combining software in the one device or the plurality of devices.
  • software, instructions, information, etc. may be transmitted and received via a transmission medium.
  • For example, when software is transmitted from a website, server, or other remote source using at least one of wired technology (coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), etc.) and wireless technology (infrared, microwave, etc.), these wired and/or wireless technologies are included within the definition of a transmission medium.
  • The terms "system" and "network" are used interchangeably in this disclosure.
  • Information, parameters, and the like described in this disclosure may be expressed using absolute values, may be expressed using relative values from a predetermined value, or may be expressed using other corresponding information.
  • the terminal device 10 and the server 30 may be mobile stations (MS).
  • A mobile station may also be referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term. In the present disclosure, terms such as "mobile station", "user terminal", "user equipment (UE)", and "terminal" may be used interchangeably.
  • The terms "connected" and "coupled", and any variations thereof, mean any direct or indirect connection or coupling between two or more elements, and can include the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to each other. The coupling or connection between the elements may be physical, logical, or a combination thereof. For example, "connection" may be replaced with "access".
  • As used in this disclosure, two elements can be considered to be "connected" or "coupled" to each other by using at least one of one or more wires, cables, and printed electrical connections and, as some non-limiting and non-exhaustive examples, by using electromagnetic energy having wavelengths in the radio-frequency, microwave, and optical (both visible and invisible) regions.
  • the phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” means both “based only on” and “based at least on.”
  • The terms "judgment" and "decision" as used in this disclosure may encompass a wide variety of actions.
  • "Judgment" and "decision" may include, for example, considering judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (for example, looking up in a table, a database, or another data structure), or ascertaining, to be a "judgment" or "decision".
  • "Judgment" and "decision" may also include considering receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, or accessing (for example, accessing data in a memory) to be a "judgment" or "decision".
  • "Judgment" and "decision" may further include considering resolving, selecting, choosing, establishing, comparing, or the like to be a "judgment" or "decision".
  • In other words, "judgment" and "decision" may include considering some action to be a "judgment" or "decision".
  • "Judgment (decision)" may also be read as "assuming", "expecting", "considering", and the like.
  • Notification of predetermined information is not limited to explicit notification, and may be performed implicitly (for example, by not notifying the predetermined information).
  • 113: Determination unit, 114: Operation control unit, 211: Acquisition unit, 212: Display control unit, 311: Acquisition unit, 312: Generation unit, C1, C2: End point, L1, L2: Central axis, PR1, PR2, PR3: Control program, R1, R2: Radius, U1, U2, U3: User, VO: Virtual object, VO1: First virtual object, VO2: Second virtual object

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This information processing device comprises: an acquisition unit that acquires visual line information indicating the visual line of a user; a display control unit that displays a first virtual object and a second virtual object at positions not overlapping each other, as seen from the user, in a virtual space; and an operation control unit that causes the second virtual object to transition from inactive to active after a transition has occurred from a first state where the visual line of the user indicated by the visual line information intersects the first virtual object and the first virtual object is active to a second state where the visual line of the user does not intersect the first virtual object.

Description

Information processing device

The present invention relates to an information processing device.

In MR (Mixed Reality) technology, the real environment perceived by the user is augmented by a computer. By using this technology, it is possible, for example, to precisely superimpose a virtual space on the real space that the user sees through MR glasses worn on the head.

For example, Patent Document 1 discloses an information processing device that displays a realistic virtual-reality image while ensuring the safety of a user riding in a vehicle. The information processing device includes AR goggles on which images of a plurality of virtual first objects are displayed. The information processing device also detects a second object that actually exists outside or inside the vehicle. Further, when it is determined that the image of a first object would be superimposed on a visibility-securing area that affects the user's visibility of the second object, the information processing device moves, for example, the position of the image of the first object so that it is not superimposed on the visibility-securing area.

Patent Document 1: JP 2020-204984 A

However, the technique of Patent Document 1 defines the positional relationship between the virtual first object and the real second object; it cannot change the state of an object according to which object the user is paying attention to.

Accordingly, the problem to be solved by the present invention is to change the state of the other of two virtual objects arranged in a virtual space when the target of the user's attention changes from a state in which the user is focusing on one of the virtual objects.

An information processing apparatus according to a preferred aspect of the present invention includes: an acquisition unit that acquires line-of-sight information indicating a user's line of sight; a display control unit that displays a first virtual object and a second virtual object at positions that do not overlap each other in a virtual space as seen from the user; and an operation control unit that transitions the second virtual object from inactive to active after a transition from a first state, in which the user's line of sight indicated by the line-of-sight information intersects the first virtual object and the first virtual object is active, to a second state in which the user's line of sight does not intersect the first virtual object.

According to the present invention, when the target of the user's attention changes from a state in which the user is focusing on one of two virtual objects arranged in a virtual space, the state of the other virtual object can be changed.
FIG. 1 is a diagram showing the overall configuration of the information processing system 1 according to the first embodiment. FIG. 2 is a perspective view showing the appearance of the MR glasses 20 according to the first embodiment. FIG. 3 is a schematic diagram of the virtual space VS provided to the user U1 by using the MR glasses 20 according to the first embodiment. FIG. 4 is a schematic diagram of the virtual space VS provided to the user U1 by using the MR glasses 20 according to the first embodiment. FIG. 5 is a block diagram showing a configuration example of the MR glasses 20 according to the first embodiment. FIG. 6 is a block diagram showing a configuration example of the terminal device 10 according to the first embodiment. FIG. 7 is an explanatory diagram of an operation example of the display control unit 112 according to the first embodiment. FIG. 8 is a block diagram showing a configuration example of the server 30 according to the first embodiment. FIG. 9 is a flowchart showing operations of the server 30 according to the first embodiment. FIG. 10 is an explanatory diagram of an operation example of the display control unit 112 according to a modification.
1: First Embodiment

Hereinafter, the configuration of the information processing system 1 including the terminal device 10 as an information processing device according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 9.
1-1: Configuration of the First Embodiment

1-1-1: Overall Configuration

FIG. 1 is a diagram showing the overall configuration of the information processing system 1 according to the first embodiment of the present invention. The information processing system 1 is a system that uses MR technology to provide a virtual space to the user U1 wearing the MR glasses 20, which will be described later.
The information processing system 1 includes the terminal device 10, the MR glasses 20, and the server 30. The terminal device 10 is an example of an information processing device. In the information processing system 1, the terminal device 10 and the server 30 are communicably connected to each other via a communication network NET. The terminal device 10 and the MR glasses 20 are also connected so as to be able to communicate with each other. In FIG. 1, three pairs of the terminal device 10 and the MR glasses 20 are shown: the pair of the terminal device 10-1 and the MR glasses 20-1, the pair of the terminal device 10-2 and the MR glasses 20-2, and the pair of the terminal device 10-3 and the MR glasses 20-3. However, this number of pairs is merely an example, and the information processing system 1 can include any number of pairs of the terminal device 10 and the MR glasses 20.

The server 30 provides various data and cloud services to the terminal device 10 via the communication network NET.

The terminal device 10 displays virtual objects placed in the virtual space on the MR glasses 20 that the user wears on the head. The virtual space is, for example, a celestial-sphere space. The virtual objects are, for example, virtual objects representing data such as still images, moving images, 3DCG models, HTML files, and text files, and virtual objects representing applications. Examples of text files include memos, source code, diaries, and recipes. Examples of applications include browsers, applications for using SNS, and applications for generating document files. The terminal device 10 is preferably a mobile terminal device such as a smartphone or a tablet.

The MR glasses 20 are a see-through wearable display worn on the user's head. Under the control of the terminal device 10, the MR glasses 20 display virtual objects on the display panels provided for the respective binocular lenses. The MR glasses 20 are an example of a display device.
1-1-2: Configuration of the MR Glasses
FIG. 2 is a perspective view showing the appearance of the MR glasses 20. As shown in FIG. 2, the MR glasses 20 have temples 91 and 92, a bridge 93, frames 94 and 95, and lenses 41L and 41R, like general spectacles. An imaging device 26 is provided on the bridge 93. The imaging device 26 images the outside world and outputs imaging information indicating the captured image.
 レンズ41L及び41Rの各々は、ハーフミラーを備えている。フレーム94には、左眼用の液晶パネル又は有機ELパネルと、左眼用の表示パネルから射出された光をレンズ41Lに導光する光学部材が設けられる。液晶パネル又は有機ELパネルを、以下、表示パネルと総称する。レンズ41Lに設けられるハーフミラーは、外界の光を透過させて左眼に導くと共に、光学部材によって導光された光を反射して、左眼に入射させる。フレーム95には、右眼用の表示パネルと、右眼用の表示パネルから射出された光をレンズ41Rに導光する光学部材が設けられる。レンズ41Rに設けられるハーフミラーは、外界の光を透過させて右眼に導くと共に、光学部材によって導光された光を反射して、右眼に入射させる。 Each of the lenses 41L and 41R has a half mirror. The frame 94 is provided with a liquid crystal panel or an organic EL panel for the left eye and an optical member for guiding light emitted from the display panel for the left eye to the lens 41L. A liquid crystal panel or an organic EL panel is hereinafter generically referred to as a display panel. The half mirror provided in the lens 41L transmits external light and guides it to the left eye, and reflects the light guided by the optical member to enter the left eye. The frame 95 is provided with a right-eye display panel and an optical member that guides light emitted from the right-eye display panel to the lens 41R. The half mirror provided in the lens 41R transmits external light and guides it to the right eye, and reflects the light guided by the optical member to enter the right eye.
 後述するディスプレイ28は、レンズ41L、左眼用の表示パネル、及び左眼用の光学部材、並びにレンズ41R、右眼用の表示パネル、及び右眼用の光学部材を含む。 The display 28, which will be described later, includes a lens 41L, a left-eye display panel, a left-eye optical member, and a lens 41R, a right-eye display panel, and a right-eye optical member.
With the above configuration, the user can observe the images displayed by the display panels in a see-through state, superimposed on the view of the outside world. Further, in the MR glasses 20, of a pair of binocular images with parallax, the left-eye image is displayed on the left-eye display panel and the right-eye image is displayed on the right-eye display panel, which allows the user U1 to perceive the displayed image as if it had depth and a stereoscopic effect.
FIGS. 3 and 4 are schematic diagrams of the virtual space VS provided to the user U1 by using the MR glasses 20. As shown in FIG. 3, virtual objects VO1 to VO5 representing various contents such as browsers, cloud services, images, and moving images are arranged in the virtual space VS. By moving around a public space while wearing the MR glasses 20 on which the virtual objects VO1 to VO5 arranged in the virtual space VS are displayed, the user U1 can experience the virtual space VS as a private space within the public space. Consequently, the user U1 can act in the public space while receiving the benefits brought by the virtual objects VO1 to VO5 arranged in the virtual space VS.
Also, as shown in FIG. 4, it is possible for a plurality of users U1 to U3 to share the virtual space VS. By sharing the virtual space VS, the users U1 to U3 can share one or more virtual objects VO and can communicate with one another through the shared virtual objects VO.
In this embodiment, as shown in FIGS. 3 and 4, any two of the plurality of virtual objects VO1 to VO5 are displayed so as not to overlap each other in the virtual space VS as seen from the user U1. Details of the method of displaying the virtual objects VO will be described later.
 図5は、MRグラス20の構成例を示すブロック図である。MRグラス20は、処理装置21、記憶装置22、視線検出装置23、GPS装置24、動き検出装置25、撮像装置26、通信装置27、ディスプレイ28、及びスピーカ29を備える。MRグラス20が有する各要素は、情報を通信するための単体又は複数のバスによって相互に接続される。なお、本明細書における「装置」という用語は、回路、デバイス又はユニット等の他の用語に読替えてもよい。 FIG. 5 is a block diagram showing a configuration example of the MR glasses 20. As shown in FIG. The MR glasses 20 include a processing device 21 , a storage device 22 , a line-of-sight detection device 23 , a GPS device 24 , a motion detection device 25 , an imaging device 26 , a communication device 27 , a display 28 and a speaker 29 . Each element of the MR glasses 20 is interconnected by one or more buses for communicating information. Note that the term "apparatus" in this specification may be replaced with another term such as a circuit, a device, or a unit.
The processing device 21 is a processor that controls the entire MR glasses 20. The processing device 21 is configured using, for example, one or more chips. The processing device 21 is configured using, for example, a central processing unit (CPU) that includes interfaces with peripheral devices, an arithmetic unit, registers, and the like. Some or all of the functions of the processing device 21 may be realized by hardware such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array). The processing device 21 executes various processes in parallel or sequentially.
 記憶装置22は、処理装置21による読取及び書込が可能な記録媒体である。また、記憶装置22は、処理装置21が実行する制御プログラムPR1を含む複数のプログラムを記憶する。 The storage device 22 is a recording medium that can be read and written by the processing device 21 . The storage device 22 also stores a plurality of programs including the control program PR1 executed by the processing device 21 .
 視線検出装置23は、ユーザU1の視線を検出する。視線検出装置23による視線の検出は、どのような方法を用いてもよい。視線検出装置23は、例えば、目頭の位置と虹彩の位置に基づいて視線情報を検出してもよい。また、視線検出装置23は、検出結果に基づいてユーザU1の視線の方向を示す視線情報を、後述の処理装置21に出力する。処理装置21に出力された視線情報は、通信装置27を介して、端末装置10に出力される。 The line-of-sight detection device 23 detects the line of sight of the user U1. Any method may be used to detect the line of sight by the line of sight detection device 23 . The line-of-sight detection device 23 may detect line-of-sight information based on, for example, the position of the inner corner of the eye and the position of the iris. The line-of-sight detection device 23 also outputs line-of-sight information indicating the line-of-sight direction of the user U1 to the processing device 21, which will be described later, based on the detection result. The line-of-sight information output to the processing device 21 is output to the terminal device 10 via the communication device 27 .
 GPS装置24は、複数の衛星からの電波を受信する。また、GPS装置24は、受信した電波から位置情報を生成する。位置情報は、MRグラス20の位置を示す。位置情報は、位置を特定できるのであれば、どのような形式であってもよい。位置情報は、例えば、MRグラス20の緯度と経度とを示す。一例として、位置情報はGPS装置24から得られる。しかし、MRグラス20は、どのような方法によって位置情報を取得してもよい。取得された位置情報は、処理装置21に供給される。処理装置21に出力された位置情報は、通信装置27を介して、端末装置10に送信される。 The GPS device 24 receives radio waves from multiple satellites. The GPS device 24 also generates position information from the received radio waves. The positional information indicates the position of the MR glasses 20 . The location information may be in any format as long as the location can be specified. The position information indicates the latitude and longitude of the MR glasses 20, for example. As an example, location information is obtained from GPS device 24 . However, the MR glasses 20 may acquire position information by any method. The acquired position information is supplied to the processing device 21 . The position information output to the processing device 21 is transmitted to the terminal device 10 via the communication device 27 .
 動き検出装置25は、MRグラス20の動きを検出する。動き検出装置25としては、加速度を検出する加速度センサ及び角加速度を検出するジャイロセンサなどの慣性センサが該当する。加速度センサは、直交するX軸、Y軸、及びZ軸の加速度を検出する。ジャイロセンサは、X軸、Y軸、及びZ軸を回転の中心軸とする角加速度を検出する。動き検出装置25は、ジャイロセンサの出力情報に基づいて、MRグラス20の姿勢を示す姿勢情報を生成できる。動き検出装置25は、MRグラス20の姿勢に係る姿勢情報を処理装置21に供給する。また、動き検出装置25は、MRグラス20の動きに係る動き情報を処理装置21に供給する。動き情報は、3軸の加速度を各々示す加速度データ及び3軸の角加速度を各々示す角加速度データを含む。処理装置21に供給された姿勢情報及び動き情報は、通信装置27を介して、端末装置10に送信される。 The motion detection device 25 detects motion of the MR glasses 20 . The motion detection device 25 corresponds to an inertial sensor such as an acceleration sensor that detects acceleration and a gyro sensor that detects angular acceleration. The acceleration sensor detects acceleration in orthogonal X-, Y-, and Z-axes. The gyro sensor detects angular acceleration around the X-, Y-, and Z-axes. The motion detection device 25 can generate posture information indicating the posture of the MR glasses 20 based on the output information of the gyro sensor. The motion detection device 25 supplies orientation information relating to the orientation of the MR glasses 20 to the processing device 21 . In addition, the motion detection device 25 supplies motion information relating to the motion of the MR glasses 20 to the processing device 21 . The motion information includes acceleration data indicating three-axis acceleration and angular acceleration data indicating three-axis angular acceleration. The posture information and motion information supplied to the processing device 21 are transmitted to the terminal device 10 via the communication device 27 .
 撮像装置26は、外界を撮像して得られた撮像情報を出力する。また、撮像装置26は、例えば、レンズ、撮像素子、増幅器、及びAD変換器を備える。レンズを介して集光された光は、撮像素子によってアナログ信号である撮像信号に変換される。増幅器は撮像信号を増幅した上でAD変換器に出力する。AD変換器はアナログ信号である増幅された撮像信号をデジタル信号である撮像情報に変換する。変換された撮像情報は、処理装置21に供給される。処理装置21に供給された撮像情報は、通信装置27を介して、端末装置10に送信される。 The imaging device 26 outputs imaging information obtained by imaging the outside world. Also, the imaging device 26 includes, for example, a lens, an imaging element, an amplifier, and an AD converter. The light condensed through the lens is converted into an image pickup signal, which is an analog signal, by the image pickup device. The amplifier amplifies the imaging signal and outputs it to the AD converter. The AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal. The converted imaging information is supplied to the processing device 21 . The imaging information supplied to the processing device 21 is transmitted to the terminal device 10 via the communication device 27 .
 通信装置27は、他の装置と通信を行うための、送受信デバイスとしてのハードウェアである。また、通信装置27は、例えば、ネットワークデバイス、ネットワークコントローラ、ネットワークカード、通信モジュール等とも呼ばれる。通信装置27は、有線接続用のコネクターを備え、上記コネクターに対応するインタフェース回路を備えていてもよい。また、通信装置27は、無線通信インタフェースを備えていてもよい。有線接続用のコネクター及びインタフェース回路としては有線LAN、IEEE1394、USBに準拠した製品が挙げられる。また、無線通信インタフェースとしては無線LAN及びBluetooth(登録商標)等に準拠した製品が挙げられる。 The communication device 27 is hardware as a transmission/reception device for communicating with other devices. The communication device 27 is also called a network device, a network controller, a network card, a communication module, etc., for example. The communication device 27 may include a connector for wired connection and an interface circuit corresponding to the connector. Further, the communication device 27 may have a wireless communication interface. Products conforming to wired LAN, IEEE1394, and USB are examples of connectors and interface circuits for wired connection. Also, as a wireless communication interface, there are products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
 ディスプレイ28は、画像を表示するデバイスである。ディスプレイ28は、処理装置21による制御のもとで各種の画像を表示する。ディスプレイ28は、上記のように、レンズ41L、左眼用の表示パネル、及び左眼用の光学部材、並びにレンズ41R、右眼用の表示パネル、及び右眼用の光学部材を含む。表示パネルとしては、例えば、液晶表示パネル及び有機EL表示パネル等の各種の表示パネルが好適に利用される。 The display 28 is a device that displays images. The display 28 displays various images under the control of the processing device 21 . The display 28 includes the lens 41L, the left-eye display panel, the left-eye optical member, and the lens 41R, the right-eye display panel, and the right-eye optical member, as described above. Various display panels such as a liquid crystal display panel and an organic EL display panel are preferably used as the display panel.
 スピーカ29は音声を放音するデバイスである。スピーカ29は、処理装置21による制御のもとで、各種の音声を放音する。例えば、デジタル信号である音声データが、図示しないDA変換器によってアナログ信号である音声信号に変換される。当該音声信号は、図示しないアンプによって、振幅が増幅される。スピーカ29は、振幅が増幅された後の音声信号によって示される音声を放音する。 The speaker 29 is a device that emits sound. The speaker 29 emits various sounds under the control of the processing device 21 . For example, audio data, which is a digital signal, is converted into an audio signal, which is an analog signal, by a DA converter (not shown). The audio signal is amplified in amplitude by an amplifier (not shown). The speaker 29 emits sound indicated by the audio signal after the amplitude is amplified.
 処理装置21は、例えば、記憶装置22から制御プログラムPR1を読み出して実行することによって、取得部211、及び表示制御部212として機能する。 The processing device 21 functions as an acquisition unit 211 and a display control unit 212, for example, by reading the control program PR1 from the storage device 22 and executing it.
 取得部211は、端末装置10からMRグラス20に表示される画像を示す画像情報を取得する。より詳細には、取得部211は、例えば、端末装置10から送信される後述の第2画像情報を取得する。 The acquisition unit 211 acquires image information indicating an image displayed on the MR glasses 20 from the terminal device 10 . More specifically, the acquisition unit 211 acquires, for example, second image information, which is transmitted from the terminal device 10 and will be described later.
The acquisition unit 211 also acquires the line-of-sight information input from the line-of-sight detection device 23, the position information input from the GPS device 24, the posture information and motion information input from the motion detection device 25, and the imaging information input from the imaging device 26. The acquisition unit 211 then outputs the acquired line-of-sight information, position information, posture information, motion information, and imaging information to the communication device 27.
 表示制御部212は、取得部211によって端末装置10から取得された第2画像情報に基づいて、ディスプレイ28に対して、第2画像情報によって示される画像を表示させる。 The display control unit 212 causes the display 28 to display an image indicated by the second image information based on the second image information acquired from the terminal device 10 by the acquisition unit 211 .
1-1-3: Configuration of the Terminal Device
FIG. 6 is a block diagram showing a configuration example of the terminal device 10. The terminal device 10 includes a processing device 11, a storage device 12, a communication device 13, a display 14, an input device 15, and an inertial sensor 16. The elements of the terminal device 10 are interconnected by one or more buses for communicating information.
 処理装置11は、端末装置10の全体を制御するプロセッサである。また、処理装置11は、例えば、単数又は複数のチップを用いて構成される。処理装置11は、例えば、周辺装置とのインタフェース、演算装置及びレジスタ等を含む中央処理装置(CPU)を用いて構成される。なお、処理装置11が有する機能の一部又は全部を、DSP、ASIC、PLD、及びFPGA等のハードウェアによって実現してもよい。処理装置11は、各種の処理を並列的又は逐次的に実行する。 The processing device 11 is a processor that controls the terminal device 10 as a whole. Also, the processing device 11 is configured using, for example, a single chip or a plurality of chips. The processing unit 11 is configured using, for example, a central processing unit (CPU) including interfaces with peripheral devices, arithmetic units, registers, and the like. A part or all of the functions of the processing device 11 may be implemented by hardware such as DSP, ASIC, PLD, and FPGA. The processing device 11 executes various processes in parallel or sequentially.
 記憶装置12は、処理装置11が読取及び書込が可能な記録媒体である。また、記憶装置12は、処理装置11が実行する制御プログラムPR2を含む複数のプログラムを記憶する。 The storage device 12 is a recording medium readable and writable by the processing device 11 . The storage device 12 also stores a plurality of programs including the control program PR2 executed by the processing device 11 .
 通信装置13は、他の装置と通信を行うための、送受信デバイスとしてのハードウェアである。通信装置13は、例えば、ネットワークデバイス、ネットワークコントローラ、ネットワークカード、通信モジュール等とも呼ばれる。通信装置13は、有線接続用のコネクターを備え、上記コネクターに対応するインタフェース回路を備えていてもよい。また、通信装置13は、無線通信インタフェースを備えていてもよい。有線接続用のコネクター及びインタフェース回路としては有線LAN、IEEE1394、及びUSBに準拠した製品が挙げられる。また、無線通信インタフェースとしては無線LAN及びBluetooth(登録商標)等に準拠した製品が挙げられる。 The communication device 13 is hardware as a transmission/reception device for communicating with other devices. The communication device 13 is also called a network device, a network controller, a network card, a communication module, etc., for example. The communication device 13 may include a connector for wired connection and an interface circuit corresponding to the connector. Further, the communication device 13 may have a wireless communication interface. Products conforming to wired LAN, IEEE1394, and USB are examples of connectors and interface circuits for wired connection. Also, as a wireless communication interface, there are products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
 ディスプレイ14は、画像及び文字情報を表示するデバイスである。ディスプレイ14は、処理装置11の制御のもとで各種の画像を表示する。例えば、液晶表示パネル及び有機EL(Electro Luminescence)表示パネル等の各種の表示パネルがディスプレイ14として好適に利用される。 The display 14 is a device that displays images and character information. The display 14 displays various images under the control of the processing device 11 . For example, various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are preferably used as the display 14 .
 入力装置15は、MRグラス20を頭部に装着したユーザU1からの操作を受け付ける。例えば、入力装置15は、キーボード、タッチパッド、タッチパネル又はマウス等のポインティングデバイスを含んで構成される。ここで、入力装置15は、タッチパネルを含んで構成される場合、ディスプレイ14を兼ねてもよい。 The input device 15 accepts operations from the user U1 who wears the MR glasses 20 on his head. For example, the input device 15 includes a pointing device such as a keyboard, touch pad, touch panel, or mouse. Here, when the input device 15 includes a touch panel, the input device 15 may also serve as the display 14 .
 慣性センサ16は、慣性力を検出するセンサである。慣性センサ16は、例えば、加速度センサ、角速度センサ、及びジャイロセンサのうち、1以上のセンサを含む。処理装置11は、慣性センサ16の出力情報に基づいて、端末装置10の姿勢を検出する。更に、処理装置11は、端末装置10の姿勢に基づいて、天球型の仮想空間VSにおいて、仮想オブジェクトVOの選択、文字の入力、及び指示の入力を受け付ける。例えば、ユーザU1が端末装置10の中心軸を仮想空間VSの所定領域に向けた状態で、入力装置15を操作することによって、所定領域に配置される仮想オブジェクトVOが選択される。入力装置15に対するユーザU1の操作は、例えば、ダブルタップである。このようにユーザU1は端末装置10を操作することによって、端末装置10の入力装置15を見なくても仮想オブジェクトVOを選択できる。 The inertial sensor 16 is a sensor that detects inertial force. The inertial sensor 16 includes, for example, one or more of an acceleration sensor, an angular velocity sensor, and a gyro sensor. The processing device 11 detects the orientation of the terminal device 10 based on the output information from the inertial sensor 16 . Further, the processing device 11 receives selection of the virtual object VO, input of characters, and input of instructions in the celestial sphere virtual space VS based on the orientation of the terminal device 10 . For example, the user U1 directs the central axis of the terminal device 10 toward a predetermined area of the virtual space VS, and operates the input device 15 to select the virtual object VO arranged in the predetermined area. The user U1's operation on the input device 15 is, for example, a double tap. By operating the terminal device 10 in this way, the user U1 can select the virtual object VO without looking at the input device 15 of the terminal device 10 .
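As a rough illustration of this selection scheme, the sketch below (not part of the embodiment) casts a ray along the terminal's central axis, derived here from a simplified yaw/pitch orientation, and returns the virtual object whose bounding sphere the ray hits first; the VirtualObject class, the bounding-sphere radius used for hit testing, and the yaw/pitch representation are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    position: tuple        # (x, y, z) in the virtual space VS
    radius: float          # hypothetical bounding-sphere radius used for hit testing
    active: bool = False

def direction_from_orientation(yaw: float, pitch: float) -> tuple:
    """Convert a device orientation (yaw/pitch in radians) into a unit direction
    vector along the terminal's central axis. A real implementation would use the
    full attitude derived from the inertial sensor 16."""
    return (math.cos(pitch) * math.sin(yaw),
            math.cos(pitch) * math.cos(yaw),
            math.sin(pitch))

def pick_object(device_pos, yaw, pitch, objects):
    """Return the nearest virtual object whose bounding sphere is crossed by the
    ray cast along the terminal's central axis, or None if nothing is hit."""
    d = direction_from_orientation(yaw, pitch)
    best, best_t = None, float("inf")
    for vo in objects:
        # Vector from the device to the object's centre.
        w = tuple(p - q for p, q in zip(vo.position, device_pos))
        t = sum(wi * di for wi, di in zip(w, d))          # projection onto the ray
        if t < 0:
            continue                                      # object is behind the device
        closest = tuple(q + t * di for q, di in zip(device_pos, d))
        dist = math.dist(closest, vo.position)            # distance from ray to centre
        if dist <= vo.radius and t < best_t:
            best, best_t = vo, t
    return best

# Usage: a double tap on the input device 15 would confirm the object returned here.
objects = [VirtualObject("VO1", (0.0, 2.0, 0.0), 0.3), VirtualObject("VO2", (1.0, 2.0, 0.5), 0.3)]
print(pick_object((0.0, 0.0, 0.0), yaw=0.0, pitch=0.0, objects=objects))
```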
 処理装置11は、記憶装置12から制御プログラムPR2を読み出して実行することによって、取得部111、表示制御部112、判定部113、及び動作制御部114として機能する。 The processing device 11 functions as an acquisition unit 111, a display control unit 112, a determination unit 113, and an operation control unit 114 by reading the control program PR2 from the storage device 12 and executing it.
 取得部111は、ユーザU1の視線を示す視線情報を取得する。より詳細には、取得部111は、通信装置13を介して、MRグラス20から出力される視線情報を取得する。また、取得部111は、通信装置13を介して、サーバ30から、MRグラス20に表示される画像を示す、後述の第1画像情報を取得する。 The acquisition unit 111 acquires line-of-sight information indicating the line of sight of the user U1. More specifically, the acquisition unit 111 acquires line-of-sight information output from the MR glasses 20 via the communication device 13 . In addition, the acquisition unit 111 acquires first image information, which is described below and indicates an image displayed on the MR glasses 20 , from the server 30 via the communication device 13 .
Based on the first image information acquired by the acquisition unit 111, the display control unit 112 displays the first virtual object VO1 and the second virtual object VO2 at positions that do not overlap each other in the virtual space VS as seen from the user U1. Specifically, the first image information is image information indicating the images of the first virtual object VO1 itself and the second virtual object VO2 itself. The display control unit 112 also transmits, to the MR glasses 20 via the communication device 13, second image information that includes the first image information and layout information according to which the first virtual object VO1 and the second virtual object VO2 do not overlap each other as seen from the user U1 in the virtual space VS that the user U1 views through the MR glasses 20.
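For illustration only, the relationship between the first image information and the second image information described above can be sketched as a pair of data structures; the field names and the dictionary-based layout representation below are assumptions, not a format defined by the embodiment.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class FirstImageInfo:
    # Images of the virtual objects themselves (here keyed by object name),
    # as received from the server 30.
    object_images: Dict[str, bytes]

@dataclass
class SecondImageInfo:
    # Layout according to which, as seen from the user U1, the first and second
    # virtual objects do not overlap in the virtual space VS.
    layout: Dict[str, Tuple[float, float, float]]
    first_image_info: FirstImageInfo

def build_second_image_info(first: FirstImageInfo, vo1_pos, vo2_pos) -> SecondImageInfo:
    """Combine the first image information with non-overlapping layout information
    to obtain the second image information sent to the MR glasses 20."""
    return SecondImageInfo(layout={"VO1": vo1_pos, "VO2": vo2_pos}, first_image_info=first)

info = build_second_image_info(FirstImageInfo({"VO1": b"...", "VO2": b"..."}),
                               vo1_pos=(0.0, 2.0, 0.0), vo2_pos=(0.8, 2.0, 0.0))
print(info.layout)
```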
 図7は、表示制御部112による第1仮想オブジェクトVO1と第2仮想オブジェクトVO2の表示方法の説明図である。以降の説明では、仮想空間VSにおいて、相互に直交するX軸、Y軸及びZ軸を想定する。一例として、X軸はユーザU1の左右方向に延伸する。更に、ユーザU1から見て、X軸に沿う右方向を正方向とし、X軸に沿う左方向を負方向とする。また、Y軸はユーザU1の前後方向に延伸する。更に、ユーザU1から見て、Y軸に沿う前方向を正方向とし、Y軸に沿う後ろ方向を負方向とする。これらのX軸とY軸とで水平面が構成される。また、Z軸はXY平面に直交し、ユーザU1の上下方向に延伸する。更に、ユーザU1から見て、Z軸に沿う上方向を正方向とし、Z軸に沿う下方向を負方向とする。 FIG. 7 is an explanatory diagram of how the display control unit 112 displays the first virtual object VO1 and the second virtual object VO2. In the following description, it is assumed that X, Y and Z axes are orthogonal to each other in the virtual space VS. As an example, the X-axis extends in the horizontal direction of user U1. Further, as viewed from the user U1, the right direction along the X axis is the positive direction, and the left direction along the X axis is the negative direction. Also, the Y-axis extends in the front-rear direction of the user U1. Further, the forward direction along the Y-axis is defined as the positive direction, and the backward direction along the Y-axis is defined as the negative direction, as viewed from the user U1. A horizontal plane is formed by these X-axis and Y-axis. Also, the Z-axis is orthogonal to the XY plane and extends in the vertical direction of the user U1. Further, as viewed from the user U1, the upward direction along the Z axis is the positive direction, and the downward direction along the Z axis is the negative direction.
 仮想空間VSにおいて、ユーザU1の左眼球の瞳孔EL又は右眼球の瞳孔ERの位置、すなわちユーザU1の視線Vが有する一方の端点の座標は、(x,y,z)=(x0,y0,z0)であるものとする。また、表示制御部112は、第1仮想オブジェクトVO1を、(x,y,z)=(x1,y1,z1)の位置に表示させるものとする。 In the virtual space VS, the position of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1, that is, the coordinates of one end point of the line of sight V of the user U1 are (x, y, z)=(x0, y0, z0). Also, the display control unit 112 displays the first virtual object VO1 at the position of (x, y, z)=(x1, y1, z1).
Here, a cylinder Y is assumed as the region in which the line of sight V of the user U1 can exist while the user U1 is viewing the first virtual object VO1. The cylinder Y has a central axis L1 whose first end point C1 is the position (x0, y0, z0) of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1 and whose second end point C2 is the position (x1, y1, z1) of the first virtual object VO1. Furthermore, the radius R1 of the base of the cylinder Y is a length that allows the base to contain the first virtual object VO1. Note that in FIG. 7, for convenience of explanation, the position of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1 does not coincide with the position of the first end point C1. Similarly, in FIG. 7, the position (x1, y1, z1) of the first virtual object VO1 does not coincide with the position of the second end point C2. In practice, however, the positions in each of these pairs coincide.
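The containment check implied by the cylinder Y can be sketched as follows. Because the line of sight V and the central axis L1 share the end point C1, the radial distance of the gaze from the axis grows linearly along the ray, so it suffices to test the offset at the depth of the first virtual object VO1. The function below is a minimal sketch under that assumption; it treats the gaze as a single straight ray and is not taken from the embodiment.

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))

def gaze_inside_cylinder(eye, gaze_dir, vo1_pos, r1):
    """Return True when the line of sight V (a ray starting at the eye position
    'eye' with direction 'gaze_dir') stays inside the cylinder Y whose central
    axis L1 runs from the eye (end point C1) to the first virtual object VO1
    (end point C2) and whose base radius is R1."""
    axis = _sub(vo1_pos, eye)                 # central axis L1 as a vector
    axis_len = _norm(axis)
    axis_dir = tuple(a / axis_len for a in axis)
    g_dir = tuple(g / _norm(gaze_dir) for g in gaze_dir)
    along = _dot(g_dir, axis_dir)             # cosine of the angle between V and L1
    if along <= 0.0:
        return False                          # looking away from VO1 entirely
    # Point of the gaze ray whose component along L1 equals the axis length,
    # i.e. the gaze at the depth of VO1.
    t = axis_len / along
    p = tuple(e + t * g for e, g in zip(eye, g_dir))
    radial = _norm(_sub(p, vo1_pos))          # offset from the axis at that depth
    return radial <= r1

# Example: eye at the origin, VO1 two metres ahead (+Y), gaze slightly off-axis.
eye = (0.0, 0.0, 0.0)
vo1 = (0.0, 2.0, 0.0)
print(gaze_inside_cylinder(eye, (0.05, 1.0, 0.0), vo1, r1=0.3))   # True: V stays within Y
print(gaze_inside_cylinder(eye, (0.5, 1.0, 0.0), vo1, r1=0.3))    # False: V has left Y
```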
The display control unit 112 displays the second virtual object VO2 outside the cylinder Y in an inactive state in the virtual space VS. Specifically, the display position (x, y, z) = (x2, y2, z2) of the second virtual object VO2 is a position at which the entire second virtual object VO2 falls outside the range of the cylinder Y. Here, "active" means that the virtual object VO is in operation or that the virtual object VO has been selected by the user U1. Conversely, "inactive" means that the operation of the virtual object VO is stopped or that the virtual object VO has not been selected by the user U1. For example, when the virtual object VO corresponds to a video application, "the virtual object VO is in operation" means that the video is being played back in response to an operation by the user U1. When the virtual object VO corresponds to a mail application, "the virtual object VO is in operation" means that mail is being sent or received, or that a text file showing the content of the mail is open, in response to an operation by the user U1. When the virtual object VO corresponds to a music application, "the virtual object VO is in operation" means that the music is being emitted from the speaker 29 in response to an operation by the user U1.
In the first state, in which the line of sight V of the user U1 intersects the first virtual object VO1 and the first virtual object VO1 is active, the display control unit 112 displays the second virtual object VO2 in the virtual space VS. Specifically, in the first state, in which the entire line of sight V of the user U1 is positioned inside the cylinder Y and the first virtual object VO1 is active, the display control unit 112 displays the entire second virtual object VO2 outside the cylinder Y.
The display position of the second virtual object VO2 is within a region that is no more than a predetermined distance away from the first virtual object VO1. Specifically, the second virtual object VO2 is displayed within the field of view of the user U1 while the user U1 is viewing the first virtual object VO1. Through this processing, the terminal device 10 can draw the attention of the user U1 to the second virtual object VO2. The above "predetermined distance" is an example of a "first distance".
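One conceivable way to choose such a display position is sketched below: candidate positions on a circle of radius equal to the first distance around VO1 are tested, and the first candidate whose whole extent lies outside the cylinder Y is used. The candidate generation, the half_extent approximation of "the entire second virtual object", and the helper radial_offset are illustrative assumptions, not part of the embodiment.

```python
import math

def radial_offset(point, eye, vo1_pos):
    """Perpendicular distance of 'point' from the central axis L1 (eye -> VO1)."""
    ax, ay, az = (v - e for v, e in zip(vo1_pos, eye))
    px, py, pz = (p - e for p, e in zip(point, eye))
    axis_len = math.sqrt(ax * ax + ay * ay + az * az)
    t = (px * ax + py * ay + pz * az) / axis_len          # component along the axis
    foot = (t * ax / axis_len, t * ay / axis_len, t * az / axis_len)
    return math.dist((px, py, pz), foot)

def place_vo2(eye, vo1_pos, r1, first_distance, half_extent, steps=36):
    """Return a position for the second virtual object VO2 that is
    (a) entirely outside the cylinder Y (its centre at least R1 + half_extent
    away from the axis L1), and (b) within 'first_distance' of VO1, so that VO2
    stays in the user's field of view while VO1 is being watched."""
    for i in range(steps):
        angle = 2.0 * math.pi * i / steps
        # Candidates on a horizontal circle of radius 'first_distance' around VO1.
        candidate = (vo1_pos[0] + first_distance * math.cos(angle),
                     vo1_pos[1],
                     vo1_pos[2] + first_distance * math.sin(angle))
        if radial_offset(candidate, eye, vo1_pos) >= r1 + half_extent:
            return candidate
    return None   # no admissible position found with this candidate set

eye = (0.0, 0.0, 0.0)
vo1 = (0.0, 2.0, 0.0)
print(place_vo2(eye, vo1, r1=0.3, first_distance=0.8, half_extent=0.2))
```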
Returning to FIG. 6, the determination unit 113 determines whether the first state, in which the line of sight V of the user U1 intersects the first virtual object VO1 and the first virtual object VO1 is active, has transitioned to the second state, in which the line of sight V of the user U1 does not intersect the first virtual object VO1.
After the determination unit 113 determines that the first state has transitioned to the second state, the operation control unit 114 transitions the second virtual object VO2 from inactive to active. Specifically, in FIG. 7, after the line of sight V of the user U1 changes from a state of being contained in the cylinder Y to a state of not being contained in the cylinder Y, the operation control unit 114 transitions the second virtual object VO2 from inactive to active.
In the second state, the operation control unit 114 preferably transitions the second virtual object VO2 from inactive to active after the line of sight V of the user U1 has remained stationary for a predetermined time. In detail, in the second state, the operation control unit 114 transitions the second virtual object VO2 from inactive to active on the condition that the change per unit time of the line of sight V of the user U1 remains within a predetermined range for the duration of a predetermined time. As a result, the terminal device 10 can transition the second virtual object VO2 from inactive to active after the line of sight V of the user U1 has stopped wandering and has become almost stationary. The above "predetermined range" is an example of a "second range". The above "predetermined time" is an example of a "second time".
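The stationary-gaze condition can be sketched as a small dwell detector that tracks the change of the gaze direction per unit time; the class name, the angular-rate threshold standing in for the "second range", and the sampling interval used in the example are assumptions made for illustration.

```python
import math

class GazeDwellDetector:
    """Tracks whether the change per unit time of the line of sight V has stayed
    within a given range (the 'second range') for a given duration (the 'second time')."""

    def __init__(self, max_change_per_sec: float, required_seconds: float):
        self.max_change_per_sec = max_change_per_sec
        self.required_seconds = required_seconds
        self._last_dir = None
        self._last_time = None
        self._stable_since = None

    def update(self, gaze_dir, timestamp: float) -> bool:
        """Feed one gaze sample; return True once the gaze has been (almost)
        stationary for the required duration."""
        stable_long_enough = False
        if self._last_dir is not None:
            dt = timestamp - self._last_time
            # Angular change of the gaze direction, normalised per second.
            dot = sum(a * b for a, b in zip(gaze_dir, self._last_dir))
            norm = (math.sqrt(sum(a * a for a in gaze_dir))
                    * math.sqrt(sum(b * b for b in self._last_dir)))
            angle = math.acos(max(-1.0, min(1.0, dot / norm)))
            rate = angle / dt if dt > 0 else float("inf")
            if rate <= self.max_change_per_sec:
                if self._stable_since is None:
                    self._stable_since = self._last_time
                stable_long_enough = timestamp - self._stable_since >= self.required_seconds
            else:
                self._stable_since = None
        self._last_dir, self._last_time = gaze_dir, timestamp
        return stable_long_enough

# Example: 50 ms samples of an almost fixed gaze; VO2 would be activated once True is returned.
detector = GazeDwellDetector(max_change_per_sec=0.05, required_seconds=0.5)
for i in range(20):
    print(detector.update((0.001 * i, 1.0, 0.0), timestamp=0.05 * i), end=" ")
```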
 また、動作制御部114は、上記の第1状態から上記の第2状態に遷移した後に、第1仮想オブジェクトVO1をアクティブから非アクティブに遷移させる。この処理により、端末装置10は、ユーザU1の視線Vの位置に応じて、第1仮想オブジェクトVO1と第2仮想オブジェクトVO2のどちらの仮想オブジェクトVOをアクティブにするかを切り替えられる。また、端末装置10は、第1仮想オブジェクトVO1と第2仮想オブジェクトVO2とを排他的にアクティブにできる。これらの第1仮想オブジェクトVO1の状態遷移と、第2仮想オブジェクトVO2の状態遷移は、同時に実行されてもよく、一方が他方に先行して実行されてもよい。 Also, the operation control unit 114 transitions the first virtual object VO1 from active to inactive after transitioning from the first state to the second state. With this process, the terminal device 10 can switch which virtual object VO to activate, the first virtual object VO1 or the second virtual object VO2, according to the position of the line of sight V of the user U1. Also, the terminal device 10 can exclusively activate the first virtual object VO1 and the second virtual object VO2. The state transition of the first virtual object VO1 and the state transition of the second virtual object VO2 may be executed simultaneously, or one may precede the other.
1-1-4: Configuration of the Server
FIG. 8 is a block diagram showing a configuration example of the server 30. The server 30 includes a processing device 31, a storage device 32, a communication device 33, a display 34, and an input device 35. The elements of the server 30 are interconnected by one or more buses for communicating information.
 処理装置31は、サーバ30の全体を制御するプロセッサである。また、処理装置31は、例えば、単数又は複数のチップを用いて構成される。処理装置31は、例えば、周辺装置とのインタフェース、演算装置及びレジスタ等を含む中央処理装置(CPU)を用いて構成される。なお、処理装置31の機能の一部又は全部を、DSP、ASIC、PLD、及びFPGA等のハードウェアによって実現してもよい。処理装置31は、各種の処理を並列的又は逐次的に実行する。 The processing device 31 is a processor that controls the server 30 as a whole. Also, the processing device 31 is configured using, for example, a single chip or a plurality of chips. The processing unit 31 is configured using, for example, a central processing unit (CPU) including interfaces with peripheral devices, arithmetic units, registers, and the like. A part or all of the functions of the processing device 31 may be implemented by hardware such as DSP, ASIC, PLD, and FPGA. The processing device 31 executes various processes in parallel or sequentially.
 記憶装置32は、処理装置31が読取及び書込が可能な記録媒体である。また、記憶装置32は、処理装置31が実行する制御プログラムPR3を含む複数のプログラムを記憶する。 The storage device 32 is a recording medium readable and writable by the processing device 31 . The storage device 32 also stores a plurality of programs including the control program PR3 executed by the processing device 31 .
 通信装置33は、他の装置と通信を行うための、送受信デバイスとしてのハードウェアである。通信装置33は、例えば、ネットワークデバイス、ネットワークコントローラ、ネットワークカード、及び通信モジュール等とも呼ばれる。通信装置33は、有線接続用のコネクターを備え、上記コネクターに対応するインタフェース回路を備えていてもよい。また、通信装置33は、無線通信インタフェースを備えていてもよい。有線接続用のコネクター及びインタフェース回路としては有線LAN、IEEE1394、及びUSBに準拠した製品が挙げられる。また、無線通信インタフェースとしては無線LAN及びBluetooth(登録商標)等に準拠した製品が挙げられる。 The communication device 33 is hardware as a transmission/reception device for communicating with other devices. The communication device 33 is also called a network device, a network controller, a network card, a communication module, or the like, for example. The communication device 33 may include a connector for wired connection and an interface circuit corresponding to the connector. Further, the communication device 33 may have a wireless communication interface. Products conforming to wired LAN, IEEE1394, and USB are examples of connectors and interface circuits for wired connection. Also, as a wireless communication interface, there are products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
 ディスプレイ34は、画像及び文字情報を表示するデバイスである。ディスプレイ34は、処理装置31による制御のもとで各種の画像を表示する。例えば、液晶表示パネル及び有機EL表示パネル等の各種の表示パネルがディスプレイ34として好適に利用される。 The display 34 is a device that displays images and character information. The display 34 displays various images under the control of the processing device 31 . For example, various display panels such as a liquid crystal display panel and an organic EL display panel are preferably used as the display 34 .
 入力装置35は、情報処理システム1の管理者による操作を受け付ける機器である。例えば、入力装置35は、キーボード、タッチパッド、タッチパネル又はマウス等のポインティングデバイスを含んで構成される。ここで、入力装置35は、タッチパネルを含んで構成される場合、ディスプレイ34を兼ねてもよい。 The input device 35 is a device that accepts operations by the administrator of the information processing system 1 . For example, the input device 35 includes a pointing device such as a keyboard, touch pad, touch panel, or mouse. Here, when the input device 35 includes a touch panel, the input device 35 may also serve as the display 34 .
 処理装置31は、例えば、記憶装置32から制御プログラムPR3を読み出して実行することによって、取得部311、及び生成部312として機能する。 The processing device 31 functions as an acquisition unit 311 and a generation unit 312, for example, by reading the control program PR3 from the storage device 32 and executing it.
 取得部311は、通信装置33を用いて、端末装置10から各種のデータを取得する。当該データには、例として、MRグラス20を頭部に装着したユーザU1によって端末装置10に入力される、仮想オブジェクトVOに対する操作内容を示すデータが含まれる。 The acquisition unit 311 acquires various data from the terminal device 10 using the communication device 33 . The data includes, for example, data indicating the operation content for the virtual object VO, which is input to the terminal device 10 by the user U1 wearing the MR glasses 20 on the head.
 生成部312は、MRグラス20に表示される画像を示す第1画像情報を生成する。この第1画像情報は、通信装置33を介して端末装置10に送信される。端末装置10に備わる表示制御部112は、上記のように、通信装置13を介して受信した第1画像情報に基づいて、ユーザU1から見て、第1仮想オブジェクトVO1と第2仮想オブジェクトVO2とを仮想空間VS上の互いに重ならない位置に表示させる。 The generation unit 312 generates first image information indicating an image displayed on the MR glasses 20 . This first image information is transmitted to the terminal device 10 via the communication device 33 . As described above, the display control unit 112 provided in the terminal device 10 displays the first virtual object VO1 and the second virtual object VO2 as seen from the user U1 based on the first image information received via the communication device 13. are displayed at positions that do not overlap each other in the virtual space VS.
1-2: Operation of the First Embodiment
FIG. 9 is a flowchart showing the operation of the terminal device 10 according to the first embodiment. The operation of the terminal device 10 will be described below with reference to FIG. 9.
 ステップS1において、処理装置11は、表示制御部112として機能する。処理装置11は、仮想空間VSに第1仮想オブジェクトVO1を表示させる。 In step S<b>1 , the processing device 11 functions as the display control unit 112 . The processing device 11 displays the first virtual object VO1 in the virtual space VS.
 ステップS2において、処理装置11は、取得部111として機能する。処理装置11は、ユーザU1の視線Vを取得する。 In step S2, the processing device 11 functions as the acquisition unit 111. The processing device 11 acquires the line of sight V of the user U1.
 ステップS3において、処理装置11は、表示制御部112として機能する。処理装置11は、仮想空間VSにおいて、ユーザU1から見て、第1仮想オブジェクトVO1と重ならないように、第2仮想オブジェクトVO2を非アクティブの状態で表示させる。なお、遅くともステップS3の段階において、第1仮想オブジェクトVO1はアクティブとなっているものとする。 In step S3, the processing device 11 functions as the display control unit 112. The processing device 11 displays the second virtual object VO2 in an inactive state so that it does not overlap the first virtual object VO1 as seen from the user U1 in the virtual space VS. It is assumed that the first virtual object VO1 is active at the stage of step S3 at the latest.
In step S4, the processing device 11 functions as the determination unit 113. The processing device 11 determines whether the first state, in which the line of sight V of the user U1 intersects the first virtual object VO1 and the first virtual object VO1 is active, has transitioned to the second state, in which the line of sight V of the user U1 does not intersect the first virtual object VO1. If it is determined that the first state has transitioned to the second state, that is, if the determination result of step S4 is affirmative, the processing device 11 executes the process of step S5. If it is not determined that the first state has transitioned to the second state, that is, if the determination result of step S4 is negative, the processing device 11 executes the process of step S4 again.
 ステップS5において、処理装置11は、動作制御部114として機能する。処理装置11は、第2仮想オブジェクトVO2を非アクティブからアクティブに遷移させる。 In step S5, the processing device 11 functions as the operation control unit 114. The processing device 11 transitions the second virtual object VO2 from inactive to active.
 ステップS6において、処理装置11は、動作制御部114として機能する。処理装置11は、第1仮想オブジェクトVO1をアクティブから非アクティブに遷移させる。その後、処理装置11は、図9に示される全ての動作を終了する。 In step S6, the processing device 11 functions as the operation control unit 114. The processing device 11 transitions the first virtual object VO1 from active to inactive. After that, the processing device 11 ends all the operations shown in FIG.
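Taken together, steps S1 to S6 of FIG. 9 can be traced by a short loop over gaze samples, sketched below; the is_gaze_on_vo1 predicate and the representation of the gaze samples are placeholders, and the sketch only mirrors the order of the steps rather than the actual implementation of the processing device 11.

```python
def run_first_embodiment_flow(gaze_samples, is_gaze_on_vo1):
    """Minimal trace of steps S1 to S6 of FIG. 9.

    gaze_samples   -- an iterable of line-of-sight samples (any representation)
    is_gaze_on_vo1 -- a placeholder predicate reporting whether a sample still
                      intersects the first virtual object VO1 (e.g. the cylinder
                      test sketched earlier)"""
    vo1 = {"name": "VO1", "active": True}    # S1: VO1 is displayed (active by step S3 at the latest)
    vo2 = {"name": "VO2", "active": False}   # S3: VO2 is displayed inactive, not overlapping VO1
    for sample in gaze_samples:              # S2: acquire the line of sight V
        if not is_gaze_on_vo1(sample):       # S4: has the first state transitioned to the second state?
            vo2["active"] = True             # S5: VO2 transitions from inactive to active
            vo1["active"] = False            # S6: VO1 transitions from active to inactive
            break
    return vo1, vo2

# Example with dummy gaze samples: True means "still looking at VO1".
print(run_first_embodiment_flow([True, True, True, False], is_gaze_on_vo1=lambda s: s))
```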
1-3: Effects of the First Embodiment
According to the above description, the terminal device 10 as an information processing device includes the acquisition unit 111, the display control unit 112, and the operation control unit 114. The acquisition unit 111 acquires line-of-sight information indicating the line of sight V of the user U1. The display control unit 112 displays the first virtual object VO1 and the second virtual object VO2 at positions that do not overlap each other in the virtual space VS as seen from the user U1. The operation control unit 114 transitions the second virtual object VO2 from inactive to active after the first state, in which the line of sight V of the user U1 indicated by the line-of-sight information intersects the first virtual object VO1 and the first virtual object VO1 is active, transitions to the second state, in which the line of sight V of the user U1 does not intersect the first virtual object VO1.
 端末装置10は、上記の構成を用いることにより、先行して表示済みの第1仮想オブジェクトVO1と重ならないように、第2仮想オブジェクトVO2を新たに表示できる。また端末装置10は、ユーザU1の視線Vの方向に応じて、第1仮想オブジェクトVO1と第2仮想オブジェクトVO2のうち、どちらの仮想オブジェクトVOをアクティブにするかを切り替えることができる。 By using the above configuration, the terminal device 10 can newly display the second virtual object VO2 so as not to overlap the previously displayed first virtual object VO1. In addition, the terminal device 10 can switch which virtual object VO to activate between the first virtual object VO1 and the second virtual object VO2 according to the direction of the line of sight V of the user U1.
 また以上の説明によれば、動作制御部114は、第1状態から第2状態に遷移した後に、第1仮想オブジェクトVO1をアクティブから非アクティブに遷移させる。 Also, according to the above description, the operation control unit 114 transitions the first virtual object VO1 from active to inactive after transitioning from the first state to the second state.
By using the above configuration, the terminal device 10 can exclusively activate the first virtual object VO1 and the second virtual object VO2. For example, when the first virtual object VO1 is a first application and the second virtual object VO2 is a second application, the user U1 can stop the operation of the first application and cause the second application to operate simply by moving the line of sight V away from the first virtual object VO1. That is, the user U1 can switch which application operates merely by moving the line of sight V away from the first virtual object VO1, without inputting any instruction to the second virtual object VO2.
 また以上の説明によれば、表示制御部112は、第1状態において、仮想空間VSに第2仮想オブジェクトVO2を表示させることを開始する。 According to the above description, the display control unit 112 starts displaying the second virtual object VO2 in the virtual space VS in the first state.
 端末装置10は、上記の構成を用いることにより、第1状態において、第2仮想オブジェクトVO2を出現させられる。端末装置10は、例えば、第1仮想オブジェクトVO1である第1のアプリケーションの動作中に、第2仮想オブジェクトVO2である第2のアプリケーションを表示できる。 By using the above configuration, the terminal device 10 can cause the second virtual object VO2 to appear in the first state. For example, the terminal device 10 can display a second application, which is the second virtual object VO2, while the first application, which is the first virtual object VO1, is running.
Further, according to the above description, in the second state, the operation control unit 114 transitions the second virtual object VO2 from inactive to active on the condition that the change per unit time of the line of sight V of the user U1 remains within the second range for the duration of the second time.
By using the above configuration, the terminal device 10 can transition the second virtual object VO2 from inactive to active after the line of sight V of the user U1 has stopped wandering and has become almost stationary.
 また以上の説明によれば、表示制御部112は、第1仮想オブジェクトVO1から第1距離以内の領域に第2仮想オブジェクトVO2を表示させる。 Also, according to the above description, the display control unit 112 displays the second virtual object VO2 in the area within the first distance from the first virtual object VO1.
 端末装置10は、上記の構成を用いることにより、第2仮想オブジェクトVO2に対して、ユーザU1の注意を喚起できる度合いが高まる。 By using the above configuration, the terminal device 10 can increase the degree to which the user U1's attention can be drawn to the second virtual object VO2.
Further, according to the above description, the display control unit 112 displays the second virtual object VO2 outside the cylinder Y, which has the central axis L1 coinciding with the line of sight V of the user U1 and a base containing the first virtual object VO1, in the virtual space VS.
 端末装置10は、上記の構成を用いることにより、ユーザU1から見て、第1仮想オブジェクトVO1と第2仮想オブジェクトVO2とを仮想空間VS上の互いに重ならない位置に表示できる。 By using the above configuration, the terminal device 10 can display the first virtual object VO1 and the second virtual object VO2 at positions that do not overlap each other in the virtual space VS as seen from the user U1.
2:変形例
 本開示は、以上に例示した実施形態に限定されない。具体的な変形の態様を以下に例示する。以下の例示から任意に選択された2以上の態様を併合してもよい。
2: Modifications The present disclosure is not limited to the embodiments illustrated above. Specific modification modes are exemplified below. Two or more aspects arbitrarily selected from the following examples may be combined.
2-1: Modification 1
In the terminal device 10 according to the above embodiment, the display control unit 112 displays the second virtual object VO2 in the virtual space VS in the first state. However, the method of displaying the second virtual object VO2 is not limited to this method. For example, in the second state, the display control unit 112 may display the second virtual object VO2 at a position that intersects the line of sight V of the user U1 after the line of sight V of the user U1 has remained stationary for a predetermined time. In detail, in the second state, the display control unit 112 may display the second virtual object VO2 at a position intersecting the line of sight V of the user U1 on the condition that the change per unit time of the line of sight V of the user U1 remains within a predetermined range for the duration of a predetermined time. Since the second virtual object VO2 is displayed ahead of the line of sight V of the user U1, the user U1 does not need to search for the second virtual object VO2 in the virtual space VS. The above "predetermined time" is an example of a "first time". The above "predetermined range" is an example of a "first range".
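Under Modification 1, once the gaze has remained stationary for the first time, VO2 is simply placed on the current line of sight; a minimal sketch of that placement is shown below, with the display depth chosen arbitrarily for illustration.

```python
import math

def position_on_gaze(eye, gaze_dir, depth):
    """Return a point on the line of sight V at the given depth from the eye,
    i.e. a display position for VO2 that intersects the user's gaze."""
    length = math.sqrt(sum(g * g for g in gaze_dir))
    return tuple(e + depth * g / length for e, g in zip(eye, gaze_dir))

# Example: once the gaze has remained stationary for the first time, VO2 could be
# shown one metre ahead along the current line of sight.
print(position_on_gaze(eye=(0.0, 0.0, 0.0), gaze_dir=(0.0, 1.0, 0.0), depth=1.0))
```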
2-2: Modification 2
The terminal device 10 according to the above embodiment includes the display control unit 112 and the operation control unit 114. The display control unit 112 displays the first virtual object VO1 and the second virtual object VO2 at positions that do not overlap each other in the virtual space VS as seen from the user U1. The operation control unit 114 transitions the second virtual object VO2 from inactive to active after the first state, in which the line of sight V of the user U1 indicated by the line-of-sight information intersects the first virtual object VO1 and the first virtual object VO1 is active, transitions to the second state, in which the line of sight V of the user U1 does not intersect the first virtual object VO1. However, these operations may be performed by a device other than the terminal device 10. For example, the server 30 may include components similar to the display control unit 112 and the operation control unit 114. In particular, in a mode in which the communication speed between the terminal device 10 and the server 30 is higher and the amount of data transmitted from the server 30 to the terminal device 10 is larger than in a typical information processing system, it is preferable that the server 30 performs these operations. In Modification 2, the server 30 is an example of an information processing device.
2-3:変形例3
 上記の実施形態に係る端末装置10において、ユーザU1が第1仮想オブジェクトVO1を目視している間に、ユーザU1の視線Vが存在し得る領域として、円筒Yが想定されていた。しかし、当該領域の形状は円筒に限定されない。
2-3: Modification 3
In the terminal device 10 according to the above-described embodiment, the cylinder Y was assumed as the region where the line of sight V of the user U1 may exist while the user U1 is viewing the first virtual object VO1. However, the shape of the area is not limited to a cylinder.
FIG. 10 is an explanatory diagram of an operation example of the display control unit 112 according to Modification 3. As shown in FIG. 10, the shape of the above region may be, for example, a cone N. The cone N has a central axis L2 whose first end point C1 is the position (x0, y0, z0) of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1 and whose second end point C2 is the position (x1, y1, z1) of the first virtual object VO1. The first end point C1 is the apex of the cone N. Furthermore, the radius R2 of the base of the cone N is a length that allows the base to contain the first virtual object VO1. Note that in FIG. 10, for convenience of explanation, the position of the pupil EL of the left eyeball or the pupil ER of the right eyeball of the user U1 does not coincide with the position of the first end point C1. Similarly, in FIG. 10, the position (x1, y1, z1) of the first virtual object VO1 does not coincide with the position of the second end point C2. In practice, however, the positions in each of these pairs coincide.
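With the cone N, the containment check reduces to comparing the angle between the line of sight V and the central axis L2 against the half-angle of the cone at the apex C1; the sketch below assumes, as the positions coincide in practice, that the gaze ray starts at the apex.

```python
import math

def gaze_inside_cone(eye, gaze_dir, vo1_pos, r2):
    """Return True when the line of sight V, starting at the apex C1 (the eye
    position), lies inside the cone N whose axis L2 runs from the eye to VO1
    and whose base (at VO1's depth) has radius R2."""
    axis = tuple(v - e for v, e in zip(vo1_pos, eye))
    axis_len = math.sqrt(sum(a * a for a in axis))
    half_angle = math.atan2(r2, axis_len)          # half-angle of the cone at its apex
    dot = sum(a * g for a, g in zip(axis, gaze_dir))
    g_len = math.sqrt(sum(g * g for g in gaze_dir))
    angle = math.acos(max(-1.0, min(1.0, dot / (axis_len * g_len))))
    return angle <= half_angle

eye = (0.0, 0.0, 0.0)
vo1 = (0.0, 2.0, 0.0)
print(gaze_inside_cone(eye, (0.05, 1.0, 0.0), vo1, r2=0.3))   # True: gaze stays inside N
print(gaze_inside_cone(eye, (0.5, 1.0, 0.0), vo1, r2=0.3))    # False: gaze has left N
```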
2-4:変形例4
 上記の実施形態に係る情報処理システム1において、端末装置10とMRグラス20とは別体として実現されている。しかし、本発明の実施形態における、端末装置10とMRグラス20の実現方法は、これには限定されない。例えば、MRグラス20が、端末装置10と同一の機能を備えることにより、端末装置10とMRグラス20とが単一の筐体内において実現されてもよい。
2-4: Modification 4
In the information processing system 1 according to the above-described embodiment, the terminal device 10 and the MR glasses 20 are implemented separately. However, the method of realizing the terminal device 10 and the MR glasses 20 in the embodiment of the present invention is not limited to this. For example, the terminal device 10 and the MR glasses 20 may be realized within a single housing by providing the MR glasses 20 with the same functions as the terminal device 10 .
2-5: Modification 5
The information processing system 1 according to the above embodiment includes the MR glasses 20. However, instead of the MR glasses 20, the information processing system 1 may include any one of an HMD (Head Mounted Display) employing VR (Virtual Reality) technology, an HMD employing AR (Augmented Reality) technology, and AR glasses employing AR technology. Alternatively, instead of the MR glasses 20, the information processing system 1 may include either an ordinary smartphone or a tablet equipped with an imaging device. These HMDs, AR glasses, smartphones, and tablets are examples of display devices.
2-6:変形例6
 上記の実施形態に係る情報処理システム1において、サーバ30に備わる生成部312が、MRグラス20に表示される画像を示す第1画像情報を生成していた。しかし、第1画像情報を生成する装置は、サーバ30に限定されない。例えば、端末装置10が、上記の第1画像情報を生成してもよい。この場合、情報処理システム1は、サーバ30を必須の構成要素としなくてもよい。
2-6: Modification 6
In the information processing system 1 according to the above embodiment, the generation unit 312 provided in the server 30 generates the first image information indicating the image displayed on the MR glasses 20 . However, the device that generates the first image information is not limited to the server 30 . For example, the terminal device 10 may generate the above first image information. In this case, the information processing system 1 does not have to include the server 30 as an essential component.
3: Others
(1) In the above-described embodiments, the storage device 12, the storage device 22, and the storage device 32 have been exemplified by a ROM, a RAM, and the like, but each may be a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory device (for example, a card, a stick, or a key drive), a CD-ROM (Compact Disc-ROM), a register, a removable disk, a hard disk, a floppy (registered trademark) disk, a magnetic strip, a database, a server, or another suitable storage medium. The program may also be transmitted from a network via an electric communication line. The program may also be transmitted from the communication network NET via an electric communication line.
(2) In the above-described embodiments, the information, signals, and the like described may be represented using any of a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
(3) In the above-described embodiments, input and output information and the like may be stored in a specific location (for example, a memory) or may be managed using a management table. The input and output information and the like may be overwritten, updated, or appended. The output information and the like may be deleted. The input information and the like may be transmitted to another device.
(4) In the above-described embodiments, the determination may be made based on a value represented by one bit (0 or 1), may be made based on a Boolean value (true or false), or may be made based on a numerical comparison (for example, a comparison with a predetermined value).
(5) The order of the processing procedures, sequences, flowcharts, and the like exemplified in the above embodiments may be changed as long as no contradiction arises. For example, the methods described in the present disclosure present elements of the various steps in an exemplary order and are not limited to the specific order presented.
(6) Each function illustrated in FIGS. 1 to 10 is implemented by any combination of at least one of hardware and software. The method of implementing each functional block is not particularly limited. That is, each functional block may be implemented using one device that is physically or logically coupled, or may be implemented using two or more devices that are physically or logically separated and connected directly or indirectly (for example, by wire or wirelessly). A functional block may also be implemented by combining software with the one device or the plurality of devices.
(7) The programs exemplified in the above-described embodiments, whether referred to as software, firmware, middleware, microcode, hardware description language, or by another name, should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, and the like.
 Software, instructions, information, and the like may also be transmitted and received via a transmission medium. For example, when software is transmitted from a website, a server, or another remote source using at least one of a wired technology (coaxial cable, optical fiber cable, twisted pair, digital subscriber line (DSL), or the like) and a wireless technology (infrared, microwave, or the like), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
(8) In each of the above-described aspects, the terms "system" and "network" are used interchangeably.
(9) The information, parameters, and the like described in the present disclosure may be expressed using absolute values, may be expressed using relative values from a predetermined value, or may be expressed using other corresponding information.
(10) In the above-described embodiments, the terminal device 10 and the server 30 may each be a mobile station (MS). A mobile station may also be referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term. In the present disclosure, the terms "mobile station", "user terminal", "user equipment (UE)", "terminal", and the like may be used interchangeably.
(11) In the above-described embodiments, the terms "connected" and "coupled", and any variations thereof, mean any direct or indirect connection or coupling between two or more elements, and include the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to each other. The coupling or connection between elements may be physical, logical, or a combination thereof. For example, "connection" may be read as "access". As used in the present disclosure, two elements can be considered to be "connected" or "coupled" to each other using at least one of one or more electrical wires, cables, and printed electrical connections, and, as some non-limiting and non-exhaustive examples, using electromagnetic energy having wavelengths in the radio frequency domain, the microwave domain, and the optical (both visible and invisible) domain.
(12) In the above-described embodiments, the phrase "based on" does not mean "based only on" unless otherwise specified. In other words, the phrase "based on" means both "based only on" and "based at least on".
(13) The terms "judging" and "determining" as used in the present disclosure may encompass a wide variety of actions. "Judging" and "determining" may include, for example, regarding the act of judging, calculating, computing, processing, deriving, investigating, looking up (searching, inquiring) (for example, looking up in a table, a database, or another data structure), or ascertaining as having "judged" or "determined". "Judging" and "determining" may also include regarding the act of receiving (for example, receiving information), transmitting (for example, transmitting information), inputting, outputting, or accessing (for example, accessing data in a memory) as having "judged" or "determined". "Judging" and "determining" may further include regarding the act of resolving, selecting, choosing, establishing, comparing, or the like as having "judged" or "determined". In other words, "judging" and "determining" may include regarding some action as having been "judged" or "determined". "Judging (determining)" may also be read as "assuming", "expecting", "considering", and the like.
(14) In the above-described embodiments, where "include", "including", and variations thereof are used, these terms, like the term "comprising", are intended to be inclusive. Furthermore, the term "or" as used in the present disclosure is not intended to be an exclusive OR.
(15) In the present disclosure, where articles such as a, an, and the in English are added by translation, the present disclosure may include the case where the nouns following these articles are plural.
(16) In the present disclosure, the phrase "A and B are different" may mean "A and B are different from each other". The phrase may also mean "A and B are each different from C". Terms such as "separated" and "coupled" may be interpreted in the same manner as "different".
(17) Each aspect/embodiment described in the present disclosure may be used alone, may be used in combination, or may be switched and used in accordance with execution. Notification of predetermined information (for example, notification of "being X") is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the predetermined information).
 Although the present disclosure has been described in detail above, it is apparent to those skilled in the art that the present disclosure is not limited to the embodiments described in the present disclosure. The present disclosure can be implemented with modifications and variations without departing from the spirit and scope of the present disclosure as defined by the claims. Accordingly, the description of the present disclosure is for illustrative purposes and has no restrictive meaning with respect to the present disclosure.
REFERENCE SIGNS LIST: 1 ... information processing system, 10 ... terminal device, 11 ... processing device, 12 ... storage device, 13 ... communication device, 14 ... display, 15 ... input device, 16 ... inertial sensor, 20 ... MR glasses, 21 ... processing device, 22 ... storage device, 23 ... line-of-sight detection device, 24 ... GPS device, 25 ... motion detection device, 26 ... imaging device, 27 ... communication device, 28 ... display, 30 ... server, 31 ... processing device, 32 ... storage device, 33 ... communication device, 34 ... display, 35 ... input device, 41L, 41R ... lens, 91, 92 ... temple, 93 ... bridge, 94, 95 ... frame, 111 ... acquisition unit, 112 ... display control unit, 113 ... determination unit, 114 ... operation control unit, 211 ... acquisition unit, 212 ... display control unit, 311 ... acquisition unit, 312 ... generation unit, C1, C2 ... end point, L1, L2 ... central axis, PR1, PR2, PR3 ... control program, R1, R2 ... radius, U1, U2, U3 ... user, VO ... virtual object, VO1 ... first virtual object, VO2 ... second virtual object

Claims (7)

  1.  An information processing device comprising:
     an acquisition unit configured to acquire line-of-sight information indicating a line of sight of a user;
     a display control unit configured to display a first virtual object and a second virtual object at positions that do not overlap each other in a virtual space as seen from the user; and
     an operation control unit configured to transition the second virtual object from inactive to active after a first state, in which the line of sight of the user indicated by the line-of-sight information intersects the first virtual object and the first virtual object is active, transitions to a second state in which the line of sight of the user does not intersect the first virtual object.
  2.  The information processing device according to claim 1, wherein the operation control unit transitions the first virtual object from active to inactive after the first state transitions to the second state.
  3.  The information processing device according to claim 1 or 2, wherein the display control unit starts displaying the second virtual object in the virtual space in the first state.
  4.  The information processing device according to claim 1 or 2, wherein, in the second state, the display control unit starts displaying the second virtual object at a position intersecting the line of sight of the user on condition that a change per unit time in the line of sight of the user has remained within a first range for a first time.
  5.  The information processing device according to any one of claims 1 to 3, wherein, in the second state, the operation control unit transitions the second virtual object from inactive to active on condition that a change per unit time in the line of sight of the user has remained within a second range for a second time.
  6.  The information processing device according to any one of claims 1 to 5, wherein the display control unit displays the second virtual object in a region within a first distance from the first virtual object.
  7.  The information processing device according to any one of claims 1 to 6, wherein, in the virtual space, the display control unit displays the second virtual object outside a cylinder or a cone having a central axis coinciding with the line of sight of the user and a base containing the first virtual object.
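 As a supplementary illustration of the operation control recited in claims 1, 2, and 5 above, the following is a minimal Python sketch of the claimed activation logic, assuming that the line-of-sight intersection test and the gaze-change measurement are supplied by other components; the class, method, and parameter names are illustrative assumptions and do not appear in the specification.

```python
import time

class OperationControl:
    """Sketch of the claimed activation logic (names are illustrative)."""

    def __init__(self, gaze_range, dwell_time):
        self.gaze_range = gaze_range   # "second range" of claim 5 (unit assumed, e.g. rad/s)
        self.dwell_time = dwell_time   # "second time" of claim 5, in seconds
        self.vo1_active = True         # first state: first virtual object VO1 is active
        self.vo2_active = False        # second virtual object VO2 starts inactive
        self._stable_since = None

    def update(self, gaze_hits_vo1, gaze_change_per_s, now=None):
        """Call periodically with the latest line-of-sight measurements."""
        now = time.monotonic() if now is None else now
        if gaze_hits_vo1:
            # First state: the line of sight intersects VO1; nothing changes.
            self._stable_since = None
            return
        # Second state: the line of sight no longer intersects VO1 (claim 1).
        self.vo1_active = False        # claim 2: VO1 transitions from active to inactive
        if gaze_change_per_s <= self.gaze_range:
            if self._stable_since is None:
                self._stable_since = now
            elif now - self._stable_since >= self.dwell_time:
                self.vo2_active = True  # claim 5: activate VO2 once the gaze stays stable
        else:
            self._stable_since = None   # gaze moved too much; restart the dwell timer
```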
PCT/JP2022/045377 2021-12-16 2022-12-08 Information processing device WO2023112838A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021204072 2021-12-16
JP2021-204072 2021-12-16

Publications (1)

Publication Number Publication Date
WO2023112838A1 true WO2023112838A1 (en) 2023-06-22

Family

ID=86774647

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/045377 WO2023112838A1 (en) 2021-12-16 2022-12-08 Information processing device

Country Status (1)

Country Link
WO (1) WO2023112838A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017126009A (en) * 2016-01-15 2017-07-20 キヤノン株式会社 Display control device, display control method, and program
JP2018088118A (en) * 2016-11-29 2018-06-07 パイオニア株式会社 Display control device, control method, program and storage media
WO2019181488A1 (en) * 2018-03-20 2019-09-26 ソニー株式会社 Information processing device, information processing method, and program

Similar Documents

Publication Publication Date Title
US20210405761A1 (en) Augmented reality experiences with object manipulation
KR102349716B1 (en) Method for sharing images and electronic device performing thereof
EP3172644B1 (en) Multi-user gaze projection using head mounted display devices
JP6717773B2 (en) Method and apparatus for modifying presentation of information based on visual complexity of environmental information
AU2014281726B2 (en) Virtual object orientation and visualization
Billinghurst Grand challenges for augmented reality
US20180150997A1 (en) Interaction between a touch-sensitive device and a mixed-reality device
US20150193977A1 (en) Self-Describing Three-Dimensional (3D) Object Recognition and Control Descriptors for Augmented Reality Interfaces
US20180158243A1 (en) Collaborative manipulation of objects in virtual reality
KR20190083464A (en) Electronic device controlling image display based on scroll input and method thereof
US20230252739A1 (en) Triggering a collaborative augmented reality environment using an ultrasound signal
WO2023112838A1 (en) Information processing device
US20230161959A1 (en) Ring motion capture and message composition system
US11893989B2 (en) Voice-controlled settings and navigation
WO2023149256A1 (en) Display control device
WO2022125384A1 (en) Gaze dependent ocular mode controller for mixed reality
WO2023145890A1 (en) Terminal device
WO2023149255A1 (en) Display control device
WO2023145265A1 (en) Message transmitting device and message receiving device
WO2023145273A1 (en) Display control device
WO2023162499A1 (en) Display control device
WO2023145892A1 (en) Display control device, and server
WO2023149498A1 (en) Display control device
WO2023079875A1 (en) Information processing device
WO2023176317A1 (en) Display control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22907365

Country of ref document: EP

Kind code of ref document: A1