WO2023145892A1 - Display control device, and server - Google Patents

Display control device, and server

Info

Publication number
WO2023145892A1
Authority
WO
WIPO (PCT)
Prior art keywords
display control
messages
display
user
virtual object
Prior art date
Application number
PCT/JP2023/002690
Other languages
French (fr)
Japanese (ja)
Inventor
智仁 山崎 (Tomohito Yamazaki)
進 関野 (Susumu Sekino)
Original Assignee
NTT DOCOMO, INC. (株式会社NTTドコモ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NTT DOCOMO, INC. (株式会社NTTドコモ)
Publication of WO2023145892A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36: Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/37: Details of the operation on graphic patterns
    • G09G 5/373: Details of the operation on graphic patterns for modifying the size of the graphic pattern
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36: Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/38: Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, with means for controlling the display position

Definitions

  • the present invention relates to a display control device and a server. More particularly, the present invention relates to a display control device and a server that cause a display device to display a virtual object corresponding to a message.
  • With XR technology, including VR (Virtual Reality) technology and AR (Augmented Reality) technology, a message indicated by a virtual object can be displayed in the virtual space shown on XR glasses that the user wears on the head.
  • Patent Literature 1 discloses a method and apparatus for an interactive virtual environment for communication. Specifically, Patent Literature 1 discloses a technique of displaying a virtual object representing a "doodle message" in a virtual space in which information can be exchanged between users.
  • However, the virtual space displayed on the XR glasses worn on the user's head may become filled with a plurality of virtual objects. With the conventional technology, the user's view is then obstructed and convenience is lowered.
  • A display control device according to the present invention is a display control device for displaying a virtual space including a virtual object on a display device worn on the head of a user, the display control device comprising: an acquisition unit that acquires a plurality of messages; a generation unit that generates a plurality of individual objects corresponding to the plurality of messages on a one-to-one basis; and a display control unit that causes the display device to display a virtual object that is a collection of the plurality of individual objects.
  • When the number of the plurality of messages is a first number, the display control unit sets the size of the virtual object to a first size and sets the distance from the center of the virtual space to the center of the virtual object in the virtual space to a first distance. When the number of the plurality of messages is a second number larger than the first number, the display control unit sets the size of the virtual object to a second size larger than the first size and sets the distance from the center of the virtual space to the center of the virtual object in the virtual space to a second distance longer than the first distance.
  • According to the present invention, when virtual objects corresponding to the number of messages are displayed in the virtual space, deterioration of the user's convenience can be suppressed even if the number of messages increases.
  • FIG. 1 is a diagram showing the overall configuration of an information processing system 1 according to the first embodiment.
  • FIG. 2 is a perspective view showing the appearance of XR glasses 20 according to the first embodiment.
  • FIG. 3 is a block diagram showing a configuration example of the XR glasses 20 according to the first embodiment.
  • FIG. 4 is a block diagram showing a configuration example of a terminal device 10 according to the first embodiment.
  • FIG. 5 is an explanatory diagram showing an example of operations of a generation unit 113 and a display control unit 114.
  • FIG. 6 is an explanatory diagram showing an example of operations of the generation unit 113 and the display control unit 114.
  • FIG. 7 is a diagram showing an example of how a plurality of individual objects SO1 to SO10 are aligned.
  • FIG. 8 is a block diagram showing a configuration example of a server 30.
  • FIG. 9 is a table showing an example of a message database MD.
  • FIG. 10 is a flowchart showing the operation of the terminal device 10 according to the first embodiment.
  • FIG. 11 is a block diagram showing a configuration example of a terminal device 10A.
  • FIG. 12 is an explanatory diagram of an operation example of a display control unit 114A and a reception unit 115.
  • FIG. 13 is an explanatory diagram of an operation example of the display control unit 114A and the reception unit 115.
  • FIG. 14 is a flowchart showing the operation of the terminal device 10A according to the second embodiment.
  • FIG. 15 is a block diagram showing a configuration example of a terminal device 10B.
  • FIG. 16 is a block diagram showing a configuration example of a server 30A.
  • FIG. 17 is a table showing an example of a location information database LD.
  • FIG. 18 is an explanatory diagram showing an example of operations of a determination unit 314, an extraction unit 315, and an output unit 312A.
  • FIG. 19 is a flowchart showing the operation of the server 30A.
  • FIG. 20 is an explanatory diagram of a virtual object VO9 generated when the terminal device 10B and the server 30A are combined.
  • 1: First Embodiment
  • A configuration of an information processing system 1 including a terminal device 10 as a display control device according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 10.
  • FIG. 1 is a diagram showing the overall configuration of an information processing system 1 according to the first embodiment of the present invention.
  • the information processing system 1 is a system that uses XR technology to provide a virtual space to a user U1 wearing XR glasses 20, which will be described later.
  • the information processing system 1 includes a terminal device 10, XR glasses 20, and a server 30.
  • the terminal device 10 is an example of a display control device.
  • the terminal device 10 and the server 30 are communicably connected to each other via a communication network NET.
  • the terminal device 10 and the XR glasses 20 are connected so as to be able to communicate with each other.
  • To distinguish individual devices, the suffix "-X" is appended to the reference numerals, where X is an arbitrary integer of 1 or more.
  • The terminal device 10-1 and the XR glasses 20 are connected so as to be able to communicate with each other.
  • Although two terminal devices 10 and one pair of XR glasses 20 are shown in FIG. 1, these numbers are merely examples; the information processing system 1 may include any number of terminal devices 10 and XR glasses 20.
  • the user U1 uses a set of the terminal device 10-1 and the XR glasses 20.
  • the XR glasses 20 display a plurality of individual objects, which will be described later, corresponding to messages addressed to the user U1.
  • the message may include a message transmitted from the terminal device 10-1 to the terminal device 10-2.
  • the message may also include a message sent from another terminal device (not shown in FIG. 1) to the terminal device 10-1.
  • the message may be a message generated by the terminal device 10-1 itself.
  • the server 30 provides various data and cloud services to the terminal device 10 via the communication network NET.
  • the terminal device 10-1 displays virtual objects placed in the virtual space on the XR glasses 20 worn by the user on the head.
  • The virtual space is, for example, a celestial-sphere space.
  • the virtual objects are, for example, virtual objects representing data such as still images, moving images, 3DCG models, HTML files, and text files, and virtual objects representing applications. Examples of text files include memos, source codes, diaries, and recipes. Examples of applications include browsers, applications for using SNS, and applications for generating document files.
  • The terminal device 10 is preferably a mobile terminal device such as a smartphone or a tablet.
  • the terminal device 10-1 is an example of a display control device.
  • the terminal device 10-2 is a device for user U2 to send a message to user U1.
  • the terminal device 10-2 may display a virtual object placed in the virtual space on the display 14 described later or XR glasses (not shown) connected to the terminal device 10-2.
  • the configuration of the terminal device 10-2 is basically the same as that of the terminal device 10-1.
  • The terminal device 10-2 is preferably a mobile terminal device such as a smartphone or a tablet.
  • the XR glasses 20 are a see-through wearable display worn on the head of user U1.
  • the XR glasses 20 display a virtual object on the display panel provided for each of the binocular lenses under the control of the terminal device 10-1.
  • The XR glasses 20 are an example of a display device.
  • A case in which the XR glasses 20 are AR glasses will be described below. However, this is merely an example; the XR glasses 20 may be VR glasses or MR (Mixed Reality) glasses.
  • FIG. 2 is a perspective view showing the appearance of the XR glasses 20. As shown in FIG. 2, the XR glasses 20 have temples 91 and 92, a bridge 93, frames 94 and 95, and lenses 41L and 41R, like general eyeglasses.
  • An imaging device 26 is provided on the bridge 93 .
  • the imaging device 26 images the outside world.
  • the imaging device 26 also outputs imaging information indicating the captured image.
  • Each of the lenses 41L and 41R has a half mirror.
  • a frame 94 is provided with a liquid crystal panel or an organic EL panel for the left eye.
  • a liquid crystal panel or an organic EL panel is hereinafter generically referred to as a display panel.
  • the frame 94 is provided with an optical member that guides the light emitted from the display panel for the left eye to the lens 41L.
  • the half mirror provided in the lens 41L transmits external light and guides it to the left eye, and reflects the light guided by the optical member to enter the left eye.
  • the frame 95 is provided with a right-eye display panel and an optical member that guides light emitted from the right-eye display panel to the lens 41R.
  • the half mirror provided in the lens 41R transmits external light and guides it to the right eye, and reflects the light guided by the optical member to enter the right eye.
  • The display 28, which will be described later, includes the lens 41L, the left-eye display panel, and the left-eye optical member, as well as the lens 41R, the right-eye display panel, and the right-eye optical member.
  • The user U1 can observe the image displayed by the display panel in a see-through state in which the image is superimposed on the appearance of the outside world. Further, in the XR glasses 20, of the binocular images with parallax, the image for the left eye is displayed on the display panel for the left eye, and the image for the right eye is displayed on the display panel for the right eye. Therefore, the user U1 can perceive the displayed image as if it had depth and a stereoscopic effect.
  • FIG. 3 is a block diagram showing a configuration example of the XR glasses 20.
  • the XR glasses 20 include a processing device 21 , a storage device 22 , a line-of-sight detection device 23 , a GPS device 24 , a motion detection device 25 , an imaging device 26 , a communication device 27 and a display 28 .
  • Each element of the XR glasses 20 is interconnected by one or more buses for communicating information.
  • the term "apparatus" in this specification may be replaced with another term such as a circuit, a device, or a unit.
  • the processing device 21 is a processor that controls the XR glasses 20 as a whole.
  • the processing device 21 is configured using, for example, one or more chips.
  • The processing device 21 is configured using, for example, a central processing unit (CPU) including an interface with peripheral devices, an arithmetic device, registers, and the like. Some or all of the functions of the processing device 21 may be realized by hardware such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array).
  • the processing device 21 executes various processes in parallel or sequentially.
  • the storage device 22 is a recording medium that can be read and written by the processing device 21 .
  • the storage device 22 also stores a plurality of programs including the control program PR1 executed by the processing device 21 .
  • the line-of-sight detection device 23 detects the line-of-sight of the user U1 and generates line-of-sight information indicating the detection result. Any method may be used to detect the line of sight by the line of sight detection device 23 .
  • the line-of-sight detection device 23 may detect line-of-sight information based on, for example, the position of the inner corner of the eye and the position of the iris.
  • the line-of-sight information indicates the line-of-sight direction of the user U1.
  • the line-of-sight detection device 23 supplies the line-of-sight information to the processing device 21, which will be described later.
  • the line-of-sight information supplied to the processing device 21 is transmitted to the terminal device 10 via the communication device 27 .
  • the GPS device 24 receives radio waves from multiple satellites.
  • the GPS device 24 also generates position information from the received radio waves.
  • the positional information indicates the position of the XR glasses 20 .
  • the location information may be in any format as long as the location can be specified.
  • the position information indicates the latitude and longitude of the XR glasses 20, for example.
  • Although the location information is obtained from the GPS device 24 in this example, the XR glasses 20 may acquire position information by any method.
  • the acquired position information is supplied to the processing device 21 .
  • the position information output to the processing device 21 is transmitted to the terminal device 10 via the communication device 27 .
  • the motion detection device 25 detects motion of the XR glasses 20 .
  • The motion detection device 25 is, for example, an inertial sensor such as an acceleration sensor that detects acceleration or a gyro sensor that detects angular acceleration.
  • the acceleration sensor detects acceleration in orthogonal X-, Y-, and Z-axes.
  • the gyro sensor detects angular acceleration around the X-, Y-, and Z-axes.
  • the motion detection device 25 can generate posture information indicating the posture of the XR glasses 20 based on the output information of the gyro sensor.
  • the motion information includes acceleration data indicating three-axis acceleration and angular acceleration data indicating three-axis angular acceleration.
  • the motion detection device 25 supplies posture information indicating the posture of the XR glasses 20 and motion information related to the motion of the XR glasses 20 to the processing device 21 .
  • the posture information and motion information supplied to the processing device 21 are transmitted to the terminal device 10 via the communication device 27 .
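  • As a minimal illustration (an assumption added here, not taken from the patent), posture information can be derived by integrating the gyro output over time. The sketch below assumes an angular-rate reading for simplicity; a production implementation would typically use quaternions and fuse accelerometer data to limit drift.

```python
import numpy as np

def update_posture(posture_rad, angular_rate_rad_s, dt_s):
    """Integrate a 3-axis angular rate into roll/pitch/yaw posture angles.

    Simplified Euler integration; the function name and representation
    are illustrative, not the XR glasses' actual interface.
    """
    return posture_rad + angular_rate_rad_s * dt_s

posture = np.zeros(3)                       # roll, pitch, yaw in radians
gyro_sample = np.array([0.00, 0.01, 0.02])  # one gyro reading in rad/s
posture = update_posture(posture, gyro_sample, dt_s=0.01)  # 100 Hz sampling
print(posture)
```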
  • the imaging device 26 outputs imaging information obtained by imaging the outside world.
  • the imaging device 26 includes, for example, a lens, an imaging element, an amplifier, and an AD converter.
  • The light condensed through the lens is converted by the imaging element into an imaging signal, which is an analog signal.
  • the amplifier amplifies the imaging signal and outputs it to the AD converter.
  • the AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal.
  • the converted imaging information is supplied to the processing device 21 .
  • the imaging information supplied to the processing device 21 is transmitted to the terminal device 10 via the communication device 27 .
  • the communication device 27 is hardware as a transmission/reception device for communicating with other devices.
  • the communication device 27 is also called a network device, a network controller, a network card, a communication module, etc., for example.
  • the communication device 27 may include a connector for wired connection and an interface circuit corresponding to the connector. Further, the communication device 27 may have a wireless communication interface. Products conforming to wired LAN, IEEE1394, and USB are examples of connectors and interface circuits for wired connection. Also, as a wireless communication interface, there are products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
  • the display 28 is a device that displays images.
  • the display 28 displays various images under the control of the processing device 21 .
  • the display 28 includes the lens 41L, the left-eye display panel, the left-eye optical member, and the lens 41R, the right-eye display panel, and the right-eye optical member, as described above.
  • Various display panels such as a liquid crystal display panel and an organic EL display panel are preferably used as the display panel.
  • the processing device 21 functions as an acquisition unit 211 and a display control unit 212, for example, by reading the control program PR1 from the storage device 22 and executing it.
  • the acquisition unit 211 acquires image information indicating an image displayed on the XR glasses 20 from the terminal device 10-1.
  • The acquisition unit 211 also acquires the line-of-sight information supplied from the line-of-sight detection device 23, the position information supplied from the GPS device 24, the posture information and motion information supplied from the motion detection device 25, and the imaging information supplied from the imaging device 26. The acquisition unit 211 then supplies the acquired line-of-sight information, position information, posture information, motion information, and imaging information to the communication device 27.
  • Based on the image information acquired from the terminal device 10-1 by the acquisition unit 211, the display control unit 212 causes the display 28 to display the image indicated by the image information.
  • FIG. 4 is a block diagram showing a configuration example of the terminal device 10.
  • the terminal device 10 includes a processing device 11 , a storage device 12 , a communication device 13 , a display 14 , an input device 15 and an inertial sensor 16 . Elements of the terminal device 10 are interconnected by one or more buses for communicating information. As the configuration of the terminal device 10, the configuration of the terminal device 10-1 will be basically described below.
  • The processing device 11 is a processor that controls the terminal device 10 as a whole. It is configured using, for example, one or more chips, for example a central processing unit (CPU) including interfaces with peripheral devices, arithmetic units, registers, and the like. Some or all of the functions of the processing device 11 may be implemented by hardware such as a DSP, an ASIC, a PLD, or an FPGA. The processing device 11 executes various processes in parallel or sequentially.
  • the storage device 12 is a recording medium readable and writable by the processing device 11 .
  • the storage device 12 also stores a plurality of programs including the control program PR2 executed by the processing device 11 .
  • the communication device 13 is hardware as a transmission/reception device for communicating with other devices.
  • the communication device 13 is also called a network device, a network controller, a network card, a communication module, or the like, for example.
  • the communication device 13 may include a connector for wired connection and an interface circuit corresponding to the connector. Further, the communication device 13 may have a wireless communication interface. Products conforming to wired LAN, IEEE1394, and USB are examples of connectors and interface circuits for wired connection. Also, as a wireless communication interface, there are products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
  • the display 14 is a device that displays images and character information.
  • the display 14 displays various images under the control of the processing device 11 .
  • various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are preferably used as the display 14 .
  • the display 14 may not be an essential component. In this case, the XR glasses 20 further have the same function as the display 14 .
  • the input device 15 accepts operations from the user U1 who wears the XR glasses 20 on his head.
  • The input device 15 includes, for example, a keyboard, a touch pad, a touch panel, and a pointing device such as a mouse.
  • the input device 15 may also serve as the display 14 .
  • the inertial sensor 16 is a sensor that detects inertial force.
  • the inertial sensor 16 includes, for example, one or more of an acceleration sensor, an angular velocity sensor, and a gyro sensor.
  • the processing device 11 detects the orientation of the terminal device 10 based on the output information from the inertial sensor 16 . Further, the processing device 11 receives selection of the virtual object VO, input of characters, and input of instructions in the celestial sphere virtual space VS based on the orientation of the terminal device 10 .
  • the user U1 directs the central axis of the terminal device 10 toward a predetermined area of the virtual space VS, and operates the input device 15 to select the virtual object VO arranged in the predetermined area.
  • the user U1's operation on the input device 15 is, for example, a double tap. By operating the terminal device 10 in this way, the user U1 can select the virtual object VO without looking at the input device 15 of the terminal device 10 .
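  • As a sketch of how such orientation-based selection could work (the data structures and the angular threshold are assumptions, not the patent's implementation), the object closest to the pointing ray can be chosen by angular distance:

```python
import numpy as np

def select_object(device_direction, objects, max_angle_rad=0.1):
    """Return the name of the object nearest the pointing ray, or None.

    `objects` maps object names to positions relative to the user;
    an object is selectable only within `max_angle_rad` of the ray.
    """
    d = device_direction / np.linalg.norm(device_direction)
    best, best_angle = None, max_angle_rad
    for name, position in objects.items():
        p = position / np.linalg.norm(position)
        angle = np.arccos(np.clip(np.dot(d, p), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

objects = {"VO1": np.array([2.0, 0.1, 0.0]), "VO2": np.array([0.0, 3.0, 0.0])}
print(select_object(np.array([1.0, 0.0, 0.0]), objects))  # -> VO1
```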
  • the terminal device 10 preferably has a GPS device similar to the GPS device 24 provided in the XR glasses 20.
  • the processing device 11 functions as an output unit 111, an acquisition unit 112, a generation unit 113, and a display control unit 114 by reading the control program PR2 from the storage device 12 and executing it.
  • the output unit 111 outputs a message created by the user U1 using the input device 15 to the server 30.
  • the message specifies the sender of the message, the receiver of the message, and the content of the message.
  • the content of the message includes at least one of text and images.
  • When a message addressed to the user U1 acquired by the acquisition unit 112-1 (described later) has been read, the output unit 111 transmits information indicating that the message has been read to the server 30.
  • The acquisition unit 112 acquires a plurality of messages addressed to the user U from the server 30. For example, if the terminal device 10 is the terminal device 10-1 used by the user U1, who is the first user, the acquisition unit 112-1 acquires a plurality of messages addressed to the user U1 from the server 30.
  • the generating unit 113 generates multiple individual objects corresponding to the multiple messages acquired by the acquiring unit 112 on a one-to-one basis.
  • the display control unit 114 causes the XR glasses 20 as a display device to display the virtual object, which is a collection of multiple individual objects generated by the generation unit 113 .
  • Thus, the user U1 can visually grasp the number of the plurality of messages.
  • FIGS. 5 and 6 are explanatory diagrams showing an example of the operation of the generation unit 113 and the display control unit 114. In the following description, it is assumed that the X, Y, and Z axes are orthogonal to each other in the virtual space VS.
  • the X-axis extends in the front-rear direction of user U1.
  • the forward direction along the X axis is the X1 direction
  • the backward direction along the X axis is the X2 direction.
  • the Y-axis extends in the horizontal direction of the user U1.
  • the right direction along the Y axis is the Y1 direction
  • the left direction along the Y axis is the Y2 direction.
  • a horizontal plane is formed by these X-axis and Y-axis.
  • the Z-axis is orthogonal to the XY plane and extends in the vertical direction of the user U1.
  • the downward direction along the Z axis is the Z1 direction
  • the upward direction along the Z axis is the Z2 direction.
  • the coordinates of the user U1 in the virtual space VS correspond to the position of the user U1 in the real space.
  • the position of the user U1 in the physical space is indicated by position information generated in the XR glasses 20 worn on the head of the user U1.
  • the virtual object VO1 and the individual objects SO1 to SO3 are, for example, spheres.
  • the image information used by the display control unit 114 to display the individual objects SO1 to SO3 on the XR glasses 20 may be information stored in the storage device 12.
  • the image information may be information acquired from the server 30 by the acquisition unit 112 .
  • individual objects SO1 to SO3 may correspond one-to-one to all messages addressed to user U1.
  • individual objects SO1 to SO3 may correspond one-to-one only to unread messages among all messages addressed to user U1.
  • one individual object may correspond to a plurality of read messages.
  • The viewing angle at which the user U1 visually recognizes the virtual object VO1 is θ1.
  • the display control unit 114 further increases the size of the virtual object VO1 as the number of messages acquired by the acquisition unit 112 increases. That is, the display control unit 114 increases the size of the virtual object VO1 as the number of individual objects SO included in the virtual object VO1 increases. Further, the display control unit 114 arranges the virtual object VO1 at a position farther from the user U1 in the virtual space VS as the number of the plurality of messages is larger. In other words, the display control unit 114 increases the distance from the center of the virtual space VS to the center of the virtual object VO1 as the number of messages increases.
  • When the number of the plurality of messages is a first number, the display control unit 114 sets the size of the virtual object VO1 to a first size and sets the distance from the center of the virtual space VS to the center of the virtual object VO1 to a first distance. Further, when the number of the plurality of messages is a second number larger than the first number, the display control unit 114 sets the size of the virtual object VO1 to a second size larger than the first size and sets the distance from the center of the virtual space VS to the center of the virtual object VO1 to a second distance longer than the first distance.
  • The viewing angle at which the user U1 visually recognizes the virtual object VO2 is θ2.
  • It is preferable that the viewing angle θ1 at which the user views the virtual object VO1 in FIG. 5 and the viewing angle θ2 at which the user views the virtual object VO2 in FIG. 6 are equal.
  • In this case, the proportion of the field of view of the user U1 occupied by the virtual object VO1 and the proportion occupied by the virtual object VO2 are equal.
  • Even if the number of messages increases, the area occupied by the virtual object VO2 in the field of view of the user U1 thus remains equal to the area occupied by the virtual object VO1, so the user U1 does not feel that his or her field of vision has narrowed.
  • However, the viewing angle θ1 and the viewing angle θ2 do not necessarily have to be equal. For example, the viewing angle of the virtual object VO1 may gradually increase as the number of messages increases.
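  • The geometry behind keeping the viewing angle constant can be sketched as follows (the linear growth of distance with message count and all constants are assumptions; the patent only requires that more messages yield a larger size and a longer distance). If the apparent angle θ is held fixed, then tan(θ/2) = (diameter/2)/distance, so the diameter must grow in proportion to the distance:

```python
import math

def size_and_distance(num_messages, base_distance=1.0,
                      distance_per_message=0.2,
                      viewing_angle_rad=math.radians(20)):
    """Place the aggregate virtual object so its apparent size stays constant.

    More messages push the object farther from the center of the virtual
    space; the diameter is then chosen so the viewing angle is unchanged.
    """
    distance = base_distance + distance_per_message * num_messages
    diameter = 2.0 * distance * math.tan(viewing_angle_rad / 2.0)
    return diameter, distance

for n in (3, 10):
    diameter, distance = size_and_distance(n)
    print(f"{n} messages -> diameter {diameter:.2f}, distance {distance:.2f}")
```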
  • The display control unit 114 may change the display mode of the virtual object VO1 according to the number of messages addressed to the user U1. For example, as the number of individual objects SO included in the virtual object VO1 increases, the display control unit 114 may change at least one of the color of the virtual object VO1 and the colors of the individual objects SO included in the virtual object VO1. Alternatively, the display control unit 114 may change the shape of the virtual object VO1 as the number of individual objects SO increases. For example, each time the number of individual objects SO reaches a predetermined number, the plurality of individual objects SO may be arranged so as to represent a specific character in the virtual object VO1.
  • FIG. 7 is a diagram showing an example of how a plurality of individual objects SO1 to SO10 are aligned. In the example shown in FIG. 7, in the virtual object VO3, the individual objects SO1 to SO10 are aligned to represent the letter "N".
  • As a result, the terminal device 10 can more strongly convey to the user U1 that the number of messages has increased.
  • the display control unit 114 may vary the display modes of the individual objects SO according to the transmission source devices corresponding to each of the plurality of messages addressed to the user U1. For example, the display control unit 114 may change at least one of the shape and color of the individual object SO according to the device that sent the message.
  • the user U1 can distinguish the plurality of individual objects SO according to the transmission source devices of the plurality of messages, simply by visually recognizing the plurality of individual objects SO.
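  • A small sketch of such per-sender styling (the device names, colors, and shapes are illustrative assumptions):

```python
# Hypothetical mapping from source-device type to a display mode.
DISPLAY_MODES = {
    "smartphone": {"color": "blue", "shape": "sphere"},
    "tablet": {"color": "green", "shape": "cube"},
}
DEFAULT_MODE = {"color": "gray", "shape": "sphere"}

def style_individual_objects(messages):
    """Return one display mode per message, keyed by its source device."""
    return [DISPLAY_MODES.get(m["source_device"], DEFAULT_MODE)
            for m in messages]

messages = [{"source_device": "smartphone"}, {"source_device": "tablet"}]
print(style_individual_objects(messages))
```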
  • FIG. 8 is a block diagram showing a configuration example of the server 30.
  • the server 30 comprises a processing device 31 , a storage device 32 , a communication device 33 , a display 34 and an input device 35 .
  • Each element of server 30 is interconnected by one or more buses for communicating information.
  • The processing device 31 is a processor that controls the server 30 as a whole. It is configured using, for example, one or more chips, for example a central processing unit (CPU) including interfaces with peripheral devices, arithmetic units, registers, and the like. Some or all of the functions of the processing device 31 may be realized by hardware such as a DSP, an ASIC, a PLD, or an FPGA. The processing device 31 executes various processes in parallel or sequentially.
  • the storage device 32 is a recording medium readable and writable by the processing device 31 .
  • the storage device 32 also stores a plurality of programs including the control program PR3 executed by the processing device 31 .
  • the storage device 32 stores a message database MD in which information related to messages transmitted and received between a plurality of users U is stored.
  • FIG. 9 is a table showing an example of the message database MD.
  • The acquisition unit 311 provided in the server 30 acquires, from the terminal device 10, messages transmitted and received between the users U. More specifically, the acquisition unit 311 acquires information indicating the sender of a message output from the terminal device 10, information indicating the recipient of the message, and information indicating the content of the message. This information is stored in the message database MD. Further, the acquisition unit 311 acquires information indicating that each message has been read by the user U. Based on this information, the message management unit 313, which will be described later, adds a flag to the message database MD indicating whether or not each message has been read. As an example, a flag with a value of "0" indicates that the message is unread.
  • a flag with a value of "1” indicates that the message has already been read.
  • "n" is an integer of 2 or more.
  • the communication device 33 is hardware as a transmission/reception device for communicating with other devices.
  • the communication device 33 is also called a network device, a network controller, a network card, a communication module, etc., for example.
  • the communication device 33 may include a connector for wired connection and an interface circuit corresponding to the connector. Further, the communication device 33 may have a wireless communication interface. Products conforming to wired LAN, IEEE1394, and USB are examples of connectors and interface circuits for wired connection. Also, as a wireless communication interface, there are products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
  • the display 34 is a device that displays images and character information.
  • the display 34 displays various images under the control of the processing device 31 .
  • various display panels such as a liquid crystal display panel and an organic EL display panel are preferably used as the display 34 .
  • the input device 35 is a device that accepts operations by the administrator of the information processing system 1 .
  • The input device 35 includes, for example, a keyboard, a touch pad, a touch panel, and a pointing device such as a mouse.
  • the input device 35 may also serve as the display 34 .
  • the processing device 31 functions as an acquisition unit 311, an output unit 312, and a message management unit 313, for example, by reading the control program PR3 from the storage device 32 and executing it.
  • the acquisition unit 311 acquires various data from the terminal device 10 via the communication device 33 .
  • the data includes, for example, data indicating the operation content for the virtual object VO, which is input to the terminal device 10 by the user U1 wearing the XR glasses 20 on the head.
  • the acquisition unit 311 acquires messages transmitted and received between users U. Furthermore, as an example, when a message addressed to user U1 has been read by user U1 in terminal device 10-1, acquisition unit 311 acquires information indicating that the message has been read.
  • the output unit 312 transmits image information indicating an image displayed on the XR glasses 20 to the terminal device 10 .
  • the image information may be stored in the storage device 32 .
  • the image information may be generated by a generating unit (not shown).
  • the output unit 312 transmits a message addressed to the user U1 stored in the message database MD to the terminal device 10-1.
  • The message management unit 313 manages the message database MD. As an example, when the acquisition unit 311 acquires information indicating that a message addressed to the user U1 has been read by the user U1 on the terminal device 10-1, the message management unit 313 changes the flag linked to that message from "0", indicating unread, to "1", indicating read.
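  • As an illustrative sketch of the message database MD and the flag update (the field names are assumptions; the patent specifies only sender, recipient, content, and a read flag of "0" or "1"):

```python
from dataclasses import dataclass

@dataclass
class MessageRecord:
    """One row of the message database MD (field names are assumed)."""
    message_id: int
    sender: str
    recipient: str
    content: str
    read_flag: int = 0  # 0 = unread, 1 = read

def mark_as_read(database, message_id):
    """What the message management unit 313 does on a read notification."""
    for record in database:
        if record.message_id == message_id:
            record.read_flag = 1

database = [MessageRecord(1, "U2", "U1", "Hello")]
mark_as_read(database, 1)   # flag changes from 0 to 1
```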
  • FIG. 10 is a flow chart showing the operation of the terminal device 10 according to the first embodiment, especially the terminal device 10-1 used by the user U1. The operation of the terminal device 10-1 will be described below with reference to FIG.
  • In step S1, the processing device 11-1 functions as the acquisition unit 112-1.
  • Processing device 11-1 acquires a plurality of messages addressed to user U1.
  • In step S2, the processing device 11-1 functions as the generation unit 113-1.
  • the processing device 11-1 generates a plurality of individual objects SO corresponding to the plurality of messages on a one-to-one basis.
  • In step S3, the processing device 11-1 functions as the display control unit 114-1.
  • the processing device 11-1 causes the XR glasses 20 as a display device to display a virtual object VO, which is an aggregate of a plurality of individual objects SO.
  • the processing device 11-1 increases the size of the virtual object VO as the number of messages acquired in step S1 increases.
  • the processing device 11-1 arranges the virtual object VO1 at a position farther from the user U1 in the virtual space VS as the number of the plurality of messages acquired in step S1 increases. After that, the processing device 11-1 executes the process of step S1.
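  • The flow of steps S1 to S3 can be summarized in a sketch (the injected `fetch_messages` and `display` callables stand in for the acquisition unit 112-1 and the display control unit 114-1; their signatures are assumptions):

```python
import math

def size_and_distance(num_messages, base=1.0, step=0.2,
                      angle=math.radians(20)):
    """Same placement rule as the earlier sketch: more messages, bigger
    object, farther from the center of the virtual space."""
    distance = base + step * num_messages
    return 2.0 * distance * math.tan(angle / 2.0), distance

def run_steps_s1_to_s3(fetch_messages, display, user_id="U1"):
    """One pass through the flowchart of FIG. 10."""
    messages = fetch_messages(user_id)                       # S1: acquire
    individual_objects = [{"message": m} for m in messages]  # S2: generate
    size, distance = size_and_distance(len(messages))        # S3: display
    display({"objects": individual_objects,
             "size": size, "distance": distance})

run_steps_s1_to_s3(lambda user_id: ["message A", "message B"], print)
```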
  • As described above, the terminal device 10 as a display control device is a display control device for causing the XR glasses 20, a display device worn on the head, to display a virtual space VS including a virtual object VO.
  • the terminal device 10 includes an acquisition unit 112 , a generation unit 113 and a display control unit 114 .
  • Acquisition unit 112 acquires a plurality of messages.
  • the generation unit 113 generates a plurality of individual objects SO corresponding to a plurality of messages on a one-to-one basis.
  • the display control unit 114 causes the XR glasses 20 as a display device to display a virtual object VO, which is an aggregate of a plurality of individual objects SO.
  • When the number of the plurality of messages is a first number, the display control unit 114 sets the size of the virtual object VO to a first size and sets the distance from the center of the virtual space VS to the center of the virtual object VO to a first distance. Furthermore, when the number of the plurality of messages is a second number larger than the first number, the display control unit 114 sets the size of the virtual object VO to a second size larger than the first size and sets the distance from the center of the virtual space VS to the center of the virtual object VO to a second distance longer than the first distance.
  • Since the terminal device 10 has the above configuration, it can display the virtual object VO corresponding to the number of messages in the virtual space VS while suppressing deterioration of the user's convenience even if the number of messages increases. Specifically, as the number of individual objects SO corresponding to the messages increases, the terminal device 10 increases the size of the virtual object VO, which is a collection of the individual objects SO, and displays it at a position farther from the user U1. With this configuration, even if the number of individual objects SO displayed on the display 28 of the XR glasses 20 worn on the head of the user U1 increases, their viewability is ensured, and deterioration in user convenience is suppressed.
  • the display control unit 114 changes the display mode of the virtual object VO according to the number of messages.
  • Since the terminal device 10 has the above configuration, when the number of the plurality of messages increases, it can more strongly convey to the user U1 that the number of messages has increased.
  • the display control unit 114 makes the display modes of the plurality of individual objects SO different from each other according to the transmission source device corresponding to each of the plurality of messages.
  • Since the terminal device 10 has the above configuration, the user U1 can distinguish the plurality of individual objects SO according to the transmission source devices of the plurality of messages simply by visually recognizing them.
  • 2: Second Embodiment
  • A configuration of an information processing system 1A including a terminal device 10A as a display control device according to the second embodiment of the present invention will be described with reference to FIGS. 11 to 14.
  • The information processing system 1A according to the second embodiment of the present invention differs from the information processing system 1 according to the first embodiment in that a terminal device 10A is provided instead of the terminal device 10. Otherwise, the overall configuration of the information processing system 1A is the same as the overall configuration of the information processing system 1 according to the first embodiment shown in FIG. 1, so illustration and description thereof are omitted.
  • FIG. 11 is a block diagram showing a configuration example of the terminal device 10A. Unlike the terminal device 10, the terminal device 10A includes a processing device 11A instead of the processing device 11 and a storage device 12A instead of the storage device 12.
  • the storage device 12A stores the control program PR2A instead of the control program PR2.
  • the processing device 11A includes a display control unit 114A instead of the display control unit 114 included in the processing device 11.
  • the processing device 11A also includes a reception unit 115 and a determination unit 116 in addition to the components included in the processing device 11 .
  • the reception unit 115 receives an operation from the user U1 on the virtual object VO.
  • the accepting unit 115 may accept an operation using the input device 15 by the user U1.
  • the accepting unit 115 may accept an operation of the user U1 touching the virtual object VO in the virtual space VS.
  • the operation of selecting the virtual object VO in the virtual space VS is an example of the first operation.
  • the receiving unit 115 may also receive an operation by the user U1 to select one message from a list of multiple messages displayed in the virtual space VS, as will be described later. There is a one-to-one correspondence between these multiple messages and multiple individual objects SO.
  • One individual object SO is selected by the user U1 selecting one message from a list of a plurality of messages.
  • the accepting unit 115 may accept an operation of the user U1 touching one individual object SO in the virtual space VS.
  • the operation of selecting one individual object SO in the virtual space VS is an example of the second operation.
  • The display control unit 114A has the same functions as the display control unit 114. Further, when the operation by the user U1 is the first operation, the display control unit 114A displays a list of a plurality of messages in the virtual space VS. On the other hand, when the operation by the user U1 is the second operation, the display control unit 114A causes the virtual space VS to display the content of the message corresponding to the one individual object SO designated by the second operation.
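  • The branch between the two operations can be sketched as a small dispatcher (the event shapes and field names are assumptions for illustration):

```python
def handle_operation(operation, messages, show_list, show_content):
    """First operation: the whole virtual object VO was selected, so show
    the list L of message titles. Second operation: one individual object
    SO was selected, so show that message's content."""
    if operation["type"] == "select_virtual_object":        # first operation
        show_list([m["title"] for m in messages])
    elif operation["type"] == "select_individual_object":   # second operation
        show_content(messages[operation["index"]]["content"])

messages = [{"title": "Message A", "content": "Hello"},
            {"title": "Message F", "content": "See you tomorrow"}]
handle_operation({"type": "select_individual_object", "index": 1},
                 messages, print, print)  # prints "See you tomorrow"
```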
  • FIGS. 12 and 13 are explanatory diagrams of operation examples of the display control unit 114A and the reception unit 115.
  • As shown in FIG. 12, when the user U1 performs an operation of selecting the virtual object VO2 using the terminal device 10-1, the display control unit 114A displays a list L of a plurality of messages in the virtual space VS. More specifically, a list of the titles of messages A to F is displayed as the list L in the virtual space VS by the display control unit 114A. The messages A to F correspond one-to-one to the individual objects SO1 to SO6. Note that in FIG. 12 the list L is displayed on the left side of the virtual object VO2 as seen from the user U1, but this display location is merely an example.
  • the list L may be displayed anywhere in the virtual space VS.
  • the list L is preferably displayed at a position that does not overlap the virtual object VO2 when viewed from the user U1. The same applies when the user U1 performs an operation of touching the virtual object VO7 in the virtual space VS.
  • the user U1 can visually recognize, in a list format, a plurality of messages corresponding one-to-one to the plurality of individual objects SO included in the virtual object VO.
  • When the user U1 performs an operation to select one message from the plurality of messages shown in the list L, a message object MO1 indicating the content of the message is displayed, as shown in FIG. 13.
  • the message object MO1 includes at least one of text and images.
  • the user U1 performs an operation to select "message F" corresponding to the individual object SO6 from the plurality of messages shown in the list L.
  • the user U1 performs an operation to touch the individual object SO6 in the virtual space VS.
  • the user U1 can visually recognize the specific contents of each of the multiple messages corresponding to the multiple individual objects SO included in the virtual object VO.
  • The determination unit 116 determines the importance of each of the plurality of messages. For example, the determination unit 116 may determine the importance by analyzing the content of each of the plurality of messages. Alternatively, the determination unit 116 may determine the importance based on the transmission source device corresponding to each of the plurality of messages. Alternatively, the determination unit 116 may determine the degree of importance based on an operation of the user U1 using the input device 15. For example, as shown in FIG. 13, while the content of a message corresponding to one individual object SO is displayed, the user U1 judges the importance of the message and inputs the judgment result using the input device 15. The determination unit 116 may then determine the degree of importance based on the content of the input from the input device 15.
  • The display control unit 114A displays the individual objects SO corresponding to messages whose importance is equal to or greater than a predetermined value nearer to the user U1, that is, nearer to the approximate center of the virtual space VS, than the individual objects SO corresponding to messages whose importance is less than the predetermined value. Further, the display control unit 114A may display an individual object SO within the virtual object VO nearer to the user U1 the higher its importance.
  • the "predetermined value" described above is an example of the "first value”.
  • user U1 can preferentially check messages that are more important to him/herself.
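  • A minimal sketch of this importance-based placement (the two-band layout and all constants are assumptions; a finer-grained variant could sort by importance instead):

```python
def place_by_importance(importances, near=1.0, far=3.0, first_value=0.5):
    """Map each individual object to a distance from the virtual-space center.

    Objects whose importance is at or above `first_value` are placed
    nearer the user than those below it.
    """
    return {name: near if importance >= first_value else far
            for name, importance in importances.items()}

print(place_by_importance({"SO1": 0.9, "SO2": 0.2}))  # SO1 near, SO2 far
```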
  • FIG. 14 is a flowchart showing the operation of the terminal device 10A according to the second embodiment, especially the terminal device 10A-1 used by the user U1. The operation of the terminal device 10A-1 will be described below with reference to FIG.
  • In step S11, the processing device 11A-1 functions as the acquisition unit 112-1.
  • the processing device 11A-1 acquires a plurality of messages addressed to the user U1.
  • In step S12, the processing device 11A-1 functions as the generation unit 113-1.
  • the processing device 11A-1 generates a plurality of individual objects SO corresponding to the plurality of messages on a one-to-one basis.
  • In step S13, the processing device 11A-1 functions as the display control unit 114A-1.
  • the processing device 11A-1 causes the XR glasses 20 as a display device to display a virtual object VO, which is an aggregate of a plurality of individual objects SO.
  • the processing device 11A-1 increases the size of the virtual object VO as the number of messages acquired in step S11 increases. Also, the processing device 11A-1 arranges the virtual object VO1 at a position farther from the user U1 in the virtual space VS as the number of the plurality of messages acquired in step S11 increases.
  • In step S14, the processing device 11A-1 functions as the reception unit 115.
  • Processing device 11A-1 receives an operation from user U1. If the operation from the user U1 is the first operation on the virtual object VO, the processing device 11A-1 executes the process of step S15. If the operation from the user U1 is the second operation on the individual object SO, the processing device 11A-1 executes the process of step S16.
  • In step S15, the processing device 11A-1 functions as the display control unit 114A-1.
  • the processing device 11A-1 causes the list L of a plurality of messages to be displayed in the virtual space VS. After that, the processing device 11A-1 executes the process of step S14.
  • In step S16, the processing device 11A-1 functions as the display control unit 114A-1.
  • the processing device 11A-1 causes the contents of the message corresponding to one individual object SO to be displayed in the virtual space VS. After that, the processing device 11A-1 executes the process of step S11.
  • the terminal device 10A as a display control device further includes the reception unit 115 that receives an operation on the virtual object VO. If the above operation is the first operation, the display control unit 114 causes the list L of the plurality of messages to be displayed in the virtual space VS.
  • the user U1 can visually recognize, in a list format, a plurality of messages corresponding one-to-one to a plurality of individual objects SO included in the virtual object VO.
  • When the above operation is the second operation of designating one individual object SO among the plurality of individual objects SO, the display control unit 114 causes the content of the message corresponding to the one individual object SO to be displayed in the virtual space VS.
  • the user U1 can visually recognize the specific contents of each of the multiple messages corresponding one-to-one to the multiple individual objects SO included in the virtual object VO.
  • the terminal device 10A as a display control device further includes the determination unit 116 that determines the importance of each of a plurality of messages.
  • The display control unit 114 displays the individual objects SO corresponding to messages whose importance is equal to or greater than the first value nearer to the user U1 than the individual objects SO corresponding to messages whose importance is less than the first value.
  • Since the terminal device 10A has the above configuration, the user U can preferentially check messages that are more important to him or her.
  • 3: Third Embodiment
  • 3-1: Configuration of the Third Embodiment
  • 3-1-1: Overall Configuration
  • The information processing system 1B according to the third embodiment of the present invention differs from the information processing system 1 according to the first embodiment in that a terminal device 10B is provided instead of the terminal device 10. Otherwise, the overall configuration of the information processing system 1B is the same as the overall configuration of the information processing system 1 according to the first embodiment shown in FIG. 1, so illustration and description thereof are omitted.
  • FIG. 15 is a block diagram showing a configuration example of the terminal device 10B.
  • The terminal device 10B includes a processing device 11B instead of the processing device 11 and a storage device 12B instead of the storage device 12. The terminal device 10B also includes a sound pickup device 17 in addition to the components included in the terminal device 10.
  • the sound pickup device 17 picks up the voice of the user U1 and converts the picked-up voice into an electric signal.
  • The sound pickup device 17 is specifically a microphone. The electrical signal converted from the voice by the sound pickup device 17 is output to the speech recognition unit 117, which will be described later.
  • the storage device 12B stores the control program PR2B instead of the control program PR2.
  • the processing device 11B includes an acquisition unit 112B instead of the acquisition unit 112.
  • the processing device 11B also includes a speech recognition unit 117 and a message generation unit 118 in addition to the components included in the processing device 11.
  • the voice recognition unit 117 recognizes the voice collected by the sound collection device 17. More specifically, the speech recognition unit 117 generates text by performing speech recognition based on the electrical signal acquired from the sound pickup device 17 .
  • the message generation unit 118 generates a message corresponding to the text generated by the speech recognition unit 117.
  • The acquisition unit 112B acquires the message generated by the message generation unit 118.
  • The plurality of messages acquired by the acquisition unit 112B may include, in addition to the messages generated by the message generation unit 118, messages generated by a plurality of users U including the user U1.
  • the plurality of messages are generated by the terminal device 10B-1 as a display control device and one or more terminal devices 10B connected to the terminal device 10B-1 via the communication network NET.
  • Since the terminal device 10B has the above configuration, it can generate a message based on the voice uttered by the user U1 and generate an individual object SO based on the generated message.
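  • The sound pickup, speech recognition, and message generation flow can be sketched as follows (`recognize` stands in for the speech recognition unit 117, and the message fields mirror the database sketch above; all names are assumptions):

```python
def voice_to_message(audio_signal, recognize, sender="U1", recipient="U2"):
    """Sound pickup device 17 -> speech recognition unit 117 ->
    message generation unit 118, reduced to a single function."""
    text = recognize(audio_signal)
    return {"sender": sender, "recipient": recipient, "content": text}

# A dummy recognizer stands in for a real speech-to-text engine.
message = voice_to_message(b"raw-pcm-bytes",
                           lambda audio: "Hello from voice input")
print(message)
```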
  • the messages generated by the message generator 118 are included in the plurality of messages acquired in step S1.
  • the plurality of messages are generated by the terminal device 10B-1 as the display control device and by one or more terminal devices 10B connected to the terminal device 10B-1 via the communication network NET.
  • the user U1 can confirm messages generated by users U other than the user U1.
  • the terminal device 10B as a display control device further includes the sound pickup device 17, the speech recognition unit 117, and the message generation unit 118.
  • the sound pickup device 17 picks up the voice of the user U and outputs an electrical signal representing the voice.
  • the speech recognition unit 117 generates text based on the electrical signal output from the sound pickup device 17.
  • the message generation unit 118 generates a message corresponding to the text generated by the speech recognition unit 117.
  • the plurality of messages includes messages generated by the message generation unit 118.
  • since the terminal device 10B has the above configuration, it can generate the individual object SO based on the voice uttered by the user U1.
  • 4-1: Configuration of Fourth Embodiment 4-1-1: Overall Configuration
  • An information processing system 1C according to the fourth embodiment of the present invention differs from the information processing system 1 according to the first embodiment in that a terminal device 10C is provided instead of the terminal device 10 and a server 30A is provided instead of the server 30. Otherwise, the overall configuration of the information processing system 1C is the same as the overall configuration of the information processing system 1 according to the first embodiment shown in FIG. 1, so illustration and description thereof will be omitted.
  • the terminal device 10C includes a processing device 11C instead of the processing device 11 and a storage device 12C instead of the storage device 12.
  • the storage device 12C stores a control program PR2C instead of the control program PR2.
  • the processing device 11C includes an output section 111C instead of the output section 111.
  • otherwise, the configuration of the terminal device 10C is the same as the configuration of the terminal device 10 according to the first embodiment shown in FIG. 4, so illustration and description thereof will be omitted.
  • the output unit 111C has the same function as the output unit 111. Further, the output unit 111C outputs to the server 30A the device ID (identifier) of the terminal device 10, the user name of the user using the terminal device 10, and the location information acquired by the terminal device 10 from the XR glasses 20. For example, if the terminal device 10 is the terminal device 10-1 used by the user U1 who is the first user, the output unit 111C-1 outputs to the server 30A the device ID of the terminal device 10-1, the user name of the user of the terminal device 10-1, and the position information generated by the XR glasses 20 worn on the head of the user U1.
  • the terminal device 10-1 is an example of a first display control device.
  • the output unit 111C also outputs coordinates indicating the display position in the virtual space VS of the virtual object VO displayed in the virtual space VS by the display control unit 114 to the server 30A.
  • a virtual space VS including a virtual object VO is displayed on the XR glasses 20 connected to the terminal device 10-1.
  • XR glasses 20 are an example of a first display device.
  • the virtual space VS displayed on the XR glasses 20 is an example of the first virtual space.
  • FIG. 16 is a block diagram showing a configuration example of the server 30A. Unlike the server 30, the server 30A includes a processing device 31A instead of the processing device 31 and a storage device 32A instead of the storage device 32.
  • the storage device 32A stores the control program PR3A instead of the control program PR3.
  • the storage device 32A also stores a location information database LD.
  • FIG. 17 is a table showing an example of the location information database LD.
  • the terminal device 10C outputs to the server 30A the device ID of the terminal device 10C, the user name of the user using the terminal device 10C, and the location information that the terminal device 10C acquired from the XR glasses 20 or generated itself.
  • the location information database LD stores these device IDs, user names, and location information.
  • the location information database LD shown in FIG. 17 stores, in a mutually linked state, the device IDs, the user names of the users U1 to UL, and the position information (x, y, z) acquired from the XR glasses 20, for example (xu1, yu1, zu1) for the user U1.
  • "L" is an integer of 2 or more.
  • the processing device 31A includes an acquisition unit 311A instead of the acquisition unit 311 and an output unit 312A instead of the output unit 312.
  • the processing device 31A also includes a determination unit 314 and an extraction unit 315 in addition to the constituent elements included in the processing device 31 .
  • the determination unit 314 determines whether or not the number of messages output by the output unit 312A to the terminal device 10C-1 is equal to or greater than a predetermined number.
  • the acquisition unit 311A acquires, from the terminal device 10C-1, the coordinates indicating the display position in the virtual space VS of the virtual object VO displayed in the virtual space VS by the display control unit 114-1. Note that the acquisition unit 311A may acquire the coordinates indicating the display position of the individual object SO in addition to the coordinates indicating the display position of the virtual object VO.
  • the extraction unit 315 extracts, from among the users U who are permitted to share the virtual space VS with the user U1, users U other than the user U1 who are located within a predetermined distance from the display position of the virtual object VO in the virtual space VS.
  • the extracted other user U is an example of the second user.
  • when the number of messages is equal to or greater than the predetermined number, the second user is a user U who is permitted to share the virtual space VS and who exists within a predetermined distance from the display position in the virtual space VS. It should be noted that this "predetermined distance" preferably increases according to the number of messages output to the terminal device 10C-1 by the output unit 312A.
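One possible reading of this extraction rule, with the radius assumed to grow linearly with the message count (the embodiment only says the distance "preferably increases"), is sketched below; the constants and the sharers set are illustrative.

    import math

    def predetermined_distance(message_count, base=2.0, per_message=0.1):
        """Assumed linear growth of the sharing radius with the number of
        messages output to the terminal device 10C-1."""
        return base + per_message * message_count

    def extract_second_users(location_db, display_pos, message_count,
                             sharers, first_user="U1"):
        """Extraction unit 315 sketch: users other than U1 who may share the
        virtual space VS and are within the scaled radius of the display
        position of the virtual object VO."""
        radius = predetermined_distance(message_count)
        return [e["user"] for e in location_db.values()
                if e["user"] != first_user
                and e["user"] in sharers
                and math.dist(e["pos"], display_pos) <= radius]

    db = {"T-01": {"user": "U1", "pos": (0.0, 0.0, 0.0)},
          "T-02": {"user": "U2", "pos": (1.5, 0.5, 0.0)}}
    print(extract_second_users(db, (0.0, 0.0, 0.0), 15, sharers={"U2"}))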
  • the output unit 312A supplies, to the terminal device 10C used by the user U extracted by the extraction unit 315, a plurality of messages identical to the plurality of messages output to the terminal device 10C-1 and the coordinates indicating the display position of the virtual object VO in the virtual space VS.
  • the same multiple messages and the coordinates indicating the display position of the virtual object VO are examples of control information.
  • the terminal device 10C to which the output unit 312A transmits the control information is an example of a second display control device.
  • the XR glasses 20 connected to the terminal device 10C or the display 14 provided in the terminal device 10C are an example of a second display device.
  • the terminal device 10C that has obtained the same plurality of messages generates a plurality of individual objects SO corresponding to the same plurality of messages on a one-to-one basis, similar to the terminal device 10C-1.
  • the terminal device 10C displays the virtual space VS, in which the virtual object VO that is an aggregate of the plurality of individual objects SO is arranged at the display position obtained from the server 30A, on the XR glasses 20 connected to the terminal device 10C or on the display 14 provided in the terminal device 10C.
  • the virtual space VS displayed on the XR glasses 20 connected to the terminal device 10C or the display 14 provided in the terminal device 10C is an example of the second virtual space.
  • the same plurality of messages output from the output unit 312A and the coordinates indicating the display position of the virtual object VO are information for displaying the second virtual space including the virtual object VO on the second display device.
  • in this case, the output unit 312A also outputs the coordinates indicating the display position of the individual object SO to the terminal device 10C.
  • the terminal device 10C displays the individual object SO at the display position in the virtual space VS obtained from the server 30A.
  • the other user U2 can also visually recognize the virtual object VO.
  • FIG. 18 is an explanatory diagram showing an example of operations of the determination unit 314, the extraction unit 315, and the output unit 312A. It is assumed that a virtual space VS including a virtual object VO4 is displayed on the XR glasses 20 worn on the head of the user U1. As the number of messages addressed to the user U1 increases, the number of individual objects SO included in the virtual object VO4 also increases. Also, as the number of individual objects SO included in the virtual object VO4 increases, the virtual object VO4 moves away from the user U1.
  • the output unit 312A transmits to the terminal device 10C-2 used by the user U2 the same plurality of messages that were transmitted to the terminal device 10C-1.
  • the acquisition unit 112 provided in the terminal device 10C-2 acquires the same plurality of messages and the coordinates indicating the display position of the virtual object VO4 in the virtual space VS from the server 30A.
  • the generation unit 113 provided in the terminal device 10C-2 generates a plurality of individual objects SO corresponding to the same plurality of messages on a one-to-one basis.
  • FIG. 19 is a flow chart showing the operation of the server 30A according to the fourth embodiment. The operation of the server 30A will be described below with reference to FIG.
  • in step S21, the processing device 31A functions as the output unit 312A.
  • the processing device 31A transmits a plurality of messages to the terminal device 10C-1 used by the user U1.
  • the terminal device 10C-1 causes the XR glasses 20 to display the virtual object VO4 based on the plurality of messages.
  • in step S22, the processing device 31A functions as the determination unit 314.
  • the processing device 31A determines whether or not the number of messages sent to the terminal device 10C-1 is equal to or greater than a predetermined number.
  • when the determination result is affirmative, that is, when the processing device 31A determines that the number of the plurality of messages is equal to or greater than the predetermined number, the processing device 31A executes the process of step S23.
  • when the determination result is negative, that is, when the processing device 31A determines that the number of the plurality of messages is less than the predetermined number, the processing device 31A executes the process of step S21.
  • in step S23, the processing device 31A functions as the acquisition unit 311A.
  • the processing device 31A acquires, from the terminal device 10C-1, the coordinates indicating the display position in the virtual space VS of the virtual object VO4 displayed in the virtual space VS by the display control unit 114-1.
  • in step S24, the processing device 31A functions as the extraction unit 315.
  • the processing device 31A extracts users U other than the user U1 who exist within a predetermined distance from the display position of the virtual object VO4 in the virtual space VS from among the users U who share the virtual space VS with the user U1.
  • it is assumed here that the processing device 31A extracts the user U2.
  • in step S25, the processing device 31A functions as the output unit 312A.
  • the processing device 31A transmits, to the terminal device 10C-2 used by the user U2, a plurality of messages identical to the plurality of messages output to the terminal device 10C-1 and the coordinates indicating the display position of the virtual object VO4 in the virtual space VS.
  • the terminal device 10C-2 that has obtained the same plurality of messages generates a plurality of individual objects SO corresponding to the same plurality of messages on a one-to-one basis, similar to the terminal device 10C-1.
  • the terminal device 10C-2 displays the virtual object VO4, which is an aggregate of the plurality of individual objects SO, at the display position in the virtual space VS obtained from the server 30A.
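Steps S21 to S25 can be summarized in the following sketch; the three callbacks stand in for the output unit 312A, the acquisition unit 311A, and the extraction unit 315, and the threshold value is an assumption.

    PREDETERMINED_NUMBER = 10  # assumed value of "a predetermined number"

    def server_step(messages, send, get_display_pos, extract_users):
        """One pass of S21-S25 on the server 30A."""
        send("10C-1", messages, None)                            # S21
        if len(messages) < PREDETERMINED_NUMBER:                 # S22
            return                                               # negative: back to S21
        pos = get_display_pos("10C-1")                           # S23
        for user_terminal in extract_users(pos, len(messages)):  # S24
            send(user_terminal, messages, pos)                   # S25

    # demo with trivial stubs
    server_step(
        ["msg"] * 12,
        send=lambda dst, msgs, pos: print(f"send {len(msgs)} msgs to {dst}, pos={pos}"),
        get_display_pos=lambda dev: (0.0, 2.0, 1.0),
        extract_users=lambda pos, n: ["10C-2"],
    )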
  • the server 30A is a server that transmits a plurality of messages to the terminal devices 10 to 10C as the first display control device.
  • the XR glasses 20 as the first display device are worn on the head of the user U1 as the first user.
  • the server 30A includes an acquisition unit 311A and an output unit 312A.
  • the obtaining unit 311A obtains the display position of the virtual object VO4 in the first virtual space VS.
  • the output unit 312A transmits control information to the terminal devices 10 to 10C as the second display control device used by a second user who exists within a predetermined distance from the display position in the first virtual space VS and who is permitted to share the virtual object VO.
  • the terminal devices 10 to 10C as the second display control device display the second virtual space on the XR glasses 20 as the second display device worn on the head of the user U2 as the second user.
  • the above control information is information for displaying the virtual object VO4 in the second virtual space on the second display device.
  • since the server 30A has the above configuration, when the number of individual objects SO included in the virtual object VO4 displayed on the XR glasses 20 is equal to or greater than the predetermined number, the other user U2 can also visually recognize the virtual object VO4.
  • FIG. 20 is an explanatory diagram of the virtual object VO5 generated when the terminal device 10B and the server 30A are combined.
  • the terminal device 10B may generate a plurality of individual objects SO based on the cheers of individual spectators in a soccer stadium, and may cause the XR glasses 20 as AR glasses to display, above the soccer stadium, the virtual object VO5 composed of the plurality of individual objects SO.
  • the virtual object VO5 is shared by a plurality of spectators wearing the XR glasses 20 as AR glasses on their heads, and the plurality of individual objects SO may be arranged in a representative shape. In this case, the virtual object VO5 and the virtual space VS, which are composed of individual objects SO corresponding to the plurality of messages on a one-to-one basis, are shared by a plurality of users U who gather at a predetermined location.
  • the predetermined place may be a venue for some event or a public facility such as a school.
  • multiple users U who share the virtual object VO5 and the virtual space VS may participate in the same event; an e-sports tournament is one example of such an event.
  • the terminal devices 10 to 10C may also acquire, in addition to messages addressed to the user U1 and messages generated by the user U1 himself/herself, messages sent from a second user U to a third user U.
  • the terminal devices 10 to 10C each include a display control section 114 or a display control section 114A.
  • the servers 30 to 30A may be configured to include the display control unit 114 or the display control unit 114A.
  • the servers 30 to 30A may set coordinates indicating the display positions of the virtual object VO and the individual object SO in the virtual space VS.
  • the server 30A has the determination unit 314.
  • the terminal device 10C may be configured to include the determination unit 314 instead of the server 30A. Specifically, the terminal device 10C determines whether the number of acquired messages or the number of individual objects SO displayed on the XR glasses 20 is equal to or greater than a predetermined number, and sends the determination result to the server. It may be configured to output to 30A.
  • the display control unit 114 or the display control unit 114A provided in the terminal devices 10 to 10C may erase an individual object SO whose corresponding message content has been read by the user U.
  • the determination unit 314 may determine whether the number of unread messages, rather than the number of the plurality of messages output to the terminal device 10C-1, is equal to or greater than the predetermined number.
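A sketch of this unread-count variant, assuming each message carries a read flag, follows; the dictionary format is an invented example.

    def should_share(messages, predetermined_number):
        """Variant determination: count only unread messages when deciding
        whether the virtual object should be shared with nearby users."""
        unread = sum(1 for m in messages if not m.get("read", False))
        return unread >= predetermined_number

    print(should_share([{"read": False}, {"read": True}], 1))  # True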
  • the acquisition unit 112 provided in the terminal device 10 to the terminal device 10C acquires a plurality of messages from the server 30 or the server 30A.
  • the acquiring unit 112 may acquire only message IDs corresponding to each of the plurality of messages from the server 30 or server 30A.
  • in this case, the generation unit 113 generates individual objects SO that correspond one-to-one to the plurality of message IDs instead of the plurality of messages. Also in this case, the acquisition unit 112 acquires the content of a message from the server 30 or the server 30A only when that content is first to be displayed. After that, the display control unit 114 or the display control unit 114A may display the content of the message in the virtual space VS.
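This ID-first, fetch-on-display behavior amounts to lazy loading with a cache; a sketch under that assumption follows, with fetch_body standing in for the round trip to the server 30 or the server 30A.

    class MessageClient:
        """Sketch of ID-first acquisition: only message IDs are held up
        front; the body is fetched from the server on first display."""

        def __init__(self, fetch_body):
            self.fetch_body = fetch_body  # callable: message_id -> content
            self.cache = {}

        def content_for(self, message_id):
            if message_id not in self.cache:   # first display request
                self.cache[message_id] = self.fetch_body(message_id)
            return self.cache[message_id]

    client = MessageClient(lambda mid: f"<body of {mid}>")
    print(client.content_for("msg-001"))  # fetched from the server now
    print(client.content_for("msg-001"))  # served from the local cache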
  • the individual object SO corresponds to the message generated by the user U on a one-to-one basis.
  • the messages to which the individual object SO corresponds are not limited to messages generated by the user U.
  • the message may be a notification to user U generated by an application.
  • the server 30 or the server 30A outputs messages stored in the message database MD to the terminal devices 10 to 10C.
  • the method of outputting a message from server 30 or server 30A to terminal devices 10 to 10C is not limited to this.
  • the information processing system 1 according to the first embodiment to the information processing system 1C according to the fourth embodiment may further include a content server. More specifically, the server 30 or the server 30A may acquire content including a message for the user U from the content server, and output the acquired content to the terminal devices 10 to 10C.
  • the terminal devices 10 to 10C and the XR glasses 20 are implemented separately.
  • the method of realizing the terminal devices 10 to 10C and the XR glasses 20 in the embodiment of the present invention is not limited to this.
  • the XR glasses 20 may have the same functions as the terminal device 10.
  • the terminal devices 10 to 10C and the XR glasses 20 may be implemented within a single housing. The same applies to the information processing system 1A according to the second embodiment to the information processing system 1C according to the fourth embodiment.
  • the information processing system 1 according to the first embodiment to the information processing system 1C according to the fourth embodiment include, as an example, XR glasses 20 as AR glasses.
  • instead of the XR glasses 20, an HMD (Head Mounted Display) employing VR (Virtual Reality) technology, an HMD employing MR (Mixed Reality) technology, or MR glasses employing MR technology may be adopted.
  • the information processing system 1 to the information processing system 1C may include, instead of the XR glasses 20, an ordinary smartphone or tablet equipped with an imaging device.
  • These HMDs, MR glasses, smartphones, and tablets are examples of display devices.
  • the storage devices 12 to 12C, 22, and 32 to 32A are described above as examples of ROM and RAM, but they may be flexible disks, magneto-optical disks (e.g., compact discs, digital versatile discs, Blu-ray discs), smart cards, flash memory devices (e.g., cards, sticks, key drives), CD-ROMs (Compact Disc-ROM), registers, removable disks, hard disks, floppy disks, magnetic strips, databases, servers, or other suitable storage media.
  • the program may be transmitted from the communication network NET via an electric communication line.
  • the information, signals, etc. described may be represented using any of a variety of different technologies.
  • data, instructions, commands, information, signals, bits, symbols, chips, etc. may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
  • input/output information and the like may be stored in a specific location (for example, memory), or may be managed using a management table. Input/output information and the like can be overwritten, updated, or appended. The output information and the like may be deleted. The entered information and the like may be transmitted to another device.
  • the determination may be made by a value (0 or 1) represented using 1 bit, or by a true/false value (Boolean: true or false). Alternatively, it may be performed by numerical comparison (for example, comparison with a predetermined value).
  • each function illustrated in FIGS. 1 to 20 is implemented by any combination of at least one of hardware and software.
  • the method of realizing each functional block is not particularly limited. That is, each functional block may be implemented using one device that is physically or logically coupled, or using two or more physically or logically separated devices that are connected directly or indirectly (for example, by wire or wirelessly).
  • a functional block may be implemented by combining software in the one device or the plurality of devices.
  • software, instructions, information, etc. may be transmitted and received via a transmission medium.
  • when software is transmitted from a website, server, or other remote source using at least one of wired technology (coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), etc.) and wireless technology (infrared, microwave, etc.), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
  • "system" and "network" are used interchangeably.
  • information, parameters, etc. described in this disclosure may be expressed using absolute values, may be expressed using relative values from a predetermined value, or may be expressed using other corresponding information.
  • the terminal device 10 to terminal device 10C and the server 30 to server 30A may be mobile stations (MS).
  • a mobile station may also be called, by those skilled in the art, a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term. Also, in the present disclosure, terms such as "mobile station", "user terminal", "user equipment (UE)", and "terminal" may be used interchangeably.
  • the terms "connected" and "coupled" mean any direct or indirect connection or coupling between two or more elements, including the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to each other. Couplings or connections between elements may be physical, logical, or a combination thereof. For example, "connection" may be replaced with "access".
  • two elements may be considered to be "connected" or "coupled" to each other using at least one of one or more wires, cables, and printed electrical connections, and, as some non-limiting and non-exhaustive examples, using electromagnetic energy having wavelengths in the radio frequency domain, the microwave domain, and the optical (both visible and invisible) domain.
  • the phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” means both “based only on” and “based at least on.”
  • "judgment" and "determination" as used in this disclosure may encompass a wide variety of actions.
  • "judgment" and "determination" may include, for example, regarding judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (e.g., looking up in a table, a database, or another data structure) as having "judged" or "determined".
  • "judgment" and "determination" may include regarding receiving (e.g., receiving information), transmitting (e.g., transmitting information), input, output, or accessing (e.g., accessing data in a memory) as having "judged" or "determined".
  • "judgment" and "determination" may include regarding resolving, selecting, choosing, establishing, comparing, and the like as having "judged" or "determined".
  • "judgment" and "determination" may include regarding some action as having "judged" or "determined".
  • "judgment (decision)" may be replaced with "assuming", "expecting", "considering", and the like.
  • the term "A and B are different” may mean “A and B are different from each other.” The term may also mean that "A and B are different from C”. Terms such as “separate,” “coupled,” etc. may also be interpreted in the same manner as “different.”
  • notification of predetermined information is not limited to explicit notification, and may be performed implicitly (for example, by not notifying the predetermined information).
  • reception unit, 116... determination unit, 117... speech recognition unit, 118... message generation unit, 311, 311A... acquisition unit, 312, 312A... output unit, 313... message management unit, 314... determination unit, 315... extraction unit, MO1... message object, PR1 to PR3A... control programs, SO, SO1 to SO10... individual objects, U, U1 to U2... users, VO, VO1 to VO5... virtual objects

Abstract

This display control device comprises an acquiring unit for acquiring a plurality of messages, a generating unit for generating a plurality of individual objects corresponding on a one-to-one basis to the plurality of messages, and a display control unit for causing a display device to display a virtual object, which is a collection of the plurality of individual objects, wherein the display control unit increases a size of the virtual object and increases a distance from a center of a virtual space to a center of the virtual object in accordance with an increase in the number of the plurality of messages.

Description

Display control device and server
The present invention relates to a display control device and a server. More particularly, the present invention relates to a display control device and a server that cause a display device to display a virtual object corresponding to a message.
By means of XR technology, including VR (Virtual Reality) technology and AR (Augmented Reality) technology, a message indicated by a virtual object is displayed in the virtual space displayed on the XR glasses that the user wears on his or her head.
For example, Patent Literature 1 discloses a method and apparatus for an interactive virtual environment for communication. Specifically, Patent Literature 1 discloses a technique of displaying a virtual object representing a "doodle message" in a virtual space in which information can be exchanged between users.
Japanese Patent Publication No. 2010-535363
When a number of virtual objects corresponding to the number of messages is displayed in the virtual space, the number of virtual objects increases as the number of messages increases. Therefore, the virtual space displayed on the XR glasses worn on the user's head may be filled with a plurality of virtual objects. In other words, the conventional technology has a problem that the user's view is obstructed and convenience is lowered.
Accordingly, it is an object of the present invention to provide a display control device capable of suppressing deterioration in user convenience even when the number of messages increases in a case where virtual objects corresponding to the number of messages are displayed in a virtual space.
A display control device according to a preferred aspect of the present invention is a display control device for displaying a virtual space including a virtual object on a display device worn on the head of a user, the display control device comprising: an acquisition unit configured to acquire a plurality of messages; a generation unit configured to generate a plurality of individual objects corresponding to the plurality of messages on a one-to-one basis; and a display control unit configured to cause the display device to display a virtual object that is a collection of the plurality of individual objects, wherein, when the number of the plurality of messages is a first number, the display control unit sets the size of the virtual object to a first size and sets the distance from the center of the virtual space to the center of the virtual object to a first distance, and, when the number of the plurality of messages is a second number larger than the first number, the display control unit sets the size of the virtual object to a second size larger than the first size and sets the distance from the center of the virtual space to the center of the virtual object to a second distance longer than the first distance.
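The first-number/second-number rule above amounts to a monotonic mapping from the message count to the size of the virtual object and to its distance from the center of the virtual space; the sketch below assumes a linear mapping with invented constants.

    def layout_virtual_object(message_count,
                              base_size=0.2, size_step=0.05,
                              base_distance=1.0, distance_step=0.1):
        """A second, larger message count yields a second, larger size and a
        second, longer center-to-center distance; constants are illustrative."""
        size = base_size + size_step * message_count
        distance = base_distance + distance_step * message_count
        return size, distance

    for n in (3, 12):  # a first number and a larger second number
        print(n, layout_virtual_object(n))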
According to the present invention, when virtual objects corresponding to the number of messages are displayed in the virtual space, deterioration in the user's convenience can be suppressed even if the number of messages increases.
FIG. 1 is a diagram showing the overall configuration of the information processing system 1 according to the first embodiment.
FIG. 2 is a perspective view showing the appearance of the XR glasses 20 according to the first embodiment.
FIG. 3 is a block diagram showing a configuration example of the XR glasses 20 according to the first embodiment.
FIG. 4 is a block diagram showing a configuration example of the terminal device 10 according to the first embodiment.
FIG. 5 is an explanatory diagram showing an example of operations of the generation unit 113 and the display control unit 114.
FIG. 6 is an explanatory diagram showing an example of operations of the generation unit 113 and the display control unit 114.
FIG. 7 is a diagram showing an example of how the plurality of individual objects SO1 to SO10 are aligned.
FIG. 8 is a block diagram showing a configuration example of the server 30.
FIG. 9 is a table showing an example of the message database MD.
FIG. 10 is a flowchart showing the operation of the terminal device 10 according to the first embodiment.
FIG. 11 is a block diagram showing a configuration example of the terminal device 10A.
FIG. 12 is an explanatory diagram of an operation example of the display control unit 114A and the reception unit 115.
FIG. 13 is an explanatory diagram of an operation example of the display control unit 114A and the reception unit 115.
FIG. 14 is a flowchart showing the operation of the terminal device 10A according to the second embodiment.
FIG. 15 is a block diagram showing a configuration example of the terminal device 10B.
FIG. 16 is a block diagram showing a configuration example of the server 30A.
FIG. 17 is a table showing an example of the location information database LD.
FIG. 18 is an explanatory diagram showing an example of operations of the determination unit 314, the extraction unit 315, and the output unit 312A.
FIG. 19 is a flowchart showing the operation of the server 30A.
FIG. 20 is an explanatory diagram of the virtual object VO5 generated when the terminal device 10B and the server 30A are combined.
1: First Embodiment
Hereinafter, the configuration of the information processing system 1 including the terminal device 10 as a display control device according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 10.
1-1: Configuration of First Embodiment
1-1-1: Overall Configuration
FIG. 1 is a diagram showing the overall configuration of the information processing system 1 according to the first embodiment of the present invention. The information processing system 1 is a system that uses XR technology to provide a virtual space to a user U1 wearing the XR glasses 20, which will be described later.
The information processing system 1 includes the terminal device 10, the XR glasses 20, and the server 30. The terminal device 10 is an example of a display control device. In the information processing system 1, the terminal device 10 and the server 30 are communicably connected to each other via the communication network NET. Also, the terminal device 10 and the XR glasses 20 are connected so as to be able to communicate with each other. In the following description, when the terminal device 10 and the XR glasses 20 used by each user are distinguished, the suffix "-X" is appended to the reference numerals, where X is an arbitrary integer of 1 or more; the same applies to the constituent elements of the terminal device 10 and the XR glasses 20. In FIG. 1, the terminal device 10-1 and the XR glasses 20 are connected so as to be able to communicate with each other. Although two terminal devices 10 and one pair of XR glasses 20 are shown in FIG. 1, these numbers are merely an example, and the information processing system 1 may include any number of terminal devices 10 and XR glasses 20.
In FIG. 1, it is assumed that the user U1 uses a set of the terminal device 10-1 and the XR glasses 20. In the information processing system 1 shown in FIG. 1, the XR glasses 20 display a plurality of individual objects, which will be described later, corresponding to messages addressed to the user U1. The messages may include a message transmitted from the terminal device 10-1 to the terminal device 10-2. The messages may also include a message sent from another terminal device (not shown in FIG. 1) to the terminal device 10-1. Furthermore, the messages may include a message generated by the terminal device 10-1 itself.
The server 30 provides various data and cloud services to the terminal device 10 via the communication network NET.
The terminal device 10-1 causes the XR glasses 20 worn on the user's head to display virtual objects placed in the virtual space. The virtual space is, for example, a celestial-sphere space. The virtual objects are, for example, virtual objects representing data such as still images, moving images, 3DCG models, HTML files, and text files, and virtual objects representing applications. Examples of text files include memos, source code, diaries, and recipes. Examples of applications include browsers, applications for using SNS, and applications for generating document files. The terminal device 10 is preferably a mobile terminal device such as a smartphone or a tablet. The terminal device 10-1 is an example of a display control device.
The terminal device 10-2 is a device for the user U2 to send a message to the user U1. The terminal device 10-2 may display virtual objects placed in the virtual space on the display 14, which will be described later, or on XR glasses (not shown) connected to the terminal device 10-2. In particular, when the terminal device 10-2 is a device that displays virtual objects, the configuration of the terminal device 10-2 basically includes the same configuration as the terminal device 10-1. Like the terminal device 10-1, the terminal device 10-2 is preferably a mobile terminal device such as a smartphone or a tablet.
The XR glasses 20 are a see-through wearable display worn on the head of the user U1. Under the control of the terminal device 10-1, the XR glasses 20 display virtual objects on the display panels provided for each of the binocular lenses. The XR glasses 20 are an example of a display device. An example in which the XR glasses 20 are AR glasses will be described below; however, this is merely an example, and the XR glasses 20 may be VR glasses or MR (Mixed Reality) glasses.
1-1-2: Configuration of XR Glasses
FIG. 2 is a perspective view showing the appearance of the XR glasses 20. As shown in FIG. 2, the XR glasses 20 have temples 91 and 92, a bridge 93, frames 94 and 95, and lenses 41L and 41R, like general eyeglasses.
The imaging device 26 is provided on the bridge 93. The imaging device 26 images the outside world and outputs imaging information indicating the captured image.
Each of the lenses 41L and 41R has a half mirror. The frame 94 is provided with a liquid crystal panel or an organic EL panel for the left eye (hereinafter collectively referred to as a display panel) and an optical member that guides the light emitted from the display panel for the left eye to the lens 41L. The half mirror provided in the lens 41L transmits external light and guides it to the left eye, and reflects the light guided by the optical member so that it enters the left eye. The frame 95 is provided with a display panel for the right eye and an optical member that guides the light emitted from the display panel for the right eye to the lens 41R. The half mirror provided in the lens 41R transmits external light and guides it to the right eye, and reflects the light guided by the optical member so that it enters the right eye.
The display 28, which will be described later, includes the lens 41L, the display panel for the left eye, and the optical member for the left eye, as well as the lens 41R, the display panel for the right eye, and the optical member for the right eye.
With the above configuration, the user U1 can observe the image displayed by the display panel in a see-through state superimposed on the appearance of the outside world. Further, in the XR glasses 20, of binocular images with parallax, the image for the left eye is displayed on the display panel for the left eye and the image for the right eye is displayed on the display panel for the right eye, so that the user U1 can perceive the displayed image as if it had depth and a stereoscopic effect.
FIG. 3 is a block diagram showing a configuration example of the XR glasses 20. The XR glasses 20 include a processing device 21, a storage device 22, a line-of-sight detection device 23, a GPS device 24, a motion detection device 25, an imaging device 26, a communication device 27, and a display 28. The elements of the XR glasses 20 are interconnected by one or more buses for communicating information. The term "device" in this specification may be replaced with another term such as circuit, device, or unit.
The processing device 21 is a processor that controls the XR glasses 20 as a whole. The processing device 21 is configured using, for example, one or more chips. The processing device 21 is configured using, for example, a central processing unit (CPU) including an interface with peripheral devices, an arithmetic device, registers, and the like. Some or all of the functions of the processing device 21 may be realized by hardware such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), and an FPGA (Field Programmable Gate Array). The processing device 21 executes various processes in parallel or sequentially.
The storage device 22 is a recording medium that can be read from and written to by the processing device 21. The storage device 22 stores a plurality of programs including the control program PR1 executed by the processing device 21.
The line-of-sight detection device 23 detects the line of sight of the user U1 and generates line-of-sight information indicating the detection result. Any method may be used for this detection; for example, the line-of-sight detection device 23 may detect the line-of-sight information based on the position of the inner corner of the eye and the position of the iris. The line-of-sight information indicates the direction of the line of sight of the user U1. The line-of-sight detection device 23 supplies the line-of-sight information to the processing device 21, which will be described later. The line-of-sight information supplied to the processing device 21 is transmitted to the terminal device 10 via the communication device 27.
The GPS device 24 receives radio waves from a plurality of satellites and generates position information from the received radio waves. The position information indicates the position of the XR glasses 20 and may be in any format as long as the position can be specified; for example, it indicates the latitude and longitude of the XR glasses 20. As an example, the position information is obtained from the GPS device 24, but the XR glasses 20 may acquire the position information by any method. The acquired position information is supplied to the processing device 21 and transmitted to the terminal device 10 via the communication device 27.
The motion detection device 25 detects the motion of the XR glasses 20. The motion detection device 25 corresponds to an inertial sensor such as an acceleration sensor that detects acceleration and a gyro sensor that detects angular acceleration. The acceleration sensor detects acceleration along the orthogonal X, Y, and Z axes, and the gyro sensor detects angular acceleration about the X, Y, and Z axes as central axes of rotation. The motion detection device 25 can generate posture information indicating the posture of the XR glasses 20 based on the output information of the gyro sensor. The motion information includes acceleration data indicating the three-axis accelerations and angular acceleration data indicating the three-axis angular accelerations. The motion detection device 25 supplies the posture information indicating the posture of the XR glasses 20 and the motion information related to the motion of the XR glasses 20 to the processing device 21. The posture information and the motion information supplied to the processing device 21 are transmitted to the terminal device 10 via the communication device 27.
The imaging device 26 outputs imaging information obtained by imaging the outside world. The imaging device 26 includes, for example, a lens, an imaging element, an amplifier, and an AD converter. The light condensed through the lens is converted by the imaging element into an imaging signal, which is an analog signal. The amplifier amplifies the imaging signal and outputs it to the AD converter. The AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal. The converted imaging information is supplied to the processing device 21 and transmitted to the terminal device 10 via the communication device 27.
The communication device 27 is hardware serving as a transmission/reception device for communicating with other devices. The communication device 27 is also called, for example, a network device, a network controller, a network card, or a communication module. The communication device 27 may include a connector for wired connection and an interface circuit corresponding to the connector, and may also include a wireless communication interface. Products conforming to wired LAN, IEEE 1394, and USB are examples of connectors and interface circuits for wired connection, and products conforming to wireless LAN, Bluetooth (registered trademark), and the like are examples of wireless communication interfaces.
The display 28 is a device that displays images. The display 28 displays various images under the control of the processing device 21. As described above, the display 28 includes the lens 41L, the display panel for the left eye, and the optical member for the left eye, as well as the lens 41R, the display panel for the right eye, and the optical member for the right eye. Various display panels, such as liquid crystal display panels and organic EL display panels, are suitably used as the display panels.
The processing device 21 functions as an acquisition unit 211 and a display control unit 212 by, for example, reading the control program PR1 from the storage device 22 and executing it.
The acquisition unit 211 acquires, from the terminal device 10-1, image information indicating an image to be displayed on the XR glasses 20.
The acquisition unit 211 also acquires the line-of-sight information supplied from the line-of-sight detection device 23, the position information supplied from the GPS device 24, the posture information and motion information supplied from the motion detection device 25, and the imaging information supplied from the imaging device 26. The acquisition unit 211 then supplies the acquired line-of-sight information, position information, posture information, motion information, and imaging information to the communication device 27.
Based on the image information acquired from the terminal device 10-1 by the acquisition unit 211, the display control unit 212 causes the display 28 to display the image indicated by the image information.
1-1-3: Configuration of Terminal Device
FIG. 4 is a block diagram showing a configuration example of the terminal device 10. The terminal device 10 includes a processing device 11, a storage device 12, a communication device 13, a display 14, an input device 15, and an inertial sensor 16. The elements of the terminal device 10 are interconnected by one or more buses for communicating information. In the following, the configuration of the terminal device 10-1 will basically be described as the configuration of the terminal device 10.
 処理装置11は、端末装置10の全体を制御するプロセッサである。また、処理装置11は、例えば、単数又は複数のチップを用いて構成される。処理装置11は、例えば、周辺装置とのインタフェース、演算装置及びレジスタ等を含む中央処理装置(CPU)を用いて構成される。なお、処理装置11が有する機能の一部又は全部を、DSP、ASIC、PLD、及びFPGA等のハードウェアによって実現してもよい。処理装置11は、各種の処理を並列的又は逐次的に実行する。 The processing device 11 is a processor that controls the terminal device 10 as a whole. Also, the processing device 11 is configured using, for example, a single chip or a plurality of chips. The processing unit 11 is configured using, for example, a central processing unit (CPU) including interfaces with peripheral devices, arithmetic units, registers, and the like. A part or all of the functions of the processing device 11 may be implemented by hardware such as DSP, ASIC, PLD, and FPGA. The processing device 11 executes various processes in parallel or sequentially.
 記憶装置12は、処理装置11が読取及び書込が可能な記録媒体である。また、記憶装置12は、処理装置11が実行する制御プログラムPR2を含む複数のプログラムを記憶する。 The storage device 12 is a recording medium readable and writable by the processing device 11 . The storage device 12 also stores a plurality of programs including the control program PR2 executed by the processing device 11 .
 通信装置13は、他の装置と通信を行うための、送受信デバイスとしてのハードウェアである。通信装置13は、例えば、ネットワークデバイス、ネットワークコントローラ、ネットワークカード、及び通信モジュール等とも呼ばれる。通信装置13は、有線接続用のコネクターを備え、上記コネクターに対応するインタフェース回路を備えていてもよい。また、通信装置13は、無線通信インタフェースを備えていてもよい。有線接続用のコネクター及びインタフェース回路としては有線LAN、IEEE1394、及びUSBに準拠した製品が挙げられる。また、無線通信インタフェースとしては無線LAN及びBluetooth(登録商標)等に準拠した製品が挙げられる。 The communication device 13 is hardware as a transmission/reception device for communicating with other devices. The communication device 13 is also called a network device, a network controller, a network card, a communication module, or the like, for example. The communication device 13 may include a connector for wired connection and an interface circuit corresponding to the connector. Further, the communication device 13 may have a wireless communication interface. Products conforming to wired LAN, IEEE1394, and USB are examples of connectors and interface circuits for wired connection. Also, as a wireless communication interface, there are products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
 ディスプレイ14は、画像及び文字情報を表示するデバイスである。ディスプレイ14は、処理装置11の制御のもとで各種の画像を表示する。例えば、液晶表示パネル及び有機EL(Electro Luminescence)表示パネル等の各種の表示パネルがディスプレイ14として好適に利用される。なお、端末装置10にXRグラス20が接続される場合、ディスプレイ14は、必須の構成要素としなくてもよい。この場合、当該XRグラス20が、ディスプレイ14と同一の機能を更に有することとなる。 The display 14 is a device that displays images and character information. The display 14 displays various images under the control of the processing device 11 . For example, various display panels such as a liquid crystal display panel and an organic EL (Electro Luminescence) display panel are preferably used as the display 14 . Note that when the XR glasses 20 are connected to the terminal device 10, the display 14 may not be an essential component. In this case, the XR glasses 20 further have the same function as the display 14 .
 入力装置15は、XRグラス20を頭部に装着したユーザU1からの操作を受け付ける。例えば、入力装置15は、キーボード、タッチパッド、タッチパネル又はマウス等のポインティングデバイスを含んで構成される。ここで、入力装置15は、タッチパネルを含んで構成される場合、ディスプレイ14を兼ねてもよい。 The input device 15 accepts operations from the user U1 who wears the XR glasses 20 on his head. For example, the input device 15 includes a pointing device such as a keyboard, touch pad, touch panel, or mouse. Here, when the input device 15 includes a touch panel, the input device 15 may also serve as the display 14 .
 慣性センサ16は、慣性力を検出するセンサである。慣性センサ16は、例えば、加速度センサ、角速度センサ、及びジャイロセンサのうち、1以上のセンサを含む。処理装置11は、慣性センサ16の出力情報に基づいて、端末装置10の姿勢を検出する。更に、処理装置11は、端末装置10の姿勢に基づいて、天球型の仮想空間VSにおいて、仮想オブジェクトVOの選択、文字の入力、及び指示の入力を受け付ける。例えば、ユーザU1が端末装置10の中心軸を仮想空間VSの所定領域に向けた状態で、入力装置15を操作することによって、所定領域に配置される仮想オブジェクトVOが選択される。入力装置15に対するユーザU1の操作は、例えば、ダブルタップである。このようにユーザU1は端末装置10を操作することによって、端末装置10の入力装置15を見なくても仮想オブジェクトVOを選択できる。 The inertial sensor 16 is a sensor that detects inertial force. The inertial sensor 16 includes, for example, one or more of an acceleration sensor, an angular velocity sensor, and a gyro sensor. The processing device 11 detects the orientation of the terminal device 10 based on the output information from the inertial sensor 16 . Further, the processing device 11 receives selection of the virtual object VO, input of characters, and input of instructions in the celestial sphere virtual space VS based on the orientation of the terminal device 10 . For example, the user U1 directs the central axis of the terminal device 10 toward a predetermined area of the virtual space VS, and operates the input device 15 to select the virtual object VO arranged in the predetermined area. The user U1's operation on the input device 15 is, for example, a double tap. By operating the terminal device 10 in this way, the user U1 can select the virtual object VO without looking at the input device 15 of the terminal device 10 .
 Note that when the XR glasses 20 are not connected to the terminal device 10, the terminal device 10 preferably includes a GPS device similar to the GPS device 24 provided in the XR glasses 20.
 The processing device 11 functions as an output unit 111, an acquisition unit 112, a generation unit 113, and a display control unit 114 by reading the control program PR2 from the storage device 12 and executing it.
 The output unit 111 outputs a message created by the user U1 using the input device 15 to the server 30. The message specifies the sender of the message, the recipient of the message, and the content of the message. The content of the message includes at least one of text and an image.
 Further, when the terminal device 10 is the terminal device 10-1 used by the user U1, who is the first user, and a message addressed to the user U1 acquired by the acquisition unit 112-1 described later is read by the user U1, the output unit 111 transmits information indicating that the message has been read to the server 30.
 The acquisition unit 112 acquires a plurality of messages addressed to the user U from the server 30. When the terminal device 10 is the terminal device 10-1 used by the user U1, who is the first user, the acquisition unit 112-1 acquires a plurality of messages addressed to the user U1 from the server 30.
 The generation unit 113 generates a plurality of individual objects corresponding one-to-one to the plurality of messages acquired by the acquisition unit 112.
 The display control unit 114 causes the XR glasses 20 serving as a display device to display a virtual object that is an aggregate of the plurality of individual objects generated by the generation unit 113.
 As a result, the user U1 can visually grasp the number of messages.
 FIGS. 5 and 6 are explanatory diagrams showing an example of the operation of the generation unit 113 and the display control unit 114. In the following description, mutually orthogonal X, Y, and Z axes are assumed in the virtual space VS. As an example, the X axis extends in the front-rear direction of the user U1; viewed from the user U1, the forward direction along the X axis is the X1 direction and the backward direction is the X2 direction. The Y axis extends in the left-right direction of the user U1; viewed from the user U1, the right direction along the Y axis is the Y1 direction and the left direction is the Y2 direction. The X axis and the Y axis form a horizontal plane. The Z axis is orthogonal to the XY plane and extends in the up-down direction of the user U1; viewed from the user U1, the downward direction along the Z axis is the Z1 direction and the upward direction is the Z2 direction.
 Referring to FIG. 5, assume that the user U1 is located at coordinates (x, y, z) = (x_u1, y_u1, z_u1) in the virtual space VS. The coordinates of the user U1 are assumed to be at the approximate center of the virtual space VS. The "approximate center" is a position that includes, in addition to the center of the virtual space VS, a surrounding region within a range that does not hinder the operation of the information processing system 1 according to the present invention. The coordinates of the user U1 in the virtual space VS correspond to the position of the user U1 in the real space, and the position of the user U1 in the real space is indicated by position information generated in the XR glasses 20 worn on the head of the user U1.
 In the example shown in FIG. 5, the virtual object VO1 displayed by the display control unit 114 at coordinates (x, y, z) = (x_1, y_1, z_1) is an aggregate of individual objects SO1 to SO3, each corresponding one-to-one to a message addressed to the user U1. In FIG. 5, the virtual object VO1 and the individual objects SO1 to SO3 are, as an example, spheres. The coordinates of the individual object SO1, (x, y, z) = (x_2, y_2, z_2), the coordinates of the individual object SO2, (x, y, z) = (x_3, y_3, z_3), and the coordinates of the individual object SO3, (x, y, z) = (x_4, y_4, z_4), are all coordinates within the virtual object VO1. Each sphere serving as the individual objects SO1 to SO3 lies entirely inside the sphere serving as the virtual object VO1. That is, the virtual object VO1 contains the same number of individual objects SO1 to SO3 as the number of messages addressed to the user U1.
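 As an illustration of this containment constraint, the following sketch samples positions for the individual objects so that each small sphere lies entirely inside the parent sphere. The sampling rule, function name, and parameters are assumptions for illustration; the text does not specify how the positions are chosen.

```python
import random

def points_inside_sphere(n, center, radius, small_radius):
    """Sample n positions for individual objects SO so that each small
    sphere of radius small_radius lies entirely inside the parent sphere
    (center, radius), as SO1-SO3 lie inside VO1 in FIG. 5."""
    cx, cy, cz = center
    rmax = radius - small_radius  # keep each small sphere fully inside
    pts = []
    while len(pts) < n:
        # rejection sampling: uniform in the cube, keep points in the ball
        x, y, z = (random.uniform(-rmax, rmax) for _ in range(3))
        if x * x + y * y + z * z <= rmax * rmax:
            pts.append((cx + x, cy + y, cz + z))
    return pts

# Three individual objects inside a parent sphere of radius 0.5 at (x_1, y_1, z_1)
print(points_inside_sphere(3, center=(1.0, 0.0, 1.5), radius=0.5, small_radius=0.1))
```

 Note that this sketch does not prevent the small spheres from overlapping one another; it only guarantees that each lies within the parent sphere.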
 Note that the image information used by the display control unit 114 to display the individual objects SO1 to SO3 on the XR glasses 20 may be information stored in the storage device 12, or may be information acquired from the server 30 by the acquisition unit 112.
 The individual objects SO1 to SO3 may correspond one-to-one to all messages addressed to the user U1. Alternatively, they may correspond one-to-one only to the unread messages among all messages addressed to the user U1. Alternatively, one individual object may correspond to a plurality of read messages.
 In FIG. 5, the viewing angle at which the user U1 visually recognizes the virtual object VO1 is θ1.
 Returning to FIG. 4, the display control unit 114 further increases the size of the virtual object VO1 as the number of messages acquired by the acquisition unit 112 increases. That is, the display control unit 114 enlarges the virtual object VO1 as the number of individual objects SO contained in it grows. The display control unit 114 also arranges the virtual object VO1 at a position farther from the user U1 in the virtual space VS as the number of messages increases; in other words, it lengthens the distance from the center of the virtual space VS to the center of the virtual object VO1. Specifically, when the number of messages is a first number, the display control unit 114 sets the size of the virtual object VO1 to a first size and sets the distance from the center of the virtual space VS to the center of the virtual object VO1 to a first distance. When the number of messages is a second number larger than the first number, the display control unit 114 sets the size of the virtual object VO1 to a second size larger than the first size and sets the distance from the center of the virtual space VS to the center of the virtual object VO1 to a second distance longer than the first distance.
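 The text fixes only the monotone relationship (more messages, larger object, longer distance), not a formula. The following is a minimal sketch under that assumption, with cube-root growth chosen so that the volume available per individual object stays roughly constant; all names and constants are illustrative.

```python
def layout_virtual_object(message_count: int,
                          base_radius: float = 0.3,
                          base_distance: float = 1.0) -> tuple[float, float]:
    """Return (radius, distance) for the aggregate virtual object VO.

    More messages -> a larger object placed farther from the user,
    i.e. second size > first size and second distance > first distance.
    The cube-root growth rule is an assumption, not taken from the text.
    """
    if message_count < 1:
        raise ValueError("at least one message is required")
    scale = message_count ** (1.0 / 3.0)
    radius = base_radius * scale      # grows with the message count
    distance = base_distance * scale  # grows with the message count
    return radius, distance

# Three messages (FIG. 5) vs. six messages (FIG. 6): both values grow.
print(layout_virtual_object(3))
print(layout_virtual_object(6))
```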
 In the example shown in FIG. 6, the virtual object VO2 displayed at coordinates (x, y, z) = (x_5, y_5, z_5) is an aggregate of individual objects SO1 to SO6, each corresponding one-to-one to a message addressed to the user U1. In FIG. 6, the shapes of the virtual object VO2 and the individual objects SO1 to SO6 are, as an example, spheres. The virtual object VO2 contains the same number of individual objects SO1 to SO6 as the number of messages addressed to the user U1.
 Comparing the virtual object VO1 shown in FIG. 5 with the virtual object VO2 shown in FIG. 6, the virtual object VO2 contains more individual objects SO than the virtual object VO1, and the virtual object VO2 is larger than the virtual object VO1. Furthermore, the coordinates (x, y, z) = (x_5, y_5, z_5) of the virtual object VO2 are located farther than the coordinates (x, y, z) = (x_1, y_1, z_1) of the virtual object VO1, as seen from the coordinates (x, y, z) = (x_u1, y_u1, z_u1) of the user U1 at the approximate center of the virtual space VS.
 In FIG. 6, the viewing angle at which the user U1 visually recognizes the virtual object VO2 is θ2. Comparing FIG. 5 and FIG. 6, the viewing angle θ1 at which the user visually recognizes the virtual object VO1 in FIG. 5 and the viewing angle θ2 at which the user visually recognizes the virtual object VO2 in FIG. 6 are preferably equal. When θ1 and θ2 are equal, the proportion of the user U1's field of view occupied by the virtual object VO1 equals the proportion occupied by the virtual object VO2. Even if the number of individual objects SO corresponding one-to-one to the messages addressed to the user U1 increases, the area of the virtual object VO2 in the field of view of the user U1 remains equal to the area of the virtual object VO1, so the user U1 does not feel that the field of view has narrowed. However, as the number of individual objects SO increases, the area of each individual object SO in the field of view of the user U1 becomes smaller. Note that the viewing angles θ1 and θ2 need not necessarily be equal.
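 When the viewing angles are kept equal, the radius and the distance of the aggregate object are tied together: a sphere of radius r at distance d subtends the angle θ = 2·arcsin(r/d), so holding θ fixed means holding r/d fixed. A short sketch of this relation follows; the angle value is an arbitrary example, not taken from the text.

```python
import math

def distance_for_constant_angle(radius: float, theta: float) -> float:
    """Distance d at which a sphere of the given radius subtends the
    fixed viewing angle theta (radians): theta = 2 * asin(r / d),
    hence d = r / sin(theta / 2)."""
    return radius / math.sin(theta / 2.0)

theta = math.radians(30.0)
for r in (0.3, 0.6):  # VO1 smaller, VO2 larger
    d = distance_for_constant_angle(r, theta)
    # r / d stays constant, so the object fills the same share of the view
    print(f"radius={r:.2f} -> distance={d:.2f}, r/d={r / d:.3f}")
```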
 For convenience of explanation, different reference signs are used for the virtual object VO1 and the virtual object VO2, but the same virtual object VO1 may grow gradually larger as the number of messages addressed to the user U1, that is, the number of individual objects SO, increases.
 The display control unit 114 also changes the display mode of the virtual object VO1 according to the number of messages addressed to the user U1. For example, as the number of individual objects SO contained in the virtual object VO1 increases, the display control unit 114 may change at least one of the color of the virtual object VO1 and the color of the individual objects SO contained in it. Alternatively, the display control unit 114 may change the shape of the virtual object VO1 as the number of individual objects SO increases. For example, each time the number of individual objects SO reaches a predetermined number, the plurality of individual objects SO in the virtual object VO1 may be aligned so as to represent a specific character. FIG. 7 is a diagram showing an example of such an alignment of individual objects SO1 to SO10. In the example shown in FIG. 7, the individual objects SO1 to SO10 in the virtual object VO3 are aligned to represent the letter "N".
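 A sketch of such a count-dependent display mode follows. The letter "N" layout comes from FIG. 7; the color thresholds and the rule that triggers the letter layout at every multiple of the predetermined number are illustrative assumptions.

```python
def display_mode(count: int, threshold: int = 10) -> dict:
    """Pick a display mode for the aggregate object from the message count.
    Colors and thresholds are hypothetical; the text only says the color,
    shape, or arrangement may change as the count grows."""
    if count < 5:
        color = "white"
    elif count < 10:
        color = "yellow"
    else:
        color = "red"
    # align the individual objects into a letter whenever the count
    # reaches the predetermined number (here: every multiple of it)
    arrange_as_letter = (count % threshold == 0)
    return {"color": color, "letter_layout": "N" if arrange_as_letter else None}

print(display_mode(3))   # {'color': 'white', 'letter_layout': None}
print(display_mode(10))  # {'color': 'red', 'letter_layout': 'N'}
```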
 As a result, when the number of messages increases, the terminal device 10 can more effectively convey to the user U1 that the number of messages has increased.
 The display control unit 114 may also vary the display modes of the individual objects SO according to the transmission source device corresponding to each of the plurality of messages addressed to the user U1. For example, the display control unit 114 may change at least one of the shape and the color of an individual object SO according to the device that sent the corresponding message.
 As a result, the user U1 can distinguish the plurality of individual objects SO according to the transmission source devices of the messages merely by visually recognizing them.
1-1-4: Server Configuration
 FIG. 8 is a block diagram showing a configuration example of the server 30. The server 30 includes a processing device 31, a storage device 32, a communication device 33, a display 34, and an input device 35. The elements of the server 30 are interconnected by one or more buses for communicating information.
 The processing device 31 is a processor that controls the entire server 30 and is configured using, for example, one or more chips. The processing device 31 is configured using, for example, a central processing unit (CPU) including interfaces with peripheral devices, an arithmetic unit, registers, and the like. Some or all of the functions of the processing device 31 may be realized by hardware such as a DSP, an ASIC, a PLD, or an FPGA. The processing device 31 executes various kinds of processing in parallel or sequentially.
 The storage device 32 is a recording medium readable and writable by the processing device 31. The storage device 32 stores a plurality of programs including the control program PR3 executed by the processing device 31. The storage device 32 further stores a message database MD in which information on the messages transmitted and received between the plurality of users U is stored.
 FIG. 9 is a table showing an example of the message database MD. As described later, the acquisition unit 311 provided in the server 30 acquires, from the terminal devices 10, the messages transmitted and received between the users U. More specifically, the acquisition unit 311 acquires information indicating the sender of a message output from a terminal device 10, information indicating the recipient of the message, and information indicating the content of the message, and these pieces of information are stored in the message database MD. The acquisition unit 311 further acquires information indicating that each message has been read by a user U. Based on this information, the message management unit 313 described later attaches, in the message database MD, a flag indicating whether each message has been read. As an example, a flag with the value "0" indicates that the message is unread, while a flag with the value "1" indicates that the message has been read. In the message database MD shown in FIG. 9, as an example, the message ID "1", the sender "U1" of the message with message ID = 1, the recipient "U2", the content of the message, and the flag "0" indicating that the message is unread are stored in association with one another. In the table shown in FIG. 9, "n" is an integer of 2 or more.
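 The following sketch models the message database MD as a simple in-memory structure, with one record per row of FIG. 9 and the read-flag convention described above (0 = unread, 1 = read). The class and method names are illustrative, not taken from the text.

```python
from dataclasses import dataclass

@dataclass
class MessageRecord:
    """One row of the message database MD (FIG. 9)."""
    message_id: int
    sender: str       # e.g. "U1"
    receiver: str     # e.g. "U2"
    content: str
    read_flag: int = 0  # 0: unread, 1: read

class MessageDatabase:
    def __init__(self):
        self._rows: dict[int, MessageRecord] = {}

    def add(self, rec: MessageRecord) -> None:
        self._rows[rec.message_id] = rec

    def for_receiver(self, user: str) -> list[MessageRecord]:
        """Messages the output unit 312 would send to one user's terminal."""
        return [r for r in self._rows.values() if r.receiver == user]

    def mark_read(self, message_id: int) -> None:
        """What the message management unit 313 does on a 'read' report."""
        self._rows[message_id].read_flag = 1

md = MessageDatabase()
md.add(MessageRecord(1, "U1", "U2", "Hello"))
md.mark_read(1)
print(md.for_receiver("U2")[0].read_flag)  # 1
```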
 Returning to FIG. 8, the communication device 33 is hardware serving as a transmitting/receiving device for communicating with other devices. The communication device 33 is also referred to as, for example, a network device, a network controller, a network card, or a communication module. The communication device 33 may include a connector for wired connection and an interface circuit corresponding to the connector, and may also include a wireless communication interface. Examples of connectors and interface circuits for wired connection include products conforming to wired LAN, IEEE 1394, and USB, and examples of wireless communication interfaces include products conforming to wireless LAN, Bluetooth (registered trademark), and the like.
 The display 34 is a device that displays images and character information. The display 34 displays various images under the control of the processing device 31. For example, various display panels such as a liquid crystal display panel and an organic EL display panel are suitably used as the display 34.
 The input device 35 is a device that accepts operations by the administrator of the information processing system 1. For example, the input device 35 includes a keyboard, a touch pad, a touch panel, or a pointing device such as a mouse. When the input device 35 includes a touch panel, the input device 35 may also serve as the display 34.
 The processing device 31 functions as an acquisition unit 311, an output unit 312, and a message management unit 313 by, for example, reading the control program PR3 from the storage device 32 and executing it.
 The acquisition unit 311 acquires various data from the terminal devices 10 via the communication device 33. The data includes, as an example, data indicating the content of an operation on a virtual object VO input to the terminal device 10 by the user U1 wearing the XR glasses 20 on the head.
 In particular, the acquisition unit 311 acquires the messages transmitted and received between the users U. Further, as an example, when a message addressed to the user U1 has been read by the user U1 on the terminal device 10-1, the acquisition unit 311 acquires information indicating that the message has been read.
 The output unit 312 transmits image information indicating images to be displayed on the XR glasses 20 to the terminal devices 10. The image information may be stored in the storage device 32, or may be generated by a generation unit (not shown).
 As an example, the output unit 312 also transmits to the terminal device 10-1 the messages addressed to the user U1 stored in the message database MD. The same applies to the other terminal devices 10.
 The message management unit 313 manages the message database MD. As an example, when the acquisition unit 311 acquires information indicating that a message addressed to the user U1 has been read by the user U1 on the terminal device 10-1, the message management unit 313 changes the flag "0" indicating unread, which is associated with that message, to the flag "1" indicating read.
1-2: Operation of the First Embodiment
 FIG. 10 is a flowchart showing the operation of the terminal device 10 according to the first embodiment, in particular the terminal device 10-1 used by the user U1. The operation of the terminal device 10-1 will be described below with reference to FIG. 10.
 In step S1, the processing device 11-1 functions as the acquisition unit 112-1 and acquires a plurality of messages addressed to the user U1.
 In step S2, the processing device 11-1 functions as the generation unit 113-1 and generates a plurality of individual objects SO corresponding one-to-one to the plurality of messages.
 In step S3, the processing device 11-1 functions as the display control unit 114-1 and causes the XR glasses 20 serving as a display device to display a virtual object VO that is an aggregate of the plurality of individual objects SO. In particular, the processing device 11-1 increases the size of the virtual object VO as the number of messages acquired in step S1 increases, and arranges the virtual object VO at a position farther from the user U1 in the virtual space VS as that number increases. Thereafter, the processing device 11-1 executes the processing of step S1 again.
1-3: Effects of the First Embodiment
 According to the above description, the terminal device 10 serving as a display control device causes the XR glasses 20, a display device worn on the head, to display a virtual space VS including a virtual object VO. The terminal device 10 includes an acquisition unit 112, a generation unit 113, and a display control unit 114. The acquisition unit 112 acquires a plurality of messages. The generation unit 113 generates a plurality of individual objects SO corresponding one-to-one to the plurality of messages. The display control unit 114 causes the XR glasses 20 serving as a display device to display a virtual object VO that is an aggregate of the plurality of individual objects SO. When the number of messages is a first number, the display control unit 114 sets the size of the virtual object VO to a first size and sets the distance from the center of the virtual space VS to the center of the virtual object VO to a first distance. When the number of messages is a second number larger than the first number, the display control unit 114 sets the size of the virtual object VO to a second size larger than the first size and sets the distance from the center of the virtual space VS to the center of the virtual object VO to a second distance longer than the first distance.
 With this configuration, the terminal device 10 can display a virtual object VO corresponding to the number of messages in the virtual space VS while suppressing a decline in user convenience even when the number of messages increases. Specifically, as the number of individual objects SO corresponding to the number of messages increases, the terminal device 10 enlarges the virtual object VO, the aggregate of those individual objects SO, and displays it at a position farther from the user U1. Because of this configuration, even if the number of individual objects SO increases, their visibility at a glance is maintained on the display 28 of the XR glasses 20 worn on the head of the user U1. As a result, a decline in user convenience is suppressed.
 Also according to the above description, the display control unit 114 changes the display mode of the virtual object VO according to the number of messages.
 With this configuration, when the number of messages increases, the terminal device 10 can more effectively convey to the user U1 that the number of messages has increased.
 Also according to the above description, the display control unit 114 makes the display modes of the plurality of individual objects SO differ from one another according to the transmission source device corresponding to each of the plurality of messages.
 With this configuration, the user U1 can distinguish the plurality of individual objects SO according to the transmission source devices of the messages merely by visually recognizing them.
2: Second Embodiment
 The configuration of an information processing system 1A including a terminal device 10A as a display control device according to a second embodiment of the present invention will be described below with reference to FIGS. 11 to 14.
2-1: Configuration of the Second Embodiment
2-1-1: Overall Configuration
 The information processing system 1A according to the second embodiment of the present invention differs from the information processing system 1 according to the first embodiment in that it includes a terminal device 10A instead of the terminal device 10. In other respects, the overall configuration of the information processing system 1A is the same as the overall configuration of the information processing system 1 according to the first embodiment shown in FIG. 1, so its illustration and description are omitted.
2-1-2: Configuration of the Terminal Device
 FIG. 11 is a block diagram showing a configuration example of the terminal device 10A. Unlike the terminal device 10, the terminal device 10A includes a processing device 11A instead of the processing device 11 and a storage device 12A instead of the storage device 12.
 Unlike the storage device 12, the storage device 12A stores a control program PR2A instead of the control program PR2.
 The processing device 11A includes a display control unit 114A instead of the display control unit 114 provided in the processing device 11. In addition to the components provided in the processing device 11, the processing device 11A includes a reception unit 115 and a determination unit 116.
 The reception unit 115 accepts an operation on the virtual object VO from the user U1. For example, the reception unit 115 may accept an operation by the user U1 using the input device 15, or may accept an operation in which the user U1 touches the virtual object VO in the virtual space VS. These operations, by which the virtual object VO is selected in the virtual space VS, are examples of the first operation.
 The reception unit 115 may also accept an operation in which the user U1 selects one message from a list of messages displayed in the virtual space VS, as described later. These messages correspond one-to-one to the individual objects SO, so when the user U1 selects one message from the list, one individual object SO is selected. Alternatively, the reception unit 115 may accept an operation in which the user U1 touches one individual object SO in the virtual space VS. These operations, by which one individual object SO is selected in the virtual space VS, are examples of the second operation.
 The display control unit 114A has the same functions as the display control unit 114. In addition, when the operation by the user U1 is the first operation described above, the display control unit 114A displays a list of the messages in the virtual space VS. When the operation by the user U1 is the second operation described above, the display control unit 114A displays, in the virtual space VS, the content of the message corresponding to the one individual object SO designated by that operation.
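 A compact sketch of this dispatch: the first operation (selecting the aggregate object VO) opens the message list, while the second operation (designating one individual object SO) opens that message's content. The operation encoding and the `vs` interface are illustrative stand-ins, not an API from the text.

```python
def handle_operation(op: dict, vs) -> None:
    """Dispatch by the display control unit 114A. `op` is a hypothetical
    event record; `vs` is a hypothetical virtual-space renderer."""
    if op["kind"] == "first":                 # VO selected as a whole
        vs.show_message_list()                # display the list L of titles
    elif op["kind"] == "second":              # one individual object SO chosen
        vs.show_message_content(op["so_id"])  # display message object MO1
```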
 FIGS. 12 and 13 are explanatory diagrams of operation examples of the display control unit 114A and the reception unit 115. In FIG. 12, when the user U1 performs an operation of selecting the virtual object VO2 using the terminal device 10-1, the display control unit 114A displays a list L of messages in the virtual space VS. More specifically, as shown in FIG. 12, the display control unit 114A displays, in the virtual space VS, a list L of the titles of messages A to F. Messages A to F correspond one-to-one to the individual objects SO1 to SO6. In FIG. 12, the list L is displayed to the left of the virtual object VO2 as seen from the user U1, but this display location is merely an example; the list L may be displayed anywhere in the virtual space VS. In particular, the list L is preferably displayed at a position that does not overlap the virtual object VO2 as seen from the user U1. The same applies when the user U1 performs an operation of touching the virtual object VO7 in the virtual space VS.
 As a result, the user U1 can visually recognize, in list form, the plurality of messages corresponding one-to-one to the plurality of individual objects SO contained in the virtual object VO.
 In FIG. 12, when the user U1 performs an operation of selecting one message from the messages shown in the list L, a message object MO1 indicating the content of that message is displayed as shown in FIG. 13. The message object MO1 includes at least one of text and an image. The example shown in FIG. 13 illustrates the case where the user U1 performs an operation of selecting "message F", which corresponds to the individual object SO6, from the messages shown in the list L. The same applies when the user U1 performs an operation of touching the individual object SO6 in the virtual space VS.
 As a result, the user U1 can visually recognize the specific content of each of the messages corresponding one-to-one to the individual objects SO contained in the virtual object VO.
 Returning to FIG. 11, the determination unit 116 determines a degree of importance for each of the plurality of messages. For example, the determination unit 116 may determine the importance by analyzing the content of each message, or based on the transmission source device corresponding to each message, or based on an operation by the user U1 using the input device 15. For example, as shown in FIG. 13, while the content of the message corresponding to one individual object SO is displayed, the user U1 may judge the importance of the message and input the judgment result using the input device 15; the determination unit 116 may then determine the importance based on the input from the input device 15.
 Among the plurality of individual objects SO contained in the virtual object VO, the display control unit 114A displays the individual objects SO corresponding to messages whose importance is equal to or greater than a predetermined value closer to the user U1, that is, closer to the approximate center of the virtual space VS, than the individual objects SO corresponding to messages whose importance is less than the predetermined value. Further, within the virtual object VO, the display control unit 114A may display an individual object SO closer to the user U1, that is, closer to the approximate center of the virtual space VS, the higher the importance of its message. The "predetermined value" here is an example of the "first value".
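 The following is a minimal sketch of this placement rule: individual objects whose message importance is at or above the first value sit near the user at the approximate center of the virtual space, the rest farther away. The importance scale, the positions, and the threshold are illustrative assumptions.

```python
def place_by_importance(objects, near_pos, far_pos, threshold):
    """Place each individual object SO near the user if its message's
    importance is at or above the threshold (the "first value"),
    otherwise farther away. Two fixed positions keep the sketch simple."""
    return [(so["id"], near_pos if so["importance"] >= threshold else far_pos)
            for so in objects]

print(place_by_importance(
    [{"id": "SO1", "importance": 0.9}, {"id": "SO2", "importance": 0.2}],
    near_pos=(0.5, 0.0, 0.0), far_pos=(2.0, 0.0, 0.0), threshold=0.5))
# [('SO1', (0.5, 0.0, 0.0)), ('SO2', (2.0, 0.0, 0.0))]
```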
 As a result, the user U1 can preferentially check the messages that are more important to the user U1.
2-2: Operation of the Second Embodiment
 FIG. 14 is a flowchart showing the operation of the terminal device 10A according to the second embodiment, in particular the terminal device 10A-1 used by the user U1. The operation of the terminal device 10A-1 will be described below with reference to FIG. 14.
 In step S11, the processing device 11A-1 functions as the acquisition unit 112-1 and acquires a plurality of messages addressed to the user U1.
 In step S12, the processing device 11A-1 functions as the generation unit 113-1 and generates a plurality of individual objects SO corresponding one-to-one to the plurality of messages.
 In step S13, the processing device 11A-1 functions as the display control unit 114A-1 and causes the XR glasses 20 serving as a display device to display a virtual object VO that is an aggregate of the plurality of individual objects SO. The processing device 11A-1 increases the size of the virtual object VO as the number of messages acquired in step S11 increases, and arranges the virtual object VO at a position farther from the user U1 in the virtual space VS as that number increases.
 In step S14, the processing device 11A-1 functions as the reception unit 115 and accepts an operation from the user U1. When the operation from the user U1 is the first operation on the virtual object VO, the processing device 11A-1 executes the processing of step S15. When the operation from the user U1 is the second operation on an individual object SO, the processing device 11A-1 executes the processing of step S16.
 In step S15, the processing device 11A-1 functions as the display control unit 114A and displays the list L of messages in the virtual space VS. Thereafter, the processing device 11A-1 executes the processing of step S14.
 In step S16, the processing device 11A-1 functions as the display control unit 114A and displays the content of the message corresponding to the one individual object SO in the virtual space VS. Thereafter, the processing device 11A-1 executes the processing of step S11.
2-3: Effects of the Second Embodiment
 According to the above description, the terminal device 10A serving as a display control device further includes the reception unit 115, which accepts an operation on the virtual object VO. When the operation is the first operation, the display control unit 114 displays the list L of messages in the virtual space VS.
 With this configuration, the user U1 can visually recognize, in list form, the plurality of messages corresponding one-to-one to the plurality of individual objects SO contained in the virtual object VO.
 Also according to the above description, in the terminal device 10A serving as a display control device, when the operation is the second operation designating one individual object SO among the plurality of individual objects SO, the display control unit 114 displays the content of the message corresponding to that individual object SO in the virtual space VS.
 With this configuration, the user U1 can visually recognize the specific content of each of the messages corresponding one-to-one to the individual objects SO contained in the virtual object VO.
 Also according to the above description, the terminal device 10A serving as a display control device further includes the determination unit 116, which determines a degree of importance for each of the plurality of messages. Among the plurality of individual objects SO contained in the virtual object VO, the display control unit 114 displays the individual objects SO corresponding to messages whose importance is equal to or greater than a first value closer to the user U1 than the individual objects SO corresponding to messages whose importance is less than the first value.
 With this configuration, the user U can preferentially check the messages that are more important to the user U.
3: Third Embodiment
 The configuration of an information processing system 1B including a terminal device 10B as a display control device according to a third embodiment of the present invention will be described below with reference to FIG. 15.
3-1: Configuration of the Third Embodiment
3-1-1: Overall Configuration
 The information processing system 1B according to the third embodiment of the present invention differs from the information processing system 1 according to the first embodiment in that it includes a terminal device 10B instead of the terminal device 10. In other respects, the overall configuration of the information processing system 1B is the same as the overall configuration of the information processing system 1 according to the first embodiment shown in FIG. 1, so its illustration and description are omitted.
3-1-2: Configuration of the Terminal Device
 FIG. 15 is a block diagram showing a configuration example of the terminal device 10B. Unlike the terminal device 10, the terminal device 10B includes a processing device 11B instead of the processing device 11 and a storage device 12B instead of the storage device 12. In addition to the components provided in the terminal device 10, the terminal device 10B includes a sound pickup device 17.
 The sound pickup device 17 picks up the voice of the user U1 and converts the picked-up voice into an electrical signal. The sound pickup device 17 is, specifically, a microphone. The electrical signal converted from the voice by the sound pickup device 17 is output to the speech recognition unit 117 described later.
 Unlike the storage device 12, the storage device 12B stores a control program PR2B instead of the control program PR2.
 Unlike the processing device 11, the processing device 11B includes an acquisition unit 112B instead of the acquisition unit 112. In addition to the components provided in the processing device 11, the processing device 11B includes a speech recognition unit 117 and a message generation unit 118.
 The speech recognition unit 117 recognizes the voice picked up by the sound pickup device 17. More specifically, the speech recognition unit 117 generates text by performing speech recognition on the electrical signal acquired from the sound pickup device 17.
 The message generation unit 118 generates a message corresponding to the text generated by the speech recognition unit 117. The message generated by the message generation unit 118 is acquired by the acquisition unit 112B; that is, the plurality of messages acquired by the acquisition unit 112B includes the message generated by the message generation unit 118. In addition to the message generated by the message generation unit 118, the plurality of messages acquired by the acquisition unit 112B may include messages generated by a plurality of users U including the user U1. In other words, the plurality of messages are generated by the terminal device 10B-1 serving as a display control device and by one or more terminal devices 10B connected to the terminal device 10B-1 via the communication network NET.
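 A sketch of the speech-to-message pipeline of this embodiment follows. The `recognizer` object stands in for any speech-to-text engine; the text names the functional units but no concrete API, so the call and field names here are assumptions.

```python
def message_from_speech(pcm_signal, recognizer, user: str = "U1") -> dict:
    """Pipeline of the third embodiment: the sound pickup device 17 yields
    an electrical signal, the speech recognition unit 117 turns it into
    text, and the message generation unit 118 wraps that text as a message,
    which the acquisition unit 112B then treats like any other message."""
    text = recognizer.transcribe(pcm_signal)  # speech recognition unit 117
    return {"sender": user, "content": text}  # message generation unit 118
```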
 Since the terminal device 10B has the above configuration, it can generate a message based on the voice uttered by the user U1 and generate an individual object SO based on the generated message.
3-2: Operation of the Third Embodiment
 The operation of the terminal device 10B according to the third embodiment, in particular the terminal device 10B-1 used by the user U1, is basically the same as the operation of the terminal device 10-1 according to the first embodiment shown in FIG. 10, so its illustration and detailed description are omitted. In the operation of the terminal device 10B-1, the plurality of messages acquired in step S1 includes the message generated by the message generation unit 118.
3-3: Effects of the Third Embodiment
 According to the above description, in the terminal device 10B-1 serving as a display control device, the plurality of messages are generated by the terminal device 10B-1 serving as a display control device and by one or more terminal devices 10B connected to the terminal device 10B-1 via the communication network NET.
 With this configuration, the user U1 can check messages generated by users U other than the user U1.
 Also according to the above description, the terminal device 10B serving as a display control device further includes the sound pickup device 17, the speech recognition unit 117, and the message generation unit 118. The sound pickup device 17 picks up the voice of a user U and outputs an electrical signal representing the voice. The speech recognition unit 117 generates text based on the electrical signal output from the sound pickup device 17. The message generation unit 118 generates a message corresponding to the text generated by the speech recognition unit 117. The plurality of messages includes the message generated by the message generation unit 118.
 With this configuration, the terminal device 10B can generate an individual object SO based on the voice uttered by the user U1.
4: Fourth Embodiment
 The configuration of an information processing system 1C including a terminal device 10C as a display control device according to a fourth embodiment of the present invention will be described below with reference to FIGS. 16 to 19.
4-1: Configuration of the Fourth Embodiment
4-1-1: Overall Configuration
 The information processing system 1C according to the fourth embodiment of the present invention differs from the information processing system 1 according to the first embodiment in that it includes a terminal device 10C instead of the terminal device 10 and a server 30A instead of the server 30. In other respects, the overall configuration of the information processing system 1C is the same as the overall configuration of the information processing system 1 according to the first embodiment shown in FIG. 1, so its illustration and description are omitted.
4-1-2: Configuration of the Terminal Device
 Unlike the terminal device 10, the terminal device 10C includes a processing device 11C instead of the processing device 11 and a storage device 12C instead of the storage device 12. Unlike the storage device 12, the storage device 12C stores a control program PR2C instead of the control program PR2. Unlike the processing device 11, the processing device 11C includes an output unit 111C instead of the output unit 111. In other respects, the configuration of the terminal device 10C is the same as the configuration of the terminal device 10 according to the first embodiment shown in FIG. 4, so its illustration and description are omitted.
 The output unit 111C has the same functions as the output unit 111. In addition, the output unit 111C outputs the device ID (identifier) of the terminal device 10, the name of the user using the terminal device 10, and the position information that the terminal device 10 acquired from the XR glasses 20 to the server 30A. For example, when the terminal device 10 is the terminal device 10-1 used by the user U1, who is the first user, the output unit 111C-1 outputs the device ID of the terminal device 10-1, the user name "U1" of the user of the terminal device 10-1, and the position information generated by the XR glasses 20 worn on the head of the user U1 to the server 30A. The terminal device 10-1 is an example of a first display control device. The output unit 111C also outputs, to the server 30A, the coordinates indicating the display position, in the virtual space VS, of the virtual object VO displayed in the virtual space VS by the display control unit 114. The virtual space VS including the virtual object VO is displayed on the XR glasses 20 connected to the terminal device 10-1. The XR glasses 20 are an example of a first display device, and the virtual space VS displayed on the XR glasses 20 is an example of a first virtual space.
4-1-3: Server Configuration
 FIG. 16 is a block diagram showing a configuration example of the server 30A. Unlike the server 30, the server 30A includes a processing device 31A instead of the processing device 31 and a storage device 32A instead of the storage device 32.
 Unlike the storage device 32, the storage device 32A stores a control program PR3A instead of the control program PR3. In addition to the components stored by the storage device 32, the storage device 32A stores a location information database LD.
 FIG. 17 is a table showing an example of the location information database LD. As described above, the terminal device 10C outputs to the server 30A the device ID of the terminal device 10C, the name of the user using the terminal device 10C, and the position information that the terminal device 10C acquired from the XR glasses 20 or generated itself. The location information database LD stores these device IDs, user names, and pieces of position information. In the location information database LD shown in FIG. 17, as an example, the device ID "0001" of the terminal device 10C-1 used by the user U1, the user name "U1" of the user U1, and the position information (x, y, z) = (x_u1, y_u1, z_u1) that the terminal device 10C-1 acquired from the XR glasses 20 are stored in association with one another. In the table shown in FIG. 17, "L" is an integer of 2 or more.
 Unlike the processing device 31, the processing device 31A includes an acquisition unit 311A instead of the acquisition unit 311 and an output unit 312A instead of the output unit 312. In addition to the components included in the processing device 31, the processing device 31A also includes a determination unit 314 and an extraction unit 315.
 The determination unit 314 determines whether or not the number of messages that the output unit 312A has output to the terminal device 10C-1 is equal to or greater than a predetermined number. When the determination unit 314 determines that the number of messages is equal to or greater than the predetermined number, the acquisition unit 311A acquires, from the terminal device 10C-1, coordinates indicating the display position, in the virtual space VS, of the virtual object VO that the display control unit 114-1 has displayed in the virtual space VS. In addition to the coordinates indicating the display position of the virtual object VO, the acquisition unit 311A may also acquire coordinates indicating the display positions of the individual objects SO.
 By referring to the location information database LD, the extraction unit 315 extracts, from among the users U who are permitted to share the virtual space VS with the user U1, users U other than the user U1 who are present, in the virtual space VS, within a predetermined distance of the display position of the virtual object VO. An extracted other user U is an example of a second user. The second user is a user U who, when the number of messages is equal to or greater than the predetermined number, is present within the predetermined distance of the display position in the virtual space VS and is permitted to share the virtual space VS. This "predetermined distance" preferably increases with the number of messages that the output unit 312A has output to the terminal device 10C-1.
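 A minimal sketch of this extraction step, assuming the LocationRecord structure above and a linear growth of the sharing radius with the message count (the base radius, the scaling factor, and the function names are hypothetical; the disclosure requires only that the distance increase with the number of messages, and the check that a user is permitted to share the virtual space VS is omitted here):

```python
import math

def shared_radius(message_count: int,
                  base: float = 1.0, per_message: float = 0.1) -> float:
    # The sharing distance grows with the number of messages sent to U1.
    return base + per_message * message_count

def extract_nearby_users(vo_position: tuple[float, float, float],
                         message_count: int,
                         records: dict[str, LocationRecord],
                         owner: str = "U1") -> list[LocationRecord]:
    radius = shared_radius(message_count)
    nearby = []
    for rec in records.values():
        if rec.user_name == owner:
            continue  # skip the first user U1
        if math.dist(rec.position, vo_position) <= radius:
            nearby.append(rec)  # candidate second user
    return nearby
```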
 The output unit 312A transmits, to the terminal device 10C used by a user U extracted by the extraction unit 315, the same plurality of messages as those output to the terminal device 10C-1, together with the coordinates indicating the display position of the virtual object VO in the virtual space VS. The same plurality of messages and the coordinates indicating the display position of the virtual object VO are an example of control information. The terminal device 10C to which the output unit 312A transmits the control information is an example of a second display control device. Further, the XR glasses 20 connected to that terminal device 10C, or the display 14 provided in that terminal device 10C, are an example of a second display device. Like the terminal device 10C-1, the terminal device 10C that has acquired the same plurality of messages generates a plurality of individual objects SO corresponding one-to-one to those messages. The terminal device 10C then causes the XR glasses 20 connected to it, or its own display 14, to display the virtual space VS in which the virtual object VO, an aggregate of the plurality of individual objects SO, is placed at the display position acquired from the server 30A. The virtual space VS displayed on the XR glasses 20 connected to the terminal device 10C, or on the display 14 of the terminal device 10C, is an example of a second virtual space. Here, the same plurality of messages output from the output unit 312A and the coordinates indicating the display position of the virtual object VO are information for causing the second display device to display the second virtual space including the virtual object VO. As described above, when the acquisition unit 311A has acquired the coordinates indicating the display positions of the individual objects SO in addition to the coordinates indicating the display position of the virtual object VO, the output unit 312A also outputs, to the terminal device 10C, the coordinates indicating the display positions of the individual objects SO in the virtual space VS. In this case, the terminal device 10C displays each individual object SO at the display position in the virtual space VS acquired from the server 30A.
 As a result, when the number of individual objects SO included in the virtual object VO visually recognized by the user U1 in the virtual space VS is equal to or greater than the predetermined number, another user U2 can also visually recognize that virtual object VO.
 FIG. 18 is an explanatory diagram showing an example of the operations of the determination unit 314, the extraction unit 315, and the output unit 312A. Assume that the virtual space VS including the virtual object VO4 is displayed on the XR glasses 20 worn on the head of the user U1. As the number of messages addressed to the user U1 increases, the number of individual objects SO included in the virtual object VO4 also increases. Further, as the number of individual objects SO included in the virtual object VO4 increases, the virtual object VO4 moves away from the user U1. When the determination unit 314 of the server 30A determines that the number of messages addressed to the user U1 is equal to or greater than the predetermined number, the extraction unit 315 extracts users U other than the user U1 who are present within the predetermined distance of (x, y, z) = (x_12, y_12, z_12), the display position of the virtual object VO4 in the virtual space VS. In the example shown in FIG. 18, assume that the extraction unit 315 has extracted, as a user U other than the user U1, the user U2 located at (x, y, z) = (x_U2, y_U2, z_U2). The output unit 312A transmits, to the terminal device 10C-2 used by the user U2, the same plurality of messages as those transmitted to the terminal device 10C-1. The acquisition unit 112 of the terminal device 10C-2 acquires, from the server 30A, the same plurality of messages and the coordinates indicating the display position of the virtual object VO4 in the virtual space VS. The generation unit 113 of the terminal device 10C-2 generates a plurality of individual objects SO corresponding one-to-one to the same plurality of messages. The display control unit 114 of the terminal device 10C-2 displays the virtual object VO4, an aggregate of the plurality of individual objects SO, at (x, y, z) = (x_12, y_12, z_12), the display position in the virtual space VS acquired from the server 30A. As a result, both the user U1 and the user U2 can visually recognize the virtual object VO4 in the virtual space VS.
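 Using the hypothetical helpers sketched above, the FIG. 18 scenario might play out as follows (all coordinates are placeholders standing in for (x_12, y_12, z_12) and (x_U2, y_U2, z_U2)):

```python
# U1's virtual object VO4 sits at the acquired display position.
vo4_position = (4.0, 1.5, 3.0)
location_db["0002"] = LocationRecord("0002", "U2", (4.5, 1.5, 2.5))

# With enough messages, U2 falls within the sharing radius of VO4.
second_users = extract_nearby_users(vo4_position, message_count=25,
                                    records=location_db)
print([rec.user_name for rec in second_users])  # -> ['U2']
```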
4-2: Operation of the Fourth Embodiment
 FIG. 19 is a flowchart showing the operation of the server 30A according to the fourth embodiment. The operation of the server 30A is described below with reference to FIG. 19.
 In step S21, the processing device 31A functions as the output unit 312A. The processing device 31A transmits a plurality of messages to the terminal device 10C-1 used by the user U1. Based on the plurality of messages, the terminal device 10C-1 causes the XR glasses 20 to display the virtual object VO4.
 In step S22, the processing device 31A functions as the determination unit 314. The processing device 31A determines whether or not the number of messages transmitted to the terminal device 10C-1 is equal to or greater than the predetermined number. If the determination result is affirmative, that is, if the processing device 31A determines that the number of messages is equal to or greater than the predetermined number, the processing device 31A executes the processing of step S23. If the determination result is negative, that is, if the processing device 31A determines that the number of messages is less than the predetermined number, the processing device 31A executes the processing of step S21.
 In step S23, the processing device 31A functions as the acquisition unit 311A. The processing device 31A acquires, from the terminal device 10C-1, the coordinates indicating the display position, in the virtual space VS, of the virtual object VO4 displayed in the virtual space VS by the display control unit 114-1.
 In step S24, the processing device 31A functions as the extraction unit 315. The processing device 31A extracts, from among the users U who share the virtual space VS with the user U1, users U other than the user U1 who are present, in the virtual space VS, within the predetermined distance of the display position of the virtual object VO4. Here, as an example, it is assumed that the processing device 31A has extracted the user U2.
 In step S25, the processing device 31A functions as the output unit 312A. The processing device 31A transmits, to the terminal device 10C-2 used by the user U2, the same plurality of messages as those output to the terminal device 10C-1, together with the coordinates indicating the display position of the virtual object VO4 in the virtual space VS. Like the terminal device 10C-1, the terminal device 10C-2 that has acquired the same plurality of messages generates a plurality of individual objects SO corresponding one-to-one to those messages. The terminal device 10C-2 then displays the virtual object VO4, an aggregate of the plurality of individual objects SO, at the display position in the virtual space VS acquired from the server 30A.
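 Steps S21 to S25 could be strung together roughly as follows, building on the sketches above; the threshold value and the transport functions standing in for the communication device 33 are hypothetical:

```python
PREDETERMINED_NUMBER = 20  # hypothetical threshold ("predetermined number")

def send_messages(device_id: str, messages: list[str]) -> None:
    # Stand-in for transmission via the communication device 33.
    print(f"send {len(messages)} message(s) to {device_id}")

def send_control_info(device_id: str, messages: list[str],
                      vo_position: tuple[float, float, float]) -> None:
    # Control information: the same messages plus VO4's display coordinates.
    print(f"share VO4 at {vo_position} with {device_id}")

def fetch_vo_position(device_id: str) -> tuple[float, float, float]:
    # Stand-in: the terminal reports the display position of VO4.
    return vo4_position

def server_flow(messages: list[str]) -> None:
    send_messages("10C-1", messages)              # S21
    if len(messages) < PREDETERMINED_NUMBER:      # S22
        return                                    # repeat from S21 next time
    vo_position = fetch_vo_position("10C-1")      # S23
    nearby = extract_nearby_users(vo_position,    # S24
                                  len(messages), records=location_db)
    for rec in nearby:                            # S25
        send_control_info(rec.device_id, messages, vo_position)
```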
4-3: Effects of the Fourth Embodiment
 As described above, the server 30A is a server that transmits a plurality of messages to any of the terminal devices 10 to 10C serving as the first display control device. In this case, the XR glasses 20 serving as the first display device are worn on the head of the user U1, the first user. The server 30A includes the acquisition unit 311A and the output unit 312A. The acquisition unit 311A acquires the display position of the virtual object VO4 in the first virtual space VS. When the number of messages is equal to or greater than the predetermined number, the output unit 312A transmits control information to any of the terminal devices 10 to 10C serving as the second display control device, which is present, in the first virtual space VS, within the predetermined distance of the display position and is permitted to share the virtual object VO. The terminal device 10 to 10C serving as the second display control device causes the XR glasses 20 serving as the second display device, worn on the head of the user U2 as the second user, to display the second virtual space. The control information is information for causing the second display device to display the virtual object VO4 in the second virtual space.
 Because the server 30A has the above configuration, when the number of individual objects SO included in the virtual object VO4 displayed on the XR glasses 20 is equal to or greater than the predetermined number, another user U2 can also visually recognize the virtual object VO4 in the virtual space VS.
5: Modifications
 The present disclosure is not limited to the embodiments illustrated above. Specific modifications are illustrated below. Two or more aspects arbitrarily selected from the following examples may be combined.
5-1: Modification 1
 The components and technical features included in each of the information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment may be combined with one another. For example, the terminal device 10B according to the third embodiment and the server 30A according to the fourth embodiment may be combined. FIG. 20 is an explanatory diagram of a virtual object VO5 generated when the terminal device 10B and the server 30A are combined. For example, the terminal device 10B may generate a plurality of individual objects SO based on the cheers of individual spectators in a soccer stadium, and the XR glasses 20 serving as AR glasses may display, above the soccer stadium, the virtual object VO5 composed of the plurality of individual objects SO. The virtual object VO5 is shared by a plurality of spectators wearing the XR glasses 20 as AR glasses on their heads, and, based on the number of individual objects SO, the plurality of individual objects SO may, for example, be aligned into a shape representing specific characters. In this case, the virtual object VO5 and the virtual space VS, composed of individual objects SO corresponding one-to-one to the plurality of messages, are shared by a plurality of users U who gather at a predetermined place. The predetermined place may be a venue where some event is held, or a public facility such as a school. Furthermore, the plurality of users U who share the virtual object VO5 and the virtual space VS may be participants in the same event. For example, an e-sports tournament corresponds to such an event.
 Also, in systems other than the information processing system 1B according to the third embodiment, the terminal devices 10 to 10C may acquire, in addition to messages addressed to the user U1 and messages generated by the user U1, a message transmitted from a second user U to a third user U.
5-2: Modification 2
 In the information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment, the terminal devices 10 to 10C include the display control unit 114 or the display control unit 114A. However, the servers 30 to 30A, rather than the terminal devices 10 to 10C, may include the display control unit 114 or the display control unit 114A. The servers 30 to 30A may also set the coordinates indicating the display positions of the virtual object VO and the individual objects SO in the virtual space VS.
5-3: Modification 3
 In the information processing system 1C according to the fourth embodiment, the server 30A includes the determination unit 314. However, the terminal device 10C, rather than the server 30A, may include the determination unit 314. Specifically, the terminal device 10C may determine whether the number of acquired messages, or the number of individual objects SO displayed on the XR glasses 20, is equal to or greater than the predetermined number, and output the determination result to the server 30A.
5-4: Modification 4
 In the information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment, the display control unit 114 or the display control unit 114A of the terminal devices 10 to 10C may erase an individual object SO whose message content the user U has already read. In particular, in the server 30A of the information processing system 1C according to the fourth embodiment, the determination unit 314 may determine whether the number of unread messages, rather than the total number of messages output to the terminal device 10C-1, is equal to or greater than the predetermined number.
5-5: Modification 5
 In the information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment, the acquisition unit 112 of the terminal devices 10 to 10C acquires a plurality of messages from the server 30 or the server 30A. However, the acquisition unit 112 may acquire, from the server 30 or the server 30A, only the message IDs corresponding to the respective messages. In this case, the generation unit 113 generates individual objects SO corresponding one-to-one to the plurality of message IDs rather than to the plurality of messages. Also, in this case, as shown in FIG. 13 as an example, the acquisition unit 112 may acquire the content of a message from the server 30 or the server 30A only when the user U1 comes to view the content of the message corresponding to its individual object SO. The display control unit 114 or the display control unit 114A may then display the content of the message in the virtual space VS.
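 A brief, non-authoritative sketch of this lazy-loading behavior, assuming a hypothetical fetch_message_content call to the server 30 or the server 30A:

```python
class IndividualObject:
    """One individual object SO that initially holds only a message ID."""

    def __init__(self, message_id: str):
        self.message_id = message_id
        self.content: str | None = None  # not fetched until first viewed

    def view(self) -> str:
        # Fetch the message body only when the user U1 looks at this SO.
        if self.content is None:
            self.content = fetch_message_content(self.message_id)
        return self.content

def fetch_message_content(message_id: str) -> str:
    # Stand-in for a request to the server 30 or the server 30A.
    return f"<body of message {message_id}>"

so = IndividualObject("msg-001")  # generated from a message ID alone
print(so.view())                  # content is retrieved on first viewing
```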
5-6: Modification 6
 In the information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment, each individual object SO corresponds one-to-one to a message generated by a user U. However, the messages to which the individual objects SO correspond are not limited to messages generated by users U. For example, a message may be a notification to the user U generated by an application.
5-7: Modification 7
 In the information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment, the server 30 or the server 30A outputs, to the terminal devices 10 to 10C, messages stored in the message database MD. However, the method by which the server 30 or the server 30A outputs messages to the terminal devices 10 to 10C is not limited to this. For example, the information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment may further include a content server. More specifically, the server 30 or the server 30A may acquire, from the content server, content including a message for the user U and output the acquired content to the terminal devices 10 to 10C.
5-8: Modification 8
 In the information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment, the terminal devices 10 to 10C and the XR glasses 20 are implemented as separate bodies. However, the method of implementing the terminal devices 10 to 10C and the XR glasses 20 in the embodiments of the present invention is not limited to this. For example, the XR glasses 20 may have the same functions as the terminal device 10. In other words, the terminal devices 10 to 10C and the XR glasses 20 may be implemented within a single housing. The same applies to the information processing system 1A according to the second embodiment through the information processing system 1C according to the fourth embodiment.
5-9: Modification 9
 The information processing system 1 according to the first embodiment through the information processing system 1C according to the fourth embodiment include, as an example, the XR glasses 20 as AR glasses. However, instead of the XR glasses 20, the information processing systems 1 to 1C may include any one of an HMD (Head Mounted Display) employing VR (Virtual Reality) technology, an HMD employing MR (Mixed Reality) technology, and MR glasses employing MR technology. Alternatively, instead of the XR glasses 20, the information processing systems 1 to 1C may include either an ordinary smartphone or a tablet equipped with an imaging device. These HMDs, MR glasses, smartphones, and tablets are examples of display devices.
6: Others
 (1) In the embodiments described above, the storage devices 12 to 12C, the storage device 22, and the storage devices 32 to 32A were exemplified by ROM, RAM, and the like, but they may be flexible disks, magneto-optical disks (for example, compact discs, digital versatile discs, Blu-ray (registered trademark) discs), smart cards, flash memory devices (for example, cards, sticks, key drives), CD-ROMs (Compact Disc-ROM), registers, removable disks, hard disks, floppy (registered trademark) disks, magnetic strips, databases, servers, or other suitable storage media. A program may be transmitted from a network via a telecommunication line. A program may also be transmitted from the communication network NET via a telecommunication line.
 (2) In the embodiments described above, the information, signals, and the like described may be represented using any of a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
 (3) In the embodiments described above, input and output information and the like may be stored in a specific location (for example, a memory) or managed using a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.
 (4) In the embodiments described above, a determination may be made based on a value represented by one bit (0 or 1), based on a Boolean value (true or false), or based on a numerical comparison (for example, a comparison with a predetermined value).
 (5) The processing procedures, sequences, flowcharts, and the like exemplified in the embodiments described above may be reordered as long as no contradiction arises. For example, the methods described in the present disclosure present the elements of the various steps in an exemplary order and are not limited to the specific order presented.
 (6) Each function illustrated in FIGS. 1 to 20 is implemented by any combination of at least one of hardware and software. The method of implementing each functional block is not particularly limited. That is, each functional block may be implemented using one physically or logically coupled device, or using two or more physically or logically separated devices connected directly or indirectly (for example, by wire or wirelessly). A functional block may also be implemented by combining software with the one device or the plurality of devices.
 (7) The programs exemplified in the embodiments described above should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, and the like, whether referred to as software, firmware, middleware, microcode, hardware description language, or by any other name.
 Software, instructions, information, and the like may also be transmitted and received via a transmission medium. For example, when software is transmitted from a website, server, or other remote source using at least one of wired technologies (coaxial cable, optical fiber cable, twisted pair, digital subscriber line (DSL), and the like) and wireless technologies (infrared, microwave, and the like), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
 (8) In each of the aspects described above, the terms "system" and "network" are used interchangeably.
 (9) The information, parameters, and the like described in the present disclosure may be represented using absolute values, using values relative to a predetermined value, or using other corresponding information.
 (10) In the embodiments described above, the terminal devices 10 to 10C and the servers 30 to 30A may be mobile stations (MS: Mobile Station). A mobile station may also be referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term. In the present disclosure, the terms "mobile station", "user terminal", "user equipment (UE: User Equipment)", "terminal", and the like may be used interchangeably.
 (11) In the embodiments described above, the terms "connected" and "coupled", and any variations thereof, mean any direct or indirect connection or coupling between two or more elements, and include the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to each other. The coupling or connection between elements may be physical, logical, or a combination thereof. For example, "connection" may be read as "access". As used in the present disclosure, two elements are considered to be "connected" or "coupled" to each other using at least one of one or more electrical wires, cables, and printed electrical connections, as well as, as some non-limiting and non-exhaustive examples, electromagnetic energy having wavelengths in the radio frequency region, the microwave region, and the optical (both visible and invisible) region.
 (12) In the embodiments described above, the phrase "based on" does not mean "based only on" unless otherwise specified. In other words, the phrase "based on" means both "based only on" and "based at least on".
 (13) The terms "judging" and "determining" as used in the present disclosure may encompass a wide variety of operations. "Judging" and "determining" may include, for example, regarding judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (for example, looking up in a table, a database, or another data structure), or ascertaining, as having "judged" or "determined". "Judging" and "determining" may also include regarding receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, or accessing (for example, accessing data in a memory) as having "judged" or "determined". "Judging" and "determining" may further include regarding resolving, selecting, choosing, establishing, comparing, and the like as having "judged" or "determined". In other words, "judging" and "determining" may include regarding some operation as having been "judged" or "determined". "Judging (determining)" may also be read as "assuming", "expecting", "considering", and the like.
 (14) In the embodiments described above, where "include", "including", and variations thereof are used, these terms, like the term "comprising", are intended to be inclusive. Furthermore, the term "or" as used in the present disclosure is not intended to be an exclusive or.
 (15) In the present disclosure, where articles such as a, an, and the in English have been added by translation, the present disclosure may include the case where the nouns following these articles are plural.
 (16) In the present disclosure, the phrase "A and B are different" may mean "A and B are different from each other". The phrase may also mean "A and B are each different from C". Terms such as "separated" and "coupled" may be interpreted in the same manner as "different".
 (17) Each aspect and embodiment described in the present disclosure may be used alone, may be used in combination, or may be switched between in the course of execution. Notification of predetermined information (for example, notification of "being X") is not limited to explicit notification and may be performed implicitly (for example, by not notifying the predetermined information).
 Although the present disclosure has been described in detail above, it is clear to those skilled in the art that the present disclosure is not limited to the embodiments described herein. The present disclosure can be implemented in modified and altered forms without departing from the spirit and scope of the present disclosure as defined by the claims. Accordingly, the description of the present disclosure is for illustrative purposes and has no restrictive meaning with respect to the present disclosure.
 1 to 1C: information processing system; 10 to 10C: terminal device; 11 to 11C: processing device; 12 to 12C: storage device; 13: communication device; 14: display; 15: input device; 16: inertial sensor; 17: sound collecting device; 20: AR glasses; 21: processing device; 22: storage device; 23: line-of-sight detection device; 24: GPS device; 25: motion detection device; 26: imaging device; 27: communication device; 28: display; 30, 30A: server; 31, 31A: processing device; 32, 32A: storage device; 33: communication device; 34: display; 35: input device; 41L, 41R: lens; 91, 92: temple; 93: bridge; 94, 95: frame; 111, 111C: output unit; 112 to 112B: acquisition unit; 113: generation unit; 114, 114A: display control unit; 115: reception unit; 116: determination unit; 117: voice recognition unit; 118: message generation unit; 311, 311A: acquisition unit; 312, 312A: output unit; 313: message management unit; 314: determination unit; 315: extraction unit; MO1: message object; PR1 to PR3A: control program; SO, SO1 to SO10: individual object; U, U1 to U2: user; VO, VO1 to VO5: virtual object

Claims (9)

  1.  A display control device that causes a display device worn on a head of a user to display a virtual space including a virtual object, the display control device comprising:
     an acquisition unit configured to acquire a plurality of messages;
     a generation unit configured to generate a plurality of individual objects corresponding one-to-one to the plurality of messages; and
     a display control unit configured to cause the display device to display a virtual object that is an aggregate of the plurality of individual objects,
     wherein the display control unit:
      when the number of the plurality of messages is a first number, sets a size of the virtual object to a first size and sets a distance, in the virtual space, from a center of the virtual space to a center of the virtual object to a first distance; and
      when the number of the plurality of messages is a second number larger than the first number, sets the size of the virtual object to a second size larger than the first size and sets the distance, in the virtual space, from the center of the virtual space to the center of the virtual object to a second distance longer than the first distance.
  2.  The display control device according to claim 1, wherein the display control unit changes a display mode of the virtual object according to the number of the plurality of messages.
  3.  The display control device according to claim 1 or 2, wherein the display control unit makes display modes of the plurality of individual objects different from one another according to a transmission source device corresponding to each of the plurality of messages.
  4.  The display control device according to any one of claims 1 to 3, further comprising a reception unit configured to receive an operation of the user on the virtual object,
     wherein, when the operation is a first operation, the display control unit causes a list of the plurality of messages to be displayed in the virtual space.
  5.  The display control device according to any one of claims 1 to 3, further comprising a reception unit configured to receive an operation of the user on the virtual object,
     wherein, when the operation is a second operation of designating one individual object among the plurality of individual objects, the display control unit causes content of a message corresponding to the one individual object to be displayed in the virtual space.
  6.  The display control device according to any one of claims 1 to 5, further comprising a determination unit configured to determine an importance for each of the plurality of messages,
     wherein the display control unit causes, among the plurality of individual objects included in the virtual object, a first individual object corresponding to a message whose importance is equal to or greater than a first value to be displayed closer to the user than a second individual object corresponding to a message whose importance is less than the first value.
  7.  The display control device according to any one of claims 1 to 6, wherein the plurality of messages are generated by the display control device and one or more terminal devices connected to the display control device.
  8.  The display control device according to claim 7, further comprising:
     a sound collecting device configured to collect a voice of the user and output an electrical signal representing the voice;
     a voice recognition unit configured to generate text based on the electrical signal output from the sound collecting device; and
     a message generation unit configured to generate a message corresponding to the text generated by the voice recognition unit,
     wherein the plurality of messages include the message generated by the message generation unit.
  9.  A server that transmits the plurality of messages to the display control device according to any one of claims 1 to 8,
     wherein the display control device is a first display control device, the display device is a first display device worn on the head of the user as a first user, and the virtual space is a first virtual space,
     the server comprising:
     an acquisition unit configured to acquire, from the first display control device, a display position of the virtual object in the first virtual space; and
     an output unit configured to, when the number of the plurality of messages is equal to or greater than a predetermined number, transmit control information to a second display control device that is present, in the first virtual space, within a predetermined distance of the display position and that is permitted to share the virtual object,
     wherein the second display control device causes a second display device worn on a head of a second user to display a second virtual space, and
     the control information is information for causing the second display device to display the virtual object in the second virtual space.
PCT/JP2023/002690 2022-01-31 2023-01-27 Display control device, and server WO2023145892A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-013454 2022-01-31
JP2022013454 2022-01-31

Publications (1)

Publication Number Publication Date
WO2023145892A1 (en)

Family

ID=87471727

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/002690 WO2023145892A1 (en) 2022-01-31 2023-01-27 Display control device, and server

Country Status (1)

Country Link
WO (1) WO2023145892A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012069097A (en) * 2010-08-26 2012-04-05 Canon Inc Display method for data retrieval result, display device for data retrieval result, and program
JP2021099544A (en) * 2019-12-19 2021-07-01 富士フイルムビジネスイノベーション株式会社 Information processing device and program

Similar Documents

Publication Publication Date Title
US11087728B1 (en) Computer vision and mapping for audio applications
US20210407203A1 (en) Augmented reality experiences using speech and text captions
EP4172730A1 (en) Augmented reality experiences with object manipulation
CN105190484B (en) Personal holographic billboard
WO2018031745A1 (en) Word flow annotation
CN107943275B (en) Simulated environment display system and method
KR20160145976A (en) Method for sharing images and electronic device performing thereof
CN113647116A (en) Head mounted device for generating binaural audio
WO2022006116A1 (en) Augmented reality eyewear with speech bubbles and translation
Starner Wearable computing
WO2023145892A1 (en) Display control device, and server
CN115735175A (en) Eye-worn device capable of sharing gaze response viewing
US20230161959A1 (en) Ring motion capture and message composition system
US20230217007A1 (en) Hyper-connected and synchronized ar glasses
WO2023149255A1 (en) Display control device
WO2023145890A1 (en) Terminal device
WO2023149256A1 (en) Display control device
WO2023034021A1 (en) Social connection through distributed and connected real-world objects
CN117616381A (en) Speech controlled setup and navigation
WO2023112838A1 (en) Information processing device
WO2023162499A1 (en) Display control device
WO2023145265A1 (en) Message transmitting device and message receiving device
WO2023079875A1 (en) Information processing device
WO2023145273A1 (en) Display control device
WO2023120472A1 (en) Avatar generation system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23747102

Country of ref document: EP

Kind code of ref document: A1