WO2019130991A1 - Information processing device - Google Patents

Information processing device

Info

Publication number
WO2019130991A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
user
virtual object
image
hmd
Prior art date
Application number
PCT/JP2018/044278
Other languages
French (fr)
Japanese (ja)
Inventor
敬幸 古田
雄太 樋口
和輝 東
Original Assignee
株式会社NTTドコモ (NTT DOCOMO, INC.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社NTTドコモ (NTT DOCOMO, INC.)
Publication of WO2019130991A1 publication Critical patent/WO2019130991A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics

Definitions

  • One aspect of the present invention relates to an information processing apparatus.
  • Conventionally, a technique called virtual reality (VR) is known in which a user wearing an HMD (Head Mounted Display) or the like is immersed in a virtual space by being given a view as if the user were present there. In such VR technology, a user object (avatar, character, etc.) linked to the user's actions (for example, movements of body parts such as the head and hands) is generated in the virtual space and controlled according to those actions. By displaying on the HMD an image showing the view seen from the user object, the user is provided with an experience as if the user object existed in the virtual space.
  • An object of one aspect of the present invention is to provide an information processing apparatus capable of improving the convenience of a user's virtual reality experience.
  • An information processing apparatus according to one aspect of the present invention is an apparatus for providing an image of a virtual space to be displayed on a display device worn by a user. It comprises: an image acquisition unit that acquires a real space image obtained by imaging the real space in the vicinity of the user; a virtual object generation unit that recognizes an object included in the real space image and generates a virtual object corresponding to that object in the virtual space; and an image generation unit that generates a virtual space image, showing at least a part of the virtual space including the virtual object, to be displayed on the display device.
  • With this apparatus, an object included in a real space image capturing the real space near the user is generated as a virtual object in the virtual space, and a virtual space image including that virtual object (a virtual space image in which the virtual object appears) is generated. The user wearing the display device can thus visually recognize an object present in his or her vicinity via the virtual space image. The information processing apparatus can therefore improve the convenience of the user's virtual reality experience. The overall pipeline is sketched below.
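  • As a rough, non-authoritative illustration of this three-unit pipeline, the following Python sketch mirrors the flow from acquired real space image to displayed virtual space image. All names (VirtualObject, objectify, render, and so on) are hypothetical, and the recognition step is a toy placeholder for the known image recognition the publication relies on:

```python
from dataclasses import dataclass, field
from itertools import count
from typing import Dict, Tuple

_ids = count(1)

@dataclass
class VirtualObject:
    object_id: int
    appearance: str                       # appearance information for drawing
    position: Tuple[float, float, float]  # position in the virtual space

@dataclass
class VirtualSpace:
    objects: Dict[int, VirtualObject] = field(default_factory=dict)

def recognize_object(real_image: str) -> str:
    # Toy stand-in for the "known image recognition" the publication relies on:
    # here the image string itself is treated as the extracted appearance.
    return real_image

def objectify(real_image: str, position: Tuple[float, float, float],
              space: VirtualSpace) -> VirtualObject:
    # Virtual object generation unit: recognize the object in the acquired
    # real space image and mirror it into the virtual space.
    appearance = recognize_object(real_image)
    obj = VirtualObject(next(_ids), appearance, position)
    space.objects[obj.object_id] = obj
    return obj

def render(space: VirtualSpace) -> str:
    # Image generation unit: produce the virtual space image for the HMD.
    return f"virtual space image showing {len(space.objects)} virtual object(s)"

space = VirtualSpace()
objectify("image of a notebook PC", (0.0, 0.7, 0.4), space)
print(render(space))  # -> virtual space image showing 1 virtual object(s)
```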
  • FIG. 1 is a diagram showing a functional configuration of an information processing system 100 including an information processing apparatus 10 according to an embodiment of the present invention.
  • The information processing apparatus 10 is an apparatus that provides a user with a virtual space in which arbitrary VR content such as a game space or a chat space unfolds, via a head mounted display (HMD) 1 (display device) worn by the user. That is, the information processing apparatus 10 provides the user with a virtual reality (VR) experience through images of the virtual space displayed on the HMD 1.
  • The information processing apparatus 10 has a function of generating, in the virtual space, a virtual object corresponding to an object existing in the real space.
  • The information processing apparatus 10 includes a communication unit 11, an image acquisition unit 12, a virtual object generation unit 13, a virtual object storage unit 14, a sharing setting unit 15, an image generation unit 16, an object detection unit 17, and a virtual object update unit 18.
  • The information processing apparatus 10 is, for example, a game terminal, a personal computer, a tablet terminal, or the like that can communicate with the HMDs 1 worn by each of a plurality of users.
  • the implementation form of the information processing apparatus 10 is not limited to a specific form.
  • the information processing device 10 may be a computer device incorporated in the same device as the HMD 1.
  • The information processing apparatus 10 may also be a server device or the like that can communicate with each user's HMD 1 (or with each computer terminal that controls the operation of each HMD 1) via a communication line such as the Internet. Further, the information processing apparatus 10 may be physically configured as a single device or as a plurality of devices. For example, it may be configured as a distributed system in which some functions (for example, those of the image generation unit 16) are realized by a computer terminal provided for each HMD 1 to control its operation, and the remaining functions are realized by a server device capable of communicating with those computer terminals.
  • the HMD 1 is a display device mounted on the body (for example, the head) of the user.
  • the HMD 1 includes, for example, a display unit that displays an image (an image for the left eye and an image for the right eye) in front of each eye of the user in a state of being worn on the head of the user.
  • By displaying different images (video) for the left eye and the right eye, a stereoscopic (three-dimensional) image is perceived by the user.
  • The display unit described above may be a display configured integrally with a main unit worn on the user's body, such as a glasses-type or helmet-type unit, or a device detachably attached to the main unit of the HMD 1 (for example, the display of a terminal such as a smartphone mounted on the main unit) may function as the display unit.
  • The HMD 1 includes, for example, sensors (such as an acceleration sensor, an angular velocity sensor, a geomagnetic sensor, and a gyro sensor) capable of detecting the position, orientation (tilt), velocity, acceleration, and the like of the user's head (that is, of the HMD 1).
  • the HMD 1 periodically transmits information on the motion (position, orientation, velocity, acceleration, etc.) of the head of the user detected by such a sensor to the information processing apparatus 10 as motion information on the head of the user.
  • the HMD 1 includes, for example, a sensor such as an infrared camera that detects an action of the user's eyes (for example, the position and the movement of the black eye portion, etc.).
  • the sensor is, for example, a sensor having a known eye tracking function.
  • The sensor detects the movement of each eyeball, for example, by receiving infrared light reflected from the cornea and the like of the user's right and left eyes.
  • the HMD 1 periodically transmits the operation information of the user's eyes detected as described above to the information processing apparatus 10.
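  • For illustration only, the periodically transmitted head and eye motion information could be bundled into a message such as the following; every field name here is an assumption, not something specified in the publication:

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Tuple

@dataclass
class HeadMotion:
    position: Tuple[float, float, float]            # e.g., meters
    orientation: Tuple[float, float, float, float]  # quaternion (x, y, z, w)
    velocity: Tuple[float, float, float]
    acceleration: Tuple[float, float, float]

@dataclass
class EyeMotion:
    left_gaze: Tuple[float, float]   # e.g., normalized gaze direction per eye
    right_gaze: Tuple[float, float]

def build_motion_message(head: HeadMotion, eyes: EyeMotion) -> str:
    # One periodic HMD -> information processing apparatus update.
    return json.dumps({"timestamp": time.time(),
                       "head": asdict(head),
                       "eyes": asdict(eyes)})

msg = build_motion_message(
    HeadMotion((0.0, 1.6, 0.0), (0.0, 0.0, 0.0, 1.0),
               (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)),
    EyeMotion((0.10, -0.05), (0.12, -0.04)),
)
```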
  • the HMD 1 also includes a microphone (not shown) for inputting the voice of the user wearing the HMD 1 and a speaker (not shown) for outputting voice and the like of each user as accessories.
  • the voice acquired by the microphone is transmitted to the information processing apparatus 10.
  • The speaker outputs the voice and the like of other users received from the information processing device 10. With such microphones and speakers, a plurality of users can hold a conversation (chat).
  • the microphone and the speaker may be devices integrated with the HMD 1 or may be devices different from the HMD 1.
  • the HMD 1 also includes a camera 2 (photographing device) for photographing a space in the vicinity of the user wearing the HMD 1 (in the present embodiment, the space in front of the user) as an accessory.
  • the HMD 1 and the camera 2 can communicate with each other.
  • The camera 2 may be a camera configured integrally with the main unit of the HMD 1, or a camera provided on a device (for example, a smartphone) detachably attached to the main unit of the HMD 1.
  • The camera 2 of the HMD 1 recognizes a specific area 4 on the desk 3 in front of the user 5 wearing the HMD 1, and images objects present on the specific area 4.
  • The specific area 4 is defined, for example, by a green-screen (chroma-key) mat placed on the desk 3.
  • Alternatively, a sensor capable of communicating with the camera 2 (or a marker recognizable by the camera 2) may be embedded at specific positions on the mat (for example, at the center or the four corners), and the camera 2 may recognize the specific area 4 based on the positions of the sensor (or marker) grasped through that communication (or recognition).
  • The camera 2 is not necessarily a device attached to the HMD 1; it may be a camera (a device separate from the HMD 1) fixedly arranged at a position from which it can photograph the space including the specific area 4. The camera 2 may also consist of a plurality of fixed cameras that capture the space including the specific area 4 from a plurality of different angles. In that case, a three-dimensional image of an object present on the specific area 4 can be obtained from the images captured at different angles by the different fixed cameras.
  • the camera 2 starts capturing an image of the real space including the specific area 4 in response to an operation by the user on a controller attached to the HMD 1 (or a controller separate from the HMD 1).
  • the video taken by the camera 2 is transmitted to the HMD 1 as needed, and displayed superimposed on the virtual space image displayed on the HMD 1.
  • The virtual space image is an image of the virtual space from an angle determined based on the motion information of the head and eyes of the user wearing the HMD 1.
  • the video captured by the camera 2 may be displayed on a small window (so-called wipe) provided at a corner (for example, the upper right corner or the like) of the virtual space image.
  • By viewing the virtual space image and, at the same time, checking this small window-like screen, the user can grasp the state of the real space including the specific area 4 while experiencing the virtual reality.
  • At this stage, however, an object on the specific area 4 has not yet been generated as a virtual object. It therefore cannot be handled (for example, carried) as a thing in the virtual space, and cannot be recognized by users other than the user 5.
  • the communication unit 11 transmits / receives data to / from an external device such as the HMD 1 (including a microphone, a speaker, a camera 2, a controller, and the like that are accessories of the HMD 1) via a wired or wireless communication network.
  • the communication unit 11 receives from the HMD 1 the motion information of the head and eyes of the user acquired in the HMD 1 as described above.
  • the communication unit 11 transmits the image generated by the image generation unit 16 described later to the HMD 1.
  • On each HMD 1, an image of the virtual space from an angle determined based on the motion information of that user's head and eyes is displayed.
  • The communication unit 11 also receives the voice of each user input to the above-described microphones and transmits the received voice of each user to each user's speaker. By such processing, voice is shared among the users, realizing the chat described above.
  • the image acquisition unit 12 acquires a real space image obtained by imaging a real space near the user.
  • the image acquisition unit 12 acquires an image (details will be described later) acquired by the above-described camera 2 as a real space image through the communication unit 11.
  • the virtual object generation unit 13 recognizes an object included in the real space image, and generates a virtual object corresponding to the object in the virtual space.
  • The virtual object generation unit 13 generates a virtual object corresponding to an object designated by the user among the plurality of objects included in the real space image. That is, the virtual object generation unit 13 does not automatically generate virtual objects corresponding to all the objects included in the real space image, but generates only the virtual object corresponding to the designated object. By such processing, only the virtual objects the user wants are generated, and the processing load of virtual object generation (hereinafter also referred to as "objectification") is reduced; that is, the load on and usage of hardware resources such as processors and memory can be reduced.
  • FIG. 2B shows a state in which two objects 6 (6A, 6B) exist on the specific area 4 in front of the user 5.
  • The object 6A is a plastic bottle containing a beverage, and the object 6B is a notebook PC operated by the user 5.
  • the camera 2 acquires a real space image including the specific area 4 and transmits it to the HMD 1.
  • the real space image including the objects 6A and 6B is displayed on the HMD 1.
  • The user 5 designates, by an operation using the above-described controller or the like, a target area in the real space image that includes the object 6 to be objectified (here, the object 6B as an example). The real space image and information indicating the target area are then transmitted from the HMD 1 to the information processing apparatus 10.
  • In the information processing apparatus 10, the image acquisition unit 12 acquires these pieces of information (the real space image and the information indicating the target area) via the communication unit 11. The virtual object generation unit 13 then performs known image recognition on the target area in the real space image, whereby appearance information of the object 6B included in the target area is extracted. As shown in FIG. 2B, the virtual object generation unit 13 generates a virtual object 8 corresponding to the object 6B based on the appearance information extracted in this way. A sketch of this designation-driven objectification follows.
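  • A minimal sketch, assuming a rectangular target area and treating the cropped pixels themselves as the extracted appearance information; names are hypothetical, and the crop stands in for the known image recognition mentioned above:

```python
from dataclasses import dataclass
from typing import List

Image = List[List[int]]  # toy grayscale image: rows of pixel values

@dataclass
class TargetArea:        # rectangle designated by the user via the controller
    x: int
    y: int
    width: int
    height: int

def extract_appearance(real_image: Image, area: TargetArea) -> Image:
    # Stand-in for the image recognition step: crop the designated target
    # area and treat the cropped pixels as the appearance information.
    return [row[area.x:area.x + area.width]
            for row in real_image[area.y:area.y + area.height]]

def generate_virtual_object(real_image: Image, area: TargetArea,
                            object_id: int) -> dict:
    # Only the designated area is objectified; objects outside it
    # (like the bottle 6A in the example) are left alone.
    return {"id": object_id, "appearance": extract_appearance(real_image, area)}

image = [[x + 10 * y for x in range(8)] for y in range(6)]
vobj = generate_virtual_object(image, TargetArea(2, 1, 3, 2), object_id=8)
```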
  • the user object 7 associated with the user 5 is disposed on the virtual space V.
  • The virtual object generation unit 13 determines the position of the virtual object 8 so that its position relative to the user object 7 in the virtual space V matches the position of the object 6B relative to the user 5 in the real space.
  • Accordingly, the user can perform an operation on the object 6B in the real space by performing an operation (for example, an operation of carrying it) on the virtual object 8 in the virtual space V via the user object 7.
  • However, the relative position of the virtual object 8 to the user object 7 need not coincide with the relative position of the object 6B to the user 5. That is, the virtual object generation unit 13 may generate the virtual object 8 at an arbitrary position in the virtual space (for example, a position designated by the user 5). The default placement rule is sketched below.
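  • The default placement rule amounts to one vector equation: the virtual object's offset from the user object equals the real object's offset from the user. A hedged sketch, assuming the two coordinate frames share axes:

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def place_virtual_object(user_pos: Vec3, object_pos: Vec3,
                         user_object_pos: Vec3) -> Vec3:
    # Default rule: the virtual object's offset from the user object equals
    # the real object's offset from the user. (An arbitrary or user-designated
    # position may be used instead, as noted above.)
    offset = tuple(o - u for o, u in zip(object_pos, user_pos))
    return tuple(a + d for a, d in zip(user_object_pos, offset))

# An object slightly to the right, 0.5 m below, and 0.4 m in front of the
# user maps to the same offset from the user object in the virtual space:
print(place_virtual_object((0.0, 0.0, 0.0), (0.2, -0.5, -0.4),
                           (10.0, 1.6, 5.0)))  # -> (10.2, 1.1, 4.6)
```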
  • the virtual object storage unit 14 stores information on the virtual object generated by the virtual object generation unit 13 (hereinafter, “virtual object information”).
  • The virtual object information includes, for each virtual object: a virtual object ID uniquely identifying the virtual object; appearance information for drawing the virtual object; the generation time at which the virtual object was generated; a camera ID uniquely identifying the camera 2 (or its user 5, etc.) that acquired the real space image from which the virtual object was generated; and sharing setting information indicating the users (or devices such as HMDs 1) permitted to share the virtual object.
  • the camera ID is associated with the real space image as additional information, for example, when the real space image is photographed by the camera 2.
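  • Expressed as a record, the virtual object information described above might look like the following; the field names are illustrative assumptions, not taken from the publication:

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class VirtualObjectInfo:
    virtual_object_id: int   # uniquely identifies the virtual object
    appearance: str          # appearance information for drawing it
    generated_at: float      # generation time (epoch seconds)
    camera_id: str           # camera 2 (and hence user) that captured the
                             # source real space image
    shared_with: Set[str] = field(default_factory=set)
                             # sharing setting information: users (or HMDs)
                             # permitted to share the virtual object
```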
  • The virtual space V is a space shared by a plurality of users. That is, the virtual space V is shared at least by a first user (here, the user 5) wearing a first HMD (HMD 1, first display device) and a second user (a user different from the user 5) wearing a second HMD (HMD 1, second display device).
  • the virtual space V is, for example, a chat space for conducting business communication such as a meeting among a plurality of users.
  • Here, the first user may not want the contents of a virtual object generated by objectification to be known to users other than specific users. For example, the first user may want a virtual object corresponding to a memo or the like containing confidential information to be viewable only by users holding a specific job title or higher.
  • Therefore, the sharing setting unit 15 sets whether a virtual object generated by the virtual object generation unit 13 based on the first user's designation is shared with the second user, according to operation content received from the first user for that virtual object.
  • a sharing setting screen for setting a user who is permitted to share the virtual object 8 is displayed on the first HMD.
  • On the sharing setting screen, for example, information indicating the appearance and the like of the virtual object 8 subject to the sharing setting, together with controls for setting the users permitted to share the virtual object 8, is displayed.
  • the sharing setting screen may be a setting screen capable of performing the sharing setting of each of the plurality of virtual objects.
  • The user 5 designates the users permitted (or not permitted) to share the virtual object 8 by operating the above-described controller or the like on the sharing setting screen.
  • The sharing setting unit 15 acquires the setting information generated by such an operation and sets the sharing setting information of the virtual object 8 based on it. Specifically, the sharing setting unit 15 accesses the virtual object information of the virtual object 8 stored in the virtual object storage unit 14 and sets or updates its sharing setting information, as sketched below.
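  • A sketch of how the sharing setting unit could apply received setting information to the stored record, reusing the hypothetical VirtualObjectInfo above:

```python
from typing import Dict, Iterable

def apply_sharing_setting(store: Dict[int, "VirtualObjectInfo"],
                          virtual_object_id: int,
                          permitted_users: Iterable[str]) -> None:
    # Access the stored virtual object information and set or update its
    # sharing setting information based on the received setting information.
    info = store[virtual_object_id]
    info.shared_with = set(permitted_users)

# e.g., user 5 permits only two colleagues to see virtual object 8:
# apply_sharing_setting(store, 8, ["user_a", "user_b"])
```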
  • The image generation unit 16 generates a virtual space image showing at least a part of the virtual space V including the virtual object 8 generated by the virtual object generation unit 13. Specifically, when the virtual object 8 falls within the virtual space image displayed on the HMD 1 (an image from an angle determined based on the motion information of the head and eyes of the user wearing the HMD 1), the image generation unit 16 generates a virtual space image that includes the virtual object 8.
  • When the virtual space V is shared by a plurality of users, the image generation unit 16 generates a virtual space image for each user (for each HMD 1).
  • Here, the image generation unit 16 does not display a virtual object 8 whose sharing with the second user is not permitted in the virtual space image displayed on the second user's HMD 1 (second HMD). That is, even if the virtual object 8 falls within the virtual space image for the second HMD, the image generation unit 16 hides it in that image. Conversely, when sharing of the virtual object 8 with the second user is permitted and the virtual object 8 falls within the virtual space image for the second HMD, the image generation unit 16 displays the virtual object 8 in that image.
  • The virtual space image generated by the image generation unit 16 for each user (each HMD 1) is transmitted to that user's HMD 1. Through such processing, each user views, via the HMD 1, a virtual space image in which the display or non-display of the virtual object 8 according to the above sharing setting is reflected. A minimal visibility filter is sketched below.
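  • A minimal sketch of this per-HMD visibility filtering, again assuming the hypothetical VirtualObjectInfo record; the rule that an object's creator always sees it is an assumption consistent with, but not spelled out in, the text:

```python
from typing import Dict, List

def visible_object_ids(store: Dict[int, "VirtualObjectInfo"],
                       viewer: str, viewer_camera_id: str) -> List[int]:
    # Draw a virtual object for this viewer only if the viewer generated it
    # (camera ID match) or sharing with the viewer is permitted; otherwise
    # the object is hidden even when it falls within the view.
    return [oid for oid, info in store.items()
            if info.camera_id == viewer_camera_id or viewer in info.shared_with]

def render_for(viewer: str, viewer_camera_id: str,
               store: Dict[int, "VirtualObjectInfo"]) -> str:
    ids = visible_object_ids(store, viewer, viewer_camera_id)
    return f"virtual space image for {viewer}, drawing objects {ids}"
```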
  • the object detection unit 17 detects an object corresponding to the virtual object from the real space image acquired by the image acquisition unit 12 after the virtual object is generated by the virtual object generation unit 13.
  • That is, the object detection unit 17 detects the object when the same object as an already-objectified object is included in a subsequently acquired real space image.
  • Specifically, the object detection unit 17 searches the virtual object information for an entry with which all of the following are associated: appearance information recognized by known image recognition (based on, for example, the contour, color, and shape of the object) as having at least a certain degree of similarity to the appearance of an object included in the newly acquired real space image; a camera ID indicating the camera 2 that captured that real space image; and a generation time earlier than the time at which that real space image was acquired. When such virtual object information is extracted, the object detection unit 17 detects the object included in the newly acquired real space image as an object corresponding to the virtual object indicated by the extracted virtual object information.
  • For example, suppose that, after the virtual object 8 is generated, the image acquisition unit 12 acquires a real space image including the object 6B. In this case, the object detection unit 17 searches for virtual object information associated with appearance information similar to the appearance of the object 6B included in that real space image, the camera ID indicating the camera 2 that captured the image, and a generation time earlier than the time the image was captured. As a result, the virtual object information of the virtual object 8 is extracted.
  • the object detection unit 17 detects the object 6B included in the real space image as an object corresponding to the virtual object 8.
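  • The three-condition search described above can be sketched as follows; appearance_similarity is a toy stand-in for the known image recognition (contour, color, shape), and the 0.8 threshold is an arbitrary assumption:

```python
from typing import Dict, Optional

def appearance_similarity(a: str, b: str) -> float:
    # Toy stand-in for known image recognition comparing contour, color,
    # and shape; a real system would return a graded similarity score.
    return 1.0 if a == b else 0.0

def find_corresponding_object(store: Dict[int, "VirtualObjectInfo"],
                              appearance: str, camera_id: str,
                              captured_at: float,
                              threshold: float = 0.8) -> Optional[int]:
    # Search for virtual object information whose appearance is similar
    # enough, whose camera ID matches the camera that captured the new
    # image, and whose generation time is earlier than the capture time.
    for oid, info in store.items():
        if (appearance_similarity(info.appearance, appearance) >= threshold
                and info.camera_id == camera_id
                and info.generated_at < captured_at):
            return oid   # the new image's object corresponds to this one
    return None
```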
  • the virtual object update unit 18 updates the state of the virtual object corresponding to the object based on the state of the object detected by the object detection unit 17.
  • the virtual object update unit 18 updates the state of the virtual object 8 corresponding to the object 6B based on the state of the object 6B detected by the object detection unit 17.
  • The state of the object 6B included in a real space image acquired at a time later than the time when the virtual object 8 was first generated may differ from the state of the object 6B at the time the virtual object 8 was generated.
  • For example, the screen of the object 6B (the notebook PC), which is part of its appearance, at the later time may differ from its screen at the time the virtual object 8 was generated.
  • In this case, the virtual object update unit 18 updates the state of the virtual object 8 corresponding to the object 6B (here, the contents of the screen) to the contents of the screen of the object 6B captured in the real space image acquired at the later time. Specifically, the virtual object update unit 18 updates the appearance information in the virtual object information of the virtual object 8 stored in the virtual object storage unit 14 based on the screen contents captured in that later real space image, and changes the generation time of the virtual object information to the time the update was performed. By such processing, the latest state of the object 6B in the real space can be reflected in the virtual object 8 corresponding to it.
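  • The update itself then reduces to overwriting two fields of the stored record (hypothetical names as before):

```python
import time
from typing import Dict

def update_virtual_object(store: Dict[int, "VirtualObjectInfo"],
                          virtual_object_id: int, new_appearance: str) -> None:
    # First process: reflect the object's latest real-space state (e.g., the
    # notebook PC's current screen) in the stored virtual object information.
    info = store[virtual_object_id]
    info.appearance = new_appearance   # overwrite the appearance information
    info.generated_at = time.time()    # record the time of this update
```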
  • the virtual object generation unit 13, the object detection unit 17, and the virtual object update unit 18 described above may execute the following processing.
  • When the object detection unit 17 detects, from a newly acquired real space image as described above, an object corresponding to an already-generated virtual object, it accepts the user's selection as to which of a first process of updating that virtual object (that is, the above-described process of the virtual object update unit 18) and a second process of generating a new virtual object corresponding to the object (that is, the above-described process of the virtual object generation unit 13) is to be executed.
  • For example, the object detection unit 17 causes the user's HMD 1 to display a selection screen for choosing between the first process and the second process, and acquires the result of the user's selection operation from the HMD 1.
  • When the user selects the first process, the virtual object update unit 18 executes it; when the user selects the second process, the virtual object generation unit 13 executes it. With such a configuration, the first process of updating an already-generated virtual object and the second process of generating a new virtual object can be switched appropriately according to the user's wishes. This dispatch is sketched below.
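  • Putting the pieces together, the first/second-process selection might be dispatched as follows, reusing the hypothetical VirtualObjectInfo and update_virtual_object from the sketches above; the selection strings are likewise assumptions:

```python
import time
from typing import Dict

def handle_detected_object(store: Dict[int, "VirtualObjectInfo"],
                           detected_id: int, new_appearance: str,
                           user_selection: str) -> int:
    # Dispatch according to the choice made on the HMD's selection screen.
    if user_selection == "update":             # first process
        update_virtual_object(store, detected_id, new_appearance)
        return detected_id
    if user_selection == "generate_new":       # second process: the new
        new_id = max(store) + 1                # object coexists with the old
        old = store[detected_id]
        store[new_id] = VirtualObjectInfo(new_id, new_appearance,
                                          time.time(), old.camera_id, set())
        return new_id
    raise ValueError(f"unknown selection: {user_selection}")
```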
  • FIG. 3 is a sequence diagram showing processing until a virtual object is generated.
  • FIG. 4 is a sequence diagram showing processing from generation of a virtual object to display of a virtual space image corresponding to the sharing setting on each HMD 1.
  • FIG. 5 is a sequence diagram showing the processing (updating a virtual object or generating a new one) performed when an already-objectified object is detected in a real space image.
  • First, the information processing apparatus 10 generates a virtual space V shared by a plurality of users (step S1). Specifically, a virtual space V is generated in which various objects, such as the user objects associated with each user, are arranged at initial positions. Virtual space data representing the generated virtual space V (an image of the virtual space as seen from each user object) is transmitted to each user's HMD 1 (here, the first HMD and the second HMD) (step S2). Each user thereby experiences, via his or her HMD 1, a virtual reality as if present in the virtual space V.
  • The first HMD instructs the camera 2 to start shooting in response to an operation on the controller or the like by the user 5 (first user) of the first HMD (step S3).
  • the camera 2 having received the shooting start instruction starts shooting of the real space including the specific area 4 (see FIG. 2), and acquires an image of the real space (step S4).
  • The video captured by the camera 2 is transmitted to the first HMD as needed (step S5) and displayed superimposed on the virtual space image displayed on the first HMD (step S6). For example, the video captured by the camera 2 is displayed on a small window-like screen (wipe) provided at a corner of the virtual space image.
  • Subsequently, the first HMD instructs the camera 2 to acquire a real space image in response to an operation on the controller or the like by the user 5. Here, the real space image is a still image serving as the basis for extracting a virtual object.
  • the camera 2 having received the image acquisition instruction acquires a real space image obtained by imaging the real space including the specific area 4 (step S8).
  • the real space image acquired by the camera 2 is transmitted to the first HMD (step S9) and displayed on the first HMD (step S10).
  • The first HMD acquires information indicating a target area in the real space image that includes the object to be objectified (here, an area including the object 6B as an example) by receiving an operation on the controller or the like by the user 5 (step S11).
  • the real space image acquired in step S9 and the information indicating the target area acquired in step S11 are transmitted to the information processing apparatus 10 (step S12).
  • the image acquisition unit 12 acquires information indicating the real space image and the target area transmitted in step S12 (step S13).
  • The virtual object generation unit 13 generates a virtual object 8 corresponding to the object 6B included in the target area by executing known image recognition on the target area in the real space image (step S14).
  • virtual object information on the virtual object 8 is stored in the virtual object storage unit 14.
  • Subsequently, the sharing setting unit 15 transmits data such as the appearance of the virtual object 8 to the first HMD (step S15), and the sharing setting screen described above (for example, a setting screen including the appearance of the virtual object 8 subject to the sharing setting) is displayed on the first HMD (step S16).
  • The first HMD (for example, a controller attached to it) acquires setting information indicating the contents of the sharing setting input by the user 5 on the sharing setting screen (step S17) and transmits the setting information to the information processing apparatus 10 (step S18).
  • the sharing setting unit 15 sets sharing setting information of the virtual object 8 based on the setting information (step S19).
  • the image generation unit 16 generates a virtual space image indicating at least a part of the virtual space V including the virtual object 8 generated by the virtual object generation unit 13 (step S20).
  • the image generation unit 16 generates a virtual space image for each user (for each HMD 1), transmits a virtual space image for the first HMD to the first HMD, and transmits a virtual space image for the second HMD to the second HMD (Steps S21 and S22).
  • a virtual space image is displayed in each of the first HMD and the second HMD (steps S23 and S24).
  • When sharing of the virtual object 8 with the second user is not permitted, the image generation unit 16 does not display the virtual object 8 in the virtual space image for the second HMD in step S20. That is, the image generation unit 16 generates a virtual space image in which the virtual object 8 is hidden. In this case, the virtual object 8 does not appear in the virtual space image displayed on the second HMD in step S24.
  • On the other hand, when sharing of the virtual object 8 with the second user is permitted, the image generation unit 16 displays the virtual object 8 in the virtual space image for the second HMD in step S20. As a result, the virtual object 8 appears in the virtual space image displayed on the second HMD in step S24.
  • steps S31 to S36 are the same as the processes of steps S8 to S13, and thus detailed description will be omitted.
  • the object detection unit 17 detects an object 6B corresponding to the virtual object 8 already generated from the real space image acquired in step S36 (step S37).
  • The object detection unit 17 then accepts the selection of the user 5 as to which of the first process (the process of the virtual object update unit 18) of updating the virtual object 8 and the second process (the process of the virtual object generation unit 13) of generating a new virtual object corresponding to the object 6B, distinct from the already-generated virtual object 8, is to be executed.
  • the object detection unit 17 notifies the first HMD that the object 6B corresponding to the virtual object 8 already generated is detected from the real space image (step S38).
  • the object detection unit 17 causes the first HMD to display a notification pop-up or the like.
  • The first HMD (its controller or the like) accepts the selection of the user 5 as to which of the first process and the second process is to be executed (step S39) and transmits the result of the selection to the information processing apparatus 10.
  • the information processing apparatus 10 executes a process according to the selection of the user 5 (step S41). Specifically, when the object detection unit 17 receives the selection of the user 5 indicating execution of the first process, the virtual object update unit 18 executes the first process. In this example, the virtual object update unit 18 updates the state of the virtual object 8 based on the state of the object 6B detected from the real space image acquired in step S36.
  • On the other hand, when the object detection unit 17 receives a selection of the user 5 indicating execution of the second process, the virtual object generation unit 13 executes the second process, generating a new virtual object based on the state of the object 6B detected from the real space image acquired in step S36. In this case, the newly generated virtual object and the already-generated virtual object 8 coexist in the virtual space V.
  • As described above, in the information processing apparatus 10, an object 6 included in a real space image capturing the real space near the user 5 is generated as a virtual object 8 in the virtual space V, and a virtual space image including the virtual object 8 (a virtual space image in which the virtual object 8 appears) is generated.
  • By such processing, the user 5 wearing the HMD 1 can visually recognize an object 6 present in his or her vicinity via the virtual space image. The information processing apparatus 10 can therefore improve the convenience of the virtual reality experience of the user 5.
  • the information processing apparatus 10 further includes an object detection unit 17 and a virtual object update unit 18.
  • When a real space image including the already-objectified object 6B is acquired again, the object detection unit 17 detects the object 6B from it, and the virtual object update unit 18 can update the virtual object 8 corresponding to the object 6B based on the state of the object 6B included in that real space image. As a result, the user can recognize the latest state of the object 6B in the real space through the virtual object 8 in the virtual space V.
  • When the object detection unit 17 detects an object 6B corresponding to the virtual object 8 from an acquired real space image, it accepts the user's selection as to which of the first process of updating the virtual object 8 and the second process of generating a new virtual object corresponding to the object 6B is to be executed. When the user's selection indicates the first process, the virtual object update unit 18 executes it; when the selection indicates the second process, the virtual object generation unit 13 executes it. With this configuration, updating the existing virtual object 8 and generating a new virtual object can be switched appropriately according to the user's wishes.
  • The virtual object generation unit 13 generates a virtual object 8 corresponding to the object 6 designated by the user 5 (the object 6B in the example of FIG. 2) among the plurality of objects 6 included in the real space image (the objects 6A and 6B in the example of FIG. 2).
  • This makes it possible to omit unnecessary objectification processing, reducing the processing load on the processor and suppressing an increase in the amount of memory used by unnecessary virtual objects.
  • The virtual space V is a space shared at least by the first user wearing the first HMD and the second user wearing the second HMD, and the information processing apparatus 10 includes the sharing setting unit 15 described above.
  • The image generation unit 16 does not display, in the virtual space image displayed on the second HMD, a virtual object whose sharing with the second user is not permitted. With this configuration, by performing the sharing setting described above for each virtual object, a specific virtual object (for example, an object corresponding to a document containing confidential information) can be made viewable only by the permitted users. This makes it possible to conduct business communication, such as meetings, via the virtual space V more smoothly.
  • Each functional block may be realized by one device that is physically and/or logically coupled, or by two or more physically and/or logically separated devices connected directly and/or indirectly (for example, by wire and/or wirelessly).
  • The information processing apparatus 10 in the above embodiment may function as a computer that performs the processing described in the above embodiment.
  • FIG. 6 is a diagram showing an example of the hardware configuration of the information processing apparatus 10 according to the present embodiment.
  • the above-described information processing apparatus 10 may be physically configured as a computer apparatus including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
  • the term “device” can be read as a circuit, a device, a unit or the like.
  • the hardware configuration of the information processing device 10 may be configured to include one or more of the devices illustrated in FIG. 6 or may be configured without including some devices.
  • Each function in the information processing apparatus 10 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, causing the processor 1001 to perform operations, and controlling communication by the communication device 1004 as well as reading and/or writing of data in the memory 1002 and the storage 1003.
  • the processor 1001 operates, for example, an operating system to control the entire computer.
  • the processor 1001 may be configured by a central processing unit (CPU) including an interface with a peripheral device, a control device, an arithmetic device, a register, and the like.
  • The processor 1001 reads a program (program code), software modules, and/or data from the storage 1003 and/or the communication device 1004 into the memory 1002 and executes various kinds of processing according to them.
  • As the program, a program that causes a computer to execute at least a part of the operations described in the above embodiment is used.
  • For example, the virtual object generation unit 13 of the information processing apparatus 10 may be realized by a control program stored in the memory 1002 and operating on the processor 1001, and the other functional blocks shown in FIG. 1 may be realized similarly.
  • While the various processes described above have been described as being executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001.
  • the processor 1001 may be implemented by one or more chips.
  • the program may be transmitted from the network via a telecommunication line.
  • The memory 1002 is a computer-readable recording medium and may be configured by, for example, at least one of ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), and RAM (Random Access Memory).
  • the memory 1002 may be called a register, a cache, a main memory (main storage device) or the like.
  • The memory 1002 can store a program (program code), software modules, and the like that are executable to carry out the information processing method according to the above embodiment (for example, the procedures shown in the sequence diagrams of FIGS. 3 to 5).
  • The storage 1003 is a computer-readable recording medium and may be, for example, at least one of an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, and a magnetic strip.
  • the storage 1003 may be called an auxiliary storage device.
  • the above-described storage medium may be, for example, a database including the memory 1002 and / or the storage 1003, a server, or any other suitable medium.
  • the communication device 1004 is hardware (transmission / reception device) for performing communication between computers via a wired and / or wireless network, and is also called, for example, a network device, a network controller, a network card, a communication module, or the like.
  • the input device 1005 is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, a sensor, and the like) that receives external input.
  • the output device 1006 is an output device (for example, a display, a speaker, an LED lamp, etc.) that performs output to the outside.
  • the input device 1005 and the output device 1006 may be integrated (for example, a touch panel).
  • each device such as the processor 1001 and the memory 1002 is connected by a bus 1007 for communicating information.
  • the bus 1007 may be configured by a single bus or may be configured by different buses among the devices.
  • The information processing apparatus 10 may include hardware such as a microprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), and a field programmable gate array (FPGA), and part or all of each functional block may be realized by such hardware. For example, the processor 1001 may be implemented by at least one of these pieces of hardware.
  • the input / output information may be stored in a specific place (for example, a memory) or may be managed by a management table. Information to be input or output may be overwritten, updated or added. The output information etc. may be deleted. The input information or the like may be transmitted to another device.
  • The determination may be made by a value represented by one bit (0 or 1), by a Boolean value (true or false), or by comparison of numerical values (for example, comparison with a predetermined value).
  • Whether called software, firmware, middleware, microcode, hardware description language, or any other name, software should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, and the like.
  • software, instructions and the like may be transmitted and received via a transmission medium.
  • For example, when software is transmitted from a website, server, or other remote source using wired technology such as coaxial cable, fiber optic cable, twisted pair, and digital subscriber line (DSL) and/or wireless technology such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of a transmission medium.
  • Data, instructions, commands, information, signals, bits, symbols, chips, and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
  • Information, parameters, and the like described in the present specification may be represented by absolute values, by relative values from predetermined values, or by other corresponding information.
  • the phrase “based on” does not mean “based only on,” unless expressly stated otherwise. In other words, the phrase “based on” means both “based only on” and “based at least on.”
  • "Determining" may encompass a wide variety of operations. "Determining" may include, for example, judging, calculating, computing, processing, deriving, investigating, looking up (for example, searching in a table, a database, or another data structure), and ascertaining. "Determining" may also include receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, and accessing (for example, accessing data in a memory). Further, "determining" may include resolving, selecting, choosing, establishing, comparing, and the like. That is, "determining" may include regarding some operation as "determining".
  • Reference signs: 1 ... HMD (display device); 5 ... user; 6, 6A, 6B ... object; 7 ... user object; 8 ... virtual object; 10 ... information processing apparatus; 12 ... image acquisition unit; 13 ... virtual object generation unit

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An information processing device 10 according to an embodiment is for providing an image of a virtual space V to be displayed on an HMD 1 worn by a user. The information processing device 10 comprises: an image acquisition unit 12 for acquiring a real-space image obtained by imaging a real space in the vicinity of the user; a virtual object generation unit 13 for recognizing an object 6 included in the real-space image and generating, in the virtual space V, a virtual object 8 corresponding to the object 6; and an image generation unit 16 for generating a virtual space image to be displayed on the HMD 1 that shows at least a part of the virtual space V including the virtual object 8.

Description

Information processing device
One aspect of the present invention relates to an information processing apparatus.
Conventionally, a technique called virtual reality (VR) is known in which a user wearing an HMD (Head Mounted Display) or the like is immersed in a virtual space by being provided with a view as if the user were present in that space (see, for example, Patent Document 1). In such VR technology, for example, a user object (avatar, character, etc.) linked to the user's actions (for example, movements of body parts such as the head and hands) is generated in the virtual space and controlled according to those actions. By displaying on the HMD an image showing the view seen from the user object, the user is provided with an experience as if existing as the user object in the virtual space.
JP 2017-55851 A
However, when, for example, business communication such as a meeting is conducted among a plurality of users via a virtual space, the user's complete inability to recognize things at hand (for example, meeting memos or a notebook PC) can impair the user's convenience.
Therefore, an object of one aspect of the present invention is to provide an information processing apparatus capable of improving the convenience of a user's virtual reality experience.
An information processing apparatus according to one aspect of the present invention is an apparatus for providing an image of a virtual space to be displayed on a display device worn by a user. It comprises: an image acquisition unit that acquires a real space image obtained by imaging the real space in the vicinity of the user; a virtual object generation unit that recognizes an object included in the real space image and generates a virtual object corresponding to that object in the virtual space; and an image generation unit that generates a virtual space image, showing at least a part of the virtual space including the virtual object, to be displayed on the display device.
According to the information processing apparatus of one aspect of the present invention, an object included in a real space image capturing the real space near the user is generated as a virtual object in the virtual space, and a virtual space image including that virtual object (a virtual space image in which the virtual object appears) is generated. By such processing, the user wearing the display device can visually recognize an object present in his or her vicinity via the virtual space image. The information processing apparatus can therefore improve the convenience of the user's virtual reality experience.
According to one aspect of the present invention, it is possible to provide an information processing apparatus capable of improving the convenience of a user's virtual reality experience.
FIG. 1 is a diagram showing the functional configuration of an information processing system including an information processing apparatus according to an embodiment. FIG. 2 is a diagram for explaining acquisition of a real space image and generation of a virtual object. FIGS. 3 to 5 are sequence diagrams each showing an example of the operation of the information processing system. FIG. 6 is a block diagram showing an example of the hardware configuration of the information processing apparatus.
Hereinafter, an embodiment of the present invention will be described in detail with reference to the accompanying drawings. In the description of the drawings, the same or corresponding elements are denoted by the same reference symbols, and redundant description is omitted.
FIG. 1 is a diagram showing the functional configuration of an information processing system 100 including an information processing apparatus 10 according to an embodiment of the present invention. The information processing apparatus 10 is an apparatus that provides a user with a virtual space in which arbitrary VR content such as a game space or a chat space unfolds, via a head mounted display (HMD) 1 (display device) worn by the user. That is, the information processing apparatus 10 provides the user with a virtual reality (VR) experience through images of the virtual space displayed on the HMD 1. The information processing apparatus 10 has a function of generating, in the virtual space, a virtual object corresponding to an object existing in the real space.
In the present embodiment, a case will be described in which a plurality of user objects (avatars, characters, etc.) operated by a plurality of users exist in the same virtual space. However, the processing of the information processing apparatus 10 described later can also be applied when only one user's user object exists in the virtual space.
As shown in FIG. 1, the information processing apparatus 10 includes a communication unit 11, an image acquisition unit 12, a virtual object generation unit 13, a virtual object storage unit 14, a sharing setting unit 15, an image generation unit 16, an object detection unit 17, and a virtual object update unit 18. The information processing apparatus 10 is, for example, a game terminal, a personal computer, a tablet terminal, or the like that can communicate with the HMDs 1 worn by each of a plurality of users. However, the implementation form of the information processing apparatus 10 is not limited to a specific form. For example, the information processing apparatus 10 may be a computer device incorporated in the same device as the HMD 1, or a server device or the like that can communicate with each user's HMD 1 (or with each computer terminal that controls the operation of each HMD 1) via a communication line such as the Internet. Further, the information processing apparatus 10 may be physically configured as a single device or as a plurality of devices. For example, it may be configured as a distributed system in which some functions (for example, those of the image generation unit 16) are realized by a computer terminal provided for each HMD 1 to control its operation, and the remaining functions are realized by a server device capable of communicating with those computer terminals.
The HMD 1 is a display device worn on the user's body (for example, the head). The HMD 1 includes, for example, a display unit that displays images (a left-eye image and a right-eye image) in front of the user's eyes while worn on the user's head. By displaying mutually different images (video) as the left-eye image and the right-eye image, a stereoscopic image (three-dimensional image) is perceived by the user. The display unit may be a display integrated with a main body worn on the user's body, such as a glasses-type or helmet-type body, or a device detachable from the main body of the HMD 1 (for example, the display of a terminal such as a smartphone attached to the main body) may function as the display unit.
The HMD 1 includes, for example, sensors (such as an acceleration sensor, angular velocity sensor, geomagnetic sensor, or gyro sensor) capable of detecting the position, orientation (tilt), velocity, acceleration, and the like of the user's head (that is, of the HMD 1). The HMD 1 periodically transmits the information on the motion of the user's head (position, orientation, velocity, acceleration, etc.) detected by these sensors to the information processing apparatus 10 as head motion information.
The HMD 1 also includes a sensor, such as an infrared camera, that detects the motion of the user's eyes (for example, the position and movement of the iris region). This is, for example, a sensor having a known eye-tracking function: it detects the motion of each eyeball, for example, by receiving light reflected from the cornea and the like of infrared light directed at the user's right and left eyes. The HMD 1 periodically transmits the eye motion information detected in this way to the information processing apparatus 10.
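The periodic reporting of head and eye motion described in the two preceding paragraphs might look like the following minimal sketch; the field names, the `read()` interfaces, and the reporting period are assumptions for illustration only.

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class HeadMotion:
    position: tuple       # (x, y, z) of the HMD 1 in real space
    orientation: tuple    # head tilt, e.g. (yaw, pitch, roll)
    velocity: tuple
    acceleration: tuple

@dataclass
class EyeMotion:
    left_gaze: tuple      # estimated from corneal reflection of infrared light
    right_gaze: tuple

def report_motion(head_sensor, eye_tracker, link, period_s=0.02):
    """Periodically push motion information to the information processing apparatus 10."""
    while True:
        link.send({"head": asdict(head_sensor.read()),
                   "eyes": asdict(eye_tracker.read())})
        time.sleep(period_s)
```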
The HMD 1 further includes, as accessories, a microphone (not shown) for inputting the voice of the user wearing the HMD 1 and a speaker (not shown) for outputting the voices of the users. Voice captured by the microphone is transmitted to the information processing apparatus 10, and the speaker outputs the voices of other users received from the information processing apparatus 10. The microphone and speaker make conversation (chat) among a plurality of users possible. The microphone and speaker may be devices integrated with the HMD 1 or devices separate from it.
The HMD 1 also includes, as an accessory, a camera 2 (imaging device) for capturing the space near the user wearing the HMD 1 (in the present embodiment, the space in front of the user). The HMD 1 and the camera 2 can communicate with each other. The camera 2 may be integrated with the main body of the HMD 1, or may be a camera provided in a device detachable from the main body of the HMD 1 (for example, a smartphone).
In the present embodiment, as shown in FIG. 2, the camera 2 of the HMD 1 recognizes a specific area 4 on a desk 3 in front of the user 5 wearing the HMD 1 and images the objects present in the specific area 4. For example, the specific area 4 is defined by a green-screen mat or the like placed on the desk 3. For example, sensors capable of communicating with the camera 2 (or markers recognizable by the camera 2) are embedded at specific positions of the mat (for example, at its center or four corners), and the camera 2 may recognize the specific area 4 based on the positions of the sensors (or markers) determined through communication with the sensors (or recognition of the markers). Note that the camera 2 need not be a device attached to the HMD 1; it may be a camera fixed at a position from which the space including the specific area 4 can be captured (a device separate from the HMD 1). The camera 2 may also be constituted by a plurality of fixed cameras that capture the space including the specific area 4 from a plurality of mutually different angles. In this case, a three-dimensional image of an object present in the specific area 4 can be obtained from the images of the different angles captured by the fixed cameras.
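As a sketch of marker-based recognition of the specific area 4, the following assumes ArUco markers at the four corners of the mat and OpenCV's legacy ArUco API (opencv-contrib-python; the exact function names vary between OpenCV versions). None of this is prescribed by the publication, which leaves the sensor/marker mechanism open.

```python
import cv2
import numpy as np

def detect_specific_area(frame_bgr):
    """Return the convex hull bounding the specific area 4, or None if not visible."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None or len(ids) < 4:
        return None  # the four corner markers of the mat are not all in view
    # One representative point per detected marker; their hull encloses the mat.
    centers = [c.reshape(-1, 2).mean(axis=0) for c in corners]
    return cv2.convexHull(np.array(centers, dtype=np.float32))
```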
For example, the camera 2 starts capturing the real space including the specific area 4 in response to a user operation on a controller attached to the HMD 1 (or a controller separate from the HMD 1). Video captured by the camera 2 is transmitted to the HMD 1 as needed and displayed superimposed on the virtual space image shown on the HMD 1. Here, the virtual space image is an image of the virtual space from an angle determined based on the head and eye motion information of the user wearing the HMD 1. For example, the video captured by the camera 2 may be displayed in a small window (a so-called wipe) provided at a corner of the virtual space image (for example, the upper right corner). In this case, the user can experience the virtual reality by viewing the virtual space image while also grasping the state of the space (real space) including the specific area 4 by checking the small window. Note that at this stage no virtual object has been generated for the objects in the specific area 4; therefore, an object in the specific area 4 cannot yet be handled as a thing in the virtual space (for example, carried around), nor can it be perceived by users other than this user.
The communication unit 11 transmits and receives data to and from external devices such as the HMD 1 (including its accessories such as the microphone, speaker, camera 2, and controller) via a wired or wireless communication network. In the present embodiment, the communication unit 11 receives from the HMD 1 the user's head and eye motion information acquired by the HMD 1 as described above. The communication unit 11 also transmits images generated by the image generation unit 16, described later, to the HMD 1. Through this processing, each HMD 1 worn by each user displays an image of the virtual space from an angle determined based on that user's head and eye motion information. Furthermore, the communication unit 11 receives each user's voice input to the above-described microphone and transmits the received voices to the speakers of the other users. Through this processing, voice is shared among the users and the above-described chat is realized.
The image acquisition unit 12 acquires a real space image obtained by imaging the real space near the user. For example, the image acquisition unit 12 acquires, via the communication unit 11, an image captured by the above-described camera 2 (described in detail later) as the real space image. The virtual object generation unit 13 recognizes an object included in the real space image and generates a virtual object corresponding to the object in the virtual space. In the present embodiment, the virtual object generation unit 13 generates the virtual object corresponding to the object designated by the user among the plurality of objects included in the real space image. That is, rather than immediately generating virtual objects for all the objects included in the real space image, the virtual object generation unit 13 generates only the virtual object corresponding to the object designated by the user. Through this processing, only the virtual objects the user wants are generated, which reduces the processing load of virtual object generation (hereinafter also referred to as "objectification"). That is, the processing load on, and usage of, hardware resources such as the processor and memory can be reduced.
FIG. 2(B) shows a state in which two objects 6 (6A, 6B) are present in the specific area 4 in front of the user 5. The object 6A is a plastic bottle containing a beverage, and the object 6B is a notebook PC operated by the user 5. For example, in response to a user operation (image acquisition instruction) on the above-described controller, the camera 2 acquires a real space image including the specific area 4 and transmits it to the HMD 1. Through this processing, the real space image including the objects 6A and 6B is displayed on the HMD 1. Then, for example, by operating the controller, the user 5 designates, within the real space image, a target region containing the object 6 to be objectified (here, the object 6B as an example). The real space image and information indicating the target region are then transmitted from the HMD 1 to the information processing apparatus 10.
The image acquisition unit 12 acquires these pieces of information (the real space image and the information indicating the target region) via the communication unit 11. The virtual object generation unit 13 then performs known image recognition on the target region of the real space image. Through this processing, the appearance information of the object 6B included in the target region is extracted. As shown in FIG. 2(B), the virtual object generation unit 13 generates the virtual object 8 corresponding to the object 6B based on the appearance information extracted in this way.
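The path from designated region to virtual object can be condensed as in the sketch below; `recognize` and `make_object` stand in for the known image recognition and the object construction, and the (x, y, w, h) region format is an assumption.

```python
def objectify_designated_region(real_image, target_region, recognize, make_object):
    """Build one virtual object from the user-designated region only."""
    x, y, w, h = target_region            # region chosen with the controller
    crop = real_image[y:y + h, x:x + w]   # other objects (e.g. 6A) are ignored
    appearance = recognize(crop)          # extract appearance information of 6B
    return make_object(appearance)        # the virtual object 8
```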
In the example of FIG. 2(B), a user object 7 associated with the user 5 is placed in the virtual space V. In the present embodiment, as one example, the virtual object generation unit 13 determines the position of the virtual object 8 so that the position of the virtual object 8 relative to the user object 7 in the virtual space V matches the position of the object 6B relative to the user 5 in the real space. Through this processing, the user can operate on the real object 6B by performing operations on the virtual object 8 (for example, carrying it) via the user object 7 in the virtual space V. However, the position of the virtual object 8 relative to the user object 7 need not match the position of the object 6B relative to the user 5; that is, the virtual object generation unit 13 may generate the virtual object 8 at an arbitrary position in the virtual space (for example, a position designated by the user 5).
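The default placement rule amounts to preserving the offset vector between user and object, as in this sketch (representing positions as 3-vectors is an assumption):

```python
import numpy as np

def place_virtual_object(user_pos_real, object_pos_real, user_object_pos_v):
    """Position in virtual space V that preserves the real-space offset."""
    offset = np.asarray(object_pos_real) - np.asarray(user_pos_real)
    return np.asarray(user_object_pos_v) + offset
```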
The virtual object storage unit 14 stores information on the virtual objects generated by the virtual object generation unit 13 (hereinafter, "virtual object information"). In the present embodiment, as one example, the virtual object information includes, for each virtual object: a virtual object ID uniquely identifying the virtual object; appearance information for drawing the virtual object; the generation time at which the virtual object was generated; a camera ID uniquely identifying the camera 2 that acquired the real space image from which the virtual object was generated (or the user 5 of that camera 2, etc.); and sharing setting information indicating the users (or devices such as HMDs 1) permitted to share the virtual object. The camera ID is associated with the real space image as additional information, for example, when the real space image is captured by the camera 2.
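One record of virtual object information, as enumerated above, might be represented like this; the field names and types are assumptions, since the publication only lists the items.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VirtualObjectInfo:
    object_id: str                   # uniquely identifies the virtual object
    appearance: bytes                # data used to draw the virtual object
    generated_at: datetime           # generation time (re-stamped on update)
    camera_id: str                   # camera 2 that captured the source image
    shared_with: set = field(default_factory=set)  # users permitted to share
```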
Here, the virtual space V is a space shared by a plurality of users. That is, the virtual space V is shared at least by a first user (here, the user 5) wearing a first HMD (an HMD 1; first display device) and a second user (a user different from the user 5) wearing a second HMD (an HMD 1; second display device). The virtual space V is, for example, a chat space for business communication such as meetings among a plurality of users. In such a case, the first user may want to objectify a real-space object while keeping the contents of the resulting virtual object from being known to users other than specific ones. For example, the first user may want a virtual object corresponding to a memo containing confidential information to be viewable only by users at or above a certain job title.
Therefore, for a virtual object generated by the virtual object generation unit 13 based on the first user's designation, the sharing setting unit 15 sets whether or not the virtual object is shared with the second user, according to the operation content received from the first user. In the example of FIG. 2(B), for example, a sharing setting screen for setting the users permitted to share the virtual object 8 is displayed on the first HMD. The sharing setting screen shows, for example, information indicating the appearance and the like of the virtual object 8 subject to the sharing setting, and a screen for setting the users permitted to share the virtual object 8. When a plurality of virtual objects have been generated by the virtual object generation unit 13, the sharing setting screen may be a setting screen on which the sharing settings of each of the plurality of virtual objects can be configured.
The user 5 (first user) designates the users permitted to share the virtual object 8 (or the users not permitted to share it) by operating the above-described controller on the sharing setting screen. The sharing setting unit 15 acquires the setting information generated by this operation and sets the sharing setting information of the virtual object 8 based on it. Specifically, the sharing setting unit 15 accesses the virtual object information of the virtual object 8 stored in the virtual object storage unit 14 and sets or updates its sharing setting information.
The image generation unit 16 generates a virtual space image showing at least part of the virtual space V including the virtual object 8 generated by the virtual object generation unit 13. Specifically, when the virtual object 8 appears in the virtual space image to be displayed on an HMD 1 (an image from an angle determined based on the head and eye motion information of the user wearing that HMD 1), the image generation unit 16 generates a virtual space image including the virtual object 8. When the virtual space V is shared by a plurality of users, the image generation unit 16 generates a virtual space image for each user (for each HMD 1).
Here, through the processing of the sharing setting unit 15 described above, there may be cases in which sharing of a certain virtual object (for example, the virtual object 8) with a certain user (hereinafter, the second user) is not permitted (that is, the second user is not permitted to view the virtual object 8). In such cases, the image generation unit 16 does not display the virtual object 8, whose sharing with the second user is not permitted, in the virtual space image displayed on the second user's HMD 1 (the second HMD). That is, even when the virtual object 8 falls within the virtual space image for the second HMD, the image generation unit 16 hides the virtual object 8 in that image. On the other hand, when sharing of the virtual object 8 with the second user is permitted and the virtual object 8 falls within the virtual space image for the second HMD, the image generation unit 16 displays the virtual object 8 in that image. Through this processing, only the users permitted to share the virtual object 8 can perceive it.
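The per-viewer visibility rule reduces to a filter over the sharing settings, for example as follows (reusing the VirtualObjectInfo sketch above; `render` is a placeholder for the actual drawing):

```python
def visible_objects(all_objects, viewer_id):
    """Only objects whose sharing settings include the viewer are returned."""
    return [o for o in all_objects if viewer_id in o.shared_with]

def render_for(viewer_id, all_objects, render):
    for obj in visible_objects(all_objects, viewer_id):
        render(obj)  # objects not shared with this viewer are simply omitted
```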
The virtual space image generated for each user (for each HMD 1) by the image generation unit 16 is transmitted to that user's HMD 1. Through this processing, each user views, via the HMD 1, a virtual space image in which the display or non-display of the virtual object 8 according to the above-described sharing settings is reflected.
The object detection unit 17 detects, from a real space image further acquired by the image acquisition unit 12 after a virtual object has been generated by the virtual object generation unit 13, the object corresponding to that virtual object.
Specifically, the object detection unit 17 detects an object when the further-acquired real space image contains the same object as one that has already been objectified. For example, the object detection unit 17 searches the one or more pieces of virtual object information stored in the virtual object storage unit 14 for a record that associates: appearance information similar to the appearance of an object included in the further-acquired real space image (for example, appearance information recognized, by known image recognition based on the object's outline, color, shape, and the like, as similar beyond a certain degree); a camera ID indicating the camera 2 that captured the further-acquired real space image; and a generation time earlier than the time at which the further-acquired real space image was captured. When virtual object information matching these conditions is extracted, the object detection unit 17 detects the object included in the further-acquired real space image as the object corresponding to the virtual object indicated by the extracted virtual object information.
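The search condition of the object detection unit 17 thus combines three tests, roughly as below; `similarity` stands in for the known image-recognition comparison, and the threshold is an assumed parameter.

```python
def find_matching_record(records, appearance, camera_id, captured_at,
                         similarity, threshold=0.8):
    """Return the stored record matching an already-objectified object, if any."""
    for rec in records:
        if (rec.camera_id == camera_id                  # same camera 2
                and rec.generated_at < captured_at      # generated earlier
                and similarity(rec.appearance, appearance) >= threshold):
            return rec
    return None  # no match: the object has not been objectified before
```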
The processing of the object detection unit 17 described above will now be explained concretely using the example of FIG. 2(B). In this example, at a time after the virtual object 8 corresponding to the object 6B (including the screen of the notebook PC) has been generated, the image acquisition unit 12 acquires a real space image including the object 6B. In this case, the object detection unit 17 searches the one or more pieces of virtual object information stored in the virtual object storage unit 14 for a record that associates appearance information similar to the appearance of the object 6B included in the real space image, a camera ID indicating the camera 2 that captured the real space image, and a generation time earlier than the time at which the real space image was captured. Through this processing, the virtual object information of the virtual object 8 is extracted. As a result, the object detection unit 17 detects the object 6B included in the real space image as the object corresponding to the virtual object 8.
The virtual object update unit 18 updates the state of the virtual object corresponding to an object based on the state of the object detected by the object detection unit 17. In the example of FIG. 2(B) described above, the virtual object update unit 18 updates the state of the virtual object 8 corresponding to the object 6B based on the state of the object 6B detected by the object detection unit 17. For example, the state of the object 6B in a real space image acquired after the virtual object 8 was first generated may differ from the state of the object 6B at the time the virtual object 8 was generated. Specifically, the screen of the object 6B (the notebook PC), which is part of its appearance, at the later time may differ from its screen at the time the virtual object 8 was generated. The virtual object update unit 18 therefore updates the state of the virtual object 8 corresponding to the object 6B (here, the contents of the screen) to the contents of the screen of the object 6B captured in the later real space image. Specifically, the virtual object update unit 18 updates the appearance information in the virtual object information of the virtual object 8 stored in the virtual object storage unit 14 based on the contents of the screen of the object 6B captured in the later real space image. The virtual object update unit 18 also changes the generation time in the virtual object information of the virtual object 8 to the time at which the update was performed. Through the above processing, the latest state of the object 6B in the real space can be reflected in the virtual object 8 corresponding to the object 6B.
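The first process (update) then amounts to overwriting the stored appearance and re-stamping the generation time, as sketched here with the record format assumed above:

```python
from datetime import datetime

def update_virtual_object(record, new_appearance):
    """Reflect the latest captured state of object 6B in virtual object 8."""
    record.appearance = new_appearance    # e.g. the new screen contents
    record.generated_at = datetime.now()  # update time becomes the new generation time
    return record
```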
Here, there may be cases in which the user wants a virtual object corresponding to the old state of an object and a virtual object corresponding to its new state to coexist in the virtual space V for side-by-side comparison. For this purpose, the virtual object generation unit 13, the object detection unit 17, and the virtual object update unit 18 described above may execute the following processing.
That is, when the object detection unit 17 detects, from a further-acquired real space image as described above, an object corresponding to an already-generated virtual object, it accepts the user's selection of whether to execute a first process that updates the virtual object (that is, the processing of the virtual object update unit 18 described above) or a second process that generates a new virtual object corresponding to the object (that is, the processing of the virtual object generation unit 13 described above). For example, the object detection unit 17 causes the user's HMD 1 to display a selection screen for choosing between the first process and the second process, and acquires the result of the user's selection operation performed with the controller (the user's selection).
When the object detection unit 17 receives a user selection indicating that the first process is to be executed, the virtual object update unit 18 executes the first process. On the other hand, when the object detection unit 17 receives a user selection indicating that the second process is to be executed, the virtual object generation unit 13 executes the second process. With this configuration, the first process of updating an already-generated virtual object and the second process of generating a new virtual object can be switched appropriately according to the user's wishes.
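The resulting switch between the first and second process is a simple dispatch on the user's selection; `update` and `create` stand in for the processing of units 18 and 13 respectively, and the string values of `choice` are assumptions.

```python
def handle_redetected_object(choice, record, new_appearance, update, create):
    if choice == "first":   # update the existing virtual object (unit 18)
        return update(record, new_appearance)
    else:                   # "second": create a new object that coexists with
        return create(new_appearance)          # the old one in virtual space V
```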
Next, an example of the operation of the information processing system 100 will be described with reference to FIGS. 2 to 5. FIG. 3 is a sequence diagram showing the processing up to the generation of a virtual object. FIG. 4 is a sequence diagram showing the processing from the generation of a virtual object to the display, on each HMD 1, of a virtual space image reflecting the sharing settings. FIG. 5 is a sequence diagram showing the processing performed when an already-objectified object is detected in a real space image (update of the virtual object or generation of a new one).
As shown in FIG. 3, first, the information processing apparatus 10 generates a virtual space V shared by a plurality of users (step S1). Specifically, a virtual space V is generated in which various objects, such as the user objects associated with each user, are placed at their initial positions. Virtual space data representing the generated virtual space V (images of the virtual space as seen from each user object) is transmitted to each user's HMD 1 (here, the first HMD and the second HMD) (step S2). Each user thereby experiences, via their HMD 1, a virtual reality as if they were in the virtual space V.
Subsequently, the first HMD instructs the camera 2 to start capturing in response to an operation on the controller by the user 5 of the first HMD (the first user) (step S3). Having received the capture start instruction, the camera 2 starts capturing the real space including the specific area 4 (see FIG. 2) and acquires video of the real space (step S4). The video captured by the camera 2 is transmitted to the first HMD as needed (step S5) and displayed superimposed on the virtual space image displayed on the first HMD (step S6). For example, the video captured by the camera 2 is displayed in a small window (wipe) provided at a corner of the virtual space image.
Subsequently, the first HMD instructs the camera 2 to acquire a real space image in response to an operation on the controller by the user 5 (step S7). Here, the real space image is a still image serving as the basis from which virtual objects are extracted. Having received the image acquisition instruction, the camera 2 acquires a real space image of the real space including the specific area 4 (step S8). The real space image acquired by the camera 2 is transmitted to the first HMD (step S9) and displayed on the first HMD (step S10).
Subsequently, by accepting the user 5's operation on the controller, the first HMD acquires information indicating the target region of the real space image containing the object to be objectified (here, as an example, the region containing the object 6B) (step S11). The real space image acquired in step S9 and the information indicating the target region acquired in step S11 are then transmitted to the information processing apparatus 10 (step S12).
Subsequently, the image acquisition unit 12 acquires the real space image and the information indicating the target region transmitted in step S12 (step S13). The virtual object generation unit 13 then generates the virtual object 8 corresponding to the object 6B included in the target region by performing known image recognition on the target region of the real space image (step S14). At this time, the virtual object information on the virtual object 8 is stored in the virtual object storage unit 14.
Subsequently, as shown in FIG. 4, the sharing setting unit 15 transmits data such as the appearance of the virtual object 8 to the first HMD (step S15) and causes the first HMD to display the above-described sharing setting screen (for example, a setting screen including the appearance of the virtual object 8 subject to the sharing settings) (step S16). The first HMD (for example, the controller attached to the first HMD) acquires setting information indicating the contents of the sharing settings input by the user 5 on the sharing setting screen (step S17) and transmits it to the information processing apparatus 10 (step S18). The sharing setting unit 15 then sets the sharing setting information of the virtual object 8 based on the setting information (step S19).
Subsequently, the image generation unit 16 generates virtual space images showing at least part of the virtual space V including the virtual object 8 generated by the virtual object generation unit 13 (step S20). Here, the image generation unit 16 generates a virtual space image for each user (for each HMD 1), transmits the virtual space image for the first HMD to the first HMD, and transmits the virtual space image for the second HMD to the second HMD (steps S21 and S22). The virtual space images are then displayed on the first HMD and the second HMD, respectively (steps S23 and S24).
Here, when sharing of the virtual object 8 with the user of the second HMD (the second user) is not permitted, the image generation unit 16 does not display the virtual object 8 in the virtual space image for the second HMD in step S20. That is, the image generation unit 16 generates a virtual space image in which the virtual object 8 is hidden; in this case, the virtual object 8 does not appear in the virtual space image displayed on the second HMD in step S24. On the other hand, when sharing of the virtual object 8 with the second user is permitted, the image generation unit 16 displays the virtual object 8 in the virtual space image for the second HMD in step S20. As a result, the virtual object 8 appears in the virtual space image displayed on the second HMD in step S24.
Next, with reference to FIG. 5, an example of the processing performed when a real space image is acquired again after the virtual object 8 has been generated will be described.
The processing of steps S31 to S36 is the same as that of steps S8 to S13, so a detailed description is omitted. Subsequently, the object detection unit 17 detects, from the real space image acquired in step S36, the object 6B corresponding to the already-generated virtual object 8 (step S37).
Subsequently, the object detection unit 17 accepts the user 5's selection of whether to execute the first process of updating the virtual object 8 (the processing of the virtual object update unit 18) or the second process of generating a new virtual object corresponding to the object 6B (a new object distinct from the already-generated virtual object 8; the processing of the virtual object generation unit 13). To this end, for example, the object detection unit 17 notifies the first HMD that the object 6B corresponding to the already-generated virtual object 8 has been detected in the real space image (step S38), for example by causing the first HMD to display a notification pop-up.
Subsequently, the first HMD (via the controller, etc.) accepts the user 5's selection of whether to execute the first process or the second process (step S39) and transmits the result of the selection to the information processing apparatus 10 (step S40). The information processing apparatus 10 then executes the process corresponding to the user 5's selection (step S41). Specifically, when the object detection unit 17 receives a selection by the user 5 indicating that the first process is to be executed, the virtual object update unit 18 executes the first process: in this example, the virtual object update unit 18 updates the state of the virtual object 8 based on the state of the object 6B detected in the real space image acquired in step S36. On the other hand, when the object detection unit 17 receives a selection by the user 5 indicating that the second process is to be executed, the virtual object generation unit 13 executes the second process: in this example, the virtual object generation unit 13 generates a new virtual object based on the state of the object 6B detected in the real space image acquired in step S36. In this case, the newly generated virtual object and the already-generated virtual object 8 coexist in the virtual space V.
According to the information processing apparatus 10 described above, an object 6 (in the present embodiment, the object 6B) included in a real space image of the real space near the user 5 is generated as a virtual object 8 in the virtual space V, and a virtual space image including the virtual object 8 (a virtual space image in which the virtual object 8 appears) is generated. Through this processing, the user 5 wearing the HMD 1 can view, via the virtual space image, objects 6 present in their own vicinity. The information processing apparatus 10 therefore improves the convenience of the user 5's virtual reality experience.
The information processing apparatus 10 also includes the object detection unit 17 and the virtual object update unit 18. Thus, when a real space image including the already-objectified object 6B is acquired again, the virtual object 8 corresponding to the object 6B can be updated based on the state of the object 6B in that image. As a result, the latest state of the object 6B in the real space can be perceived through the virtual object 8 in the virtual space V. Moreover, by not generating multiple virtual objects for the same object, memory usage can be kept down.
Furthermore, when the object detection unit 17 detects the object 6B corresponding to the virtual object 8 in a further-acquired real space image, it accepts the user's selection of whether to execute the first process of updating the virtual object 8 or the second process of generating a new virtual object corresponding to the object 6B. When the object detection unit 17 receives a user selection indicating the first process, the virtual object update unit 18 executes the first process; when it receives a user selection indicating the second process, the virtual object generation unit 13 executes the second process. With this configuration, updating the existing virtual object 8 and generating a new virtual object can be switched appropriately according to the user's wishes.
The virtual object generation unit 13 also generates the virtual object 8 corresponding to the object 6 designated by the user 5 (in the example of FIG. 2, the object 6B) among the plurality of objects 6 (in the example of FIG. 2, the objects 6A and 6B) included in the real space image. With this configuration, only the virtual objects corresponding to objects the user wants are generated, so unnecessary objectification processing is omitted, which reduces the processor's workload and suppresses the increase in memory usage caused by unneeded virtual objects.
The virtual space V is a space shared at least by the first user wearing the first HMD and the second user wearing the second HMD, and the information processing apparatus 10 includes the sharing setting unit 15 described above. The image generation unit 16 does not display virtual objects whose sharing with the second user is not permitted in the virtual space image displayed on the second HMD. With this configuration, by making the sharing settings described above for each virtual object, a specific virtual object (for example, an object corresponding to a document containing confidential information) can be made viewable only by specific users (for example, users at or above a certain job title). This enables business communication via the virtual space V, such as meetings, to proceed more smoothly.
The block diagram used in the description of the above embodiment (FIG. 1) shows blocks in functional units. These functional blocks (components) are realized by any combination of hardware and/or software, and the means of realizing each functional block is not particularly limited. That is, each functional block may be realized by one physically and/or logically coupled device, or by two or more physically and/or logically separate devices connected directly and/or indirectly (for example, by wire and/or wirelessly).
For example, the information processing apparatus 10 in the above embodiment may function as a computer that performs the processing of the information processing apparatus 10 of the above embodiment. FIG. 6 is a diagram showing an example of the hardware configuration of the information processing apparatus 10 according to the present embodiment. The information processing apparatus 10 described above may be physically configured as a computer device including a processor 1001, memory 1002, storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
In the following description, the term "device" can be read as a circuit, unit, or the like. The hardware configuration of the information processing apparatus 10 may include one or more of each of the devices shown in FIG. 6, or may omit some of them.
Each function of the information processing apparatus 10 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, whereby the processor 1001 performs computation and controls communication by the communication device 1004 as well as the reading and/or writing of data in the memory 1002 and the storage 1003.
The processor 1001, for example, runs an operating system to control the entire computer. The processor 1001 may be configured as a central processing unit (CPU) including interfaces with peripheral devices, a control device, an arithmetic device, registers, and the like.
The processor 1001 also reads programs (program code), software modules, and/or data from the storage 1003 and/or the communication device 1004 into the memory 1002 and executes various kinds of processing according to them. As the program, a program that causes a computer to execute at least part of the operations described in the above embodiment is used. For example, the virtual object generation unit 13 of the information processing apparatus 10 may be realized by a control program stored in the memory 1002 and running on the processor 1001, and the other functional blocks shown in FIG. 1 may be realized in the same way. Although the various kinds of processing described above have been explained as being executed by a single processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented as one or more chips. The program may be transmitted from a network via a telecommunication line.
The memory 1002 is a computer-readable recording medium and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), and RAM (Random Access Memory). The memory 1002 may also be called a register, cache, main memory (primary storage), or the like. The memory 1002 can store executable programs (program code), software modules, and the like for implementing the information processing method according to the above embodiment (for example, the procedures shown in the sequence diagrams of FIGS. 3 to 5).
The storage 1003 is a computer-readable recording medium and may be composed of at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, digital versatile disc, or Blu-ray (registered trademark) disc), a smart card, flash memory (for example, a card, stick, or key drive), a floppy (registered trademark) disk, and a magnetic strip. The storage 1003 may also be called an auxiliary storage device. The above-mentioned storage media may be, for example, a database, server, or other suitable medium including the memory 1002 and/or the storage 1003.
The communication device 1004 is hardware (a transmitting/receiving device) for communication between computers via a wired and/or wireless network, and is also called, for example, a network device, network controller, network card, or communication module.
The input device 1005 is an input device that accepts input from the outside (for example, a keyboard, mouse, microphone, switch, button, or sensor). The output device 1006 is an output device that produces output to the outside (for example, a display, speaker, or LED lamp). The input device 1005 and the output device 1006 may be integrated (for example, as a touch panel).
The devices such as the processor 1001 and the memory 1002 are connected by a bus 1007 for communicating information. The bus 1007 may be a single bus, or different buses may be used between different devices.
The information processing apparatus 10 may also include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array), and some or all of the functional blocks may be realized by such hardware. For example, the processor 1001 may be implemented with at least one of these kinds of hardware.
Although the present invention has been described in detail above, it is apparent to those skilled in the art that the present invention is not limited to the embodiments described herein. The present invention can be implemented in modified and altered forms without departing from the spirit and scope of the present invention as defined by the claims. Accordingly, the description herein is for illustrative purposes only and has no restrictive meaning with respect to the present invention.
The processing procedures, sequences, flowcharts, and the like of the aspects/embodiments described herein may be reordered as long as no contradiction arises. For example, the methods described herein present the elements of the various steps in an exemplary order and are not limited to the specific order presented.
Input and output information and the like may be stored in a specific location (for example, memory) or managed in a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.
A determination may be made by a value represented by one bit (0 or 1), by a Boolean value (true or false), or by numerical comparison (for example, comparison with a predetermined value).
The aspects/embodiments described herein may be used alone, in combination, or switched between in the course of execution.
 ソフトウェアは、ソフトウェア、ファームウェア、ミドルウェア、マイクロコード、ハードウェア記述言語と呼ばれるか、他の名称で呼ばれるかを問わず、命令、命令セット、コード、コードセグメント、プログラムコード、プログラム、サブプログラム、ソフトウェアモジュール、アプリケーション、ソフトウェアアプリケーション、ソフトウェアパッケージ、ルーチン、サブルーチン、オブジェクト、実行可能ファイル、実行スレッド、手順、機能等を意味するよう広く解釈されるべきである。 Software may be called software, firmware, middleware, microcode, hardware description language, or any other name, and may be instructions, instruction sets, codes, code segments, program codes, programs, subprograms, software modules. Should be interpreted broadly to mean applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc.
 また、ソフトウェア、命令等は、伝送媒体を介して送受信されてもよい。例えば、ソフトウェアが、同軸ケーブル、光ファイバケーブル、ツイストペア及びデジタル加入者回線(DSL)等の有線技術及び/又は赤外線、無線及びマイクロ波等の無線技術を使用してウェブサイト、サーバ、又は他のリモートソースから送信される場合、これらの有線技術及び/又は無線技術は、伝送媒体の定義内に含まれる。 Also, software, instructions and the like may be transmitted and received via a transmission medium. For example, software may use a wireline technology such as coaxial cable, fiber optic cable, twisted pair and digital subscriber line (DSL) and / or a website, server or other using wireless technology such as infrared, radio and microwave When transmitted from a remote source, these wired and / or wireless technologies are included within the definition of transmission medium.
 本明細書で説明した情報及び信号等は、様々な異なる技術のいずれかを使用して表されてもよい。例えば、上記の説明全体に渡って言及され得るデータ、命令、コマンド、情報、信号、ビット、シンボル、チップ等は、電圧、電流、電磁波、磁界若しくは磁性粒子、光場若しくは光子、又はこれらの任意の組み合わせによって表されてもよい。 The information, signals, etc. described herein may be represented using any of a variety of different techniques. For example, data, instructions, commands, information, signals, bits, symbols, chips etc that may be mentioned throughout the above description may be voltage, current, electromagnetic waves, magnetic fields or particles, light fields or photons, or any of these May be represented by a combination of
 なお、本明細書で説明した用語及び/又は本明細書の理解に必要な用語については、同一の又は類似する意味を有する用語と置き換えてもよい。 The terms described in the present specification and / or the terms necessary for the understanding of the present specification may be replaced with terms having the same or similar meanings.
 また、本明細書で説明した情報、パラメータ等は、絶対値で表されてもよいし、所定の値からの相対値で表されてもよいし、対応する別の情報で表されてもよい。 In addition, the information, parameters, and the like described in the present specification may be represented by an absolute value, may be represented by a relative value from a predetermined value, or may be represented by corresponding other information. .
 上述したパラメータに使用される名称はいかなる点においても限定的なものではない。さらに、これらのパラメータを使用する数式等は、本明細書で明示的に開示したものと異なる場合もある。 The names used for the parameters described above are in no way limiting. In addition, the formulas etc. that use these parameters may differ from those explicitly disclosed herein.
 本明細書で使用する「に基づいて」という記載は、別段に明記されていない限り、「のみに基づいて」を意味しない。言い換えれば、「に基づいて」という記載は、「のみに基づいて」と「に少なくとも基づいて」との両方を意味する。 As used herein, the phrase "based on" does not mean "based only on," unless expressly stated otherwise. In other words, the phrase "based on" means both "based only on" and "based at least on."
 「含む(include)」、「含んでいる(including)」、及びそれらの変形が、本明細書あるいは特許請求の範囲で使用されている限り、これら用語は、用語「備える(comprising)」と同様に、包括的であることが意図される。さらに、本明細書あるいは特許請求の範囲において使用されている用語「又は(or)」及び「或いは(or)」は、排他的論理和ではないことが意図される。 As long as “include”, “including”, and variations thereof are used in the present specification or claims, these terms are as used in the term “comprising”. Is intended to be comprehensive. Furthermore, it is intended that the terms "or" and "or" as used in the present specification or in the claims are not exclusive ORs.
 本明細書において、文脈又は技術的に明らかに1つのみしか存在しない装置であることが示されていなければ、複数の装置をも含むものとする。 In the present specification, a plurality of devices are also included unless a context or technically apparent device is shown as having only one.
 本明細書で使用する「決定(determining)」という用語は、多種多様な動作を包含する場合がある。「決定」は、例えば、判定(judging)、計算(calculating)、算出(computing)、処理(processing)、導出(deriving)、調査(investigating)、探索(looking up)(例えば、テーブル、データベースまたは別のデータ構造での探索)、確認(ascertaining)した事を「決定」したとみなす事等を含み得る。また、「決定」は、受信(receiving)(例えば、情報を受信すること)、送信(transmitting)(例えば、情報を送信すること)、入力(input)、出力(output)、アクセス(accessing)(例えば、メモリ中のデータにアクセスすること)した事を「決定」したとみなす事等を含み得る。また、「決定」は、解決(resolving)、選択(selecting)、選定(choosing)、確立(establishing)、比較(comparing)等した事を「決定」したとみなす事を含み得る。つまり、「決定」は、何らかの動作を「決定」したとみなす事を含み得る。 The term "determining" as used herein may encompass a wide variety of operations. “Decision” may be, for example, judging, calculating, computing, processing, deriving, investigating, looking up (eg table, database or other (Searching in the data structure of (a)), ascertaining it may be regarded as “decided”, and the like. Also, "determination" may be receiving (e.g., receiving information), transmitting (e.g., transmitting information), input (input), output (output), accessing (accessing) (e.g. For example, it can be regarded as "determining" access to data in memory. Also, "determining" may include considering "resolving", selecting, choosing, establishing, comparing, etc., as "determining". That is, "determination" may include considering that some action is "decision".
 本開示の全体において、文脈から明らかに単数を示したものではなければ、複数のものを含むものとする。 Throughout this disclosure, unless the context clearly indicates otherwise, it is intended to include the plural.
 Reference Signs List: 1 ... HMD (display device); 5 ... user; 6, 6A, 6B ... object; 7 ... user object; 8 ... virtual object; 10 ... information processing apparatus; 12 ... image acquisition unit; 13 ... virtual object generation unit; 15 ... sharing setting unit; 16 ... image generation unit; 17 ... object detection unit; 18 ... virtual object update unit; V ... virtual space.

Claims (5)

  1.  An information processing apparatus that provides an image of a virtual space to be displayed on a display device worn by a user, the information processing apparatus comprising:
      an image acquisition unit that acquires a real space image obtained by imaging a real space near the user;
      a virtual object generation unit that recognizes an object included in the real space image and generates, in the virtual space, a virtual object corresponding to the object; and
      an image generation unit that generates a virtual space image showing at least a part of the virtual space including the virtual object, the virtual space image being displayed on the display device.
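 To make the structure recited in claim 1 easier to follow, the sketch below models the three claimed units in code. This is an editorial illustration only, not part of the disclosure; the class and method names, and the injected camera, recognizer, and renderer interfaces, are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    # Illustrative stand-in for a virtual object placed in the virtual
    # space (the attributes are assumed for this sketch).
    object_id: int
    position: tuple

class InformationProcessingApparatus:
    """Sketch of the units recited in claim 1 (all names assumed)."""

    def __init__(self, camera, recognizer, renderer):
        self.camera = camera          # supplies real space images
        self.recognizer = recognizer  # recognizes objects in an image
        self.renderer = renderer      # renders the virtual space
        self.virtual_space = []       # virtual objects generated so far

    def acquire_real_space_image(self):
        # Image acquisition unit: capture the real space near the user.
        return self.camera.capture()

    def generate_virtual_objects(self, real_space_image):
        # Virtual object generation unit: recognize objects in the real
        # space image and mirror each into the virtual space.
        for obj in self.recognizer.recognize(real_space_image):
            self.virtual_space.append(
                VirtualObject(object_id=obj.id, position=obj.position))

    def generate_virtual_space_image(self, viewpoint):
        # Image generation unit: render the part of the virtual space
        # visible from the user's viewpoint for display on the HMD.
        return self.renderer.render(self.virtual_space, viewpoint)
```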
  2.  The information processing apparatus according to claim 1, further comprising:
      an object detection unit that detects an object corresponding to the virtual object from a real space image further acquired by the image acquisition unit after the virtual object has been generated; and
      a virtual object update unit that updates a state of the virtual object corresponding to the detected object, based on a state of the object detected by the object detection unit.
  3.  The information processing apparatus according to claim 2, wherein:
      when the object detection unit detects an object corresponding to the virtual object from the further acquired real space image, the object detection unit accepts a selection by the user as to which of a first process of updating the virtual object and a second process of generating a new virtual object corresponding to the object is to be executed;
      when the object detection unit accepts the user's selection indicating that the first process is to be executed, the virtual object update unit executes the first process; and
      when the object detection unit accepts the user's selection indicating that the second process is to be executed, the virtual object generation unit executes the second process.
  4.  The information processing apparatus according to any one of claims 1 to 3, wherein the virtual object generation unit generates the virtual object corresponding to an object designated by the user from among a plurality of objects included in the real space image.
  5.  The information processing apparatus according to claim 4, wherein:
      the virtual space is a space shared at least by a first user wearing a first display device and a second user wearing a second display device;
      the information processing apparatus further comprises a sharing setting unit that sets, for a virtual object generated based on a designation by the first user, whether or not to share the virtual object with the second user, in accordance with an operation received from the first user; and
      the image generation unit does not display, in the virtual space image displayed on the second display device, a virtual object for which sharing with the second user is not permitted.
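 Claim 5's per-user visibility rule amounts to filtering the render set by a sharing flag before generating the second user's view. A minimal sketch under the same caveats (the ownership and sharing tables are assumptions made for this example):

```python
def visible_objects_for(user_id, virtual_space, owner_of, shared_with):
    # Illustrative filter for claim 5 (names assumed): a virtual object
    # is rendered for a user if that user created it, or if sharing of
    # the object has been permitted via the sharing setting unit.
    return [vobj for vobj in virtual_space
            if owner_of[vobj.object_id] == user_id
            or shared_with.get(vobj.object_id, False)]
```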
PCT/JP2018/044278 2017-12-26 2018-11-30 Information processing device WO2019130991A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017249034A JP2021043476A (en) 2017-12-26 2017-12-26 Information processing apparatus
JP2017-249034 2017-12-26

Publications (1)

Publication Number Publication Date
WO2019130991A1 true WO2019130991A1 (en) 2019-07-04

Family

ID=67063503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/044278 WO2019130991A1 (en) 2017-12-26 2018-11-30 Information processing device

Country Status (2)

Country Link
JP (1) JP2021043476A (en)
WO (1) WO2019130991A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022269888A1 (en) * 2021-06-25 2022-12-29 京セラ株式会社 Wearable terminal device, program, display method, and virtual image delivery system
WO2024047720A1 (en) * 2022-08-30 2024-03-07 京セラ株式会社 Virtual image sharing method and virtual image sharing system
WO2024147184A1 (en) * 2023-01-05 2024-07-11 日本電信電話株式会社 Virtual space display system, terminal device, and virtual space display program
WO2024147194A1 (en) * 2023-01-06 2024-07-11 マクセル株式会社 Information processing device and information processing method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014197348A (en) * 2013-03-29 2014-10-16 キヤノン株式会社 Server device, information processing method and program
WO2015111283A1 (en) * 2014-01-23 2015-07-30 ソニー株式会社 Image display device and image display method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021068195A (en) * 2019-10-24 2021-04-30 克己 横道 Information processing system, information processing method, and program
JP7023005B2 (en) 2019-10-24 2022-02-21 克己 横道 Information processing systems, information processing methods and programs
JP2021162876A (en) * 2020-03-30 2021-10-11 日産自動車株式会社 Image generation system, image generation device, and image generation method
JP7413122B2 (en) 2020-03-30 2024-01-15 日産自動車株式会社 Image generation system, image generation device, and image generation method
JP2022032540A (en) * 2020-08-12 2022-02-25 武志 小畠 Infrared examination analysis diagnosis apparatus
JP7298921B2 (en) 2020-08-12 2023-06-27 株式会社赤外線高精度技術利用機構 Infrared Investigation Analysis Diagnosis Device

Also Published As

Publication number Publication date
JP2021043476A (en) 2021-03-18

Similar Documents

Publication Publication Date Title
WO2019130991A1 (en) Information processing device
US11366516B2 (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
WO2021135601A1 (en) Auxiliary photographing method and apparatus, terminal device, and storage medium
US10254847B2 (en) Device interaction with spatially aware gestures
US20200335065A1 (en) Information processing device
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
KR20160145976A (en) Method for sharing images and electronic device performing thereof
CN110136228B (en) Face replacement method, device, terminal and storage medium for virtual character
EP3772217A1 (en) Output control apparatus, display terminal, remote control system, control method, and carrier medium
WO2022057435A1 (en) Search-based question answering method, and storage medium
CN111259183A (en) Image recognizing method and device, electronic equipment and medium
WO2020083178A1 (en) Digital image display method, apparatus, electronic device, and storage medium
JP6999822B2 (en) Terminal device and control method of terminal device
CN114143280A (en) Session display method and device, electronic equipment and storage medium
JP7094759B2 (en) System, information processing method and program
EP4035353A1 (en) Apparatus, image processing system, communication system, method for setting, image processing method, and recording medium
WO2023037812A1 (en) Online dialogue support system
JP7267105B2 (en) Information processing device and program
US20240363044A1 (en) Display control device
WO2023079875A1 (en) Information processing device
US20240289080A1 (en) Display control device
WO2023149379A1 (en) Information processing device
JP2024075801A (en) Display Control Device
JP2023181639A (en) Information processing device
JP2022069212A (en) Control apparatus, program, and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18896643
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 18896643
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP