WO2023188104A1 - Remote experience system, information processing device, information processing method, and program - Google Patents


Info

Publication number
WO2023188104A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
expression
user
real environment
detection
Prior art date
Application number
PCT/JP2022/015971
Other languages
French (fr)
Japanese (ja)
Inventor
Masaki Haruna (正樹 春名)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2022/015971 priority Critical patent/WO2023188104A1/en
Publication of WO2023188104A1 publication Critical patent/WO2023188104A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/15 - Conference systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present disclosure relates to a remote experience system, information processing device, information processing method, and program that provide information about a remote location to a user.
  • Patent Document 1 discloses a technology related to a remote conference in which in-house workers who work in the company and remote workers who work remotely coexist.
  • In the technology described in Patent Document 1, a virtual room is displayed on the terminals of the in-house workers and the remote workers, and the in-house workers and the remote workers are displayed as virtual participants in the virtual room.
  • The sense of realism is enhanced by reflecting the movements of the in-house workers and the remote workers in the movements of the virtual participants.
  • Although Patent Document 1 discloses participation of virtual participants in a virtual space, it does not disclose a technique for allowing remote participants in an event using a real environment to experience the real environment.
  • the present disclosure has been made in view of the above, and aims to provide a remote experience system that allows remote participants in an event using a real environment to experience the real environment.
  • A remote experience system according to the present disclosure includes a device that is installed in a real environment where an event is held and that is equipped with a device-mounted sensor that detects information about objects in the real environment;
  • a real environment sensor that is installed in the real environment and detects information about objects in the real environment;
  • a participant motion detection device that detects the motion of a first user who participates in the event from a location away from the real environment;
  • a first expression device capable of expressions that the first user can recognize with at least one of the five senses;
  • a second expression device capable of presenting images to a second user who participates in the event from a location remote from the real environment; and an information processing device.
  • the real environment sensor includes a photographing device that detects information about objects in the real environment by photographing the real environment.
  • The information processing device includes a motion information receiving unit that receives, from the participant motion detection device, motion information indicating the motion acquired by the participant motion detection device;
  • a device control unit that uses the motion information to generate control information for causing the device to perform an operation according to the motion information; a control information transmitting unit that transmits the control information to the device; and a detection information receiving unit that receives first detection information detected by the device-mounted sensor from the device-mounted sensor and second detection information detected by the real environment sensor from the real environment sensor.
  • The information processing device further includes an expression information generation unit that uses the first detection information to generate first expression information to be transmitted to the first user and uses the second detection information to generate second expression information including a video corresponding to the virtual viewpoint of the second user in the real environment, and an expression information transmitter that transmits the first expression information to the first expression device and the second expression information to the second expression device.
  • the remote experience system has the effect of allowing remote participants in an event using a real environment to experience the real environment.
  • A diagram schematically showing distance education using the remote experience system of the embodiment
  • Flowchart illustrating an example of a device control procedure in the information processing apparatus according to the embodiment
  • Flowchart illustrating an example of a procedure for generating expression information in the information processing apparatus according to the embodiment
  • A diagram showing an example in which real participants, real avatar users, and virtual avatar users participate in an event
  • FIG. 1 is a diagram showing a configuration example of a remote experience system according to an embodiment.
  • the remote experience system 100 of this embodiment is used for an event using the real environment 10.
  • the real environment 10 is, for example, a place where an event is held.
  • Events include, for example, distance education, remote experience, remote skill transfer, remote maintenance, remote training, experiential travel, healing, astronomical observation, on-site games, on-site golf, local sightseeing, boat trips, car trips, exploration, and the like, but are not limited to these.
  • In the remote experience system 100 of this embodiment, three types of participation methods can be provided: a method of participating as a real participant who actually exists in the real environment 10, a method of participating using the device 1 as a real avatar that is the user's alter ego, and a method of participating using a virtual avatar that does not exist as an object in the real environment 10.
  • a user who participates using a real avatar and a user who participates using a virtual avatar are both virtual participants that do not exist in the real environment 10, and are remote participants.
  • Information obtained by the device 1 from the real environment 10 can be transmitted to the user through the actions of the device 1, such as when the device 1 moves in the real environment 10 or when the device 1 touches objects around it.
  • In the case of real avatars, it is necessary to prepare devices 1 according to the number of participating users, and costs may increase depending on the number of participants in the event. Therefore, in the remote experience system 100 of this embodiment, by also allowing participation as a virtual avatar, it is possible to reduce costs and increase the number of participants in an event. Details of the real avatar and the virtual avatar will be described later. The following mainly explains examples in which real participants, real avatars, and virtual avatars can coexist, but the remote experience system 100 is not limited to this and may be constructed in other configurations.
  • As shown in FIG. 1, the remote experience system 100 includes a device 1, sensors 3-1 to 3-3, and expression devices 4-1 and 4-2, which are installed in the real environment 10;
  • participant motion detection devices 5-1 and 5-2 and expression devices 6-1 and 6-2, which are installed at a participation location 7, which is a place where users participating in the event are present; and an information processing device 2.
  • the participant motion detection devices 5-1 and 5-2 installed at the participation location 7 detect the motions of users participating in the event, and transmit the detected information to the information processing device 2 as motion information.
  • The participant motion detection devices 5-1 and 5-2 are installed at least at a participation location 7 where a user (first user) who participates using a real avatar exists, and detect the motion of that user.
  • The participant motion detection devices 5-1 and 5-2 may also be installed at a participation location 7 where a user (second user) who participates using a virtual avatar exists, and may detect the motion of that user.
  • the participant motion detection devices 5-1 and 5-2 are, for example, positioning sensors, cameras, microphones, etc.
  • The expression devices 6-1 and 6-2 receive expression information indicating the state of the real environment 10 from the information processing device 2, and convey the state of the real environment 10 to the user based on the expression information.
  • the expression devices 6-1 and 6-2 are devices capable of expressions that the user can recognize with at least one of the five senses, and can, for example, present images to the user.
  • The expression devices 6-1 and 6-2 are, for example, a display, a speaker, a haptic glove, and the like.
  • the displays used as the expression devices 6-1 and 6-2 may be head-mounted displays, VR (Virtual Reality) goggles, etc., or may be terminals such as smartphones and personal computers.
  • the expression devices 6-1 and 6-2 may be, for example, hologram displays, aerial projection displays, or the like. Furthermore, the expression devices 6-1 and 6-2 may include an olfactory expression device, a blower, an air conditioner, and the like.
  • Hereinafter, when the participant motion detection devices 5-1 and 5-2 are referred to without being individually distinguished, they will be referred to as the participant motion detection device 5, and when the expression devices 6-1 and 6-2 are referred to without being individually distinguished, they will be referred to as the expression device 6.
  • Although two participant motion detection devices 5 and two expression devices 6 are shown in FIG. 1, the numbers of participant motion detection devices 5 and expression devices 6 are not limited to the example shown in FIG. 1.
  • Although one participation location 7 is shown in FIG. 1, when users participate from a plurality of different locations, similar devices are provided at each participation location 7.
  • the participant motion detection device 5 and the expression device 6 are basically provided for each user, but some of them may be shared.
  • the expression device 6 may transmit information to a plurality of users.
  • the participant motion detection device 5 may detect the motions of a plurality of users.
  • the sensors 3-1 to 3-3 are real environment sensors that are installed in the real environment 10 and detect information regarding objects in the real environment 10.
  • the object includes, for example, at least one of animals, plants, and air that exist in the real environment 10.
  • The sensors 3-1 to 3-3 are sensors that detect the environment in the real environment 10, and detect, for example, objects, sounds, temperature, wind speed, air volume, humidity, illuminance, smell, type of gas, and the like in the real environment 10. That is, the sensors 3-1 to 3-3 detect, for example, at least one of temperature, humidity, wind, and odor in the real environment 10.
  • object detection may be performed, for example, by photographing an image or by collecting sound.
  • the sensors 3-1 to 3-3 may include a photographing device such as a camera, or may include a microphone. Detection information indicating the detection result is transmitted to the information processing device 2.
  • Hereinafter, when the sensors 3-1 to 3-3 are referred to without being individually distinguished, they will be referred to as the sensor 3.
  • the device 1 is an avatar that is an alter ego of a user who participates in an event, and more specifically, it is a real avatar that exists as an object.
  • The device 1 may be, for example, a humanoid robot, a movable robot having a manipulator and a head that simulates a human face, or a manipulator.
  • the device 1 operates based on control information received from the information processing device 2.
  • the device 1 includes a control information receiving section 11, a drive control section 12, a driving section 13, sensors 14-1 and 14-2, and a detection information transmitting section 15.
  • the control information receiving unit 11 receives control information from the information processing device 2 and outputs the received control information to the drive control unit 12.
  • the drive control section 12 controls the drive section 13 based on the control information.
  • the drive unit 13 includes one or more actuators.
  • the actuator is, for example, a physical effector such as a manipulator that operates each joint in the device 1, a moving device, a speaker, or the like.
  • The sensors 14-1 and 14-2 are device-mounted sensors that detect information regarding objects in the real environment 10. Similar to the sensor 3, the sensors 14-1 and 14-2 include, for example, a photographing device such as a camera and a microphone, and may also detect, for example, at least one of temperature, humidity, wind, and odor in the real environment 10. Further, the sensors 14-1 and 14-2 may detect force-tactile sensation. The sensors 14-1 and 14-2 output detection information indicating detection results to the detection information transmitter 15. The detection information transmitter 15 transmits the detection information to the information processing device 2. Since the sensors 14-1 and 14-2 are mounted on the device 1, they can detect information depending on the location of the device 1. Hereinafter, when the sensors 14-1 and 14-2 are referred to without being individually distinguished, they will be referred to as the sensor 14.
  • For example, the sensor 14 can acquire an image that depends on the movement of the device 1 and the direction of its face.
  • the number of sensors 14 is not limited to the example shown in FIG.
  • The device 1 may include a manipulator, and the sensor 14 may include a force-tactile sensor that detects force-tactile sensation at the hand of the manipulator. If the expression device 6 corresponding to the user who uses the real avatar includes a haptic device that transmits the haptic sensation to the hand of that user, the haptic sensation detected by the device 1 can be communicated to the user who uses the real avatar.
  • the expression devices 4-1 and 4-2 are devices that perform expressions to convey the virtual environment superimposed on the real environment 10 to users who participate in an event in the real environment 10.
  • the expression devices 4-1 and 4-2 are, for example, displays, speakers, and the like.
  • the displays used as the expression devices 4-1 and 4-2 are the same as the display used as the expression device 6 described above, but it is sufficient that they can express a portion of the virtual environment as described later.
  • the expression devices 4-1 and 4-2 will be referred to as expression device 4 when shown without distinguishing them individually.
  • the expression device 4 may not be provided in the real environment 10 depending on the configuration of the device 1 and the participation method of the users who participate in the event.
  • the sensor 3 may not be provided in the real environment 10.
  • In FIG. 1, three sensors 3 and two expression devices 4 are illustrated, but the respective numbers of sensors 3 and expression devices 4 are not limited to the example shown in FIG. 1.
  • the number of devices 1 is not limited to the example shown in FIG.
  • the information processing device 2 includes a control information transmitter 21 , a device controller 22 , an operation information receiver 23 , a detection information receiver 24 , an expression information generator 25 , and an expression information transmitter 26 .
  • The motion information receiving unit 23 receives motion information indicating the motion acquired by the participant motion detection device 5 from the participant motion detection device 5 at the participation location 7, and outputs the received motion information to the device control unit 22.
  • Identification information that can identify the corresponding user is added to the motion information. This identification information may be user identification information determined when the user registers to participate in the event, or may be identification information of the participant motion detection device 5.
  • When the identification information added to the motion information is the identification information of the participant motion detection device 5, it is assumed that the identification information of the participant motion detection device 5 and the user identification information are associated with each other, for example, at the time of registration for participation in the event.
  • The device control unit 22 uses the motion information to generate control information for causing the device 1 to perform an operation according to the motion information. That is, the device control unit 22 uses the motion information to generate control information for controlling the device 1 so that the device 1, which is the real avatar of the user, performs an operation corresponding to the user's motion, and outputs the generated control information to the control information transmitter 21.
  • the control information is, for example, information indicating how to drive each actuator of the device 1.
  • The device control unit 22 holds correspondence information indicating the correspondence between the device 1 and the user who uses the device 1 as a real avatar, and uses the correspondence information and the identification information added to the motion information to identify the device 1 corresponding to the motion information.
  • the correspondence information may be determined, for example, when the user registers to participate in the event, or may be selected by the user when the event starts.
  • the user may check the functions and positions of each device 1, select the device 1 that will become the actual avatar, and notify the information processing device 2 of the selected device 1 via the user's terminal (not shown) or the like.
  • the control information transmitter 21 transmits the control information received from the device controller 22 to the device 1.
  • the detection information receiving unit 24 receives detection information from at least one of the sensor 14 and the sensor 3, and outputs the received detection information to the expression information generating unit 25.
  • the expression information generation unit 25 uses the detection information to generate expression information indicating the content to be expressed by the expression device 6, and outputs the generated expression information to the expression information transmission unit 26.
  • The expression information transmitter 26 transmits the expression information to the expression device 6. Note that, similar to the correspondence between the device 1 and the user described above, the expression information generation unit 25 holds correspondence information indicating the correspondence between the expression device 6 and the user. Further, the expression information generation unit 25 generates expression information to be transmitted to users participating in the real environment 10 and outputs the generated expression information to the expression information transmitter 26, which transmits it to the expression device 4.
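  • The following is a simplified, non-authoritative Python sketch of the data flow inside the information processing device 2 described above (motion information in, control information out to the device 1; detection information in, expression information out to the expression devices). All class, field, and function names are illustrative assumptions and do not appear in the publication:

```python
# Illustrative sketch only: motion information is mapped to control information
# for the device 1 (real avatar), and detection information is mapped to
# expression information for the first and second users.
from dataclasses import dataclass

@dataclass
class MotionInfo:
    user_id: str          # identification information added to the motion information
    joint_angles: list    # detected posture of the participating user

@dataclass
class ControlInfo:
    device_id: str
    actuator_commands: list

class DeviceControlUnit:
    def __init__(self, correspondence: dict):
        # correspondence information: user id -> device 1 used as that user's real avatar
        self.correspondence = correspondence

    def generate_control_info(self, motion: MotionInfo) -> ControlInfo:
        device_id = self.correspondence[motion.user_id]
        # here the user's motion would be mapped onto actuator commands
        return ControlInfo(device_id=device_id, actuator_commands=motion.joint_angles)

class ExpressionInfoGenerator:
    def generate_first_expression(self, first_detection: dict) -> dict:
        # expression information for the real avatar user (first user),
        # built from the device-mounted sensor 14
        return {"video": first_detection.get("video"),
                "haptics": first_detection.get("haptics")}

    def generate_second_expression(self, second_detection: dict, viewpoint: dict) -> dict:
        # expression information for the virtual avatar user (second user),
        # built from the real environment sensor 3 and the virtual viewpoint
        return {"video": render_virtual_viewpoint(second_detection, viewpoint)}

def render_virtual_viewpoint(second_detection: dict, viewpoint: dict) -> bytes:
    # placeholder for free-viewpoint video generation from the sensor 3 data
    return second_detection.get("video", b"")
```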
  • the communication line between the information processing device 2 and each device may be a wireless line, a wired line, or a mixture of a wireless line and a wired line.
  • Communication between the information processing device 2 and each device may be performed using any communication method; for example, by using the fifth generation mobile communication system (5G: 5th Generation) or Beyond 5G, which achieve large-capacity, low-latency transmission, information can be transmitted to the user with low delay and the sense of realism can be enhanced.
  • FIG. 2 is a diagram schematically showing distance education using the remote experience system 100 of this embodiment.
  • real participants 8-1 to 8-4 actually exist in a forest, which is the real environment 10.
  • The real participant 8-1 is, for example, a host such as a teacher, who conducts the distance education in the forest, which is the real environment 10.
  • The real participants 8-2 to 8-4, the users who participate as the virtual avatars 9, and the users who participate using the device 1 as a real avatar are, for example, students.
  • Hereinafter, when the real participants 8-1 to 8-4 are referred to without being individually distinguished, they will be referred to as the real participant 8.
  • Since the real environment 10 is a forest, the presence of birds, animals, plants, and the like causes changes in smell, air flow, and sound due to the movement of the birds and animals.
  • participants will be able to experience something unique to the real environment 10.
  • By transmitting various states in addition to images and sounds to the user it is possible, for example, to promote the user's understanding of the surrounding world.
  • users who remotely participate in distance education are also provided with a more realistic experience of the real environment 10.
  • The movements of users who participate using the device 1 as a real avatar are detected by the participant motion detection device 5 at the participation location 7 shown in FIG. 1 and are transmitted as motion information to the information processing device 2. Since the information processing device 2 controls the operation of the device 1 using the motion information, the device 1 performs an operation according to the operation of the real avatar user. In addition, since the information processing device 2 causes the expression device 6 to express information indicating the state of the real environment 10, the real avatar user can recognize, for example, plants and animals in the forest that is the real environment 10 through images, sounds, and the like. Therefore, when the real avatar user performs an action of touching a plant in the real environment 10, the device 1, which is the real avatar, touches the plant in the real environment 10. The haptic sensation caused by touching the plant is detected by the sensor 14 in the device 1 and transmitted via the information processing device 2 to the expression device 6, such as a haptic glove. Thereby, the expression device 6 can transmit the haptic sensation to the user.
  • In this way, the real avatar user can experience not only images and sounds but also the state of the air in the real environment 10.
  • Therefore, remote participants in an event using the real environment 10 can experience the real environment 10 more appropriately, and interaction with the real environment can be realized. By merging the real environment 10 and virtual objects, the real avatar user can experience the real environment 10 with a sense of presence.
  • For example, the air condition may be expressed so as to be similar to the detected information by using a scent reproduction device, an air conditioner, a blower, or the like as the expression device 6 for expressing the air condition.
  • the information may be converted into at least one of visual information and auditory information.
  • haptic detection information may be converted into at least one of visual information and auditory information.
  • The device 1, which is a real avatar, is directly transmitted as a video to the real avatar users and to the users who participate using virtual avatars (hereinafter also referred to as virtual avatar users), and the real participants directly view the device 1.
  • Alternatively, the information processing device 2 may generate a video simulating the real avatar user and replace the part of the video showing the device 1 with that video.
  • The virtual avatar does not actually exist in the real environment 10, but by specifying the position of the virtual avatar in the real environment 10, information simulating the presence of the virtual avatar user at that position is provided to the users.
  • By using, as the sensor 3, a 360-degree free viewpoint camera, a microphone for realizing 360-degree stereophonic sound, and the like, images and sounds corresponding to the virtual avatar's position and face orientation can be provided to the virtual avatar user.
  • The position, face direction, and the like of the virtual avatar may be determined by the information processing device 2 based on motion information obtained by the participant motion detection device 5 detecting the motion of the virtual avatar user, similarly to the real avatar user, or may be specified by the virtual avatar user using a terminal (not shown).
  • As with the real avatar user, by transmitting the air condition in the real environment 10, the virtual avatar user can also experience the air condition in the real environment 10.
  • the information processing device 2 displays images of the real environment 10 around the virtual avatar.
  • the virtual avatar may be made to perform actions such as touching objects or moving objects in the virtual space.
  • For example, it may be determined from the video of the real environment 10 what kind of object the object is, the result of the determination may be used to generate information indicating the tactile sensation obtained by touching the object, and the information may be transmitted to the expression device 6, so that the sensation of touching the object is conveyed to the virtual avatar user.
  • the initial values of the position and orientation of the virtual avatar in the real environment 10 may be specified by the virtual avatar user or may be determined in advance.
  • Further, the information processing device 2 may generate, as the virtual avatar, a video simulating the virtual avatar user or a three-dimensional video of an arbitrary shape based on a video shot of the virtual avatar user, superimpose it on the video of the real environment 10, and transmit the result as expression information to the expression device 6 and the expression device 4.
  • the virtual avatar may also be provided as a video to the real avatar user and real participants, or the virtual avatar user may be able to select whether or not to display the virtual avatar.
  • Similarly, regarding the real avatar, an image simulating the real avatar user or a three-dimensional image of an arbitrary shape may be generated and displayed at the location corresponding to the device 1, and whether or not to display the image may be selectable by the real avatar user.
  • the above-described virtual space generated for the virtual avatar user may be reflected in the expression information sent to the real avatar user so that the real avatar user can share information with the virtual avatar user.
  • Also, by having the expression device 4 express the virtual space so that the real participants participating in the real environment 10 can share information with the virtual avatar users, the virtual space may be superimposed on the real environment 10 that the real participants directly see.
  • FIG. 3 is a flowchart showing an example of a control procedure for the device 1 in the information processing device 2 of this embodiment.
  • The information processing device 2 acquires motion information (step S1). Specifically, the motion information receiving unit 23 acquires the motion information by receiving it from the participant motion detection device 5 that detects the motion of the real avatar user, and outputs the acquired motion information to the device control unit 22.
  • Next, the information processing device 2 generates control information for reflecting the motion information in the movement of the real avatar (step S2). Specifically, the device control unit 22 uses the motion information to generate control information for causing the device 1, which is the real avatar of the real avatar user corresponding to the motion information, to perform the motion corresponding to the motion information, and outputs the generated control information to the control information transmitter 21.
  • the information processing device 2 transmits control information (step S3).
  • the control information transmitter 21 transmits control information to the device 1.
  • the movements of the real avatar user are reflected in the device 1, which is the real avatar corresponding to the real avatar user.
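  • The control procedure of FIG. 3 can be pictured as the following loop; this is a rough sketch only, and the helper callables stand in for the motion information receiving unit 23 and the control information transmitting unit 21 (none of the names come from the publication):

```python
# Hedged sketch of the device control procedure (steps S1 to S3).
def control_cycle(device_control_unit, receive_motion_info, send_control_info):
    motion = receive_motion_info()                                 # S1: acquire motion information
    control = device_control_unit.generate_control_info(motion)   # S2: reflect motion on the real avatar
    send_control_info(control)                                     # S3: transmit control information to the device 1
```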
  • FIG. 4 is a flowchart illustrating an example of a procedure for generating virtual information regarding a virtual avatar in the information processing device 2 of this embodiment.
  • The information processing device 2 performs step S1 similarly to the example shown in FIG. 3.
  • In this case, however, the motion information is motion information regarding the virtual avatar user, and the motion information receiving section 23 outputs the motion information to the expression information generating section 25.
  • the information processing device 2 generates virtual information corresponding to the virtual avatar that reflects the motion information (step S4).
  • Specifically, the expression information generation unit 25 uses the motion information to generate virtual information, which is expression information such as images and sounds corresponding to a virtual space including the virtual avatar of the virtual avatar user corresponding to the motion information.
  • virtual information for expressing a virtual space in which the movements of the virtual avatar user are reflected is generated.
  • information regarding the generation of virtual avatars and the virtual space may be generated using techniques such as spatial reconstruction methods, MR (Mixed Reality), and AR (Augmented Reality).
  • the generation of virtual avatars and the generation of information regarding virtual spaces are not limited to these examples.
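  • A minimal sketch of step S4 is shown below, assuming the virtual avatar's pose is kept as a simple dictionary and that rendering of the virtual space itself is out of scope; all names and the state layout are illustrative assumptions:

```python
# Illustrative sketch of step S4: update the virtual avatar's pose from the
# motion information and produce virtual information for later synthesis.
def generate_virtual_info(avatar_state: dict, motion: dict) -> dict:
    # move the virtual avatar according to the virtual avatar user's motion
    avatar_state["position"] = [p + d for p, d in
                                zip(avatar_state["position"], motion.get("translation", [0, 0, 0]))]
    avatar_state["head_yaw"] = motion.get("head_yaw", avatar_state["head_yaw"])
    # virtual information: what the virtual space containing the avatar looks/sounds like
    return {"avatar_pose": dict(avatar_state), "video_layer": None, "audio_layer": None}

# Example usage with an assumed initial state
state = {"position": [0.0, 0.0, 0.0], "head_yaw": 0.0}
virtual_info = generate_virtual_info(state, {"translation": [0.1, 0.0, 0.0], "head_yaw": 15.0})
```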
  • FIG. 5 is a flowchart illustrating an example of an expression information generation processing procedure in the information processing device 2 of this embodiment.
  • the information processing device 2 acquires information detected by the sensor (step S11).
  • the detection information receiving unit 24 acquires detection information by receiving detection information from at least one of the sensor 14 and the sensor 3, and outputs the acquired detection information to the expression information generation unit 25.
  • the information processing device 2 generates first combination information by combining virtual information with the detection information to be combined among the detection information (step S12).
  • the expression information generation unit 25 generates the first combination information by combining the virtual information generated in step S4 shown in FIG. 4 with the detection information to be combined among the detection information.
  • The detection information to be synthesized is the detection information that is combined with the virtual information. For example, if the virtual information is a video, the detection information to be synthesized is a video; if the virtual information is video and sound, the detection information to be synthesized is video and sound.
  • the expression information generation unit 25 uses the detection information to generate information indicating images and sounds corresponding to the virtual avatar or the real avatar, and synthesizes the generated information with virtual information.
  • the virtual information is combined with the corresponding detection information or information generated using the detection information of the sensor 3 for each remote participant. Note that the virtual information does not need to be combined.
  • the information processing device 2 converts the detected information to be converted out of the detected information into another type of information, and generates second combined information by combining the converted information with the first combined information (step S13).
  • Specifically, the expression information generation unit 25 converts the detection information to be converted, out of the detection information, into another type of information, generates the second composite information by combining the converted information with the first composite information, and outputs the second composite information to the expression information transmitter 26 as expression information.
  • For example, when air condition detection information such as temperature and humidity is converted into visual information, the air condition detection information is the detection information to be converted, and the other type of information is characters, images, and the like.
  • This conversion can be performed, for example, by learning in advance, through supervised machine learning using trial results, which content of the other type of information makes a person feel a similar state depending on the value of the detection information to be converted, and then inputting the detection information to be converted into the trained model to obtain the conversion result.
  • the conversion method is not limited to this example.
  • Examples of expressing the state of the air using visual and auditory information include generating audio information that exaggerates the sounds of moving birds and insects, displaying shaky images like a mirage to indicate hot and humid weather, and displaying images with emphasized edges to express a crisp feeling of cold, but the expressions are not limited to these examples.
  • If there is no detection information to be converted, step S13 is not executed, and the first composite information is output to the expression information transmitter 26 as the expression information. Note that if there are multiple remote participants, steps S12 and S13 are performed for each remote participant.
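  • As a rough illustration of the learned conversion described above, the following sketch maps assumed (temperature, humidity) detection values to an expression label using a generic supervised classifier; the training data, the labels, and the use of scikit-learn are assumptions for illustration only, not part of the publication:

```python
# Hedged sketch of the step-S13 conversion: air-state detection information is
# mapped to a visual expression label learned beforehand from labelled trial data.
from sklearn.tree import DecisionTreeClassifier

# (temperature [deg C], humidity [%]) -> expression label chosen in prior trials (invented data)
X_train = [[30, 80], [32, 75], [5, 30], [8, 40], [22, 50]]
y_train = ["heat_haze_overlay", "heat_haze_overlay", "sharp_edge_overlay",
           "sharp_edge_overlay", "no_overlay"]

model = DecisionTreeClassifier().fit(X_train, y_train)

def convert_air_state(temperature: float, humidity: float) -> str:
    """Return the visual expression to synthesize for the detected air state."""
    return model.predict([[temperature, humidity]])[0]

print(convert_air_state(31, 78))   # hot and humid -> "heat_haze_overlay"
```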
  • the information processing device 2 transmits the second synthesis information and the detection information not to be synthesized and not to be converted to the expression device 6 of the remote participant (step S14).
  • Specifically, the second composite information and the detection information that is neither a synthesis target nor a conversion target are transmitted, via the expression information transmitter 26, to the expression devices 6 of the remote participants, that is, the expression devices 6 of the virtual avatar users and the real avatar users.
  • the information processing device 2 transmits the virtual information to the expression device 4 of the real environment 10 (step S15).
  • the expression information generation unit 25 transmits the virtual information to the expression device 4 of the real environment 10, that is, the expression device 4 that transmits information to the real participants.
  • The information converted in step S13 and the virtual information may be combined and transmitted to the expression device 4 for the real participants. Since the real participants can directly feel the state of the air in the real environment 10, the information converted in step S13 does not need to be sent to the expression device 4, but by also transmitting the converted information to the real participants, the real participants can share that information with the virtual avatar users and the real avatar users.
  • By repeating the above processing, for example, every control cycle, information indicating the state of the real environment 10 is transmitted to the remote participants, and the virtual information is transmitted to the real participants.
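  • The overall procedure of FIG. 5 (steps S11 to S15) can be summarized by the following hedged sketch, in which synthesis and conversion are reduced to dictionary operations so that only the control flow is visible; the key names and helper callables are assumptions, not terms from the publication:

```python
# Minimal sketch of the expression information generation procedure (S11 to S15),
# assuming the detection information arrives as a dict per remote participant.
def expression_cycle(detection: dict, virtual_info: dict,
                     convert_air_state, send_to_remote, send_to_real):
    # S11: detection information from the sensors 3 and 14 has been received
    to_synthesize = {k: v for k, v in detection.items() if k in ("video", "sound")}
    to_convert = {k: v for k, v in detection.items() if k in ("temperature", "humidity")}
    passthrough = {k: v for k, v in detection.items()
                   if k not in to_synthesize and k not in to_convert}

    # S12: first composite information = detection info to be synthesized + virtual information
    first_composite = {**to_synthesize, "virtual": virtual_info}

    # S13: convert air-state detection information into another type of information
    converted = convert_air_state(to_convert) if to_convert else {}
    second_composite = {**first_composite, **converted}

    # S14: send to the expression device 6 of the remote participant
    send_to_remote({**second_composite, **passthrough})

    # S15: send the virtual information to the expression device 4 in the real environment
    send_to_real(virtual_info)
```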
  • If the expression device 6 corresponding to the real avatar user (first user) is the first expression device and the expression device 6 corresponding to the virtual avatar user (second user) is the second expression device, it is sufficient that the first expression device is capable of expressions that the first user can recognize with at least one of the five senses, and it is sufficient that the second expression device is capable of presenting at least an image to the second user.
  • the sensor 3 includes a photographing device that detects information regarding objects in the real environment 10 by photographing at least the real environment 10 in order to generate an image to be provided to the virtual avatar user.
  • The expression information generation unit 25 generates the first expression information to be transmitted to the first user (real avatar user) using the first detection information, which is the detection information received from the sensor 14, and generates the second expression information, which includes an image corresponding to the virtual viewpoint in the real environment 10 of the second user (virtual avatar user), using the second detection information, which is the detection information received from the sensor 3. The expression information transmitter 26 transmits the first expression information to the first expression device and the second expression information to the second expression device.
  • The expression information generation unit 25 may convert first detection information indicating a detection result of at least one of temperature, humidity, wind, and odor in the real environment 10 into first expression information that can be recognized by at least one of the visual and auditory senses.
  • Similarly, the expression information generation unit 25 may convert second detection information indicating a detection result of at least one of temperature, humidity, wind, and odor in the real environment 10 into second expression information that can be recognized by at least one of the visual and auditory senses.
  • The detection information of the sensors may include information related to taste, and the expression information generation unit 25 may generate expression information related to taste in the real environment 10.
  • In this case, the expression device 4 and the expression device 6 may include a taste expression device that reproduces taste. Alternatively, the information regarding taste may be converted into another type of information such as visual or auditory information.
  • the expression information generation unit 25 may generate the first expression information using the second detection information. That is, the information acquired by the sensor 3 may be used to generate expression information to be transmitted to the real avatar user.
  • The expression information generation unit 25 may generate the second expression information so that the virtual avatar is displayed at the virtual viewpoint. Further, when the participant motion detection device 5 detects the motion of the virtual avatar user, the expression information generation unit 25 may, based on the motion information of the virtual avatar user and the second detection information (video data of the real environment 10), virtually generate a video in which objects in the real environment 10 and the virtual avatar in the virtual space, which is a range including the virtual viewpoint, change in accordance with the movements of the virtual avatar user, and may synthesize virtual information indicating the generated video with the second expression information; the expression information transmitter 26 may then transmit the second expression information into which the virtual information has been synthesized to the second expression device.
  • the expression information generation section 25 may combine the virtual information with the first expression information, and the expression information transmission section 26 may transmit the first expression information with the combined virtual information to the first expression device.
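  • One possible way to realize the synthesis of the virtual information with the expression information is simple alpha blending of a rendered virtual-space layer onto the real-environment frame, as in the following sketch; the frame sizes, the mask, and the alpha value are illustrative assumptions and not a definitive implementation:

```python
# Hedged sketch: overlay the rendered virtual-space layer onto the frame
# generated from the sensor 3 video, only where the virtual layer is present.
import numpy as np

def synthesize_frames(real_frame: np.ndarray, virtual_layer: np.ndarray,
                      mask: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Blend the virtual layer where mask > 0; keep the real frame elsewhere."""
    blended = (alpha * virtual_layer + (1.0 - alpha) * real_frame).astype(real_frame.dtype)
    out = real_frame.copy()
    out[mask > 0] = blended[mask > 0]
    return out

# Example with dummy 4x4 RGB frames
real = np.zeros((4, 4, 3), dtype=np.uint8)
virtual = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
composited = synthesize_frames(real, virtual, mask)
```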
  • the virtual avatar may have only a virtual viewpoint, or may be able to hear and speak in addition to the virtual viewpoint.
  • The expression information transmitting unit 26 may transmit the virtual information to the expression device 4, which is a third expression device capable of presenting images to the real participants (third users). Further, the information processing device 2 may be able to set whether information regarding the virtual avatar users and the real avatar users is made public or kept private. The information regarding the virtual avatar users and the real avatar users includes, for example, information frequently used for security, such as facial information, eyeball information, and voice prints, as well as personal information such as real names, ages, and dates of birth, but is not limited to these. Furthermore, if a user does not wish to make his or her actual face or voice public for privacy reasons, the user may be able to change the face or voice and then make it public.
  • FIG. 6 is a diagram showing an example in which real participants, real avatar users, and virtual avatar users participate in an event.
  • In the example shown in FIG. 6, the real participants 8-1 to 8-3 participate in the event in the real environment 10.
  • The user 70-1, who is a real avatar user, participates in the event from the participation location 7-1 using the device 1, and the users 70-2 and 70-3, who are virtual avatar users, participate in the event from the participation locations 7-2 and 7-3 using the virtual avatars 9-1 and 9-2, respectively.
  • Each of the participation locations 7-1 to 7-3 is provided with a participant motion detection device 5 and an expression device 6. Note that when the virtual avatars 9-1 and 9-2 are fixed without moving, the participant motion detection device 5 does not need to be provided for the users 70-2 and 70-3, who are virtual avatar users.
  • the device 1 includes a sensor 14 that detects information regarding each of visual, auditory, olfactory, and tactile sensations, and includes a physical acting device, a moving device, and a speaker as actuators.
  • The sensors 3-1 and 3-2 installed in the real environment 10 are a camera that acquires images for generating images from the viewpoints of the virtual avatars 9-1 and 9-2, and a microphone that collects sound for generating sounds at the positions of the virtual avatars 9-1 and 9-2.
  • Although two sensors 3 are shown for the sake of simplification, in general, a plurality of cameras that acquire images for generating images from the viewpoints of the virtual avatars 9-1 and 9-2 and a plurality of microphones that collect sound for generating sounds at the positions of the virtual avatars 9-1 and 9-2 are provided.
  • Further, a sensor that detects odor, temperature, wind speed, air volume, humidity, type of gas, and the like of the real environment 10 may be provided.
  • In the example shown in FIG. 6, all information to be transmitted to the user 70-1, who is a real avatar user, is detected by the sensor 14 mounted on the device 1.
  • Since the device 1 operates in accordance with the movement of the real avatar user, the field of view of the sensor 14 in the device 1, such as a camera that obtains visual information, changes accordingly, and the image of the real environment 10 corresponding to that field of view is displayed by the expression device 6. The same applies to sounds, smells, and the like: as the device 1 moves, information corresponding to the sounds and smells in the real environment 10 is transmitted to the real avatar user.
  • When the device 1 touches an actual object in the real environment 10 in accordance with the movement of the real avatar user, information regarding the haptic sensation is detected by the sensor 14 and transmitted to the real avatar user. This allows the real avatar user to recognize the real environment 10 with a sense of realism.
  • On the other hand, the users 70-2 and 70-3, who are virtual avatar users, are provided, using the detection information acquired by the sensor 3, with images from the viewpoints of the virtual avatars 9-1 and 9-2 and with sounds at the positions of the virtual avatars 9-1 and 9-2, respectively. Further, when a sensor that detects smell is provided as the sensor 3, the information processing device 2 can use the detection information from the sensor 3 to convey information indicating odor to the users 70-2 and 70-3, who are virtual avatar users, via the expression device 6.
  • The expression device 4 does not have to be provided in the real environment 10. As described above, when an image corresponding to the real avatar is projected at the location where the device 1 is present, or when images corresponding to the virtual avatars 9-1 and 9-2 are projected at the locations corresponding to them, an expression device 4 that projects these images is provided. Similarly, when the real participants 8-1 to 8-3 are to recognize sounds corresponding to the virtual avatars 9-1 and 9-2, the voices of the users 70-2 and 70-3, who are the virtual avatar users, are transmitted to the real participants by the expression device 4.
  • FIG. 7 is a diagram showing an example in which a real avatar user and a virtual avatar user participate in an event.
  • In the example shown in FIG. 7, the users 70-1, 70-4, and 70-5, who are real avatar users, participate in the event using the devices 1-1 to 1-3, which are their respective real avatars.
  • The users 70-2 and 70-3, who are virtual avatar users, participate in the event from the participation locations 7-2 and 7-3 using the virtual avatars 9-1 and 9-3, respectively.
  • each of the participating locations 7-1 to 7-5 is provided with a participant motion detection device 5 and an expression device 6.
  • Each of the devices 1-1 to 1-3 is the device 1 described above.
  • each of devices 1-1 to 1-3 is similar to device 1 shown in FIG. 6.
  • the method of transmitting information to the real avatar user and the virtual avatar user in the example shown in FIG. 7 is also the same as in the example shown in FIG. 6.
  • Since there is no real participant 8 in the example shown in FIG. 7, there is no need to provide the expression device 4 in the real environment 10.
  • FIG. 8 is a diagram showing an example of generating a virtual space corresponding to a virtual avatar.
  • In the example shown in FIG. 8, the user 70-1 participates in the event using the device 1, which is a real avatar, and the users 70-2 and 70-3, who are virtual avatar users, participate in the event from the participation locations 7-2 and 7-3 using the virtual avatars 9-1 and 9-2, respectively.
  • each of the participating locations 7-1 to 7-3 is provided with a participant motion detection device 5 and an expression device 6.
  • The information processing device 2 acquires motion information from the participant motion detection device 5 corresponding to the user 70-3, who is a virtual avatar user, and uses the acquired information to generate a virtual space 90 around the virtual avatar 9-2.
  • The information processing device 2 then synthesizes virtual information indicating the virtual space 90 with the images, sounds, and the like from the viewpoint of the virtual avatar 9-2 generated using the detection information of the sensor 3, and transmits the synthesized information as expression information to the expression device 6 corresponding to the user 70-3, who is the virtual avatar user.
  • the combined information is also sent to the user 70-1 who is the real avatar user and the user 70-2 who is the virtual avatar user.
  • FIG. 9 is a diagram showing an example of a video in which virtual spaces are synthesized.
  • FIG. 9 shows an example in which a video 201 of the virtual space corresponding to the virtual avatar 9-2 is synthesized with a video 200 of the real environment 10 from the viewpoint of the user 70-2, who is a virtual avatar user.
  • the image 200 is a real image based on information captured by the sensor 3
  • the image 201 is information indicating a virtual space generated based on the information captured by the sensor 3.
  • the video 201 reflects the movement of the virtual avatar 9-2 in the virtual space. Therefore, when the virtual avatar 9-2 touches an object in the virtual space, the object also changes in the virtual space.
  • Since the video in which the virtual space has been synthesized is also transmitted to the user 70-3, who is the virtual avatar user corresponding to the virtual avatar 9-2, the user 70-3 can experience a sensation as if actually touching the object.
  • Further, by having the expression device 4 project images onto the actual real environment 10, the real participants 8-1 to 8-3 can visually recognize the state in which the virtual space is superimposed on the real environment 10.
  • FIG. 10 is a diagram showing the reduction of sensors in equipment.
  • In the example shown in FIG. 10, the user 70-1 participates in the event using the device 1, which is a real avatar, and the users 70-2 and 70-3, who are virtual avatar users, participate in the event from the participation locations 7-2 and 7-3 using the virtual avatars 9-1 and 9-2, respectively.
  • each of the participating locations 7-1 to 7-3 is provided with a participant motion detection device 5 and an expression device 6.
  • the sensors 3-1 to 3-4 include a camera for acquiring a 360-degree free viewpoint image and a microphone for reproducing sound according to the position.
  • The sensors 3-1 to 3-4 are used to generate information indicating images of the real environment 10 from the viewpoints of the users 70-2 and 70-3, who are virtual avatar users, and sounds at the positions of the users 70-2 and 70-3, as well as information indicating the image of the real environment 10 from the viewpoint of the real avatar and the sound at the position of the real avatar.
  • the sensor 14 included in the device 1 does not need to include a sensor that detects information related to vision and hearing.
  • Thereby, the configuration of the device 1 can be made simpler than in the example shown in FIG. 6.
  • Part of the information to be transmitted to the real avatar user may be generated from the detection information of the sensor 3, and the rest may be generated from the detection information of the sensor 14. Further, by using a sensor 3 that detects odor, the number of sensors that detect information related to smell among the sensors 14 included in the device 1 may be reduced.
  • FIG. 11 is a diagram showing an example of the configuration of a computer system that implements the information processing device 2 of this embodiment.
  • this computer system includes a control section 101, an input section 102, a storage section 103, a display section 104, a communication section 105, and an output section 106, which are connected via a system bus 107.
  • the control section 101 and the storage section 103 constitute a processing circuit.
  • control unit 101 is, for example, a processor such as a CPU (Central Processing Unit), and executes a program in which processing in the information processing device 2 of this embodiment is described. Note that a part of the control unit 101 may be realized by dedicated hardware such as a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array).
  • the input unit 102 includes, for example, a keyboard and a mouse, and is used by a user of the computer system to input various information.
  • The storage unit 103 includes various memories such as RAM (Random Access Memory) and ROM (Read Only Memory) and storage devices such as a hard disk, and stores programs to be executed by the control unit 101 and necessary data obtained in the course of processing.
  • the storage unit 103 is also used as a temporary storage area for programs.
  • the display unit 104 is composed of a display, an LCD (liquid crystal display panel), etc., and displays various screens to the user of the computer system.
  • the communication unit 105 is a receiver and a transmitter that perform communication processing.
  • the output unit 106 is a printer, a speaker, or the like. Note that FIG. 11 is an example, and the configuration of the computer system is not limited to the example shown in FIG.
  • The computer program is installed in the storage unit 103 from, for example, a CD-ROM or DVD-ROM set in a CD (Compact Disc)-ROM drive or DVD (Digital Versatile Disc)-ROM drive (not shown). Then, when the program is executed, the program read from the storage unit 103 is stored in the main storage area of the storage unit 103. In this state, the control unit 101 executes processing as the information processing device 2 of this embodiment according to the program stored in the storage unit 103.
  • In the above description, a CD-ROM or DVD-ROM is used as the recording medium for providing the program describing the processing in the information processing device 2; however, the recording medium is not limited to this, and depending on the capacity of the program, the program may be provided, for example, via a transmission medium such as the Internet through the communication unit 105.
  • The program of this embodiment causes a computer system to execute, for example, a step of receiving motion information indicating the motion of a first user who participates in an event from a location remote from the real environment 10 where the event is held; a step of using the motion information to generate control information for causing a device installed in the real environment 10 to perform an operation according to the motion information; a step of transmitting the control information to the device; a step of receiving, from a device-mounted sensor that is mounted on the device and detects information regarding objects in the real environment 10, first detection information detected by the device-mounted sensor; and a step of receiving, from a real environment sensor that is installed in the real environment 10 and detects information regarding objects in the real environment 10, second detection information detected by the real environment sensor.
  • The program of this embodiment further causes the computer system to execute a step of generating first expression information to be transmitted to the first user using the first detection information; a step of generating, using the second detection information, second expression information including an image corresponding to a virtual viewpoint in the real environment 10 of a second user participating in the event; a step of transmitting the first expression information to the first expression device; and a step of transmitting the second expression information to the second expression device capable of presenting the image to the second user.
  • The device control section 22 and the expression information generation section 25 shown in FIG. 1 are realized by the computer program stored in the storage section 103 shown in FIG. 11 being executed by the control section 101 shown in FIG. 11.
  • The storage unit 103 shown in FIG. 11 is also used to realize the device control unit 22 and the expression information generation unit 25 shown in FIG. 1.
  • The control information transmitting section 21, the motion information receiving section 23, the detection information receiving section 24, and the expression information transmitting section 26 shown in FIG. 1 are realized by the communication section 105 shown in FIG. 11.
  • the information processing device 2 may be realized by a plurality of computer systems.
  • the information processing device 2 may be realized by a cloud computer system.
  • some of the functions of the information processing device 2 may be realized by another device provided separately from the information processing device 2.
  • Another device may be provided in the real environment 10, may be provided near the residence of the user participating in the event, or may be provided at another location.
  • a user can remotely participate in the event.
  • This allows remote participants in an event using the real environment 10 to experience the real environment 10 more appropriately.
  • both participation by a real avatar using the device 1 and participation by a virtual avatar even when the number of participants in an event increases, it is possible to respond flexibly while keeping costs down.
  • by transmitting information indicating the air condition of the real environment 10 to the remote participants, it is possible to let the remote participants experience a state closer to that of the real environment 10.
  • because information corresponding to the real avatar is generated using the detection information of the sensor 3, which is also used to provide information corresponding to the virtual avatar to the virtual avatar user, the configuration of the device 1 can be simplified and costs can be reduced.
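As a rough illustration of the program steps listed above, the following Python sketch shows one possible shape of a single processing pass; the class interfaces, message names, and transport are assumptions made for illustration and are not part of the disclosure.

```python
# Hypothetical sketch of the program steps described above (not the actual
# implementation): receive motion information, generate and send control
# information, receive detection information, generate and send expression
# information. The "network" and helper objects are assumed placeholders.

def run_cycle(network, device_controller, expression_generator):
    # Receive motion information of the first user (real avatar user).
    motion_info = network.receive("motion_info")

    # Generate control information that makes the device installed in the
    # real environment 10 perform an operation according to the motion information.
    control_info = device_controller.generate_control(motion_info)

    # Transmit the control information to the device 1.
    network.send("device", control_info)

    # Receive first detection information from the device-mounted sensor 14
    # and second detection information from the real environment sensor 3.
    first_detection = network.receive("device_sensor")
    second_detection = network.receive("real_env_sensor")

    # Generate first/second expression information and transmit them to the
    # first and second expression devices, respectively.
    first_expr = expression_generator.first_expression(first_detection)
    second_expr = expression_generator.second_expression(second_detection)
    network.send("first_expression_device", first_expr)
    network.send("second_expression_device", second_expr)
```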

Abstract

A remote experience system (100) according to the present disclosure comprises: equipment (1) that has sensors (14-1, 14-2); sensors (3-1 to 3-3) that are placed in a real environment; participant-action detecting devices (5-1, 5-2) that detect the action of a first user participating in an event from a place that is away from the real environment; representation devices (6-1, 6-2) that can provide representations to the first user; representation devices (6-1, 6-2) that can present images to a second user; and an information processing device (2). The information processing device (2) comprises: an equipment control unit (22) that generates equipment control information using action information received from the participant-action detecting devices (5-1, 5-2); and a representation information generating unit (25) that uses first detection information received from the sensors (14-1, 14-2) to generate first representation information to be transmitted to the first user, and that uses second detection information received from the sensors (3-1 to 3-3) to generate second representation information including an image corresponding to a virtual viewpoint of the second user in the real environment.

Description

Remote experience system, information processing device, information processing method, and program

The present disclosure relates to a remote experience system, an information processing device, an information processing method, and a program that provide information about a remote location to a user.

Systems that allow users to participate in events held in remote locations, such as distance education and remote conferences, are being introduced. For example, a technique is known in which a teacher is present at an educational site and information is transmitted in one direction to a user (student) located at a place remote from the educational site.

Also, a technique is known that simulates a face-to-face conversation between distant participants by creating a virtual space and displaying virtual participants representing the participants together with the virtual space. For example, Patent Document 1 discloses a technology related to a remote conference in which in-house workers who work in the company and remote workers who work remotely coexist. In the technology described in Patent Document 1, a virtual room is displayed on the terminals of the in-house workers and the remote workers, the in-house workers and the remote workers are displayed as virtual participants in the virtual room, and the sense of realism is enhanced by reflecting the movements of the in-house workers and the remote workers in the movements of the virtual workers.

Patent Document 1: Japanese Patent No. 6888854
On the other hand, depending on the content of the event, it may be desirable to experience various states of the real environment rather than a virtual space. Although the above Patent Document 1 discloses participation of virtual participants in a virtual space, it does not disclose a technique for allowing remote participants in an event using a real environment to experience the real environment.

The present disclosure has been made in view of the above, and aims to provide a remote experience system that allows remote participants in an event using a real environment to experience the real environment.

In order to solve the above-mentioned problems and achieve the object, a remote experience system according to the present disclosure includes: a device that is installed in a real environment where an event is held and includes a device-mounted sensor that detects information regarding objects in the real environment; a real environment sensor that is installed in the real environment and detects information regarding objects in the real environment; a participant motion detection device that detects the motion of a first user who participates in the event from a location away from the real environment; a first expression device capable of expressions that the first user can recognize with at least one of the five senses; a second expression device capable of presenting video to a second user who participates in the event from a location remote from the real environment; and an information processing device. The real environment sensor includes a photographing device that detects information regarding objects in the real environment by photographing the real environment. The information processing device includes: a motion information receiving unit that receives, from the participant motion detection device, motion information indicating the motion acquired by the participant motion detection device; a device control unit that uses the motion information to generate control information for causing the device to perform an operation according to the motion information; a control information transmitting unit that transmits the control information to the device; and a detection information receiving unit that receives, from the device-mounted sensor, first detection information detected by the device-mounted sensor and receives, from the real environment sensor, second detection information detected by the real environment sensor. The information processing device further includes: an expression information generation unit that generates, using the first detection information, first expression information to be conveyed to the first user and generates, using the second detection information, second expression information including video corresponding to a virtual viewpoint of the second user in the real environment; and an expression information transmitting unit that transmits the first expression information to the first expression device and transmits the second expression information to the second expression device.

The remote experience system according to the present disclosure has the effect of allowing remote participants in an event using a real environment to experience the real environment.
FIG. 1 is a diagram showing a configuration example of a remote experience system according to an embodiment.
FIG. 2 is a diagram schematically showing distance education using the remote experience system of the embodiment.
FIG. 3 is a flowchart illustrating an example of a control procedure for a device in the information processing device of the embodiment.
FIG. 4 is a flowchart illustrating an example of a procedure for generating virtual information regarding a virtual avatar in the information processing device of the embodiment.
FIG. 5 is a flowchart illustrating an example of a procedure for generating expression information in the information processing device of the embodiment.
FIG. 6 is a diagram showing an example in which real participants, real avatar users, and virtual avatar users participate in an event.
FIG. 7 is a diagram showing an example in which real avatar users and virtual avatar users participate in an event.
FIG. 8 is a diagram showing an example of generating a virtual space corresponding to a virtual avatar.
FIG. 9 is a diagram showing an example of a video with which a virtual space is synthesized.
FIG. 10 is a diagram showing a reduction of sensors in the device.
FIG. 11 is a diagram showing a configuration example of a computer system that implements the information processing device of the embodiment.

Below, a remote experience system, an information processing device, an information processing method, and a program according to an embodiment will be described in detail with reference to the drawings.
Embodiment.
FIG. 1 is a diagram showing a configuration example of a remote experience system according to the embodiment. The remote experience system 100 of this embodiment is used for an event that uses a real environment 10. The real environment 10 is, for example, a place where the event is held. Events include, for example, distance education, remote experience, remote skill transfer, remote maintenance, remote training, experiential travel, healing, astronomical observation, on-site games, on-site golf, local sightseeing, boat trips, car trips, and exploration, but are not limited to these.
With the remote experience system 100 of this embodiment, three ways of participating in an event can be provided: participating as a real participant who is actually present in the real environment 10, participating by using the device 1 as a real avatar that serves as the user's alter ego, and participating by using a virtual avatar, which is an avatar that has no physical substance in the real environment 10. A user who participates using a real avatar and a user who participates using a virtual avatar are both virtual participants who do not exist in the real environment 10, and are remote participants.

When a real avatar is used, information that the device 1 obtains from the real environment 10 through its actions, such as the device 1 moving in the real environment 10 or the device 1 touching surrounding objects, can be conveyed to the user. On the other hand, when real avatars are used, it is necessary to prepare devices 1 according to the number of participating users, and costs may be incurred depending on the number of participants in the event. Therefore, in the remote experience system 100 of this embodiment, by also allowing participation as a virtual avatar, it is possible to increase the number of participants in an event while keeping costs down. Details of the real avatar and the virtual avatar will be described later. The following mainly describes an example in which real participants, real avatars, and virtual avatars can coexist; however, the remote experience system 100 is not limited to this, and may be constructed so that only real participants and real avatars, or only real avatars, can participate.

As shown in FIG. 1, the remote experience system 100 includes the device 1, sensors 3-1 to 3-3, and expression devices 4-1 and 4-2, which are installed in the real environment 10; participant motion detection devices 5-1 and 5-2 and expression devices 6-1 and 6-2, which are installed at a participation location 7 where users participating in the event are present; and the information processing device 2.
The participant motion detection devices 5-1 and 5-2 installed at the participation location 7 detect the motions of users participating in the event, and transmit the detected information to the information processing device 2 as motion information. The participant motion detection devices 5-1 and 5-2 are installed at least at the participation location 7 where a user (first user) who participates using a real avatar is present, and detect the motion of that user. As will be described later, the participant motion detection devices 5-1 and 5-2 may also be installed at a participation location 7 where a user (second user) who participates using a virtual avatar is present and detect the motion of that user.

The participant motion detection devices 5-1 and 5-2 are, for example, positioning sensors, cameras, microphones, and the like. The expression devices 6-1 and 6-2 receive, from the information processing device 2, expression information indicating the state of the real environment 10, and convey the state of the real environment 10 to the user based on the expression information. The expression devices 6-1 and 6-2 are devices capable of expressions that the user can recognize with at least one of the five senses, and can, for example, present video to the user. The expression devices 6-1 and 6-2 are, for example, a display, a speaker, a haptic glove, and the like. The displays used as the expression devices 6-1 and 6-2 may be head-mounted displays, VR (Virtual Reality) goggles, or the like, may be terminals such as smartphones and personal computers, or may be cockpit displays that the user boards so that the display moves together with the user. Furthermore, the expression devices 6-1 and 6-2 may be, for example, hologram displays, aerial projection displays, or the like. The expression devices 6-1 and 6-2 may also include an olfactory expression device, a blower, an air conditioner, and the like.

Hereinafter, the participant motion detection devices 5-1 and 5-2 are referred to as the participant motion detection device 5 when they are not individually distinguished, and the expression devices 6-1 and 6-2 are referred to as the expression device 6 when they are not individually distinguished. Although two participant motion detection devices 5 and two expression devices 6 are shown in FIG. 1, the numbers of participant motion detection devices 5 and expression devices 6 are not limited to the example shown in FIG. 1. Furthermore, although one participation location 7 is shown in FIG. 1, when users participate from a plurality of different locations, similar devices are provided at each participation location 7. Note that when a plurality of users participate in an event from the same participation location 7, the participant motion detection device 5 and the expression device 6 are basically provided for each user, but some of them may be shared. For example, when a hologram display, an aerial projection display, or the like is used as the expression device 6, the expression device 6 may convey information to a plurality of users. Furthermore, the participant motion detection device 5 may detect the motions of a plurality of users.

The sensors 3-1 to 3-3 are real environment sensors that are installed in the real environment 10 and detect information regarding objects in the real environment 10. The objects include, for example, at least one of animals, plants, and air that exist in the real environment 10. The sensors 3-1 to 3-3 are sensors that detect the environment of the real environment 10, and detect, for example, objects, sounds, temperature, wind speed, air volume, humidity, illuminance, smell, type of gas, and the like in the real environment 10. That is, the sensors 3-1 to 3-3 detect, for example, at least one of temperature, humidity, wind, and odor in the real environment 10. Furthermore, the detection of an object may be performed, for example, by photographing it as video or by collecting sound. That is, the sensors 3-1 to 3-3 may include a photographing device such as a camera, or may include a microphone. The sensors 3-1 to 3-3 transmit detection information indicating the detection results to the information processing device 2. Hereinafter, the sensors 3-1 to 3-3 are referred to as the sensor 3 when they are not individually distinguished.
The device 1 is an avatar that is an alter ego of a user who participates in the event; more specifically, it is a real avatar that exists as a physical object. The device 1 may be, for example, a humanoid robot, a movable robot having a manipulator and a head that simulates a human face, or a manipulator. As will be described later, the device 1 operates based on control information received from the information processing device 2. The device 1 includes a control information receiving unit 11, a drive control unit 12, a drive unit 13, sensors 14-1 and 14-2, and a detection information transmitting unit 15.

The control information receiving unit 11 receives control information from the information processing device 2 and outputs the received control information to the drive control unit 12. The drive control unit 12 controls the drive unit 13 based on the control information. The drive unit 13 includes one or more actuators. The actuators are, for example, a physical effector that operates each joint of the device 1, such as a manipulator, a mover, a speaker, and the like.

The sensors 14-1 and 14-2 are device-mounted sensors that detect information regarding objects in the real environment 10. Like the sensor 3, the sensors 14-1 and 14-2 include a photographing device such as a camera, a speaker, and the like, and may also detect, for example, at least one of temperature, humidity, wind, and odor in the real environment 10. Further, the sensors 14-1 and 14-2 may detect force-tactile sensation. The sensors 14-1 and 14-2 output detection information indicating the detection results to the detection information transmitting unit 15. The detection information transmitting unit 15 transmits the detection information to the information processing device 2. Since the sensors 14-1 and 14-2 are mounted on the device 1, they can detect information depending on the location of the device 1. Hereinafter, the sensors 14-1 and 14-2 are referred to as the sensor 14 when they are not individually distinguished. For example, if the device 1 is provided with a part that simulates a face and a camera is provided as the sensor 14 at the part corresponding to the eyes, the sensor 14 can acquire video that depends on the movement of the device 1 and the direction of the face. Although two sensors 14 are shown in FIG. 1, the number of sensors 14 is not limited to the example shown in FIG. 1.

For example, the device 1 may include a manipulator, and the sensor 14 may include a force-tactile sensor that detects the force-tactile sensation at the hand of the manipulator. When the expression device 6 corresponding to the user who uses the real avatar includes a haptic device that transmits force-tactile sensation to that user's hand, the force-tactile sensation detected by the device 1 can, as described later, be conveyed to the user who uses the real avatar.
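For illustration only, the following sketch shows one possible layout of the detection information that the detection information transmitting unit 15 might send to the information processing device 2; the field names, units, and the use of a Python dataclass are assumptions and are not specified in this disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional, List

# Hypothetical message carrying detection information from the device-mounted
# sensors 14 (camera frame, audio, air state, force-tactile reading) to the
# information processing device 2.
@dataclass
class DetectionInfo:
    device_id: str                       # which device 1 (real avatar) produced this
    timestamp_ms: int
    video_frame: Optional[bytes] = None  # encoded camera frame from sensor 14
    audio_chunk: Optional[bytes] = None
    temperature_c: Optional[float] = None
    humidity_pct: Optional[float] = None
    wind_speed_mps: Optional[float] = None
    odor_labels: List[str] = field(default_factory=list)
    fingertip_force_n: Optional[List[float]] = None  # force-tactile sensor reading

sample = DetectionInfo(device_id="device-1", timestamp_ms=0, temperature_c=21.5)
print(sample)
```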
The expression devices 4-1 and 4-2 are devices that perform expressions for conveying, to users who participate in the event in the real environment 10, a virtual environment superimposed on the real environment 10. The expression devices 4-1 and 4-2 are, for example, displays, speakers, and the like. The displays used as the expression devices 4-1 and 4-2 are similar to the displays that can be used as the expression device 6 described above, but it is sufficient that they can express the portion that constitutes the virtual environment, as described later. Hereinafter, the expression devices 4-1 and 4-2 are referred to as the expression device 4 when they are not individually distinguished.

Note that, as will be described later, the expression device 4 need not be provided in the real environment 10, depending on the configuration of the device 1 and on how the users participate in the event. Likewise, depending on the configuration of the device 1 and on how the users participate in the event, the sensor 3 need not be provided in the real environment 10. Further, although three sensors 3 and two expression devices 4 are illustrated in FIG. 1, the numbers of sensors 3 and expression devices 4 are not limited to the example shown in FIG. 1. Further, although one device 1 is shown in FIG. 1, the number of devices 1 is also not limited to the example shown in FIG. 1.

The information processing device 2 includes a control information transmitting unit 21, a device control unit 22, a motion information receiving unit 23, a detection information receiving unit 24, an expression information generation unit 25, and an expression information transmitting unit 26.

The motion information receiving unit 23 receives, from the participant motion detection device 5 at the participation location 7, motion information indicating the motion acquired by the participant motion detection device 5, and outputs the received motion information to the device control unit 22. Identification information that can identify the corresponding user is added to the motion information. This identification information may be user identification information determined, for example, when the user registers to participate in the event, or may be identification information of the participant motion detection device 5. When the identification information added to the motion information is the identification information of the participant motion detection device 5, the identification information of the participant motion detection device 5 and the user identification information are assumed to be associated with each other, for example, at the time of registration for participation in the event.

The device control unit 22 uses the motion information to generate control information for causing the device 1 to perform an operation according to the motion information. That is, the device control unit 22 uses the motion information to generate control information for controlling the device 1 so that the device 1, which is the real avatar of the user, performs an operation corresponding to the user's motion, and outputs the generated control information to the control information transmitting unit 21. The control information is, for example, information indicating how to drive each actuator of the device 1. Note that the device control unit 22 holds correspondence information indicating the correspondence between the device 1 and the user who uses the device 1 as a real avatar, and identifies the device 1 corresponding to the received motion information by using the correspondence information and the identification information added to the motion information. The correspondence information may be determined, for example, when the user registers to participate in the event, or may be selected by the user when the event starts. The user may check the functions and positions of the devices 1, select the device 1 to be used as the real avatar, and notify the information processing device 2 of the selected device 1 via the user's terminal (not shown) or the like. The control information transmitting unit 21 transmits the control information received from the device control unit 22 to the device 1.
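A minimal sketch of how the device control unit 22 might use the correspondence information to route a user's motion information to the right device 1 and turn it into per-actuator commands is shown below; the joint names, the dictionary-based message format, and the one-to-one angle mapping are assumptions for illustration.

```python
# Hypothetical correspondence information: user identification -> real-avatar device.
correspondence_info = {"user-42": "device-1"}

def generate_control_info(motion_info: dict) -> dict:
    """Turn motion information into control information for the corresponding device 1."""
    user_id = motion_info["user_id"]
    device_id = correspondence_info[user_id]  # identify the device 1 for this user
    # Map the detected joint angles of the user onto actuator commands of the device.
    commands = {joint: angle for joint, angle in motion_info["joint_angles"].items()}
    return {"device_id": device_id, "actuator_commands": commands}

control_info = generate_control_info(
    {"user_id": "user-42", "joint_angles": {"right_elbow": 42.0, "neck_yaw": -10.0}}
)
print(control_info)
```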
The detection information receiving unit 24 receives detection information from at least one of the sensor 14 and the sensor 3, and outputs the received detection information to the expression information generation unit 25. The expression information generation unit 25 uses the detection information to generate expression information indicating the content to be expressed by the expression device 6 and outputs the generated expression information to the expression information transmitting unit 26, and the expression information transmitting unit 26 transmits that expression information to the expression device 6. Note that, similarly to the correspondence between the device 1 and the user described above, the expression information generation unit 25 holds correspondence information indicating the correspondence between the expression device 6 and the user. Further, the expression information generation unit 25 generates expression information to be conveyed to users participating in the real environment 10 and outputs the generated expression information to the expression information transmitting unit 26, and the expression information transmitting unit 26 transmits that expression information to the expression device 4.

Note that the communication lines between the information processing device 2 and the other devices may be wireless lines, wired lines, or a mixture of wireless and wired lines. Communication between the information processing device 2 and each device may be performed using any communication method; for example, by communicating in accordance with Beyond 5G (5th Generation: the fifth-generation mobile communication system and beyond), which realizes large-capacity, low-latency transmission, information can be conveyed to the user with low delay and the sense of realism can be enhanced. That is, at least one of the communication line between the information processing device 2 and the participant motion detection device 5, the communication line between the information processing device 2 and the sensor 14, the communication line between the information processing device 2 and the sensor 3, the communication line between the information processing device 2 and the device 1, the communication line between the information processing device 2 and the expression device 6, and the communication line between the information processing device 2 and the expression device 4 may be a Beyond 5G line.

Next, an application example of the remote experience system 100 of this embodiment will be described. FIG. 2 is a diagram schematically showing distance education using the remote experience system 100 of this embodiment. In the example shown in FIG. 2, real participants 8-1 to 8-4 are actually present in a forest, which is the real environment 10. The device 1, used as a real avatar, also exists in the forest that is the real environment 10. Furthermore, there is also a user who participates as a virtual avatar 9, which has no physical substance in the real environment 10. The real participant 8-1 is, for example, an organizer such as a teacher, and the real participants 8-2 to 8-4, the user who participates as the virtual avatar 9, and the user who participates using the device 1 as a real avatar are students. Hereinafter, the real participants 8-1 to 8-4 are referred to as the real participants 8 when they are not individually distinguished.

In the example shown in FIG. 2, the real environment 10 is a forest, so the presence of birds, animals, plants, and the like gives rise to smells, air flows, and changes in sound caused by the movement of the birds and animals. In addition, by touching the animals and plants that exist in the forest, participants can have an experience unique to the real environment 10. By conveying various states to the user in addition to video and sound, it is possible, for example, to promote the user's understanding of the surrounding world. In this embodiment, users who participate in the distance education remotely are also provided with a more realistic experience of the real environment 10.
The movements of a user who participates using the device 1 as a real avatar (hereinafter also referred to as a real avatar user) are detected by the participant motion detection device 5 at the participation location 7 shown in FIG. 1, and motion information indicating the movements is transmitted to the information processing device 2. Since the information processing device 2 controls the operation of the device 1 using the motion information, the device 1 performs operations according to the motions of the real avatar user. In addition, since the information processing device 2 causes the expression device 6 to express information indicating the state of the real environment 10, the real avatar user can recognize, for example, the plants, animals, and the like in the forest that is the real environment 10 through video, sound, and the like. Therefore, when the real avatar user performs an action of touching a plant in the real environment 10, the device 1, which is the real avatar, can touch the plant in the real environment 10. The force-tactile sensation caused by touching the plant is detected by the sensor 14 of the device 1 and transmitted via the information processing device 2 to the expression device 6, such as a haptic glove. Thereby, the expression device 6 can convey the force-tactile sensation to the user.

In addition, by using, as the sensor 3, a sensor that detects the state of the air, such as temperature, wind speed, air volume, humidity, odor, and type of gas, the real avatar user can experience not only video and sound but also the state of the air in the real environment 10. In this way, in this embodiment, since a user can participate by using the device 1 existing in the real environment 10 as a real avatar, remote participants in an event using the real environment 10 can experience the real environment 10 more appropriately, and interaction with the real environment can be realized. By fusing the real environment 10 with virtual objects, the real avatar user can experience the real environment 10 with a sense of presence.

Note that, by using an odor reproduction device, an air conditioner, a blower, or the like as the expression device 6 that expresses the state of the air, the state of the air may be expressed so as to resemble the detected information; alternatively, the state of the air may be converted into at least one of visual information and auditory information. Similarly, the detection information of the force-tactile sensation may also be converted into at least one of visual information and auditory information.

Regarding the device 1, which is the real avatar, the device 1 may be transmitted as-is in the video to the real avatar user and to users who participate using virtual avatars (hereinafter also referred to as virtual avatar users), and the real participants may view the device 1 directly; alternatively, the information processing device 2 may generate a video simulating the real avatar user and replace the portion showing the device 1 with that video.
The virtual avatar has no physical substance in the real environment 10; however, by specifying the position of the virtual avatar in the real environment 10, information simulating the situation in which the virtual avatar user is present at that position is provided to that user. For example, by using, as the sensor 3, a 360-degree free viewpoint camera, microphones for realizing 360-degree stereophonic sound, and the like, video and sound corresponding to the position and face orientation of the virtual avatar can be provided to the virtual avatar user. The position, face orientation, and the like of the virtual avatar may be determined by the information processing device 2 based on motion information obtained when the participant motion detection device 5 detects the motion of the virtual avatar user, as with the real avatar user, or may be specified by the virtual avatar user using a terminal (not shown). Further, when a sensor that detects the state of the air, such as temperature, wind speed, air volume, humidity, odor, and type of gas, is used as the sensor 3, by conveying the state of the air in the real environment 10 to the virtual avatar user as well, the virtual avatar user, like the real avatar user, can also experience the state of the air in the real environment 10.
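As a simplified illustration of how the virtual avatar's face orientation could select what is shown from 360-degree footage, the following sketch crops a horizontal field of view out of an equirectangular panorama by yaw angle; it ignores vertical tilt and lens geometry, and the array sizes are arbitrary assumptions.

```python
import numpy as np

def extract_view(equirect_frame: np.ndarray, yaw_deg: float, h_fov_deg: float = 90.0) -> np.ndarray:
    """Cut a horizontal field of view out of an equirectangular 360-degree frame.

    equirect_frame: H x W x 3 image covering 360 degrees horizontally.
    yaw_deg: the virtual avatar's face orientation (0 = panorama centre).
    """
    h, w, _ = equirect_frame.shape
    center = int(((yaw_deg % 360.0) / 360.0) * w)
    half = int((h_fov_deg / 360.0) * w / 2)
    cols = [(center + dx) % w for dx in range(-half, half)]  # wrap around the seam
    return equirect_frame[:, cols, :]

# Example: a dummy panorama and a 30-degree head turn of the virtual avatar.
panorama = np.zeros((512, 2048, 3), dtype=np.uint8)
view = extract_view(panorama, yaw_deg=30.0)
print(view.shape)
```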
Further, the virtual avatar cannot interact with real things, such as touching or moving an object that actually exists in the real environment 10; however, the information processing device 2 may generate, around the virtual avatar, a virtual space based on the video of the real environment 10, so that the virtual avatar can be made to perform actions such as touching or moving objects in that virtual space. At this time, what kind of object it is may be determined from the video of the real environment 10, information indicating the force-tactile sensation obtained by touching the object may be generated using the determination result, and that information may be transmitted to the expression device 6, so that the sensation of touching the object is conveyed to the virtual avatar user. The initial values of the position and orientation of the virtual avatar in the real environment 10 may be specified by the virtual avatar user or may be determined in advance.

Further, the information processing device 2 may generate, as the virtual avatar, a video simulating the virtual avatar user or a three-dimensional video of an arbitrary shape based on video in which the virtual avatar user is photographed, superimpose that video on the video of the real environment 10, and transmit the result as expression information to the expression device 6 and the expression device 4. The virtual avatar may also be provided as video to the real avatar user and the real participants, and the virtual avatar user may be able to select whether or not the virtual avatar is displayed. Similarly, for the real avatar, a video simulating the real avatar user or a three-dimensional video of an arbitrary shape may be generated and displayed at the location corresponding to the device 1, and the real avatar user may be able to select whether or not that video is displayed.

Further, so that the real avatar user can share information with the virtual avatar user, the above-described virtual space generated for the virtual avatar user may be reflected in the expression information transmitted to the real avatar user. In addition, so that the real participants who participate in the real environment 10 can share information with the virtual avatar user, the expression device 4 may be made to express the virtual space, whereby the virtual space is superimposed on the real environment 10 that the real participants see directly.
Next, the operation of the information processing device 2 of this embodiment will be described. FIG. 3 is a flowchart showing an example of a control procedure for the device 1 in the information processing device 2 of this embodiment. The information processing device 2 acquires motion information (step S1). Specifically, the motion information receiving unit 23 acquires motion information by receiving it from the participant motion detection device 5 that detects the motion of the real avatar user, and outputs the acquired motion information to the device control unit 22.

Next, the information processing device 2 generates control information for reflecting the motion information in the movement of the real avatar (step S2). Specifically, the information processing device 2 uses the motion information to generate control information for causing the device 1, which is the real avatar of the real avatar user corresponding to the motion information, to perform the motion corresponding to the motion information, and outputs the generated control information to the control information transmitting unit 21.

Next, the information processing device 2 transmits the control information (step S3). Specifically, the control information transmitting unit 21 transmits the control information to the device 1. By performing the above-described processing, for example, every control cycle, the movements of the real avatar user are reflected in the device 1, which is the real avatar corresponding to that real avatar user.
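For illustration, steps S1 to S3 repeated every control cycle might look like the following sketch; the 20 ms period and the helper objects are assumptions, and the actual control cycle and interfaces are not specified in this disclosure.

```python
import time

CONTROL_PERIOD_S = 0.02  # assumed control cycle length

def control_loop(motion_receiver, device_controller, control_sender, cycles=100):
    """Repeat steps S1 (acquire), S2 (generate), and S3 (transmit) every control cycle."""
    for _ in range(cycles):
        motion_info = motion_receiver.receive()                          # step S1
        control_info = device_controller.generate_control(motion_info)  # step S2
        control_sender.send(control_info)                                # step S3
        time.sleep(CONTROL_PERIOD_S)
```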
FIG. 4 is a flowchart illustrating an example of a procedure for generating virtual information regarding a virtual avatar in the information processing device 2 of this embodiment. The information processing device 2 performs step S1 in the same manner as in the example shown in FIG. 3. However, the motion information here is motion information regarding the virtual avatar user, and the motion information receiving unit 23 outputs the motion information to the expression information generation unit 25.

Next, the information processing device 2 generates virtual information corresponding to the virtual avatar in which the motion information is reflected (step S4). Specifically, the expression information generation unit 25 uses the motion information to generate virtual information, which is expression information such as video and sound corresponding to a virtual space that includes the virtual avatar of the virtual avatar user corresponding to the motion information. By performing the above-described processing, for example, every control cycle, virtual information for expressing a virtual space in which the movements of the virtual avatar user are reflected is generated. Note that the virtual avatar and the information regarding the virtual space can be generated using techniques such as spatial reconstruction, MR (Mixed Reality), and AR (Augmented Reality); however, the generation of the virtual avatar and of the information regarding the virtual space is not limited to these examples.
FIG. 5 is a flowchart illustrating an example of a procedure for generating expression information in the information processing device 2 of this embodiment. The information processing device 2 acquires the information detected by the sensors (step S11). Specifically, the detection information receiving unit 24 acquires detection information by receiving it from at least one of the sensor 14 and the sensor 3, and outputs the acquired detection information to the expression information generation unit 25.

Next, the information processing device 2 generates first synthesis information by synthesizing the virtual information with the detection information to be synthesized among the detection information (step S12). Specifically, the expression information generation unit 25 generates the first synthesis information by synthesizing the virtual information generated in step S4 shown in FIG. 4 with the detection information to be synthesized among the detection information. The detection information to be synthesized is the detection information with which the virtual information is synthesized; for example, if the virtual information is video, the detection information to be synthesized is video, and if the virtual information is video and sound, the detection information to be synthesized is video and sound. Here, an example is described in which the video and sound are acquired by the sensor 14 mounted on the device 1; however, when the video and sound are acquired by the sensor 3, such as a 360-degree free viewpoint camera, the expression information generation unit 25 uses the detection information to generate information indicating the video and sound corresponding to the virtual avatar or the real avatar, and synthesizes the virtual information with the generated information. When there are a plurality of remote participants, the virtual information is synthesized, for each remote participant, with the corresponding detection information or with the information generated using the detection information of the sensor 3. Note that the synthesis of the virtual information need not be performed.

Next, the information processing device 2 converts the detection information to be converted among the detection information into another type of information, and generates second synthesis information by synthesizing the converted information with the first synthesis information (step S13). Specifically, the expression information generation unit 25 converts the detection information to be converted among the detection information into another type of information, generates the second synthesis information by synthesizing the converted information with the first synthesis information, and outputs the second synthesis information to the expression information transmitting unit 26 as expression information. For example, when detection information on the state of the air, such as temperature and humidity, is converted into visual information, the detection information on the state of the air is the detection information to be converted, and the other type of information is text, images, or the like. This conversion can be performed, for example, by learning in advance, for example by supervised machine learning using trial results, the content of another type of information that allows a person to feel a similar state depending on the value of the detection information to be converted, and then inputting the detection information to be converted into the trained model to obtain the conversion result. The conversion method is not limited to this example. Examples of expressing the state of the air with visual information, auditory information, and the like include generating audio information that exaggerates the sounds of moving birds and insects, displaying a mirage-like shimmer in the video to express sultriness, and displaying the video so that edges appear crisp to express coldness, but the examples are not limited to these. Similarly, information regarding taste may be converted into visual information, auditory information, or the like. Note that if there is no detection information to be converted, step S13 is not performed, and the first synthesis information is output to the expression information transmitting unit 26 as the expression information. Note also that when there are a plurality of remote participants, steps S12 and S13 are performed for each remote participant.
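As an illustrative stand-in for this conversion, the sketch below maps air-state detection information (temperature and humidity) to the kinds of visual and auditory cues mentioned above. In the embodiment this mapping would be learned in advance; the thresholds here are simple assumed rules used only as a placeholder for such a trained model.

```python
def convert_air_state(temperature_c: float, humidity_pct: float) -> dict:
    """Placeholder rule-based conversion of air-state detection info into cues."""
    cues = {"visual": [], "audio": []}
    if temperature_c >= 28 and humidity_pct >= 70:
        cues["visual"].append("mirage-like shimmer overlay")  # expresses sultriness
        cues["audio"].append("exaggerated insect sounds")
    elif temperature_c <= 5:
        cues["visual"].append("crisp, high-contrast edges")   # expresses coldness
    return cues

print(convert_air_state(30.0, 80.0))
```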
Next, the information processing device 2 transmits the second synthesis information and the detection information that is neither a synthesis target nor a conversion target to the expression devices 6 of the remote participants (step S14). Specifically, the expression information generation unit 25 transmits the second synthesis information and the detection information that is neither a synthesis target nor a conversion target to the expression devices 6 of the remote participants, that is, the expression devices 6 of the virtual avatar users and the real avatar users.

Next, the information processing device 2 transmits the virtual information to the expression device 4 of the real environment 10 (step S15). Specifically, the expression information generation unit 25 transmits the virtual information to the expression device 4 of the real environment 10, that is, the expression device 4 that conveys information to the real participants. Note that the information converted in step S13 may be synthesized with the virtual information and transmitted to the expression device 4 of the real participants. Since the real participants can directly feel the state of the air in the real environment 10, the information converted in step S13 need not be transmitted to the expression device 4; however, by also conveying the converted information to the real participants, the real participants can share the information with the virtual avatar users and the real avatar users. By performing the above-described processing, for example, every control cycle, information indicating the state of the real environment 10 is conveyed to the remote participants, and the virtual information is conveyed to the real participants.
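The routing performed in steps S14 and S15 could be sketched as follows; the device registries and the `present` method are hypothetical and stand in for whatever transmission interface the expression devices actually expose.

```python
def route_expression_info(second_synthesis_info, passthrough_detection_info,
                          virtual_info, remote_expression_devices,
                          real_env_expression_devices):
    # Step S14: to each remote participant's expression device 6 (real avatar and
    # virtual avatar users), send the second synthesis information together with
    # the detection information that was neither synthesized nor converted.
    for device in remote_expression_devices:
        device.present(second_synthesis_info)
        device.present(passthrough_detection_info)
    # Step S15: send the virtual information to the expression devices 4 in the
    # real environment 10 so that real participants can share the virtual space.
    for device in real_env_expression_devices:
        device.present(virtual_info)
```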
 なお、実アバタユーザ(第1ユーザ)に対応する表現装置6を第1表現装置とし、仮想アバタユーザ(第2ユーザ)に対応する表現装置6を第2表現装置とすると、第1表現装置は、第1ユーザが五感のうちの少なくとも1つで認識できる表現が可能であればよく、第2表現装置は、少なくとも第2ユーザに映像を提示可能であればよい。また、センサ3は、仮想アバタユーザに提供する映像を生成するために、少なくとも実環境10を撮影することで実環境10における物体に関する情報を検出する撮影装置を含む。そして、表現情報生成部25は、センサ14から受信した検出情報である第1検出情報を用いて第1ユーザ(実アバタユーザ)に伝達する第1表現情報を生成し、センサ3から受信した第2検出情報を用いて、第2ユーザ(仮想アバタユーザ)の実環境10における仮想視点に対応する映像を含む第2表現情報を生成し、表現情報送信部26は、第1表現装置へ第1表現情報を送信し、第2表現装置へ第2表現情報を送信する。 Note that if the expression device 6 corresponding to the real avatar user (first user) is the first expression device and the expression device 6 corresponding to the virtual avatar user (second user) is the second expression device, then the first expression device is , it is sufficient that the first user is capable of expressing an image that can be recognized with at least one of the five senses, and the second expression device is sufficient as long as it is capable of presenting an image to at least the second user. Further, the sensor 3 includes a photographing device that detects information regarding objects in the real environment 10 by photographing at least the real environment 10 in order to generate an image to be provided to the virtual avatar user. Then, the expression information generation unit 25 generates first expression information to be transmitted to the first user (real avatar user) using the first detection information that is the detection information received from the sensor 14 , and uses the first detection information that is the detection information received from the sensor 3 2 detection information is used to generate second expression information including an image corresponding to the virtual viewpoint in the real environment 10 of the second user (virtual avatar user), and the expression information transmitter 26 transmits the first expression information to the first expression device. The expression information is transmitted, and the second expression information is transmitted to the second expression device.
As described above, the expression information generation unit 25 may convert the first detection information indicating the detection result of at least one of temperature, humidity, wind, and odor in the real environment 10 into first expression information that can be recognized by at least one of sight and hearing. Likewise, the expression information generation unit 25 may convert the second detection information indicating the detection result of at least one of temperature, humidity, wind, and odor in the real environment 10 into second expression information that can be recognized by at least one of sight and hearing. The sensors may also include a sensor related to taste, and the expression information generation unit 25 may generate information about taste in the real environment 10. In this case, the expression devices 4 and 6 include a taste presentation device that reproduces taste. Alternatively, the taste information may be converted into another modality, such as a visual or auditory representation.
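As a rough illustration of the conversion described above, the sketch below maps air-state readings (temperature, humidity, wind, odor) onto a caption overlay and ambient sound cues. The thresholds, field names, and labels are assumptions introduced only for this example.

    # Minimal sketch (assumption, not the publication's algorithm) of converting
    # air-state detection information into expression information that can be
    # recognized visually or aurally.
    def air_state_to_expression(detection: dict) -> dict:
        """Turn raw air-state readings into an on-screen caption and sound cues."""
        captions = []
        sound_cues = []

        temp = detection.get("temperature_c")
        if temp is not None:
            captions.append(f"Temperature: {temp:.1f} deg C")

        humidity = detection.get("humidity_pct")
        if humidity is not None:
            captions.append(f"Humidity: {humidity:.0f}%")

        wind = detection.get("wind_speed_mps")
        if wind is not None:
            captions.append(f"Wind: {wind:.1f} m/s")
            if wind >= 5.0:                      # illustrative threshold
                sound_cues.append("strong_wind_loop")
            elif wind > 0.5:
                sound_cues.append("light_breeze_loop")

        odor = detection.get("odor_label")
        if odor:
            captions.append(f"Scent: {odor}")

        return {"caption_overlay": captions, "sound_cues": sound_cues}

    # Example: a reading from the real environment 10 becomes a caption overlay
    # plus an ambient sound cue for the remote participant's expression device.
    print(air_state_to_expression(
        {"temperature_c": 23.4, "wind_speed_mps": 6.2, "odor_label": "fresh grass"}))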
The expression information generation unit 25 may also generate the first expression information using the second detection information. That is, the information acquired by the sensors 3 may be used to generate the expression information transmitted to the real avatar user.
The expression information generation unit 25 may also generate the second expression information so that a virtual avatar is displayed at the virtual viewpoint. In that case, the participant motion detection device 5 detects the motion of the virtual avatar user, and the expression information generation unit 25 virtually generates, based on the motion information of the virtual avatar user and the second detection information (video data of the real environment 10), video in which the objects of the real environment 10 and the virtual avatar change in accordance with the motion of the virtual avatar user within a virtual space, which is a range including the virtual viewpoint, and composites virtual information indicating the generated video with the second expression information. The expression information transmission unit 26 may then transmit the second expression information with which the virtual information has been composited to the second expression device. The expression information generation unit 25 may also composite the virtual information with the first expression information, and the expression information transmission unit 26 may transmit the first expression information with which the virtual information has been composited to the first expression device. The virtual avatar may have only a virtual viewpoint, or may also be capable of hearing and speech in addition to the virtual viewpoint.
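The compositing step described above could, under the stated assumptions, look roughly like the following sketch: the virtual avatar's pose is updated from the user's motion information, and the resulting virtual information is merged into the second expression information. The dataclass and field names are hypothetical.

    # Minimal sketch (an assumption, not the publication's implementation) of
    # compositing virtual-avatar information with the second expression information.
    from dataclasses import dataclass

    @dataclass
    class VirtualAvatar:
        position: tuple = (0.0, 0.0, 0.0)   # the virtual viewpoint
        can_speak: bool = False             # an avatar may have only a viewpoint

    def update_avatar(avatar, motion):
        """Move the avatar according to the user's detected motion."""
        dx, dy, dz = motion.get("translation", (0.0, 0.0, 0.0))
        x, y, z = avatar.position
        avatar.position = (x + dx, y + dy, z + dz)
        return avatar

    def composite_second_expression(second_expression, avatar, virtual_space_frame):
        """Merge the virtual information (avatar state plus a rendered frame of the
        virtual space) into the payload sent to the second expression device."""
        virtual_info = {"avatar_position": avatar.position,
                        "virtual_space_frame": virtual_space_frame}
        return {**second_expression, "virtual_info": virtual_info}

    avatar = update_avatar(VirtualAvatar(), {"translation": (0.5, 0.0, 0.0)})
    print(composite_second_expression({"viewpoint_video": b"..."}, avatar, b"..."))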
The expression information transmission unit 26 may also transmit the virtual information to the expression device 4, which is a third expression device capable of presenting video to the real participants (third users). The information processing device 2 may also allow setting whether information about the virtual avatar users and the real avatar users is made public or kept private. The information about the virtual avatar users and the real avatar users includes, for example, at least part of information frequently used for security, such as face information, eyeball information, and voiceprints, and personal information such as real name, age, and date of birth, but is not limited to these. If a user does not wish to make his or her actual face or voice public for privacy reasons, the face or voice may be altered before being made public.
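A minimal sketch of the public/private setting mentioned above is given below; the set of sensitive fields and the profile keys are illustrative assumptions, not the publication's data model.

    # Minimal sketch (an assumption) of filtering participant information according
    # to a per-user public/private setting.
    SENSITIVE_FIELDS = {"face", "eyeball", "voiceprint", "real_name", "age", "birth_date"}

    def filter_profile(profile, public_fields):
        """Keep non-sensitive fields and any sensitive field the user explicitly
        chose to make public; everything else stays private."""
        return {key: value for key, value in profile.items()
                if key not in SENSITIVE_FIELDS or key in public_fields}

    profile = {"display_name": "avatar_01", "real_name": "redacted", "voiceprint": "raw-template"}
    print(filter_profile(profile, public_fields={"display_name"}))
    # -> {'display_name': 'avatar_01'}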
Next, forms of participation will be described using examples. FIG. 6 is a diagram illustrating an example in which real participants, a real avatar user, and virtual avatar users participate in an event. In the example shown in FIG. 6, real participants 8-1 to 8-3 participate in the event in the real environment 10; from participation location 7-1, user 70-1, a real avatar user, participates in the event using the device 1, which is a real avatar; and from participation locations 7-2 and 7-3, users 70-2 and 70-3, who are virtual avatar users, participate in the event using virtual avatars 9-1 and 9-2, respectively. Although not shown, each of the participation locations 7-1 to 7-3 is provided with a participant motion detection device 5 and an expression device 6. When the virtual avatars 9-1 and 9-2 are fixed rather than movable, the participant motion detection device 5 need not be provided for the users 70-2 and 70-3, who are virtual avatar users.
The device 1 includes, as its sensors 14, sensors that detect visual, auditory, olfactory, and haptic information, and includes, as its actuators, a physical effector, a locomotion unit, and a speaker. The sensors 3-1 and 3-2 installed in the real environment 10 are, for example, cameras that acquire video for generating the viewpoint images of the virtual avatars 9-1 and 9-2 and microphones that collect sound for generating the sound at the positions of the virtual avatars 9-1 and 9-2. Although FIG. 6 shows two sensors 3 for simplicity, in general there are multiple cameras that acquire video for generating the viewpoint images of the virtual avatars 9-1 and 9-2 and multiple microphones that collect sound for generating the sound at the positions of the virtual avatars 9-1 and 9-2. In addition, sensors that detect the odor, temperature, wind speed, air volume, humidity, gas types, and the like of the real environment 10 may be provided as at least one of the sensors 3 and the sensors 14.
In the example shown in FIG. 6, all the information conveyed to the user 70-1, who is a real avatar user, is detected by the sensors 14 mounted on the device 1. As described above, when the device 1 operates in accordance with the motion of the real avatar user, the field of view of a sensor 14 such as a camera that obtains visual information on the device 1 changes, and video of the real environment 10 corresponding to the motion of the real avatar user is displayed by the expression device 6. The same applies to sound, odor, and the like: as the device 1 moves, information corresponding to the sound and odor in the real environment 10 is conveyed to the real avatar user. Furthermore, when the device 1 touches an actual object in the real environment 10 in accordance with the motion of the real avatar user, haptic information is detected by the sensors 14 and conveyed to the real avatar user. This allows the real avatar user to perceive the real environment 10 with a sense of presence.
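One possible (assumed) way the device 1 could follow the real avatar user's motion is sketched below: the user's head orientation is clamped to the range of the on-board camera's gimbal before being issued as a control command. The angle limits are illustrative only.

    # Minimal sketch (not from the publication) of mapping the real avatar user's
    # head motion onto the on-board camera's field of view.
    def head_motion_to_camera_command(yaw_deg: float, pitch_deg: float) -> dict:
        """Clamp the user's head orientation to the camera gimbal's range."""
        clamp = lambda v, lo, hi: max(lo, min(hi, v))
        return {
            "camera_yaw_deg": clamp(yaw_deg, -170.0, 170.0),
            "camera_pitch_deg": clamp(pitch_deg, -60.0, 60.0),
        }

    print(head_motion_to_camera_command(35.0, -75.0))  # pitch limited to -60.0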
The users 70-2 and 70-3, who are virtual avatar users, receive the viewpoint video of the virtual avatars 9-1 and 9-2 and the sound at the positions of the virtual avatars 9-1 and 9-2, respectively, generated using the detection information acquired by the sensors 3. When a sensor that detects odor is provided as a sensor 3, the information processing device 2 can use the detection information from the sensor 3 to convey information indicating odor to the users 70-2 and 70-3, who are virtual avatar users, through the expression devices 6.
When the real participants 8-1 to 8-3 view the device 1 as it is and do not need to view the virtual avatars 9-1 and 9-2, the expression device 4 need not be provided in the real environment 10. As described above, when an image corresponding to the real avatar is projected at the location where the device 1 is present and images corresponding to the virtual avatars 9-1 and 9-2 are projected at the corresponding locations, an expression device 4 that projects these images is provided. Similarly, when the real participants 8-1 to 8-3 are to be able to perceive sounds corresponding to the virtual avatars 9-1 and 9-2, the sounds produced by the users 70-2 and 70-3, who are virtual avatar users, are conveyed to the real participants by the expression device 4.
FIG. 7 is a diagram illustrating an example in which real avatar users and virtual avatar users participate in an event. In the example shown in FIG. 7, from participation locations 7-1, 7-4, and 7-5, users 70-1, 70-4, and 70-5, who are real avatar users, participate in the event using devices 1-1 to 1-3, which are their respective real avatars, and from participation locations 7-2 and 7-3, users 70-2 and 70-3, who are virtual avatar users, participate in the event using virtual avatars 9-1 and 9-2, respectively. Although not shown, each of the participation locations 7-1 to 7-5 is provided with a participant motion detection device 5 and an expression device 6. Each of the devices 1-1 to 1-3 is the device 1 described above.
The configuration of each of the devices 1-1 to 1-3 is the same as that of the device 1 shown in FIG. 6. The method of conveying information to the real avatar users and the virtual avatar users in the example shown in FIG. 7 is also the same as in the example shown in FIG. 6. In the example shown in FIG. 7, since there are no real participants 8, the expression device 4 need not be provided in the real environment 10.
FIG. 8 is a diagram illustrating an example of generating a virtual space corresponding to a virtual avatar. In the example shown in FIG. 8, as in the example shown in FIG. 6, real participants 8-1 to 8-3 participate in the event in the real environment 10; from participation location 7-1, user 70-1, a real avatar user, participates in the event using the device 1, which is a real avatar; and from participation locations 7-2 and 7-3, users 70-2 and 70-3, who are virtual avatar users, participate in the event using virtual avatars 9-1 and 9-2, respectively. Although not shown, each of the participation locations 7-1 to 7-3 is provided with a participant motion detection device 5 and an expression device 6.
In the example shown in FIG. 8, the information processing device 2 acquires motion information from the participant motion detection device 5 corresponding to the user 70-3, who is a virtual avatar user, and uses the acquired information to generate a virtual space 90 around the virtual avatar 9-2. The information processing device 2 then composites virtual information indicating the virtual space 90 with the viewpoint video, sound, and the like of the virtual avatar 9-2 generated using the detection information of the sensors 3, and transmits the composited information as expression information to the expression device 6 corresponding to the user 70-3, who is the virtual avatar user. The composited information is also transmitted to the user 70-1, who is the real avatar user, and the user 70-2, who is a virtual avatar user.
FIG. 9 is a diagram showing an example of video with which a virtual space has been composited. The example shown in FIG. 9 shows a case in which video 201 of the virtual space corresponding to the virtual avatar 9-2 is composited with video 200 of the real environment 10 from the viewpoint of the user 70-2, who is a virtual avatar user. The video 200 is real video based on information captured by the sensors 3, whereas the video 201 is information indicating a virtual space generated based on information captured by the sensors 3. The video 201 reflects the movement of the virtual avatar 9-2 in the virtual space. Therefore, when the virtual avatar 9-2 touches an object in the virtual space, that object also changes in the virtual space. Since the video with which the virtual space has been composited is likewise conveyed to the user 70-3, who is the virtual avatar user corresponding to the virtual avatar 9-2, the user 70-3 can experience the same sensation as touching an object in the real environment 10.
In addition, by transmitting the virtual information to the expression devices 4 corresponding to the real participants 8-1 to 8-3, the real participants 8-1 to 8-3 can view the projected virtual space superimposed on the actual real environment 10.
FIG. 10 is a diagram illustrating the reduction of sensors on the device. In the example shown in FIG. 10, as in the example shown in FIG. 6, real participants 8-1 to 8-3 participate in the event in the real environment 10; from participation location 7-1, user 70-1, a real avatar user, participates in the event using the device 1, which is a real avatar; and from participation locations 7-2 and 7-3, users 70-2 and 70-3, who are virtual avatar users, participate in the event using virtual avatars 9-1 and 9-2, respectively. Although not shown, each of the participation locations 7-1 to 7-3 is provided with a participant motion detection device 5 and an expression device 6.
In the example shown in FIG. 10, the sensors 3-1 to 3-4 include cameras for acquiring 360-degree free-viewpoint video and microphones for reproducing sound according to position. The sensors 3-1 to 3-4 are used to generate video of the real environment 10 from the viewpoints of the users 70-2 and 70-3, who are virtual avatar users, and information indicating the sound at the positions of the users 70-2 and 70-3, and are also used to generate video of the real environment 10 from the viewpoint of the real avatar and information indicating the sound at the position of the real avatar. As a result, the sensors 14 of the device 1 need not include sensors that detect visual and auditory information. As shown in FIG. 10, by generating the information for the real avatar using the sensors 3, the configuration of the device 1 can be made simpler than in the example shown in FIG. 6. In the example shown in FIG. 10, both the visual and the auditory information for the real avatar are generated from the detection information of the sensors 3, but this is not limiting; one of the visual and auditory information may be generated from the detection information of the sensors 3 and the other from the detection information of the sensors 14. Furthermore, by using a sensor 3 that detects odor, the sensors that detect olfactory information may be removed from the sensors 14 of the device 1.
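As a simplified illustration of serving the real avatar's view from the shared environment sensors, the sketch below merely picks the environment camera closest to the real avatar; an actual system would synthesize a free-viewpoint image from the 360-degree cameras. The camera identifiers and positions are assumptions.

    # Minimal sketch (hypothetical) of serving the real avatar's view from the
    # shared environment cameras instead of a camera mounted on the device 1.
    import math

    def nearest_camera(avatar_xy, cameras):
        """Pick the environment camera closest to the real avatar's position.
        Choosing the nearest feed is only a stand-in for free-viewpoint synthesis."""
        return min(cameras, key=lambda cam: math.dist(avatar_xy, cam["xy"]))

    cameras = [
        {"id": "sensor3-1", "xy": (0.0, 0.0)},
        {"id": "sensor3-2", "xy": (4.0, 0.0)},
        {"id": "sensor3-3", "xy": (0.0, 4.0)},
        {"id": "sensor3-4", "xy": (4.0, 4.0)},
    ]
    print(nearest_camera((3.2, 0.5), cameras)["id"])   # -> sensor3-2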
Next, the hardware configuration of the information processing device 2 of the present embodiment will be described. The information processing device 2 of the present embodiment is realized by a computer system executing a program, which is a computer program describing the processing of the information processing device 2, so that the computer system functions as the information processing device 2. FIG. 11 is a diagram showing a configuration example of the computer system that realizes the information processing device 2 of the present embodiment. As shown in FIG. 11, this computer system includes a control unit 101, an input unit 102, a storage unit 103, a display unit 104, a communication unit 105, and an output unit 106, which are connected via a system bus 107. The control unit 101 and the storage unit 103 constitute a processing circuit.
In FIG. 11, the control unit 101 is, for example, a processor such as a CPU (Central Processing Unit), and executes the program describing the processing of the information processing device 2 of the present embodiment. Part of the control unit 101 may be realized by dedicated hardware such as a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array). The input unit 102 includes, for example, a keyboard and a mouse, and is used by the user of the computer system to input various kinds of information. The storage unit 103 includes various memories such as RAM (Random Access Memory) and ROM (Read Only Memory) and storage devices such as a hard disk, and stores the program to be executed by the control unit 101, necessary data obtained in the course of processing, and the like. The storage unit 103 is also used as a temporary storage area for the program. The display unit 104 includes a display such as an LCD (liquid crystal display panel) and displays various screens to the user of the computer system. The communication unit 105 is a receiver and a transmitter that perform communication processing. The output unit 106 is a printer, a speaker, or the like. FIG. 11 is an example, and the configuration of the computer system is not limited to the example of FIG. 11.
Here, an example of the operation of the computer system until the program of the present embodiment becomes executable will be described. In the computer system having the above configuration, the computer program is installed in the storage unit 103 from, for example, a CD-ROM or DVD-ROM set in a CD (Compact Disc)-ROM drive or a DVD (Digital Versatile Disc)-ROM drive (not shown). When the program is executed, the program read from the storage unit 103 is stored in the main storage area of the storage unit 103. In this state, the control unit 101 executes the processing of the information processing device 2 of the present embodiment in accordance with the program stored in the storage unit 103.
In the above description, a CD-ROM or DVD-ROM is used as a recording medium to provide the program describing the processing of the information processing device 2; however, this is not limiting. Depending on the configuration of the computer system, the size of the provided program, and so on, a program provided via a transmission medium such as the Internet through the communication unit 105 may be used, for example.
The program of the present embodiment causes the computer system to execute, for example, a step of receiving motion information indicating the motion of a first user who participates in an event from a location remote from the real environment 10 where the event is held; a step of generating, using the motion information, control information for causing a device installed in the real environment 10 to perform an operation according to the motion information; a step of transmitting the control information to the device; a step of receiving, from a device-mounted sensor that is mounted on the device and detects information about objects in the real environment 10, first detection information detected by the device-mounted sensor; and a step of receiving, from a real environment sensor that is installed in the real environment 10 and detects information about objects in the real environment 10, second detection information detected by the real environment sensor. The program of the present embodiment further causes the computer system to execute a step of generating, using the first detection information, first expression information to be conveyed to the first user; a step of generating, using the second detection information, second expression information including video corresponding to a virtual viewpoint in the real environment 10 of a second user who participates in the event from a location remote from the real environment 10; and a step of transmitting the first expression information to a first expression device capable of expressions that the first user can recognize with at least one of the five senses and transmitting the second expression information to a second expression device capable of presenting video to the second user.
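The enumerated steps could be arranged, purely as an illustrative sketch, into a single per-cycle routine as below. The stub classes and helper names are hypothetical and stand in for the communication unit 105 and the expression devices.

    # Minimal sketch (hypothetical) of the step sequence enumerated above, with
    # stubbed-in network and device objects so the ordering of the steps is visible.
    class StubLink:
        """Stand-in for the communication unit 105; returns canned payloads."""
        def receive_motion_info(self):
            return {"translation": (0.1, 0.0, 0.0)}
        def send_to_device(self, control):
            print("control ->", control)
        def receive_device_detection(self):
            return {"haptics": "contact", "video": "frame-from-device"}
        def receive_environment_detection(self):
            return {"video_360": "frame-from-env", "audio": "ambient"}

    class PrintDevice:
        def __init__(self, name): self.name = name
        def send(self, payload): print(self.name, "<-", payload)

    def run_cycle(link, first_device, second_device):
        motion = link.receive_motion_info()                       # receive motion information
        control = {"move": motion["translation"]}                 # generate control information
        link.send_to_device(control)                              # send control information
        first_detection = link.receive_device_detection()         # first detection information
        second_detection = link.receive_environment_detection()   # second detection information
        first_expression = dict(first_detection)                  # first expression information
        second_expression = {"virtual_viewpoint_video": second_detection["video_360"],
                             "audio": second_detection["audio"]}  # second expression information
        first_device.send(first_expression)                       # send to first expression device
        second_device.send(second_expression)                     # send to second expression device

    run_cycle(StubLink(), PrintDevice("first_expression_device"),
              PrintDevice("second_expression_device"))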
The device control unit 22 and the expression information generation unit 25 shown in FIG. 1 are realized by the control unit 101 shown in FIG. 11 executing the computer program stored in the storage unit 103 shown in FIG. 11. The storage unit 103 shown in FIG. 11 is also used to realize the device control unit 22 and the expression information generation unit 25 shown in FIG. 1. The control information transmission unit 21, the motion information reception unit 23, the detection information reception unit 24, and the expression information transmission unit 26 shown in FIG. 1 are realized by the communication unit 105 shown in FIG. 11. The information processing device 2 may also be realized by a plurality of computer systems. For example, the information processing device 2 may be realized by a cloud computer system. Furthermore, some of the functions of the information processing device 2 may be realized by another device provided separately from the information processing device 2. The other device may be provided in the real environment 10, near the residence of a user participating in the event, or at another location.
As described above, in the present embodiment, in an event using the real environment 10, remote participation of a user in the event is realized by using the physical device 1 as a real avatar. This allows remote participants in an event using the real environment 10 to experience the real environment 10 more appropriately. By using both participation by a real avatar using the device 1 and participation by a virtual avatar, an increase in the number of participants in the event can be handled flexibly while keeping costs down. By conveying information indicating the state of the air in the real environment 10 to the remote participants, the remote participants can experience a state closer to the real environment 10. Furthermore, by generating the information for the real avatar using the detection information of the sensors 3, which are used to provide the virtual avatar users with information corresponding to the virtual avatars, the configuration of the device 1 can be simplified and costs can be reduced.
The configurations described in the above embodiment are merely examples; they can be combined with other known techniques, embodiments can be combined with each other, and part of the configurations can be omitted or modified without departing from the gist.
1, 1-1 to 1-3 device; 2 information processing device; 3-1 to 3-4, 14-1, 14-2 sensor; 4-1, 4-2, 6-1, 6-2 expression device; 5-1, 5-2 participant motion detection device; 7, 7-1 to 7-5 participation location; 8-1 to 8-4 real participant; 9, 9-1, 9-2 virtual avatar; 10 real environment; 11 control information reception unit; 12 drive control unit; 13 drive unit; 15 detection information transmission unit; 21 control information transmission unit; 22 device control unit; 23 motion information reception unit; 24 detection information reception unit; 25 expression information generation unit; 26 expression information transmission unit; 100 remote experience system.

Claims (17)

  1.  A remote experience system comprising:
     a device that is installed in a real environment where an event is held and includes a device-mounted sensor that detects information about objects in the real environment;
     a real environment sensor that is installed in the real environment and detects information about objects in the real environment;
     a participant motion detection device that detects a motion of a first user who participates in the event from a location remote from the real environment;
     a first expression device capable of expressions that the first user can recognize with at least one of the five senses;
     a second expression device capable of presenting video to a second user who participates in the event from a location remote from the real environment; and
     an information processing device, wherein
     the real environment sensor includes an imaging device that detects information about objects in the real environment by photographing the real environment, and
     the information processing device includes:
     a motion information reception unit that receives, from the participant motion detection device, motion information indicating the motion acquired by the participant motion detection device;
     a device control unit that generates, using the motion information, control information for causing the device to perform an operation according to the motion information;
     a control information transmission unit that transmits the control information to the device;
     a detection information reception unit that receives first detection information detected by the device-mounted sensor from the device-mounted sensor and receives second detection information detected by the real environment sensor from the real environment sensor;
     an expression information generation unit that generates, using the first detection information, first expression information to be conveyed to the first user, and generates, using the second detection information, second expression information including video corresponding to a virtual viewpoint of the second user in the real environment; and
     an expression information transmission unit that transmits the first expression information to the first expression device and transmits the second expression information to the second expression device.
  2.  The remote experience system according to claim 1, wherein the object includes at least one of an animal, a plant, and air existing in the real environment.
  3.  The remote experience system according to claim 2, wherein the device-mounted sensor detects at least one of temperature, humidity, wind, and odor in the real environment.
  4.  The remote experience system according to claim 3, wherein the expression information generation unit converts the first detection information indicating a detection result of at least one of temperature, humidity, wind, and odor in the real environment into the first expression information that can be recognized by at least one of sight and hearing.
  5.  The remote experience system according to claim 2 or 3, wherein the real environment sensor detects at least one of temperature, humidity, wind, and odor in the real environment.
  6.  The remote experience system according to claim 3, wherein the expression information generation unit converts the second detection information indicating a detection result of at least one of temperature, humidity, wind, and odor in the real environment into the second expression information that can be recognized by at least one of sight and hearing.
  7.  The remote experience system according to any one of claims 1 to 5, wherein the expression information generation unit generates the first expression information using the second detection information.
  8.  The remote experience system according to any one of claims 1 to 3, wherein
     the device includes a manipulator,
     the device-mounted sensor includes a haptic sensor that detects a haptic sensation at the hand of the manipulator, and
     the first expression device includes a haptic device that conveys the haptic sensation to the first user's hand.
  9.  The remote experience system according to any one of claims 1 to 3, wherein the expression information generation unit generates the second expression information so that a virtual avatar, which is an avatar corresponding to the second user, is displayed at the virtual viewpoint.
  10.  The remote experience system according to claim 9, comprising a participant motion detection device that detects a motion of the second user, wherein
     the motion information reception unit receives, from the participant motion detection device that detects the motion of the second user, motion information indicating the motion of the second user acquired by that participant motion detection device,
     the expression information generation unit virtually generates, based on the motion information of the second user and the second detection information, video in which the object of the real environment and the virtual avatar change in accordance with the motion of the second user within a virtual space that is a range including the virtual viewpoint, and composites virtual information indicating the generated video with the second expression information, and
     the expression information transmission unit transmits the second expression information with which the virtual information has been composited to the second expression device.
  11.  The remote experience system according to claim 10, wherein the expression information generation unit composites the virtual information with the first expression information, and the expression information transmission unit transmits the first expression information with which the virtual information has been composited to the first expression device.
  12.  The remote experience system according to claim 10 or 11, comprising a third expression device capable of presenting video to a third user who participates in the event in the real environment, wherein the expression information transmission unit transmits the virtual information to the third expression device.
  13.  The remote experience system according to any one of claims 1 to 12, wherein the information processing device is capable of setting whether information about the first user and the second user is made public or kept private.
  14.  The remote experience system according to any one of claims 1 to 12, wherein at least one of a communication line between the information processing device and the participant motion detection device, a communication line between the information processing device and the device-mounted sensor, a communication line between the information processing device and the real environment sensor, a communication line between the information processing device and the device, a communication line between the information processing device and the first expression device, and a communication line between the information processing device and the second expression device is a Beyond 5G line.
  15.  An information processing device comprising:
     a motion information reception unit that receives motion information indicating a motion of a first user who participates in an event from a location remote from a real environment where the event is held;
     a device control unit that generates, using the motion information, control information for causing a device installed in the real environment to perform an operation according to the motion information;
     a control information transmission unit that transmits the control information to the device;
     a detection information reception unit that receives, from a device-mounted sensor that is mounted on the device and detects information about objects in the real environment, first detection information detected by the device-mounted sensor, and receives, from a real environment sensor that is installed in the real environment and detects information about objects in the real environment, second detection information detected by the real environment sensor;
     an expression information generation unit that generates, using the first detection information, first expression information to be conveyed to the first user, and generates, using the second detection information, second expression information including video corresponding to a virtual viewpoint in the real environment of a second user who participates in the event from a location remote from the real environment; and
     an expression information transmission unit that transmits the first expression information to a first expression device capable of expressions that the first user can recognize with at least one of the five senses, and transmits the second expression information to a second expression device capable of presenting video to the second user.
  16.  An information processing method in an information processing device, the method comprising:
     receiving motion information indicating a motion of a first user who participates in an event from a location remote from a real environment where the event is held;
     generating, using the motion information, control information for causing a device installed in the real environment to perform an operation according to the motion information;
     transmitting the control information to the device;
     receiving, from a device-mounted sensor that is mounted on the device and detects information about objects in the real environment, first detection information detected by the device-mounted sensor;
     receiving, from a real environment sensor that is installed in the real environment and detects information about objects in the real environment, second detection information detected by the real environment sensor;
     generating, using the first detection information, first expression information to be conveyed to the first user;
     generating, using the second detection information, second expression information including video corresponding to a virtual viewpoint in the real environment of a second user who participates in the event from a location remote from the real environment; and
     transmitting the first expression information to a first expression device capable of expressions that the first user can recognize with at least one of the five senses, and transmitting the second expression information to a second expression device capable of presenting video to the second user.
  17.  A program causing a computer system to execute:
     a step of receiving motion information indicating a motion of a first user who participates in an event from a location remote from a real environment where the event is held;
     a step of generating, using the motion information, control information for causing a device installed in the real environment to perform an operation according to the motion information;
     a step of transmitting the control information to the device;
     a step of receiving, from a device-mounted sensor that is mounted on the device and detects information about objects in the real environment, first detection information detected by the device-mounted sensor;
     a step of receiving, from a real environment sensor that is installed in the real environment and detects information about objects in the real environment, second detection information detected by the real environment sensor;
     a step of generating, using the first detection information, first expression information to be conveyed to the first user;
     a step of generating, using the second detection information, second expression information including video corresponding to a virtual viewpoint in the real environment of a second user who participates in the event from a location remote from the real environment; and
     a step of transmitting the first expression information to a first expression device capable of expressions that the first user can recognize with at least one of the five senses, and transmitting the second expression information to a second expression device capable of presenting video to the second user.
PCT/JP2022/015971 2022-03-30 2022-03-30 Remote experience system, information processing device, information processing method, and program WO2023188104A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/015971 WO2023188104A1 (en) 2022-03-30 2022-03-30 Remote experience system, information processing device, information processing method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/015971 WO2023188104A1 (en) 2022-03-30 2022-03-30 Remote experience system, information processing device, information processing method, and program

Publications (1)

Publication Number Publication Date
WO2023188104A1 true WO2023188104A1 (en) 2023-10-05

Family

ID=88200265

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/015971 WO2023188104A1 (en) 2022-03-30 2022-03-30 Remote experience system, information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2023188104A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012152395A (en) * 2011-01-26 2012-08-16 Sony Computer Entertainment Inc Information processing system, its control method, program and information storage medium
JP2013020389A (en) * 2011-07-08 2013-01-31 Dowango:Kk Display system for installation in venue
JP2017033536A (en) * 2015-07-29 2017-02-09 イマージョン コーポレーションImmersion Corporation Crowd-based haptics
JP2018008369A (en) * 2016-06-10 2018-01-18 ザ・ボーイング・カンパニーThe Boeing Company Remote control of robotic platforms based on multi-modal sensory data
US20180338163A1 (en) * 2017-05-18 2018-11-22 International Business Machines Corporation Proxies for live events
WO2019225548A1 (en) * 2018-05-21 2019-11-28 Telexistence株式会社 Remote control system, information processing method, and program
JP2021144522A (en) * 2020-03-12 2021-09-24 キヤノン株式会社 Image processing apparatus, image processing method, program, and image processing system

Similar Documents

Publication Publication Date Title
LaValle Virtual reality
US9654734B1 (en) Virtual conference room
Mandal Brief introduction of virtual reality & its challenges
Mavor et al. Virtual reality: scientific and technological challenges
JP2022549853A (en) Individual visibility in shared space
Larsson et al. The actor-observer effect in virtual reality presentations
JP6298130B2 (en) Simulation system and program
Stanney et al. Extended reality (XR) environments
JP3715219B2 (en) Virtual field training device
JP6683864B1 (en) Content control system, content control method, and content control program
JP2007151647A (en) Image processing apparatus and method and program
JP2017215577A (en) Education system using virtual robot
Vafadar Virtual reality: Opportunities and challenges
US10582190B2 (en) Virtual training system
JP2020080154A (en) Information processing system
JP2018136944A (en) Simulation system and program
WO2023188104A1 (en) Remote experience system, information processing device, information processing method, and program
JP7465737B2 (en) Teaching system, viewing terminal, information processing method and program
JP6892478B2 (en) Content control systems, content control methods, and content control programs
Nesamalar et al. An introduction to virtual reality techniques and its applications
JP2021009351A (en) Content control system, content control method, and content control program
US20240096227A1 (en) Content provision system, content provision method, and content provision program
KR102581805B1 (en) Method and system for training plant operator by metaverse server
Ghosh et al. Education Applications of 3D Technology
WO2022107294A1 (en) Vr image space generation system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22935250

Country of ref document: EP

Kind code of ref document: A1