WO2022049707A1 - Somatosensory interface system, and action somatosensation system - Google Patents

Somatosensory interface system, and action somatosensation system

Info

Publication number
WO2022049707A1
WO2022049707A1 (PCT/JP2020/033478)
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
region
user
virtual space
period
Prior art date
Application number
PCT/JP2020/033478
Other languages
French (fr)
Japanese (ja)
Inventor
良哉 尾小山
Original Assignee
株式会社Abal
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Abal filed Critical 株式会社Abal
Priority to PCT/JP2020/033478 priority Critical patent/WO2022049707A1/en
Priority to JP2021520433A priority patent/JP6933849B1/en
Publication of WO2022049707A1 publication Critical patent/WO2022049707A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to an experience-based interface system with which a user controls, via a virtual space, the operation of a controlled object existing in a region of the real space different from the region in which the user exists, and to a motion experience system that allows the user to experience the operations that the controlled object can execute.
  • In a known virtual-space experience system, a user is made to recognize, via a head-mounted display (hereinafter "HMD"), an image of a virtual space generated by a server or the like together with an image of an avatar corresponding to the user in that virtual space.
  • In such a system, a motion capture device or the like recognizes the user's movement in the real space (for example, movement of body parts, movement of coordinates, change of posture), and the behavior of the avatar in the virtual space is controlled according to the recognized movement (see, for example, Patent Document 1).
  • The virtual space experience system of Patent Document 1 generates, in the virtual space, a first avatar corresponding to the user, a gamepad-type object, and a second avatar of a predetermined character type. The system then operates the first avatar according to the user's movement, operates the gamepad-type object according to the movement of the first avatar, and operates the second avatar according to the movement of the gamepad-type object.
  • In this way, the virtual space is interposed as an interface, and the second avatar is operated in the virtual space through the operation of the first avatar corresponding to the user.
  • Building on this, an interface system is desired that controls the operation of a controlled object in the real space, such as a robot, made to correspond to the second avatar.
  • With such an interface system, the user could act in a virtual space created to correspond to a predetermined area of the real space that is difficult for the user to actually enter (for example, a nuclear power plant during decommissioning work).
  • By operating, in the virtual space, the second avatar corresponding to the controlled object via the first avatar corresponding to the user, the user would be able to control the operation of a controlled object existing in an area different from the user's own, as if manipulating it directly with his or her own hands.
  • The present invention has been made in view of the above points, and its object is to provide an interface system with which a user can control, via a virtual space, the operation of a controlled object existing in a region different from the region in which the user exists in the real space, according to that region, and a motion experience system that allows the user to experience the operations that the controlled object can execute.
  • The experience-based interface system of the present invention is an interface system with which a user existing in a first region of the real space controls, via a virtual space, the operation of a controlled object existing in a second region of the real space that differs from the first region in at least one of time and position. The system comprises: a virtual space generation unit that generates a virtual space in which a first avatar corresponding to the user and a second avatar corresponding to the controlled object exist; a user motion recognition unit that recognizes the user's motion in the first region; and an avatar control unit that controls the operation of the first avatar according to the operation of the user and controls the operation of the second avatar according to the operation of the first avatar.
  • It further comprises an image determination unit that determines the image of the virtual space to be recognized by the user based on the state of the first avatar and the state of the second avatar,
  • an image display that causes the user to recognize the determined image of the virtual space, and a possible motion recognition unit that recognizes the actions that the controlled object can execute in the second region,
  • as well as a target control unit that controls the operation of the controlled object according to the operation of the second avatar.
  • The avatar control unit is characterized in that it controls the second avatar while limiting the actions it can perform, based on the actions that the controlled object can execute in the second region (a minimal control-flow sketch is given below).
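  • The following is a hedged, illustrative sketch of this pipeline only, not the patented implementation: class and function names such as AvatarController and second_avatar_motion are hypothetical, and a single speed limit stands in for whatever operations the controlled object can actually execute in the second region.

```python
from dataclasses import dataclass


@dataclass
class Motion:
    """A simplified motion command: a velocity vector in metres per second."""
    vx: float
    vy: float


class AvatarController:
    """Hypothetical sketch of the avatar control unit described above.

    The first avatar mirrors the user; the second avatar mirrors the first
    avatar, but only within the envelope of motions the controlled object
    (e.g. a drone) could actually execute in the second region.
    """

    def __init__(self, max_speed: float):
        # Assumed capability of the controlled object in the second region.
        self.max_speed = max_speed

    def first_avatar_motion(self, user_motion: Motion) -> Motion:
        # The first avatar simply follows the user's recognized motion.
        return user_motion

    def second_avatar_motion(self, first_motion: Motion) -> Motion:
        # The second avatar follows the first avatar, clamped to what the
        # controlled object could execute.
        speed = (first_motion.vx ** 2 + first_motion.vy ** 2) ** 0.5
        if speed <= self.max_speed:
            return first_motion
        scale = self.max_speed / speed
        return Motion(first_motion.vx * scale, first_motion.vy * scale)


if __name__ == "__main__":
    controller = AvatarController(max_speed=2.0)
    user = Motion(vx=3.0, vy=4.0)  # the user moves fast in the first region
    second = controller.second_avatar_motion(controller.first_avatar_motion(user))
    print(second)  # clamped so the overall speed stays at 2.0 m/s
```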
  • In the experience-based interface system of the present invention, the operation of the first avatar corresponding to the user is controlled according to the operation of the user, and the operation of the second avatar is controlled according to the operation of the first avatar.
  • Thus, the user can operate, in the real space, the controlled object corresponding to the second avatar by operating the second avatar via the first avatar in the virtual space.
  • Moreover, the operation of the second avatar in the virtual space is controlled while being limited based on the operations executable by the controlled object existing in the second region, which differs from the first region in at least one of time and position. That is, the operation is controlled while being restricted by the constraints of the real space in which the controlled object exists (for example, the surrounding environment of the controlled object, the functions of the controlled object, etc.).
  • As a result, an operation of the second avatar that corresponds to an operation the controlled object cannot execute (that is, an operation ignoring the constraints of the real space) cannot naturally be executed even in the virtual space.
  • For example, the second avatar may be prevented from performing a predetermined action, or from moving at or above a predetermined speed.
  • Since the second avatar cannot naturally execute, even in the virtual space, any operation corresponding to an operation the controlled object cannot execute, the user can control the operation of the controlled object in the second region according to that region without having to consider the constraints of the second region in the real space.
  • In the experience-based interface system of the present invention, it is preferable to provide a space recognition unit that recognizes the shape of the first region and the shape of the second region,
  • to have the virtual space generation unit generate the virtual space so that the shape of the virtual space corresponds to the shape of the first region,
  • and to have the avatar control unit control the second avatar while limiting the operations it can execute based on the operations the controlled object can execute in the second region and on the difference between the shape of the first region and the shape of the second region.
  • For example, the first region may be a narrow space such as a room in a home or an office, while the second region may be a large space such as a construction site.
  • When the virtual space is generated so that its shape corresponds to the shape of the first region in this way, the virtual space corresponds to the range in which the user can actually move, so the user can operate the first avatar without misjudging his or her own movable range. As a result, even if the shapes of the first and second regions differ, the user can control the operation of the controlled object in the second region according to the second region while remaining in the first region.
  • Furthermore, it is preferable that the movement of the second avatar be restricted based not only on the movements the controlled object can execute in the second region but also on the difference between the shape of the first region and the shape of the second region.
  • In that case, operations of the second avatar corresponding to operations the controlled object cannot execute are naturally restricted,
  • so the user can control the operation of the controlled object in the second region according to the second region without having to consider the difference between the shapes of the first and second regions (a speed-scaling sketch is given below).
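  • A minimal sketch of how the shape difference could translate into per-axis limits, assuming both regions are axis-aligned rectangles and that second-region motion is first-region motion multiplied by the per-axis size ratio; names such as axis_scale are illustrative and not from the patent.

```python
def axis_scale(first_size: tuple[float, float],
               second_size: tuple[float, float]) -> tuple[float, float]:
    """Per-axis ratio of second-region size to first-region (= virtual space) size."""
    return (second_size[0] / first_size[0], second_size[1] / first_size[1])


def second_avatar_speed_limit(object_max_speed: float,
                              first_size: tuple[float, float],
                              second_size: tuple[float, float]) -> tuple[float, float]:
    """Maximum per-axis speed allowed for the second avatar so that the controlled
    object, whose motion is scaled up by the size ratio, never exceeds its own limit."""
    sx, sy = axis_scale(first_size, second_size)
    return (object_max_speed / sx, object_max_speed / sy)


# Example: a 4 m x 4 m room mapped onto a 4 m x 16 m site; the controlled
# object can move at up to 2 m/s.
print(second_avatar_speed_limit(2.0, (4.0, 4.0), (4.0, 16.0)))
# -> (2.0, 0.5): along the stretched axis the avatar must move four times slower.
```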
  • In the experience-based interface system of the present invention, it is also preferable to provide a period recognition unit that recognizes the length of a first period, which is the period during which the second avatar is operable in the virtual space, and the length of a second period, which is the period during which the controlled object is operable in the second region,
  • as well as a motion storage unit that stores the motion of the second avatar.
  • In that case, it is preferable that the avatar control unit control the second avatar while limiting its executable movements based on the operations the controlled object can execute in the second region and on the difference between the length of the first period and the length of the second period, and that the target control unit control the controlled object in the second period according to the operation of the second avatar stored in the motion storage unit during the first period.
  • In some situations where the experience-based interface system of the present invention is used, the operation of the second avatar and the operation of the controlled object need not correspond in real time. In such situations, providing the motion storage unit allows the motion of the second avatar in a certain time zone to be stored (that is, the motion of the controlled object to be set), and the controlled object can then actually operate according to the stored motion in a later time zone.
  • However, when the length of the first period (the operable period of the second avatar in the virtual space) and the length of the second period (the operable period of the controlled object in the second region) differ, there is a risk that the user will easily make the second avatar perform operations that correspond to operations the controlled object cannot execute.
  • For example, when the second period is shorter than the first period, the operating speed of the controlled object relative to that of the second avatar becomes faster, so if the user operates the second avatar in the same way as when the periods are equal, the upper limit of the controlled object's operating speed may be exceeded.
  • Therefore, it is preferable that the movement of the second avatar be restricted based not only on the movements the controlled object can execute in the second region but also on the difference between the length of the first period and the length of the second period.
  • In that case, operations of the second avatar corresponding to operations the controlled object cannot execute are naturally restricted,
  • so the user can control the operation of the controlled object in the second period according to the second region without having to consider the difference between the lengths of the first and second periods (see the period-scaling sketch below).
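  • A hedged sketch of the period-ratio reasoning, assuming the stored first-period motion is replayed linearly over the second period; the function names are hypothetical.

```python
def playback_time_scale(first_period_s: float, second_period_s: float) -> float:
    """Factor by which a motion recorded over the first period is sped up (> 1)
    or slowed down (< 1) when replayed over the second period."""
    return first_period_s / second_period_s


def avatar_speed_cap(object_max_speed: float,
                     first_period_s: float,
                     second_period_s: float) -> float:
    """Speed cap for the second avatar so that, after time scaling, the
    controlled object stays within its own maximum speed."""
    return object_max_speed / playback_time_scale(first_period_s, second_period_s)


# Example: 10 minutes of avatar operation replayed over a 5-minute second period
# doubles every speed, so the avatar is capped at half the object's 2 m/s limit.
print(avatar_speed_cap(object_max_speed=2.0, first_period_s=600.0, second_period_s=300.0))  # 1.0
```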
  • The motion experience system of the present invention is a system with which a user existing in a first region of the real space can experience, via a virtual space, the operations executable by a controlled object existing in a second region of the real space that differs from the first region in at least one of time and position. Like the interface system, it comprises a virtual space generation unit that generates a virtual space in which a first avatar corresponding to the user and a second avatar corresponding to the controlled object exist, a user motion recognition unit that recognizes the user's motion in the first region, and an avatar control unit that controls the operation of the first avatar according to the operation of the user and controls the operation of the second avatar according to the operation of the first avatar.
  • It further comprises a mode switching unit that switches between a first mode and a second mode, and an image determination unit that determines the image of the virtual space to be recognized by the user based on the state of the first avatar and the state of the second avatar,
  • an image display that causes the user to recognize the determined image of the virtual space,
  • a target function recognition unit that recognizes the operations the controlled object can execute as functions, and a constraint recognition unit that recognizes the constraints on the controlled object when it operates in the second region.
  • In the first mode, the avatar control unit controls the operations executable by the second avatar based only on the functions of the controlled object; in the second mode, it controls the second avatar while limiting its executable actions based on both the functions of the controlled object and the constraints in the second region.
  • In the motion experience system of the present invention, in either of the switchable modes, the motion of the first avatar corresponding to the user is controlled according to the motion of the user, and the motion of the second avatar is controlled according to the motion of the first avatar.
  • In the first mode, the operations the second avatar can execute in the virtual space are controlled based only on the functions of the controlled object. Specifically, the surrounding environment of the second region in which the controlled object exists, which differs from the first region in at least one of time and position, is not taken into account. Thereby, in the first mode, the user can experience the operations that the controlled object can execute as functions.
  • However, if the controlled object were actually operated in this state, it might come into contact with the surrounding environment of the second region. Therefore, in principle, it is recommended that the controlled object not be actually operated in the first mode, except when there are almost no constraints in the second region.
  • In the second mode, on the other hand, the operations the second avatar can execute in the virtual space are controlled while being restricted not only by the functions of the controlled object but also by the constraints in the second region.
  • As a result, the actions executable by the second avatar correspond to the actions the controlled object can actually execute in the second region, which is the real space.
  • For example, the second avatar may be prevented from performing a predetermined action, or from moving at or above a predetermined speed.
  • Thereby, in the second mode, the user can experience the operations that the controlled object can actually execute in the second region, which is the real space.
  • In the second mode, even if the controlled object is actually operated in the second region, there is no possibility of it coming into contact with the surrounding environment of the second region. However, for the purpose of experiencing the operations the controlled object can actually perform, it is not always necessary to actually operate it. Therefore, in the second mode, the controlled object may or may not be actually operated.
  • By switching between the two modes, the user can experience and compare the operations the controlled object can execute based only on its functions with the operations it can execute based on its functions and the constraints in the second region. As a result, according to this system, the user can intuitively verify how the functions of the controlled object can be fully exercised under a given environment (see the mode-switching sketch below).
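  • A minimal sketch of the two-mode distinction described above; it assumes the decision can be reduced to two boolean checks, and the names Mode and motion_allowed are hypothetical rather than taken from the patent.

```python
from enum import Enum, auto


class Mode(Enum):
    FUNCTION_ONLY = auto()     # "first mode": functions of the controlled object only
    WITH_CONSTRAINTS = auto()  # "second mode": functions plus second-region constraints


def motion_allowed(mode: Mode,
                   within_function: bool,
                   violates_region_constraint: bool) -> bool:
    """Hypothetical predicate deciding whether the second avatar may perform a motion.

    within_function: the motion is within the controlled object's functions.
    violates_region_constraint: the motion would violate a second-region constraint.
    """
    if not within_function:
        return False                       # never allowed in either mode
    if mode is Mode.FUNCTION_ONLY:
        return True                        # environment of the second region ignored
    return not violates_region_constraint  # second mode also respects the environment


# A motion that is functionally possible but blocked by the environment is allowed
# in the first mode (for experiencing the function) but not in the second mode.
print(motion_allowed(Mode.FUNCTION_ONLY, True, True))      # True
print(motion_allowed(Mode.WITH_CONSTRAINTS, True, True))   # False
```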
  • A schematic diagram showing the overall configuration of the VR system according to the embodiment.
  • A block diagram showing the configuration of the processing units of the VR system of FIG. 1.
  • A timing chart showing an example of the relationship between the time zones in which the user's operations are executed in the VR system of FIG. 1 and the time zones in which the operations of the avatar, the objects, and the drones are executed.
  • A flowchart showing the processing executed when use of the VR system of FIG. 1 begins.
  • A flowchart showing, among the processes executed when the VR system of FIG. 1 recognizes the executable operations of an object, the processing executed based on information recognized in advance.
  • A flowchart showing, among the processes executed when the VR system of FIG. 1 recognizes the executable operations of an object, the processing executed based on newly recognized information.
  • A flowchart showing the processing the VR system of FIG. 1 executes when an object is operated and when a drone is operated.
  • VR system S: experience-based interface system, motion experience system.
  • The VR system S is a system that makes the user U recognize that the user U himself or herself exists in a virtual space, and thereby allows the user U to experience virtual reality (so-called VR).
  • The VR system S is also a system with which the user U controls, via the virtual space, the operation of a controlled object existing in a second region RS2 of the real space that differs, in at least one of time and position, from the first region RS1 of the real space in which the user U exists, or experiences the operations that the controlled object can execute.
  • In the present embodiment, the VR system S is used to cause drones to perform predetermined work (for example, transporting loads) at a construction site. Therefore, the user U's own room, in which the user U exists, is set as the first region RS1, and the construction site located away from the first region RS1, in which the first drone D1 and the second drone D2 to be controlled (hereinafter collectively referred to as "drone D") exist, is set as the second region RS2.
  • However, the first region, the second region, and the controlled object of the present invention are not limited to this configuration.
  • The first region may be any region of the real space in which the user exists, and the second region may be any region of the real space in which a controlled object exists or may exist and which differs from the first region in at least one of time and position.
  • The controlled object may be anything whose operation the user controls or verifies via the virtual space.
  • For example, the first region may be a room in which the service is provided, the second region may be a breeding box, smaller than that room, in which small animals, insects, or the like are actually kept, and the controlled object may be a camera-equipped robot that can move inside the breeding box.
  • As another example, the first region may be an operating room in a predetermined time zone, the second region may be the same operating room in a time zone after that predetermined time zone, and the controlled object may be a robot that actually performs surgery.
  • The VR system S is configured by devices existing in the first region RS1 of the real space in which the user U exists, devices existing in the second region RS2 in which the drone D exists, and a server 1 installed by a service provider or the like that uses the VR system S.
  • However, the system of the present invention is not limited to such a configuration.
  • For example, a terminal may be installed in the first region or the second region instead of the server, or the functions of the server 1 of the present embodiment (the processing units described later) may be realized by a plurality of servers, terminals, and the like.
  • The devices existing in the first region RS1 are a plurality of markers 2 attached to the user U, a first camera 3 that photographs the user U (strictly speaking, the markers 2 attached to the user U), a head-mounted display (hereinafter "HMD 4") that allows the user U to recognize the image and sound of the virtual space VS (see FIG. 3), and a controller 5 used by the user U (not shown in FIG. 1; see FIG. 2).
  • The markers 2 are attached to the user U's head, both hands, and both feet via the HMD 4, gloves, shoes, and the like worn by the user U.
  • The markers 2 are used, as described later, to recognize the movement of the user U in the first region RS1 (for example, movement of each body part, movement of coordinates, change of posture). Their mounting positions may therefore be changed as appropriate according to the other devices constituting the VR system S.
  • A plurality of first cameras 3 are installed in the first region RS1 so that the user U and the range in which the user U can move within the first region RS1 can be photographed from multiple directions. Depending on the performance of the first camera 3, the shape of the first region RS1, and the like, only one first camera 3 may be installed, and its installation location may be changed as appropriate.
  • The HMD 4 is worn on the head of the user U. As shown in FIG. 2, the HMD 4 has a monitor 40 (image display) for causing the user U to recognize the image of the virtual space VS determined by the server 1, and a speaker 41 (sound generator) for causing the user U to recognize the sound of the virtual space VS determined by the server 1.
  • The controller 5 is used to transmit to the server 1 a mode switching instruction, described later, and operation instructions for the first object O1 corresponding to the first drone D1 and the second object O2 corresponding to the second drone D2.
  • When the user U is made to experience virtual reality using this VR system S, the HMD 4 causes the user U to perceive only the image and sound of the virtual space VS, and the user U is made to recognize himself or herself as the avatar A described later (see FIG. 3) existing in the virtual space. That is, the VR system S is configured as a so-called immersive system.
  • As the system for recognizing the motion of the user U in the first region RS1, the VR system S employs a so-called motion capture device configured using the markers 2 and the first cameras 3.
  • However, the system of the present invention is not limited to such a configuration.
  • For example, as the motion capture device, one having a different number of markers and cameras from the above configuration (for example, one of each) may be used.
  • Alternatively, a device other than a motion capture device may be used to recognize the user's motion in the first region of the real space.
  • For example, a sensor such as a GPS sensor may be mounted on the HMD, gloves, shoes, or the like worn by the user, and the user's movement may be recognized based on the output of the sensor. Such a sensor may also be used in combination with a motion capture device as described above.
  • The controller may also be omitted in the real space, and an object corresponding to the controller (for example, the fifth object O5 described later in the present embodiment) may be generated only in the virtual space. Furthermore, both the controller and such an object may be omitted, or combined with at least one of them, and the instructions otherwise given through them may be recognized from the user's voice or gestures.
  • The device existing in the second region RS2 is the second camera 6 (not shown in FIG. 1; see FIG. 2) installed in the second region RS2.
  • A relay device that controls the drone D based on instructions received from the server 1 may also be installed in the second region RS2.
  • A plurality of second cameras 6 are installed so that the drones D and the range in which they can operate in the second region can be photographed from multiple directions. Depending on the performance of the second camera 6, the shape of the second region RS2, and the like, only one second camera 6 may be installed, and its installation location may be set as appropriate.
  • the server 1 is composed of one or a plurality of electronic circuit units including a CPU, RAM, ROM, an interface circuit, and the like.
  • The server 1 is configured to mutually communicate information with the first camera 3, the HMD 4, and the controller 5 existing in the first region RS1, and with the second camera 6 existing in the second region RS2, via short-range wireless communication, wired communication, the Internet, public lines, and the like. The server 1 is likewise configured to communicate information with the drone D.
  • As functions realized by its hardware configuration or installed programs, the server 1 has a virtual space generation unit 10 (space recognition unit, constraint recognition unit), a user motion recognition unit 11, a mode switching unit 12, a period recognition unit 13, a possible motion recognition unit 14 (target function recognition unit), an avatar control unit 15, an output information determination unit 16 (image determination unit), and a target control unit 17 (motion storage unit).
  • The virtual space generation unit 10 generates images of the virtual space VS (strictly speaking, the background of the virtual space VS), of the avatar A (first avatar) corresponding to the user U existing in the virtual space VS, and of a plurality of objects. The virtual space generation unit 10 also generates the sounds associated with those images.
  • The objects generated by the virtual space generation unit 10 include the first object O1 (second avatar) corresponding to the first drone D1 existing in the second region RS2, the second object O2 (second avatar) corresponding to the second drone D2, the third object O3 corresponding to the work machine W, the fourth object O4 corresponding to the building material M, and the fifth object O5 corresponding to the controller 5 existing in the first region RS1.
  • The virtual space generation unit 10 (space recognition unit) recognizes the shape of the first region RS1 based on the images taken by the first camera 3, and recognizes the shape of the second region RS2 based on the images taken by the second camera 6. The virtual space generation unit 10 then generates the virtual space VS so that the shape of the virtual space VS corresponds to the shape of the first region RS1.
  • In the present embodiment, the first region RS1 is recognized as a narrow space that is square in plan view, and the second region RS2 is recognized as a wide, vertically elongated space that is rectangular in plan view.
  • The virtual space generation unit 10 therefore generates the shape of the virtual space VS so as to match the shape of the first region RS1.
  • The image of the virtual space VS is generated on the basis of the captured image of the second region RS2, reduced to fit the shape of the first region RS1.
  • Because the aspect ratio of the first region RS1 (that is, of the virtual space VS) differs from the aspect ratio of the second region RS2, the reduction ratios of the image of the virtual space VS differ between the vertical and horizontal directions of the drawing (a coordinate-mapping sketch is given below).
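  • The per-axis reduction can be illustrated with the following hedged sketch, which assumes both regions are axis-aligned rectangles sharing an origin; the helper names are hypothetical.

```python
def reduction_ratios(first_size: tuple[float, float],
                     second_size: tuple[float, float]) -> tuple[float, float]:
    """Per-axis ratios by which the captured image of the second region is shrunk
    so that it fits the shape of the first region (= the virtual space)."""
    return (first_size[0] / second_size[0], first_size[1] / second_size[1])


def second_region_to_virtual(point_rs2: tuple[float, float],
                             first_size: tuple[float, float],
                             second_size: tuple[float, float]) -> tuple[float, float]:
    """Place a second-region coordinate into the virtual space using those ratios."""
    rx, ry = reduction_ratios(first_size, second_size)
    return (point_rs2[0] * rx, point_rs2[1] * ry)


# Example: a square 4 m x 4 m room versus a rectangular 4 m x 16 m site.
print(reduction_ratios((4.0, 4.0), (4.0, 16.0)))                       # (1.0, 0.25): unequal reduction
print(second_region_to_virtual((2.0, 8.0), (4.0, 4.0), (4.0, 16.0)))   # (2.0, 2.0)
```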
  • However, the system of the present invention is not limited to such a configuration; when the shape of the first region and the shape of the second region always match (for example, when only the time differs), the virtual space generation unit need not include a function for deforming the image of the second region based on the shape of the first region.
  • The virtual space generation unit 10 (constraint recognition unit) also recognizes, based on the images taken by the first camera 3, the constraints on the motion of the user U (and thus of the avatar A corresponding to the user U) in the first region RS1. The virtual space generation unit 10 then generates the virtual space VS in consideration of those constraints.
  • Specifically, the virtual space generation unit 10 recognizes the furniture F existing in the first region RS1, and generates a first constraint area LA1, which the avatar A cannot enter, at the position of the virtual space VS corresponding to the position where the furniture F exists in the first region RS1.
  • The furniture F itself is generated in the virtual space VS as a semi-transparent ghost.
  • Similarly, the virtual space generation unit 10 recognizes the constraints on the drone D when it operates in the second region RS2, based on the images taken by the second camera 6 and on the functions of the work machine W input in advance, and generates the virtual space VS in consideration of those constraints.
  • Specifically, the virtual space generation unit 10 recognizes the building material M existing in the second region RS2, and generates a second constraint area LA2 at the position of the virtual space VS corresponding to the position where the building material M exists in the second region RS2.
  • The second constraint area LA2 is an area that the first object O1 and the second object O2 (hereinafter collectively referred to as "target objects O1 and O2") corresponding to the drones D cannot enter.
  • The virtual space generation unit 10 also recognizes the work machine W existing in the second region RS2, and generates a third constraint area LA3 in the area of the virtual space VS corresponding to the area in which the work machine W may exist in the second region RS2 (that is, the area in which the work machine W operates).
  • The third constraint area LA3 is an area into which entry of the target objects O1 and O2 is discouraged (for example, an area that can be entered, but in which a warning is displayed).
  • The first constraint area LA1, the second constraint area LA2, and the third constraint area LA3 generated in this way are displayed as semi-transparent three-dimensional objects in the virtual space VS, as shown in the figure, for example (a constraint-check sketch is given below).
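  • A hedged sketch of how such constraint areas might be checked, assuming they are axis-aligned boxes with either a "forbid" policy (LA1, LA2) or a "warn" policy (LA3); the class and function names are illustrative only.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Policy(Enum):
    FORBID = auto()  # entry impossible (e.g. LA1, LA2)
    WARN = auto()    # entry possible but a warning is shown (e.g. LA3)


@dataclass
class ConstraintArea:
    name: str
    xmin: float
    ymin: float
    xmax: float
    ymax: float
    policy: Policy

    def contains(self, x: float, y: float) -> bool:
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax


def check_move(areas: list[ConstraintArea], x: float, y: float) -> tuple[bool, list[str]]:
    """Return (allowed, warnings) for a proposed target-object position."""
    warnings: list[str] = []
    for area in areas:
        if area.contains(x, y):
            if area.policy is Policy.FORBID:
                return False, warnings
            warnings.append(f"entering {area.name}")
    return True, warnings


# Example: LA2 around the building material is impassable; LA3 around the work machine only warns.
areas = [ConstraintArea("LA2", 0, 0, 2, 2, Policy.FORBID),
         ConstraintArea("LA3", 5, 5, 8, 8, Policy.WARN)]
print(check_move(areas, 1.0, 1.0))   # (False, [])
print(check_move(areas, 6.0, 6.0))   # (True, ['entering LA3'])
```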
  • The virtual space generation unit 10 does not necessarily have to recognize the shapes of the first region RS1 and the second region RS2, and the constraints within them, based on the images of the first camera 3 and the second camera 6; they may instead be recognized based on models, numerical values, and the like separately input by the user U or others.
  • The user motion recognition unit 11 recognizes the motion of the user U based on the image data captured by the first camera 3. Specifically, the user motion recognition unit 11 extracts the markers 2 attached to the user U from the image data of the user U and, based on the extraction result, recognizes the movement of each part of the user U's body, the movement of the user U's coordinates, and changes in the user U's posture.
  • The mode switching unit 12 recognizes a mode change instruction from the user U based on input to the controller 5, and changes the mode executed by the VR system S based on that instruction.
  • The mode can be changed to one of three modes: a mode in which the drone D is controlled in real time (second mode), a mode in which the drone D is controlled in a later time zone different from the time zone in which the target objects O1 and O2 are operated (second mode), and a mode in which the drone D is not actually controlled and its operation is only verified (first mode).
  • However, the system of the present invention is not limited to such a configuration, and may include at least one of these three modes. Accordingly, when the system is configured to execute only one mode, the mode switching unit may be omitted.
  • The period recognition unit 13 recognizes, based on input to the controller 5, the length and time zone of the first period, which is the period during which the target objects O1 and O2 are operable in the virtual space VS; that is, it recognizes the operable period of the avatar A (and thus of the user U) that operates the target objects O1 and O2.
  • The period recognition unit 13 also recognizes, based on input to the controller 5 and the like, the length and time zone of the second period, which is the period during which the drone D is operable in the second region RS2.
  • However, the system of the present invention is not limited to such a configuration; for example, the period recognition unit may be omitted.
  • The possible motion recognition unit 14 recognizes the motions that the drone D can execute in the second region RS2.
  • Specifically, the possible motion recognition unit 14 recognizes the motions that the drone D can execute as functions, based on information input in advance by the user U or others.
  • The functionally executable motions may be recognized only when use of the VR system S begins, or they may be recognized at any time based on information fed back from the drone D while the VR system S is in use.
  • The possible motion recognition unit 14 also recognizes the motions that the drone D can execute based on the time zone of the second period recognized by the period recognition unit 13, the images of the second region RS2 recognized by the virtual space generation unit 10, the plan of the construction to be carried out in the second region RS2, and the environment of the second region RS2 (weather and the like).
  • For example, the possible motion recognition unit 14 recognizes the separately input work schedule of the work machine W during the second period and, within that period, recognizes motions that the drone D could execute as a function but that are infeasible because contact with the work machine W must be avoided. The possible motion recognition unit 14 then recognizes the motions the drone D can perform in the second period with reference to those infeasible motions (a scheduling sketch is given below).
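  • One simplified way to picture this is to treat the times at which the work machine is scheduled to operate as windows during which motions near the machine are ruled out; the sketch below merely computes the complementary free windows and is an assumption, not the patented algorithm.

```python
def drone_available_intervals(second_period: tuple[float, float],
                              work_machine_schedule: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Subtract the work machine's scheduled intervals from the second period,
    leaving the time windows in which the drone may operate near the machine."""
    start, end = second_period
    busy = sorted(work_machine_schedule)
    free: list[tuple[float, float]] = []
    cursor = start
    for b_start, b_end in busy:
        if b_start > cursor:
            free.append((cursor, min(b_start, end)))
        cursor = max(cursor, b_end)
        if cursor >= end:
            break
    if cursor < end:
        free.append((cursor, end))
    return free


# Example: second period 9:00-17:00 (in hours); the work machine runs 10-12 and 14-15.
print(drone_available_intervals((9.0, 17.0), [(10.0, 12.0), (14.0, 15.0)]))
# -> [(9.0, 10.0), (12.0, 14.0), (15.0, 17.0)]
```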
  • The avatar control unit 15 controls the operation of the avatar A according to the motion of the user U, and controls the operation of the target objects O1 and O2 according to the operation of the avatar A. Specifically, as shown in FIG. 3, the user U controls the operation of the target objects O1 and O2 in the virtual space VS through the motion of the avatar A corresponding to the user U, or by operating the fifth object O5 corresponding to the controller 5.
  • First, the avatar control unit 15 limits the operations of the target objects O1 and O2 based on the operations that the corresponding drone D can execute as a function.
  • For example, the operation of the first object O1 is restricted as follows: when the user U moves the first object O1 by pushing it with the hand of the avatar A and the moving speed reaches or exceeds a predetermined speed, processing such as making the hand of the avatar A pass through the first object O1 is performed.
  • Second, the avatar control unit 15 limits the operations of the target objects O1 and O2 based on the operations that the corresponding drone D can execute in the second region RS2.
  • For example, the drone D cannot enter the position where the building material M exists in the second region RS2; therefore, the target objects O1 and O2 cannot be moved into the second constraint area LA2 corresponding to that position.
  • Likewise, if the drone D were moved into the area of the second region RS2 where the work machine W may exist, it could come into contact with the work machine W; therefore, when the target objects O1 and O2 are moved into the third constraint area LA3 corresponding to that area, their colors change and a warning text is displayed near the avatar A in the virtual space VS.
  • Third, the avatar control unit 15 limits the operations of the target objects O1 and O2 based also on the difference between the shape of the first region RS1 and the shape of the second region RS2.
  • Because the virtual space VS is generated based on the shape of the first region RS1, the user U could otherwise easily make the target objects O1 and O2 perform operations that correspond to operations the drone D cannot execute.
  • Specifically, since the second region RS2 is wider than the first region RS1 and the virtual space VS is generated based on the shape of the first region RS1, the amount of movement of the drone D becomes large relative to the amount of movement of the target objects O1 and O2. Therefore, if the user U moves the target objects O1 and O2, the upper limit of the moving speed of the drone D may be exceeded.
  • To prevent this, the avatar control unit 15 limits the movable speed of the target objects O1 and O2 in each direction according to the degree of reduction in that direction.
  • As a result, the target objects O1 and O2 can move only at or below the speed specified for each direction; for example, the movable speed of the target objects O1 and O2 in the vertical direction on the paper of FIG. 4 is slower than that in the horizontal direction.
  • Fourth, the avatar control unit 15 limits the operations of the target objects O1 and O2 based also on the difference between the length of the first period and the length of the second period.
  • When the length of the first period and the length of the second period differ, the user U could otherwise easily make the target objects O1 and O2 perform actions that correspond to actions the drone D cannot execute.
  • Specifically, when the second period is shorter than the first period, the operating speed of the drone D becomes faster than that of the target objects O1 and O2; therefore, if the user U operates the target objects O1 and O2 in the same manner as when the periods are equal, the upper limit of the operating speed of the drone D may be exceeded.
  • To prevent this, the avatar control unit 15 uniformly limits the movable speed of the target objects O1 and O2 in all directions according to the difference between the periods.
  • As a result, the target objects O1 and O2 can move only at or below a predetermined speed in any direction, and their movable speed becomes slower than the normal speed.
  • However, the system of the present invention is not limited to such a configuration.
  • For example, when the shape of the first region and the shape of the second region match, the avatar control unit need not restrict the operation of the second avatar based on a difference between those shapes. Similarly, when the system is configured with only a mode for controlling the controlled object in real time, the avatar control unit need not limit the operation of the second avatar based on a difference between the first period and the second period.
  • As described above, in the present embodiment the operations of the target objects O1 and O2 in the virtual space VS are restricted based on the operations the drone D can execute in the second region RS2 in consideration of its surrounding environment, on the difference between the shape of the first region RS1 and the shape of the second region RS2, and on the difference between the length of the first period and the length of the second period.
  • On the other hand, in the first mode the operations are not restricted based on the surrounding environment or the period difference; the operations of the target objects O1 and O2 in the virtual space VS are limited based only on the difference in shape and on the functions of the drone D.
  • The output information determination unit 16 determines the image and sound of the virtual space VS to be recognized by the user U via the HMD 4.
  • The target control unit 17 controls, in the second region RS2, the operation of the drone D so that it corresponds to the operation of the target objects O1 and O2 in the virtual space VS.
  • The target control unit 17 (motion storage unit) can also store the operations of the target objects O1 and O2 executed during the first period. This makes it possible to control the drone D not only in real time but also in the second period, which is a period after the first period (or at least a period whose start time differs).
  • The target control unit 17 can store the operations of the first object O1 and the second object O2 individually. This makes it possible for a single user U to control the operations of the corresponding first drone D1 and second drone D2 in the second region RS2 in different time zones.
  • For example, the user operates the first object O1 via the avatar A during the first period (the period from t1 to t2), and the operation of the first object O1 at that time is stored in the target control unit 17. Then, in the second period after the first period (the period from t4 to t7), the first drone D1 is operated according to the operation of the first object O1 stored in the target control unit 17.
  • The lengths of the first period and the second period in this case need not be the same.
  • For example, by making the second period longer than the first period, an operation that must be performed slowly in consideration of the situation of the second region (for example, surgery performed by a robot) can be set by the controlled object's operator in a short time.
  • Similarly, the user operates the second object O2 via the avatar A during the first period (the period from t3 to t6), and the operation of the second object O2 at that time is stored in the target control unit 17. Then, in the second period after that first period (the period from t5 to t8), the second drone D2 is operated according to the operation of the second object O2 stored in the target control unit 17.
  • The first period for the first object O1 (the period from t1 to t2) may differ from the first period for the second object O2 (the period from t3 to t6), and a first period (for example, the period from t3 to t6) may partially overlap a second period (for example, the period from t5 to t8).
  • In the latter case, the user U can set the subsequent operation while taking the real-time situation into account to some extent (a recording and playback sketch is given below).
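  • A hedged sketch of the motion-storage idea: timestamped object positions recorded during the first period are replayed, shifted and time-scaled, as commands during the second period. The MotionRecorder class and its methods are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass, field


@dataclass
class MotionRecorder:
    """Record timestamped positions of a target object in the first period and
    replay them, shifted and time-scaled, for the drone in the second period."""
    samples: list[tuple[float, tuple[float, float]]] = field(default_factory=list)

    def record(self, t: float, position: tuple[float, float]) -> None:
        self.samples.append((t, position))

    def playback(self, first_start: float, second_start: float, time_scale: float = 1.0):
        """Yield (time in the second period, position) pairs.

        time_scale > 1 stretches the recording (second period longer than first)."""
        for t, pos in self.samples:
            yield second_start + (t - first_start) * time_scale, pos


rec = MotionRecorder()
rec.record(1.0, (0.0, 0.0))   # first period runs from t1 = 1.0 to t2 = 2.0
rec.record(2.0, (1.0, 0.5))
# Replay during a second period starting at t4 = 4.0, stretched over twice the time.
print(list(rec.playback(first_start=1.0, second_start=4.0, time_scale=2.0)))
# -> [(4.0, (0.0, 0.0)), (6.0, (1.0, 0.5))]
```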
  • However, the system of the present invention is not limited to such a configuration; for example, the motion storage unit may be omitted.
  • First, the virtual space generation unit 10 recognizes the shape of the first region RS1 and the shape of the second region RS2 based on the images taken by the first camera 3 and the second camera 6, models and numerical values input in advance, and the like (FIG. 6 / STEP100).
  • the virtual space generation unit 10 determines whether or not the recognized shape of the first region RS1 and the shape of the second region RS2 are different (FIG. 6 / STEP101).
  • If the shapes differ (YES in STEP101), the virtual space generation unit 10 corrects the image of the second region RS2, based on the shape of the first region RS1, so that it matches the shape of the first region RS1 (FIG. 6 / STEP102).
  • the virtual space generation unit 10 generates a virtual space VS (strictly speaking, an image that becomes the background of the virtual space VS) based on the corrected image of the second region RS2 (FIG. 6 / STEP103).
  • On the other hand, if the shapes do not differ (NO in STEP101), the virtual space generation unit 10 does not correct the image of the second region RS2, and generates the virtual space VS (strictly speaking, the image that becomes the background of the virtual space VS) based on the uncorrected image (FIG. 6 / STEP104).
  • Next, the virtual space generation unit 10 recognizes, based on the images taken by the first camera 3 and the second camera 6, models and numerical values input in advance, and the like, the initial state (coordinates, posture, etc.) of the user U in the first region RS1 and the initial state (coordinates, posture, etc.) of the drone D in the second region RS2 (FIG. 6 / STEP105).
  • In the present embodiment, the initial state of the user U is the initial state in the first period, which is the current state; the initial state of the drone D is the initial state in the second period.
  • Next, the virtual space generation unit 10 generates the avatar A in the virtual space VS so as to correspond to the initial state of the user U, and generates the target objects O1 and O2 so as to correspond to the initial state of the drone D (FIG. 6 / STEP106).
  • Next, the virtual space generation unit 10 recognizes the surrounding environments of the first region RS1 and the second region RS2 based on the images taken by the first camera 3 and the second camera 6, models and numerical values input in advance, and the like (FIG. 6 / STEP107).
  • In the present embodiment, the virtual space generation unit 10 recognizes the furniture F existing in the first region RS1 as the surrounding environment of the first region RS1, and recognizes the building material M and the work machine W existing in the second region RS2 as the surrounding environment of the second region RS2.
  • Next, the virtual space generation unit 10 generates, in the virtual space VS, objects and constraint areas corresponding to the recognized surrounding environment (FIG. 6 / STEP108).
  • In the present embodiment, the virtual space generation unit 10 generates the ghost of the furniture F and the first constraint area LA1 at the position of the virtual space VS corresponding to the position where the furniture F exists in the first region RS1. Further, at the positions of the virtual space VS corresponding to the positions where the building material M and the work machine W exist in the second region RS2, the virtual space generation unit 10 generates the third object O3 corresponding to the work machine W, the fourth object O4 corresponding to the building material M, the second constraint area LA2, and the third constraint area LA3.
  • the output information determination unit 16 determines the image and sound to be recognized by the user based on the state of the avatar A (FIG. 6 / STEP109).
  • Finally, the output information determination unit 16 displays the determined image on the monitor 40 of the HMD 4 worn by the user U and outputs the determined sound from the speaker 41 (FIG. 6 / STEP110), and this process ends.
  • Through the above processing, the user U is placed in a state of recognizing the virtual space VS, for example as illustrated in the figure.
  • First, the possible motion recognition unit 14 recognizes the operations that the drone D corresponding to the target objects O1 and O2 can execute as functions in the second region RS2, based on information input in advance by the user U or others (FIG. 7A / STEP200).
  • the period recognition unit 13 recognizes the first period and the second period based on the information or the like input in advance by the user U or the like (FIG. 7A / STEP201).
  • the possible motion recognition unit 14 recognizes the surrounding environment of the second region RS2 in the second period (FIG. 7A / STEP202).
  • Specifically, the possible motion recognition unit 14 recognizes the elements of the surrounding environment of the second region RS2 that may change during the second period.
  • In the present embodiment, the amount of building material M (and thus the space it occupies) and the operation of the work machine W may change during the second period. Therefore, the possible motion recognition unit 14 recognizes the states of the building material M and the work machine W at each point in the second period, based on the construction plan or the like input in advance by the user U or others, as the surrounding environment of the second region RS2.
  • Next, the possible motion recognition unit 14 recognizes the actions that the target objects O1 and O2 corresponding to the drone D can perform in the virtual space VS, based on the recognized functions of the drone D and the surrounding environment of the second region RS2 during the second period (FIG. 7A / STEP203).
  • Next, the avatar control unit 15 determines, based on the information recognized by the virtual space generation unit 10, whether the shape of the first region RS1 and the shape of the second region RS2 differ (FIG. 7A / STEP204).
  • If the shapes differ (YES in STEP204), the avatar control unit 15 recognizes the content of the correction applied to the image of the second region RS2 when the virtual space VS was generated and, based on that content, recognizes the operations of the target objects O1 and O2 to be limited at each position of the virtual space VS (FIG. 7A / STEP205).
  • In the present embodiment, the first region RS1 is a narrow space that is square in plan view, the second region RS2 is a wide, vertically elongated space that is rectangular in plan view, and the virtual space generation unit 10 corrects the image of the virtual space VS by reducing the image of the second region RS2 to fit the shape of the first region RS1.
  • In this case, the avatar control unit 15 limits the movable speed of the target objects O1 and O2 in each direction according to the degree of reduction in that direction. As a result, the movable speed of the target objects O1 and O2 in the vertical direction on the paper of FIG. 4 becomes slower than that in the horizontal direction.
  • the avatar control unit 15 determines whether or not the length of the first period and the length of the second period are different (FIG. 7A / STEP206).
  • If the lengths differ (YES in STEP206), the avatar control unit 15 sets limits on each operation of the target objects O1 and O2 based on the amount of the difference (FIG. 7A / STEP207).
  • Specifically, the speed at which the drone D can operate as a function in the second period is uniformly increased or decreased based on the ratio between the length of the first period and the length of the second period.
  • The processing up to this point is executed before the user U starts operating the target objects O1 and O2 via the avatar A (that is, before the start of the first period).
  • The processing described below is executed after the user U starts operating the target objects O1 and O2 via the avatar A (that is, after the start of the first period).
  • First, the virtual space generation unit 10 determines whether the surrounding environment of the drone D in the second region RS2 has changed at the point in the second period corresponding to the present time (that is, to a given point in the first period) (FIG. 7B / STEP208).
  • If the environment has changed (YES in STEP208), the virtual space generation unit 10 modifies the second constraint area LA2 or the third constraint area LA3 based on the change in the surrounding environment of the second region RS2 (FIG. 7B / STEP209).
  • For example, when the amount of building material M decreases, the virtual space generation unit 10 shrinks the second constraint area LA2 corresponding to the building material M accordingly. Likewise, when the posture or the like of the work machine W has changed from the state of FIG. 3 as a result of its operation, the virtual space generation unit 10 deforms the third constraint area LA3 corresponding to the work machine W according to that change (a small sketch follows).
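  • A hedged sketch of one way such a modification could be computed, assuming the constraint area is an axis-aligned box shrunk about its centre in proportion to the material remaining; the function name and data layout are illustrative only.

```python
def shrink_area(area: dict, fraction_remaining: float) -> dict:
    """Shrink an axis-aligned constraint area about its centre as the amount of
    building material it represents decreases (fraction_remaining in [0, 1])."""
    cx = (area["xmin"] + area["xmax"]) / 2
    cy = (area["ymin"] + area["ymax"]) / 2
    half_w = (area["xmax"] - area["xmin"]) / 2 * fraction_remaining
    half_h = (area["ymax"] - area["ymin"]) / 2 * fraction_remaining
    return {"xmin": cx - half_w, "xmax": cx + half_w,
            "ymin": cy - half_h, "ymax": cy + half_h}


# Example: half of the building material has been used, so LA2 shrinks to half size.
la2 = {"xmin": 0.0, "xmax": 4.0, "ymin": 0.0, "ymax": 2.0}
print(shrink_area(la2, 0.5))   # {'xmin': 1.0, 'xmax': 3.0, 'ymin': 0.5, 'ymax': 1.5}
```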
  • Next, the avatar control unit 15 recognizes, based on the modification of the second constraint area LA2 or the third constraint area LA3, the operations of the target objects O1 and O2 that are newly restricted as a result of that modification (FIG. 7B / STEP210).
  • After the newly restricted operations of the target objects O1 and O2 have been recognized, or when the surrounding environment has not changed (NO in STEP208), the possible motion recognition unit 14 determines whether the state of the drone D has changed (FIG. 7B / STEP211).
  • If the state has changed (YES in STEP211), the avatar control unit 15 re-recognizes the actions that the target objects O1 and O2 can execute, based on the change in the state of the drone D (FIG. 7B / STEP212).
  • That is, when the state of the drone D changes, the avatar control unit 15 also changes the executable operations of the corresponding target objects O1 and O2 according to that change.
  • After the executable operations of the target objects O1 and O2 have been re-recognized, or when the state of the drone D has not changed (NO in STEP211), the VR system S determines whether the user U has instructed it to end (FIG. 7B / STEP213).
  • Next, the processing executed by each processing unit of the VR system S when the user U operates the target objects O1 and O2 via the avatar A, and the processing executed when the drone D is operated, will be described.
  • the user motion recognition unit 11 determines whether or not the user U has operated (FIG. 8 / STEP300).
  • If the user has not made a motion (NO in STEP300), the determination of STEP300 is executed again at a predetermined control cycle.
  • If the user has made a motion (YES in STEP300), the avatar control unit 15 operates the avatar A based on the motion of the user U (FIG. 8 / STEP301).
  • the avatar control unit 15 determines whether or not the operation of the avatar A is such that the target objects O1 and O2 are operated (FIG. 8 / STEP302).
  • Specifically, the avatar control unit 15 determines whether the movement of the avatar A is one that moves the target objects O1 and O2 corresponding to the drone D using a body part such as a hand, or one that performs a predetermined operation on the target objects O1 and O2 using the fifth object O5 corresponding to the controller 5.
  • When the operation of the avatar A is one that operates the target objects O1 and O2 (YES in STEP302), the avatar control unit 15 operates the target objects O1 and O2 based on the operation of the avatar A (FIG. 8 / STEP303).
  • the target control unit 17 stores the operations of the target objects O1 and O2 (FIG. 8 / STEP304).
  • Next, the output information determination unit 16 determines the image and sound to be recognized by the user U, based on the states of the avatar A and the target objects O1 and O2 (FIG. 8 / STEP305).
  • Next, the output information determination unit 16 displays the determined image on the monitor 40 of the HMD 4 worn by the user U and outputs the determined sound from the speaker 41 (FIG. 8 / STEP306).
  • the VR system S determines whether or not the user U has instructed the termination (FIG. 8 / STEP307).
  • the target control unit 17 determines whether or not the second period has started (FIG. 8 / STEP 308).
  • When the second period has started, the target control unit 17 controls the operation of the drone D based on the stored operations of the target objects O1 and O2 (FIG. 8 / STEP309), and this processing is then ended.
  • In the mode in which the drone D is controlled in real time, the difference is that the process of storing the operations of the target objects O1 and O2 (FIG. 8 / STEP304) and the process of determining the start of the second period (FIG. 8 / STEP308) are omitted, and the process of controlling the drone D (FIG. 8 / STEP309) is executed before the process of determining the end (FIG. 8 / STEP307).
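The flow of FIG. 8 can be pictured roughly as follows. This is a minimal, hypothetical Python sketch (class and method names are illustrative assumptions, not taken from the publication) of recording the target objects' operations during the first period and replaying them to the drone once the second period starts, with the real-time mode sending each command immediately instead.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObjectOperation:
    """One operation of a target object O1/O2 (hypothetical representation)."""
    timestamp: float   # seconds from the start of the first period
    command: str       # e.g. "move", "hover", "carry"
    params: dict

class Drone:
    def execute(self, command: str, **params) -> None:
        print(f"drone executes {command} {params}")

@dataclass
class TargetController:
    """Sketch of a target control unit: stores operations, later drives the drone."""
    realtime: bool = False
    stored_ops: List[ObjectOperation] = field(default_factory=list)

    def on_object_operated(self, op: ObjectOperation, drone: Drone) -> None:
        # After the object has been moved (STEP303), either store the operation
        # (STEP304) or, in the real-time mode, drive the drone immediately.
        if self.realtime:
            drone.execute(op.command, **op.params)
        else:
            self.stored_ops.append(op)

    def on_second_period_started(self, drone: Drone) -> None:
        # Once the second period starts (STEP308), replay the stored operations (STEP309).
        for op in sorted(self.stored_ops, key=lambda o: o.timestamp):
            drone.execute(op.command, **op.params)

ctrl = TargetController(realtime=False)
ctrl.on_object_operated(ObjectOperation(0.0, "move", {"dx": 1.0}), Drone())
ctrl.on_second_period_started(Drone())
```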
  • As described above, the VR system S, as an experience-based interface system, controls the operation of the avatar A corresponding to the user U according to the operation of the user U, and controls the operations of the target objects O1 and O2 according to the operation of the avatar A.
  • As a result, the user U can operate the drones D corresponding to the target objects O1 and O2 in the second region RS2 by operating the target objects O1 and O2 in the virtual space VS via the avatar A corresponding to the user U.
  • However, the operations of the target objects O1 and O2 in the virtual space VS are controlled while being limited based on the operations that can be executed by the drone D existing in the second region RS2, in which at least one of the time and the position differs from the first region RS1 in which the user U exists. That is, the operations are controlled while being restricted based on the constraints in the real space in which the drone D exists (for example, the surrounding environment of the drone D, the functions of the drone D, and so on).
  • Therefore, the operations of the target objects O1 and O2 that correspond to operations the drone D cannot execute naturally cannot be performed even in the virtual space VS. As a result, the user U can control the operation of the drone D in the second region RS2 in accordance with the second region RS2 without having to consider the restrictions of the second region RS2 in the real space.
  • Further, the virtual space VS is generated so that its shape corresponds to the shape of the first region RS1.
  • As a result, the virtual space VS corresponds to the range in which the user U can originally move, so the user U can operate the avatar A without mistakenly recognizing his or her own operable range (for example, contact with the furniture F existing in the first region RS1 is suppressed).
  • Furthermore, the operations of the target objects O1 and O2 are restricted based not only on the operations that the drone D can execute in the second region RS2 but also on the difference between the shape of the first region RS1 and the shape of the second region RS2.
  • As a result, the operations of the target objects O1 and O2 that correspond to operations rendered infeasible for the drone D can be naturally restricted.
  • Consequently, the user U can control the operation of the drone D in the second region RS2 in accordance with the second region RS2 without considering the difference between the shape of the first region RS1 and the shape of the second region RS2.
  • Further, by providing the motion storage unit 17, the operation of the drone D can be controlled even if the length of the first period, which is the operable period of the target objects O1 and O2 (and thus of the avatar A) in the virtual space VS, differs from the length of the second period, which is the operable period of the drone D in the second region RS2.
  • In this case, the operations of the target objects O1 and O2 are restricted based not only on the operations that the drone D can execute in the second region RS2 but also on the difference between the length of the first period and the length of the second period.
  • As a result, the operations of the target objects O1 and O2 that correspond to operations rendered infeasible for the drone D can be naturally restricted.
  • Consequently, the user U can control the operation of the drone D in the second period in accordance with the second region RS2 without considering the difference between the length of the first period and the length of the second period.
  • In the mode in which the drone D is not actually controlled and its operation is merely verified, the processing that takes into account the surrounding environments of the first region RS1 and the second region RS2 as well as the first period and the second period (FIG. 6 / STEP107, 108, FIG. 7A / STEP201, 202, 204 to 207, FIG. 7B / STEP208 to 213), and the processing that actually controls the drone D (FIG. 8 / STEP304, 308, 309), are omitted.
  • In this case, the VR system S, as a motion experience system, controls the operations that the target objects O1 and O2 can execute based only on the difference between the shape of the first region RS1 and the shape of the second region RS2 and on the functions of the drone D. That is, the surrounding environment of the second region RS2 in which the drone D exists, the difference between the length of the first period and the length of the second period, and the like are not taken into consideration.
  • In this mode, the user U can therefore experience the operations that the drone D can execute as a matter of its functions.
  • On the other hand, in the modes other than the mode for verifying the operation, the operations that the target objects O1 and O2 can execute in the virtual space VS are controlled while being limited based on the constraints arising from the surrounding environment of the second region RS2, the difference between the length of the first period and the length of the second period, and so on. That is, in those modes, the operations that the target objects O1 and O2 can execute correspond to the operations that the drone D can actually execute in the second region RS2.
  • the user U can experience the operation that the drone D can actually execute in the second area RS2. From the viewpoint of experiencing the operation that the drone D can actually perform, it is not always necessary to actually operate the drone D. Therefore, when the purpose is only to experience the operation, the drone D may or may not be actually operated.
  • Therefore, by switching between the mode for verifying the operation and the mode for actually controlling the operation of the drone D, the user U can experience and compare the operations of the target objects O1 and O2 (and thus of the drone D) in the two modes.
  • Consequently, the user U can intuitively verify the operation that makes full use of the functions of the drone D under a given environment.
  • Such verification can also be used simply to gain sufficient knowledge about the operations that the drone D can execute as a function.
  • For example, when there are a plurality of types of drones and the operation that each drone can perform is verified in order to select, from among them, a drone that is easy to operate in the second region RS2, the VR system S may be used to generate in the virtual space VS an object corresponding to each candidate drone based only on the functions of that drone, thereby verifying the actions that each object (and thus each candidate drone) can execute.
  • 3rd constraint area, M ... building materials, O1 ... 1st object (2nd avatar), O2 ... 2nd object (2nd avatar), O3 ... 3rd Object, O4 ... 4th object, O5 ... 5th object, RS1 ... 1st area, RS2 ... 2nd area, S ... VR system, U ... user, VS ... virtual space, W ... work machine.

Abstract

A VR system S comprises: a virtual space generation unit 10 that generates a virtual space in which a first avatar and a second avatar are present; an avatar control unit 15 that controls an action of the first avatar in accordance with an action of a user, and controls an action of the second avatar in accordance with the action of the first avatar; an HMD 4 that allows the user to perceive the virtual space; a possible action recognition unit 14 that recognizes an action or actions which a control target can perform; and a target control unit 17 that controls an action of the control target in accordance with the action of the second avatar. The avatar control unit 15 restricts the actions which the second avatar can perform, on the basis of the action or actions that the control target can perform in a second region.

Description

体感型インターフェースシステム、及び、動作体感システムExperience-based interface system and motion experience system
 本発明は、仮想空間を介して、現実空間でユーザが存在している領域と異なる領域に存在する制御対象の動作を制御するための体感型インターフェースシステム、及び、その制御対象が実行可能な動作を体感することができる動作体感システムに関する。 The present invention is an experience-based interface system for controlling the operation of a controlled object existing in an area different from the area in which the user exists in the real space via a virtual space, and an operation that the controlled object can execute. It is related to the motion experience system that allows you to experience.
 従来、ユーザに、ヘッドマウントディスプレイ(以下、「HMD」ということがある。)を介して、サーバ等で生成した仮想空間の画像、及び、その仮想空間でユーザに対応するアバターの画像を認識させることによって、ユーザ自身がその仮想空間に存在していると認識させて仮想現実を体感させる、いわゆる没入型の仮想空間体感システムがある。 Conventionally, a user is made to recognize an image of a virtual space generated by a server or the like and an image of an avatar corresponding to the user in the virtual space via a head-mounted display (hereinafter, may be referred to as “HMD”). As a result, there is a so-called immersive virtual space experience system in which the user himself / herself recognizes that he / she exists in the virtual space and experiences virtual reality.
 この種の仮想空間体感システムとしては、モーションキャプチャー装置等によって、現実空間におけるユーザの動作(例えば、身体の動作、座標の移動、姿勢の変化等)を認識し、その認識した動作に応じて、仮想空間のアバターの動作を制御するものがある(例えば、特許文献1参照)。 In this kind of virtual space experience system, a motion capture device or the like recognizes a user's movement in the real space (for example, body movement, coordinate movement, posture change, etc.), and according to the recognized movement, There is something that controls the behavior of the avatar in the virtual space (see, for example, Patent Document 1).
 この特許文献1の仮想空間体感システムは、仮想空間に、ユーザに対応する第1のアバター、ゲームパッド型のオブジェクト、及び、所定のキャラクター型の第2のアバターを生成する。そして、このシステムは、ユーザの動作に応じて第1のアバターを動作させて、第1のアバターの動作に応じてゲームパッド型のオブジェクトを動作させて、ゲームパッド型のオブジェクトの動作に応じて第2のアバターを動作させる。 The virtual space experience system of Patent Document 1 generates a first avatar corresponding to a user, a game pad type object, and a predetermined character type second avatar in the virtual space. Then, this system operates the first avatar according to the movement of the user, operates the gamepad type object according to the movement of the first avatar, and operates according to the movement of the gamepad type object. Activate the second avatar.
特開2019-033881号公報Japanese Unexamined Patent Publication No. 2019-033881
 Incidentally, an interface system is desired in which a virtual space is interposed as an interface between two real spaces: in the virtual space, a second avatar is operated through the operation of a first avatar corresponding to the user, and thereby the operation of a controlled object such as a robot corresponding to the second avatar is controlled in a real space that differs, in at least one of time and position, from the real space in which the user exists.
 In such an interface system, a virtual space is created to correspond to an area that the user cannot easily enter in practice (for example, a nuclear power plant undergoing decommissioning work), and in that virtual space a first avatar corresponding to the user, who exists in a predetermined area of the real space, and a second avatar corresponding to the controlled object (for example, a robot existing inside that nuclear power plant), which exists in an area of the real space different from the area where the user exists, are generated.
 According to this, by operating, in the virtual space, the second avatar corresponding to the controlled object via the first avatar corresponding to the user, the user can control the operation of the controlled object existing in an area different from the user's own as if directly touching and moving it with his or her own hands.
 ここで、そのようなインターフェースシステムを構築するに際し、特許文献1のシステムを利用することが考えられる。しかし、特許文献1のシステムにおいてユーザが制御するものは、仮想空間に存在しているアバターである。 Here, when constructing such an interface system, it is conceivable to use the system of Patent Document 1. However, what the user controls in the system of Patent Document 1 is an avatar existing in the virtual space.
 Therefore, when such an interface system is constructed using the system of Patent Document 1, even though the controlled object exists in the real space, the user may attempt to move the avatar corresponding to that controlled object without considering the constraints in the real space (for example, the surrounding environment of the controlled object, the functions of the controlled object, and so on).
 本発明は以上の点に鑑みてなされたものであり、ユーザが、仮想空間を介して、現実空間で自らが存在している領域とは異なる領域に存在する制御対象の動作を、その異なる領域に即して制御することができるインターフェースシステム、及び、その制御対象が実行可能な動作を体感することができる動作体感システムを提供することを目的とする。 The present invention has been made in view of the above points, and the user can control the operation of a controlled object that exists in a region different from the region in which he / she exists in the real space via a virtual space. It is an object of the present invention to provide an interface system that can be controlled in accordance with the above, and an operation experience system that allows the controlled object to experience an executable operation.
 本発明の体感型インターフェースシステムは、
 仮想空間を介して、現実空間の第1領域に存在しているユーザが、前記第1領域と時間及び位置の少なくとも一方が異なる現実空間の第2領域に存在する制御対象の動作を、制御するための体感型インターフェースシステムであって、
 前記ユーザに対応する第1アバター、及び、前記制御対象に対応する第2アバターが存在する仮想空間を生成する仮想空間生成部と、
 前記ユーザの前記第1領域における動作を認識するユーザ動作認識部と、
 前記ユーザの動作に応じて前記第1アバターの動作を制御し、前記第1アバターの動作に応じて前記第2アバターの動作を制御するアバター制御部と、
 前記第1アバターの状態及び前記第2アバターの状態に基づいて、前記ユーザに認識させる前記仮想空間の画像を決定する画像決定部と、
 決定された前記仮想空間の画像を、前記ユーザに認識させる画像表示器と、
 前記第2領域で前記制御対象が実行可能な動作を認識する可能動作認識部と、
 前記第2アバターの動作に応じて、前記制御対象の動作を制御する対象制御部とを備え、
 前記アバター制御部は、前記第2領域で前記制御対象が実行可能な動作に基づいて、前記第2アバターが実行可能な動作を制限しつつ制御することを特徴とする。
The experience-based interface system of the present invention is
an experience-based interface system for allowing a user existing in a first region of the real space to control, via a virtual space, the operation of a controlled object existing in a second region of the real space that differs from the first region in at least one of time and position, and comprises:
A virtual space generation unit that generates a virtual space in which a first avatar corresponding to the user and a second avatar corresponding to the controlled object exist.
A user motion recognition unit that recognizes the user's motion in the first region,
An avatar control unit that controls the operation of the first avatar according to the operation of the user and controls the operation of the second avatar according to the operation of the first avatar.
An image determination unit that determines an image of the virtual space to be recognized by the user based on the state of the first avatar and the state of the second avatar.
An image display that causes the user to recognize the determined image of the virtual space, and
A possible motion recognition unit that recognizes an action that the controlled object can execute in the second region,
A target control unit that controls the operation of the controlled object according to the operation of the second avatar is provided.
The avatar control unit is characterized in that it controls while limiting the actions that the second avatar can perform, based on the actions that the control target can perform in the second region.
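To make the division of roles among the units listed above easier to follow, the following is a minimal, hypothetical Python sketch of the data flow (all names are illustrative assumptions, not the publication's implementation): the user's motion drives the first avatar, the first avatar drives the second avatar within the actions recognized as executable by the controlled object, and the resulting action of the second avatar drives the controlled object.

```python
class UserMotionRecognizer:
    def recognize(self):
        # user motion in the first region (stubbed)
        return {"move": (0.5, 0.0), "gesture": "grab"}

class PossibleActionRecognizer:
    def executable_actions(self):
        # what the controlled object can do in the second region (stubbed)
        return {"move", "hover"}

class AvatarController:
    def __init__(self, possible):
        self.possible = possible

    def update(self, user_motion):
        first_avatar_motion = user_motion                    # the first avatar follows the user
        wanted = "move" if "move" in user_motion else "hover"
        # the second avatar only performs actions the controlled object can execute
        allowed = self.possible.executable_actions()
        second_avatar_action = wanted if wanted in allowed else None
        return first_avatar_motion, second_avatar_action

class TargetControlUnit:
    def drive(self, second_avatar_action):
        if second_avatar_action is not None:
            print("controlled object performs:", second_avatar_action)

avatar_ctrl = AvatarController(PossibleActionRecognizer())
_, action = avatar_ctrl.update(UserMotionRecognizer().recognize())
TargetControlUnit().drive(action)
```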
 As described above, the experience-based interface system of the present invention controls the operation of the first avatar corresponding to the user according to the operation of the user, and controls the operation of the second avatar according to the operation of the first avatar. As a result, by operating the second avatar via the first avatar corresponding to the user in the virtual space, the user can operate the controlled object corresponding to the second avatar in the real space.
 However, the operation of the second avatar in the virtual space is controlled while being limited based on the operations that can be executed by the controlled object existing in the second region, in which at least one of the time and the position differs from the first region in which the user exists. That is, the operation is controlled while being restricted based on the constraints in the real space in which the controlled object exists (for example, the surrounding environment of the controlled object, the functions of the controlled object, and so on).
 As a result, even if the user tries to operate the second avatar via the first avatar, when that operation corresponds to an operation that the controlled object cannot execute (that is, an operation that ignores the constraints in the real space), the user naturally cannot have it performed even in the virtual space. For example, the second avatar may become unable to perform a predetermined action, or unable to move at or above a predetermined speed.
 Therefore, according to the experience-based interface system of the present invention, the operation of the second avatar corresponding to an operation that the controlled object cannot execute naturally cannot be performed even in the virtual space, so the user can control the operation of the controlled object in the second region in accordance with the second region without considering the constraints in the second region of the real space.
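As one way to picture this restriction, the following hedged Python sketch (the capability values and function names are hypothetical, not from the publication) clamps a requested second-avatar motion to an assumed capability envelope of the controlled object, so that an infeasible request simply cannot be carried out in the virtual space.

```python
from dataclasses import dataclass

@dataclass
class TargetCapability:
    """Hypothetical capability envelope of the controlled object."""
    max_speed_mps: float = 5.0
    max_payload_kg: float = 1.0

def restrict_second_avatar_motion(requested_velocity, payload_kg, cap: TargetCapability):
    """Return the motion the second avatar is actually allowed to perform.

    A request the controlled object could not execute (too fast, too heavy a
    load) is scaled down or refused instead of being carried out in the
    virtual space.
    """
    vx, vy, vz = requested_velocity
    speed = (vx**2 + vy**2 + vz**2) ** 0.5
    if payload_kg > cap.max_payload_kg:
        return (0.0, 0.0, 0.0)                 # the avatar simply cannot lift it
    if speed > cap.max_speed_mps and speed > 0.0:
        k = cap.max_speed_mps / speed          # cap the avatar's speed
        return (vx * k, vy * k, vz * k)
    return requested_velocity

print(restrict_second_avatar_motion((8.0, 0.0, 0.0), 0.5, TargetCapability()))
```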
 また、本発明の体感型インターフェースシステムにおいては、
 前記第1領域の形状及び前記第2領域の形状を認識する空間認識部を備え、
 前記仮想空間生成部は、前記仮想空間の形状が前記第1領域の形状に対応するように、前記仮想空間を生成し、
 前記アバター制御部は、前記第2領域で前記制御対象が実行可能な動作、及び、前記第1領域の形状と前記第2領域の形状との相違に基づいて、前記第2アバターが実行可能な動作を制限しつつ制御することが好ましい。
Further, in the experience-based interface system of the present invention,
A space recognition unit that recognizes the shape of the first region and the shape of the second region is provided.
The virtual space generation unit generates the virtual space so that the shape of the virtual space corresponds to the shape of the first region.
It is preferable that the avatar control unit controls the actions that the second avatar can perform while limiting them based on the actions that the controlled object can execute in the second region and on the difference between the shape of the first region and the shape of the second region.
 本発明の体感型インターフェースシステムを使用する状況としては、第1領域の形状と第2領域の形状とを一致させることができない場合、又は、一致させることが難しい場合がある。例えば、第1領域は、自宅又はオフィスの一室のような狭い空間であり、第2領域は、工事現場のような広い空間であるような場合がある。 As a situation in which the experience-based interface system of the present invention is used, there are cases where the shape of the first region and the shape of the second region cannot be matched, or it is difficult to match them. For example, the first area may be a narrow space such as a room in a home or an office, and the second area may be a large space such as a construction site.
 そのような場合、第2領域の形状に対応するように仮想空間を生成してしまうと、ユーザは、自らの動作可能な範囲を誤って認識してしまうことがある。ひいては、ユーザは、自らに対応する第1アバターを動作させようとした際に、自らが存在している第1領域で周辺環境(例えば、自室に設置されている家具等)に接触する等してしまうおそれがある。 In such a case, if a virtual space is created so as to correspond to the shape of the second region, the user may mistakenly recognize his / her operable range. As a result, when the user tries to operate the first avatar corresponding to himself / herself, he / she comes into contact with the surrounding environment (for example, furniture installed in his / her room) in the first area where he / she exists. There is a risk that it will end up.
 Therefore, when the virtual space is generated so that its shape corresponds to the shape of the first region in this way, the virtual space corresponds to the range in which the user can originally move, so the user can operate the first avatar without mistakenly recognizing his or her own operable range. As a result, even when the shape of the first region and the shape of the second region differ, the user can control the operation of the controlled object in the second region in accordance with the second region while remaining in the first region.
 However, if the virtual space is merely generated based on the shape of the first region, the user may easily end up making the second avatar, which corresponds to the controlled object, perform actions that correspond to operations the controlled object cannot execute. For example, when the shapes differ such that a given movement of the second avatar corresponds to a larger movement of the controlled object, moving the second avatar in the same way as when there is no difference in shape may exceed the upper limit of the movement speed of the controlled object.
 そこで、このように、第2領域で制御対象が実行可能な動作だけでなく、第1領域の形状と第2領域の形状との相違にも基づいて、第2アバターの動作を制限するようにすることが好ましい。 Therefore, in this way, the movement of the second avatar is restricted based on the difference between the shape of the first region and the shape of the second region as well as the movement that the controlled object can execute in the second region. It is preferable to do so.
 As a result, even when the virtual space is generated based on the shape of the first region, the operations of the second avatar corresponding to operations that have thereby become infeasible for the controlled object can be naturally restricted. Consequently, even in such a case, the user can control the operation of the controlled object in the second region in accordance with the second region without considering the difference between the shape of the first region and the shape of the second region.
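A rough numerical illustration of this point, under assumed region sizes and an assumed speed limit (none of the values below come from the publication): the allowed avatar speed in the virtual space is derived from the controlled object's speed limit divided by the largest per-axis scale factor between the two regions.

```python
def scale_factors(first_region_size, second_region_size):
    """Per-axis scale from the virtual space (shaped like the first region)
    to the second region; the two regions may have different aspect ratios."""
    (w1, d1), (w2, d2) = first_region_size, second_region_size
    return (w2 / w1, d2 / d1)

def allowed_avatar_speed(target_max_speed, first_region_size, second_region_size):
    """Upper limit on the second avatar's speed in the virtual space such that
    the corresponding controlled-object speed in the second region stays feasible."""
    sx, sy = scale_factors(first_region_size, second_region_size)
    worst = max(sx, sy)       # the axis that amplifies motion the most
    return target_max_speed / worst

# e.g. a 4 m x 4 m room driving a drone on a 40 m x 20 m site, drone limit 5 m/s
print(allowed_avatar_speed(5.0, (4.0, 4.0), (40.0, 20.0)))   # -> 0.5 m/s in the room
```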
 また、本発明の体感型インターフェースシステムにおいては、
 前記仮想空間における前記第2アバターの動作可能期間である第1期間の長さ、及び、前記第2領域における前記制御対象の動作可能期間である第2期間の長さを認識する期間認識部と、
 前記第2アバターの動作を記憶する動作記憶部とを備え、
 前記アバター制御部は、前記第2領域で前記制御対象が実行可能な動作、及び、前記第1期間の長さと前記第2期間の長さとの相違に基づいて、前記第2アバターが実行可能な動作を制限しつつ制御し、
 前記対象制御部は、前記第1期間に前記動作記憶部に記憶されていた前記第2アバターの動作に応じて、前記第2期間に前記制御対象を制御することが好ましい。
Further, in the experience-based interface system of the present invention,
A period recognition unit that recognizes the length of the first period, which is the operable period of the second avatar in the virtual space, and the length of the second period, which is the operable period of the controlled object in the second region. ,
A motion storage unit for storing the motion of the second avatar is provided.
The avatar control unit controls the actions that the second avatar can perform while limiting them based on the actions that the controlled object can execute in the second region and on the difference between the length of the first period and the length of the second period, and
It is preferable that the target control unit controls the control target in the second period according to the operation of the second avatar stored in the motion storage unit in the first period.
 本発明の体感型インターフェースシステムを使用する状況としては、第2アバターの動作と制御対象の動作とを、リアルタイムで対応させなくてもよい状況もある。そのような状況においては、このように動作記憶部を設けておくと、ある時間帯における第2アバターの動作を記憶しておき(すなわち、制御対象の動作を設定しておき)、その動作に応じて、後の時間帯で制御対象に実際に動作をさせることができる。 As a situation in which the experience-based interface system of the present invention is used, there is a situation in which the operation of the second avatar and the operation of the controlled object do not have to correspond in real time. In such a situation, if the motion storage unit is provided in this way, the motion of the second avatar in a certain time zone is stored (that is, the motion of the controlled object is set), and the motion is stored. Depending on the situation, the controlled object can actually operate in a later time zone.
 Incidentally, the length of the first period, which is the operable period of the second avatar in the virtual space, and the length of the second period, which is the operable period of the controlled object in the second region, do not necessarily have to coincide. For example, by setting the first period longer than the second period, an operation that the controlled object is to execute in a short time (for example, surgery performed by a robot) can be set in advance while being verified over a longer time.
 However, if the length of the first period differs from the length of the second period, the user may easily end up making the second avatar, which corresponds to the controlled object, perform actions that correspond to operations the controlled object cannot execute. For example, when the first period is longer than the second period, the operating speed of the controlled object relative to the operating speed of the second avatar becomes faster, so if the user operates the second avatar in the same way as when there is no difference between the periods, the upper limit of the operating speed of the controlled object may be exceeded.
 そこで、このように、第2領域で制御対象が実行可能な動作だけでなく、第1期間の長さと第2期間の長さとの相違にも基づいて、第2アバターの動作を制限するようにすることが好ましい。 Therefore, in this way, the movement of the second avatar is restricted based on the difference between the length of the first period and the length of the second period as well as the movement that the controlled object can execute in the second region. It is preferable to do so.
 As a result, even when the length of the first period differs from the length of the second period, the operations of the second avatar corresponding to operations that have thereby become infeasible for the controlled object can be naturally restricted. Consequently, even in such a case, the user can control the operation of the controlled object in the second period in accordance with the second region without considering the difference between the length of the first period and the length of the second period.
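The same idea can be illustrated for the time axis with a short hedged sketch (the period lengths and speed limit are assumed values for the example): compressing a long first-period recording into a shorter second period multiplies every speed, so a recorded avatar motion is only accepted if the scaled speed stays within the controlled object's limit.

```python
def time_scale(first_period_s, second_period_s):
    """How much faster the controlled object would move than the second avatar
    when a first-period recording is compressed into the second period."""
    return first_period_s / second_period_s

def is_replayable(avatar_speed, first_period_s, second_period_s, target_max_speed):
    """Accept a recorded avatar motion only if, after the timeline is compressed
    to the second period, the controlled object's speed stays within its limit."""
    return avatar_speed * time_scale(first_period_s, second_period_s) <= target_max_speed

# a 60-minute rehearsal mapped onto a 10-minute execution window multiplies speeds by 6
print(time_scale(3600, 600))                                   # -> 6.0
print(is_replayable(1.0, 3600, 600, target_max_speed=5.0))     # -> False, so the avatar is slowed down
```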
 また、本発明の動作体感システムは、
 仮想空間を介して、現実空間の第1領域に存在しているユーザが、前記第1領域と時間及び位置の少なくとも一方が異なる現実空間の第2領域に存在する制御対象が実行可能な動作を、体感するための動作体感システムであって、
 前記ユーザに対応する第1アバター、及び、前記制御対象に対応する第2アバターが存在する仮想空間を生成する仮想空間生成部と、
 前記ユーザの前記第1領域における動作を認識するユーザ動作認識部と、
 前記ユーザの動作に応じて前記第1アバターの動作を制御し、前記第1アバターの動作に応じて前記第2アバターの動作を制御するアバター制御部と、
 第1モードと第2モードとを切り換えるモード切換部と、
 前記第1アバターの状態及び前記第2アバターの状態に基づいて、前記ユーザに認識させる前記仮想空間の画像を決定する画像決定部と、
 決定された前記仮想空間の画像を、前記ユーザに認識させる画像表示器と、
 前記制御対象が機能として実行可能な動作を認識する対象機能認識部と、
 前記第2領域で前記制御対象が動作する際の制約を認識する制約認識部とを備え、
 前記アバター制御部は、前記第1モードでは、前記制御対象の機能のみに基づいて、前記第2アバターが実行可能な動作を制御し、前記第2モードでは、前記制御対象の機能及び前記第2領域における制約に基づいて、前記第2アバターが実行可能な動作を制限しつつ制御することを特徴とする。
In addition, the motion experience system of the present invention is
a motion experience system for allowing a user existing in a first region of the real space to experience, via a virtual space, the operations that can be executed by a controlled object existing in a second region of the real space that differs from the first region in at least one of time and position, and comprises:
A virtual space generation unit that generates a virtual space in which a first avatar corresponding to the user and a second avatar corresponding to the controlled object exist.
A user motion recognition unit that recognizes the user's motion in the first region,
An avatar control unit that controls the operation of the first avatar according to the operation of the user and controls the operation of the second avatar according to the operation of the first avatar.
A mode switching unit that switches between the first mode and the second mode,
An image determination unit that determines an image of the virtual space to be recognized by the user based on the state of the first avatar and the state of the second avatar.
An image display that causes the user to recognize the determined image of the virtual space, and
A target function recognition unit that recognizes an operation that the control target can execute as a function,
It is provided with a constraint recognition unit that recognizes constraints when the control target operates in the second region.
In the first mode, the avatar control unit controls the actions that the second avatar can perform based only on the functions of the controlled object, and in the second mode, it controls those actions while limiting them based on the functions of the controlled object and on the constraints in the second region.
 このように、本発明の動作体感システムは、切換可能なモードのいずれにおいても、ユーザの動作に応じて、ユーザに対応する第1アバターの動作を制御し、その第1アバターの動作に応じて第2アバターの動作を制御している。 As described above, the motion experience system of the present invention controls the motion of the first avatar corresponding to the user according to the motion of the user in any of the switchable modes, and according to the motion of the first avatar. It controls the operation of the second avatar.
 このとき、第1モードでは、仮想空間で第2アバターが実行可能な動作は、制御対象の機能のみに基づいて、制御される。具体的には、第1モードでは、ユーザが存在している第1領域とは時間及び位置の少なくとも一方が異なり、制御対象が存在する第2領域の周辺環境等は、考慮されない。これにより、第1モードでは、ユーザは、制御対象が機能として実行可能な動作を、体感することができる。 At this time, in the first mode, the operation that the second avatar can execute in the virtual space is controlled based only on the function to be controlled. Specifically, in the first mode, at least one of the time and the position is different from the first region in which the user exists, and the surrounding environment of the second region in which the control target exists is not considered. Thereby, in the first mode, the user can experience the operation that the controlled object can execute as a function.
 なお、第1モードでは、制御対象を第2領域で実際に動作させると、制御対象が第2領域の周辺環境に接触してしまうような可能性が生じる。そのため、第2領域における制約がほぼ無いような場合を除き、原則として、第1モードでは、制御対象は実際に動作させないことが推奨される。 In the first mode, when the controlled object is actually operated in the second region, there is a possibility that the controlled object comes into contact with the surrounding environment in the second region. Therefore, in principle, it is recommended that the controlled object is not actually operated in the first mode, except when there are almost no restrictions in the second region.
 一方、第2モードでは、仮想空間で第2アバターが実行可能な動作は、制御対象の機能に基づくだけではなく、第2領域における制約にも基づいて、制限しつつ制御される。 On the other hand, in the second mode, the operation that the second avatar can execute in the virtual space is controlled while being restricted not only based on the function to be controlled but also based on the constraint in the second area.
 すなわち、第2モードでは、第2アバターが実行可能な動作は、現実空間である第2領域で実際に制御対象が実行可能な動作に対応したものとなる。例えば、第2アバターが、所定の動作を実行できなくなったり、所定のスピード以上で移動できなくなったりする。これにより、第2モードでは、ユーザは、現実空間である第2領域で実際に制御対象が実行可能な動作を、体感することができる。 That is, in the second mode, the actions that can be executed by the second avatar correspond to the actions that the controlled object can actually execute in the second region, which is the real space. For example, the second avatar may not be able to perform a predetermined action or may not be able to move at a predetermined speed or higher. Thereby, in the second mode, the user can experience the operation that the controlled object can actually execute in the second region which is the real space.
 なお、第2モードでは、制御対象を第2領域で実際に動作させても、制御対象が第2領域の周辺環境に接触してしまうおそれがない。しかし、制御対象が実際に実行可能な動作を体感するという観点からは、必ずしも、制御対象を実際に動作させなくてもよい。そのため、第2モードでは、制御対象を実際に動作させてもよいし、動作させなくてもよい。 In the second mode, even if the controlled object is actually operated in the second area, there is no possibility that the controlled object comes into contact with the surrounding environment of the second area. However, from the viewpoint of experiencing the operation that the controlled object can actually perform, it is not always necessary to actually operate the controlled object. Therefore, in the second mode, the controlled object may or may not be actually operated.
 Therefore, according to the motion experience system of the present invention, the user can experience and compare the operations that the controlled object can execute based only on its functions and the operations that it can execute based on its functions and the constraints in the second region. Consequently, with this system, the user can intuitively verify the operation that makes full use of the functions of the controlled object under a given environment.
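As an illustration of the difference between the two modes, the following hypothetical Python sketch removes environment-blocked actions only in the second mode; the action names and the blocking rule are assumptions made up for the example, not data from the publication.

```python
def executable_avatar_actions(mode, function_actions, environment_blocked):
    """First mode: only the controlled object's functions matter.
    Second mode: actions blocked by the second region's environment (or other
    constraints) are removed as well."""
    if mode == "first":
        return set(function_actions)
    return set(function_actions) - set(environment_blocked)

functions = {"fly", "hover", "carry_2kg", "fly_at_8mps"}
blocked_on_site = {"fly_at_8mps"}    # e.g. too fast for the crowded site
print(executable_avatar_actions("first", functions, blocked_on_site))
print(executable_avatar_actions("second", functions, blocked_on_site))
```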
A schematic diagram showing the overall configuration of the VR system according to the embodiment.
A block diagram showing the configuration of the processing units of the VR system of FIG. 1.
A schematic diagram of the virtual space that the VR system of FIG. 1 causes the user to recognize.
An explanatory diagram showing the relationship between the size of the virtual space generated in the VR system of FIG. 1 and the sizes of the first region and the second region in the real space.
A timing chart showing an example of the relationship between the time zone in which the user's actions are performed and the time zones in which the actions of the avatar, the objects, and the drones are performed in the VR system of FIG. 1.
A flowchart showing the processing executed when use of the VR system of FIG. 1 is started.
A flowchart showing, among the processing executed when the VR system of FIG. 1 recognizes the executable actions of the objects, the processing executed based on already recognized information.
A flowchart showing, among the processing executed when the VR system of FIG. 1 recognizes the executable actions of the objects, the processing executed based on newly recognized information.
A flowchart showing the processing executed when the VR system of FIG. 1 operates the objects and when it operates the drones.
 以下、図面を参照して、実施形態に係るVRシステムS(体感型インターフェースシステム、動作体感システム)について説明する。 Hereinafter, the VR system S (experience-based interface system, motion-experience system) according to the embodiment will be described with reference to the drawings.
 VRシステムSは、仮想空間にユーザU自身が存在していると認識させて、ユーザUに仮想現実(いわゆるVR(virtual reality))を体感させるシステムである。 The VR system S is a system that makes the user U recognize that the user U itself exists in the virtual space and allows the user U to experience virtual reality (so-called VR (virtual reality)).
 また、VRシステムSは、その仮想空間を介して、ユーザUが存在している現実空間の第1領域RS1と時間及び位置の少なくとも一方が異なる現実空間の第2領域RS2に存在する制御対象の動作を制御する、又は、その制御対象が実行可能な動作を、ユーザUに体感させるためのシステムである。 Further, the VR system S is a controlled object that exists in the second region RS2 in the real space where at least one of the time and the position is different from the first region RS1 in the real space where the user U exists via the virtual space. This is a system for allowing the user U to experience an operation that controls the operation or that the controlled object can execute.
 In the following, a case will be described in which the VR system S is used to cause drones to perform predetermined work (for example, transporting loads) at a construction site. Accordingly, the room of the user U in which the user U exists is taken as the first region RS1, and the construction site that is located away from the first region RS1 and in which the first drone D1 and the second drone D2 to be controlled (hereinafter collectively referred to as the "drone D") exist is taken as the second region RS2.
 However, the first region, the second region, and the controlled object of the present invention are not limited to such a configuration. The first region may be any region of the real space in which the user exists, and the second region may be any region of the real space in which the controlled object exists or may exist and which differs from the first region in at least one of time and position. The controlled object may be anything whose operation the user controls, or whose operation the user verifies, via the virtual space.
 Therefore, for example, when the system of the present invention is applied to a service for experiencing nature from the viewpoint of small creatures such as small animals or insects, the first region may be the room in which the service is provided, the second region may be a breeding case that is smaller than that room and in which small animals, insects, or the like are actually kept, and the controlled object may be a camera-equipped robot that can move around inside the breeding case.
 Further, for example, when the system of the present invention is applied to robotic surgery, the first region may be an operating room in a predetermined time zone, the second region may be that same operating room in a time zone later than the predetermined one, and the controlled object may be the robot that actually performs the surgery.
[システムの概略構成]
 まず、図1及び図2を参照して、VRシステムSの概略構成について説明する。
[Overview of system configuration]
First, a schematic configuration of the VR system S will be described with reference to FIGS. 1 and 2.
 図1に示すように、VRシステムSは、ユーザUが存在している現実空間の第1領域RS1に存在している機器、ドローンDが存在する第2領域RS2に存在している機器、及び、VRシステムSを用いたサービスの提供者等の設置しているサーバ1によって、構成されている。 As shown in FIG. 1, the VR system S includes a device existing in the first region RS1 in the real space where the user U exists, a device existing in the second region RS2 where the drone D exists, and a device. , Is configured by a server 1 installed by a service provider or the like using the VR system S.
 The system of the present invention is not limited to such a configuration. For example, a terminal may be installed in the first region or the second region instead of the server, or a plurality of servers, terminals, and the like may be used to realize the functions of the server 1 of the present embodiment (the processing units described later).
 第1領域RS1に存在している機器としては、ユーザUに取り付けられる複数の標識2と、ユーザU(厳密には、ユーザUに取り付けられた標識2)を撮影する第1カメラ3と、仮想空間VS(図3参照)の画像及び音声をユーザUに認識させるヘッドマウントディスプレイ(以下、「HMD4」という。)と、ユーザUが使用するコントローラ5(図1では不図示。図2参照。)が該当する。 The devices existing in the first region RS1 include a plurality of signs 2 attached to the user U, a first camera 3 for photographing the user U (strictly speaking, the sign 2 attached to the user U), and a virtual device. A head-mounted display (hereinafter referred to as "HMD4") that allows the user U to recognize the image and sound of the space VS (see FIG. 3), and a controller 5 used by the user U (not shown in FIG. 1, see FIG. 2). Applies to.
 標識2は、ユーザUが装着するHMD4、手袋及び靴等を介して、ユーザUの頭部、両手及び両足のそれぞれに取り付けられている。ただし、標識2は、後述するようにユーザUの第1領域RS1における動作(例えば、身体の各部位の動作、座標の移動、姿勢の変化等)を認識するために用いられるものである。そのため、その取り付け位置は、VRシステムSを構成する他の機器等に応じて、適宜変更してもよい。 The sign 2 is attached to each of the user U's head, both hands, and both feet via the HMD4, gloves, shoes, and the like worn by the user U. However, the sign 2 is used to recognize the movement of the user U in the first region RS1 (for example, the movement of each part of the body, the movement of coordinates, the change of posture, etc.) as described later. Therefore, the mounting position may be appropriately changed according to other devices and the like constituting the VR system S.
 第1カメラ3は、ユーザUそのもの、及び、第1領域RS1でユーザUが動作可能な範囲を多方向から撮影可能なように、第1領域RS1に複数設置されている。なお、第1カメラ3は、第1カメラ3の性能、第1領域RS1の形状等に応じて、1つだけ設置してもよいし、その設置場所も、適宜変更してもよい。 A plurality of first cameras 3 are installed in the first area RS1 so that the user U itself and the range in which the user U can operate in the first area RS1 can be photographed from multiple directions. Only one first camera 3 may be installed depending on the performance of the first camera 3, the shape of the first region RS1, and the like, and the installation location thereof may be appropriately changed.
 HMD4は、ユーザUの頭部に装着される。図2に示すように、HMD4は、ユーザUに、サーバ1によって決定された仮想空間VSの画像をユーザUの認識させるためのモニタ40(画像表示器)と、サーバ1によって決定された仮想空間VSの音声をユーザUに認識させるためのスピーカ41(音声発生器)とを有している。 The HMD4 is attached to the head of the user U. As shown in FIG. 2, the HMD 4 has a monitor 40 (image display) for causing the user U to recognize the image of the virtual space VS determined by the server 1, and a virtual space determined by the server 1. It has a speaker 41 (voice generator) for causing the user U to recognize the voice of VS.
 コントローラ5は、後述するモードの切り換え指示、及び、第1ドローンD1に対応する第1オブジェクトO1、及び、第2ドローンD2に対応する第2オブジェクトO2への動作指示を、サーバ1に送信するために用いられる。 The controller 5 transmits to the server 1 a mode switching instruction, which will be described later, and an operation instruction to the first object O1 corresponding to the first drone D1 and the second object O2 corresponding to the second drone D2. Used for.
 When the user U is made to experience virtual reality using this VR system S, the user U is made to recognize only the images and sounds of the virtual space VS through the HMD 4, and is thereby made to recognize that the user U himself or herself exists in the virtual space as the avatar A described later (see FIG. 3). That is, the VR system S is configured as a so-called immersive system.
 なお、VRシステムSでは、ユーザUの第1領域RS1における動作を認識するシステムとして、標識2と第1カメラ3と用いて構成された、いわゆるモーションキャプチャー装置を採用している。 The VR system S employs a so-called motion capture device configured by using the sign 2 and the first camera 3 as a system for recognizing the operation of the user U in the first region RS1.
 しかし、本発明のシステムは、このような構成に限定されるものではない。例えば、モーションキャプチャー装置を使用する場合には、上記の構成のものの他、標識及びカメラの数が上記構成とは異なる(例えば、それぞれ1つずつ設けられている)ものを用いてもよい。 However, the system of the present invention is not limited to such a configuration. For example, when using a motion capture device, in addition to the above configuration, one having a different number of signs and cameras from the above configuration (for example, one is provided for each) may be used.
 また、例えば、モーションキャプチャー装置以外の装置を用いて、ユーザの現実空間の第1領域における動作を認識するようにしてもよい。具体的には、ユーザの装着するHMD、手袋及び靴等にGPS等のセンサを搭載し、そのセンサからの出力に基づいて、プレイヤーの動作を認識するようにしてもよい。また、そのようなセンサと、上記のようなモーションキャプチャー装置を併用してもよい。 Further, for example, a device other than the motion capture device may be used to recognize the operation in the first region of the user's real space. Specifically, a sensor such as GPS may be mounted on the HMD, gloves, shoes, or the like worn by the user, and the movement of the player may be recognized based on the output from the sensor. Further, such a sensor may be used in combination with a motion capture device as described above.
 Further, for example, the controller may be omitted in the real space, and an object corresponding to the controller (for example, the fifth object O5 described later in the present embodiment) may be generated only in the virtual space. Alternatively, both such a controller and such an object may be omitted, or voice- or gesture-based input may be used in combination with at least one of them, so that the instructions that were given using them are recognized based on the user's voice or gestures.
 第2領域RS2に存在している機器としては、第2領域RS2に設置されている第2カメラ6(図1では不図示。図2参照。)が該当する。なお、本実施形態においては省略しているが、第2領域RS2には、サーバ1から受信した指示に基づいて、ドローンDを制御するための中継機器が設置されていてもよい。 The device existing in the second area RS2 corresponds to the second camera 6 (not shown in FIG. 1, see FIG. 2) installed in the second area RS2. Although omitted in the present embodiment, a relay device for controlling the drone D may be installed in the second region RS2 based on the instruction received from the server 1.
 第2カメラ6は、ドローンDそのもの、及び、第2領域でそれらが動作可能な範囲を、多方向から撮影可能なように、複数設置されている。なお、第2カメラ6は、第2カメラ6の性能、第2領域RS2の形状等に応じて、1つだけ設置してもよいし、その設置場所も、適宜設定してもよい。 A plurality of second cameras 6 are installed so that the drone D itself and the range in which they can operate in the second area can be photographed from multiple directions. Only one second camera 6 may be installed depending on the performance of the second camera 6, the shape of the second region RS2, and the like, and the installation location thereof may be appropriately set.
 サーバ1は、CPU、RAM、ROM、インターフェース回路等を含む1つ又は複数の電子回路ユニットにより構成されている。 The server 1 is composed of one or a plurality of electronic circuit units including a CPU, RAM, ROM, an interface circuit, and the like.
 The server 1 is configured to be capable of mutual information communication with the first camera 3, the HMD 4, and the controller 5 existing in the first region RS1, and with the second camera 6 existing in the second region RS2, through short-range wireless communication, wired communication, the Internet, public lines, and the like. The server 1 is likewise configured to be capable of mutual information communication with the drones D.
[各処理部の構成]
 次に、図1~図5を用いて、サーバ1の備えている処理部の構成を詳細に説明する。
[Configuration of each processing unit]
Next, the configuration of the processing unit included in the server 1 will be described in detail with reference to FIGS. 1 to 5.
 As shown in FIG. 2, the server 1 includes, as functions realized by its hardware configuration or programs, a virtual space generation unit 10 (space recognition unit, constraint recognition unit), a user motion recognition unit 11, a mode switching unit 12, a period recognition unit 13, a possible motion recognition unit 14 (target function recognition unit), an avatar control unit 15, an output information determination unit 16 (image determination unit), and a target control unit 17 (motion storage unit).
 The virtual space generation unit 10 generates the images of the virtual space VS (strictly speaking, the background of the virtual space VS), of the avatar A (first avatar) corresponding to the user U existing in that virtual space VS, and of a plurality of objects. The virtual space generation unit 10 also generates sounds related to those images.
 As shown in FIGS. 1 and 3, the objects generated by the virtual space generation unit 10 include the first object O1 (second avatar) corresponding to the first drone D1 existing in the second region RS2, the second object O2 (second avatar) corresponding to the second drone D2, the third object O3 corresponding to the work machine W, the fourth object O4 corresponding to the building material M, and the fifth object O5 corresponding to the controller 5 existing in the first region RS1.
 The virtual space generation unit 10 (space recognition unit) recognizes the shape of the first region RS1 based on the images captured by the first camera 3, and recognizes the shape of the second region RS2 based on the images captured by the second camera 6. The virtual space generation unit 10 then generates the virtual space VS so that the shape of the virtual space VS corresponds to the shape of the first region RS1.
 Specifically, as shown in FIG. 4, suppose for example that the first region RS1 is recognized as a narrow space that is square in plan view, and the second region RS2 is recognized as a wide, elongated space that is rectangular in plan view. In that case, the virtual space generation unit 10 generates the virtual space VS so that its shape matches the shape of the first region RS1. Meanwhile, the image of the virtual space VS is generated by reducing an image based on the captured image of the second region RS2 so as to fit the shape of the first region RS1.
 このとき、第1領域RS1(すなわち、仮想空間VS)の平面視におけるアスペクト比は、第2領域RS2のアスペクト比とは異なっている。そのため、仮想空間VSの画像の図面の縦方向と横方向における縮小の比率は、異なるものとなる。 At this time, the aspect ratio of the first region RS1 (that is, the virtual space VS) in the plan view is different from the aspect ratio of the second region RS2. Therefore, the reduction ratios of the images of the virtual space VS in the vertical direction and the horizontal direction of the drawing are different.
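As a concrete way to read this, the following hedged sketch (the sizes are assumed values, not taken from the publication) maps a point of the second region into the virtual space, whose footprint matches the first region, using a separate reduction ratio per axis because the aspect ratios differ.

```python
def to_virtual_space(p_second, second_region_size, first_region_size):
    """Map a point in the second region into the virtual space, whose footprint
    matches the first region; each axis is reduced by its own ratio."""
    (x, y), (w2, d2), (w1, d1) = p_second, second_region_size, first_region_size
    return (x * w1 / w2, y * d1 / d2)

# a 40 m x 20 m site squeezed into a 4 m x 4 m virtual space: 1/10 and 1/5 per axis
print(to_virtual_space((20.0, 10.0), (40.0, 20.0), (4.0, 4.0)))   # -> (2.0, 2.0)
```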
 The system of the present invention is not limited to such a configuration; when the shape of the first region and the shape of the second region always match (for example, when only the time differs), the virtual space generation unit does not have to include the function of deforming the image of the second region based on the shape of the first region.
 Further, the virtual space generation unit 10 (constraint recognition unit) recognizes, based on the images captured by the first camera 3, the constraints under which the user U (and thus the avatar A corresponding to the user U) operates in the first region RS1. The virtual space generation unit 10 then generates the virtual space VS in consideration of those constraints.
 Specifically, for example, as shown in FIGS. 1, 3 and 4, the virtual space generation unit 10 recognizes the furniture F existing in the first region RS1 and generates, at the position of the virtual space VS corresponding to the position where the furniture F exists in the first region RS1, a first constraint area LA1 into which the avatar A cannot enter. At this time, the furniture F is generated in the virtual space VS as a ghost, in a semi-transparent state.
 Likewise, the virtual space generation unit 10 recognizes the constraints under which the drone D operates in the second region RS2, based on the images captured by the second camera 6 and on the functions and the like of the work machine W that have been input in advance. The virtual space generation unit 10 then generates the virtual space VS in consideration of those constraints.
 具体的には、例えば、図1、図3及び図4に示すように、仮想空間生成部10は、第2領域RS2に存在している建築資材Mを認識する。そのうえで、建築資材Mが第2領域RS2で存在している位置に対応する仮想空間VSの位置に、第2制約エリアLA2を生成する。第2制約エリアLA2は、ドローンDに対応する第1オブジェクトO1及び第2オブジェクトO2(以下、総称する場合は「対象オブジェクトO1,O2」という。)が進入不可能なエリアとなる。 Specifically, for example, as shown in FIGS. 1, 3 and 4, the virtual space generation unit 10 recognizes the building material M existing in the second region RS2. Then, the second constraint area LA2 is generated at the position of the virtual space VS corresponding to the position where the building material M exists in the second region RS2. The second restricted area LA2 is an area in which the first object O1 and the second object O2 (hereinafter, collectively referred to as “target objects O1 and O2”) corresponding to the drone D cannot enter.
 同様に、仮想空間生成部10は、第2領域RS2に存在している作業機械Wを認識する。そのうえで、作業機械Wが第2領域RS2で存在し得る領域(すなわち、作業機械Wが動作を行う領域)に対応する仮想空間VSの領域に、第3制約エリアLA3を生成する。第3制約エリアLA3は、対象オブジェクトO1,O2の進入が禁止されるエリア(例えば、進入させることは可能だが警告が表示されるエリア)となる。 Similarly, the virtual space generation unit 10 recognizes the work machine W existing in the second region RS2. Then, the third constraint area LA3 is generated in the area of the virtual space VS corresponding to the area where the work machine W can exist in the second area RS2 (that is, the area where the work machine W operates). The third restricted area LA3 is an area where entry of the target objects O1 and O2 is prohibited (for example, an area where entry is possible but a warning is displayed).
 このように生成された第1制約エリアLA1、第2制約エリアLA2及び第3制約エリアLA3は、例えば、図3に示すように、仮想空間VSで半透明の立体物のようにして示される。 The first constraint area LA1, the second constraint area LA2, and the third constraint area LA3 generated in this way are shown as a semi-transparent three-dimensional object in the virtual space VS, for example, as shown in FIG.
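One simple way to represent such constraint areas is as axis-aligned boxes in the virtual space, as in the following hypothetical sketch; the distinction between an area that refuses entry (such as LA1 or LA2) and one that only triggers a warning (such as LA3) is carried by a flag. All names and coordinates are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ConstraintArea:
    """Axis-aligned box in the virtual space that an avatar or object may not
    enter, or for which a warning is issued on entry."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    hard: bool = True   # True: entry refused, False: entry allowed with a warning

    def contains(self, x, y):
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def check_position(x, y, areas):
    for a in areas:
        if a.contains(x, y):
            return "blocked" if a.hard else "warning"
    return "ok"

la2 = ConstraintArea(1.0, 2.0, 1.0, 2.0, hard=True)    # e.g. building material
la3 = ConstraintArea(2.5, 3.5, 0.0, 1.0, hard=False)   # e.g. work machine's operating range
print(check_position(1.5, 1.5, [la2, la3]))   # -> blocked
print(check_position(3.0, 0.5, [la2, la3]))   # -> warning
```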
 The virtual space generation unit 10 does not necessarily have to recognize the shape of the first region RS1, the shape of the second region RS2, and the constraints in the first region RS1 and the second region RS2 based on the images of the first camera 3 and the second camera 6; it may recognize them based on models, numerical values, and the like separately input by the user U or others.
 As shown in FIG. 2, the user motion recognition unit 11 recognizes the motion of the user U based on the image data captured by the first camera 3. Specifically, the user motion recognition unit 11 extracts the signs 2 attached to the user U from the image data of the user U and, based on the extraction result, recognizes the motion of each part of the body of the user U, the movement of the coordinates, and the change in posture.
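As a minimal illustration of this step (the marker names and coordinates are made up for the example), successive frames of marker positions can be differenced per body part to obtain the motion that is then applied to the avatar A.

```python
def body_part_motion(prev_markers, curr_markers):
    """From two successive frames of marker positions (keyed by body part),
    derive each part's displacement since the previous frame."""
    motion = {}
    for part, (cx, cy, cz) in curr_markers.items():
        px, py, pz = prev_markers.get(part, (cx, cy, cz))
        motion[part] = (cx - px, cy - py, cz - pz)
    return motion

prev = {"head": (0.0, 1.6, 0.0), "right_hand": (0.3, 1.0, 0.2)}
curr = {"head": (0.0, 1.6, 0.1), "right_hand": (0.4, 1.2, 0.2)}
print(body_part_motion(prev, curr))
```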
 モード切換部12は、コントローラ5への入力等に基づいて、ユーザUのモード変更の指示を認識し、その指示に基づいて、VRシステムSが実行するモードを変更する。 The mode switching unit 12 recognizes the user U's mode change instruction based on the input to the controller 5, and changes the mode executed by the VR system S based on the instruction.
 VRシステムSでは、リアルタイムでドローンDを制御するモード(第2モード)と、対象オブジェクトO1,O2を動作させた時間帯よりも後の異なる時間帯でドローンDを制御するモード(第2モード)と、ドローンDを実際には制御せず、その動作を検証するモード(第1モード)の3つのモードのいずれかに変更可能となっている。 In the VR system S, a mode for controlling the drone D in real time (second mode) and a mode for controlling the drone D in a different time zone after the time zone in which the target objects O1 and O2 are operated (second mode). The drone D is not actually controlled and can be changed to one of three modes (first mode) for verifying its operation.
 なお、本発明のシステムは、このような構成に限定されるものではなく、それらの3つのモードのうちの少なくとも1つを備えていればよい。そのため、1つのモードだけを実行するように構成する場合には、モード切換部は省略してもよい。 The system of the present invention is not limited to such a configuration, and may include at least one of these three modes. Therefore, when configuring to execute only one mode, the mode switching unit may be omitted.
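 As a sketch only, under assumed names (Mode, restrictions_for) and reflecting the comparison of modes given later in this description, the three modes and the kinds of limits each one applies could be organized as follows:

    from enum import Enum, auto

    class Mode(Enum):
        VERIFY = auto()    # first mode: the drone D is not actually controlled
        REALTIME = auto()  # second mode: the drone D is controlled in real time
        DEFERRED = auto()  # second mode: the drone D is controlled in a later time slot

    def restrictions_for(mode):
        # In the verification mode, only the drone's functions and the difference in region
        # shapes limit the target objects; the surrounding environment and the difference
        # between the periods are considered only when the drone is actually controlled.
        base = {"drone_functions", "region_shape_difference"}
        if mode is not Mode.VERIFY:
            base |= {"surrounding_environment", "period_length_difference"}
        return base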
 The period recognition unit 13 recognizes, based on input to the controller 5 or the like, the length and time slot of a first period, which is the period during which the target objects O1, O2 can be operated in the virtual space VS. In other words, it recognizes the period during which the avatar A (and hence the user U) operating the target objects O1, O2 can act.
 The period recognition unit 13 also recognizes, based on input to the controller 5 or the like, the length and time slot of a second period, which is the period during which the drone D can operate in the second region RS2.
 Note that the system of the present invention is not limited to such a configuration; for example, when the system provides only the mode in which the controlled object is controlled in real time, the period recognition unit may be omitted.
 The possible motion recognition unit 14 (target function recognition unit) recognizes the motions that the drone D can execute in the second region RS2.
 Specifically, the possible motion recognition unit 14 (target function recognition unit) first recognizes the motions that the drone D can execute as functions, based on information or the like input in advance by the user U or others.
 The motions executable as functions may be recognized only when use of the VR system S is started, or they may be recognized at any time while the VR system S is in use, based on information fed back from the drone D.
 The possible motion recognition unit 14 also recognizes the motions that the drone D can execute based on the time slot of the second period recognized by the period recognition unit 13, the image of the second region RS2 recognized by the virtual space generation unit 10, the plan of the construction work to be carried out in the second region RS2, the environment of the second region RS2 (weather and so on), and the like.
 For example, the possible motion recognition unit 14 recognizes the separately input work schedule of the work machine W in the second period and, within that period, recognizes the motions that the drone D could execute as functions but that are infeasible if contact with the work machine W is to be avoided. The possible motion recognition unit 14 then refers to the motions recognized as infeasible in this way and recognizes the motions that the drone D can execute in the second period.
 The avatar control unit 15 controls the motion of the avatar A in accordance with the motion of the user U, and controls the motions of the target objects O1, O2 in accordance with the motion of the avatar A. Specifically, as shown in FIG. 3, the user U controls the motions of the target objects O1, O2 in the virtual space VS through the motion of the avatar A corresponding to the user, or by operating the object O5 corresponding to the controller 5.
 At this time, the avatar control unit 15 limits the motions of the target objects O1, O2 based on the motions that the corresponding drones D can execute as functions.
 For example, if the user U tries to move the first object O1 at a speed corresponding to a speed exceeding the movement speed that the first drone D1 can actually achieve, the motion of the first object O1 is limited. Specifically, when the user U is moving the first object O1 by pushing it with the hand of the avatar A and the movement speed reaches or exceeds a predetermined speed, processing is performed such that the hand of the avatar A passes through the first object O1.
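 A minimal sketch of this kind of function-based limit (the helper name limit_object_motion, the vector arithmetic and the time step dt are assumptions made only for illustration):

    def limit_object_motion(prev_pos, requested_pos, dt, max_drone_speed):
        # Speed the user is requesting for the target object, in virtual-space units per second.
        dist = sum((r - p) ** 2 for r, p in zip(requested_pos, prev_pos)) ** 0.5
        if dist / dt > max_drone_speed:
            # The corresponding drone cannot achieve this speed, so the object is not moved;
            # in the behavior described above, the avatar's hand simply passes through it.
            return prev_pos, False
        return requested_pos, True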
 The avatar control unit 15 also limits the motions of the target objects O1, O2 based on the motions that the corresponding drones D can execute in the second region RS2.
 For example, the drone D cannot enter the position where the building material M exists in the second region RS2. Accordingly, the target objects O1, O2 cannot be moved into the second constraint area LA2 corresponding to that position.
 Also, for example, if the drone D were moved into the region where the work machine W may exist in the second region RS2, it could come into contact with the work machine W. Accordingly, when the target objects O1, O2 are moved into the third constraint area LA3 corresponding to that region, their color changes and a warning message is displayed near the avatar A in the virtual space VS.
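 Continuing the earlier ConstraintArea/Policy sketch (again with assumed names, not taken from the disclosure), the handling of the second and third constraint areas by the avatar control unit could look like the following:

    def check_object_move(constraint_areas, target_pos):
        # Returns (allowed, warn): whether the target object may be moved to target_pos,
        # and whether a warning should be shown (color change and warning text near avatar A).
        allowed, warn = True, False
        for area in constraint_areas:
            if not area.contains(target_pos):
                continue
            if area.policy is Policy.BLOCK_OBJECT:   # second constraint area LA2
                allowed = False
            elif area.policy is Policy.WARN_OBJECT:  # third constraint area LA3
                warn = True
        return allowed, warn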
 The avatar control unit 15 also limits the motions of the target objects O1, O2 based on the difference between the shape of the first region RS1 and the shape of the second region RS2.
 For example, when the shape of the first region RS1 and the shape of the second region RS2 differ, the virtual space VS is generated based on the shape of the first region RS1. In such a virtual space VS, however, the user U may more easily cause the target objects O1, O2 to perform motions corresponding to motions that the drones D cannot execute.
 Specifically, when the second region RS2 is larger than the first region RS1 and the virtual space VS is generated based on the shape of the first region RS1, the movement amount of the drone D becomes large relative to the movement amount of the target objects O1, O2. Consequently, when the user U moves the target objects O1, O2, the upper limit of the movement speed of the drone D may be exceeded.
 Therefore, when the second region RS2 is larger than the first region RS1 (that is, when the virtual space VS is a reduced version of the second region RS2, as shown in FIG. 4), the avatar control unit 15 limits the movable speed of the target objects O1, O2 in each direction according to the degree of reduction in that direction.
 As a result, the target objects O1, O2 can move only at or below the speed defined for each direction. Specifically, the movable speed of the target objects O1, O2 in the vertical direction on the drawing sheet of FIG. 4 becomes slower than their movable speed in the horizontal direction.
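 For illustration only (the per-axis proportional rule and the 2-D representation of the regions are assumptions about one possible implementation), per-direction speed limits scaled by the degree of reduction could be computed as:

    def per_axis_speed_limits(first_region_size, second_region_size, drone_max_speed):
        # first_region_size / second_region_size: (x, y) extents of the two regions.
        # When the virtual space is a reduced copy of the larger second region, a given
        # object speed in the virtual space maps to a higher drone speed in real space,
        # so the object's limit along each axis shrinks by the same reduction factor.
        limits = []
        for first_extent, second_extent in zip(first_region_size, second_region_size):
            reduction = first_extent / second_extent
            limits.append(drone_max_speed * reduction)
        return limits  # the more strongly reduced axis gets the lower limit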
 The avatar control unit 15 also limits the motions of the target objects O1, O2 based on the difference between the length of the first period and the length of the second period.
 For example, the length of the first period may differ from the length of the second period. In such a case, however, the user U may more easily cause the target objects O1, O2 to perform motions corresponding to motions that the drones D cannot execute.
 Specifically, when the first period is longer than the second period, the operating speed of the drone D becomes high relative to the operating speed of the target objects O1, O2. Consequently, if the user U operates the target objects O1, O2 in the same way as when there is no difference between the periods, the upper limit of the operating speed of the drone D may be exceeded.
 Therefore, when the first period is longer than the second period, the avatar control unit 15 uniformly limits the movable speed of the target objects O1, O2 in all directions according to the difference between the periods.
 As a result, the target objects O1, O2 can move only at or below the defined speed in any direction. Specifically, the movable speed of the target objects O1, O2 becomes slower than their normal speed.
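 A corresponding sketch for the period-based limit (the simple proportional rule is again an assumption; the description only states that the limit follows the difference between the periods):

    def period_scaled_speed_limit(first_period_len, second_period_len, drone_max_speed):
        # A motion recorded over the first period is replayed over the second period, so the
        # drone's speed equals the object's speed multiplied by (first / second). Keeping the
        # drone at or below its maximum therefore caps the object speed at:
        return drone_max_speed * second_period_len / first_period_len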
 Note that the system of the present invention is not limited to such a configuration. For example, when the shape of the first region and the shape of the second region match, the avatar control unit need not limit the motion of the second avatar based on a difference between the shape of the first region and the shape of the second region. Likewise, for example, when the system is configured to provide only the mode in which the controlled object is controlled in real time, the avatar control unit need not limit the motion of the second avatar based on a difference between the first period and the second period.
 In the mode in which the drone D is controlled in real time, or in the mode in which the drone D is controlled in a different time slot, the motions of the target objects O1, O2 in the virtual space VS are limited, as described above, based on the motions that the drone D can execute in the second region RS2 taking its surrounding environment into account, on the difference between the shape of the first region RS1 and the shape of the second region RS2, and on the difference between the length of the first period and the length of the second period.
 In the mode for verifying motion, on the other hand, the motions are not limited based on the surrounding environment or on the difference between the periods; the motions of the target objects O1, O2 in the virtual space VS are limited based only on the difference in shape and on the functions of the drone D.
 The output information determination unit 16 (image determination unit) determines the image and sound of the virtual space VS to be presented to the user U via the HMD 4.
 The target control unit 17 (motion storage unit) controls the motions of the drones D in the second region RS2 in accordance with the motions of the corresponding target objects O1, O2 in the virtual space VS.
 The target control unit 17 can also store the motions of the target objects O1, O2 executed during the first period. This makes it possible to control the drone D not only in real time but also in the second period, which is a period after the first period (or at least a period whose start time is shifted).
 Furthermore, the target control unit 17 can store the motions of the first object O1 and of the second object O2 individually. This allows a single user U to control the motions of the corresponding first drone D1 and second drone D2 in the second region RS2 in different time slots.
 For example, as shown in FIG. 5, suppose that the user first operates the first object O1 via the avatar A during a first period (the period from t1 to t2). The motion of the first object O1 at that time is stored in the target control unit 17. Then, in a second period after the first period (the period from t4 to t7), the first drone D1 is operated in accordance with the motion of the first object O1 stored in the target control unit 17.
 The lengths of the first period and the second period in this case do not necessarily have to match. Thus, for example, by setting the first period longer than the second period, a motion that the controlled object should execute in a short time (for example, surgery performed by a robot) can be set in advance while being verified over a longer time. Conversely, by setting the second period longer than the first period, a motion that must be performed slowly in view of the conditions of the second region can be set in a short time.
 Also, for example, suppose that the user first operates the second object O2 via the avatar A during a first period (the period from t3 to t6). The motion of the second object O2 at that time is stored in the target control unit 17. Then, in a second period after the first period (the period from t5 to t8), the second drone D2 is operated in accordance with the motion of the second object O2 stored in the target control unit 17.
 In this way, by making the first period for the first object O1 (the period from t1 to t2) different from the first period for the second object O2 (the period from t3 to t6), a single user U can control a plurality of controlled objects. Moreover, a first period (for example, the period from t3 to t6) may partially overlap a second period (for example, the period from t5 to t8). This allows the user U to set subsequent motions while taking the situation into account in real time to some extent.
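 As a sketch only (the recording format, the linear time scaling and the name MotionRecorder are assumptions made for illustration), the per-object storage of first-period motions and their later time-scaled playback could be organized as follows:

    class MotionRecorder:
        """Stores each target object's motion during its first period for later playback."""

        def __init__(self):
            self.tracks = {}                         # object id -> list of (time, pose)

        def record(self, obj_id, t, pose):
            self.tracks.setdefault(obj_id, []).append((t, pose))

        def playback_pose(self, obj_id, first_period, second_period, t_now):
            """Map a time in the second period back to the stored first-period motion."""
            t1_start, t1_end = first_period
            t2_start, t2_end = second_period
            # Linear time scaling: the two periods need not have the same length.
            u = (t_now - t2_start) / (t2_end - t2_start)
            t_lookup = t1_start + u * (t1_end - t1_start)
            track = self.tracks[obj_id]
            # Return the last stored pose at or before the mapped time (no interpolation).
            poses = [p for (t, p) in track if t <= t_lookup]
            return poses[-1] if poses else track[0][1]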
 Note that the system of the present invention is not limited to such a configuration. For example, when the system is configured to provide only the mode in which the controlled object is controlled in real time, the motion storage unit may be omitted.
[Processes executed by each processing unit]
 Next, the processes executed by the VR system S will be described with reference to FIGS. 2 to 8.
 In the following, the processing in the mode in which the drone D is controlled in a time slot different from, and later than, the time slot in which the target objects O1, O2 were operated is described in detail. For the processing in the mode in which the drone D is controlled in real time and the processing in the mode in which the drone D is not actually controlled and its motion is merely verified, only the points of difference from the processing in the mode in which the drone D is controlled in a different time slot are described.
[Processes executed when use is started]
 First, the processes executed by each processing unit of the VR system S when use of the VR system S is started will be described with reference to FIGS. 2, 3, 4 and 6.
 In this processing, first, the virtual space generation unit 10 recognizes the shape of the first region RS1 and the shape of the second region RS2 based on the images captured by the first camera 3 and the second camera 6, models and numerical values input in advance, and the like (FIG. 6 / STEP 100).
 Next, the virtual space generation unit 10 determines whether the recognized shape of the first region RS1 and the recognized shape of the second region RS2 differ (FIG. 6 / STEP 101).
 If the shape of the first region RS1 and the shape of the second region RS2 differ (YES in STEP 101), the virtual space generation unit 10 corrects the image of the second region RS2 based on the shape of the first region RS1 so that it matches the shape of the first region RS1, for example as shown in FIG. 4 (FIG. 6 / STEP 102).
 Next, the virtual space generation unit 10 generates the virtual space VS (strictly speaking, the image serving as the background of the virtual space VS) based on the corrected image of the second region RS2 (FIG. 6 / STEP 103).
 On the other hand, if the shape of the first region RS1 and the shape of the second region RS2 match (NO in STEP 101), the virtual space generation unit 10 does not correct the image of the second region RS2 and generates the virtual space VS (strictly speaking, the image serving as the background of the virtual space VS) based on the uncorrected image (FIG. 6 / STEP 104).
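 An illustrative sketch of STEP 100 to STEP 104 (the regions are reduced here to 2-D extents, and the per-axis scaling rule is an assumption about one possible correction; these are also the same reduction factors that could drive the per-direction speed limits sketched above):

    def correction_factors(first_region_size, second_region_size):
        # Returns None when the shapes already match (STEP 104: no correction needed);
        # otherwise the per-axis factors by which the second-region image is scaled to fit
        # the first-region shape (STEP 102) before the background is generated (STEP 103).
        if first_region_size == second_region_size:
            return None
        return tuple(f / s for f, s in zip(first_region_size, second_region_size))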
 Next, the virtual space generation unit 10 recognizes the initial state (coordinates, posture and so on) of the user U in the first region RS1 and the initial state (coordinates, posture and so on) of the drone D in the second region RS2, based on the images captured by the first camera 3 and the second camera 6, models and numerical values input in advance, and the like (FIG. 6 / STEP 105).
 In the present embodiment, the initial state of the user U is the initial state in the first period, that is, the current state. The initial state of the drone D is the initial state in the second period, that is, the state at the beginning of the second period.
 Next, the virtual space generation unit 10 generates the avatar A in the virtual space VS so as to correspond to the initial state of the user U, and generates the target objects O1, O2 so as to correspond to the initial states of the drones D (FIG. 6 / STEP 106).
 Next, the virtual space generation unit 10 recognizes the surrounding environment of the first region RS1 and of the second region RS2, based on the images captured by the first camera 3 and the second camera 6, models and numerical values input in advance, and the like (FIG. 6 / STEP 107).
 Specifically, the virtual space generation unit 10 recognizes the furniture F existing in the first region RS1 as the surrounding environment of the first region RS1, and recognizes the building material M and the work machine W existing in the second region RS2 as the surrounding environment of the second region RS2.
 Next, the virtual space generation unit 10 generates, in the virtual space VS, the objects and constraint areas corresponding to the recognized surrounding environment (FIG. 6 / STEP 108).
 Specifically, the virtual space generation unit 10 generates the ghost of the furniture F and the first constraint area LA1 at the position in the virtual space VS corresponding to the position where the furniture F exists in the first region RS1. The virtual space generation unit 10 also generates, at the positions in the virtual space VS corresponding to the positions where the building material M and the work machine W exist in the second region RS2, a third object O3 corresponding to the building material M and a fourth object O4 corresponding to the work machine W, together with the second constraint area LA2 and the third constraint area LA3.
 Next, the output information determination unit 16 determines the image and sound to be presented to the user based on the state of the avatar A (FIG. 6 / STEP 109).
 Next, the output information determination unit 16 causes the monitor 40 of the HMD 4 worn by the user U to display the determined image and causes the speaker 41 to generate the determined sound (FIG. 6 / STEP 110), and this processing ends.
 Through the above processing, immediately after use of the VR system S is started, the user U is in a state of recognizing the virtual space VS, for example as shown in FIG. 3.
[Processes executed when recognizing the executable motions of the objects after use is started]
 Next, the processes executed by each processing unit of the VR system S when recognizing the executable motions of the target objects O1, O2 after use of the VR system S is started will be described with reference to FIGS. 2 to 4 and FIG. 7.
 In this processing, first, the possible motion recognition unit 14 recognizes the motions that the drones D corresponding to the target objects O1, O2 can execute as functions in the second region RS2, based on information or the like input in advance by the user U or others (FIG. 7A / STEP 200).
 Next, the period recognition unit 13 recognizes the first period and the second period based on information or the like input in advance by the user U or others (FIG. 7A / STEP 201).
 Next, the possible motion recognition unit 14 recognizes the surrounding environment of the second region RS2 in the second period (FIG. 7A / STEP 202).
 Specifically, the possible motion recognition unit 14 recognizes the surrounding environment that may change in the second region RS2 during the second period. In the present embodiment, the amount of the building material M (that is, the space it occupies) and the operation of the work machine W may change during the second period. Therefore, based on the construction plan or the like input in advance by the user U or others, the possible motion recognition unit 14 recognizes the state of the building material M and the work machine W at each point in the second period as the surrounding environment of the second region RS2.
 Next, the possible motion recognition unit 14 recognizes the motions that the target objects O1, O2 corresponding to the drones D can execute in the virtual space VS, based on the recognized functions of the drones D and on the surrounding environment of the second region RS2 in the second period (FIG. 7A / STEP 203).
 Next, the avatar control unit 15 determines, based on the information recognized by the virtual space generation unit 10, whether the shape of the first region RS1 and the shape of the second region RS2 differ (FIG. 7A / STEP 204).
 If the shape of the first region RS1 and the shape of the second region RS2 differ (YES in STEP 204), the avatar control unit 15 recognizes the content of the correction applied to the image of the second region RS2 when the virtual space VS was generated and, based on that content, recognizes the motions of the target objects O1, O2 that are limited according to position in the virtual space VS (FIG. 7A / STEP 205).
 Specifically, as shown in FIG. 4, suppose for example that the first region RS1 is a narrow space that is square in plan view, that the second region RS2 is a wide, elongated space that is rectangular in plan view, and that the virtual space generation unit 10 has generated the image of the virtual space VS by correcting the image of the second region RS2 so that it is reduced to fit the shape of the first region RS1.
 In that case, the avatar control unit 15 limits the movable speed of the target objects O1, O2 in each direction according to the degree of reduction in that direction. As a result, the movable speed of the target objects O1, O2 in the vertical direction on the drawing sheet of FIG. 4 becomes slower than their movable speed in the horizontal direction.
 After recognizing the limits based on the difference between the shape of the first region RS1 and the shape of the second region RS2, or if the shape of the first region RS1 and the shape of the second region RS2 match (NO in STEP 204), the avatar control unit 15 determines whether the length of the first period and the length of the second period differ (FIG. 7A / STEP 206).
 If the length of the first period and the length of the second period differ (YES in STEP 206), the avatar control unit 15 recognizes, based on the amount of that difference, the speed that serves as the execution limit for each motion of the target objects O1, O2 (FIG. 7A / STEP 207).
 Specifically, the speeds that the drone D can execute as functions in the second period are uniformly scaled up or down based on the ratio between the length of the first period and the length of the second period.
 The processing described up to this point is executed before the user U starts to move the target objects O1, O2 via the avatar A (that is, before the start of the first period). The processing described below is executed after the user U has started to move the target objects O1, O2 via the avatar A (that is, after the start of the first period).
 After recognizing the limits based on the difference between the length of the first period and the length of the second period, or if the length of the first period and the length of the second period match (NO in STEP 206), the virtual space generation unit 10 determines whether the surrounding environment of the drone D in the second region RS2 has changed at the point in the second period corresponding to the present time (that is, to a given point in the first period) (FIG. 7B / STEP 208).
 If the surrounding environment has changed (YES in STEP 208), the virtual space generation unit 10 modifies the second constraint area LA2 or the third constraint area LA3 based on the change in the surrounding environment of the second region RS2 (FIG. 7B / STEP 209).
 Specifically, for example, if the amount of the building material M has decreased from the state shown in FIG. 3, the virtual space generation unit 10 shrinks the second constraint area LA2 corresponding to the building material M in accordance with that decrease. Also, for example, if the posture or the like of the work machine W has changed from the state of FIG. 3 as a result of its operation, the virtual space generation unit 10 deforms the third constraint area LA3 corresponding to the work machine W in accordance with that change.
 Next, the avatar control unit 15 recognizes, based on the modification of the second constraint area LA2 or the third constraint area LA3, the motions of the target objects O1, O2 that become newly limited as a result of that modification (FIG. 7B / STEP 210).
 After recognizing the newly limited motions of the target objects O1, O2, or if the surrounding environment has not changed (NO in STEP 208), the possible motion recognition unit 14 determines whether the state of the drone D has changed (FIG. 7B / STEP 211).
 If the state of the drone D has changed (YES in STEP 211), the avatar control unit 15 recognizes anew the motions that the target objects O1, O2 can execute, based on the change in the state of the drone D (FIG. 7B / STEP 212).
 Specifically, suppose for example that the state of the drone D has changed from simply flying to carrying the building material M. In that case, the movable speed and the movable range of the drone D change with that change in state. Therefore, when such a change occurs, the avatar control unit 15 also changes the executable motions of the target objects O1, O2 corresponding to the drone D in accordance with the change in state.
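 A sketch only of how such a state change might feed back into the limit applied to the target objects (the loaded and unloaded speed values and the state flag are placeholders, not values from the disclosure):

    def drone_speed_limit(carrying_material, unloaded_limit=10.0, loaded_limit=4.0):
        # STEP 211 to STEP 212: when the drone's state changes (e.g. it picks up building
        # material M), the executable motions are re-recognized; here that is reduced to
        # choosing the speed limit that then constrains the corresponding target object.
        return loaded_limit if carrying_material else unloaded_limit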
 After recognizing anew the executable motions of the target objects O1, O2, or if the state of the drone D has not changed (NO in STEP 211), the VR system S determines whether the user U has given an instruction to end (FIG. 7B / STEP 213).
 If there is no instruction to end (NO in STEP 213), the processing returns to STEP 208, and the processes of STEP 208 to STEP 213 are executed again.
 On the other hand, if there is an instruction to end (YES in STEP 213), the VR system S ends this processing.
[Processes executed when the objects are operated and when the drones are operated]
 Next, the processes executed by each processing unit of the VR system S when the user U operates the target objects O1, O2 via the avatar A and when the drones D are operated will be described with reference to FIGS. 2 and 8.
 In this processing, first, the user motion recognition unit 11 determines whether the user U has moved (FIG. 8 / STEP 300).
 If the user has not moved (NO in STEP 300), the determination of STEP 300 is executed again at a predetermined control cycle.
 On the other hand, if the user has moved (YES in STEP 300), the avatar control unit 15 moves the avatar A based on the motion of the user U (FIG. 8 / STEP 301).
 Next, the avatar control unit 15 determines whether the motion of the avatar A is a motion that operates the target objects O1, O2 (FIG. 8 / STEP 302).
 Specifically, the avatar control unit 15 determines, for example, whether the motion of the avatar A is a motion that moves the target objects O1, O2 corresponding to the drones D using a body part such as a hand, or a motion that performs a predetermined operation on the target objects O1, O2 using the fifth object O5 corresponding to the controller 5.
 If the motion of the avatar A is a motion that operates the target objects O1, O2 (YES in STEP 302), the avatar control unit 15 moves the target objects O1, O2 based on the motion of the avatar A (FIG. 8 / STEP 303).
 Next, the target control unit 17 stores the motions of the target objects O1, O2 (FIG. 8 / STEP 304).
 After storing the motions of the target objects O1, O2, or if the motion of the avatar A is not a motion that operates the target objects O1, O2 (NO in STEP 302), the output information determination unit 16 determines the image and sound to be presented to the user U based on the states of the avatar A and of the target objects O1, O2 (FIG. 8 / STEP 305).
 Next, the output information determination unit 16 causes the monitor 40 of the HMD 4 worn by the user U to display the determined image and causes the speaker 41 to generate the determined sound (FIG. 8 / STEP 306).
 Next, the VR system S determines whether the user U has given an instruction to end (FIG. 8 / STEP 307).
 If there is no instruction to end (NO in STEP 307), the processing returns to STEP 300, and the processes of STEP 300 to STEP 307 are executed again.
 On the other hand, if there is an instruction to end (YES in STEP 307), the target control unit 17 determines whether the second period has started (FIG. 8 / STEP 308).
 If the second period has not started (NO in STEP 308), the determination of STEP 308 is executed again at a predetermined control cycle.
 On the other hand, if the second period has started (YES in STEP 308), the target control unit 17 controls the motion of the drone D based on the stored motions of the target objects O1, O2 (FIG. 8 / STEP 309), and this processing ends.
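 As a usage sketch of the MotionRecorder outlined earlier (the drone interface drone.move_to and the time handling are assumptions for illustration), STEP 308 and STEP 309 might be reduced to:

    def run_second_period(recorder, drone, obj_id, first_period, second_period, now):
        # STEP 308: wait until the second period has started.
        if now < second_period[0]:
            return False
        # STEP 309: drive the drone according to the stored first-period object motion.
        pose = recorder.playback_pose(obj_id, first_period, second_period, now)
        drone.move_to(pose)  # assumed command interface of the drone
        return True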
[Difference between the mode of controlling in a different time slot and the mode of controlling in real time]
 The above has described in detail the processing in the mode in which the drone D is controlled in a time slot different from, and later than, the time slot in which the target objects O1, O2 were operated.
 Compared with this mode of controlling the drone D in a different time slot, the mode of controlling the drone D in real time differs in that the process of storing the motions of the target objects O1, O2 (FIG. 8 / STEP 304) and the process of determining whether the second period has started (FIG. 8 / STEP 308) are omitted, and in that the process of controlling the drone D (FIG. 8 / STEP 309) is executed before the process of determining whether to end (FIG. 8 / STEP 307).
 In both the mode of controlling the drone D in a different time slot and the mode of controlling the drone D in real time, the VR system S, as an experience-based interface system, controls the motion of the avatar A corresponding to the user U in accordance with the motion of the user U, and controls the motions of the target objects O1, O2 in accordance with the motion of the avatar A.
 As a result, the user U can operate the drones D corresponding to the target objects O1, O2 in the second region RS2 by moving the target objects O1, O2 in the virtual space VS via the avatar A corresponding to the user.
 However, the motions of the target objects O1, O2 in the virtual space VS are controlled while being limited based on the motions executable by the drone D, which exists in the second region RS2 differing in at least one of time and position from the first region RS1 in which the user U exists. That is, those motions are controlled while being limited based on the constraints of the real space in which the drone D exists (for example, the surrounding environment of the drone D, the functions of the drone D, and so on).
 As a result, even if the user U tries to move the target objects O1, O2 via the avatar A, when that motion corresponds to a motion that the drone D cannot execute (that is, a motion that ignores the constraints of the real space), the user U naturally becomes unable to execute it, even in the virtual space VS.
 Therefore, according to the VR system S, motions of the target objects O1, O2 corresponding to motions that the drone D cannot execute naturally become impossible to execute even in the virtual space VS, so the user U can control the motion of the drone D in the second region RS2 in a manner suited to the second region RS2 without having to consider the constraints of the second region RS2 of the real space in which the drone D exists.
 Also, in the VR system S, when the shape of the first region RS1 and the shape of the second region RS2 differ, the shape of the virtual space VS is generated so as to correspond to the shape of the first region RS1. As a result, even in such a case, the virtual space VS corresponds to the range in which the user U can actually move, so the user U can move the avatar A without misjudging the range in which the user can move (for example, contact with the furniture F existing in the first region RS1 is suppressed).
 Furthermore, in such a case, the VR system S limits the motions of the target objects O1, O2 based not only on the motions that the drone D can execute in the second region RS2 but also on the difference between the shape of the first region RS1 and the shape of the second region RS2.
 As a result, even in such a case, the motions of the target objects O1, O2 corresponding to motions that the drone D has thereby become unable to execute can be limited naturally. In turn, even in such a case, the user U can control the motion of the drone D in the second region RS2 in a manner suited to the second region RS2 without having to consider the difference between the shape of the first region RS1 and the shape of the second region RS2.
 Also, by providing the motion storage unit 17, the VR system S can control the motion of the drone D even when the length of the first period, which is the period during which the avatar A can act in the virtual space VS, differs from the length of the second period, which is the period during which the drones D can operate in the second region RS2.
 Furthermore, in such a case, the VR system S limits the motions of the target objects O1, O2 based not only on the motions that the drone D can execute in the second region RS2 but also on the difference between the length of the first period and the length of the second period.
 As a result, even in such a case, the motions of the target objects O1, O2 corresponding to motions that the drone D has thereby become unable to execute can be limited naturally. In turn, even in such a case, the user U can control the motion of the drone D in the second period in a manner suited to the second region RS2 without having to consider the difference between the length of the first period and the length of the second period.
[Difference between the mode of controlling in a different time slot and the mode of verifying motion]
 Compared with the mode of controlling the drone D in a different time slot described in detail above, the mode in which the drone D is not actually controlled and its motion is merely verified differs in that the processes that take the surrounding environments of the first region RS1 and the second region RS2 and the first and second periods into account (FIG. 6 / STEPs 107 and 108, FIG. 7A / STEPs 201, 202 and 204 to 207, FIG. 7B / STEPs 208 to 213) and the processes that actually control the drone D (FIG. 8 / STEPs 304, 308 and 309) are omitted.
 In the mode for verifying motion, the VR system S, as a motion experience system, controls the motions that the target objects O1, O2 can execute based only on the difference between the shape of the first region RS1 and the shape of the second region RS2 and on the functions of the drone D. That is, the surrounding environment of the second region RS2 in which the drone D exists, the difference between the length of the first period and the length of the second period, and so on, are not taken into consideration.
 As a result, in the mode for verifying motion, the user U can experience the motions that the drone D can execute as functions.
 In the modes other than the mode for verifying motion, on the other hand, the motions that the target objects O1, O2 can execute in the virtual space VS are controlled while being limited also based on the constraints arising from the surrounding environment of the second region RS2, the difference between the length of the first period and the length of the second period, and so on. That is, in the modes other than the mode for verifying motion, the motions that the target objects O1, O2 can execute correspond to the motions that the drones D can actually execute in the second region RS2.
 As a result, in the modes other than the mode for verifying motion, the user U can experience the motions that the drone D can actually execute in the second region RS2. From the standpoint of experiencing the motions that the drone D can actually execute, it is not strictly necessary to actually operate the drone D. Therefore, when the only purpose is to experience the motions, the drone D may or may not actually be operated.
 Therefore, according to the VR system S, by switching between the mode for verifying motion and the modes in which the motion of the drone D is actually controlled, the user U can experience and compare the motions of the target objects O1, O2 (and hence of the drones D). In turn, according to the VR system S, the user U can intuitively verify, under a given environment, the motions that make full use of the functions of the drone D.
 Note that, depending on the user U, the user may already have sufficient knowledge of the motions that the drone D can execute as functions. In such a case, the VR system S can be used as a motion experience system even when it provides only at least one of the mode of controlling the drone D in real time and the mode of controlling the drone D in a time slot different from, and later than, the time slot in which the target objects O1, O2 were operated.
 Furthermore, for example, when a plurality of types of drones exist and the motions that each drone can execute are to be verified in order to select from among them a drone that is easy to operate in the second region RS2, the VR system S may be used to generate corresponding objects in the virtual space VS based only on the functions of each candidate drone, and thereby to verify the motions that each object (and hence each candidate drone) can execute.
 In such a case, it is not necessarily required to actually operate each candidate drone in the second region RS2. This is because, in such a case, if a drone were actually operated in the second region, the drone might come into contact with the surrounding environment of the second region. Therefore, as a rule, it is recommended not to actually operate the drones in such a case, except when there are almost no constraints in the second region.
1... server, 2... marker, 3... first camera, 4... HMD, 5... controller, 6... second camera, 10... virtual space generation unit (space recognition unit, constraint recognition unit), 11... user motion recognition unit, 12... mode switching unit, 13... period recognition unit, 14... possible motion recognition unit (target function recognition unit), 15... avatar control unit, 16... output information determination unit (image determination unit), 17... target control unit (motion storage unit), 40... monitor (image display), 41... speaker (sound generator), A... avatar (first avatar), D1... first drone, D2... second drone, F... furniture, LA1... first constraint area, LA2... second constraint area, LA3... third constraint area, M... building material, O1... first object (second avatar), O2... second object (second avatar), O3... third object, O4... fourth object, O5... fifth object, RS1... first region, RS2... second region, S... VR system, U... user, VS... virtual space, W... work machine.

Claims (4)

  1.  An experience-based interface system by which a user existing in a first region of real space controls, via a virtual space, the motion of a controlled object existing in a second region of real space that differs from the first region in at least one of time and position, the system comprising:
     a virtual space generation unit that generates a virtual space in which a first avatar corresponding to the user and a second avatar corresponding to the controlled object exist;
     a user motion recognition unit that recognizes the motion of the user in the first region;
     an avatar control unit that controls the motion of the first avatar in accordance with the motion of the user and controls the motion of the second avatar in accordance with the motion of the first avatar;
     an image determination unit that determines, based on the state of the first avatar and the state of the second avatar, the image of the virtual space to be presented to the user;
     an image display that causes the user to recognize the determined image of the virtual space;
     a possible motion recognition unit that recognizes the motions that the controlled object can execute in the second region; and
     a target control unit that controls the motion of the controlled object in accordance with the motion of the second avatar,
     wherein the avatar control unit controls the motions executable by the second avatar while limiting them based on the motions that the controlled object can execute in the second region.
  2.  The somatosensory interface system according to claim 1, further comprising:
     a space recognition unit that recognizes the shape of the first region and the shape of the second region,
     wherein the virtual space generation unit generates the virtual space so that the shape of the virtual space corresponds to the shape of the first region, and
     the avatar control unit controls the second avatar while limiting the motions that the second avatar can execute, based on the operations that the controlled object can execute in the second region and on the difference between the shape of the first region and the shape of the second region.
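    As a purely illustrative sketch of the additional restriction in claim 2 (not part of the claim text), the shape difference between the two regions can be reduced, for example, to axis-aligned bounding boxes, and the second avatar kept inside both the virtual space (shaped after the first region) and the second region. The helper below and its names are assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Box:
        width: float   # metres
        depth: float   # metres

    def clamp_to_regions(x: float, y: float, first: Box, second: Box) -> tuple[float, float]:
        """Keep a second-avatar position inside both the virtual space
        (shaped after the first region) and the second region."""
        max_x = min(first.width, second.width)
        max_y = min(first.depth, second.depth)
        return (min(max(x, 0.0), max_x), min(max(y, 0.0), max_y))

    # The user's room is 4 m x 4 m, while the work site is 10 m x 3 m:
    print(clamp_to_regions(3.5, 3.8, Box(4.0, 4.0), Box(10.0, 3.0)))  # (3.5, 3.0)
    ```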
  3.  The somatosensory interface system according to claim 1 or 2, further comprising:
     a period recognition unit that recognizes the length of a first period, which is the period during which the second avatar can operate in the virtual space, and the length of a second period, which is the period during which the controlled object can operate in the second region; and
     a motion storage unit that stores the motion of the second avatar,
     wherein the avatar control unit controls the second avatar while limiting the motions that the second avatar can execute, based on the operations that the controlled object can execute in the second region and on the difference between the length of the first period and the length of the second period, and
     the target control unit controls the controlled object in the second period according to the motion of the second avatar stored in the motion storage unit during the first period.
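    The record-and-replay behaviour in claim 3 can be illustrated with a minimal sketch (not part of the claim text): second-avatar motions recorded during the first period are replayed to the controlled object during the second period, here stretched or compressed by a simple linear time-scaling rule assumed only for illustration.

    ```python
    class MotionStore:
        """Records timestamped second-avatar actions during the first period."""

        def __init__(self):
            self._log: list[tuple[float, str]] = []  # (timestamp in seconds, action)

        def record(self, t: float, action: str) -> None:
            self._log.append((t, action))

        def replay(self, first_period: float, second_period: float):
            # Stretch or compress the recorded timeline to fit the second period.
            scale = second_period / first_period
            for t, action in self._log:
                yield (t * scale, action)

    store = MotionStore()
    store.record(0.0, "lift")
    store.record(5.0, "place")
    # A 10 s rehearsal in the virtual space driving a 30 s real operation:
    print(list(store.replay(first_period=10.0, second_period=30.0)))
    # [(0.0, 'lift'), (15.0, 'place')]
    ```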
  4.  An action somatosensation system by which a user present in a first region of real space experiences, via a virtual space, the operations that a controlled object present in a second region of real space that differs from the first region in at least one of time and position can execute, the system comprising:
     a virtual space generation unit that generates a virtual space in which a first avatar corresponding to the user and a second avatar corresponding to the controlled object exist;
     a user motion recognition unit that recognizes the user's motion in the first region;
     an avatar control unit that controls the motion of the first avatar according to the motion of the user and controls the motion of the second avatar according to the motion of the first avatar;
     a mode switching unit that switches between a first mode and a second mode;
     an image determination unit that determines, based on the state of the first avatar and the state of the second avatar, the image of the virtual space to be recognized by the user;
     an image display that causes the user to recognize the determined image of the virtual space;
     a target function recognition unit that recognizes the operations that the controlled object can execute as its functions; and
     a constraint recognition unit that recognizes the constraints on the controlled object when it operates in the second region,
     wherein, in the first mode, the avatar control unit controls the motions that the second avatar can execute based only on the functions of the controlled object, and, in the second mode, the avatar control unit controls the second avatar while limiting the motions that the second avatar can execute, based on the functions of the controlled object and the constraints in the second region.
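    The two modes of claim 4 can likewise be pictured with a short, non-authoritative sketch (not part of the claim text), assuming for illustration that a constraint in the second region is expressed as a set of temporarily forbidden actions (e.g. a no-fly constraint area). All identifiers are hypothetical.

    ```python
    from enum import Enum

    class Mode(Enum):
        FUNCTION_ONLY = 1            # first mode: functions of the controlled object only
        FUNCTION_AND_CONSTRAINT = 2  # second mode: functions plus constraints in the second region

    def allowed_actions(mode: Mode, functions: set[str], constraints: set[str]) -> set[str]:
        """Return the actions the second avatar may execute in the given mode."""
        if mode is Mode.FUNCTION_ONLY:
            return set(functions)
        return set(functions) - set(constraints)

    functions = {"ascend", "descend", "hover", "move_forward"}
    constraints = {"ascend"}  # e.g. a height limit applies in the second region
    print(allowed_actions(Mode.FUNCTION_ONLY, functions, constraints))
    print(allowed_actions(Mode.FUNCTION_AND_CONSTRAINT, functions, constraints))
    ```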
PCT/JP2020/033478 2020-09-03 2020-09-03 Somatosensory interface system, and action somatosensation system WO2022049707A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2020/033478 WO2022049707A1 (en) 2020-09-03 2020-09-03 Somatosensory interface system, and action somatosensation system
JP2021520433A JP6933849B1 (en) 2020-09-03 2020-09-03 Experience-based interface system and motion experience system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/033478 WO2022049707A1 (en) 2020-09-03 2020-09-03 Somatosensory interface system, and action somatosensation system

Publications (1)

Publication Number Publication Date
WO2022049707A1 true WO2022049707A1 (en) 2022-03-10

Family

ID=77550012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/033478 WO2022049707A1 (en) 2020-09-03 2020-09-03 Somatosensory interface system, and action somatosensation system

Country Status (2)

Country Link
JP (1) JP6933849B1 (en)
WO (1) WO2022049707A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7158781B1 (en) 2021-11-29 2022-10-24 クラスター株式会社 Terminal device, server, virtual reality space providing system, program, and virtual reality space providing method
JP2024004662A (en) * 2022-06-29 2024-01-17 キヤノン株式会社 Control device, system, control method, and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010257081A (en) * 2009-04-22 2010-11-11 Canon Inc Image procession method and image processing system
JP2017199237A (en) * 2016-04-28 2017-11-02 株式会社カプコン Virtual space display system, game system, virtual space display program and game program
WO2018097223A1 (en) * 2016-11-24 2018-05-31 国立大学法人京都大学 Robot control system, machine control system, robot control method, machine control method, and recording medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010257081A (en) * 2009-04-22 2010-11-11 Canon Inc Image procession method and image processing system
JP2017199237A (en) * 2016-04-28 2017-11-02 株式会社カプコン Virtual space display system, game system, virtual space display program and game program
WO2018097223A1 (en) * 2016-11-24 2018-05-31 国立大学法人京都大学 Robot control system, machine control system, robot control method, machine control method, and recording medium

Also Published As

Publication number Publication date
JP6933849B1 (en) 2021-09-08
JPWO2022049707A1 (en) 2022-03-10

Similar Documents

Publication Publication Date Title
WO2022049707A1 (en) Somatosensory interface system, and action somatosensation system
US20170263058A1 (en) Method and system for controlling a head-mounted display system
US11779845B2 (en) Information display method and apparatus in virtual scene, device, and computer-readable storage medium
JP2010253277A (en) Method and system for controlling movements of objects in video game
US11375559B2 (en) Communication connection method, terminal device and wireless communication system
CN106774817A (en) The flexible apparatus of tactile activation
JP2007260157A (en) Game apparatus and control method of game apparatus, and program
JP6734236B2 (en) Program, system, and method for providing game
US11957995B2 (en) Toy system for augmented reality
US20160114243A1 (en) Image processing program, server device, image processing system, and image processing method
JP6936465B1 (en) Virtual space experience system
JP6153985B2 (en) Video game processing program, video game processing system, and video game processing method
EP2135649A1 (en) Game device, control method of game device and information storage medium
US11468650B2 (en) System and method for authoring augmented reality storytelling experiences incorporating interactive physical components
CN112007360A (en) Processing method and device for monitoring functional prop and electronic equipment
US20220417490A1 (en) Information processing system, information processing method, and information processing program
CN114053693B (en) Object control method and device in virtual scene and terminal equipment
WO2017199460A1 (en) Program, computer device, program execution method, and computer system
CN114356097A (en) Method, apparatus, device, medium, and program product for processing vibration feedback of virtual scene
WO2021240601A1 (en) Virtual space body sensation system
CN109176520B (en) Steering engine motion parameter range adjusting method, control terminal, robot and medium
CN113018862A (en) Virtual object control method and device, electronic equipment and storage medium
CN112717403A (en) Virtual object control method and device, electronic equipment and storage medium
WO2023017584A1 (en) Virtual space experience system and complex space experience system
WO2022097271A1 (en) Virtual space experience system

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021520433

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20952444

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20952444

Country of ref document: EP

Kind code of ref document: A1