WO2020218533A1 - Method and device for assigning attribute information to object - Google Patents

Method and device for assigning attribute information to object

Info

Publication number
WO2020218533A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
robot
attribute information
virtual
information
Prior art date
Application number
PCT/JP2020/017745
Other languages
French (fr)
Japanese (ja)
Inventor
ロクラン ウィルソン
パーベル サフキン
Original Assignee
株式会社エスイーフォー
Application filed by 株式会社エスイーフォー
Publication of WO2020218533A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/907Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Definitions

  • The present invention relates to a method and a device for giving attribute information to an object.
  • In Non-Patent Document 1, local features such as depth, color, and curvature obtained from range images of everyday objects are learned, and seven functional attributes of such objects, namely "Grasp", "Cut", "Scoop", "Contain", "Pound", "Support", and "Wrap-Grasp", are proposed.
  • Non-Patent Document 1 states that, in order for a robot to live together with humans, it is necessary not only to know the name of a tool but also to identify each part of the tool and the function of each part.
  • In Non-Patent Document 2, which cites Non-Patent Document 1, the shape of everyday objects such as cups is acquired with a 3D sensor, and a method is proposed in which the "functional attributes" of the object are estimated using not only local shape information but also the shapes of other, continuously connected parts of the object, by integrating the likelihoods of identification results based on local shape features.
  • Non-Patent Document 2 further discloses that an object manipulation task by a robot arm (pouring water into a mug and lifting it) is performed using the proposed functional attribute estimation method.
  • In Patent Document 1, the shape of an object recognized using shape features and the function of the object recognized using functional features are applied to an object concept model, and the object is recognized by statistical processing.
  • All of the above prior documents provide methods that acquire an image of an object and estimate the attributes (concept, shape, and function) of the object from the image.
  • However, the attributes of an object estimated from an image are not always correct. For example, depending on the posture of the object in the acquired image, its attributes may not be estimated accurately.
  • Moreover, new and unique objects have no reference data, making it difficult to estimate their attributes.
  • Furthermore, because these methods require a large calculation cost to recognize each object, a large amount of computer resources and time is needed to recognize every object in an environment where many objects exist.
  • According to one aspect, a method of giving attribute information to an object is provided, the method comprising generating a virtual world that reproduces the environment of the real world, and acquiring, in the virtual world, a model corresponding to the object to which the attribute information is to be given.
  • The model is then given attribute information that includes information about at least one part of the model.
  • According to another aspect, a device that imparts attribute information to an object is provided. The device generates a virtual world that reproduces an environment in the real world and acquires, in the virtual world, a model corresponding to the object to which attribute information is to be imparted.
  • The device is provided with a processor configured to acquire the model and to give the model attribute information including information about at least one part of the model.
  • According to a further aspect, a data structure for storing attribute information of an object is provided, the attribute information including information about at least one part of the object.
  • FIG. (a) shows how an operation instruction is given to a virtual mug, which is a virtual object in the virtual world.
  • FIG. (b) shows how, in accordance with that instruction, the robot hand of a robot in the real world grasps and moves the mug in the real world.
  • FIG. 1 is a block diagram showing an embodiment of a robot control system.
  • FIG. 2 is a diagram showing a schematic configuration of an embodiment of a robot.
  • The robot control system 1 includes a robot 100, a control unit 200 that controls the robot 100, and a control device 300 that controls the control unit 200.
  • The robot 100 disclosed in the present embodiment includes at least two robot arms 120, a robot housing 140 that supports the robot arms 120, an environment sensor 160 that senses the surrounding environment of the robot 100, and a transmission / reception unit 180.
  • Each robot arm 120 in the present embodiment is, for example, a 6-axis articulated arm (hereinafter also referred to as an "arm") and has a robot hand 122 (hereinafter also referred to as a "hand"), which is an end effector, at its tip.
  • The robot arm 120 includes actuators (not shown) each having a servomotor on a rotation axis. Each servomotor is connected to the control unit 200, and its operation is controlled based on a control signal sent from the control unit 200.
  • In the present embodiment, a 6-axis articulated arm is used as the arm 120, but the number of axes (the number of joints) of the arm can be determined appropriately according to the application of the robot 100, the functions required of it, and the like.
  • In the present embodiment, a two-fingered hand 122 is used as the end effector, but the present invention is not limited to this. For example, a robot hand having three or more fingers, a robot hand provided with suction means using magnetic force or negative pressure, a robot hand provided with gripping means that exploits the jamming phenomenon of powder or granular material filled in a rubber membrane, or a robot hand that can repeatedly grip and release the gripping object by any other means can be used. The hands 122a and 122b are preferably configured to be rotatable around their wrist portions.
  • The hand 122 is equipped with a kinetic sensor that detects the amount of displacement of the hand 122 and the force, acceleration, vibration, etc. acting on the hand 122. Further, the hand 122 preferably includes a tactile sensor that detects the gripping force and the tactile sensation of the hand 122.
  • The robot housing 140 may be installed in a state of being fixed on a mounting table (not shown), or may be installed on the mounting table so as to be rotatable via a rotation drive device (not shown).
  • When the robot housing 140 is rotatably installed on the mounting table, the working range of the robot 100 can be expanded not only to the area in front of the robot 100 but also to the area around the robot 100.
  • Depending on the application and usage environment of the robot 100, the robot housing 140 may be mounted on a vehicle, a ship, a submersible, a helicopter, a drone, or another moving body equipped with wheels or endless tracks, or the robot housing 140 may be configured as part of such a moving body.
  • Alternatively, the robot housing 140 may have two or more legs as walking means.
  • When the robot housing 140 has such moving means, the working range of the robot 100 can be made even wider.
  • the robot arm 120 may be directly fixed to a mounting table or the like without the intervention of the robot housing 140.
  • The environment sensor 160 senses the surrounding environment of the robot 100.
  • The surrounding environment includes, for example, electromagnetic waves (including visible light, invisible light, X-rays, gamma rays, etc.), sound, temperature, humidity, wind velocity, atmospheric composition, and the like. Accordingly, the environment sensor 160 may include, but is not limited to, a visual sensor, an X-ray or gamma-ray sensor, an auditory sensor, a temperature sensor, a humidity sensor, a wind velocity sensor, an atmospheric analyzer, and the like.
  • Although the environment sensor 160 is shown as integrated with the robot 100 in the figure, the environment sensor 160 does not have to be integrated with the robot 100.
  • The environment sensor 160 may be installed at a position away from the robot 100, or may be installed on a moving body such as a vehicle or a drone. The environment sensor 160 preferably includes a GPS (Global Positioning System) sensor, an altitude sensor, a gyro sensor, and the like. Further, as position detection means for detecting the position of the robot 100 outdoors or indoors, the environment sensor 160 preferably has, in addition to the above GPS sensor, a configuration for WiFi positioning, beacon positioning, self-contained (dead-reckoning) navigation positioning, geomagnetic positioning, sonic positioning, UWB (Ultra Wideband) positioning, visible light / invisible light positioning, and the like.
  • As the visual sensor, for example, a 2D camera, a depth sensor, a 3D camera, an RGB-D sensor, a 3D-LiDAR sensor, a Kinect™ sensor, or the like can be used.
  • the visual information obtained by the environment sensor 160 is sent to the control unit 200 and processed by the control unit 200.
  • Other environmental information obtained by the environment sensor 160 can also be transmitted to the control unit 200 and used for analysis of the surrounding environment of the robot 100.
  • the transmission / reception unit 180 transmits / receives signals / information to / from the control unit 200.
  • the transmission / reception unit 180 can be connected to the control unit 200 by a wired connection or a wireless connection, and therefore, transmission / reception of those signals / information can be performed by a wired or wireless connection.
  • the communication protocol, frequency, and the like used for transmitting and receiving those signals and information can be appropriately selected according to the application, environment, and the like in which the robot 100 is used.
  • the transmission / reception unit 180 may be connected to a network such as the Internet.
  • Next, the control unit 200 in the robot control system 1 of the present embodiment will be described.
  • The control unit 200 of the system 1 includes a processor 220, a storage unit 240, and a transmission / reception unit 260.
  • The processor 220 mainly controls the drive units and sensors (both not shown) of the robot arm 120 and the housing 140 of the robot 100, controls the environment sensor 160, processes the information transmitted from the environment sensor 160, controls the interaction with the control device 300, and controls the transmission / reception unit 260.
  • the processor 220 is composed of, for example, a central processing unit (CPU), an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a digital signal processor (DSP), or a combination thereof.
  • the processor 220 may be composed of one or more processors.
  • As processing of the information sent from the environment sensor 160, the processor 220 recognizes objects existing in the surrounding environment of the robot 100 based on the visual information obtained by the environment sensor 160.
  • Specifically, the processor 220 of the control unit 200 detects the shape of an object included in the visual information based on the visual information (image information and the depth information in the image) obtained by the environment sensor 160, and, by referring to the attribute information of objects stored in the storage unit 240 of the control unit 200 (data regarding the name of each object, the shape / configuration of each part, their functions, etc.) or the attribute information of objects existing on the network to which the control unit 200 is connected, specifies the object included in the visual information.
  • Identifying an object can be performed, for example, by referring to the feature points of the object shape, by referring to a look-up table associated with the shape data of known objects, or by using an arbitrary machine learning algorithm or AI technique to compare the object with the shape data of known objects.
  • Each object specified as described above is further given annotations indicating the function and configuration of the object itself and, further, the function and configuration of each part of the object. For example, if the objects are a bolt and a nut, the following annotations are added.
  • - Bolt: Consists of a head and a shaft. The head has a hexagonal column shape, and its outer circumference is gripped when attaching and detaching the nut. The shaft has a male screw, and a nut can be fastened to the shaft (attached by rotating clockwise, removed by rotating counterclockwise).
  • - Nut: Has a hexagonal column shape; its outer circumference is gripped when attaching it to or detaching it from the bolt, and a female screw is formed on its inner circumference. The female screw can be fastened to the shaft of the bolt (attached by rotating clockwise, removed by rotating counterclockwise).
  • Annotation of the identified object is performed, for example, by the processor 220 referring to data regarding the function of each part of the object stored in the storage unit 240, or to data regarding the function of each part of the object existing on the network to which the control unit 200 is connected. Alternatively, the processor 220 can recognize the function of each part from the shape of the specified object using an arbitrary machine learning algorithm or AI technique and add the annotations to the object.
  • In this way, the control unit 200 can automatically recognize and annotate the object on the control unit 200 side by using a method like the prior art described above.
  • Alternatively, the object can be recognized and annotated by the user on the control device 300 side, as described later.
  • Information about the object identified in this way and the annotations of its functions is stored in the storage unit 240. This information may also be sent to the control device 300.
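  • As a rough illustration of this lookup-based identification and annotation (a minimal sketch only, not taken from the patent; the catalogue, feature values, and threshold below are hypothetical), an object whose measured shape features are closest to a known entry could inherit that entry's per-part annotations:

```python
import math

# Hypothetical catalogue of known objects: shape features plus per-part annotations.
KNOWN_OBJECTS = {
    "bolt": {
        "features": [0.9, 0.1, 0.4],   # e.g. elongation, symmetry, curvature (made-up values)
        "parts": {
            "head":  "hexagonal column; grip outer circumference when attaching/detaching nut",
            "shaft": "male screw; nut can be fastened (clockwise to attach, counterclockwise to remove)",
        },
    },
    "nut": {
        "features": [0.3, 0.8, 0.5],
        "parts": {
            "outer circumference": "grip when attaching/detaching to/from bolt",
            "inner circumference": "female screw; fastens to bolt shaft",
        },
    },
}

def identify_and_annotate(measured_features, threshold=0.3):
    """Return (name, part annotations) of the closest known object, or None if no match."""
    best_name, best_dist = None, float("inf")
    for name, entry in KNOWN_OBJECTS.items():
        dist = math.dist(measured_features, entry["features"])
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist > threshold:   # unknown or badly scanned object: fall back to manual modes 2) / 3)
        return None
    return best_name, KNOWN_OBJECTS[best_name]["parts"]

print(identify_and_annotate([0.88, 0.12, 0.41]))   # -> ('bolt', {...})
```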
  • The processor 220 stores in the storage unit 240, as data, for example, the control signals of the robot 100 transmitted from the control device 300, the operation commands generated in response to those control signals, the operations of the robot 100 actually executed, and the surrounding environment data collected by the environment sensor 160 after the operations are executed, and executes machine learning using these data to generate learning data, which is also stored in the storage unit 240.
  • With reference to the learning data, the processor 220 can, from the next time onward, decide the operation to be executed by the robot 100 based on a control signal of the robot 100 transmitted from the control device 300 and generate an operation command.
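  • The patent does not specify a learning algorithm; as one hedged sketch of such a local learning loop, the control unit could accumulate control-signal / operation-command / outcome records and prefer commands that have succeeded before (the scoring rule below is an assumption):

```python
from collections import defaultdict

# Accumulated experience: (control_signal, candidate_command) -> list of outcome scores.
experience = defaultdict(list)

def record(control_signal, command, outcome_score):
    """Store the result of executing `command` for `control_signal` (1.0 = success, 0.0 = failure)."""
    experience[(control_signal, command)].append(outcome_score)

def choose_command(control_signal, candidates):
    """Pick the candidate command with the best average past outcome; unseen commands get a neutral prior."""
    def score(cmd):
        history = experience[(control_signal, cmd)]
        return sum(history) / len(history) if history else 0.5
    return max(candidates, key=score)

# Example: after a few trials, the more reliable grasp is preferred.
record("fasten bolt and nut", "grasp_head_then_rotate", 1.0)
record("fasten bolt and nut", "grasp_shaft_then_rotate", 0.0)
print(choose_command("fasten bolt and nut", ["grasp_head_then_rotate", "grasp_shaft_then_rotate"]))
```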
  • In this way, the control unit 200 of the robot 100 in the real world has a machine learning function locally.
  • The storage unit 240 stores a computer program for controlling the robot 100, a computer program for processing the information transmitted from the environment sensor 160, a computer program for interacting with the control device 300 as described in the present embodiment, a computer program for controlling the transmission / reception unit 260, a program for executing machine learning, and the like.
  • In other words, the storage unit 240 stores software or programs that cause a computer to perform the processes described in this embodiment, thereby realizing the functions of the control unit 200.
  • the storage unit 240 stores a computer program that can be executed by the processor 220, including instructions that implement the methods described below with reference to FIG. 4 and the like.
  • The storage unit 240 stores the shape data of known objects as described above, the function data of those objects, and a look-up table associating them. The storage unit 240 also serves to store, at least temporarily, the state of each part of the robot arm 120 of the robot 100 (the servos (not shown), the hand 122, etc.), the information transmitted from the environment sensor 160, the information and control signals sent from the control device 300, and the like. Further, as described above, the storage unit 240 also serves to store the operation instructions of the robot 100 and the operations of the robot 100 executed in response to those instructions, as well as the learning data. The storage unit 240 preferably includes a non-volatile storage medium that retains its stored contents even when the power of the control unit 200 is turned off.
  • For this purpose, the storage unit 240 is provided with non-volatile storage such as a hard disk drive (HDD), a solid-state drive (SSD), optical disc storage such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray disc (BD), non-volatile random access memory (NVRAM), EPROM (Erasable Programmable Read Only Memory), or flash memory.
  • The storage unit 240 may further include volatile storage such as a static random access memory (SRAM), but each computer program described above is stored in a non-volatile (non-temporary) storage medium of the storage unit 240.
  • the transmission / reception unit 260 transmits / receives signals / information to / from the robot 100 and transmits / receives signals / information to / from the control device 300.
  • the control unit 200 can be connected to the robot 100 by a wired connection or a wireless connection, and therefore the signals and information thereof can be transmitted and received by a wired or wireless connection.
  • the communication protocol, frequency, and the like used for transmitting and receiving those signals and information can be appropriately selected according to the application, environment, and the like in which the robot 100 is used.
  • the transmission / reception unit 260 may be connected to a network such as the Internet.
  • the transmission / reception unit 260 transmits / receives signals / information to / from the control device 300.
  • the control unit 200 can be connected to the control device 300 by a wired connection or a wireless connection, and therefore the signals and information thereof can be transmitted and received by a wired or wireless connection.
  • the communication protocol, frequency, and the like used for transmitting and receiving those signals and information can be appropriately selected according to the application, environment, and the like in which the robot 100 is used.
  • Although the control unit 200 is shown as being independent of the robot 100 in FIG. 1, it is not limited to that form.
  • the control unit 200 may be provided in the housing 140 of the robot 100.
  • the number of robots 100 used in this system 1 is not limited to one, and a plurality of robots 100 may be operated independently or in cooperation with each other. In this case, a single control unit 200 may control a plurality of robots 100, or a plurality of control units 200 may cooperate to control a plurality of robots 100.
  • Next, the control device 300 in the robot control system 1 of the present embodiment will be described.
  • the control device 300 of the system 1 includes a processor 320, a storage unit 340, an input device 350, a transmission / reception unit 360, and a display 370.
  • the processor 320 mainly controls the interaction with the control unit 200, the processing based on the input performed by the user via the input device 350, the control of the transmission / reception unit 360, and the display of the display 370.
  • the processor 320 generates a control signal based on the user input input by the input device 350 and transmits it to the control unit 200.
  • The processor 220 of the control unit 200 generates, based on the control signal, one or more operation commands for operating each drive unit (not shown) of the robot arm 120 and the housing 140 of the robot 100 and the environment sensor 160.
  • the processor 320 is composed of, for example, a central processing unit (CPU), an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a digital signal processor (DSP), or a combination thereof.
  • the processor 320 may be composed of one or more processors.
  • the processor 320 of the control device 300 is configured to generate a UI (user interface) screen to be presented to the user and display it on the display 370.
  • the UI screen (not shown) includes, for example, a selection button that hierarchically provides the user with a plurality of options.
  • the processor 320 generates an image or a moving image of a virtual world (simulation space) based on an image or a moving image of the real world of the surrounding environment of the robot 100 taken by the environment sensor 160 of the robot 100, and displays it on the display 370.
  • When the processor 320 generates an image or a moving image of the virtual world based on an image or a moving image of the real world, it builds a correlation between the real world and the virtual world, for example by associating the coordinate system of the real world with the coordinate system of the virtual world. The image or moving image of the real world and the image or moving image of the virtual world (simulation space) may be displayed on the display 370 at the same time. Further, the UI screen may be superimposed on the image or moving image of the surrounding environment of the robot 100 or on the image or moving image of the virtual world.
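  • One simple way to associate the two coordinate systems is a rigid transform applied in both directions; the following sketch is illustrative only (the rotation and offset values are hypothetical calibration parameters, not values from the patent):

```python
import math

# Hypothetical calibration: the virtual world is the real world rotated about Z and shifted.
THETA = math.radians(30.0)          # rotation between the two frames
OFFSET = (1.0, -0.5, 0.0)           # translation of the virtual origin, in metres

def real_to_virtual(p):
    """Map a real-world point (x, y, z) into the virtual (simulation) frame."""
    x, y, z = p
    xv = math.cos(THETA) * x - math.sin(THETA) * y + OFFSET[0]
    yv = math.sin(THETA) * x + math.cos(THETA) * y + OFFSET[1]
    return (xv, yv, z + OFFSET[2])

def virtual_to_real(p):
    """Inverse mapping, so user operations in the virtual world can be reflected back to the real world."""
    x, y, z = p[0] - OFFSET[0], p[1] - OFFSET[1], p[2] - OFFSET[2]
    xr = math.cos(-THETA) * x - math.sin(-THETA) * y
    yr = math.sin(-THETA) * x + math.cos(-THETA) * y
    return (xr, yr, z)

# A point detected by the environment sensor 160 and its virtual counterpart.
print(real_to_virtual((2.0, 0.0, 0.8)))
```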
  • The virtual world (simulation space) image or moving image generated based on the real-world image or moving image of the surrounding environment of the robot 100 also includes the objects existing in the surrounding environment of the robot 100.
  • By building a correlation between the real world and the virtual world when the processor 320 generates the virtual world image or moving image from the real world image or moving image, it becomes possible, as described in detail below, to make changes in the real world based on a user's operations in the virtual world, and to reflect changes in the real world in the virtual world.
  • As described above, the processor 220 of the control unit 200 identifies objects existing in the surrounding environment of the robot 100 based on the visual information obtained by the environment sensor 160 and adds annotations relating to the function of each part of the objects. These processes may also be performed by the processor 320 of the control device 300, either in addition to or instead of the processing by the processor 220 of the control unit 200.
  • There are three possible modes of identifying and annotating an object: 1) obtaining ready-made information about the model and the annotations associated with it, 2) scanning the object and annotating it, and 3) creating a model of the object and annotating it.
  • The mode of 1) "obtaining ready-made information about the model and the annotations associated with it" deals with the case where a model corresponding to the scanned object is available and annotations are already associated with that model.
  • In this mode, the processor 320 of the control device 300 identifies the object included in the visual information and adds the annotations relating to the functions of its parts by referring to the attribute information of objects stored in the storage unit 340 (the name of each object, the shape / configuration of each part, and the like) or the attribute information of objects existing on the network to which the control device 300 is connected.
  • An arbitrary machine learning algorithm or AI technique may be used for the processing of identifying the object and adding the annotations relating to the functions of its parts.
  • The mode of 2) "scanning an object and annotating it" addresses the case where a model corresponding to the scanned object is not available.
  • In this mode, the user uses the input device 350 to combine various primitive shape elements in the UI screen to generate a model corresponding to the scanned object.
  • The primitive shape elements include, for example, elements such as prisms with an arbitrary number of corners, pyramids with an arbitrary number of corners, cylinders, cones, and spheres.
  • the processor 320 may allow the user to draw an arbitrary shape in the UI screen and add it as a primitive shape element.
  • The user selects these various shape elements in the UI screen, changes the dimensions of each part of the selected shape elements as appropriate, and combines those elements according to the image of the scanned object, thereby generating a model corresponding to the scanned object. When generating a model using these elements, dents and holes of the object can also be represented.
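  • A minimal sketch of composing a model from primitive shape elements is shown below (the element names, parameters, and data layout are assumptions, not the patent's format):

```python
from dataclasses import dataclass, field

@dataclass
class Primitive:
    kind: str                 # "hexagonal_column", "cylinder", "sphere", ...
    dimensions: dict          # e.g. {"across_flats": 13.0, "height": 5.5} in millimetres
    position: tuple = (0.0, 0.0, 0.0)
    subtract: bool = False    # True if the primitive represents a dent or hole

@dataclass
class Model:
    name: str
    elements: list = field(default_factory=list)

    def add(self, primitive):
        self.elements.append(primitive)
        return self

# A bolt-like model: a hexagonal head with a cylindrical shaft below it.
bolt_model = (
    Model("scanned object 1")
    .add(Primitive("hexagonal_column", {"across_flats": 13.0, "height": 5.5}))
    .add(Primitive("cylinder", {"diameter": 8.0, "length": 20.0}, position=(0.0, 0.0, -20.0)))
)
print(len(bolt_model.elements), "elements in", bolt_model.name)
```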
  • The user then adds annotations, in the UI screen, to the model generated in this way.
  • For example, if the object is a bolt, the parts of the generated model corresponding to the head and the shaft are annotated as being the head and the shaft, respectively. Further, the head is annotated to the effect that the surrounding hexagonal surfaces can be gripped, and the shaft is annotated to the effect that a male screw is formed on it and a nut can be fastened to it (attached by rotating clockwise, removed by rotating counterclockwise).
  • If the object is a cupboard with a hinged door and a drawer, annotations are given to the effect that the door can rotate around its hinge and that the drawer can be pulled out by grasping its handle.
  • If the object is a faucet, the correlation between the rotation angle of the handle and the amount of water discharged from the outlet of the faucet may be added as an annotation.
  • In other words, the annotations given include not only the functions of the object but also what information originates from which part of the object (in the above example, the fact that the information "water" is produced from the outlet of the faucet is included as an annotation).
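  • As an illustration of an annotation that records not only a function but also what information originates from which part, the following sketch attaches an "emits" field and a correlation to each part; the linear handle-angle-to-flow relation is an assumed example, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class PartAnnotation:
    part: str
    function: str
    emits: Optional[str] = None                              # information produced by this part, e.g. "water"
    correlation: Optional[Callable[[float], float]] = None   # input quantity -> output quantity

faucet_annotations = [
    PartAnnotation(
        part="handle",
        function="can be rotated",
        # Assumed correlation: flow rate (L/min) grows linearly with handle angle up to 90 degrees.
        correlation=lambda angle_deg: max(0.0, min(angle_deg, 90.0)) / 90.0 * 12.0,
    ),
    PartAnnotation(part="outlet", function="discharges water", emits="water"),
]

# Querying the annotation: how much water for a 45-degree handle rotation?
print(faucet_annotations[0].correlation(45.0))   # -> 6.0 L/min under the assumed linear relation
```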
  • The mode of 3) "creating a model of an object and annotating it" addresses the case where the user combines various primitive shape elements in the UI screen to generate a model of an arbitrary object.
  • The user's operations for generating the model and adding annotations in this mode are the same as the operations described for mode 2) above.
  • The processor 320 reproduces a virtual object corresponding to the object specified as described above, for example by computer graphics, and displays it in the virtual world on the display 370.
  • The storage unit 340 stores a program for causing the processor 320 to execute the operations described in the present embodiment, a computer program for interacting with the control unit 200, a computer program for performing processing based on the input performed interactively by the user on the UI screen via the input device 350, a computer program for controlling the transmission / reception unit 360, a computer program for controlling the display 370, and the like.
  • the storage unit 340 stores software or a program that causes the computer to perform an operation described later to generate a function as the control device 300.
  • the storage unit 340 stores a computer program that can be executed by the processor 320, including instructions that implement the methods described below with reference to FIG. 4 and the like.
  • The storage unit 340 can at least temporarily store the images or moving images of the surrounding environment of the robot 100 taken by the environment sensor 160 of the robot 100 and sent to the control device 300 via the control unit 200, as well as the images or moving images of the virtual world (simulation space) generated by the processor 320 based on them.
  • the storage unit 340 of the control device 300 also preferably includes a non-volatile storage medium that retains the storage state even when the power of the control device 300 is turned off.
  • the storage unit 340 may further include volatile storage such as a static random access memory (SRAM), but each computer program described above is a non-volatile (non-temporary) storage medium of the storage unit 340. Is remembered in.
  • The storage unit 340 also functions as a database of the system 1 and, as described in relation to the concept of the present invention, stores the operation data of the robot 100 in the real world operated based on the control signals (including the operation commands generated by the control unit 200) and the surrounding environment data indicating the operation results detected by the environment sensor 160.
  • As the input device 350, for example, a keyboard, a mouse, a joystick, or the like can be used; a device called a tracker, which can track its own position and posture using infrared rays or the like and has a trigger button or the like, can also be used.
  • When the display 370 includes a touch panel type display device, the touch panel can be used as an input device.
  • When the display 370 is a head-mounted display used as a display device for VR (virtual reality), AR (augmented reality), MR (mixed reality), or the like and has a user line-of-sight tracking function, the line-of-sight tracking function can be used as an input device.
  • A device that has a line-of-sight tracking function but is not a display can likewise use its line-of-sight tracking function as an input device.
  • a voice input device can also be used as an input device.
  • The above are merely examples of the input device 350, and the means that can be used as the input device 350 are not limited to these. Further, the above-mentioned means may be arbitrarily combined and used as the input device 350.
  • On the UI screen displayed on the display 370, the user can, for example, select a selection button, input characters, or instruct the environment sensor 160 of the robot 100 to capture an image.
  • the transmission / reception unit 360 transmits / receives signals / information to / from the control unit 200.
  • the control device 300 can be connected to the control unit 200 by a wired connection or a wireless connection, and therefore, transmission and reception of these signals / information can be performed by a wired or wireless connection.
  • the communication protocol, frequency, and the like used for transmitting and receiving the signal and information can be appropriately selected according to the application and environment in which the system 1 is used.
  • the transmission / reception unit 360 may be connected to a network such as the Internet.
  • As the display 370, any form of display device can be used, such as a display monitor, a computer / tablet device (including a device equipped with a touch panel type display), a head-mounted display used for VR (virtual reality), AR (augmented reality), or MR (mixed reality), or a projector.
  • When a head-mounted display is used as the display 370, the head-mounted display can provide images or moving images with parallax between the user's left and right eyes, thereby causing the user to perceive a three-dimensional image or moving image. Further, when the head-mounted display has a motion tracking function, it is possible to display images or moving images according to the position and orientation of the head of the user wearing the head-mounted display. Furthermore, when the head-mounted display has a user line-of-sight tracking function as described above, the line-of-sight tracking function can be used as an input device.
  • In the present embodiment, the processor 320 of the control device 300 generates an image or a moving image of the virtual world (simulation space) based on a real-world image or moving image of the surrounding environment of the robot 100 taken by the environment sensor 160 of the robot 100; a head-mounted display used as a display device for VR (virtual reality), AR (augmented reality), MR (mixed reality), or the like is used as the display 370, and a tracker using infrared rays or the like is used as the input device 350.
  • FIG. 3 is a flowchart illustrating the control operation of the robot according to the present embodiment.
  • the ambient environment information of the robot 100 in the real world obtained by the environment sensor 160 of the robot 100 is transmitted to the control device 300 via the control unit 200.
  • the visual information may be a single still image, a plurality of images, or a moving image, and preferably includes depth information.
  • the control device 300 may store the transmitted visual information in the storage unit 340.
  • The processor 320 of the control device 300 generates a virtual world (simulation space) that reproduces the surrounding environment of the robot 100 based on the transmitted visual information and displays it on the display 370.
  • In the virtual world, in addition to the scenery around the robot 100 in the real world, at least the real-world objects existing in the area accessible to the robot 100 are displayed.
  • Each object may be represented by a two-dimensional or three-dimensional image of the real-world object obtained by the visual sensor, by a depth map, by a point cloud, or the like, or it may be represented by computer graphics. In this example, a bolt and a nut are displayed as objects.
  • In step S315 of FIG. 3, the objects are specified and annotations relating to the functions of each part of the objects are added.
  • These processes can be performed in the three modes described above: 1) obtaining ready-made information about the model and the annotations associated with it, 2) scanning the object and annotating it, and 3) creating a model of the object and annotating it.
  • In the mode of 1), the processor 320 of the control device 300 detects, based on the visual information sent from the control unit 200 as described above, the shape of the object contained in that visual information, refers to the shape data of objects stored in the storage unit 340 of the control device 300 or the shape data of objects existing on the network to which the control device 300 is connected, and thereby specifies the object included in the visual information.
  • Identifying an object can be performed, for example, by referring to the feature points of the object shape, by referring to a look-up table associated with the shape data of known objects, or by using an arbitrary machine learning algorithm or AI technique to compare the object with the shape data of known objects.
  • Alternatively, the object can be specified by the user performing an operation of specifying the object using the input device 350 of the control device 300.
  • In this example, at least a segment, a fixing member for fixing the segment, a bolt, and a nut are specified as objects.
  • Next, the processor 320 annotates the specified object by referring to, for example, data on the functions of objects stored in the storage unit 340 or data on the functions of objects existing on the network to which the control device 300 is connected.
  • Alternatively, the processor 320 can recognize the function of each part from the shape of the specified object using an arbitrary machine learning algorithm or AI technique and add the annotations to the object. For the bolt and the nut in this example, the following annotations are added.
  • - Bolt: Consists of a head and a shaft. The head has a hexagonal column shape, and its outer circumference is gripped when attaching and detaching the nut. The shaft has a male screw, and a nut can be fastened to the shaft (attached by rotating clockwise, removed by rotating counterclockwise).
  • - Nut: Has a hexagonal column shape; its outer circumference is gripped when attaching it to or detaching it from the bolt, and a female screw is formed on its inner circumference. The female screw can be fastened to the shaft of the bolt (attached by rotating clockwise, removed by rotating counterclockwise).
  • The robot 100 is an element of the system 1, and the functions of each part of the robot 100 are known in the system 1. Therefore, at least the hand 122 of the robot 100 is recognized as an object in advance, and the following annotations are stored in the storage unit 240 or the storage unit 340.
  • - Hand: Equipped with a wrist and claws. Objects can be gripped with the claws. The wrist can be rotated.
  • The annotations given to each object are stored in the data structure shown in Table 1.
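  • Table 1 itself is not reproduced in this text; as a hedged sketch of what such a data structure could look like (the field names and nesting are assumptions), the per-object and per-part attribute information might be stored as nested records:

```python
# Hypothetical layout of the attribute-information data structure (Table 1 is not reproduced
# in this text, so field names and nesting are assumptions).
ATTRIBUTE_TABLE = {
    "bolt": {
        "parts": {
            "head":  {"identification": "head",
                      "function": "grip outer circumference when attaching/detaching the nut"},
            "shaft": {"identification": "shaft",
                      "function": "male screw; nut can be fastened "
                                  "(attach by rotating clockwise, remove by rotating counterclockwise)"},
        },
    },
    "nut": {
        "parts": {
            "outer_circumference": {"identification": "outer circumference",
                                    "function": "grip when attaching/detaching to/from the bolt"},
            "inner_circumference": {"identification": "inner circumference",
                                    "function": "female screw; can be fastened to the bolt shaft"},
        },
    },
    "hand": {
        "parts": {
            "claws": {"identification": "claws", "function": "can grip objects"},
            "wrist": {"identification": "wrist", "function": "can be rotated"},
        },
    },
}

def function_of(obj, part):
    """Look up the functional annotation stored for a part of an object."""
    return ATTRIBUTE_TABLE[obj]["parts"][part]["function"]

print(function_of("hand", "claws"))
```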
  • When the processor 320 does not recognize the object or recognizes a wrong object, the object can be identified and annotated in the mode of 2) "scanning the object and annotating it".
  • Cases in which the object is not recognized or a wrong object is recognized include, for example, cases where no data corresponding to the object to be recognized is stored and cases where the image quality of the object to be recognized is low.
  • FIG. 4 is a flowchart showing a method of annotating the scanned object.
  • FIGS. 5 and 6 are diagrams showing the scanned objects and the corresponding models.
  • The processor 320 of the control device 300 displays, on the display 370 of the control device 300, the virtual world (simulation space) that reproduces the surrounding environment of the robot 100 generated in step S310 of FIG. 3.
  • In the virtual world, the objects scanned by the environment sensor 160 of the robot 100 in the real world are displayed (step S405 in FIG. 4).
  • In this example, the scanned objects are a bolt (FIG. 5(a)) and a nut (FIG. 6(a)); however, at this stage it is not yet recognized in the system 1 that these objects are a bolt and a nut.
  • A model corresponding to the scanned object can be generated by the user using the input device 350 of the control device 300 to select primitive shape elements from the in-screen menu displayed on the display 370 and combining the displayed primitive shape elements.
  • The primitive shape elements include, for example, prisms with an arbitrary number of corners, pyramids with an arbitrary number of corners, cylinders, cones, and spheres, and are stored in the storage unit 340 of the control device 300.
  • To generate the bolt model, the user uses the input device 350 of the control device 300 to select a hexagonal column and a cylinder as primitive shape elements from the in-screen menu displayed on the display 370. The processor 320 of the control device 300 then reads those shape element data from the storage unit 340 and displays them on the display 370. The user then manipulates the hexagonal column and the cylinder on the display 370 using the input device 350, appropriately changing their sizes, dimensional ratios, etc., and combines them to create a bolt model 50 as shown in FIG. 5(b).
  • Similarly, a hexagonal column and a cylinder are selected as primitive shape elements and combined in the same manner as described above to create a nut model 60.
  • In this case, the cylinder represents the hole formed in the hexagonal column.
  • Alternatively, a hexagon along the contour of one end face of the hexagonal column portion (head) of the object shown in FIG. 5(a) may be defined using the input device 350, and the hexagon may be moved along the longitudinal direction of the object to the other end face of that portion, so that a hexagonal columnar volume space defined by the region swept by the hexagon is formed.
  • Likewise, a circle along the contour of one end face of the cylindrical portion (screw shaft) of the object shown in FIG. 5(a) may be defined using the input device 350, and the circle may be moved along the longitudinal direction of the object to the other end face of the cylindrical portion, so that a cylindrical volume space defined by the region swept by the circle is formed.
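  • A minimal sketch of forming a volume by sweeping a user-defined contour along the longitudinal direction of the object (a simple extrusion; the contour representation, dimensions, and step count are assumptions):

```python
import math

def regular_polygon(n_sides, radius):
    """2D contour traced on one end face, e.g. a hexagon along the bolt head outline."""
    return [(radius * math.cos(2 * math.pi * k / n_sides),
             radius * math.sin(2 * math.pi * k / n_sides)) for k in range(n_sides)]

def sweep(contour, length, steps=10):
    """Move the contour along the longitudinal (z) axis; the swept region defines a columnar volume."""
    return [[(x, y, length * s / steps) for (x, y) in contour] for s in range(steps + 1)]

# Hexagonal head: sweep a hexagon over the head height; shaft: sweep a circle over the shaft length.
head_volume = sweep(regular_polygon(6, radius=6.5), length=5.5)
shaft_volume = sweep(regular_polygon(60, radius=4.0), length=20.0)   # many-sided polygon ~ circle
print(len(head_volume), "cross-sections in the swept head volume")
```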
  • In this way, the model 50 shown in FIG. 5(b) can be generated in the virtual world so as to be superimposed on the object shown in FIG. 5(a).
  • the model 60 shown in FIG. 6B can also be generated by the same procedure.
  • When the three-dimensional shape of the object can be recognized by the control device 300, that three-dimensional shape may be used as the model instead of generating a model from primitive shape elements.
  • Next, the user gives a name to the model displayed on the display 370 as identification information of the object, using the input device 350 of the control device 300 (step S415 in FIG. 4).
  • In this example, the model 50 shown in FIG. 5(b) is given "bolt" as the name of the corresponding object, and the model 60 shown in FIG. 6(b) is given "nut" as the name of the corresponding object.
  • If more detailed information about the object is available, that information can also be added (for example, for the bolt, "M8 × 20" indicating the diameter and length of the screw).
  • The user can also give the name of the object to the model by using the input device 350 to refer, on the screen displayed on the display 370, to the attribute information of various objects stored in the storage unit 340 of the control device 300 (data on the object names, the shape / configuration of each part, and their functions) or the attribute information of various objects existing on the network to which the control device 300 is connected, and selecting the appropriate name.
  • Alternatively, the user can input attribute information including the name of the object using the input device 350, store the information in the storage unit 340, and then select the name and give it to the model.
  • Next, attribute information is added to each part of the model created as described above (step S420 in FIG. 4).
  • For the bolt, the user uses the input device 350 of the control device 300 to select the hexagonal column portion 50a of the model 50 displayed on the display 370 and adds the annotation "head", meaning that it is the head of the bolt, as identification information of that part.
  • Further, the six surfaces around the hexagonal column portion 50a of the model 50 are designated or selected using the input device 350, and the annotation "the outer circumference is gripped when attaching / detaching the nut" is added as functional information of that part.
  • Similarly, the cylindrical portion 50b of the model 50 displayed on the display 370 is selected using the input device 350, and the annotation "shaft", meaning that it is the shaft of the bolt, is added as identification information of that part. Furthermore, the outer peripheral surface of the cylindrical portion 50b of the model 50 is designated or selected using the input device 350, and the annotation "male screw; a nut can be fastened (attached by rotating clockwise, removed by rotating counterclockwise)" is added as functional information of that part.
  • An annotation prohibiting gripping of the shaft may also be added as a function of the shaft. This makes it possible to prevent the robot's hand from gripping parts that are not desirable to grip, such as fragile parts.
  • Similarly, for the nut, the outer peripheral portion 60a of the model 60 is selected, and the annotation "outer circumference", meaning that it is the outer circumference of the nut, is added as identification information of that part.
  • Further, the outer peripheral portion 60a of the model 60 is annotated with "grip the outer circumference when attaching / detaching to / from the bolt" as functional information of that part.
  • The annotation "grip the outer circumference when attaching / detaching to / from the bolt" also means that any part of the nut may be grasped except when attaching it to or detaching it from the bolt (for example, when simply moving the nut).
  • Next, the inner peripheral portion 60b of the model 60 is selected, and the annotation "inner circumference", meaning that it is the inner circumference of the nut, is added as identification information of that part.
  • Further, the inner peripheral portion 60b of the model 60 is annotated with "female screw; can be fastened to the shaft of the bolt (attached by rotating clockwise, removed by rotating counterclockwise)" as functional information of that part.
  • The user can also add functional information to the model by using the input device 350 to refer, on the screen displayed on the display 370, to the attribute information of various objects stored in the storage unit 340 of the control device 300 (data on the object names, the shape / configuration of each part, and their functions) or the attribute information of various objects existing on the network to which the control device 300 is connected, and selecting the information to be given to the model.
  • For example, when the functional attributes of the bolt are stored in the storage unit 340, the attribute information (the name of each part and its function) of the head and the shaft of the bolt can be selected and given to the parts 50a and 50b, respectively.
  • Alternatively, the user can input attribute information including the functions of the object using the input device 350, store it in the storage unit 340, and then select that information and give it to the model.
  • In this way, the control device 300 also functions as a device for imparting attribute information to an object, and the control device 300 can be used to impart attribute information to the object.
  • the attribute information data assigned to each object is stored in the data structure shown in Table 1.
  • Next, the user uses the input device 350 to select the model 50 shown in FIG. 5(b) on the screen displayed on the display 370, moves the model 50 within the screen, and superimposes it on the object shown in the same screen (FIG. 5(a)) so that their positions and postures match (step S425 in FIG. 4). If the model 50 has been generated in advance in the virtual world so as to be superimposed on the object, this superimposing operation is omitted.
  • The processor 320 calculates the positional relationship between the contour of the three-dimensional shape of the object and the contour of the three-dimensional shape of the model 50, determines whether the model 50 is superimposed on the object based on a predetermined criterion, and, when it determines that they are superimposed, associates the object and the model with each other.
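  • The patent leaves the criterion as "a predetermined criterion"; one possible sketch compares the 3D bounding volumes of the object and the model and associates them when the overlap ratio exceeds an assumed threshold:

```python
def overlap_ratio(box_a, box_b):
    """Intersection-over-union of two axis-aligned 3D boxes, each given as (min_xyz, max_xyz)."""
    (ax0, ay0, az0), (ax1, ay1, az1) = box_a
    (bx0, by0, bz0), (bx1, by1, bz1) = box_b
    ix = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    iy = max(0.0, min(ay1, by1) - max(ay0, by0))
    iz = max(0.0, min(az1, bz1) - max(az0, bz0))
    inter = ix * iy * iz
    vol_a = (ax1 - ax0) * (ay1 - ay0) * (az1 - az0)
    vol_b = (bx1 - bx0) * (by1 - by0) * (bz1 - bz0)
    return inter / (vol_a + vol_b - inter) if inter > 0 else 0.0

def is_superimposed(object_box, model_box, threshold=0.8):
    """Associate the model with the object when their volumes overlap closely enough."""
    return overlap_ratio(object_box, model_box) >= threshold

# Model 50 moved almost exactly onto the scanned bolt: association succeeds.
scanned = ((0.0, 0.0, 0.0), (1.0, 1.0, 3.0))
model = ((0.02, 0.01, 0.0), (1.02, 1.01, 3.0))
print(is_superimposed(scanned, model))
```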
  • As a result, the processor 320 of the control device 300 recognizes that the object (FIG. 5(a)) is the object corresponding to the annotated model 50.
  • The processor 320 then reproduces a virtual object corresponding to the object specified in this way, for example by computer graphics, and displays it on the display 370 so as to be superimposed on the object.
  • The virtual object may be reproduced using, for example, an image of the model (computer graphics).
  • the user can use the input device 350 to perform operations such as moving the virtual object in the virtual world displayed on the display 370.
  • When the processor 220 of the control unit 200 detects the shape of the object included in the visual information from the environment sensor 160, identifies the object included in the visual information, and adds the annotations relating to the function of each part of the object (that is, by referring to the attribute information of objects stored in the storage unit 240 of the control unit 200 or the attribute information of objects existing on the network to which the control unit 200 is connected), the processor 320 of the control device 300 does not perform the above-described processing but obtains the information about the specified object from the control unit 200 and displays the corresponding virtual object in the virtual world.
  • In either case, the object in the virtual space displayed on the display 370 of the control device 300 is not a mere object occupying a certain volume of space; based on the given attribute information, the system 1 recognizes that it represents the specified object.
  • the virtual object can be moved by the user using the input device 350 in the virtual world displayed on the display 370.
  • For example, when a tracker is used as the input device 350, the virtual object can be moved freely in the virtual world in accordance with the movement of the tracker while its trigger button is pressed. Then, by releasing the trigger button after moving the virtual object to a desired position / posture in the virtual world, the movement operation of the virtual object can be completed.
  • the user can also operate two virtual objects at the same time in the virtual world by operating the two trackers with both hands at the same time.
  • It is also possible to treat the hand 122 of the robot 100 as an object, display it as a virtual hand, and operate the virtual hand with the tracker in order to move a virtual object.
  • In this case, the virtual hand is moved with the tracker, and the virtual object to be moved is moved while being grasped by the virtual hand.
  • That is, the virtual object can be moved while it is grasped by the virtual hand.
  • By releasing the trigger button after moving the virtual object to a desired position / posture in the virtual world, the movement operation of the virtual hand and the virtual object can be completed.
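  • A minimal sketch of this trigger-and-move interaction is shown below (the event names and the simple "follow the tracker pose while the trigger is held" rule are assumptions about one possible implementation, not the patent's specification):

```python
class VirtualObject:
    def __init__(self, name, pose):
        self.name, self.pose = name, pose     # pose = (x, y, z) for brevity; orientation omitted

class VirtualHand:
    def __init__(self, name, pose):
        self.name, self.pose = name, pose
        self.grasped = None                   # virtual object currently held by the claws

def process_tracker_event(hand, event, scene):
    """Update the virtual hand (and any grasped object) from one tracker event."""
    if event["type"] == "trigger_pressed":
        # Grasp the nearest virtual object that the claws are superimposed on (simplified to "same pose").
        hand.grasped = next((o for o in scene if o.pose == hand.pose), None)
    elif event["type"] == "move":
        hand.pose = event["pose"]
        if hand.grasped:                      # the grasped object follows the hand
            hand.grasped.pose = event["pose"]
    elif event["type"] == "trigger_released":
        hand.grasped = None                   # movement operation is completed

# Move the virtual nut 71 with virtual hand 122a_vr.
nut_71 = VirtualObject("virtual nut 71", (0.3, 0.2, 0.0))
hand_a = VirtualHand("122a_vr", (0.3, 0.2, 0.0))
scene = [nut_71]
for ev in [{"type": "trigger_pressed"},
           {"type": "move", "pose": (0.5, 0.2, 0.1)},
           {"type": "trigger_released"}]:
    process_tracker_event(hand_a, ev, scene)
print(nut_71.pose)   # -> (0.5, 0.2, 0.1)
```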
  • The operation of the virtual objects and the virtual hands in the virtual world using the input device 350 as described above, and their display on the display 370, are controlled by the processor 320.
  • Next, in the virtual world, the user uses the input device 350 to operate the virtual objects displayed in the virtual world on the display 370, thereby inputting the operations that the robot 100 is to execute as a task.
  • In this example, in order to cause the robot 100 to execute the task of fastening the bolt and the nut to each other, the following operations are input.
  • FIG. 7 is a diagram schematically showing the transition of the user input operation.
  • In the virtual world displayed on the display 370, a virtual bolt and a virtual nut corresponding to the bolt and the nut, which are objects placed in the real world where the robot 100 exists, are displayed in the same arrangement as the bolt and the nut in the real world.
  • As shown in FIG. 7(a), in the virtual world displayed on the display 370, there are a virtual bolt 70 and a virtual nut 71 placed on a table, and two virtual hands 122a_vr and 122b_vr.
  • First, the virtual hand 122b_vr corresponding to one hand 122b of the robot 100 displayed in the virtual world on the display 370 is moved using the tracker of the input device 350, and the claw portion of the virtual hand 122b_vr is superimposed on the virtual bolt 70 in the virtual world.
  • Here, the head of the bolt carries an annotation indicating that "the outer circumference can be gripped when attaching / detaching the nut (in other cases, any part can be gripped)", and the claws of the hand carry an annotation indicating that "objects can be gripped".
  • The processor 320 of the control device 300 derives the meaning "hold the bolt with the claws of the hand" based on the relationship between these annotations of the interacting parts of the objects. Therefore, from the input operation of superimposing the claw portion of the virtual hand 122b_vr on the head of the virtual bolt 70, the processor 320 of the control device 300 specifies the meaning "hold the bolt 70 with the claw portion of the hand 122b".
  • Next, as a virtual operation in the virtual world, the virtual hand 122a_vr representing the other hand 122a of the robot 100 is moved using the tracker so that the virtual hand 122a_vr grips the virtual nut 71; that is, the claw portion of the virtual hand 122a_vr is superimposed on the virtual nut 71.
  • The outer circumference of the nut carries an annotation indicating "grip the outer circumference when attaching / detaching to / from the bolt (in other cases, any part can be gripped)", and the claws of the hand carry an annotation indicating that "objects can be gripped".
  • The processor 320 of the control device 300 derives the meaning "hold the nut with the claws of the hand" based on the relationship between these annotations of the interacting parts of the objects. Therefore, from the input operation of superimposing the claw portion of the virtual hand 122a_vr on the virtual nut 71, the processor 320 of the control device 300 specifies the meaning "hold the nut with the claw portion of the hand 122a".
  • Next, the virtual hand 122b_vr is moved while holding the virtual bolt 70, and the virtual hand 122a_vr is moved while holding the virtual nut 71, and, as an operation to fasten the bolt and the nut, an operation of fitting the virtual nut 71 onto the shaft of the virtual bolt 70 is performed.
  • The shaft of the bolt carries an annotation indicating that "a nut can be fastened (attached by rotating clockwise, removed by rotating counterclockwise)", and the inner circumference of the nut carries an annotation indicating that it "can be fastened to the shaft of the bolt (attached by rotating clockwise, removed by rotating counterclockwise)".
  • The processor 320 of the control device 300 derives that "the nut can be attached to the shaft of the bolt" based on the relationship between these annotations of the interacting parts of the objects. Therefore, from the input operation of fitting the virtual nut 71 onto the shaft of the virtual bolt 70, the processor 320 of the control device 300 specifies the meaning "rotate and fasten the bolt and the nut to each other".
  • In this way, the processor 320 of the control device 300 identifies, for each input operation, the part of the object that causes an action, and specifies the meaning of the input operation based on the attribute information about the function given to that part.
  • That is, the attribute information given to each object in a data structure such as that shown in Table 1 is associated with the operations on the virtual objects in the virtual world and is used to identify the actions to be performed by the robot 100 on the objects in the real world.
  • FIG. 8 is a flowchart showing a method of specifying an action performed by a robot on an object in the real world by using attribute information given to the object.
  • When the user operates a virtual object in the virtual world displayed on the display 370 of the control device 300 using the input device 350 of the control device 300, the operation to be executed by the robot 100 is input to the control device 300 (step S805 in FIG. 8).
  • Next, the processor 320 of the control device 300 identifies the part of the object that causes the action based on the input operation (step S810 in FIG. 8). For example, when an operation of superimposing virtual objects is performed in the virtual world, the superimposed portions of those virtual objects are identified as the parts that cause the action.
  • Next, the processor 320 of the control device 300 refers to the functional attributes given to each of those parts in the data structure and identifies the meaning of the input operation based on the relationship between those functional attributes (step S815 in FIG. 8).
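  • A hedged sketch of steps S810 to S815 follows; the patent describes the derivation only at the level of "the relationship of functional attributes", so the rule table and keyword matching below are assumptions used purely for illustration:

```python
# Functional attributes of the interacting parts (as annotated in the virtual world).
PART_FUNCTIONS = {
    ("hand", "claws"): "can grip objects",
    ("bolt", "head"): "outer circumference can be gripped when attaching/detaching the nut",
    ("bolt", "shaft"): "male screw; nut can be fastened",
    ("nut", "inner circumference"): "female screw; can be fastened to the bolt shaft",
}

def derive_meaning(part_a, part_b):
    """Derive the meaning of superimposing two parts from the relationship of their annotations."""
    fa, fb = PART_FUNCTIONS[part_a], PART_FUNCTIONS[part_b]
    if "grip" in fa and "gripped" in fb:
        return f"hold the {part_b[0]} with the {part_a[1]} of the {part_a[0]}"
    if "can be fastened" in fa and "can be fastened" in fb:
        return f"rotate and fasten the {part_a[0]} and the {part_b[0]} to each other"
    return "no action derived"

# Step S810: the superimposed parts are identified; step S815: their relationship gives the meaning.
print(derive_meaning(("hand", "claws"), ("bolt", "head")))
print(derive_meaning(("nut", "inner circumference"), ("bolt", "shaft")))
```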
  • When the input of the operations for the task of fastening the bolt and the nut, shown in step S320 of FIG. 3, is completed, the input series of operations is stored in the storage unit 340 of the control device 300.
  • In step S325 of FIG. 3, the processor 320 generates control signals to be transmitted to the control unit 200 based on the meanings of the operations extracted as described above, and the control device 300 transmits them to the control unit 200. In this way, the control device 300 generates the control signals for controlling the robot 100 by using the data structure in which the attribute information of the objects is stored.
  • The control unit 200 that has received the control signals performs motion planning for the robot 100 based on the received control signals and the surrounding environment information detected by the environment sensor 160 of the robot 100, generates operation commands to be executed by the robot 100, and causes the robot 100 to execute the task based on those operation commands (step S330 in FIG. 3). As a result, in the real world, the robot 100 executes the task of grasping the bolt and the nut with the hands 122a and 122b and fastening them together.
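  • As a rough sketch of how the derived meanings could flow from the control device 300 to the control unit 200 and on to the robot (the message format and the primitive command names are assumptions, not the patent's interfaces):

```python
# Control signals generated by the control device 300 from the derived meanings.
control_signals = [
    {"action": "grasp", "target": "bolt", "part": "head", "hand": "122b"},
    {"action": "grasp", "target": "nut", "part": "outer circumference", "hand": "122a"},
    {"action": "fasten", "target": "nut", "onto": "bolt shaft", "hand": "122a"},
]

def plan_operation_commands(signal, environment):
    """Control unit 200 side: expand one control signal into primitive operation commands,
    using the object poses detected by the environment sensor 160."""
    pose = environment[signal["target"]]
    if signal["action"] == "grasp":
        return [f"move_hand({signal['hand']}, {pose})", f"close_claws({signal['hand']})"]
    if signal["action"] == "fasten":
        return [f"align({signal['target']}, {signal['onto']})",
                f"rotate_wrist({signal['hand']}, clockwise)"]
    return []

environment = {"bolt": (0.40, 0.10, 0.05), "nut": (0.45, 0.15, 0.05)}
for sig in control_signals:
    print(plan_operation_commands(sig, environment))
```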
  • In order to detect the execution result, the environment sensor 160 of the robot 100 senses the surrounding environment while the robot 100 is executing the task or after the task has been executed, and the surrounding environment information is transmitted to the control unit 200.
  • The surrounding environment information is stored in the storage unit 240. The surrounding environment information may also be transmitted to the control device 300 and stored in the storage unit 340. Likewise, the received control signal and the generated operation commands can be stored in the storage unit 240, and they may be transmitted to the control device 300 and stored in the storage unit 340.
  • The processor 220 can use an arbitrary machine learning algorithm or AI technique to compare the control signal received from the control device 300 and the operation commands generated by the processor 220 with the surrounding environment information obtained after the task is executed, learn from the comparison, and thereby improve the quality of the operation commands it generates.
  • By accumulating the generated operation commands, their execution, and the resulting outcomes, and learning from them, the processor 220 can, for example, select operations that are less likely to fail and operations that involve less movement.
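  • A minimal sketch of this idea, assuming that each executed operation command is recorded together with its observed outcome, is given below; the selection rule (lowest observed failure rate) is an illustrative assumption rather than the learning algorithm actually used by the processor 220.

    from collections import defaultdict

    history = defaultdict(lambda: {"success": 0, "failure": 0})

    def record_result(command_signature: str, succeeded: bool) -> None:
        history[command_signature]["success" if succeeded else "failure"] += 1

    def choose_command(candidates):
        def failure_rate(sig):
            h = history[sig]
            total = h["success"] + h["failure"]
            return h["failure"] / total if total else 0.5   # neutral prior for untried commands
        return min(candidates, key=failure_rate)

    record_result("grasp_nut_outer_then_rotate_cw", True)
    record_result("grasp_nut_flat_then_rotate_cw", False)
    print(choose_command(["grasp_nut_outer_then_rotate_cw", "grasp_nut_flat_then_rotate_cw"]))
    # -> "grasp_nut_outer_then_rotate_cw"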
  • The control unit 200 and the control device 300 may exchange and share their respective learning result data with each other.
  • As described above, attribute information including the functions and the like of each part of an object in the real world is assigned to the object, and the attribute information is stored in a data structure such as that shown in Table 1.
  • The attribute information assigned to each object is associated with the operations performed on the corresponding virtual object in the virtual world, and is used to identify the actions that the robot 100 performs on the object in the real world. Because the user can give instructions intuitively by operating the virtual world, attribute information (annotations) can easily be added to objects.
  • The attribute information may include identification information of the object represented by the model (the object name, etc.), identification information indicating at least one part of the model (the name of each part of the object, etc.), and information about at least one part of the model (the shape, function, etc. of each part of the object). However, the attribute information assigned to an object need not include all of these; it may include only some of them. For example, an object may be given only shape information about at least one part of the model as attribute information.
  • In the above description, the robot 100 having an arm with a hand is illustrated as the form of the robot, but the form of the robot controlled by the present invention is not limited to this; for example, the robot may take the form of a vehicle, a ship, a submersible, a drone, a construction machine (excavator, bulldozer, crane, etc.), or the like.
  • Environments and applications in which robots operated using the system of this embodiment can be used include space development, mining, resource extraction, agriculture, forestry, and water-related fields.
  • The objects operated by the robot of the present embodiment vary depending on the environment and application in which the robot is used. For example, when an excavator is used as the robot, excavated soil, sand, and the like are also objects.
  • FIG. 9A shows a model corresponding to a faucet object.
  • This model can be generated by steps S405 to S410 of the method shown in FIG. 4.
  • a "faucet" is added to this model as identification information of an object, and a handle 90 and a discharge port 91 are added as identification information of each part thereof (step S415 in FIG. 4).
  • the functional information thereof is added to each part of the model (step S420 in FIG. 4).
  • the following functions are given as functional information of each part of the model.
  • -Handle 90 Rotatable. Rotation angle is 0 ° to 180 °
  • Water spout 91 Discharges water. Discharge amount is proportional to the rotation angle of the handle
  • By superimposing and associating the model to which the attribute information has been assigned with the object displayed in the virtual world in this way, the attribute information possessed by the model can be given to the object. Thereby, for example, it is possible to instruct the robot 100 to rotate the handle of the faucet in the real world to the rotation angle corresponding to a desired amount of water.
  • In addition, a function indicating the relationship between the rotation angle and the torque required to rotate the handle from the closed state to the open state, and vice versa, may be added as an annotation, and this function can later be modified as appropriate for optimization. Similarly, the proportional function between the rotation angle of the handle and the discharge amount may be added as an annotation, and the proportional function, once specified, can be changed later as appropriate in order to optimize it.
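  • The sketch below illustrates, with assumed numbers, how such annotations for the faucet of FIG. 9A could be stored and used: a rotation range and a torque model for the handle 90, and a proportionality constant relating the handle angle to the discharge amount that can be re-estimated later; all constants, units, and names are hypothetical.

    faucet = {
        "object_id": "faucet",
        "parts": {
            "handle_90": {
                "functions": ["rotatable"],
                "rotation_range_deg": (0, 180),
                # assumed torque model (N*m) for turning the handle at a given angle
                "torque_at_angle": lambda angle_deg: 0.6 - 0.002 * angle_deg,
            },
            "discharge_port_91": {
                "functions": ["discharges water"],
                "flow_per_deg": 0.05,   # assumed liters/minute per degree of handle rotation
            },
        },
    }

    def handle_angle_for_flow(target_lpm: float) -> float:
        """Invert the proportional annotation: handle angle needed for a desired flow."""
        k = faucet["parts"]["discharge_port_91"]["flow_per_deg"]
        lo, hi = faucet["parts"]["handle_90"]["rotation_range_deg"]
        return max(lo, min(hi, target_lpm / k))

    print(handle_angle_for_flow(3.0))   # -> 60.0 degrees under the assumed constant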
  • FIG. 9B shows a model corresponding to a cupboard object.
  • This model can also be generated by steps S405 to S410 of the method shown in FIG. 4.
  • a "cupboard" is added to this model as object identification information, and a top plate 95, a drawer 96, a right door 97, and a left door 98 are added as identification information for each part (step S415 in FIG. 4). ).
  • each part of the model is given those functions as functional information (step S420 in FIG. 4).
  • the following functions are given as functional information of each part of the model.
- Top plate 95: Objects can be placed on top of it.
- Drawer 96: Has a handle. The handle can be grasped and the drawer pulled out toward you. Objects can be stored inside when it is pulled out.
- Right door 97: Has a handle. Can be opened and closed about the hinge on its right side by grasping the handle. Objects can be stored inside when it is open.
- Left door 98: Has a handle. Can be opened and closed about the hinge on its left side by grasping the handle. Objects can be stored inside when it is open.
  • For example, by inputting in the virtual world an operation that acts on the handle of the closed right door of the virtual cupboard object, the user can instruct the robot 100 to grasp the handle of the right door of the cupboard in the real world and open the right door about its hinge.
  • By assigning attribute information to each part of an object in this way, the system 1 can continue to recognize the virtual object even if individual parts move in the virtual world and the overall shape of the virtual object is deformed. For example, in the cupboard shown in FIG. 9B, the overall shape of the cupboard changes depending on how far the drawer is pulled out and how far the doors are opened, but the system 1 can recognize the virtual object as the cupboard regardless of the change in its overall shape. Table 2 shows a data structure that stores the attribute information assigned to each of the faucet and cupboard virtual objects.
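  • As an illustrative sketch of why such recognition is possible, the fragment below keeps the cupboard's identity and part annotations fixed and updates only an articulation state (drawer extension, door angles); the state fields are assumptions added for this example and are not part of Table 2.

    cupboard = {
        "object_id": "cupboard",
        "parts": {
            "top_plate_95":  {"functions": ["objects can be placed on top"]},
            "drawer_96":     {"functions": ["grasp handle and pull out", "store objects"],
                              "state": {"extension_m": 0.0}},
            "right_door_97": {"functions": ["open/close about right hinge", "store objects"],
                              "state": {"angle_deg": 0.0}},
            "left_door_98":  {"functions": ["open/close about left hinge", "store objects"],
                              "state": {"angle_deg": 0.0}},
        },
    }

    def update_articulation(obj, part, **state):
        obj["parts"][part]["state"].update(state)
        return obj["object_id"]          # the identity is unchanged by the deformation

    print(update_articulation(cupboard, "right_door_97", angle_deg=90.0))   # -> "cupboard"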
  • FIG. 10 is a diagram showing a robot (unmanned submersible) used in this example and a pipe to be worked underwater.
  • The robot 100 used in this example has the form of an unmanned submersible, and includes a robot arm 120 having a robot hand 122 at its tip and a housing 140 on which the robot arm 120 is installed.
  • The robot 100 can move in water in the horizontal direction (X-axis direction), the front-rear direction (Y-axis direction), and the vertical direction (Z-axis direction), and can rotate about each of the X, Y, and Z axes.
  • the housing 140 is provided with an environment sensor and a transmission / reception unit described with reference to FIG. 2 and the like.
  • The housing 140 is provided with at least a visual sensor as an environment sensor, whereby visual information on the surrounding environment of the robot 100 (in particular, the environment in front of the robot 100, including the robot arm 120 and the hand 122) can be acquired. Since the other configurations of the robot (unmanned submersible) 100 used in this example are the same as those described above with reference to FIG. 2, a detailed description thereof is omitted here.
  • The system configuration and control method used in this example are the same as those described above. In this example, the points characteristic of the task of grasping an underwater pipe with the robot hand at the tip of the robot arm of the robot in the form of an unmanned submersible will be described.
  • FIG. 10 shows a virtual world generated based on the environmental information acquired by the environment sensor of the robot (unmanned submersible) 100.
  • At least the shape and function of each part of the robot (unmanned submersible) 100 are modeled and stored in advance in the storage unit 340 of the control device 300, and are therefore known to the system 1. Accordingly, the modeled robot (unmanned submersible) 100 is displayed in the virtual world.
  • the pipe 40 in the virtual world is displayed in a state of being reproduced based on the environmental information acquired by the environment sensor of the robot (unmanned submersible) 100.
  • Since the pipe 40 is captured only from a specific direction by the environment sensor of the unmanned submersible 100, it is reproduced only in the shape that can be recognized from that direction, and the portion on the opposite side is not reproduced. In FIG. 10, the shape of the left-hand portion of the pipe 40 in the drawing is reproduced, while the right-hand portion of the pipe 40 in the drawing is displayed in a missing state.
  • FIG. 11 is a diagram showing how annotations (attribute information) are added to the pipe shown in FIG.
  • FIG. 11 shows a virtual world generated by the control device 300 according to steps S305 and S310 shown in FIG. 3.
  • The user annotates the pipe 40 displayed in the virtual world on the display 370 according to the above-mentioned mode "2) Scanning and annotating an object", and the annotation is stored in the storage unit 340 of the control device 300.
  • First, the scanned object, the pipe 40, is displayed on the display 370 as shown in FIG. 11(a) (step S405 in FIG. 4).
  • Next, the user generates a cylindrical model using the tracker 350 (the corresponding virtual tracker 350_vr is displayed in FIG. 11) in the UI screen displayed on the display 370. By adjusting the diameter and length of this model (FIG. 11(b)) and moving it so as to overlap the pipe 40 in the virtual world displayed on the display 370 (FIG. 11(c)), the model corresponding to the pipe 40 is acquired (step S410 in FIG. 4).
  • a "pipe” is added as the identification information of the object to this model, and an “outer circumference” is added as the identification information of the portion (step S415 in FIG. 4). Further, the functional information (shape / function) is added to the portion (step S420 in FIG. 4). For example, the following functions are given as functional information of the model part. -Outer circumference: Cylindrical shape. Can be gripped with a robot hand. The annotation added to the pipe 40 is stored in the data structure shown in Table 3.
  • By associating the model annotated in this way with the scanned object, the attribute information possessed by the model can be given to the object. The pipe 40 to which the attribute information has been added is then displayed in the virtual world as a virtual pipe represented by the computer graphics of the model. Thereby, for example, it is possible to instruct the operation of grasping the virtual object of the pipe 40 (the virtual pipe 40_vr) with the robot hand 122 of the robot (unmanned submersible) 100.
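  • A hedged sketch of how such an instruction could be converted into a command is shown below: a grasp input on the virtual pipe's annotated "outer circumference" is accepted only because that part is annotated as grippable, and is turned into a high-level grasp command expressed in the real-world frame; the command format and names are invented for illustration.

    pipe = {
        "object_id": "pipe",
        "parts": {"outer_circumference": {"shape": "cylinder",
                                          "functions": ["can be gripped with a robot hand"]}},
    }

    def virtual_grasp_to_command(obj, part_name, grasp_pose_world):
        part = obj["parts"][part_name]
        if "can be gripped with a robot hand" not in part["functions"]:
            raise ValueError(f"{part_name} of {obj['object_id']} is not annotated as graspable")
        return {"command": "grasp",
                "target": f"{obj['object_id']}/{part_name}",
                "pose": grasp_pose_world}    # pose expressed in the real-world coordinate frame

    print(virtual_grasp_to_command(pipe, "outer_circumference", (1.2, 0.0, -3.5, 0.0, 0.0, 0.0)))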
  • FIG. 12 is a diagram showing a mug 80 placed on a table.
  • the mug 80 has a handle 82 and a main body 84.
  • the handle 82 of the mug 80 can be gripped by the hand 122 provided on the arm 120 of the robot 100 described with reference to FIGS. 1 and 2.
  • By gripping the handle 82 with the hand 122 and moving the robot arm 120, the mug 80 can, for example, be moved on the table.
  • FIG. 13 is a diagram showing how annotations (attribute information) are added to the mug 80 on the table.
  • FIG. 13 shows a virtual world generated by the control device 300 according to steps S305 and S310 shown in FIG. 3.
  • The user annotates the mug 80 displayed in the virtual world on the display 370 according to the above-mentioned mode "2) Scanning and annotating an object", and the annotation is stored in the storage unit 340 of the control device 300.
  • First, the scanned object is displayed on the display 370 as shown in FIG. 13(a) (step S405 in FIG. 4). Next, the user generates a cylindrical model using the tracker 350 (the corresponding virtual tracker 350_vr is displayed in FIG. 13) in the UI screen displayed on the display 370. By moving this model so as to be superimposed on the main body 84 of the mug 80 in the virtual world displayed on the display 370 (FIG. 13(a)) and adjusting its diameter and length (FIG. 13(b)), a model corresponding to the main body 84 is acquired (step S410 in FIG. 4). Similarly, the tracker 350 (the virtual tracker 350_vr shown in FIG. 13) is used to generate a rectangular parallelepiped model corresponding to the handle 82 of the mug 80 displayed in the virtual world on the display 370, and a model corresponding to the handle 82 is acquired (step S410 in FIG. 4).
  • a "mug” is added to this model as identification information of an object, and a handle 82 and a main body 84 are added as identification information of each part thereof (step S415 in FIG. 4). Further, the functional information thereof is added to each part of the model (step S420 in FIG. 4). For example, the following functions are given as functional information of each part of the model. -Handle: Rectangular parallelepiped shape. Can be gripped with a robot hand. -Main body: Cylindrical shape. The annotation added to the mug 80 is stored in the data structure shown in Table 4.
  • By associating the model annotated in this way with the scanned object, the attribute information possessed by the model can be given to the object. The mug 80 to which the attribute information has been added is then displayed in the virtual world as a virtual mug represented by the computer graphics of the model.
  • Thereby, an operation of grasping and moving the virtual object of the mug 80 with the robot hand 122 of the robot 100 is instructed in the virtual world (FIG. 14(a)), and, according to that instruction, the mug 80 in the real world can be gripped and moved by the robot hand 122 of the robot 100 in the real world (FIG. 14(b)).
  • Attribute information may be provided only for the handle 82, which is a part of the mug 80.
  • As described above, the attribute information (annotations) given to objects can be stored in data structures such as those shown in Tables 1 to 4.
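  • One possible, non-normative way to persist such annotation data structures is plain JSON serialization, sketched below with the mug of Table 4 as an example; the file name and layout are assumptions.

    import json

    annotations = {
        "mug": {"parts": {"handle_82": {"shape": "rectangular parallelepiped",
                                        "functions": ["can be gripped with a robot hand"]},
                          "body_84":   {"shape": "cylinder", "functions": []}}}
    }

    with open("annotations.json", "w", encoding="utf-8") as f:
        json.dump(annotations, f, indent=2, ensure_ascii=False)

    with open("annotations.json", encoding="utf-8") as f:
        print(json.load(f)["mug"]["parts"]["handle_82"]["functions"])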

Abstract

A method according to one embodiment of the present invention includes: generating a virtual world reproducing an environment in the real world; acquiring a model corresponding to an object in the virtual world to which attribute information is to be assigned (S410); and assigning attribute information including information relating to at least one part of the model to the model (S415, S420).
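Purely as a non-normative illustration of these three steps, the following sketch uses trivial placeholder functions in place of the components described in the embodiments; none of the names correspond to actual elements of the disclosed system.

    def generate_virtual_world(sensor_data):
        return {"objects": sensor_data}                  # reproduce the real-world environment

    def acquire_model(virtual_world, target_name):
        return {"object_id": target_name, "parts": {}}   # S410: model for the object to annotate

    def annotate(model, part_name, info):
        model["parts"][part_name] = info                 # S415/S420: identification and part information
        return model

    world = generate_virtual_world([{"point_cloud": "..."}])
    mug = annotate(acquire_model(world, "mug"), "handle", {"shape": "box", "functions": ["grip"]})
    print(mug)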

Description

オブジェクトに属性情報を付与する方法及び装置Method and device for giving attribute information to an object
 本発明は、オブジェクトに属性情報を付与する方法及び装置に関する。 The present invention relates to a method and a device for giving attribute information to an object.
 ロボットがオブジェクトを使ってタスクを実行するためには、ロボットがそのオブジェクトの形状だけではなく、オブジェクトが備える機能を知得することで、そのオブジェクトをどのように使用するかを理解する必要がある。オブジェクトの形状及び機能を認識する種々の手法が提案されている。 In order for a robot to execute a task using an object, it is necessary to understand how the robot uses the object by knowing not only the shape of the object but also the functions of the object. Various methods for recognizing the shape and function of an object have been proposed.
 非特許文献1には、日用品を撮影した距離画像から得られる奥行き、色、曲率等の局所特徴量を学習し、日用品に備わる7つの機能属性として、「掴む(Grasp)」、「切断する(Cut)」、「すくう(Scoop)」、「含む(Contain)」、「叩く(Pound)」、「支える(Support)」、「周囲を掴む(Wrap-Grasp)」をピクセル単位で推定する手法が提案されている。非特許文献1には、ロボットが人間と共同生活するためには、単にツールの名前を知るだけではなく、ツールの各部を特定し、かつそれらの各部の機能を識別する必要があることが述べられている。 In Non-Patent Document 1, local features such as depth, color, and curvature obtained from a distance image of daily necessities are learned, and as seven functional attributes provided in daily necessities, "grasp" and "cut (Grasp)" "Cut", "Scoop", "Contain", "Pond", "Support", "Grasp-Grasp" Proposed. Non-Patent Document 1 states that in order for a robot to live together with a human, it is necessary not only to know the name of the tool but also to identify each part of the tool and identify the function of each part. Has been done.
 非特許文献1を引用する非特許文献2には、コップ等の日用品のオブジェクト形状を3Dセンサで取得し、その日用品に備わった「機能属性」を、オブジェクトの局所的な形状情報だけでなく、連続性を持った他の部位の形状も利用して、オブジェクト形状の局所特徴量による識別結果の尤度統合に基づいて推定する手法が提案されている。非特許文献2はさらに、提案する機能属性推定手法を用いて、ロボットアームによる物体操作タスク(マグカップに水を注いで、それを持ち上げる)を実行することを開示している。 In Non-Patent Document 2, which cites Non-Patent Document 1, the object shape of daily necessities such as cups is acquired by a 3D sensor, and the "functional attributes" provided in the daily necessities are not only the local shape information of the object but also the object shape. A method has been proposed in which the shape of other parts having continuity is also used to make an estimation based on the likelihood integration of the identification result based on the local feature amount of the object shape. Non-Patent Document 2 further discloses that an object manipulation task (pour water into a mug and lift it) by a robot arm is performed by using the proposed functional attribute estimation method.
 また、特許文献1には、物体概念モデルに、形状特徴量を用いて認識される物体の形状と機能特徴量を用いて認識される物体の機能とを適用し、統計的に処理して物体を学習し、認識対象物体の形状又は機能の観測された情報を物体の物体概念モデルに適用して統計的に処理し、認識対象物体の未観測情報を推定して認識対象物体を認識する技術が開示されている。 Further, in Patent Document 1, the shape of the object recognized by using the shape feature amount and the function of the object recognized by using the functional feature amount are applied to the object concept model, and the object is statistically processed. Technology that recognizes the object to be recognized by estimating the unobserved information of the object to be recognized by applying the observed information of the shape or function of the object to be recognized to the object concept model of the object and processing it statistically. Is disclosed.
特開2008-123365号公報Japanese Unexamined Patent Publication No. 2008-123365
 上記の先行文献はいずれも、オブジェクトの画像を取得して、その画像からオブジェクトの属性(概念、形状及び機能)を推定する手法を提供している。しかしながら、画像に基づいて推定されたオブジェクトの属性が必ずしも正しいとは限らない。例えば、取得した画像におけるオブジェクトの姿勢によっては、オブジェクトの属性の推定が正確に行われない可能性がある。さらに、新規なオブジェクトやユニークなオブジェクトは参照すべきデータが存在しないため、そのようなオブジェクトの属性を推定することは困難である。また、これらの手法は各々のオブジェクトの認識に多くの計算コストを要するため、多数のオブジェクトが存在する環境では個々のオブジェクトの認識に多くのコンピュータ資源と時間を必要とする。 All of the above prior documents provide a method of acquiring an image of an object and estimating the attributes (concept, shape and function) of the object from the image. However, the attributes of the object estimated based on the image are not always correct. For example, depending on the posture of the object in the acquired image, the attributes of the object may not be estimated accurately. Moreover, new and unique objects have no data to reference, making it difficult to estimate the attributes of such objects. In addition, since these methods require a large amount of calculation cost to recognize each object, a large amount of computer resources and time are required to recognize each object in an environment where many objects exist.
 さらには、上記の先行文献に開示された手法では、例えば各種ボタンを備えた電気機器をオブジェクトとして認識する場合に、各々のボタンに割り当てられた機能を認識することはできない。例えば、送風の「弱」、「中」、「強」のボタンを備えた扇風機をオブジェクトとして認識する場合、形状の情報から扇風機のスタンドベースにボタンが備えられていることを認識することが可能であったとしても、各々のボタンに割り当てられた機能(この例では、各ボタンに割り当てられた風量)を認識することまではできない。あるいは、回転角度に応じて風量を変えることができる風量ダイヤルを備えた扇風機の場合は、風量ダイヤルの形状を認識できたとしても、その風量ダイヤルが回転可能であること、さらにはその回転角度と風量とに相関関係があることは認識することはできない。 Furthermore, in the method disclosed in the above-mentioned prior literature, for example, when an electric device having various buttons is recognized as an object, the function assigned to each button cannot be recognized. For example, when recognizing a fan with "weak", "medium", and "strong" buttons for blowing air as an object, it is possible to recognize that the fan's stand base has buttons from the shape information. Even if it is, it is not possible to recognize the function assigned to each button (in this example, the air volume assigned to each button). Alternatively, in the case of a fan equipped with an air volume dial that can change the air volume according to the rotation angle, even if the shape of the air volume dial can be recognized, the air volume dial can be rotated, and further, the rotation angle and the like. It cannot be recognized that there is a correlation with the air volume.
 また、上記のいずれの先行文献においても、ロボットに実行させる動作を特定するために物品の形状や機能の情報をどのように用いるかについては言及されていない。 Further, neither of the above-mentioned prior documents mentions how to use the information on the shape and function of the article to specify the action to be executed by the robot.
 本発明の一態様によれば、オブジェクトに属性情報を付与する方法であって、現実世界の環境を再現する仮想世界を生成することと、仮想世界において、属性情報を付与するオブジェクトに対応するモデルを取得することと、モデルに対して、モデルの少なくとも1つの部分に関する情報を含む属性情報を付与することと、を含む方法が提供される。 According to one aspect of the present invention, it is a method of giving attribute information to an object, that is, generating a virtual world that reproduces the environment of the real world, and a model corresponding to the object to which the attribute information is given in the virtual world. Is provided, and the model is given attribute information that includes information about at least one part of the model.
 本発明の他の態様によれば、オブジェクトに属性情報を付与する装置であって、現実世界の環境を再現する仮想世界を生成することと、仮想世界において、属性情報を付与するオブジェクトに対応するモデルを取得することと、モデルに対して、モデルの少なくとも1つの部分に関する情報を含む属性情報を付与することと、を実行するように構成されたプロセッサを備えた装置が提供される。 According to another aspect of the present invention, it is a device that imparts attribute information to an object, and corresponds to generating a virtual world that reproduces an environment in the real world and an object that imparts attribute information in the virtual world. A device is provided with a processor configured to acquire the model and to give the model attribute information, including information about at least one part of the model.
 本発明の他の態様によれば、オブジェクトの属性情報を格納するデータ構造であって、属性情報は、オブジェクトの少なくとも1つの部分に関する情報を含むデータ構造が提供される。 According to another aspect of the present invention, a data structure for storing attribute information of an object is provided, and the attribute information includes a data structure including information about at least one part of the object.
 本発明の他の特徴事項および利点は、例示的且つ非網羅的に与えられている以下の説明及び添付図面から理解することができる。 Other features and advantages of the present invention can be understood from the following description and accompanying drawings given exemplary and non-exhaustive.
ロボット制御システムの一実施形態を示すブロック図である。It is a block diagram which shows one Embodiment of a robot control system. ロボットの一実施形態の概略構成を示す図である。It is a figure which shows the schematic structure of one Embodiment of a robot. ロボットの制御動作の一実施形態を説明するフローチャートである。It is a flowchart explaining one Embodiment of the control operation of a robot. スキャンしたオブジェクトにアノテーションを付与する方法を示すフローチャートである。It is a flowchart which shows the method of annotating a scanned object. スキャンしたオブジェクトと、それに対応するモデルとを示す図である。It is a figure which shows the scanned object and the corresponding model. スキャンしたオブジェクトと、それに対応するモデルとを示す図である。It is a figure which shows the scanned object and the corresponding model. ユーザ入力操作の遷移を概略的に示す図である。It is a figure which shows the transition of a user input operation schematicly. 現実世界のオブジェクトに対してロボットが行う動作をオブジェクトに付与された属性情報を用いて特定する方法を示すフローチャートである。It is a flowchart which shows the method of specifying the operation performed by a robot with respect to the object in the real world by using the attribute information given to the object. 属性情報(アノテーション)を付与するオブジェクトの他の例を示す図である。It is a figure which shows other example of the object which gives attribute information (annotation). ロボット(無人潜水機)と、水中作業対象のパイプとを示す図である。It is a figure which shows the robot (unmanned submersible) and the pipe which is the object of underwater work. パイプに対してアノテーション(属性情報)を付与する様子を示す図である。It is a figure which shows the state of giving an annotation (attribute information) to a pipe. テーブルの上に置かれたマグカップを示す図である。It is a figure which shows the mug placed on the table. テーブル上のマグカップに対してアノテーション(属性情報)を付与する様子を示す図である。It is a figure which shows the state of giving annotation (attribute information) to the mug on a table. 同図(a)は仮想世界内で仮想オブジェクトである仮想マグカップに対して動作指示を行う様子を示し、同図(b)はその指示に従って現実世界のマグカップを現実世界のロボットのロボットハンドで把持して移動させる様子を示す。Fig. (A) shows how to give an operation instruction to a virtual mug which is a virtual object in the virtual world, and Fig. (B) shows how to grasp the mug in the real world with the robot hand of a robot in the real world according to the instruction. It shows how to move it.
 以下、本発明の実施の形態を図面を参照して説明する。 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
 図1は、ロボット制御システムの一実施形態を示すブロック図である。図2は、ロボットの一実施形態の概略構成を示す図である。 FIG. 1 is a block diagram showing an embodiment of a robot control system. FIG. 2 is a diagram showing a schematic configuration of an embodiment of a robot.
 図1に示すように、本実施形態に係るロボット制御システム1は、ロボット100と、ロボット100を制御する制御ユニット200と、制御ユニット200の制御を司る制御装置300とを備えている。 As shown in FIG. 1, the robot control system 1 according to the present embodiment includes a robot 100, a control unit 200 that controls the robot 100, and a control device 300 that controls the control unit 200.
 最初に、本実施形態のロボット制御システム1におけるロボット100について説明する。 First, the robot 100 in the robot control system 1 of the present embodiment will be described.
 図1及び図2に示すように、本実施形態に開示するロボット100は、一例として、少なくとも2つのロボットアーム120と、それらのロボットアーム120を支持するロボット筐体140と、ロボット100の周囲環境をセンシングする環境センサ160と、送受信ユニット180とを備えている。 As shown in FIGS. 1 and 2, as an example, the robot 100 disclosed in the present embodiment includes at least two robot arms 120, a robot housing 140 that supports the robot arms 120, and the surrounding environment of the robot 100. It is provided with an environment sensor 160 that senses the above and a transmission / reception unit 180.
 本実施形態における各々のロボットアーム120は、例えば6軸の多関節アーム(以下、「アーム」とも称する。)であり、先端にはエンドエフェクタであるロボットハンド(以下、「ハンド」とも称する。)122を有している。ロボットアーム120は各回転軸にサーボモータを有するアクチュエータ(不図示)を備えている。各サーボモータは制御ユニット200に接続されており、制御ユニット200から送られる制御信号に基づいて動作制御されるように構成されている。本実施形態では、アーム120として6軸の多関節アームを用いているが、アームの軸数(関節数)はロボット100の用途やそれに求められる機能等に応じて適宜定めることができる。また、本実施形態ではエンドエフェクタとして2本指のハンド122を用いているが、これに限らず、例えば、3本あるいはそれ以上の指を備えたロボットハンド、磁力あるいは負圧による吸着手段を備えたロボットハンド、ゴム膜内に充填された粉粒体のジャミング(詰まり)現象を応用した把持手段を備えたロボットハンド、その他任意の手段により把持対象物のグリップとリリースを繰り返し行うことができるものを用いることができる。各ハンド122a,122bは、その手首部分を中心として回転可能に構成されていることが好ましい。 Each robot arm 120 in the present embodiment is, for example, a 6-axis articulated arm (hereinafter, also referred to as “arm”), and a robot hand (hereinafter, also referred to as “hand”) which is an end effector at the tip thereof. It has 122. The robot arm 120 includes an actuator (not shown) having a servomotor on each rotation axis. Each servomotor is connected to the control unit 200 and is configured to control its operation based on a control signal sent from the control unit 200. In the present embodiment, a 6-axis articulated arm is used as the arm 120, but the number of axes (the number of joints) of the arm can be appropriately determined according to the application of the robot 100, the functions required thereof, and the like. Further, in the present embodiment, the two-fingered hand 122 is used as the end effector, but the present invention is not limited to this, and for example, a robot hand having three or more fingers and a means of attracting by magnetic force or negative pressure are provided. A robot hand equipped with a gripping means that applies the jamming phenomenon of powder or granular material filled in a rubber film, or a robot hand that can repeatedly grip and release the gripping object by any other means. Can be used. It is preferable that the hands 122a and 122b are configured to be rotatable around the wrist portion thereof.
 ハンド122には、ハンド122の変位量、ハンド122に作用する力・加速度・振動等を検出する動力学センサが備えられている。さらに、ハンド122は、ハンド122による把持力や触覚を検出する触覚センサを備えていることが好ましい。 The hand 122 is equipped with a kinetic sensor that detects the amount of displacement of the hand 122, the force, acceleration, vibration, etc. acting on the hand 122. Further, the hand 122 preferably includes a tactile sensor that detects the gripping force and the tactile sensation of the hand 122.
 ロボット筐体140は、例えば、載置台(不図示)の上に固定した状態で設置してもよく、あるいは、載置台の上に回転駆動装置(不図示)を介して旋回可能に設置してもよい。ロボット筐体140を載置台の上に旋回可能に設置した場合には、ロボット100の作業範囲をロボット100の正面の領域だけでなく、ロボット100の周囲の範囲に広げることができる。さらには、ロボット筐体140は、ロボット100の用途や使用環境に応じて、複数の車輪や無限軌道等を備えた車両、船舶、潜水機、ヘリコプターやドローン等の飛行体、その他の移動体に載置されていてもよく、あるいは、ロボット筐体140がそのような移動体の一部として構成されていてもよい。さらには、ロボット筐体140は歩行手段として2足またはそれ以上の足を有していてもよい。ロボット筐体140がそのような移動手段を有することにより、ロボット100の作業範囲をより広範囲とすることができる。ロボット100の用途によっては、ロボットアーム120はロボット筐体140を介さずに載置台等に直接固定されていてもよい。 For example, the robot housing 140 may be installed in a state of being fixed on a mounting table (not shown), or may be installed on the mounting table so as to be rotatable via a rotation drive device (not shown). May be good. When the robot housing 140 is rotatably installed on the mounting table, the working range of the robot 100 can be expanded not only to the area in front of the robot 100 but also to the area around the robot 100. Further, the robot housing 140 can be used for vehicles, ships, submersibles, helicopters, drones, and other moving objects equipped with a plurality of wheels and endless tracks, depending on the application and usage environment of the robot 100. It may be mounted, or the robot housing 140 may be configured as part of such a moving body. Furthermore, the robot housing 140 may have two or more legs as walking means. When the robot housing 140 has such a moving means, the working range of the robot 100 can be made wider. Depending on the application of the robot 100, the robot arm 120 may be directly fixed to a mounting table or the like without the intervention of the robot housing 140.
 環境センサ160は、ロボット100の周囲環境をセンシングする。周囲環境には例えば、電磁波(可視光線、非可視光線、X線、ガンマ線等を含む)、音、温度、湿度、風速、大気組成等が含まれ、したがって環境センサ160は、視覚センサ、X線・ガンマ線センサ、聴覚センサ、温度センサ、湿度センサ、風速センサ、大気分析装置等を含み得るが、これらに限定されない。なお、図では環境センサ160がロボット100と一体であるように示されているが、環境センサ160はロボット100とは一体でなくてもよい。例えば、環境センサ160はロボット100から離れた位置に設置されていたり、車両やドローン等の移動体に設置されていてもよい。また、環境センサ160は、GPS(Grobal Positioning System)センサ、高度センサ、ジャイロセンサ等を備えていることが好ましい。さらに、環境センサ160は、ロボット100の屋外または屋内における位置検出のため、位置検出手段として、上記GPSセンサの他、WiFi測位、ビーコン測位、自立航法測位、地磁気測位、音波測位、UWB(Ultra Wide Band:超広帯域無線)測位、可視光・非可視光測位等を行うための構成を備えていることが好ましい。 The environment sensor 160 senses the surrounding environment of the robot 100. The surrounding environment includes, for example, electromagnetic waves (including visible light, invisible light, X-ray, gamma ray, etc.), sound, temperature, humidity, wind velocity, atmospheric composition, etc. Therefore, the environment sensor 160 is a visual sensor, X-ray. -It may include, but is not limited to, gamma ray sensors, auditory sensors, temperature sensors, humidity sensors, wind velocity sensors, atmospheric analyzers, and the like. Although the environment sensor 160 is shown to be integrated with the robot 100 in the figure, the environment sensor 160 does not have to be integrated with the robot 100. For example, the environment sensor 160 may be installed at a position away from the robot 100, or may be installed on a moving body such as a vehicle or a drone. Further, the environment sensor 160 preferably includes a GPS (Global Positioning System) sensor, an altitude sensor, a gyro sensor, and the like. Further, the environment sensor 160 is used as a position detection means for detecting the position of the robot 100 outdoors or indoors, in addition to the above GPS sensor, WiFi positioning, beacon positioning, self-contained navigation positioning, geomagnetic positioning, sonic positioning, UWB (Ultra Wideband). Band: Ultra-wideband) It is preferable to have a configuration for positioning, visible light / invisible light positioning, and the like.
 特に視覚センサとしては、例えば、2Dカメラ及び深度センサ、3Dカメラ、RGB-Dセンサ、3D-LiDARセンサ、Kinect(商標)センサなどを用いることができる。環境センサ160で得られた視覚情報は制御ユニット200へ送られ、制御ユニット200において処理される。環境センサ160で得られるその他の環境情報も制御ユニット200へ送信し、ロボット100の周囲環境の解析に用いることができる。 In particular, as the visual sensor, for example, a 2D camera, a depth sensor, a 3D camera, an RGB-D sensor, a 3D-LiDAR sensor, a Kinect ™ sensor, and the like can be used. The visual information obtained by the environment sensor 160 is sent to the control unit 200 and processed by the control unit 200. Other environmental information obtained by the environment sensor 160 can also be transmitted to the control unit 200 and used for analysis of the surrounding environment of the robot 100.
 送受信ユニット180は、制御ユニット200との間での信号・情報の送受信を行う。送受信ユニット180は、制御ユニット200と有線接続または無線接続によって接続することが可能であり、したがってそれらの信号・情報の送受信は有線または無線によって行うことができる。それらの信号・情報の送受信に用いられる通信プロトコル及び周波数等は、ロボット100が用いられる用途や環境等に応じて適宜選択しうる。さらに、送受信ユニット180はインターネット等のネットワークに接続されていてもよい。 The transmission / reception unit 180 transmits / receives signals / information to / from the control unit 200. The transmission / reception unit 180 can be connected to the control unit 200 by a wired connection or a wireless connection, and therefore, transmission / reception of those signals / information can be performed by a wired or wireless connection. The communication protocol, frequency, and the like used for transmitting and receiving those signals and information can be appropriately selected according to the application, environment, and the like in which the robot 100 is used. Further, the transmission / reception unit 180 may be connected to a network such as the Internet.
 次に、本実施形態のロボット制御システム1における制御ユニット200について説明する。 Next, the control unit 200 in the robot control system 1 of the present embodiment will be described.
 再び図1を参照すると、本実施形態に係るシステム1の制御ユニット200は、プロセッサ220、記憶ユニット240および送受信ユニット260を備えている。 Referring to FIG. 1 again, the control unit 200 of the system 1 according to the present embodiment includes a processor 220, a storage unit 240, and a transmission / reception unit 260.
 プロセッサ220は主として、ロボット100のロボットアーム120及びボディ140の駆動部及びセンサ(共に不図示)の制御、環境センサ160の制御、環境センサ160から送信された情報の処理、制御装置300との相互作用、送受信ユニット260の制御を司る。プロセッサ220は、例えば、中央演算処理装置(CPU)、特定用途向け集積回路(ASIC)、組込みプロセッサ、マイクロプロセッサ、デジタル信号プロセッサ(DSP)、あるいはそれらの組み合わせで構成される。プロセッサ220は、1又は2以上のプロセッサで構成されていてもよい。 The processor 220 mainly controls the drive unit and the sensor (both not shown) of the robot arm 120 and the body 140 of the robot 100, controls the environment sensor 160, processes the information transmitted from the environment sensor 160, and interacts with the control device 300. It controls the operation and transmission / reception unit 260. The processor 220 is composed of, for example, a central processing unit (CPU), an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a digital signal processor (DSP), or a combination thereof. The processor 220 may be composed of one or more processors.
 プロセッサ220は、環境センサ160から送られた情報の処理として、環境センサ160で得られた視覚情報に基づいてロボット100の周囲環境に存在するオブジェクトの認識を行う。一例として、制御装置200のプロセッサ220は、環境センサ160で得られた視覚情報(画像情報及びその画像中の深度情報)に基づいて、視覚情報中に含まれるオブジェクトの形状を検出し、制御ユニット200の記憶ユニット240に記憶されているオブジェクトの属性情報(オブジェクトの名称、各部の形状・構成およびそれらの機能等に関するデータ)、あるいは、制御ユニット200が接続されているネットワーク上に存在するオブジェクトの属性情報を参照することにより、視覚情報中に含まれるオブジェクトを特定する。オブジェクトの特定は、例えば、オブジェクト形状の特徴点を参照し、既知のオブジェクトの形状データに関連付けられたルックアップテーブルを参照したり、あるいは、任意の機械学習アルゴリズムやAI技術を用いて既知のオブジェクトの形状データとの比較を行うことで、実行することができる。 The processor 220 recognizes an object existing in the surrounding environment of the robot 100 based on the visual information obtained by the environment sensor 160 as processing of the information sent from the environment sensor 160. As an example, the processor 220 of the control device 200 detects the shape of an object included in the visual information based on the visual information (image information and the depth information in the image) obtained by the environment sensor 160, and the control unit. Attribute information of the object stored in the storage unit 240 of the 200 (data regarding the name of the object, the shape / configuration of each part, their functions, etc.), or the object existing on the network to which the control unit 200 is connected. By referring to the attribute information, the object included in the visual information is specified. To identify an object, for example, refer to the feature points of the object shape, refer to the look-up table associated with the shape data of the known object, or use any machine learning algorithm or AI technique to identify the known object. It can be executed by comparing with the shape data of.
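As a hedged illustration of the lookup idea described above, the sketch below compares a simple feature vector extracted from a scanned shape against stored shape data for known objects and returns the best match, or None when nothing is close enough (so that the object can instead be annotated manually on the control device side); the feature representation, distance metric, and threshold are all assumptions.

    import math

    known_objects = {                   # shape feature vectors of known objects (illustrative values)
        "bolt": [0.9, 0.1, 0.4],
        "mug":  [0.2, 0.8, 0.6],
    }

    def identify(scanned_features, threshold=0.3):
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        name, d = min(((n, dist(scanned_features, f)) for n, f in known_objects.items()),
                      key=lambda t: t[1])
        return name if d < threshold else None   # None -> unknown object

    print(identify([0.88, 0.12, 0.42]))   # -> "bolt"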
 本実施形態ではさらに、上記のように特定された各オブジェクトについて、オブジェクト自体の機能及び構成、さらにはオブジェクトの各部の機能及び構成を示すアノテーションも付与される。
 例えば、オブジェクトがボルト及びナットである場合、下記のアノテーションが付与される。
In the present embodiment, each object specified as described above is further given an annotation indicating the function and configuration of the object itself, and further, the function and configuration of each part of the object.
For example, if the objects are bolts and nuts, the following annotations are added.
・ボルト:頭部と軸とからなる。頭部は六角柱形状、ナットの着脱時には周囲を把持、軸は雄ねじが形成されている、軸にナットを締結可能(右回転で取り付け、左回転で取り外し)。 -Bolt: Consists of a head and a shaft. The head has a hexagonal column shape, the circumference is gripped when attaching and detaching the nut, the shaft has a male screw, and the nut can be fastened to the shaft (attached by rotating clockwise and removed by rotating counterclockwise).
・ナット:六角柱形状、ボルトへの着脱時には外周を把持、内周に雌ねじが形成されている、雌ねじはボルトの軸に締結可能(右回転で取り付け、左回転で取り外し)。 -Nut: Hexagonal column shape, grips the outer circumference when attaching and detaching to the bolt, and a female screw is formed on the inner circumference. The female screw can be fastened to the shaft of the bolt (attached by rotating clockwise and removed by rotating counterclockwise).
 特定されたオブジェクトについてのアノテーションの付与は、例えば、プロセッサ220が、記憶ユニット240に記憶されているオブジェクトの各部機能に関するデータ、あるいは、制御ユニット200が接続されているネットワーク上に存在するオブジェクトの各部機能に関するデータを参照することにより行うことができる。あるいは、プロセッサ220が、特定したオブジェクトの形状からその各部の機能を任意の機械学習アルゴリズムやAI技術を用いて認識して、そのオブジェクトについてのアノテーションの付与を行うことも可能である。検出されたオブジェクトに関する情報がシステム1において既知あるいは入手可能である場合には、上記のようにオブジェクトの認識とアノテーション付与を制御ユニット200側で上述した先行技術のような手法を用いて自動的に行うことが可能である。一方、検出されたオブジェクトに関する情報がシステム1において未知あるいは入手不可能である場合には、後述するように制御装置300側にてユーザによりオブジェクトの認識とアノテーション付与とが行われ得る。 Annotation of the identified object is performed, for example, by the processor 220, data regarding the function of each part of the object stored in the storage unit 240, or each part of the object existing on the network to which the control unit 200 is connected. This can be done by referring to the data related to the function. Alternatively, the processor 220 can recognize the function of each part from the shape of the specified object by using an arbitrary machine learning algorithm or AI technique, and add annotations to the object. When the information about the detected object is known or available in the system 1, the control unit 200 automatically recognizes and annotates the object by using the method like the prior art described above on the control unit 200 side as described above. It is possible to do. On the other hand, when the information about the detected object is unknown or unavailable in the system 1, the object can be recognized and annotated by the user on the control device 300 side as described later.
 このようにして特定されたオブジェクトと、その機能のアノテーションとに関する情報は記憶ユニット240に記憶される。この情報はまた、制御装置300に送られてもよい。 Information about the object identified in this way and the annotation of its function is stored in the storage unit 240. This information may also be sent to the control device 300.
 さらに、プロセッサ220は、例えば、制御装置300から送信されたロボット100の制御信号と、それに応じて生成した動作命令と、実際に実行されたロボット100の動作と、動作実行後に環境センサ160で取集した周囲環境データとを記憶ユニット240にデータとして記憶させ、そのデータを用いて機械学習を実行して学習データを生成して記憶ユニット240に記憶させる。プロセッサ220は、次回以降に制御装置300から送信されたロボット100の制御信号に基づいてロボット100に実行させるべき動作をその学習データを参照して決定して動作命令を生成することが可能である。このように、本実施形態では現実世界にあるロボット100の制御ユニット200がローカルに機械学習機能を備えている。 Further, the processor 220 receives, for example, a control signal of the robot 100 transmitted from the control device 300, an operation command generated in response to the control signal, an operation of the robot 100 actually executed, and an environment sensor 160 after the operation is executed. The collected ambient environment data is stored in the storage unit 240 as data, and machine learning is executed using the data to generate learning data and store it in the storage unit 240. The processor 220 can generate an operation command by deciding an operation to be executed by the robot 100 based on the control signal of the robot 100 transmitted from the control device 300 from the next time onward with reference to the learning data. .. As described above, in the present embodiment, the control unit 200 of the robot 100 in the real world has a machine learning function locally.
 記憶ユニット240は、本実施形態で説明するようにロボット100を制御するためのコンピュータ・プログラム、環境センサ160から送信された情報の処理を行うコンピュータ・プログラム、制御装置300との相互作用を行うコンピュータ・プログラム、送受信ユニット260を制御するコンピュータ・プログラム、機械学習を実行するプログラム等を記憶している。好ましくは、記憶ユニット240には、コンピュータに本実施形態で説明するような処理を行わせて制御ユニット200としての機能を生じさせるソフトウェアまたはプログラムが記憶されている。特に、記憶ユニット240には、図4等を参照して後述する方法を実施する命令を含む、プロセッサ220によって実行可能なコンピュータ・プログラムが記憶されている。 The storage unit 240 is a computer program for controlling the robot 100, a computer program for processing information transmitted from the environment sensor 160, and a computer for interacting with the control device 300 as described in the present embodiment. -A program, a computer program for controlling the transmission / reception unit 260, a program for executing machine learning, and the like are stored. Preferably, the storage unit 240 stores software or a program that causes a computer to perform a process as described in this embodiment to cause a function as a control unit 200. In particular, the storage unit 240 stores a computer program that can be executed by the processor 220, including instructions that implement the methods described below with reference to FIG. 4 and the like.
 さらに、記憶ユニット240は、上述したような既知のオブジェクトの形状データ及びそれらのオブジェクトが備える機能のデータや、それらに関連付けられたルックアップテーブルを記憶していることが好ましい。さらに、記憶ユニット240は、ロボット100のロボットアーム120の各部(サーボ(不図示)、ハンド122等)の状態、環境センサ160から送信された情報、制御装置300から送られた情報、制御信号等を少なくとも一時的に記憶する役割も有する。さらには、記憶ユニット240は、上述したように、ロボット100の動作指示とそれに応じて実行されたロボット100の動作、学習データを記憶する役割も有する。記憶ユニット240は、制御ユニット200の電源がオフされても記憶状態が保持される不揮発性の記憶媒体を備えていることが好ましく、例えば、ハードディスクドライブ(HDD)、固体記憶装置(SSD)、コンパクトディスク(CD)・ディジタル・バーサタイル・ディスク(DVD)・ブルーレイディスク(BD)等の光学ディスクストレージ、不揮発性ランダムアクセスメモリ(NVRAM)、EPROM(Rrasable Programmable Read Only Memory)、フラッシュメモリ等の不揮発性ストレージを備えている。なお、記憶ユニット240はスタティックランダムアクセスメモリ(SRAM)等の揮発性ストレージをさらに備えていてもよいが、上述した各コンピュータ・プログラムは記憶ユニット340のうち不揮発性の(非一時的な)記憶媒体に記憶される。 Further, it is preferable that the storage unit 240 stores the shape data of known objects as described above, the function data of those objects, and the look-up table associated with them. Further, the storage unit 240 includes the state of each part (servo (not shown), hand 122, etc.) of the robot arm 120 of the robot 100, information transmitted from the environment sensor 160, information sent from the control device 300, control signals, and the like. It also has the role of storing at least temporarily. Further, as described above, the storage unit 240 also has a role of storing the operation instruction of the robot 100 and the operation and learning data of the robot 100 executed in response to the operation instruction. The storage unit 240 preferably includes a non-volatile storage medium that retains the storage state even when the power of the control unit 200 is turned off. For example, a hard disk drive (HDD), a solid-state storage device (SSD), or a compact storage unit 240. Optical disk storage such as disc (CD), digital versatile disc (DVD), Blu-ray disc (BD), non-volatile random access memory (NVRAM), EPROM (Rrasable Programmable Read Only Memory), non-volatile storage such as flash memory Is equipped with. The storage unit 240 may further include volatile storage such as a static random access memory (SRAM), but each computer program described above is a non-volatile (non-temporary) storage medium of the storage unit 340. Is remembered in.
 送受信ユニット260は、ロボット100との間での信号・情報の送受信と、制御装置300との間での信号・情報の送受信とを行う。制御ユニット200は、ロボット100と有線接続または無線接続によって接続することが可能であり、したがってそれらの信号・情報の送受信は有線または無線によって行うことができる。それらの信号・情報の送受信に用いられる通信プロトコル及び周波数等は、ロボット100が用いられる用途や環境等に応じて適宜選択しうる。送受信ユニット260はインターネット等のネットワークに接続されていてもよい。 The transmission / reception unit 260 transmits / receives signals / information to / from the robot 100 and transmits / receives signals / information to / from the control device 300. The control unit 200 can be connected to the robot 100 by a wired connection or a wireless connection, and therefore the signals and information thereof can be transmitted and received by a wired or wireless connection. The communication protocol, frequency, and the like used for transmitting and receiving those signals and information can be appropriately selected according to the application, environment, and the like in which the robot 100 is used. The transmission / reception unit 260 may be connected to a network such as the Internet.
 さらに、送受信ユニット260は、制御装置300との間での信号・情報の送受信とを行う。制御ユニット200は、制御装置300と有線接続または無線接続によって接続することが可能であり、したがってそれらの信号・情報の送受信は有線または無線によって行うことができる。それらの信号・情報の送受信に用いられる通信プロトコル及び周波数等は、ロボット100が用いられる用途や環境等に応じて適宜選択しうる。 Further, the transmission / reception unit 260 transmits / receives signals / information to / from the control device 300. The control unit 200 can be connected to the control device 300 by a wired connection or a wireless connection, and therefore the signals and information thereof can be transmitted and received by a wired or wireless connection. The communication protocol, frequency, and the like used for transmitting and receiving those signals and information can be appropriately selected according to the application, environment, and the like in which the robot 100 is used.
 なお、図1では制御ユニット200がロボット100から独立したものとして示されているが、その形態に限られない。例えば、制御ユニット200はロボット100の筐体140内に設けられていてもよい。また、本システム1で用いるロボット100は1つに限られず、複数のロボット100を独立して、あるいは互いに協働させて動作させてもよい。この場合、単体の制御ユニット200で複数のロボット100を制御してもよく、あるいは、複数の制御ユニット200を協働させて複数のロボット100を制御してもよい。 Although the control unit 200 is shown as being independent of the robot 100 in FIG. 1, it is not limited to that form. For example, the control unit 200 may be provided in the housing 140 of the robot 100. Further, the number of robots 100 used in this system 1 is not limited to one, and a plurality of robots 100 may be operated independently or in cooperation with each other. In this case, a single control unit 200 may control a plurality of robots 100, or a plurality of control units 200 may cooperate to control a plurality of robots 100.
 続いて、本実施形態のロボット制御システム1における制御装置300について説明する。 Subsequently, the control device 300 in the robot control system 1 of the present embodiment will be described.
 図1に示すように、本実施形態に係るシステム1の制御装置300は、プロセッサ320、記憶ユニット340、入力デバイス350、送受信ユニット360、ディスプレイ370を備えている。 As shown in FIG. 1, the control device 300 of the system 1 according to the present embodiment includes a processor 320, a storage unit 340, an input device 350, a transmission / reception unit 360, and a display 370.
 プロセッサ320は主として、制御ユニット200との相互作用、入力デバイス350を介してユーザによって行われる入力に基づく処理、送受信ユニット360の制御、ディスプレイ370の表示を司る。とりわけ、プロセッサ320は、入力デバイス350によって入力されたユーザ入力に基づいて制御信号を生成し、制御ユニット200に送信する。制御ユニット200のプロセッサ220は、その制御信号に基づき、ロボット100のロボットアーム120及びボディ140の各駆動部(不図示)や環境センサ160を動作させるための1つのあるいは複数の動作指令を生成する。プロセッサ320は、例えば、中央演算処理装置(CPU)、特定用途向け集積回路(ASIC)、組込みプロセッサ、マイクロプロセッサ、デジタル信号プロセッサ(DSP)、あるいはそれらの組み合わせで構成される。プロセッサ320は、1又は2以上のプロセッサで構成されていてもよい。プロセッサ320は、1又は2以上のプロセッサで構成されていてもよい。 The processor 320 mainly controls the interaction with the control unit 200, the processing based on the input performed by the user via the input device 350, the control of the transmission / reception unit 360, and the display of the display 370. In particular, the processor 320 generates a control signal based on the user input input by the input device 350 and transmits it to the control unit 200. The processor 220 of the control unit 200 generates one or more operation commands for operating each drive unit (not shown) of the robot arm 120 and the body 140 of the robot 100 and the environment sensor 160 based on the control signal. .. The processor 320 is composed of, for example, a central processing unit (CPU), an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a digital signal processor (DSP), or a combination thereof. The processor 320 may be composed of one or more processors. The processor 320 may be composed of one or more processors.
 さらに、制御装置300のプロセッサ320は、ユーザに提示するUI(ユーザ・インターフェース)画面を生成し、ディスプレイ370に表示するように構成されている。UI画面(不図示)は、例えば、複数の選択肢を階層的にユーザに提供する選択ボタンを含む。さらにプロセッサ320は、ロボット100の環境センサ160によって撮影されたロボット100の周囲環境の現実世界の画像または動画に基づいて仮想世界(シミュレーション空間)の画像または動画を生成し、ディスプレイ370に表示する。プロセッサ320は、現実世界の画像または動画に基づいて仮想世界の画像または動画を生成する際に、例えば現実世界の座標系と仮想世界の座標系とを対応付けることにより、現実世界と仮想世界との相関関係を構築する。さらに、現実世界の画像または動画と仮想世界(シミュレーション空間)の画像または動画とを同時にディスプレイ370に表示してもよい。さらには、UI画面をロボット100の周囲環境の画像または動画あるいは仮想世界の画像または動画に重ね合わせて表示してもよい。ロボット100の周囲環境の現実世界の画像または動画に基づいて生成される仮想世界(シミュレーション空間)の画像または動画には、ロボット100の周囲環境に存在するオブジェクトも含まれる。プロセッサ320が現実世界の画像または動画に基づいて仮想世界の画像または動画を生成する際に現実世界と仮想世界との相関関係を構築することで、以下に詳しく説明するように、仮想世界におけるユーザの操作に基づいて現実世界において変化を生じさせ、かつ、現実世界における変化を仮想世界において反映させることが可能となる。 Further, the processor 320 of the control device 300 is configured to generate a UI (user interface) screen to be presented to the user and display it on the display 370. The UI screen (not shown) includes, for example, a selection button that hierarchically provides the user with a plurality of options. Further, the processor 320 generates an image or a moving image of a virtual world (simulation space) based on an image or a moving image of the real world of the surrounding environment of the robot 100 taken by the environment sensor 160 of the robot 100, and displays it on the display 370. When the processor 320 generates an image or a moving image of a virtual world based on an image or a moving image of the real world, for example, by associating a coordinate system of the real world with a coordinate system of the virtual world, the processor 320 connects the real world and the virtual world. Build a correlation. Further, the image or moving image of the real world and the image or moving image of the virtual world (simulation space) may be displayed on the display 370 at the same time. Further, the UI screen may be superimposed on the image or moving image of the surrounding environment of the robot 100 or the image or moving image of the virtual world. The virtual world (simulation space) image or moving image generated based on the real world image or moving image of the surrounding environment of the robot 100 also includes an object existing in the surrounding environment of the robot 100. By building a correlation between the real world and the virtual world as the processor 320 generates a virtual world image or video based on a real world image or video, a user in the virtual world, as described in detail below. It is possible to make a change in the real world based on the operation of, and to reflect the change in the real world in the virtual world.
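A minimal sketch of such a correspondence, assuming that a single rigid transform is enough to associate the real-world and virtual-world coordinate systems, is given below; the calibration values are placeholders.

    import numpy as np

    # assumed calibration: rotation R and translation t from the real-world frame to the virtual-world frame
    R = np.eye(3)
    t = np.array([0.0, 0.0, 1.0])

    def real_to_virtual(p_real):
        return R @ np.asarray(p_real) + t

    def virtual_to_real(p_virtual):
        return R.T @ (np.asarray(p_virtual) - t)

    p = [0.5, -0.2, 0.3]
    print(np.allclose(virtual_to_real(real_to_virtual(p)), p))   # -> True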
 なお上記では、制御ユニット200のプロセッサ220が、環境センサ160で得られた視覚情報に基づいてロボット100の周囲環境に存在するオブジェクトを特定し、オブジェクトの各部の機能に関するアノテーションの付与を行うことを説明したが、それらの処理は、制御ユニット200のプロセッサ220による処理に加えて、あるいは、プロセッサ220による処理に代わって、制御装置300のプロセッサ320が行ってもよい。 In the above, the processor 220 of the control unit 200 identifies an object existing in the surrounding environment of the robot 100 based on the visual information obtained by the environment sensor 160, and adds an annotation relating to the function of each part of the object. As described above, these processes may be performed by the processor 320 of the control device 300 in addition to the processes by the processor 220 of the control unit 200 or in place of the processes by the processor 220.
 オブジェクトの特定とアノテーション付与の方法としては、1)モデルおよびそれに関連付けられたアノテーションの既成情報を入手する、2)オブジェクトをスキャンし、アノテーションを付与する、3)オブジェクトのモデルの作成とアノテーション付与とを行う、の3つの態様が考えられる。 The methods for identifying and annotating an object are as follows: 1) Obtaining ready-made information about the model and its associated annotations, 2) Scanning the object and annotating it, and 3) Creating a model of the object and annotating it. There are three possible modes of doing this.
 1)の「モデルおよびそれに関連付けられたアノテーションの既成情報を入手する」態様は、スキャンしたオブジェクトに対応するモデルが入手可能であり、かつそのモデルにアノテーションが関連付けられている場合に対処するものである。この態様では、制御ユニット200に関連して上述したのと同様に、制御装置300のプロセッサ320は、記憶ユニット340に記憶されているオブジェクトの属性情報(オブジェクトの名称、各部の形状・構成およびそれらの機能等に関するデータ)、あるいは、制御装置300が接続されているネットワーク上に存在するオブジェクトの属性情報を参照することにより、視覚情報中に含まれるオブジェクトの特定と、その各部の機能に関するアノテーションの付与を行う。オブジェクトの特定と、その各部の機能に関するアノテーションの付与との処理には、上述したように任意の機械学習アルゴリズムやAI技術を用いてもよい。 The mode of 1) "obtaining the ready-made information of the model and the annotation associated with it" deals with the case where the model corresponding to the scanned object is available and the annotation is associated with the model. is there. In this embodiment, as described above in relation to the control unit 200, the processor 320 of the control device 300 uses the attribute information of the objects (the name of the object, the shape / configuration of each part, and the like) stored in the storage unit 340. By referring to the attribute information of the object existing on the network to which the control device 300 is connected, the identification of the object included in the visual information and the annotation related to the function of each part can be performed. Grant. As described above, any machine learning algorithm or AI technique may be used for the processing of identifying the object and adding annotations related to the functions of the respective parts.
 また、2)の「オブジェクトをスキャンし、アノテーションを付与する」態様は、スキャンしたオブジェクトに対応するモデルが入手できない場合に対処可能である。この態様では、例えば、ユーザが入力デバイス350を用いてUI画面内で種々のプリミティブな形状要素を組み合わせて、スキャンしたオブジェクトに対応するモデルを生成する。プリミティブな形状要素としては、例えば、任意角の角柱、任意角の角錐、円柱、円錐、球体等の要素が含まれる。さらには、プロセッサ320は、ユーザがUI画面内で任意の形状を描画して、それをプリミティブな形状要素として追加できるようにされていてもよい。ユーザは、UI画面内でこれらの種々の形状要素を選択し、選択した形状要素の各部寸法を適宜変更し、スキャンしたオブジェクトの画像に合わせてそれらの要素を組み合わせることで、スキャンしたオブジェクトに対応するモデルを生成することができる。これらの要素を用いてモデルを生成する際には、オブジェクトの窪みや穴等を表現することも可能である。 In addition, the mode of 2) "scanning an object and annotating it" can be dealt with when a model corresponding to the scanned object is not available. In this aspect, for example, the user uses the input device 350 to combine various primitive shape elements in the UI screen to generate a model corresponding to the scanned object. Primitive shape elements include, for example, elements such as prisms of arbitrary angle, pyramids of arbitrary angle, cylinders, cones, and spheres. Further, the processor 320 may allow the user to draw an arbitrary shape in the UI screen and add it as a primitive shape element. The user selects these various shape elements in the UI screen, changes the dimensions of each part of the selected shape elements as appropriate, and combines those elements according to the image of the scanned object to correspond to the scanned object. You can generate a model to do. When generating a model using these elements, it is also possible to represent dents and holes of objects.
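The sketch below illustrates composing a model from primitive shape elements with adjustable dimensions, using a mug-like object (a cylindrical body plus a box-shaped handle) as an example; the API and parameter names are assumptions and do not correspond to the embodiment's UI.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Primitive:
        kind: str            # "cylinder", "prism", "pyramid", "cone", "sphere", ...
        dimensions: dict     # e.g. {"diameter": 0.08, "length": 0.10}
        offset: Tuple[float, float, float] = (0.0, 0.0, 0.0)   # placement relative to the model origin

    @dataclass
    class Model:
        object_id: str
        primitives: List[Primitive]

    # a mug-like model: a cylindrical body plus a box-shaped handle
    mug_model = Model("mug", [
        Primitive("cylinder", {"diameter": 0.08, "length": 0.10}),
        Primitive("prism", {"width": 0.01, "depth": 0.03, "height": 0.07}, offset=(0.05, 0.0, 0.02)),
    ])
    print(len(mug_model.primitives))   # -> 2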
 そしてユーザは、UI画面内で、そのように生成したモデルに対してアノテーションの付与を行う。前述のようにオブジェクトがボルトである場合には、生成したモデルの頭部と軸とに対応する部分に、それぞれ、頭部と軸とである旨のアノテーションを付与し、さらに、頭部については、六角形の周囲の面を把持可能である旨のアノテーションを付与し、軸については、雄ねじが形成されていること、ナットを締結可能(右回転で取り付け、左回転で取り外し)であること、のアノテーションを付与する。 Then, the user adds annotations to the model generated in that way in the UI screen. As described above, when the object is a bolt, annotate the part corresponding to the head and axis of the generated model to the effect that it is the head and axis, respectively, and further, for the head, , Annotate that the surrounding surface of the hexagon can be gripped, and for the shaft, a male screw is formed, a nut can be fastened (attached by rotating clockwise, removed by rotating counterclockwise), Annotate.
 あるいは、オブジェクトが開閉扉と引出しとを有する戸棚の場合は、扉がヒンジを中心として回動可能であること、引出しを取っ手を掴んで引出して出し入れ可能であること、がアノテーションとして付与される。さらに、開閉扉や引出しを動かすのに必要な力量や力を加える方向も併せてアノテーションとして付与することも可能である。 Alternatively, if the object is a cupboard with an open / close door and a drawer, the annotation is given that the door can rotate around the hinge and that the drawer can be pulled out by grasping the handle. Furthermore, it is also possible to add the force required to move the opening / closing door and the drawer and the direction in which the force is applied as an annotation.
 さらには、オブジェクトが水道の蛇口の場合は、ハンドルを回転させることで蛇口の先端(吐出口)から水が出ることがアノテーションとして付与される。この場合、ハンドルの回転角度と、蛇口の吐出口から出る水量との相関関係もアノテーションとして付与してもよい。このように、付与されるアノテーションは、オブジェクトの機能だけでなく、オブジェクトのどの部分から何の情報が生じるかということも含まれる(上記の例では、蛇口の吐出口から「水」という情報が生じることがアノテーションとして含まれる)。 Furthermore, if the object is a faucet, it is annotated that water comes out from the tip (discharge port) of the faucet by rotating the handle. In this case, the correlation between the rotation angle of the handle and the amount of water discharged from the faucet discharge port may be added as an annotation. In this way, the annotations given include not only the function of the object, but also what information comes from which part of the object (in the above example, the information "water" is generated from the outlet of the faucet. What happens is included as an annotation).
 また、3)の「オブジェクトのモデルの作成とアノテーション付与とを行う」態様は、ユーザがUI画面内で種々のプリミティブな形状要素を組み合わせて、任意のオブジェクトのモデルを生成する場合に対処可能である。この態様におけるモデルの生成とアノテーション付与のためのユーザの操作は、上記の2)の態様に説明した操作と同様である。予めモデルを作成してシステム1内に蓄積しておくことで、上記の1)の態様で説明したように、そのモデルに対応する現実のオブジェクトを現実世界で操作する必要が生じたときに、仮想世界の中でそのモデルを対応するオブジェクトと関連付けることが可能になる。 In addition, the aspect of 3) "creating a model of an object and annotating it" can be dealt with when a user combines various primitive shape elements in a UI screen to generate a model of an arbitrary object. is there. The user's operation for generating the model and annotating in this aspect is the same as the operation described in the above aspect 2). By creating a model in advance and accumulating it in the system 1, as described in the above aspect 1), when it becomes necessary to operate a real object corresponding to the model in the real world, It is possible to associate the model with the corresponding object in the virtual world.
The processor 320 reproduces a virtual object corresponding to the object identified as described above, for example by computer graphics, and displays it in the virtual world on the display 370.
The storage unit 340 stores a program for causing the processor 320 to execute the operations described in the present embodiment, a computer program for interacting with the control unit 200, a computer program for performing processing based on input made interactively by the user on the UI screen via the input device 350, a computer program for controlling the transmission/reception unit 260, a computer program for driving the display 370, and the like. Preferably, the storage unit 340 stores software or a program that causes a computer to perform the operations described later and thereby function as the control device 300. In particular, the storage unit 340 stores a computer program executable by the processor 320 that includes instructions for carrying out the methods described later with reference to FIG. 4 and other figures.
Furthermore, the storage unit 340 can at least temporarily store images or moving images of the surrounding environment of the robot 100 captured by the environment sensor 160 of the robot 100 and sent to the control device 300 via the control unit 200, as well as images or moving images of the virtual world (simulation space) generated by the processor 320 on the basis of those images or moving images. The storage unit 340 of the control device 300 also preferably includes a non-volatile storage medium whose contents are retained even when the power of the control device 300 is turned off, for example, a hard disk drive (HDD), a solid-state drive (SSD), optical disc storage such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray disc (BD), non-volatile random access memory (NVRAM), EPROM (Erasable Programmable Read Only Memory), flash memory, or other non-volatile storage. The storage unit 340 may further include volatile storage such as static random access memory (SRAM), but each of the computer programs described above is stored in a non-volatile (non-transitory) storage medium of the storage unit 340.
The storage unit 340 also functions as a database of the system 1 and, as described in relation to the concept of the present invention, stores operation data of the robot 100 operating in the real world on the basis of the control signals (including the operation commands generated by the control unit 200) and ambient environment data indicating the operation results detected by the environment sensor 160.
As the input device 350, for example, a keyboard, a mouse, a joystick, or the like can be used; furthermore, a device called a tracker, which can track its position and orientation using infrared light or the like and is provided with a trigger button or the like, can also be used. When the display 370 includes a touch-panel display device, the touch panel can be used as the input device. Furthermore, when the display 370 is a head-mounted display used as a display device for VR (virtual reality), AR (augmented reality), MR (mixed reality), or the like and has a gaze-tracking function for the user, the gaze-tracking function can be used as the input device. Alternatively, even a device that has a gaze-tracking function but no display can use its gaze-tracking function as the input device. Furthermore, a voice input device can also be used as the input device. These are given as examples of the input device 350, and the means that can be used as the input device 350 are not limited to these. The means described above may also be combined in any manner and used as the input device 350. By using such an input device 350, the user can, on the UI screen displayed on the display 370, for example, select a selection button, enter characters, or select an object contained in an image or moving image of the surrounding environment of the robot 100 captured by the environment sensor 160 of the robot 100, or a virtual object contained in an image or moving image of the virtual world (simulation space) generated on the basis of such an image or moving image.
The transmission/reception unit 360 transmits and receives signals and information to and from the control unit 200. As described above, the control device 300 can be connected to the control unit 200 by a wired or wireless connection, and accordingly these signals and information can be transmitted and received by wire or wirelessly. The communication protocol, frequency, and the like used for transmitting and receiving the signals and information can be selected as appropriate according to the application, environment, and the like in which the system 1 is used. Furthermore, the transmission/reception unit 360 may be connected to a network such as the Internet.
As the display 370, any form of display device can be used, such as a display monitor, a computer tablet device (including one equipped with a touch-panel display), a head-mounted display used as a display device for VR (virtual reality), AR (augmented reality), MR (mixed reality), or the like, or a projector.
In particular, when a head-mounted display is used as the display 370, the head-mounted display can present images or moving images with parallax to the user's left and right eyes, respectively, so that the user perceives a three-dimensional image or moving image. Furthermore, when the head-mounted display has a motion-tracking function, images or moving images can be displayed according to the position and orientation of the head of the user wearing the head-mounted display. Furthermore, when the head-mounted display has a gaze-tracking function for the user as described above, that gaze-tracking function can be used as an input device.
In the following description of the present embodiment, a case is described by way of example in which the processor 320 of the control device 300 generates images or moving images of the virtual world (simulation space) on the basis of real-world images or moving images of the surrounding environment of the robot 100 captured by the environment sensor 160 of the robot 100, a head-mounted display used as a display device for VR (virtual reality), AR (augmented reality), MR (mixed reality), or the like is used as the display 370, and a tracker that can track its position and orientation using infrared light or the like and is provided with a trigger button or the like is used as the input device 350.
Next, with reference to FIGS. 3 to 8, a scenario in which the task of fastening a bolt and a nut together is performed as an example of component assembly will be described as one operation example of the robot control method.
FIG. 3 is a flowchart illustrating the control operation of the robot in the present embodiment.
In this example, first, as shown in step S305 of FIG. 3, the ambient environment information of the robot 100 in the real world obtained by the environment sensor 160 of the robot 100 is transmitted to the control device 300 via the control unit 200. The visual information may be a single still image, a plurality of images, or a moving image, and preferably includes depth information. The control device 300 may store the transmitted visual information in the storage unit 340.
Next, as shown in step S310 of FIG. 3, the processor 320 of the control device 300 generates, on the basis of the visual information, a virtual world (simulation space) that reproduces the surrounding environment of the robot 100 and displays it on the display 370 of the control device 300. In the virtual world, in addition to the scenery around the robot 100 in the real world, at least the real-world objects present in the area accessible to the robot 100 are displayed. An object may be represented by a two-dimensional or three-dimensional image of the real-world object obtained by the visual sensor, a depth map, a point cloud, or the like. Alternatively, it may be represented by computer graphics representing the object. In this example, a bolt and a nut are displayed as objects.
Then, as shown in step S315 of FIG. 3, those objects are identified and annotations relating to the functions of each part of the objects are given. As described above, this processing can be performed in three aspects: 1) obtaining ready-made information on a model and the annotations associated with it; 2) scanning the object and annotating it; and 3) creating a model of the object and annotating it.
According to aspect 1), "obtaining ready-made information on a model and the annotations associated with it," the processor 320 of the control device 300 detects the shapes of the objects contained in the visual information sent from the control unit 200 as described above, and identifies the objects contained in the visual information by referring to object shape data stored in the storage unit 340 of the control device 300 or object shape data present on a network to which the control device 300 is connected. The objects can be identified, for example, by referring to feature points of the object shape and consulting a lookup table associated with shape data of known objects, or by comparing the shapes with shape data of known objects using any machine learning algorithm or AI technique. Alternatively, an object can be identified by the user performing an operation to specify the object using the input device 350 of the control device 300. In this operation example, at least a segment, a fixing target to which the segment is to be fixed, a bolt, and a nut are identified as objects.
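As an illustrative aid only (not part of the disclosed embodiment), the following Python sketch shows one way such a lookup against known object shape data could be organized; the feature representation, the table KNOWN_SHAPES, the function identify_object, and the threshold are all hypothetical assumptions. In the embodiment itself, any lookup table, machine learning algorithm, or AI technique may be used instead, as described above.

    import math

    # Hypothetical table of known objects keyed by a simple shape-feature vector
    # (e.g. number of flat faces, number of curved faces, elongation ratio).
    KNOWN_SHAPES = {
        "bolt": (8, 1, 3.5),   # 6 head faces + 2 end faces, 1 cylindrical face, elongated
        "nut":  (8, 2, 0.6),   # 6 outer faces + 2 end faces, 2 cylindrical faces, flat
    }

    def identify_object(features, threshold=1.0):
        """Return the name of the closest known object, or None if nothing is close enough."""
        best_name, best_dist = None, float("inf")
        for name, ref in KNOWN_SHAPES.items():
            dist = math.dist(features, ref)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= threshold else None

    print(identify_object((8, 1, 3.4)))  # -> "bolt"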
Furthermore, the processor 320 annotates the identified objects by referring to, for example, data on the functions of objects stored in the storage unit 340 or data on the functions of objects present on the network to which the control device 300 is connected. Alternatively, the processor 320 can recognize the functions of the parts of an identified object from its shape using any machine learning algorithm or AI technique and annotate the object accordingly.
In this example, the following annotations are given to each of the identified objects (the bolt and the nut).
- Bolt: consists of a head and a shaft. The head has a hexagonal prism shape; its peripheral surfaces are gripped when attaching or detaching the nut; a male thread is formed on the shaft; a nut can be fastened to the shaft (attached by clockwise rotation, removed by counterclockwise rotation).
- Nut: hexagonal prism shape; its outer periphery is gripped when attaching it to or detaching it from the bolt; a female thread is formed on its inner periphery; the female thread can be fastened to the shaft of the bolt (attached by clockwise rotation, removed by counterclockwise rotation).
Note that the robot 100 is an element of the system 1, and the functions of each part of the robot 100 are already known in the system 1. Therefore, at least the hand 122 of the robot 100 is recognized as an object in advance, and the following annotations are stored in the storage unit 240 or the storage unit 340.
- Hand: has a wrist and claws. Objects can be gripped with the claws. The wrist can be rotated.
The annotations given to each object are stored in the data structure shown in Table 1.
(Table 1)
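Because the contents of Table 1 are not reproduced here, the following Python sketch is only an assumed illustration of how a data structure holding per-object and per-part attribute information (annotations) might be organized; the class names PartAnnotation and ObjectAnnotation and all field names are hypothetical.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PartAnnotation:
        part_name: str        # identification information of the part (e.g. "head", "shaft")
        shape: str            # shape information (e.g. "hexagonal prism")
        functions: List[str]  # functional information for the part

    @dataclass
    class ObjectAnnotation:
        object_name: str                              # identification information of the object
        parts: List[PartAnnotation] = field(default_factory=list)

    bolt = ObjectAnnotation("bolt", [
        PartAnnotation("head", "hexagonal prism",
                       ["grip outer periphery when attaching/detaching the nut"]),
        PartAnnotation("shaft", "cylinder",
                       ["male thread", "nut can be fastened (CW to attach, CCW to remove)"]),
    ])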
When the processor 320 does not recognize an object, or recognizes the wrong object, the object can be identified and annotated according to aspect 2), "scanning the object and annotating it." Possible causes of an object not being recognized, or of the wrong object being recognized, include that no data corresponding to the object to be recognized is stored, that the image quality of the object to be recognized is low, and the like.
Here, a method of annotating an object according to aspect 2) above will be described with reference to FIGS. 4 to 6. FIG. 4 is a flowchart showing a method of annotating a scanned object. FIGS. 5 and 6 are diagrams showing scanned objects and the models corresponding to them.
First, the processor 320 of the control device 300 displays the virtual world (simulation space) reproducing the surrounding environment of the robot 100, generated in step S310 of FIG. 3, on the display 370 of the control device 300. At this time, the objects scanned by the environment sensor 160 of the robot 100 in the real world are displayed in the virtual world (step S405 in FIG. 4). In this example, a bolt (FIG. 5(a)) and a nut (FIG. 6(a)) are displayed as objects. In the scenario of aspect 2), however, the system 1 does not yet recognize at this stage that these objects are a bolt and a nut.
Next, a model corresponding to the object is obtained (step S410 in FIG. 4). As one example, the model corresponding to the scanned object can be generated by the user using the input device 350 of the control device 300 to select primitive shape elements from a menu in the screen displayed on the display 370 and combining the primitive shape elements displayed on the display 370 accordingly. The primitive shape elements include, for example, prisms with an arbitrary number of sides, pyramids with an arbitrary number of sides, cylinders, cones, and spheres, and they are stored in the storage unit 340 of the control device 300.
When the object is a bolt (as the user recognizes it to be), as shown in FIG. 5(a), the user uses the input device 350 of the control device 300 to select a hexagonal prism and a cylinder from the menu in the screen displayed on the display 370 as the primitive shape elements for generating the model. The processor 320 of the control device 300 then reads those shape element data from the storage unit 340 and displays them on the display 370. The user then operates the hexagonal prism and the cylinder on the display 370 using the input device 350 to change their sizes, dimensional ratios, and the like as appropriate and joins them to create a bolt model 50 as shown in FIG. 5(b).
Similarly, when the object is a nut (as the user recognizes it to be), as shown in FIG. 6(a), a hexagonal prism and a cylinder are selected as the primitive shape elements in the same manner as above and joined to create a nut model 60 as shown in FIG. 6(b). In the model 60 shown in FIG. 6(b), the cylinder represents the hole formed in the hexagonal prism.
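As a minimal sketch, under the assumption that each primitive shape element can be described by a type and a set of dimensions, the models 50 and 60 just described might be represented in code roughly as follows; the class names, field names, and numerical dimensions are illustrative only.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Primitive:
        kind: str              # "hexagonal_prism", "cylinder", ...
        dimensions: dict       # e.g. {"across_flats": 13.0, "height": 5.5}
        offset: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # placement within the model

    @dataclass
    class Model:
        name: str
        primitives: List[Primitive]

    # Bolt model 50: a hexagonal prism (head) joined to a cylinder (shaft).
    model_50 = Model("bolt", [
        Primitive("hexagonal_prism", {"across_flats": 13.0, "height": 5.5}),
        Primitive("cylinder", {"diameter": 8.0, "length": 20.0}, offset=(0.0, 0.0, -20.0)),
    ])

    # Nut model 60: a hexagonal prism with a cylinder representing the hole.
    model_60 = Model("nut", [
        Primitive("hexagonal_prism", {"across_flats": 13.0, "height": 6.5}),
        Primitive("cylinder", {"diameter": 8.0, "length": 6.5}),  # treated as a hole
    ])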
In the above, an example of reading out primitive shape elements and combining them has been described as one method of generating a model corresponding to an object, but the method of creating a model is not limited to this.
For example, for the object (bolt) shown in FIG. 5(a), a hexagon following the contour of one end face of the hexagonal prism portion (head) can be defined using the input device 350, and by moving that hexagon along the longitudinal direction of the object to the other end face of the hexagonal prism portion, a hexagonal prismatic volume defined by the swept region of the hexagon can be formed. Furthermore, a circle following the contour of one end face of the cylindrical portion (screw shaft) of the object shown in FIG. 5(a) can be defined using the input device 350, and by moving that circle along the longitudinal direction of the object to the other end face of the cylindrical portion, a cylindrical volume defined by the swept region of the circle can be formed. In this way, the model 50 shown in FIG. 5(b) is generated in the virtual world so as to be superimposed on the object shown in FIG. 5(a). The model 60 shown in FIG. 6(b) can be generated by the same procedure.
Alternatively, as another method, when the three-dimensional shape of the object can be recognized by the control device 300, that three-dimensional shape may be used as the model instead of generating a model from primitive shape elements.
Next, the user uses the input device 350 of the control device 300 to give the model displayed on the display 370 a name as identification information of the object (step S415 in FIG. 4). Specifically, the model 50 shown in FIG. 5(b) is given "bolt" as the name of the corresponding object, and the model 60 shown in FIG. 6(b) is given "nut" as the name of the corresponding object. At this time, if detailed information such as a standard is known for the object, that information can also be given (for example, for the bolt, "M8 x 20" indicating the thread diameter and length).
The user can give the model the name of the object by using the input device 350 to refer, on the screen displayed on the display 370, to attribute information of various objects stored in the storage unit 340 of the control device 300 (data on object names, the shapes and configurations of their parts, their functions, and the like) or attribute information of various objects present on the network to which the control device 300 is connected, and selecting the name to be given to the model. When the attribute information of the object cannot be referred to, for example because the object is a new one that does not exist in the storage unit 340, an external database, or the like, the user can, for example, enter attribute information including the name of the object using the input device 350, store it in the storage unit 340, and select that name and give it to the model.
Next, attribute information (annotations) is given to each part of the model created as described above (step S420 in FIG. 4).
In annotating the model 50 shown in FIG. 5(b), for example, the user uses the input device 350 of the control device 300 to select the hexagonal prism portion 50a of the model 50 displayed on the display 370 and gives it the annotation "head," meaning that it is the head of the bolt, as identification information of that part. Furthermore, the six peripheral surfaces of the hexagonal prism portion 50a of the model 50 are designated or selected using the input device 350, and the annotation "grip the outer periphery when attaching or detaching the nut" is given as functional information of that part. This annotation primarily specifies that the outer periphery of the bolt is to be gripped as the gripping region for performing the function of attaching or detaching the nut, but it also means that any part of the bolt may be gripped at other times (for example, when simply moving the bolt).
Furthermore, the input device 350 is used to select the cylindrical portion 50b of the model 50 displayed on the display 370, and the annotation "shaft," indicating that it is the shaft of the bolt, is given as identification information of that part. Furthermore, the outer peripheral surface of the cylindrical portion 50b of the model 50 is designated or selected using the input device 350, and the annotation "male thread; a nut can be fastened (attached by clockwise rotation, removed by counterclockwise rotation)" is given as functional information of that part. Here, for example, if it is desired to prohibit gripping of the shaft on which the male thread is formed, an annotation prohibiting gripping of the shaft may be added as a function of the shaft. This makes it possible to prevent parts that should not be gripped by the robot's hand, such as fragile parts, from being gripped.
Similarly, for the model 60 shown in FIG. 6(b), the outer peripheral portion 60a of the model 60 is selected and given the annotation "outer periphery," meaning that it is the outer periphery of the nut, as identification information of that part. In addition, the outer peripheral portion 60a of the model 60 is given the annotation "grip the outer periphery when attaching it to or detaching it from the bolt" as functional information of that part. As above, this annotation also means that any part of the nut may be gripped at times other than attachment to or detachment from the bolt (for example, when simply moving the nut). Furthermore, the inner peripheral portion 60b of the model 60 is selected and given the annotation "inner periphery," meaning that it is the inner periphery of the nut, as identification information of that part. In addition, the inner peripheral portion 60b of the model 60 is given the annotation "female thread; can be fastened to the shaft of the bolt (attached by clockwise rotation, removed by counterclockwise rotation)" as functional information of that part.
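A minimal sketch of how the identification and functional annotations entered in steps S415 to S420 might be attached to the model parts is given below; the dictionary layout, the helper annotate_part, and the part keys are assumptions introduced for illustration.

    # Hypothetical in-memory representation of the annotated models.
    model_annotations = {}

    def annotate_part(model_name, part_key, identification, functions):
        """Attach identification and functional annotations to one part of a model."""
        model = model_annotations.setdefault(model_name, {})
        model[part_key] = {"identification": identification, "functions": list(functions)}

    annotate_part("bolt", "50a", "head",
                  ["grip the outer periphery when attaching or detaching the nut"])
    annotate_part("bolt", "50b", "shaft",
                  ["male thread", "nut can be fastened (CW to attach, CCW to remove)",
                   "gripping prohibited"])  # optional prohibition annotation
    annotate_part("nut", "60a", "outer periphery",
                  ["grip the outer periphery when attaching to or detaching from the bolt"])
    annotate_part("nut", "60b", "inner periphery",
                  ["female thread", "can be fastened to the bolt shaft (CW to attach, CCW to remove)"])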
Here, as described for step S415, the user can give the model functional information by using the input device 350 to refer, on the screen displayed on the display 370, to attribute information of various objects stored in the storage unit 340 of the control device 300 (data on object names, the shapes and configurations of their parts, and their functions) or attribute information of various objects present on the network to which the control device 300 is connected, and selecting the information to be given to the model. For example, when the functional attributes of a bolt are stored in the storage unit 340, the attribute information (part names and their functions) of the head and the shaft of the bolt can be selected and given to the portions 50a and 50b, respectively. On the other hand, when the attribute information of the object cannot be referred to because the object is a new one that does not exist in the storage unit 340, an external database, or the like, the user can, for example, enter attribute information including the functions of the object using the input device 350, store it in the storage unit 340, and select it and give it to the model.
In this way, the control device 300 also functions as a device for giving attribute information to objects, and attribute information can be given to an object using the control device 300. The attribute information data given to each object is stored in the data structure shown in Table 1.
Subsequently, in the virtual world, the model generated as described above is superimposed on the corresponding object and the model is associated with the object, whereby the attribute information (annotations) is given to that object (step S425 in FIG. 4). In the example shown in FIG. 5, the user uses the input device 350 to select the model 50 shown in FIG. 5(b) on the screen displayed on the display 370 and moves the model 50 within the screen so that its position and orientation coincide with those of the object shown in the same screen (FIG. 5(a)). When the model 50 has been generated in advance in the virtual world so as to be superimposed on the object, this superimposing operation is omitted. The processor 320 then calculates the positional relationship between the contour of the three-dimensional shape of the object and the contour of the three-dimensional shape of the model 50, determines on the basis of a predetermined criterion whether the model 50 has been superimposed on the object, and, when it determines that they have been superimposed, associates the object with the model.
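One possible predetermined criterion, given purely as an assumed example, is to compare the bounding volumes of the object and the model and treat them as superimposed when their overlap ratio exceeds a threshold; the following Python sketch implements such a check with axis-aligned bounding boxes.

    import numpy as np

    def aabb(points):
        """Axis-aligned bounding box of a point cloud given as an (N, 3) array."""
        pts = np.asarray(points, dtype=float)
        return pts.min(axis=0), pts.max(axis=0)

    def is_superimposed(object_points, model_points, iou_threshold=0.8):
        """Treat the model as superimposed on the object when the IoU of their
        bounding boxes exceeds a predetermined threshold."""
        (o_min, o_max), (m_min, m_max) = aabb(object_points), aabb(model_points)
        inter_min, inter_max = np.maximum(o_min, m_min), np.minimum(o_max, m_max)
        inter = np.prod(np.clip(inter_max - inter_min, 0.0, None))
        vol_o = np.prod(o_max - o_min)
        vol_m = np.prod(m_max - m_min)
        union = vol_o + vol_m - inter
        return union > 0 and inter / union >= iou_threshold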
As a result, the processor 320 of the control device 300 recognizes that the object (FIG. 5(a)) is the object corresponding to the annotated model 50, and the processor 320 reproduces a virtual object corresponding to the object identified as described above, for example by computer graphics, and displays it so as to be superimposed on that object on the display 370. The virtual object may be reproduced using, for example, an image of the model (computer graphics). The user can use the input device 350 to perform operations such as moving the virtual object in the virtual world displayed on the display 370.
Note that when the processor 220 of the control unit 200 has already performed the processing of detecting the shapes of the objects contained in the visual information from the environment sensor 160, identifying the objects contained in the visual information, and giving annotations relating to the functions of each part of those objects (that is, when the objects can be identified by referring to the attribute information of objects stored in the storage unit 240 of the control unit 200 or the attribute information of objects present on the network to which the control unit 200 is connected), the processor 320 of the control device 300 does not perform the processing described above but obtains information on the identified objects from the control unit 200 and displays the virtual objects corresponding to those objects in the virtual world.
By giving attribute information (annotations) to an object as described above, the system 1 recognizes that the object in the virtual space displayed on the display 370 of the control device 300 is not merely a body occupying a certain volume of space but represents the object specified by the given attribute information.
The user can move a virtual object in the virtual world displayed on the display 370 using the input device 350. For example, when a tracker is used as the input device 350, by pointing at a virtual object with the tracker and then pressing the trigger button, the user can move that virtual object freely in the virtual world following the movement of the tracker as long as the trigger button is held down. Then, by releasing the trigger button after moving the virtual object to the desired position and orientation in the virtual world, the movement operation of the virtual object can be ended. By operating two trackers simultaneously with both hands, the user can also operate two virtual objects simultaneously in the virtual world.
In the virtual world on the display 370, the hand 122 of the robot 100 can be turned into an object and displayed as a virtual hand, and the virtual hand can be operated with the tracker to move virtual objects. For example, the virtual hand is moved by pointing at it with the tracker and pressing the trigger button. Then, by aligning the claw or finger portion of the virtual hand with the virtual object to be moved and releasing the trigger button being pressed, or pressing another trigger button, the virtual object to be moved can be held by the virtual hand and moved. Thereafter, for example, by moving the virtual hand with the tracker while pressing the trigger button, the virtual object can be moved while being held by the virtual hand. Then, by releasing the trigger button after moving the virtual object to the desired position and orientation in the virtual world, the movement operation of the virtual hand and the virtual object can be ended.
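A rough sketch of the trigger-press/move/release handling described above is shown below as an assumed event-handler skeleton; the event names and the set_pose call are hypothetical and do not correspond to any specific tracker API.

    class GrabController:
        """Hypothetical handler for tracker-based grabbing in the virtual world."""
        def __init__(self):
            self.held_object = None

        def on_trigger_pressed(self, pointed_object):
            if pointed_object is not None:
                self.held_object = pointed_object    # start following the tracker

        def on_tracker_moved(self, tracker_pose):
            if self.held_object is not None:
                self.held_object.set_pose(tracker_pose)  # object follows the tracker while held

        def on_trigger_released(self):
            self.held_object = None                  # leave the object at its current pose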
The manipulation of virtual objects and the virtual hand in the virtual world using the input device 350 as described above, and their display on the display 370, are controlled by the processor 320.
Next, as shown in step S320 of FIG. 3, the user uses the input device 350 to operate each virtual object displayed in the virtual world on the display 370, thereby entering the motions for causing the robot 100 to execute the task. In this example, in order to cause the robot 100 to execute the task of fastening the bolt and the nut together, the following motions are entered.
FIG. 7 is a diagram schematically showing the transitions of the user input operations. As shown in FIG. 7, in the virtual world on the display 370, a virtual bolt and a virtual nut corresponding respectively to the bolt and the nut, which are objects placed in the real world where the robot 100 is present, are displayed in the same arrangement as the bolt and the nut in the real world.
As shown in FIG. 7(a), in the virtual world displayed on the display 370, there are a virtual bolt 70 and a virtual nut 71 placed on a table and two virtual hands 122a_vr and 122b_vr.
First, as shown in FIG. 7(b), the virtual hand 122b_vr of one hand 122b of the robot 100 displayed in the virtual world on the display 370 is moved using the tracker of the input device 350, and, as the motion of grasping the virtual bolt 70 with the virtual hand 122b_vr, the claw portion of the virtual hand 122b_vr is superimposed on the virtual bolt 70 in the virtual world. The head of the bolt has been given an annotation indicating that "the outer periphery is gripped when attaching or detaching the nut (at other times any part may be gripped)," and the claw portion of the hand has been given an annotation indicating that "objects can be gripped." The processor 320 of the control device 300 derives the meaning "grip the bolt with the claws of the hand" on the basis of the relationship between these annotations of the mutually interacting parts of the objects. Accordingly, the processor 320 of the control device 300 identifies, from the input operation of superimposing the claw portion of the virtual hand 122b_vr on the head of the virtual bolt 70, the meaning "grip the bolt 70 with the claws of the hand 122b."
Next, as shown in FIG. 7(c), the virtual hand 122a_vr representing the other hand 122a of the robot 100 is moved using the tracker, and, as the motion of grasping the virtual nut 71 with the virtual hand 122a_vr, the claw portion of the virtual hand 122a_vr is superimposed on the virtual nut 71 in the virtual world. The outer periphery of the nut has been given an annotation indicating that "the outer periphery is gripped when attaching it to or detaching it from the bolt (at other times any part may be gripped)," and the claw portion of the hand has been given an annotation indicating that "objects can be gripped." The processor 320 of the control device 300 derives the meaning "grip the nut with the claws of the hand" on the basis of the relationship between these annotations of the mutually interacting parts of the objects. Accordingly, the processor 320 of the control device 300 identifies, from the input operation of superimposing the claw portion of the virtual hand 122a_vr on the virtual nut 71, the meaning "grip the nut with the claws of the hand 122a."
Then, as shown in FIG. 7(d), the virtual hand 122b_vr is moved while holding the virtual bolt 70 and the virtual hand 122a_vr is moved while holding the virtual nut 71, and, as the motion of fastening the bolt and the nut together, an operation of fitting the virtual nut 71 onto the shaft of the virtual bolt 70 is performed. The shaft of the bolt has been given an annotation indicating that "a nut can be fastened (attached by clockwise rotation, removed by counterclockwise rotation)," and the inner periphery of the nut has been given an annotation indicating that it "can be fastened to the shaft of the bolt (attached by clockwise rotation, removed by counterclockwise rotation)." The processor 320 of the control device 300 derives, on the basis of the relationship between these annotations of the mutually interacting parts of the objects, that "the nut can be attached to the shaft of the bolt." Accordingly, the processor 320 of the control device 300 identifies, from the input operation of fitting the virtual nut 71 onto the shaft of the virtual bolt 70, the meaning "rotate the bolt and the nut relative to each other to fasten them."
Furthermore, once the meaning "rotate the bolt and the nut relative to each other to fasten them" has been identified, "grip the 'outer periphery' of the bolt 70 with the claws of the hand 122b" is identified for the bolt in accordance with the head annotation "grip the outer periphery when attaching or detaching the nut," and "grip the 'outer periphery' of the nut with the claws of the hand 122a" is identified for the nut in accordance with the outer-periphery annotation "grip the outer periphery when attaching it to or detaching it from the bolt."
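The derivation of a motion's meaning from the relationship between the annotations of the interacting parts could, as one assumed implementation, be expressed as a small rule table; the annotation tags and rule entries in the following sketch are illustrative only.

    # Hypothetical rule table mapping pairs of functional annotations of the
    # interacting parts to the meaning of the entered motion.
    MEANING_RULES = {
        ("can_grip_objects", "grippable_for_nut_attachment"): "grip bolt with hand claws",
        ("can_grip_objects", "grippable_for_bolt_attachment"): "grip nut with hand claws",
        ("nut_can_be_fastened", "fastens_to_bolt_shaft"): "rotate bolt and nut to fasten them",
    }

    def derive_meaning(function_a, function_b):
        """Return the meaning derived from the annotations of two interacting parts."""
        return (MEANING_RULES.get((function_a, function_b))
                or MEANING_RULES.get((function_b, function_a)))

    print(derive_meaning("grippable_for_nut_attachment", "can_grip_objects"))
    # -> "grip bolt with hand claws"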
In this way, when the user enters motions for executing a task on the virtual objects in the virtual world, the processor 320 of the control device 300 identifies the parts of the objects that produce an effect in each motion and identifies the meaning of each motion input (the motion to be executed by the robot 100) on the basis of the attribute information on the functions given to those parts. Thus, the attribute information given to each object in a data structure such as that shown in Table 1 is used, through its association with the operations performed on the virtual objects in the virtual world, to identify the motions that the robot 100 performs on the objects in the real world.
FIG. 8 is a flowchart showing a method of identifying the motions that the robot performs on objects in the real world using the attribute information given to the objects.
When the user operates a virtual object in the virtual world displayed on the display 370 of the control device 300 using the input device 350 of the control device 300, a motion to be executed by the robot 100 is entered into the control device 300 (step S805 in FIG. 8).
The processor 320 of the control device 300 identifies, on the basis of the entered motion, the parts of the objects that produce an effect (step S810 in FIG. 8). For example, when an operation of superimposing virtual objects on each other is performed in the virtual world, the superimposed portions of those virtual objects are identified as the parts that produce the effect.
Next, the processor 320 of the control device 300 refers to the functional attributes given to those parts of the objects in the data structure and identifies the meaning of the entered motion on the basis of the relationship between those functional attributes (step S815 in FIG. 8).
This completes the input of the motions for the task of fastening the bolt and the nut shown in step S320 of FIG. 3. The input operations for this series of motions are stored in the storage unit 340 of the control device 300.
Next, as shown in step S325 of FIG. 3, the processor 320 generates, on the basis of the meaning of each motion extracted as described above, control signals to be transmitted to the control unit 200, and the control device 300 transmits them to the control unit 200. In this way, the control device 300 generates the control signals for controlling the robot 100 using the data structure in which the attribute information of the objects is stored.
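As an assumed example of what such a control signal might look like, the following sketch serializes the sequence of derived action meanings into a message for the control unit 200; the JSON message format and field names are hypothetical and not specified by the embodiment.

    import json

    def build_control_signal(derived_actions):
        """Serialize the sequence of derived action meanings into a control signal
        to be sent to the control unit 200 (message format is an assumption)."""
        return json.dumps({"task": "fasten_bolt_and_nut", "actions": derived_actions})

    signal = build_control_signal([
        {"action": "grip", "hand": "122b", "object": "bolt", "region": "outer periphery of head"},
        {"action": "grip", "hand": "122a", "object": "nut", "region": "outer periphery"},
        {"action": "fasten", "rotate": "clockwise", "parts": ["bolt shaft", "nut inner periphery"]},
    ])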
Having received the control signals, the control unit 200 performs motion planning for the robot 100 on the basis of the received control signals and the ambient environment information detected by the environment sensor 160 of the robot 100, generates operation commands to be executed by the robot 100, and causes the robot 100 to execute the task on the basis of those operation commands (step S330 in FIG. 3). As a result, in the real world, the robot 100 executes the task of grasping the bolt and the nut with the hands 122a and 122b and fastening them together.
Next, as shown in step S335 of FIG. 3, while the robot 100 is executing the task or after it has executed the task, the environment sensor 160 of the robot 100 detects the surrounding environment in order to detect the execution result, and the ambient environment information is transmitted to the control unit 200. That ambient environment information is stored in the storage unit 240. Furthermore, the ambient environment information may be transmitted to the control device 300 and stored in the storage unit 340. The received control signals and the generated operation commands can also be stored and accumulated in the storage unit 240, and they may be transmitted to the control device 300 and stored and accumulated in the storage unit 340.
The processor 220 can improve the quality of the operation commands it generates by using any machine learning algorithm or AI technique to learn by comparing the control signals received from the control device 300 and the operation commands generated by the processor 220 with the ambient environment information obtained after task execution. By accumulating generated and executed operation commands together with their results and learning from them, the processor 220 becomes able to select, for example, motions that are less likely to fail or motions that require less movement.
On the other hand, when such information is transmitted to the control device 300, the learning can also be performed by the processor 320 of the control device. In this case, the processor 320 can also take the learning results into account when generating the control signals. Furthermore, a user who sees the task execution result can make use of it to improve the input operations the next time they are performed in the virtual world. The control unit 200 and the control device 300 may exchange and share the learning result data of the control unit 200 and the learning result data of the control device 300 with each other.
With the above, the control operation of the robot in this example is completed.
As described above, according to the present embodiment, attribute information (annotations) including the functions and the like of each part of an object in the real world is given to that part, and the attribute information is stored in a data structure such as that shown in Table 1. The attribute information given to each object is used, through its association with the motions performed on the virtual objects in the virtual world, to identify the motions that the robot 100 performs on the objects in the real world. Since the user can perform instruction operations intuitively in the virtual world, attribute information (annotations) can easily be given to objects.
The attribute information can include identification information of the object represented by the model (the name of the object and the like), identification information indicating at least one part of the model (the names of the parts of the object and the like), and information on at least one part of the model (the shape, function, and the like of each part of the object), but the attribute information given to an object need not include all of these and may include only some of them. For example, it is also possible to give an object only shape information on at least one part of the model as the attribute information.
In the description of the present embodiment, the robot 100 having an arm with a hand has been illustrated as the form of the robot, but the form of the robot controlled by the present invention is not limited to this; for example, the robot may be a vehicle, a ship, a submersible, a drone, a construction machine (power shovel, bulldozer, excavator, crane, or the like), or the like. In addition to those described in the present embodiment, the environments and applications in which a robot that can be operated using the system of the present embodiment is used include a wide variety of environments and applications such as space development, mining, excavation, resource extraction, agriculture, forestry, fisheries, livestock farming, search and rescue, disaster support, disaster recovery, humanitarian assistance, explosive ordnance disposal, removal of obstacles on a route, disaster monitoring, and security monitoring. The objects manipulated by the robot of the present embodiment vary depending on the environment and application in which the robot is used. As one example, when a power shovel is used as the robot, the soil, sand, and the like to be dug are also objects.
(Other objects)
Here, examples of other objects to which attribute information (annotations) is given by the method of the present embodiment are shown.
 [First example]
 FIG. 9(a) shows a model corresponding to a faucet object. This model can be generated by steps S405 to S410 of the method shown in FIG. 4. The model is given "faucet" as the identification information of the object, and the handle 90 and the spout 91 as the identification information of its respective parts (step S415 in FIG. 4). Furthermore, function information is assigned to each part of the model (step S420 in FIG. 4). For example, the following functions are assigned as the function information of each part of the model.
 - Handle 90: rotatable; rotation angle from 0° to 180°.
 - Spout 91: discharges water; discharge rate proportional to the rotation angle of the handle.
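 Purely for illustration, the assignments above could be collected into a structure such as the following (a sketch assuming Python dictionaries; the key names are not taken from the specification):

```python
# Annotation of the faucet assembled from steps S415 and S420 above.
faucet_annotation = {
    "object_id": "faucet",                                  # step S415: object name
    "parts": {
        "handle_90": {                                      # step S415: part name
            "functions": ["rotatable", "rotation angle 0-180 deg"],   # step S420
        },
        "spout_91": {
            "functions": ["discharges water",
                          "discharge rate proportional to handle rotation angle"],
        },
    },
}
```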
 By superimposing the model to which attribute information has been assigned in this way on the object displayed in the virtual world and associating the two, the attribute information held by the model can be assigned to that object. As a result, for example, when the user inputs, in the virtual world, an action that operates the handle of the virtual faucet object together with information on the required amount of water, the robot 100 can be instructed to rotate the handle of the real-world faucet to the rotation angle corresponding to that amount of water.
 For the handle, a function expressing the relationship between the rotation angle and the torque required to rotate the handle, for rotation from the closed state to the open state and vice versa, may also be assigned as an annotation. That function can be modified as appropriate for later optimization. Likewise, for the spout annotation "discharge rate proportional to the rotation angle of the handle", the proportionality function itself may be assigned as an annotation, and a proportionality function once specified may be modified later as appropriate in order to optimize it.
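 Where such functional relationships are annotated, they could be held as parameterized functions so that they can be replaced or re-tuned later. The sketch below assumes Python; the coefficients are purely illustrative and are not values given in the specification.

```python
def make_discharge_fn(k: float):
    """Discharge rate [L/s] proportional to the handle rotation angle [deg]."""
    def discharge(angle_deg: float) -> float:
        angle_deg = max(0.0, min(angle_deg, 180.0))   # clamp to the annotated range
        return k * angle_deg
    return discharge

def make_torque_fn(base: float, slope: float):
    """Torque [N*m] required to rotate the handle, as a function of angle [deg]."""
    def torque(angle_deg: float) -> float:
        return base + slope * angle_deg
    return torque

# Initial guesses; either function may later be replaced to optimize it.
faucet_functions = {
    "spout_91":  {"discharge_fn": make_discharge_fn(k=0.05)},
    "handle_90": {"open_torque_fn":  make_torque_fn(base=0.60, slope=0.002),
                  "close_torque_fn": make_torque_fn(base=0.45, slope=0.002)},
}

# Handle angle needed for a requested 3.0 L/s under the current proportionality:
required_angle_deg = 3.0 / 0.05    # 60 degrees
```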
 FIG. 9(b) shows a model corresponding to a cupboard object. This model can also be generated by steps S405 to S410 of the method shown in FIG. 4. The model is given "cupboard" as the identification information of the object, and the top plate 95, the drawer 96, the right door 97, and the left door 98 as the identification information of its respective parts (step S415 in FIG. 4). Furthermore, function information is assigned to each part of the model (step S420 in FIG. 4). For example, the following functions are assigned as the function information of each part of the model.
 - Top plate 95: objects can be placed on top of it.
 - Drawer 96: has a handle; can be pulled out toward the user by grasping the handle; objects can be stored inside when pulled out.
 - Right door 97: has a handle; can be opened and closed about the hinge on its right side by grasping the handle; objects can be stored inside when it is open.
 - Left door 98: has a handle; can be opened and closed about the hinge on its left side by grasping the handle; objects can be stored inside when it is open.
 By superimposing the model to which attribute information has been assigned in this way on the object in the virtual world and associating the two, the attribute information held by the model can be assigned to that object. As a result, for example, when the user inputs, in the virtual world, an action that acts on the handle of the closed right door of the virtual cupboard object, the robot 100 can be instructed to grasp the handle of the right door of the real-world cupboard and open the right door about its hinge.
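 One way the association between a user action on an annotated part and the resulting robot instruction could be resolved is sketched below in Python; the dictionary layout and function names are assumptions for illustration and are not an interface defined by this specification.

```python
# Part annotations of the cupboard (abbreviated from the list above).
cupboard_parts = {
    "right_door_97": {"handle": True, "hinge": "right side"},
    "left_door_98":  {"handle": True, "hinge": "left side"},
    "drawer_96":     {"handle": True, "slides": "toward the user"},
}

def interpret_action(part_id: str, user_action: str) -> str:
    """Map a user action on an annotated part to a high-level robot instruction."""
    info = cupboard_parts[part_id]
    if user_action == "open" and info.get("handle"):
        if "hinge" in info:
            return (f"grasp the handle of {part_id} and swing it open "
                    f"about the {info['hinge']} hinge")
        if "slides" in info:
            return f"grasp the handle of {part_id} and pull it {info['slides']}"
    return f"no annotated function supports '{user_action}' on {part_id}"

print(interpret_action("right_door_97", "open"))
```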
 Furthermore, by assigning attribute information to each part of an object, the system 1 can continue to recognize a virtual object even when its parts move in the virtual world and the overall shape of the virtual object is deformed. For example, the overall shape of a cupboard such as the one shown in FIG. 10 changes depending on how far the drawer and doors are opened, but the system 1 can recognize the virtual object as a cupboard regardless of the change in overall shape.
 Table 2 shows a data structure that stores the attribute information assigned to the faucet and cupboard virtual objects, respectively.
[Table 2]
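 Table 2 itself is not reproduced here. Purely as an illustration of the kind of structure it describes, the annotations listed above for the two objects could be kept together in a single store such as the following (layout assumed, not taken from the table).

```python
annotation_store = {
    "faucet": {
        "handle_90": {"functions": ["rotatable", "rotation angle 0-180 deg"]},
        "spout_91":  {"functions": ["discharges water",
                                    "discharge rate proportional to handle angle"]},
    },
    "cupboard": {
        "top_plate_95":  {"functions": ["objects can be placed on top"]},
        "drawer_96":     {"functions": ["has handle", "pulls out toward the user",
                                        "stores objects when pulled out"]},
        "right_door_97": {"functions": ["has handle", "opens about right-side hinge",
                                        "stores objects when open"]},
        "left_door_98":  {"functions": ["has handle", "opens about left-side hinge",
                                        "stores objects when open"]},
    },
}
```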
 [Second example]
 FIG. 10 shows the robot (unmanned submersible) used in this example and a pipe that is the target of underwater work. As shown in FIG. 10, the robot 100 used in this example takes the form of an unmanned submersible and has a robot arm 120 with a robot hand 122 at its tip and a housing 140 on which the robot arm 120 is mounted. The housing 140 is provided with a plurality of thrusters (not shown) that allow the robot 100 to move underwater in the left-right direction (X-axis direction), the front-rear direction (Y-axis direction), and the up-down direction (Z-axis direction), and to rotate about each of the X, Y, and Z axes. These thrusters are composed of, for example, propellers driven by electric motors. Although not explicitly shown in the figure, the housing 140 is provided with the environment sensors and the transmission/reception unit described with reference to FIG. 2 and elsewhere. In particular, the housing 140 is equipped with at least a visual sensor as an environment sensor, so that visual information on the surroundings of the robot 100 (in particular, the environment in front of the robot 100 including the robot arm 120 and the hand 122) can be acquired. The other components of the robot (unmanned submersible) 100 used in this example are the same as those described above with reference to FIG. 2, so a detailed description is omitted here. The system configuration and control method used in this example are also the same as those described above. This example focuses on the points that are characteristic of the task of grasping an underwater pipe with the robot hand at the tip of the robot arm of a robot in the form of an unmanned submersible.
 Note that FIG. 10 shows a virtual world generated on the basis of the environment information acquired by the environment sensors of the robot (unmanned submersible) 100. The shape and function of each part of the robot (unmanned submersible) 100 are modeled and stored in advance at least in the storage unit 340 of the control device 300 and are therefore known to the system 1; the modeled robot (unmanned submersible) 100 is therefore displayed in the virtual world. The pipe 40 in the virtual world, on the other hand, is displayed in a state reconstructed from the environment information acquired by the environment sensors of the robot (unmanned submersible) 100. Because the pipe 40 has been captured by the environment sensors of the unmanned submersible 100 from only one particular direction, it is reconstructed with the shape recognizable from that direction, and the portion on the opposite side is not reconstructed. In FIG. 10, the shape of the left-hand portion of the pipe 40 in the drawing is reconstructed, while the right-hand portion of the pipe 40 in the drawing is displayed in a missing state.
 FIG. 11 is a diagram showing how an annotation (attribute information) is assigned to the pipe shown in FIG. 10.
 FIG. 11 shows the virtual world generated by the control device 300 in accordance with steps S305 and S310 shown in FIG. 3. In this example, the user assigns an annotation to the pipe 40 displayed in the virtual world on the display 370 in accordance with the mode "2) Scan an object and assign an annotation" described above, and the annotation is stored in the storage unit 340 of the control device 300.
 More specifically, first, the scanned object, the pipe 40, is displayed on the display 370 as shown in FIG. 11(a) (step S405 in FIG. 4). Next, on the UI screen shown on the display 370, the user uses the tracker 350 (the corresponding virtual tracker 350_vr is shown in FIG. 11) to generate a cylindrical model (FIG. 11(b)), moves it so that it is superimposed on the pipe 40 in the virtual world shown on the display 370 (FIG. 11(c)), and adjusts its diameter and length (FIG. 11(d)), thereby obtaining a model corresponding to the pipe 40 (step S410 in FIG. 4). The model is given "pipe" as the identification information of the object and "outer circumference" as the identification information of that part (step S415 in FIG. 4). Furthermore, function information (shape and function) is assigned to that part (step S420 in FIG. 4). For example, the following function is assigned as the function information of the part of the model.
 - Outer circumference: cylindrical; can be grasped by the robot hand.
 The annotation assigned to the pipe 40 is stored in the data structure shown in Table 3.
[Table 3]
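 The cylinder primitive that is overlaid on the pipe and the resulting annotation could be represented as sketched below; the parameter names and numerical values are illustrative assumptions, not data from the specification.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CylinderModel:
    """Primitive model generated with the tracker and fitted to the scanned pipe."""
    diameter_m: float
    length_m: float
    position_xyz: Tuple[float, float, float]     # placement in the virtual world
    orientation_rpy: Tuple[float, float, float]  # roll, pitch, yaw

# Fitted interactively in the UI (FIG. 11(b)-(d)); the numbers are placeholders.
pipe_model = CylinderModel(diameter_m=0.20, length_m=1.50,
                           position_xyz=(2.1, 0.4, -3.0),
                           orientation_rpy=(0.0, 0.0, 1.57))

pipe_annotation = {
    "object_id": "pipe",
    "parts": {
        "outer_circumference": {
            "shape": "cylinder",
            "model": pipe_model,
            "functions": ["graspable by the robot hand"],
        },
    },
}
```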
 By superimposing the model to which attribute information has been assigned in this way on the object (the pipe 40) displayed in the virtual world and associating the two, the attribute information held by the model can be assigned to that object. The pipe 40 to which attribute information has been assigned in this way is displayed in the virtual world as a virtual pipe by the computer-graphics representation of the model. This makes it possible, for example, to instruct an operation of grasping the virtual object of the pipe 40 (the virtual pipe 40_vr) with the robot hand 122 of the robot (unmanned submersible) 100.
 [Third example]
 FIG. 12 is a diagram showing a mug 80 placed on a table.
 The mug 80 has a handle 82 and a main body 84. The handle 82 of the mug 80 can be grasped by the hand 122 provided on the arm 120 of the robot 100 described with reference to FIGS. 1 and 2; with the hand 122 grasping the handle 82, the robot arm 120 can, for example, move the mug 80 on the table.
 FIG. 13 is a diagram showing how an annotation (attribute information) is assigned to the mug 80 on the table.
 FIG. 13 shows the virtual world generated by the control device 300 in accordance with steps S305 and S310 shown in FIG. 3. In this example, the user assigns an annotation to the mug 80 displayed in the virtual world on the display 370 in accordance with the mode "2) Scan an object and assign an annotation" described above, and the annotation is stored in the storage unit 340 of the control device 300.
 More specifically, first, the scanned object, the mug 80, is displayed on the display 370 as shown in FIG. 13(a) (step S405 in FIG. 4). Next, on the UI screen shown on the display 370, the user uses the tracker 350 (the corresponding virtual tracker 350_vr is shown in FIG. 13) to generate a cylindrical model, moves it so that it is superimposed on the main body 84 of the mug 80 in the virtual world shown on the display 370 (FIG. 13(a)), and adjusts its diameter and length (FIG. 13(b)), thereby obtaining a model corresponding to the main body 84 (step S410 in FIG. 4).
 Similarly, the user uses the tracker 350 (the virtual tracker 350_vr shown in FIG. 13) to generate a rectangular-parallelepiped model corresponding to the handle 82 of the mug 80 and moves it so that it is superimposed on the handle 82 of the mug 80 in the virtual world shown on the display 370 (FIG. 13(d)), thereby obtaining a model corresponding to the handle 82 (step S410 in FIG. 4).
 The model is given "mug" as the identification information of the object, and the handle 82 and the main body 84 as the identification information of its respective parts (step S415 in FIG. 4). Furthermore, function information is assigned to each part of the model (step S420 in FIG. 4). For example, the following functions are assigned as the function information of each part of the model.
 - Handle: rectangular-parallelepiped shape; can be grasped by the robot hand.
 - Main body: cylindrical shape.
 The annotation assigned to the mug 80 is stored in the data structure shown in Table 4.
[Table 4]
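 Analogously, the two primitives fitted to the mug and the annotation of Table 4 could be captured as follows; this is a sketch with assumed names and placeholder dimensions, not the actual table contents.

```python
mug_annotation = {
    "object_id": "mug",
    "parts": {
        "handle_82": {
            "shape": "rectangular parallelepiped",   # box primitive fitted in FIG. 13(d)
            "size_m": (0.015, 0.05, 0.08),           # placeholder dimensions
            "functions": ["graspable by the robot hand"],
        },
        "body_84": {
            "shape": "cylinder",                     # primitive fitted in FIG. 13(a)-(b)
            "diameter_m": 0.09,                      # placeholder dimensions
            "length_m": 0.10,
            "functions": [],
        },
    },
}

# If only the grasp-and-move task matters, the body entry could be omitted and
# only the handle annotated.
grasp_target = mug_annotation["parts"]["handle_82"]
```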
 By superimposing the model to which attribute information has been assigned in this way on the object (the mug 80) displayed in the virtual world and associating the two, the attribute information held by the model can be assigned to that object. The mug 80 to which attribute information has been assigned in this way is displayed in the virtual world as a virtual mug by the computer-graphics representation of the model. This makes it possible, for example, to instruct, in the virtual world, an operation of grasping and moving the virtual object of the mug 80 with the robot hand 122 of the robot 100 in the virtual world (FIG. 14(a)), and, in accordance with that instruction, to have the mug 80 in the real world grasped and moved by the robot hand 122 of the robot 100 in the real world (FIG. 14(b)).
 In this example, attribute information is assigned to both the handle 82 and the main body 84 of the mug 80. However, if the purpose is, for example, to grasp the handle 82 with the robot hand 122 of the robot 100 and move the mug 80, attribute information may be assigned only to the handle 82, which is a part of the mug 80.
 Various examples of objects to which attribute information (annotations) is assigned by the method of this embodiment have been described, but the objects to which attribute information can be assigned by the method of this embodiment are not limited to the above; attribute information can be assigned to any object.
 For example, for an electric fan with "low", "medium", and "high" airflow buttons, function information such as "can be switched on/off by pressing; airflow 'low' when on", "can be switched on/off by pressing; airflow 'medium' when on", and "can be switched on/off by pressing; airflow 'high' when on" can be assigned to the respective buttons. Alternatively, for a fan with an airflow dial that changes the airflow according to its rotation angle, function information such as "rotatable; rotation angle from 0° to 120°; airflow proportional to the rotation angle" can be assigned to the dial.
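 The two annotation styles described above for the fan (discrete buttons versus a continuous dial) could be expressed, for example, as follows; the structure is an assumption for illustration only.

```python
fan_with_buttons = {
    "object_id": "fan",
    "parts": {
        "button_low":    {"functions": ["press toggles on/off", "airflow 'low' when on"]},
        "button_medium": {"functions": ["press toggles on/off", "airflow 'medium' when on"]},
        "button_high":   {"functions": ["press toggles on/off", "airflow 'high' when on"]},
    },
}

fan_with_dial = {
    "object_id": "fan",
    "parts": {
        "airflow_dial": {
            "functions": ["rotatable"],
            "rotation_range_deg": (0.0, 120.0),
            # airflow fraction proportional to the dial angle within its range
            "airflow_fn": lambda angle_deg: max(0.0, min(angle_deg, 120.0)) / 120.0,
        },
    },
}

print(fan_with_dial["parts"]["airflow_dial"]["airflow_fn"](60.0))   # 0.5
```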
 Furthermore, for a car with a plurality of doors, function information such as "has a doorknob; can be opened and closed about its hinge by grasping the doorknob" can be assigned to those parts.
 In addition, by also assigning information such as weight, load, coefficient of friction, and center of gravity to each part of an object as annotations, appropriate operation can be performed when the object itself or each of its parts is moved dynamically.
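 Such dynamic properties could simply be added as further fields of each part's annotation, for instance as below; the field names and values are assumptions, and the force calculation is only a rough sketch that treats the annotated friction coefficient as simple sliding friction.

```python
drawer_annotation = {
    "part_id": "drawer_96",
    "functions": ["has handle", "pulls out toward the user"],
    # Dynamic properties used when the part is moved (values are illustrative):
    "mass_kg": 2.4,
    "max_load_kg": 5.0,
    "friction_coefficient": 0.35,
    "center_of_gravity_m": (0.0, 0.15, -0.02),   # relative to the part's model origin
}

def pull_force_estimate(annotation: dict, payload_kg: float = 0.0) -> float:
    """Very rough estimate of the force [N] needed to start pulling the drawer."""
    g = 9.81
    normal_force = (annotation["mass_kg"] + payload_kg) * g
    return annotation["friction_coefficient"] * normal_force

print(round(pull_force_estimate(drawer_annotation, payload_kg=1.0), 1))   # ~11.7 N
```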
 The attribute information (annotations) assigned to objects can be stored in data structures such as those shown in Tables 1 to 4.
 Although the present invention has been described above through embodiments of the invention, the above-described embodiments do not limit the invention according to the claims. Forms combining the features described in the embodiments of the present invention may also be included in the technical scope of the present invention. It will also be apparent to those skilled in the art that various changes or improvements can be made to the above embodiments.

Claims (14)

  1.  A method of assigning attribute information to an object, the method comprising:
     generating a virtual world that reproduces a real-world environment;
     obtaining, in the virtual world, a model corresponding to an object to which attribute information is to be assigned; and
     assigning, to the model, attribute information including information about at least one part of the model.
  2.  The method according to claim 1, wherein the attribute information further includes identification information of the object represented by the model and identification information indicating the at least one part of the model.
  3.  The method according to claim 1 or 2, further comprising:
     associating, in the virtual world, the object with the model to assign the attribute information to the object.
  4.  The method according to claim 3, wherein the association between the object and the model is performed by superimposing the model on the object in the virtual world.
  5.  A device for assigning attribute information to an object, the device comprising a processor configured to execute:
     generating a virtual world that reproduces a real-world environment;
     obtaining, in the virtual world, a model corresponding to an object to which attribute information is to be assigned; and
     assigning, to the model, attribute information including information about at least one part of the model.
  6.  The device according to claim 5, wherein the attribute information further includes identification information of the object represented by the model and identification information indicating the at least one part of the model.
  7.  The device according to claim 5 or 6, wherein the processor is further configured to execute:
     associating, in the virtual world, the object with the model to assign the attribute information to the object.
  8.  The device according to claim 7, wherein the association between the object and the model is performed by superimposing the model on the object in the virtual world.
  9.  A data structure for storing attribute information of an object,
     wherein the attribute information includes information about at least one part of the object.
  10.  The data structure according to claim 9, wherein the attribute information further includes identification information of the object represented by the model and identification information indicating at least one part of the model.
  11.  The data structure according to claim 9 or 10, wherein the data structure is used, in a control device that generates a control signal for controlling a robot, for generating the control signal.
  12.  The data structure according to claim 11, wherein the control device is configured to generate a virtual world including a virtual object corresponding to the object, accept an action input by a user on the virtual object in the virtual world, and identify the part of the object on which the action input causes an action, and
     the data structure is used in the control device to identify the meaning of the action input on the basis of the information about the part and to generate the control signal on the basis of the meaning of the action input.
  13.  A computer program executable by a processor, the computer program including instructions for carrying out the method according to any one of claims 1 to 4.
  14.  A non-transitory computer-readable medium containing a computer program stored on the medium and executable by a processor, the computer program including instructions for carrying out the method according to any one of claims 1 to 4.
PCT/JP2020/017745 2019-04-26 2020-04-24 Method and device for assigning attribute information to object WO2020218533A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-086655 2019-04-26
JP2019086655 2019-04-26

Publications (1)

Publication Number Publication Date
WO2020218533A1 true WO2020218533A1 (en) 2020-10-29

Family

ID=72942795

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/017745 WO2020218533A1 (en) 2019-04-26 2020-04-24 Method and device for assigning attribute information to object

Country Status (1)

Country Link
WO (1) WO2020218533A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013218535A (en) * 2012-04-09 2013-10-24 Crescent Inc Method and device for displaying finger integrated into cg image in three-dimensionally modeled cg image and wide viewing angle head mount display device for displaying three-dimensionally modeled cg image


Similar Documents

Publication Publication Date Title
US11691273B2 (en) Generating a model for an object encountered by a robot
Fong et al. Novel interfaces for remote driving: gesture, haptic, and PDA
Chen et al. Human performance issues and user interface design for teleoperated robots
Abi-Farraj et al. A haptic shared-control architecture for guided multi-target robotic grasping
JP2020017264A (en) Bidirectional real-time 3d interactive operations of real-time 3d virtual objects within real-time 3d virtual world representing real world
Xu et al. Visual-haptic aid teleoperation based on 3-D environment modeling and updating
Reddivari et al. Teleoperation control of Baxter robot using body motion tracking
Negrello et al. Humanoids at work: The walk-man robot in a postearthquake scenario
US10105847B1 (en) Detecting and responding to geometric changes to robots
WO2020203793A1 (en) Robot control device, control unit, and robot control system including same
WO2020218533A1 (en) Method and device for assigning attribute information to object
Pryor et al. A Virtual Reality Planning Environment for High-Risk, High-Latency Teleoperation
KR20230138487A (en) Object-based robot control
WO2020235539A1 (en) Method and device for specifying position and posture of object
WO2021049147A1 (en) Information processing device, information processing method, information processing program, and control device
CN110807971B (en) Submersible mechanical arm control virtual reality training system based on optical-inertial fusion positioning
Lwowski et al. The utilization of virtual reality as a system of systems research tool
Zhang et al. A visual tele-operation system for the humanoid robot BHR-02
Bai et al. Kinect-based hand tracking for first-person-perspective robotic arm teleoperation
Zhu et al. Fusing multiple sensors information into mixed reality-based user interface for robot teleoperation
CN116423515B (en) Digital twin control system of multiple robots and positioning and mapping method thereof
US11731278B1 (en) Robot teleoperation using mobile device motion sensors and web standards
Park Supervisory control of robot manipulator for gross motions
Chen et al. Object detection for a mobile robot using mixed reality
Maheshwari et al. DESIGN OF ADVANCE ROBOT FOR SURVEILLANCE AND RISK-FREE MOVEMENT: A SIMULATION APPROACH.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20796044

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20796044

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP