WO2022254609A1 - Information processing device, mobile object, information processing method, and program - Google Patents
- Publication number
- WO2022254609A1 (PCT/JP2021/021001)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- orientation
- controlled
- space
- unit
- Prior art date
Links
- 230000010365 information processing Effects 0.000 title claims abstract description 155
- 238000003672 processing method Methods 0.000 title claims description 5
- 239000013598 vector Substances 0.000 claims description 85
- 238000001514 detection method Methods 0.000 claims description 68
- 239000011159 matrix material Substances 0.000 claims description 50
- 238000011156 evaluation Methods 0.000 claims description 10
- 230000004807 localization Effects 0.000 claims description 3
- 238000013507 mapping Methods 0.000 claims description 3
- 238000000034 method Methods 0.000 description 45
- 239000003550 marker Substances 0.000 description 31
- 238000012545 processing Methods 0.000 description 23
- 238000010586 diagram Methods 0.000 description 22
- 230000008569 process Effects 0.000 description 18
- 230000008859 change Effects 0.000 description 17
- 230000006870 function Effects 0.000 description 13
- 238000004891 communication Methods 0.000 description 10
- 238000003384 imaging method Methods 0.000 description 8
- 230000005540 biological transmission Effects 0.000 description 4
- 238000005457 optimization Methods 0.000 description 4
- 238000005401 electroluminescence Methods 0.000 description 2
- 238000011056 performance test Methods 0.000 description 2
- 230000007704 transition Effects 0.000 description 2
- 238000007796 conventional method Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000012804 iterative process Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000012827 research and development Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
Images
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
Definitions
- the present invention relates to an information processing device, a mobile object, an information processing method, and a program.
- an information processing device is known that controls a robot, detects the relative position and orientation between an object and the robot in the space in which the robot is located, and generates map information indicating a map of the space based on the detected position and orientation (see Non-Patent Document 1).
- Other devices that generate map information representing such maps are also known.
- However, the arrangement of multiple objects in the space is often changed from the initial arrangement.
- In that case, map information indicating an inaccurate map may be generated.
- An object of the present invention is to provide an information processing device, a mobile object, an information processing method, and a program that can generate map information with high accuracy.
- An information processing device that controls a controlled device, comprising: a storage unit that stores layout constraint condition information including information indicating the relative position between a first object arranged in a space in which the controlled device is located and a second object arranged in the space and different from the first object; and a control unit that generates map information indicating a map of the space based on the layout constraint condition information and at least one of first object-device relative position information, which indicates the position of the controlled device and the relative position between the first object and the controlled device, and second object-device relative position information, which indicates the relative position between the second object and the controlled device.
- the control unit includes: a first estimation unit that estimates the position of the controlled device based on a predetermined initial value; an acquisition unit that acquires at least one of the first object-device relative position information and the second object-device relative position information as object-device relative position information; and a generation unit that generates map information indicating a map of the space based on the layout constraint condition information stored in the storage unit, the object-device relative position information acquired by the acquisition unit, and the position of the controlled device estimated by the first estimation unit.
- the control unit further includes a second estimation unit that, based on an output from a detection unit that detects at least one of the first object and the second object, estimates at least one of the relative position between the first object and the controlled device and the relative position between the second object and the controlled device, and the acquisition unit acquires information indicating the estimated relative position or positions from the second estimation unit as the object-device relative position information.
- the generation unit generates an information matrix and an information vector in Graph-SLAM (Simultaneous Localization and Mapping) based on the object-device relative position information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position of the controlled device estimated by the first estimation unit, and generates the map information by optimizing an evaluation function based on the generated information matrix and information vector.
- the generation unit may estimate the positions of the first object and the second object based on the object-device relative position information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position of the controlled device estimated by the first estimation unit, and generate the information matrix and the information vector based on the estimated positions of the first object and the second object.
- the acquisition unit may acquire both the first object-device relative position information and the second object-device relative position information as the object-device relative position information.
- the controlled device is a mobile body capable of changing at least one of its own position and orientation.
- a moving body comprising the information processing device described above as the controlled device.
- An information processing method in which a computer performs: a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, including information indicating the relative position between a first object arranged in a space in which a controlled device is located and a second object arranged in the space and different from the first object; and a generating step of generating map information indicating a map of the space based on the layout constraint condition information read in the reading step and at least one of first object-device relative position information, which indicates the position of the controlled device and the relative position between the first object and the controlled device, and second object-device relative position information, which indicates the relative position between the second object and the controlled device.
- An information processing device that controls a controlled device, comprising: a storage unit that stores layout constraint condition information including information indicating the relative orientation between a first object arranged in a space in which the controlled device is located and a second object arranged in the space and different from the first object; and a control unit that generates map information indicating a map of the space based on the layout constraint condition information and at least one of first object-device relative orientation information, which indicates the orientation of the controlled device and the relative orientation between the first object and the controlled device, and second object-device relative orientation information, which indicates the relative orientation between the second object and the controlled device.
- a moving body comprising the information processing device described above as the controlled device.
- An information processing method in which a computer performs: a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, including information indicating the relative orientation between a first object arranged in a space in which a controlled device is located and a second object arranged in the space and different from the first object; and a generating step of generating map information indicating a map of the space based on the layout constraint condition information read in the reading step and at least one of first object-device relative orientation information, which indicates the orientation of the controlled device and the relative orientation between the first object and the controlled device, and second object-device relative orientation information, which indicates the relative orientation between the second object and the controlled device.
- A program that causes a computer to execute the reading step and the generating step described above.
- FIG. 3 is a diagram illustrating an example of a hardware configuration of an information processing device 30;
- FIG. 3 is a diagram showing an example of a functional configuration of an information processing device 30;
- FIG. 10 is an image diagram of the vector shown in Equation (4);
- FIG. 4 is a diagram showing an example of the flow of processing in which the information processing device 30 generates an information matrix and an information vector;
- FIG. 4 is a diagram showing an example of the flow of processing in which the information processing device 30 generates map information;
- FIG. 3 is a diagram showing an example of map information generated by an information processing device 30;
- FIG. 4 is a diagram showing another example of map information generated by the information processing device 30;
- FIG. 8 is a diagram showing still another example of map information generated by the information processing device 30;
- FIG. 9 is a diagram showing yet another example of map information generated by the information processing device 30;
- It is a diagram showing another example of the configuration of the control system 1 according to an embodiment.
- FIG. 10 is a diagram showing an example of an object M1 provided with three markers;
- FIG. 1 is a diagram showing an example of the configuration of a control system 1 according to an embodiment.
- the control system 1 includes a control target device 10 to be controlled, a detection unit 20 provided in the control target device 10, and an information processing device 30.
- the control target device 10, the detection unit 20, and the information processing device 30 are configured separately.
- some or all of the controlled device 10, the detection unit 20, and the information processing device 30 may be integrally configured.
- the information processing device 30 may be configured to be communicably connected to each of the control target device 10 and the detection unit 20 via a network such as a LAN (Local Area Network), a WAN (Wide Area Network), or a dedicated communication network.
- the control system 1 generates map information indicating a map of the space in which the controlled device 10 is located.
- the space in which the controlled device 10 is located will be referred to as a target space R.
- the target space R is, for example, the space in a room where the control target device 10 is located, but is not limited to this; it may be another space such as underwater, in the air, or in outer space.
- the target space R is a space in which a plurality of objects are arranged. At least some of the plurality of objects arranged in the target space R are grouped into one or more groups according to their uses. Note that one or more of these groups are configured by combining two or more objects. Therefore, the plurality of objects arranged in the target space R may include one or more objects that are not grouped together.
- In the following, the case where the plurality of objects arranged in the target space R are the four objects M1 to M4 shown in FIG. 1 will be described.
- the case where the objects M1 and M2 are grouped into the group G1 and the objects M3 and M4 are grouped into the group G2 will be described. That is, in the example shown in FIG. 1, the four objects arranged in the target space R are grouped into two groups, and none of the four objects is ungrouped.
- The four objects in the target space R are grouped into two groups because, even if the arrangement of these four objects is changed from the initial arrangement, the relative positions and orientations of the objects within each group are often retained. For example, when the objects M1 and M2 grouped into the group G1 are connected side by side in the target space R, as long as the uses of the objects M1 and M2 are not changed, any change from the initial arrangement is made while the relative position and orientation of the objects M1 and M2 are maintained.
- the state in which the object M1 and the object M2 are connected side by side is a state suitable for the application of the object M1 and the object M2.
- Objects used in this manner include, for example, work desks, shelves, and the like.
- In other words, within each group the arrangement of the objects, that is, the layout of the objects, does not substantially change or is constrained not to change. Therefore, in each group, the relative positions and orientations of the objects can be used as a constraint condition that constrains the arrangement of the objects so as not to change when generating map information indicating a map of the target space R.
- the control system 1 generates map information indicating a map within the target space R based on the layout constraint condition information for each group within the target space R. That is, the control system 1 generates the map information based on the layout constraint information of the group G1 and the layout constraint condition information of the group G2.
- In the following, the layout constraint condition information of the group G1 and the layout constraint condition information of the group G2 will be collectively referred to as layout constraint information unless it is necessary to distinguish them.
- The constraints as described above may be referred to as layout constraints, and the corresponding constraint conditions may be referred to as layout constraint conditions.
- The layout constraint condition information for a certain group in the target space R is information indicating a constraint condition that constrains the two or more objects grouped in that group so that their arrangement does not change. More specifically, it includes, as the constraint condition, information indicating the relative positions and orientations of those objects. That is, the layout constraint condition information of the group G1 includes, as such a constraint condition, information indicating the relative positions and orientations of the objects M1 and M2.
- In the following, a case will be described where the layout constraint condition information of the group G1 includes, in addition to the information indicating the relative positions and orientations of the objects M1 and M2, information indicating the value of the allowable error in those relative positions and orientations.
- Similarly, the layout constraint condition information of the group G2 includes, as a constraint condition, information indicating the relative positions and orientations of the objects M3 and M4.
- In the following, a case will be described where the layout constraint condition information of the group G2 also includes information indicating the value of the permitted error in the relative positions and orientations of the objects M3 and M4.
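The layout constraint condition information described above can be sketched as a simple data structure. The following Python sketch is illustrative only; the class name, field names, and numeric values are assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class LayoutConstraint:
    """Layout constraint condition information for one group (illustrative)."""
    object_a: str          # e.g. "M1"
    object_b: str          # e.g. "M2"
    relative_pose: tuple   # (dx, dy, dtheta) of object_b in object_a's frame
    allowed_error: float   # permitted deviation; its reciprocal is the strength

# Group G1: objects M1 and M2 connected side by side (values are assumptions).
g1 = LayoutConstraint("M1", "M2", (1.0, 0.0, 0.0), allowed_error=0.05)
# Group G2: objects M3 and M4.
g2 = LayoutConstraint("M3", "M4", (0.0, 1.2, 0.0), allowed_error=0.05)

strength_g1 = 1.0 / g1.allowed_error  # constraint strength used below: 20.0
```

Keeping the allowed error alongside the relative pose lets the same record supply both the constraint value and its weight when the information matrix is built.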
- As a result, the control system 1 can accurately generate map information indicating a map of the target space R even if the layout of the plurality of objects in the target space R has been changed from the initial layout. This is because at least part of the arrangement of the plurality of objects after the change can be estimated from the layout constraint information.
- In the following, the case where the control system 1 generates map information indicating a map of the target space R based on a Graph-SLAM (Simultaneous Localization and Mapping) algorithm using the layout constraint condition information for each group is described.
- Since the conventional Graph-SLAM algorithm is well known, detailed description beyond what is necessary will be omitted.
- control system 1 may be configured to generate the map information based on another type of SLAM using the layout constraint condition information for each group. Further, the control system 1 may be configured to generate the map information based on an algorithm other than the SLAM algorithm as an algorithm using the layout constraint information.
- the algorithm may be a known algorithm or an algorithm to be developed as long as it is an algorithm capable of generating map information.
- the Graph-SLAM algorithm using the layout constraint information for each group will be referred to as layout constraint Graph-SLAM below.
- layout-constrained Graph-SLAM will be introduced while comparing it with a Graph-SLAM algorithm different from layout-constrained Graph-SLAM (for example, a conventional Graph-SLAM algorithm, etc.).
- When map information is generated based on a Graph-SLAM algorithm different from layout-constrained Graph-SLAM, the control system 1 needs to detect all objects in the target space R with the detection unit 20.
- In that case, the control system 1 estimates the relative positions and orientations between all the objects in the target space R and the controlled device 10, and performs the addition (update) of each element of the information matrix and the addition (update) of each element of the information vector in the Graph-SLAM algorithm.
- In contrast, the control system 1 can add up each element of the information matrix and each element of the information vector in layout-constrained Graph-SLAM by detecting only some of the objects in the target space R with the detection unit 20. That is, in this case, by detecting at least one of the objects M1 and M2 and at least one of the objects M3 and M4 among the four objects M1 to M4, the control system 1 can perform the addition of each element of the information matrix and of the information vector in layout-constrained Graph-SLAM.
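The addition of information-matrix and information-vector elements described above can be illustrated with a minimal one-dimensional Graph-SLAM sketch. The reduction to one dimension, the function name, and the numeric values are assumptions for illustration; the point is that a layout constraint between two objects is accumulated exactly like an ordinary detection, with its weight taken as the reciprocal of the allowed error:

```python
import numpy as np

def add_relative_constraint(omega, xi, i, j, z, w):
    """Accumulate a 1-D relative constraint x_j - x_i = z with weight w.

    This is the standard Graph-SLAM update of the information matrix omega
    and information vector xi; a layout constraint is added the same way
    as a detection, with w = 1 / allowed_error.
    """
    omega[i, i] += w
    omega[j, j] += w
    omega[i, j] -= w
    omega[j, i] -= w
    xi[i] -= w * z
    xi[j] += w * z

# Nodes: 0 = controlled device, 1 = object M1, 2 = object M2.
omega = np.zeros((3, 3))
xi = np.zeros(3)
omega[0, 0] += 1.0                                     # anchor the device at 0
add_relative_constraint(omega, xi, 0, 1, 2.0, w=1.0)   # detection of M1
add_relative_constraint(omega, xi, 1, 2, 1.0, w=20.0)  # layout constraint M1-M2
mu = np.linalg.solve(omega, xi)                        # -> [0.0, 2.0, 3.0]
```

Note that the position of M2 is recovered without M2 ever being detected, which is the benefit the passage above describes: the layout constraint substitutes for the missing detection.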
- Next, the control target device 10, the detection unit 20, and the information processing device 30 included in the control system 1 will be described in detail.
- the control target device 10 is controlled by the information processing device 30 .
- the control target device 10 is, for example, a moving object such as a drone, a movable robot, or an AGV (Automatic Guided Vehicle), but may be an immovable device controlled by the information processing device 30 .
- the controlled device 10 may be a device that is carried by an animal such as a person or a dog (that is, it may be a non-self-propelled device). In the following, as an example, a case where the controlled device 10 is a drone as shown in FIG. 1 will be described.
- When the controlled device 10 is a robot, it may be a robot that moves with propellers, legs, wheels, or caterpillar tracks, a robot that moves by moving its housing like a snake-shaped robot, another type of movable robot, an immovable robot, another immovable device equipped with the detection unit 20 described later, or the detection unit 20 itself. However, even if the controlled device 10 is an immovable device, it is a device whose movement amount can be detected when carried by a person. This movement amount may be detected by the detection unit 20 or by a sensor different from the detection unit 20. Note that the controlled device 10 may be configured to include the information processing device 30. In the following, as an example, a case where the controlled device 10 does not include the information processing device 30 will be described.
- the detection unit 20 may be any device as long as it can detect objects in the target space R.
- In the following, the case where the detection unit 20 is an imaging device (for example, a camera) will be described. That is, the detection unit 20 in this example detects an object in the target space R by imaging it.
- the detection unit 20 captures an image of an imageable range in accordance with control from the information processing device 30 .
- the detection unit 20 outputs the captured image to the information processing device 30 .
- Note that the detection unit 20 may be another device capable of detecting an object, such as a LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) sensor or a ToF (Time of Flight) sensor.
- In this example, each of the objects M1 to M4 imaged by the detection unit 20 (that is, detected by the detection unit 20) is provided with a marker as shown in FIG. 1.
- The object M1 is provided with a marker MKR1 that includes a first marker in which identification information for identifying the object M1 is encoded as first encoded information, and a second marker in which information indicating the relative position and orientation between the object M1 and the detection unit 20 that detects the object M1 is encoded as second encoded information.
- the marker MKR1 may be a known marker or a marker to be developed in the future.
- The object M2 is provided with a marker MKR2 that includes a first marker in which identification information for identifying the object M2 is encoded as first encoded information, and a second marker in which information indicating the relative position and orientation between the object M2 and the detection unit 20 that detects the object M2 is encoded as second encoded information.
- the marker MKR2 may be a known marker or a marker to be developed in the future.
- The object M3 is provided with a marker MKR3 that includes a first marker in which identification information for identifying the object M3 is encoded as first encoded information, and a second marker in which information indicating the relative position and orientation between the object M3 and the detection unit 20 that detects the object M3 is encoded as second encoded information.
- the marker MKR3 may be a known marker or a marker to be developed in the future.
- The object M4 is provided with a marker MKR4 that includes a first marker in which identification information for identifying the object M4 is encoded as first encoded information, and a second marker in which information indicating the relative position and orientation between the object M4 and the detection unit 20 that detects the object M4 is encoded as second encoded information.
- the marker MKR4 may be a known marker or a marker to be developed in the future.
- When there is no need to distinguish the markers MKR1 to MKR4, they will be collectively referred to as the marker MKR.
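The second encoded information of a marker gives the relative position and orientation between an object and the detection unit 20; composing it with the pose of the controlled device yields the object's pose in the world coordinate system. The following 2-D homogeneous-transform sketch is illustrative only (the function name and the numeric poses are assumptions):

```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """3x3 homogeneous transform for a 2-D pose (x, y, heading theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Pose of the controlled device in the world coordinate system.
T_world_device = pose_to_matrix(1.0, 0.0, np.pi / 2)
# Relative pose of object M1, as decoded from its marker's second encoded info.
T_device_m1 = pose_to_matrix(2.0, 1.0, 0.0)

# World pose of the object is the composition of the two transforms.
T_world_m1 = T_world_device @ T_device_m1
x, y = T_world_m1[0, 2], T_world_m1[1, 2]
```

The actual system would use 3-D transforms and a real marker decoder; the composition rule is the same.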
- the detection unit 20 is provided in the controlled device 10 in this example, as described above. Therefore, the range that can be imaged by the detection unit 20 changes as the control target device 10 moves. That is, the detection unit 20 can capture an image of a range corresponding to the position and orientation of the control target device 10 . Note that the detection unit 20 may be provided in the target space R so as to be able to capture at least part of the target space R instead of being provided in the control target device 10 .
- the detection unit 20 captures a still image.
- the detection unit 20 may be configured to capture a moving image.
- the captured image described in this embodiment can be replaced by each frame that constitutes the moving image captured by the detection unit 20 .
- the information processing device 30 is, for example, a multifunctional mobile phone terminal (smartphone).
- the information processing device 30 may be a tablet PC (Personal Computer), a notebook PC, a PDA (Personal Digital Assistant), a mobile phone terminal, a desktop PC, a workstation, or other information processing device instead of a multifunctional mobile phone terminal.
- the information processing device 30 controls the control target device 10 .
- the information processing device 30 moves the controlled device 10 along a predetermined trajectory based on a pre-stored program. Further, for example, the information processing device 30 moves the control-target device 10 according to the received operation.
- For example, the information processing device 30 controls the detection unit 20 while moving the control target device 10 along a predetermined trajectory, causing the detection unit 20 to capture an image of its imageable range each time a predetermined sampling period elapses.
- the information processing device 30 acquires the captured image captured by the detection unit 20 from the detection unit 20 .
- each captured image captured by the detection unit 20 is associated with time information indicating the time when the captured image was captured.
- Based on a plurality of captured images captured by the detection unit 20 while the controlled device 10 moves along the predetermined trajectory, and on the layout constraint condition information for each group stored in advance, the information processing device 30 adds up each element of the information matrix in layout-constrained Graph-SLAM and each element of the information vector in layout-constrained Graph-SLAM.
- the information processing device 30 optimizes the evaluation function in the layout constrained Graph-SLAM, and estimates the position and orientation of each of the four objects in the target space R in the world coordinate system.
- the information processing device 30 can generate map information indicating a map within the target space R based on the estimation results of the positions and orientations of the four objects within the target space R in the world coordinate system.
- the method of generating map information in layout-constrained Graph-SLAM may be the same as the method of generating map information in the conventional Graph-SLAM algorithm, or may be a method that will be developed in the future.
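In the linearized Graph-SLAM formulation, the evaluation function to be optimized is commonly the quadratic form J(mu) = 0.5 * mu^T Omega mu - xi^T mu, whose minimizer satisfies Omega mu = xi. The patent does not give the explicit function, so the following numeric check is only a sketch of that standard form:

```python
import numpy as np

# Information matrix and vector of a small, already-accumulated problem.
omega = np.array([[2.0, -1.0],
                  [-1.0, 1.0]])
xi = np.array([0.0, 1.0])

def J(x):
    """Quadratic evaluation function of linearized Graph-SLAM."""
    return 0.5 * x @ omega @ x - xi @ x

mu = np.linalg.solve(omega, xi)  # minimizer of J

# Any perturbation of mu increases J, confirming mu is the optimum.
assert J(mu + np.array([0.1, -0.1])) > J(mu)
```

Since the information matrix is positive definite, solving the linear system and minimizing the evaluation function are the same operation, which is why the estimation step reduces to a single solve.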
- the information processing device 30 generates map information indicating a map within the target space R based on the layout constraint Graph-SLAM.
- the information processing device 30 can accurately generate map information indicating a map within the target space R even if the layout of the four objects within the target space R has been changed from the initial layout.
- the information processing device 30 can, for example, cause the control-target device 10 to perform highly accurate work.
- the world coordinate system is a three-dimensional orthogonal coordinate system for indicating the position and orientation in the actual target space R, for example, a three-dimensional orthogonal coordinate system associated with the target space R.
- The information matrix in layout-constrained Graph-SLAM contains values indicating the strength with which the constraint condition constrains the arrangement of the objects M1 and M2 so that it does not change.
- This strength value is the reciprocal of the error value indicated by the layout constraint condition information of the group G1. That is, the larger the error value, the smaller the strength value, and the smaller the error value, the larger the strength value.
- The larger the value indicating the strength with which the constraint condition constrains the arrangement of the objects M1 and M2, the less likely the relative position and orientation of the objects M1 and M2 are to change in the generation of map information in layout-constrained Graph-SLAM.
- Likewise, the value indicating the strength with which the constraint condition constrains the arrangement of the objects M3 and M4 is the reciprocal of the error value indicated by the layout constraint condition information of the group G2. That is, the larger the error value, the smaller the strength value, and the smaller the error value, the larger the strength value.
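The effect of the reciprocal weighting can be checked with a one-dimensional example (all values illustrative): when a detection and a layout constraint disagree, the fused estimate is pulled toward whichever has the smaller allowed error, that is, the larger strength:

```python
def weighted_estimate(z_detect, w_detect, z_layout, w_layout):
    """Fuse a detection and a layout constraint on the same 1-D quantity.

    Minimizing w_detect*(x - z_detect)**2 + w_layout*(x - z_layout)**2
    gives the information-weighted mean of the two values.
    """
    return (w_detect * z_detect + w_layout * z_layout) / (w_detect + w_layout)

# Detection says the M1-M2 offset is 1.2; the layout constraint says 1.0.
loose = weighted_estimate(1.2, 1.0, 1.0, 1.0 / 0.5)    # allowed error 0.5
tight = weighted_estimate(1.2, 1.0, 1.0, 1.0 / 0.05)   # allowed error 0.05
# The smaller allowed error (larger strength) pulls the estimate toward 1.0.
```

This mirrors the statement above: a large strength value makes the relative pose in a group resistant to change during map generation.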
- adding up each element of the information vector in the layout constrained Graph-SLAM is the same as in the conventional Graph-SLAM algorithm. Therefore, a detailed description of adding up each element of the information vector in the layout constrained Graph-SLAM is omitted.
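As a minimal illustration of the relationship described above, the strength value of a layout constraint can be sketched as the reciprocal of the allowed error value. This is a sketch only; the function name is an illustrative assumption, not taken from the patent.

```python
# Sketch (assumption): the strength with which a layout constraint holds
# two objects together is modeled as the reciprocal of the error value
# stored in the group's layout constraint condition information.

def constraint_strength(error_value: float) -> float:
    """Return the strength value for a layout constraint.

    A larger allowed error means a weaker constraint, so the strength
    is the reciprocal of the error value.
    """
    if error_value <= 0.0:
        raise ValueError("error value must be positive")
    return 1.0 / error_value

# A loose constraint (large error) yields a small strength value,
# and a tight constraint (small error) yields a large strength value.
loose = constraint_strength(10.0)
tight = constraint_strength(0.1)
assert tight > loose
```

A large strength value makes the relative position and orientation of the constrained objects harder to change during map generation, matching the behavior described above.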
- the relative position and orientation between an object and the controlled device 10 will be referred to as the inter-object-apparatus relative position and orientation of the object.
- the relative position and orientation of two objects will be referred to as the inter-object relative position and orientation of the two objects.
- FIG. 2 is a diagram showing an example of the hardware configuration of the information processing device 30.
- the information processing device 30 includes, for example, a CPU (Central Processing Unit) 31, a storage unit 32, an input reception unit 33, a communication unit 34, and a display unit 35. These components are communicatively connected to each other via a bus.
- the information processing device 30 also communicates with the control target device 10 and the detection unit 20 via the communication unit 34 .
- the CPU 31 is, for example, a processor that controls the entire information processing device 30 .
- the CPU 31 may be another processor such as an FPGA (Field Programmable Gate Array).
- the CPU 31 executes various programs stored in the storage unit 32.
- the storage unit 32 includes, for example, an HDD (Hard Disk Drive), an SSD (Solid State Drive), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a ROM (Read-Only Memory), a RAM (Random Access Memory), and the like. Note that the storage unit 32 may be an external storage device connected via a digital input/output port such as USB (Universal Serial Bus) instead of being built into the information processing device 30.
- the storage unit 32 stores various information, various images, various programs, and the like processed by the information processing device 30 . For example, the storage unit 32 stores the layout constraint condition information for each group described above.
- the input reception unit 33 is an input device such as a keyboard, mouse, and touch pad. Note that the input reception unit 33 may be a touch panel configured integrally with the display unit 35 .
- the communication unit 34 includes, for example, an antenna, a digital input/output port such as USB, an Ethernet (registered trademark) port, and the like.
- the display unit 35 is a display device including, for example, a liquid crystal display panel or an organic EL (Electro Luminescence) display panel.
- FIG. 3 is a diagram showing an example of the functional configuration of the information processing device 30.
- the information processing device 30 includes a storage unit 32, an input reception unit 33, a communication unit 34, a display unit 35, and a control unit 36.
- the control unit 36 controls the entire information processing device 30 .
- the control unit 36 includes, for example, an imaging control unit 361, an image acquisition unit 362, a first estimation unit 363, a second estimation unit 364, an acquisition unit 365, a generation unit 366, and a moving body control unit 367.
- These functional units included in the control unit 36 are implemented by the CPU 31 executing various programs stored in the storage unit 32, for example.
- some or all of the functional units may be hardware functional units such as an LSI (Large Scale Integration) or an ASIC (Application Specific Integrated Circuit). In addition, some or all of them may be integrated.
- each of some or all of the imaging control unit 361, the image acquisition unit 362, the first estimation unit 363, the second estimation unit 364, the acquisition unit 365, the generation unit 366, and the moving body control unit 367 may be divided into two or more functional units.
- the imaging control unit 361 causes the detection unit 20 to image a range that the detection unit 20 can image.
- the image acquisition unit 362 acquires the captured image captured by the detection unit 20 from the detection unit 20 .
- the first estimation unit 363 estimates the position and orientation of the control-target device 10 in the world coordinate system for each time when the detection unit 20 captures an image.
- the position and orientation of the controlled device 10 in the world coordinate system will be referred to as the controlled device position and orientation.
- the second estimating unit 364 estimates the inter-object-apparatus relative position and orientation for each of the one or more objects, among the four objects in the target space R, captured in the captured image.
- the obtaining unit 365 obtains from the second estimating unit 364 inter-object-apparatus relative position and orientation information indicating each of the one or more inter-object-apparatus relative positions and orientations estimated by the second estimating unit 364 .
- the generation unit 366 adds up the elements of the information matrix and the elements of the information vector based on the layout constraint condition information stored in advance in the storage unit 32, the inter-object-apparatus relative position and orientation information acquired by the acquisition unit 365, and the controlled device position and orientation estimated by the first estimation unit 363. Then, the generation unit 366 optimizes the evaluation function based on the information matrix and the information vector after the elements are added up, thereby estimating the position and orientation in the world coordinate system of each of the four objects in the target space R. Then, the generation unit 366 generates map information indicating a map within the target space R based on those estimation results.
- the moving body control unit 367 controls the controlled device 10.
- the moving body control unit 367 moves the controlled device 10 along a predetermined trajectory based on an operation program pre-stored in the storage unit 32 . Further, for example, the moving body control unit 367 moves the controlled device 10 according to the received operation.
- each of the four objects in the target space R will be identified by the value of the variable k.
- the value of the variable l identifies the other object whose relative position and orientation to the object identified by the value of k are constrained by the layout constraint condition information.
- the value of the variable l is any value from 1 to 4 other than the value of k.
- time is represented by t as an example.
- x^y means that x is accompanied by y as a superscript.
- x_y means that x is accompanied by y as a subscript.
- x ⁇ y_z means that x is accompanied by y as a superscript and that x is accompanied by z as a subscript.
- each of these x, y, and z may be any alphabet or a sequence of two or more alphabets enclosed in braces.
- xyz^{jkl} means xyz with jkl as a superscript.
- xyz_{jkl} means xyz with jkl as a subscript.
- x ⁇ y_z ⁇ means that x is accompanied by y_z as a superscript.
- x_ ⁇ y_z ⁇ means that x is accompanied by y_z as a subscript.
- in the following, a case will be described where a position in the world coordinate system is represented by coordinates on the xy plane in the world coordinate system.
- in the following, an attitude in the world coordinate system is represented by the azimuth angle in the world coordinate system.
- position in the world coordinate system may be represented by three-dimensional coordinates in the world coordinate system.
- orientation in the world coordinate system may be represented by Euler angles in the world coordinate system.
- the position and orientation of an object in the world coordinate system can be calculated based on the following equations (1) and (2), using the controlled device position and orientation at time t and the inter-object-apparatus relative position and orientation of the object at time t.
- the position and orientation of an object in the world coordinate system will be referred to as the object position and orientation of the object.
- m_k is a vector whose elements are three values indicating the object position and orientation for the object identified by k.
- r_t is a vector indicating the position and orientation of the controlled device at time t.
- r_t is represented by equation (2) above.
- x_rt is the x-coordinate indicating the position of the controlled device in the world coordinate system at time t.
- y_rt is the y-coordinate indicating the position of the controlled device 10 in the world coordinate system at time t.
- ⁇ _rt is an azimuth angle indicating the attitude of the controlled device 10 in the world coordinate system at time t.
- Z_k_t is a vector representing observations about the object.
- Z_k_t is a vector representing the object-apparatus relative position and orientation of the object estimated based on the captured image captured by the detection unit 20 at time t.
- Z_k_t is explicitly represented by equation (3) above.
- x^{r_t}_{m_k}, which is the first element on the right side of Equation (3), is the x-coordinate indicating the relative position between the object and the controlled device 10 at time t. In other words, x^{r_t}_{m_k} is the x-coordinate indicating the relative position of the object with respect to the position of the controlled device 10 at time t.
- y^{r_t}_{m_k}, which is the second element on the right side of Equation (3), is the y-coordinate indicating the relative position between the object and the controlled device 10 at time t. In other words, y^{r_t}_{m_k} is the y-coordinate indicating the relative position of the object with respect to the position of the controlled device 10 at time t.
- θ^{r_t}_{m_k}, which is the third element on the right side of Equation (3), is the azimuth angle indicating the relative attitude between the object and the controlled device 10 at time t. In other words, θ^{r_t}_{m_k} is the azimuth angle indicating the relative attitude of the object with respect to the attitude of the controlled device 10 at time t.
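The role of equations (1) to (3) can be illustrated with a short sketch: the object position and orientation in the world coordinate system is obtained by composing the controlled device position and orientation r_t with the relative observation z_k_t. The function name and the planar pose convention (x, y, azimuth) are assumptions for illustration, not the patent's exact formulas.

```python
import math

# Sketch (assumption): the world-frame position and orientation of an
# object (m_k) is obtained from the controlled device's world-frame
# pose at time t (r_t = [x, y, theta]) and the relative observation
# z_k_t = [x_rel, y_rel, theta_rel] measured in the device's frame.

def object_world_pose(r_t, z_k_t):
    x_r, y_r, th_r = r_t
    x_rel, y_rel, th_rel = z_k_t
    # Rotate the relative position into the world frame, then translate.
    x_m = x_r + math.cos(th_r) * x_rel - math.sin(th_r) * y_rel
    y_m = y_r + math.sin(th_r) * x_rel + math.cos(th_r) * y_rel
    # Azimuth angles add; wrap the result into (-pi, pi].
    th_m = math.atan2(math.sin(th_r + th_rel), math.cos(th_r + th_rel))
    return (x_m, y_m, th_m)

# A device at the origin facing +x that sees an object 2 m straight ahead:
print(object_world_pose((0.0, 0.0, 0.0), (2.0, 0.0, 0.0)))  # (2.0, 0.0, 0.0)
```

The same composition, read in reverse, relates an object's world pose back to the relative observation that the detection unit 20 provides.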
- the inter-object relative position/orientation between the object identified by the k value and the object identified by the l value is specified by the layout constraint information for each group stored in advance in the storage unit 32 .
- Layout constraint information including the inter-object relative position/orientation information can be represented by the following equations (4) and (5).
- p^{k_l} on the left side of equation (4) is a vector indicating the layout constraint condition information that includes, as a constraint condition, information indicating the inter-object relative position and orientation between the object identified by the value of k and the object identified by the value of l. More specifically, it is the layout constraint condition information including, as the constraint condition, information indicating the relative position and orientation of the object identified by the value of k with respect to the position and orientation of the object identified by the value of l.
- FIG. 4 is an image diagram of the vector shown in Equation (4).
- Object MK shown in FIG. 4 shows an example of an object identified by the value of k.
- Object ML shown in FIG. 4 represents an example of an object identified by the value of l.
- the vector is represented as a vector indicating the position of the object ML from the position of the object MK.
- the position of the object MK is represented by the position of the centroid of the object MK as an example.
- the position of the object ML is represented by the position of the centroid of the object ML as an example.
- this vector actually has, as elements, values indicating the strength with which the arrangement of the object MK and the object ML is constrained by a constraint condition that constrains the arrangement of the object MK and the object ML so that the arrangement of the object MK and the object ML is not changed.
- the first element on the right-hand side of equation (4) indicates the value of the allowed error in the position and orientation of the object identified by the value of k relative to the position and orientation of the object identified by the value of l (this value can be a single value, a vector, or a matrix).
- the second element on the right side of equation (4), m^{k_l}, is a vector indicating the relative position and orientation of the object identified by the value of k with respect to the position and orientation of the object identified by the value of l. That is, the second element is a vector indicating the inter-object relative position and orientation of these two objects.
- the second element on the right side of Equation (4) is expressed as Equation (5).
- the second element on the right side of equation (5), y^{m_k}_{m_l}, is the y-coordinate indicating the relative position of the object identified by the value of k with respect to the position of the object identified by the value of l.
- the third element on the right side of equation (5), θ^{m_k}_{m_l}, is the azimuth angle indicating the relative attitude of the object identified by the value of k with respect to the attitude of the object identified by the value of l.
- if the object position and orientation of the object identified by a certain value of k, the layout constraint condition information expressed by Equation (4), and the object position and orientation of the object identified by the value of l have been calculated, the elements relating these two objects can be added to the information matrix Ω and the information vector ξ as in Equation (7) below.
- the elements explicitly shown in the information matrix Ω in equation (7) above are the elements relating to m_l and m_k in the information matrix Ω, that is, the elements relating the object identified by the value of k and the object identified by the value of l.
- this element is the reciprocal of the first element of the vector representing the layout constraint condition information. That is, this element is the value indicating the strength with which the arrangement of the object identified by the value of k and the object identified by the value of l is constrained so as not to change.
- the elements explicitly shown in the information vector ξ in the above equation (7) are the elements relating to m_l and m_k in the information vector ξ, and are m_l and m_k themselves.
- the information processing device 30 can add up each element of the information matrix Ω and each element of the information vector ξ based on the above equations (1) to (7) and the captured image captured by the detection unit 20 at time t.
- the information processing device 30 performs such addition based on the captured image captured by the detection unit 20 at each time. Thereby, the information processing device 30 can generate the information matrix Ω and the information vector ξ in the layout constrained Graph-SLAM.
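The accumulation step can be sketched as follows. This uses the standard pairwise Graph-SLAM pattern with the strength value (the reciprocal of the error value) as the weight; the block layout, the information-vector contribution, and the function name are assumptions for illustration, and the patent's exact equation (7) is not reproduced here.

```python
import numpy as np

# Sketch (assumption): a pairwise layout constraint between two objects
# contributes weighted blocks to the information matrix and information
# vector. Each object's pose occupies a 3-element block (x, y, azimuth)
# of the state vector.

def add_layout_constraint(omega, xi, idx_k, idx_l, error_value, m_k, m_l):
    """Accumulate one pairwise layout constraint into omega and xi."""
    w = 1.0 / error_value              # strength = reciprocal of the error
    sk, sl = 3 * idx_k, 3 * idx_l
    W = w * np.eye(3)
    # Diagonal and off-diagonal blocks coupling the two object poses.
    omega[sk:sk+3, sk:sk+3] += W
    omega[sl:sl+3, sl:sl+3] += W
    omega[sk:sk+3, sl:sl+3] -= W
    omega[sl:sl+3, sk:sk+3] -= W
    # Corresponding information-vector contributions.
    xi[sk:sk+3] += w * (np.asarray(m_k) - np.asarray(m_l))
    xi[sl:sl+3] += w * (np.asarray(m_l) - np.asarray(m_k))

n_objects = 2
omega = np.zeros((3 * n_objects, 3 * n_objects))
xi = np.zeros(3 * n_objects)
add_layout_constraint(omega, xi, 0, 1, error_value=0.5,
                      m_k=[1.0, 0.0, 0.0], m_l=[0.0, 0.0, 0.0])
```

A small error value produces a large weight, so the off-diagonal blocks coupling the two objects dominate and the optimizer keeps their relative pose nearly fixed.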
- the position and orientation in the world coordinate system of the object identified by the value of k and the object position and orientation of the object identified by the value of l are calculated based on the above equations (1) and (2).
- the position and orientation calculated based on Equations (1) and (2) are referred to as a first estimated position and orientation.
- the position and orientation calculated based on Equation (6) for the object identified by the value of l will be referred to as the second estimated position and orientation.
- a difference vector between the vector representing the first estimated position and orientation and the vector representing the second estimated position and orientation can be used to calculate a quantity representing the magnitude of the deviation between the first estimated position and orientation and the second estimated position and orientation.
- such a quantity is, for example, the inner product of the difference vector with itself, which serves as a measure of the distance between the first estimated position and orientation and the second estimated position and orientation.
- the left side of the above equation (8) indicates the aforementioned inner product.
- the first term in each of the two parentheses on the right side of Equation (8) is a vector representing the second estimated position and orientation.
- the first term is calculated based on Equation (9).
- the second term in each of the two parentheses on the right side of Equation (8), that is, m_l, is a vector representing the first estimated position and orientation.
- the above-described first threshold is, for example, an estimation error for the second estimated position and orientation.
- This estimation error may be estimated by any method.
- the first threshold may be another value instead of the estimated error.
- the inner product is calculated repeatedly. Therefore, when the proportion of calculations in which the inner product exceeds the first threshold exceeds a predetermined second threshold, it is desirable to replace the elements relating m_l and m_k in the above information matrix Ω with 0.
- the second threshold is a value between 0.0 and 1.0.
- the closer the second threshold is to 1.0, the less likely the layout constrained Graph-SLAM is to reflect a change in the arrangement between objects in the result, even if the arrangement between the objects has been changed.
- conversely, the closer the second threshold is to 0.0, the more easily the layout constrained Graph-SLAM reflects a change in the arrangement between objects in the result.
- in the following, the process of replacing the elements relating two objects with 0 in this way will be referred to as the layout canceling process.
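The layout canceling decision described above can be sketched as follows. Since the second threshold is stated to lie between 0.0 and 1.0, this sketch treats it as the allowed ratio of threshold exceedances; the class and method names are illustrative assumptions.

```python
# Sketch (assumption): the squared deviation between the first and second
# estimated positions and orientations is computed as the inner product of
# the difference vector with itself, compared against the first threshold,
# and the constraint is canceled when exceedances become too frequent.

class LayoutCancelMonitor:
    def __init__(self, first_threshold: float, second_threshold: float):
        self.first_threshold = first_threshold    # e.g. an estimation error
        self.second_threshold = second_threshold  # ratio in [0.0, 1.0]
        self.checks = 0
        self.exceedances = 0

    def observe(self, diff_vector) -> bool:
        """Record one comparison; return True when the constraint between
        the two objects should be canceled (their elements in the
        information matrix replaced with 0)."""
        inner = sum(d * d for d in diff_vector)  # inner product with itself
        self.checks += 1
        if inner > self.first_threshold:
            self.exceedances += 1
        return (self.exceedances / self.checks) > self.second_threshold

monitor = LayoutCancelMonitor(first_threshold=0.25, second_threshold=0.5)
monitor.observe([0.1, 0.0, 0.0])           # small deviation: keep constraint
monitor.observe([1.0, 1.0, 0.0])           # large deviation: 1 of 2 exceed
cancel = monitor.observe([1.0, 0.0, 0.5])  # 2 of 3 exceed > 0.5: cancel
```

A second threshold near 1.0 makes cancellation rare (the constraint is sticky), while a threshold near 0.0 lets a single large deviation trigger cancellation, matching the trade-off described above.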
- FIG. 5 is a diagram showing an example of the flow of processing for the information processing device 30 to generate the information matrix ⁇ and the information vector ⁇ .
- the second estimating unit 364 reads the layout constraint condition information for each group stored in advance in the storage unit 32 from the storage unit 32 (step S110).
- the control unit 36 selects, in order of earliest capture time, one of the captured images obtained in advance from the detection unit 20 as a target captured image, and repeats the processing from step S130 to step S170 for each selected image (step S120).
- the second estimation unit 364 determines whether or not at least one of the four objects in the target space R is captured in the selected target captured image (step S130). That is, in step S130, the second estimation unit 364 determines whether or not at least one of the four objects in the target space R is detected based on the selected target captured image. In FIG. 5, the process of step S130 is indicated by "object detection?".
- when the second estimating unit 364 determines that none of the four objects in the target space R is captured in the selected target captured image (step S130-NO), the process transitions to step S120 and selects the next target captured image. Note that if there is no captured image that can be selected as the next target captured image in step S120, the control unit 36 terminates the repeated processing of steps S120 to S170 and ends the processing of the flowchart shown in FIG. 5.
- when the second estimation unit 364 determines that at least one of the four objects in the target space R is captured in the selected target captured image (step S130-YES), the time at which the target captured image was captured is specified based on the time information associated with the target captured image (step S140).
- the first estimator 363 estimates the position and orientation of the controlled device at each specified time based on the predetermined initial value and the history of the speed of the controlled device 10 during movement on the predetermined trajectory.
- the predetermined initial value is the initial value for each of the three elements of the vector representing the position and orientation of the controlled device 10 in the world coordinate system. These three initial values can be any value.
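The dead-reckoning idea in step S140 — estimating the controlled device position and orientation at each specified time from arbitrary initial values and the speed history along the trajectory — can be sketched as follows. The unicycle-style motion model, the time steps, and the function name are assumptions for illustration.

```python
import math

# Sketch (assumption): integrate the controlled device's speed history
# from a predetermined initial pose to obtain its pose at each time.

def dead_reckon(initial_pose, velocity_history):
    """initial_pose: (x, y, theta).
    velocity_history: list of (linear_speed, angular_speed, dt) tuples.
    Returns the pose reached after each step."""
    x, y, th = initial_pose
    poses = []
    for v, w, dt in velocity_history:
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        poses.append((x, y, th))
    return poses

# Moving straight along +x at 1 m/s for two 1-second steps:
print(dead_reckon((0.0, 0.0, 0.0), [(1.0, 0.0, 1.0), (1.0, 0.0, 1.0)]))
# [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
```

Because the initial values are arbitrary, the resulting poses are consistent only relative to one another; the layout constrained Graph-SLAM later refines them together with the object poses.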
- the second estimation unit 364 estimates the inter-object-apparatus relative position and orientation of one or more objects captured in the target captured image among the four objects in the target space R. (step S150).
- the second estimation unit 364 reads the first encoded information and the second encoded information from the marker MKR of the object.
- thereby, the second estimating unit 364 can identify which of the four objects the object is, and can estimate the inter-object-apparatus relative position and orientation of the object at the time specified in step S140.
- the second estimation unit 364 performs such estimation for each of one or more objects captured in the target captured image.
- the acquisition unit 365 acquires the estimation result estimated by the second estimation unit 364 in step S150. Based on this estimation result, the layout constraint condition information read out from the storage unit 32 in step S110, and the estimation result estimated by the first estimation unit 363 in step S140, the generation unit 366 estimates the object position and orientation of each of the one or more objects captured in the target captured image among the four objects in the target space R, and of each of the one or more objects forming a group with those objects (step S160). Note that the estimation method in step S160 has already been described in <Method of Adding Each Element of Information Matrix in Layout Constrained Graph-SLAM>, so a detailed description thereof is omitted here.
- in step S160, the generation unit 366 may calculate, for each of the one or more objects captured in the target captured image, the inner product of the difference vector between the vector representing the first estimated position and orientation and the vector representing the second estimated position and orientation, and may determine whether or not the calculated inner product exceeds the first threshold.
- for example, in step S160, the generation unit 366 calculates the inner product of the difference vector between the vector representing the first estimated position and orientation of the object M1 and the vector representing the second estimated position and orientation of the object M1, and determines whether or not the calculated inner product exceeds the first threshold.
- when the condition for the layout canceling process is satisfied, the generation unit 366 replaces the elements relating the object M1 and the object M2 in the information matrix Ω with 0, that is, performs the layout canceling process.
- in step S170, the generation unit 366 adds up each element of the information matrix Ω and each element of the information vector ξ based on the estimation result in step S160 and the layout constraint condition information read out in step S110 (step S170). Note that the method of adding up in step S170 has already been explained in <Method of Adding Each Element of Information Matrix in Layout Constrained Graph-SLAM>, so a detailed explanation is omitted here.
- after step S170, the control unit 36 transitions to step S120 and selects the next target captured image. Note that if there is no captured image that can be selected as the next target captured image in step S120, the control unit 36 terminates the repeated processing of steps S120 to S170 and ends the processing of the flowchart shown in FIG. 5.
- the acquisition unit 365 may be configured to acquire the estimation result estimated by the first estimation unit 363 in step S140 and the estimation result estimated in step S150 from another device.
- the generation unit 366 performs the processing of step S160 and the processing of step S170 based on these estimation results acquired by the acquisition unit 365 .
- as described above, the information processing device 30 can generate the information matrix Ω and the information vector ξ in the layout constrained Graph-SLAM.
- FIG. 6 is a diagram showing an example of the flow of processing in which the information processing device 30 generates map information.
- in the following, as an example, a case will be described where the information processing device 30 has generated the information matrix Ω and the information vector ξ by the processing of the flowchart shown in FIG. 5 before the processing of step S210 shown in FIG. 6. In the following, as an example, a case will also be described where the information processing device 30 has received in advance an operation to start the processing at that timing.
- the generation unit 366 generates an evaluation function in the layout constrained Graph-SLAM based on the pre-generated information matrix Ω and information vector ξ (step S210).
- the method of generating this evaluation function may be a method in the conventional Graph-SLAM algorithm or a method to be developed in the future.
- the generation unit 366 performs optimization processing based on the evaluation function generated in step S210 (step S220).
- the optimization method used in the optimization process in step S220 may be a known method or a method that will be developed in the future. Thereby, the generation unit 366 can estimate the position and orientation of each of the four objects in the target space R in the world coordinate system.
- the generation unit 366 generates map information indicating a map within the target space R based on the estimation result in step S220 (step S230), and ends the processing of the flowchart shown in FIG.
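For the linear-Gaussian case, the optimization in steps S210 and S220 reduces to solving a linear system in information form; a minimal sketch is shown below. This is a simplified stand-in under that assumption, not the patent's exact optimization procedure, and the function name is illustrative.

```python
import numpy as np

# Sketch (assumption): in information form, optimizing the quadratic
# Graph-SLAM evaluation function amounts to solving omega @ mu = xi
# for the state estimate mu (the stacked poses).

def solve_information_form(omega, xi):
    """Return the state estimate that optimizes the quadratic evaluation
    function defined by the information matrix and information vector."""
    return np.linalg.solve(omega, xi)

omega = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])
xi = np.array([1.0, 0.0])
mu = solve_information_form(omega, xi)
print(mu)  # approximately [0.667, 0.333]
```

In practice the nonlinear angular terms require iterative optimization (e.g. repeated linearization), but each iteration solves a system of this form.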
- as described above, the information processing device 30 estimates the controlled device position and orientation based on the predetermined initial values, acquires the inter-object-apparatus relative position and orientation information for at least one of the four objects in the target space R, and generates map information indicating a map within the target space R based on the layout constraint condition information stored in the storage unit 32, the acquired inter-object-apparatus relative position and orientation information about the at least one object, and the estimated controlled device position and orientation. As a result, the information processing device 30 can accurately generate map information indicating a map within the target space R even if the arrangement of the four objects within the target space R has been changed from the initial arrangement.
- FIG. 7 is a diagram showing an example of map information generated by the information processing device 30.
- in generating the map information shown in FIG. 7, the information processing device 30 generated map information indicating a map of the space in which the controlled device 10 is located based on a conventional Graph-SLAM algorithm that does not use layout constraint condition information.
- the map space indicated by the map information shown in FIG. 7 is a space in which eight objects whose relative positions and orientations are held by layout constraints are arranged.
- a route V1 indicated by a dotted line in FIG. 7 indicates the movement route of the controlled device 10 estimated by the controlled device 10 .
- a route V2 indicated by a dotted line in FIG. 7 indicates the position of each of the eight objects estimated by the controlled device 10 .
- each of the eight squares VMs indicated by solid lines in FIG. 7 indicates the position of each of the eight objects estimated by the controlled device 10, and the arrows attached to each of the eight squares VMs indicate the estimated orientations of the eight objects.
- it can be seen that, with the conventional Graph-SLAM, the information processing device 30 cannot accurately estimate the positions and orientations of the eight objects. This is evident from the low degree of matching between the eight quadrilaterals Ms and the eight quadrilaterals VMs, and from the low degree of matching between the arrows attached to each of these quadrilaterals.
- FIG. 8 is a diagram showing another example of map information generated by the information processing device 30.
- the information processing apparatus 30 generated map information indicating a map in the same space as the space shown in FIG. 7 based on the layout constraint Graph-SLAM.
- comparing with the result of FIG. 7, it can be seen that the information processing device 30 can estimate the position and orientation of each object with high accuracy. This is also clear from the fact that the degree of matching between the eight quadrilaterals Ms and the eight quadrilaterals VMs in FIG. 8 is high.
- the layout constrained Graph-SLAM used to obtain the results shown in FIG. 8 may be configured to perform the above-described layout canceling process, or may be configured not to perform the layout canceling process.
- FIG. 9 is a diagram showing still another example of map information generated by the information processing device 30.
- the information processing device 30 generated map information indicating a map of the space in which the controlled device 10 is located, based on the layout constraint Graph-SLAM.
- the space of the map indicated by the map information shown in FIG. 9 is the same space as the space shown in FIG. 7 except for a part.
- in the space shown in FIG. 9, one of the eight objects whose relative positions and orientations were maintained by the layout constraints has been moved away from the remaining seven objects. This movement of the one object is the difference between the space shown in FIG. 7 and the space shown in FIG. 9.
- the positions where the seven objects are actually located are indicated by seven dotted-line squares Mss. Also, in FIG. 9, the actual orientations of the seven objects are indicated by arrows attached to each of the seven dotted-line squares Mss. Further, in FIG. 9, the position where the one object whose layout constraint is released is actually located is indicated by a dotted-line rectangle Ms1. Also, in FIG. 9, the actual orientation of the one object is indicated by an arrow attached to the dotted square Ms1.
- Each of seven squares VMss indicated by solid lines in FIG. 9 indicates the position of each of the seven objects estimated by the controlled device 10 . Also, the arrows attached to each of the seven squares VMss indicated by solid lines in FIG.
- a rectangle VMs1 indicated by a solid line in FIG. 9 indicates the position of the one object estimated by the control-target device 10 .
- an arrow attached to the rectangle VMs1 indicated by a solid line in FIG. 9 indicates the attitude of the one object estimated by the controlled device 10.
- FIG. 9 shows the result of the information processing apparatus 30 generating map information indicating a map in this space based on the layout constrained Graph-SLAM without layout cancellation processing.
- it can be seen that, without the layout canceling process, the information processing device 30 cannot accurately estimate the positions and orientations of the eight objects. This is evident from the low degree of matching between the seven quadrilaterals Mss and the seven quadrilaterals VMss, and further from the distance between the quadrilaterals Ms1 and VMs1.
- FIG. 10 is a diagram showing yet another example of the map information generated by the information processing device 30.
- in generating the map information shown in FIG. 10, the information processing device 30 generated map information indicating a map in the same space as the space shown in FIG. 9, based on the layout constrained Graph-SLAM that performs the layout canceling process.
- comparing with the result of FIG. 9, it can be seen that the information processing device 30 can estimate the position and orientation of each object with high accuracy.
- by using the layout constrained Graph-SLAM that performs the layout canceling process, the information processing device 30 can generate map information indicating a map in the space more reliably and accurately even if the arrangement of the plurality of objects in the space where the controlled device 10 is located has been changed from the initial arrangement.
- FIG. 11 is a diagram showing another example of the configuration of the control system 1 according to the embodiment.
- in this configuration as well, the information processing device 30 estimates the position and orientation in the world coordinate system of each of the four objects in the target space R based on the layout constrained Graph-SLAM, and can thereby generate map information indicating a map within the target space R.
- FIG. 12 is a diagram showing an example of an object M1 provided with three markers.
- The object M1 shown in FIG. 12 is provided with three markers: marker MKR1-1, marker MKR1-2, and marker MKR1-3. This makes it more likely that a captured image taken by the detection unit 20 will include at least one of the markers provided on the object M1. Further, when the information processing device 30 can identify each of these three markers, it can detect the orientation of the object M1 more accurately from the captured image of the object M1.
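When the three markers can be individually identified, the object's orientation can be recovered from their detected 3-D positions. The following is a minimal sketch of one way to do this, assuming the marker positions are already expressed in the detection unit's camera frame; the function name and axis conventions are illustrative assumptions, not part of the patent.

```python
import numpy as np

def orientation_from_markers(p1, p2, p3):
    """Build a rotation matrix (object frame to camera frame) from the
    3-D positions of three identified, non-collinear markers.

    Axis conventions are assumptions for illustration: the x axis runs
    from marker 1 to marker 2, and the z axis is the normal of the
    plane spanned by the three markers.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x_axis = p2 - p1
    x_axis /= np.linalg.norm(x_axis)
    normal = np.cross(p2 - p1, p3 - p1)   # normal of the marker plane
    z_axis = normal / np.linalg.norm(normal)
    y_axis = np.cross(z_axis, x_axis)     # completes a right-handed frame
    return np.column_stack((x_axis, y_axis, z_axis))
```

With markers at (0, 0, 0), (1, 0, 0), and (0, 1, 0), the recovered rotation is the identity matrix.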
- Huecode combines features of a QR code (registered trademark) and an AR code. Since Huecode is already known, a detailed description is omitted here; it is described in detail in, for example, Japanese Patent Application No. 2020-169115.
- The position and orientation described above may be replaced by the position alone. In that case, the orientation is not used in the various processes, which are executed using only the position, and the various types of information indicating position and orientation become information indicating position only.
- Conversely, the position and orientation described above may be replaced by the orientation alone. In that case, the position is not used in the various processes, which are executed using only the orientation, and the various types of information indicating position and orientation become information indicating orientation only.
- As described above, the information processing device (the information processing device 30 in the example above) controls a control-target device (the control-target device 10 in the example above). It comprises a storage unit (the storage unit 32 in the example above) that stores layout constraint condition information including information indicating the relative position and orientation between a first object (for example, the object M1 in the example above) arranged in a space (the target space R in the example above) in which the control-target device is located and a second object, different from the first object, arranged in the space; and a control unit that generates map information indicating a map of the space based on the position and orientation of the control-target device, at least one of first object-device relative position and orientation information indicating the relative position and orientation between the first object and the control-target device and second object-device relative position and orientation information indicating the relative position and orientation between the second object and the control-target device, and the layout constraint condition information stored in the storage unit. With this configuration, the information processing device can accurately generate map information indicating a map of the space even when the arrangement of the plurality of objects in the space where the control-target device is located has been changed from the initial arrangement.
- The control unit may include: a first estimation unit (the first estimation unit 363 in the example above) that estimates the position and orientation of the control-target device based on predetermined initial values; an acquisition unit (the acquisition unit 365 in the example above) that acquires at least one of the first object-device relative position and orientation information and the second object-device relative position and orientation information as object-device relative position and orientation information; and a generation unit (the generation unit 366 in the example above) that generates map information indicating a map of the space based on the layout constraint condition information stored in the storage unit, the object-device relative position and orientation information acquired by the acquisition unit, and the position and orientation of the control-target device estimated by the first estimation unit.
- The control unit may further include a second estimation unit (the second estimation unit 364 in the example above) that estimates, based on the output from a detection unit (the detection unit 20 in the example above) that detects at least one of the first object and the second object, at least one of the relative position and orientation between the first object and the control-target device and the relative position and orientation between the second object and the control-target device. In this case, the acquisition unit acquires, from the second estimation unit, information indicating at least one of the relative position and orientation between the first object and the control-target device and the relative position and orientation between the second object and the control-target device as the object-device relative position and orientation information.
- a configuration may be used in which the generation unit generates map information based on Graph-SLAM.
- The generation unit may generate the map information by generating an information matrix and an information vector in Graph-SLAM based on the object-device relative position and orientation information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position and orientation of the control-target device estimated by the first estimation unit, and by optimizing an evaluation function based on the generated information matrix and information vector.
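For concreteness, the accumulation of an information matrix and an information vector from relative measurements, and the optimization of the resulting evaluation function, can be sketched as follows. This is a minimal 1-D Graph-SLAM illustration with hypothetical function names, not the patent's actual implementation; minimizing the quadratic evaluation function reduces to solving the linear system Ω μ = ξ.

```python
import numpy as np

def build_information_form(num_nodes, measurements, anchor=0, anchor_value=0.0):
    """Accumulate relative-pose measurements into a Graph-SLAM
    information matrix (omega) and information vector (xi).

    measurements: list of (i, j, z, w) where z is the measured offset
    node_j - node_i and w is the measurement weight (inverse variance).
    1-D positions are used for brevity.
    """
    omega = np.zeros((num_nodes, num_nodes))
    xi = np.zeros(num_nodes)
    # Anchor one node so the linear system is well-posed.
    omega[anchor, anchor] += 1.0
    xi[anchor] += anchor_value
    for i, j, z, w in measurements:
        # Each constraint w * (x_j - x_i - z)^2 contributes these terms.
        omega[i, i] += w
        omega[j, j] += w
        omega[i, j] -= w
        omega[j, i] -= w
        xi[i] -= w * z
        xi[j] += w * z
    return omega, xi

def solve_map(omega, xi):
    # Minimizing the quadratic evaluation function is equivalent to
    # solving the linear system omega @ mu = xi.
    return np.linalg.solve(omega, xi)
```

With two measurements x1 - x0 = 1 and x2 - x1 = 2 and node 0 anchored at the origin, the solved map places the three nodes at 0, 1, and 3.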
- The generation unit may estimate the position and orientation of each of the first object and the second object based on the object-device relative position and orientation information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position and orientation of the control-target device estimated by the first estimation unit, and may generate the information matrix and the information vector based on the estimated positions and orientations of the first object and the second object.
- When both the first object-device relative position and orientation information and the second object-device relative position and orientation information are acquired by the acquisition unit as the object-device relative position and orientation information, the generation unit may: estimate the position and orientation of the first object as a first estimated position and orientation and the position and orientation of the second object as a second estimated position and orientation, based on the position and orientation of the control-target device estimated by the first estimation unit, the object-device relative position and orientation information acquired by the acquisition unit, and the layout constraint condition information stored in the storage unit; estimate the position and orientation of the first object as a third estimated position and orientation and the position and orientation of the second object as a fourth estimated position and orientation, based on the position and orientation of the control-target device estimated by the first estimation unit and the object-device relative position and orientation information acquired by the acquisition unit; calculate the difference between the vector indicating the first estimated position and orientation and the vector indicating the third estimated position and orientation as a first difference, and the difference between the vector indicating the second estimated position and orientation and the vector indicating the fourth estimated position and orientation as a second difference; and, based on the first difference, the estimation error for the position and orientation of the first object, the second difference, and the estimation error for the position and orientation of the second object, delete the connections of elements of the information matrix that are determined according to the layout constraint condition information.
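The connection-deletion step described above (the layout cancellation process) can be sketched as follows, assuming scalar positions for brevity. When the two estimates of an object's position, one computed with the layout constraint and one without it, disagree by more than its estimation error, the object is presumed to have moved from its initial layout, and the corresponding coupling in the information matrix is removed. Function and variable names are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def cancel_layout_constraints(omega, pairs, est_with, est_without, errors):
    """Drop layout-constraint couplings that the observations contradict.

    pairs:       list of (i, j) index pairs coupled by a layout constraint
    est_with:    per-object estimates that use the layout constraint
    est_without: per-object estimates from odometry and observations only
    errors:      per-object estimated position error (tolerance)
    """
    omega = omega.copy()
    for i, j in pairs:
        # First/second difference between the two estimates of each object.
        d_i = abs(est_with[i] - est_without[i])
        d_j = abs(est_with[j] - est_without[j])
        if d_i > errors[i] or d_j > errors[j]:
            # The object appears to have moved from its initial layout:
            # delete the connection between the matrix elements. The
            # off-diagonal entry stores -w, so adding it to the diagonal
            # removes that constraint's w contribution as well.
            omega[i, i] += omega[i, j]
            omega[j, j] += omega[j, i]
            omega[i, j] = 0.0
            omega[j, i] = 0.0
    return omega
```

For example, if the two estimates of an object differ by 2.0 while its estimated error is 0.5, the coupling for that pair is deleted.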
- The control-target device may be a mobile body capable of changing at least one of the position and posture of the control-target device.
- Likewise, the information processing device may control the control-target device (the control-target device 10 in the example above) and comprise a storage unit (the storage unit 32 in the example above) that stores layout constraint condition information including information indicating the relative position between a first object (for example, the object M1 in the example above) arranged in a space (the target space R in the example above) in which the control-target device is located and a second object, different from the first object, arranged in the space; and a control unit (the control unit 36 in the example above) that generates map information indicating a map of the space based on the position of the control-target device, at least one of first object-device relative position information indicating the relative position between the first object and the control-target device and second object-device relative position information indicating the relative position between the second object and the control-target device, and the layout constraint condition information stored in the storage unit. With this configuration, the information processing device can accurately generate map information indicating a map of the space even when the arrangement of the plurality of objects in the space where the control-target device is located has been changed from the initial arrangement.
- The control unit may include: a first estimation unit (the first estimation unit 363 in the example above) that estimates the position of the control-target device based on a predetermined initial value; an acquisition unit (the acquisition unit 365 in the example above) that acquires at least one of the first object-device relative position information and the second object-device relative position information as object-device relative position information; and a generation unit (the generation unit 366 in the example above) that generates map information indicating a map of the space based on the layout constraint condition information stored in the storage unit, the object-device relative position information acquired by the acquisition unit, and the position of the control-target device estimated by the first estimation unit.
- The control unit may further include a second estimation unit (the second estimation unit 364 in the example above) that estimates, based on the output from a detection unit (the detection unit 20 in the example above) that detects at least one of the first object and the second object, at least one of the relative position between the first object and the control-target device and the relative position between the second object and the control-target device. In this case, the acquisition unit acquires, from the second estimation unit, information indicating at least one of the relative position between the first object and the control-target device and the relative position between the second object and the control-target device as the object-device relative position information.
- a configuration may be used in which the generation unit generates map information based on Graph-SLAM.
- The generation unit may generate the map information by generating an information matrix and an information vector in Graph-SLAM based on the object-device relative position information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position of the control-target device estimated by the first estimation unit, and by optimizing an evaluation function based on the generated information matrix and information vector.
- The generation unit may estimate the positions of the first object and the second object based on the object-device relative position information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position of the control-target device estimated by the first estimation unit, and may generate the information matrix and the information vector based on the estimated positions of the first object and the second object.
- When both the first object-device relative position information and the second object-device relative position information are acquired by the acquisition unit as the object-device relative position information, the generation unit may: estimate the position of the first object as a first estimated position and the position of the second object as a second estimated position, based on the position of the control-target device estimated by the first estimation unit, the object-device relative position information acquired by the acquisition unit, and the layout constraint condition information stored in the storage unit; estimate the position of the first object as a third estimated position and the position of the second object as a fourth estimated position, based on the position of the control-target device estimated by the first estimation unit and the object-device relative position information acquired by the acquisition unit; calculate the difference between the vector indicating the first estimated position and the vector indicating the third estimated position as a first difference, and the difference between the vector indicating the second estimated position and the vector indicating the fourth estimated position as a second difference; and, based on the first difference, the estimation error for the position of the first object, the second difference, and the estimation error for the position of the second object, delete the connections of elements of the information matrix that are determined according to the layout constraint condition information.
- The control-target device may be a mobile body capable of changing at least one of the position and posture of the control-target device.
- Similarly, the information processing device may control the control-target device (the control-target device 10 in the example above) and comprise a storage unit (the storage unit 32 in the example above) that stores layout constraint condition information including information indicating the relative orientation between a first object (for example, the object M1 in the example above) arranged in a space (the target space R in the example above) in which the control-target device is located and a second object, different from the first object, arranged in the space; and a control unit that generates map information indicating a map of the space based on the orientation of the control-target device, at least one of first object-device relative orientation information indicating the relative orientation between the first object and the control-target device and second object-device relative orientation information indicating the relative orientation between the second object and the control-target device, and the layout constraint condition information stored in the storage unit. With this configuration, the information processing device can accurately generate map information indicating a map of the space even when the arrangement of the plurality of objects in the space where the control-target device is located has been changed from the initial arrangement.
- The control unit may include: a first estimation unit (the first estimation unit 363 in the example above) that estimates the orientation of the control-target device based on a predetermined initial value; an acquisition unit (the acquisition unit 365 in the example above) that acquires at least one of the first object-device relative orientation information and the second object-device relative orientation information as object-device relative orientation information; and a generation unit (the generation unit 366 in the example above) that generates map information indicating a map of the space based on the layout constraint condition information stored in the storage unit, the object-device relative orientation information acquired by the acquisition unit, and the orientation of the control-target device estimated by the first estimation unit.
- The control unit may further include a second estimation unit (the second estimation unit 364 in the example above) that estimates, based on the output from a detection unit (the detection unit 20 in the example above) that detects at least one of the first object and the second object, at least one of the relative orientation between the first object and the control-target device and the relative orientation between the second object and the control-target device. In this case, the acquisition unit acquires, from the second estimation unit, information indicating at least one of the relative orientation between the first object and the control-target device and the relative orientation between the second object and the control-target device as the object-device relative orientation information.
- a configuration may be used in which the generation unit generates map information based on Graph-SLAM.
- The generation unit may generate the map information by generating an information matrix and an information vector in Graph-SLAM based on the object-device relative orientation information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the orientation of the control-target device estimated by the first estimation unit, and by optimizing an evaluation function based on the generated information matrix and information vector.
- The generation unit may estimate the orientations of the first object and the second object based on the object-device relative orientation information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the orientation of the control-target device estimated by the first estimation unit, and may generate the information matrix and the information vector based on the estimated orientations of the first object and the second object.
- When both the first object-device relative orientation information and the second object-device relative orientation information are acquired by the acquisition unit as the object-device relative orientation information, the generation unit may: estimate the orientation of the first object as a first estimated orientation and the orientation of the second object as a second estimated orientation, based on the orientation of the control-target device estimated by the first estimation unit, the object-device relative orientation information acquired by the acquisition unit, and the layout constraint condition information stored in the storage unit; estimate the orientation of the first object as a third estimated orientation and the orientation of the second object as a fourth estimated orientation, based on the orientation of the control-target device estimated by the first estimation unit and the object-device relative orientation information acquired by the acquisition unit; calculate the difference between the vector indicating the first estimated orientation and the vector indicating the third estimated orientation as a first difference, and the difference between the vector indicating the second estimated orientation and the vector indicating the fourth estimated orientation as a second difference; and, based on the first difference, the estimation error for the orientation of the first object, the second difference, and the estimation error for the orientation of the second object, delete the connections of elements of the information matrix that are determined according to the layout constraint condition information.
- The control-target device may be a mobile body capable of changing at least one of the position and posture of the control-target device.
- A program for realizing the functions of any of the components in the devices described above may be recorded on a computer-readable recording medium, and the program may be read into a computer system and executed. The term "computer system" as used herein includes an OS (Operating System) and hardware such as peripheral devices.
- "Computer-readable recording medium" refers to portable media such as flexible disks, magneto-optical disks, ROM (Read Only Memory), and CD (Compact Disc)-ROM, and to storage devices such as hard disks built into computer systems. It also includes media that hold a program for a certain period of time, such as the volatile memory (RAM (Random Access Memory)) inside a computer system.
- The above program may be transmitted from a computer system that stores the program in a storage device or the like to another computer system via a transmission medium, or by transmission waves in a transmission medium. Here, the "transmission medium" that transmits the program refers to a medium having a function of transmitting information, such as a network (communication network) like the Internet or a communication line such as a telephone line.
- The above program may realize only part of the functions described above. Furthermore, the program may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
- 1 … Control system, 10 … Control-target device, 20 … Detection unit, 30 … Information processing device, 31 … CPU, 32 … Storage unit, 33 … Input reception unit, 34 … Communication unit, 35 … Display unit, 36 … Control unit
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
Description
[1] An information processing device that controls a control-target device, comprising: a storage unit that stores layout constraint condition information including information indicating the relative position between a first object arranged in a space in which the control-target device is located and a second object, different from the first object, arranged in the space; and a control unit that generates map information indicating a map of the space based on the position of the control-target device, at least one of first object-device relative position information indicating the relative position between the first object and the control-target device and second object-device relative position information indicating the relative position between the second object and the control-target device, and the layout constraint condition information stored in the storage unit.
[2] In the information processing device described above, the control unit includes: a first estimation unit that estimates the position of the control-target device based on a predetermined initial value; an acquisition unit that acquires at least one of the first object-device relative position information and the second object-device relative position information as object-device relative position information; and a generation unit that generates map information indicating a map of the space based on the layout constraint condition information stored in the storage unit, the object-device relative position information acquired by the acquisition unit, and the position of the control-target device estimated by the first estimation unit.
[3] In the information processing device described above, the control unit further includes a second estimation unit that estimates, based on the output from a detection unit that detects at least one of the first object and the second object, at least one of the relative position between the first object and the control-target device and the relative position between the second object and the control-target device, and the acquisition unit acquires, from the second estimation unit, information indicating at least one of the relative position between the first object and the control-target device and the relative position between the second object and the control-target device as the object-device relative position information.
[4] In the information processing device described above, the generation unit generates an information matrix and an information vector in Graph-SLAM (Simultaneous Localization and Mapping) based on the object-device relative position information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position of the control-target device estimated by the first estimation unit, and generates the map information by optimizing an evaluation function based on the generated information matrix and information vector.
[5] In the information processing device described above, the generation unit estimates the positions of the first object and the second object based on the object-device relative position information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position of the control-target device estimated by the first estimation unit, and generates the information matrix and the information vector based on the estimated positions of the first object and the second object.
[6] In the information processing device described above, when both the first object-device relative position information and the second object-device relative position information are acquired by the acquisition unit as the object-device relative position information, the generation unit estimates the position of the first object as a first estimated position and the position of the second object as a second estimated position based on the position of the control-target device estimated by the first estimation unit, the object-device relative position information acquired by the acquisition unit, and the layout constraint condition information stored in the storage unit; estimates the position of the first object as a third estimated position and the position of the second object as a fourth estimated position based on the position of the control-target device estimated by the first estimation unit and the object-device relative position information acquired by the acquisition unit; calculates the difference between the vector indicating the first estimated position and the vector indicating the third estimated position as a first difference, and the difference between the vector indicating the second estimated position and the vector indicating the fourth estimated position as a second difference; and, based on the first difference, the estimation error for the position of the first object, the second difference, and the estimation error for the position of the second object, deletes the connections of elements of the information matrix that are determined according to the layout constraint condition information.
[7] In the information processing device described above, the control-target device is a mobile body capable of changing at least one of the position and posture of the control-target device.
[8] A mobile body comprising, as the control-target device, the information processing device described above.
[9] An information processing method comprising: a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, which includes information indicating the relative position between a first object arranged in a space in which a control-target device is located and a second object, different from the first object, arranged in the space; and a generation step of generating map information indicating a map of the space based on the position of the control-target device, at least one of first object-device relative position information indicating the relative position between the first object and the control-target device and second object-device relative position information indicating the relative position between the second object and the control-target device, and the layout constraint condition information read in the reading step.
[10] A program for causing a computer to execute: a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, which includes information indicating the relative position between a first object arranged in a space in which a control-target device is located and a second object, different from the first object, arranged in the space; and a generation step of generating map information indicating a map of the space based on the position of the control-target device, at least one of first object-device relative position information indicating the relative position between the first object and the control-target device and second object-device relative position information indicating the relative position between the second object and the control-target device, and the layout constraint condition information read in the reading step.
[11] An information processing device that controls a control-target device, comprising: a storage unit that stores layout constraint condition information including information indicating the relative orientation between a first object arranged in a space in which the control-target device is located and a second object, different from the first object, arranged in the space; and a control unit that generates map information indicating a map of the space based on the orientation of the control-target device, at least one of first object-device relative orientation information indicating the relative orientation between the first object and the control-target device and second object-device relative orientation information indicating the relative orientation between the second object and the control-target device, and the layout constraint condition information stored in the storage unit.
[12] A mobile body comprising, as the control-target device, the information processing device described in [11] above.
[13] An information processing method comprising: a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, which includes information indicating the relative orientation between a first object arranged in a space in which a control-target device is located and a second object, different from the first object, arranged in the space; and a generation step of generating map information indicating a map of the space based on the orientation of the control-target device, at least one of first object-device relative orientation information indicating the relative orientation between the first object and the control-target device and second object-device relative orientation information indicating the relative orientation between the second object and the control-target device, and the layout constraint condition information read in the reading step.
[14] A program for causing a computer to execute: a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, which includes information indicating the relative orientation between a first object arranged in a space in which a control-target device is located and a second object, different from the first object, arranged in the space; and a generation step of generating map information indicating a map of the space based on the orientation of the control-target device, at least one of first object-device relative orientation information indicating the relative orientation between the first object and the control-target device and second object-device relative orientation information indicating the relative orientation between the second object and the control-target device, and the layout constraint condition information read in the reading step.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
The configuration of a control system 1 according to the embodiment will be described below with reference to FIG. 1. FIG. 1 is a diagram showing an example of the configuration of the control system 1 according to the embodiment.
The hardware configuration of the information processing device 30 will be described below with reference to FIG. 2. FIG. 2 is a diagram showing an example of the hardware configuration of the information processing device 30.
The functional configuration of the information processing device 30 will be described below with reference to FIG. 3. FIG. 3 is a diagram showing an example of the functional configuration of the information processing device 30.
The following describes how each element of the information matrix in layout-constrained Graph-SLAM is accumulated. Only the method of adding to the information matrix the elements relating to the objects of each group in the target space R is explained here; the remaining elements are added in the same way as in the conventional Graph-SLAM algorithm, so their description is omitted.
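The per-group accumulation described above can be sketched as follows: for every pair of objects in a group, a layout-constraint term for their nominal relative position is added to the information matrix Ω and the information vector ξ. This is a minimal 1-D sketch with hypothetical names; the actual weighting and pose parameterization used in the patent may differ.

```python
import itertools

def add_group_layout_constraints(omega, xi, group, offsets, weight):
    """Add layout-constraint entries for one group of objects to the
    information matrix `omega` (nested lists or a 2-D array) and the
    information vector `xi`.

    group:   node indices of the objects forming one layout group
    offsets: nominal 1-D position of each object within the layout,
             so the constrained offset for a pair (i, j) is
             offsets[j] - offsets[i]
    weight:  confidence assigned to the layout constraint
    """
    for i, j in itertools.combinations(group, 2):
        z = offsets[j] - offsets[i]  # relative position fixed by the layout
        omega[i][i] += weight
        omega[j][j] += weight
        omega[i][j] -= weight
        omega[j][i] -= weight
        xi[i] -= weight * z
        xi[j] += weight * z
    return omega, xi
```

Each added term penalizes deviations of the pair's estimated offset from the offset fixed by the layout, exactly like an ordinary relative-pose measurement with weight `weight`.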
The process by which the information processing device 30 generates the information matrix Ω and the information vector ξ will be described below with reference to FIG. 5. FIG. 5 is a diagram showing an example of the flow of this process. In the following, as an example, it is assumed that, before the processing of step S110 shown in FIG. 5 is performed, the information processing device 30 has acquired from the detection unit 20 a plurality of captured images taken by the detection unit 20 while the control-target device 10 was moving along a predetermined trajectory. It is also assumed, as an example, that at that time the information processing device 30 has already received an operation to start this process.
The process by which the information processing device 30 generates the map information will be described below with reference to FIG. 6. FIG. 6 is a diagram showing an example of the flow of this process. In the following, as an example, it is assumed that, before the processing of step S210 shown in FIG. 6 is performed, the information processing device 30 has generated the information matrix Ω and the information vector ξ through the processing of the flowchart shown in FIG. 5. It is also assumed, as an example, that at that time the information processing device 30 has already received an operation to start this process.
The results of a performance test of the information processing device 30 using a simulator will be described below with reference to the drawings. FIG. 7 is a diagram showing an example of map information generated by the information processing device 30. In generating the map information shown in FIG. 7, however, the information processing device 30 generated map information indicating a map of the space in which the control-target device 10 is located based on the conventional Graph-SLAM algorithm, which does not use layout constraint condition information. The space of the map shown in FIG. 7 contains eight objects whose relative positions and attitudes are held fixed by layout constraints. In addition, four landmarks LM1 to LM4, whose positions and attitudes in the world coordinate system are fixed, are arranged in this space in order to improve the convergence of the iterative processing of the Graph-SLAM algorithm. As a result, the accuracy of the map as a whole is improved over the conventional method. However, these four landmarks LM1 to LM4 have almost no effect on the accuracy of the relative positional relationships between the objects in the map information shown in FIG. 7, so some or all of them may be omitted from the space. In FIG. 7, the actual positions of the eight objects arranged in this space are indicated by eight dotted quadrangles Ms, and their actual attitudes by the arrows attached to each of the eight dotted quadrangles Ms. The path V1 shown by a dotted line in FIG. 7 indicates the movement path of the control-target device 10 as estimated by the control-target device 10, while the path V2 shown by a dotted line indicates the actual movement path of the control-target device 10. That is, in the example shown in FIG. 7, the information processing device 30 caused the control-target device 10 to move along the path V2 as the predetermined trajectory while causing the detection unit 20 to capture images. Each of the eight quadrangles VMs shown by solid lines in FIG. 7 indicates the position of one of the eight objects as estimated by the control-target device 10, and the arrow attached to each quadrangle VMs indicates the estimated attitude of that object. As FIG. 7 shows, when the conventional Graph-SLAM algorithm without layout constraint condition information is used, the information processing device 30 cannot accurately estimate the positions and attitudes of the eight objects. This is evident from the low degree of matching between the eight quadrangles Ms and the eight quadrangles VMs, and from the low degree of matching between the arrows attached to these quadrangles.
The position and orientation described above may also be the orientation alone. In that case, the position described above is not used in the various processes, which are executed using only the orientation; for example, the various types of information indicating position and orientation become information indicating orientation.
The above program may realize only part of the functions described above. Furthermore, the program may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
Claims (14)
- An information processing device that controls a control-target device, the information processing device comprising:
a storage unit that stores layout constraint condition information including information indicating the relative position between a first object arranged in a space in which the control-target device is located and a second object, different from the first object, arranged in the space; and
a control unit that generates map information indicating a map of the space based on the position of the control-target device, at least one of first object-device relative position information indicating the relative position between the first object and the control-target device and second object-device relative position information indicating the relative position between the second object and the control-target device, and the layout constraint condition information stored in the storage unit.
- The information processing device according to claim 1, wherein the control unit comprises:
a first estimation unit that estimates the position of the control-target device based on a predetermined initial value;
an acquisition unit that acquires at least one of the first object-device relative position information and the second object-device relative position information as object-device relative position information; and
a generation unit that generates map information indicating a map of the space based on the layout constraint condition information stored in the storage unit, the object-device relative position information acquired by the acquisition unit, and the position of the control-target device estimated by the first estimation unit.
- The information processing device according to claim 2, wherein the control unit further comprises a second estimation unit that estimates, based on the output from a detection unit that detects at least one of the first object and the second object, at least one of the relative position between the first object and the control-target device and the relative position between the second object and the control-target device, and
the acquisition unit acquires, from the second estimation unit, information indicating at least one of the relative position between the first object and the control-target device and the relative position between the second object and the control-target device as the object-device relative position information.
- The information processing device according to claim 2 or 3, wherein the generation unit generates an information matrix and an information vector in Graph-SLAM (Simultaneous Localization and Mapping) based on the object-device relative position information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position of the control-target device estimated by the first estimation unit, and generates the map information by optimizing an evaluation function based on the generated information matrix and information vector.
- The information processing device according to claim 4, wherein the generation unit estimates the positions of the first object and the second object based on the object-device relative position information acquired by the acquisition unit, the layout constraint condition information stored in the storage unit, and the position of the control-target device estimated by the first estimation unit, and generates the information matrix and the information vector based on the estimated positions of the first object and the second object.
- The information processing device according to claim 4 or 5, wherein, when both the first object-device relative position information and the second object-device relative position information are acquired by the acquisition unit as the object-device relative position information, the generation unit estimates the position of the first object as a first estimated position and the position of the second object as a second estimated position based on the position of the control-target device estimated by the first estimation unit, the object-device relative position information acquired by the acquisition unit, and the layout constraint condition information stored in the storage unit; estimates the position of the first object as a third estimated position and the position of the second object as a fourth estimated position based on the position of the control-target device estimated by the first estimation unit and the object-device relative position information acquired by the acquisition unit; calculates the difference between the vector indicating the first estimated position and the vector indicating the third estimated position as a first difference, and the difference between the vector indicating the second estimated position and the vector indicating the fourth estimated position as a second difference; and, based on the first difference, the estimation error for the position of the first object, the second difference, and the estimation error for the position of the second object, deletes the connections of elements of the information matrix that are determined according to the layout constraint condition information.
- The information processing device according to any one of claims 1 to 6, wherein the control-target device is a mobile body capable of changing at least one of the position and posture of the control-target device.
- A mobile body comprising, as the control-target device, the information processing device according to any one of claims 1 to 7.
- An information processing method comprising:
a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, which includes information indicating the relative position between a first object arranged in a space in which a control-target device is located and a second object, different from the first object, arranged in the space; and
a generation step of generating map information indicating a map of the space based on the position of the control-target device, at least one of first object-device relative position information indicating the relative position between the first object and the control-target device and second object-device relative position information indicating the relative position between the second object and the control-target device, and the layout constraint condition information read in the reading step.
- A program for causing a computer to execute:
a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, which includes information indicating the relative position between a first object arranged in a space in which a control-target device is located and a second object, different from the first object, arranged in the space; and
a generation step of generating map information indicating a map of the space based on the position of the control-target device, at least one of first object-device relative position information indicating the relative position between the first object and the control-target device and second object-device relative position information indicating the relative position between the second object and the control-target device, and the layout constraint condition information read in the reading step.
- An information processing device that controls a control-target device, the information processing device comprising:
a storage unit that stores layout constraint condition information including information indicating the relative orientation between a first object arranged in a space in which the control-target device is located and a second object, different from the first object, arranged in the space; and
a control unit that generates map information indicating a map of the space based on the orientation of the control-target device, at least one of first object-device relative orientation information indicating the relative orientation between the first object and the control-target device and second object-device relative orientation information indicating the relative orientation between the second object and the control-target device, and the layout constraint condition information stored in the storage unit.
- A mobile body comprising, as the control-target device, the information processing device according to claim 11.
- An information processing method comprising:
a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, which includes information indicating the relative orientation between a first object arranged in a space in which a control-target device is located and a second object, different from the first object, arranged in the space; and
a generation step of generating map information indicating a map of the space based on the orientation of the control-target device, at least one of first object-device relative orientation information indicating the relative orientation between the first object and the control-target device and second object-device relative orientation information indicating the relative orientation between the second object and the control-target device, and the layout constraint condition information read in the reading step.
- A program for causing a computer to execute:
a reading step of reading layout constraint condition information from a storage unit that stores the layout constraint condition information, which includes information indicating the relative orientation between a first object arranged in a space in which a control-target device is located and a second object, different from the first object, arranged in the space; and
a generation step of generating map information indicating a map of the space based on the orientation of the control-target device, at least one of first object-device relative orientation information indicating the relative orientation between the first object and the control-target device and second object-device relative orientation information indicating the relative orientation between the second object and the control-target device, and the layout constraint condition information read in the reading step.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180098367.XA CN117321526A (zh) | 2021-06-02 | 2021-06-02 | 信息处理装置、移动体、信息处理方法以及程序 |
PCT/JP2021/021001 WO2022254609A1 (ja) | 2021-06-02 | 2021-06-02 | 情報処理装置、移動体、情報処理方法、及びプログラム |
JP2023525240A JPWO2022254609A1 (ja) | 2021-06-02 | 2021-06-02 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/021001 WO2022254609A1 (ja) | 2021-06-02 | 2021-06-02 | 情報処理装置、移動体、情報処理方法、及びプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022254609A1 true WO2022254609A1 (ja) | 2022-12-08 |
Family
ID=84322840
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/021001 WO2022254609A1 (ja) | 2021-06-02 | 2021-06-02 | 情報処理装置、移動体、情報処理方法、及びプログラム |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2022254609A1 (ja) |
CN (1) | CN117321526A (ja) |
WO (1) | WO2022254609A1 (ja) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019138834A1 (ja) * | 2018-01-12 | 2019-07-18 | キヤノン株式会社 | 情報処理装置、情報処理方法、プログラム、およびシステム |
US20190220002A1 (en) * | 2016-08-18 | 2019-07-18 | SZ DJI Technology Co., Ltd. | Systems and methods for augmented stereoscopic display |
WO2019211932A1 (ja) * | 2018-05-01 | 2019-11-07 | ソニー株式会社 | 情報処理装置、情報処理方法、プログラム、及び自律行動ロボット制御システム |
WO2019244668A1 (ja) * | 2018-06-22 | 2019-12-26 | ソニー株式会社 | 移動体および移動体の制御方法 |
US20200159227A1 (en) * | 2017-06-08 | 2020-05-21 | Israel Aerospace Industries Ltd. | Method of navigating a vehicle and system thereof |
JP2020118586A (ja) * | 2019-01-25 | 2020-08-06 | 株式会社豊田中央研究所 | 移動体 |
WO2021024665A1 (ja) * | 2019-08-08 | 2021-02-11 | ソニー株式会社 | 情報処理システム、情報処理装置及び情報処理方法 |
WO2021024685A1 (ja) * | 2019-08-05 | 2021-02-11 | ソニー株式会社 | 情報処理装置、情報処理方法及び情報処理プログラム |
2021
- 2021-06-02 WO PCT/JP2021/021001 patent/WO2022254609A1/ja active Application Filing
- 2021-06-02 JP JP2023525240A patent/JPWO2022254609A1/ja active Pending
- 2021-06-02 CN CN202180098367.XA patent/CN117321526A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
JPWO2022254609A1 (ja) | 2022-12-08 |
CN117321526A (zh) | 2023-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI776113B (zh) | 物體位姿估計方法及裝置、電腦可讀儲存介質 | |
Censi et al. | Scan matching in the Hough domain | |
CN110858076B (zh) | 一种设备定位、栅格地图构建方法及移动机器人 | |
JP6439817B2 (ja) | 認識的アフォーダンスに基づくロボットから人間への物体ハンドオーバの適合 | |
US11163299B2 (en) | Control device, control method, and program recording medium | |
US20170061195A1 (en) | Real-time pose estimation system using inertial and feature measurements | |
US20130245828A1 (en) | Model generation apparatus, information processing apparatus, model generation method, and information processing method | |
US11292132B2 (en) | Robot path planning method with static and dynamic collision avoidance in an uncertain environment | |
CN114012731B (zh) | 手眼标定方法、装置、计算机设备和存储介质 | |
CN111736586B (zh) | 用于路径规划的自动驾驶车辆位置的方法及其装置 | |
US11504849B2 (en) | Deterministic robot path planning method for obstacle avoidance | |
WO2021242215A1 (en) | A robot path planning method with static and dynamic collision avoidance in an uncertain environment | |
JP2023513613A (ja) | 適応共蒸留モデル | |
WO2022254609A1 (ja) | 情報処理装置、移動体、情報処理方法、及びプログラム | |
KR101844278B1 (ko) | 관절식 객체의 자세를 추정하기 위한 파라미터 학습 방법 및 관절식 객체의 자세 추정 방법 | |
WO2019171491A1 (ja) | 移動体制御装置、移動体、移動体制御システム、移動体制御方法および記録媒体 | |
CN110114195B (zh) | 动作转移装置、动作转移方法和存储动作转移程序的非暂时性计算机可读介质 | |
JP7156643B2 (ja) | 姿勢推定装置、学習装置、方法、及びプログラム | |
Emaduddin et al. | Accurate floor detection and segmentation for indoor navigation using RGB+ D and stereo cameras | |
US20230259134A1 (en) | Information processing device, information processing method, and program | |
CN113359865B (zh) | 一种无人机飞行路径生成方法及系统 | |
US20240091950A1 (en) | Probabilistic approach to unifying representations for robotic mapping | |
KR102405818B1 (ko) | 노이즈 제거 방법, 노이즈 제거 장치 및 상기 방법을 실행시키기 위하여 기록매체에 저장된 컴퓨터 프로그램 | |
Bokovoy et al. | Maomaps: A photo-realistic benchmark for vslam and map merging quality assessment | |
KR101882319B1 (ko) | 경로 역추적 이동을 위한 수중 이미지 처리 장치 및 그 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21944113 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023525240 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18561611 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202180098367.X Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21944113 Country of ref document: EP Kind code of ref document: A1 |