WO2019186677A1 - Robot position/posture estimation and 3d measurement device - Google Patents

Robot position/posture estimation and 3d measurement device

Info

Publication number
WO2019186677A1
WO2019186677A1 (Application PCT/JP2018/012316)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
orientation
camera
asynchronous
posture
Prior art date
Application number
PCT/JP2018/012316
Other languages
French (fr)
Japanese (ja)
Inventor
秀行 粂
聡 笹谷
亮祐 三木
誠也 伊藤
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 filed Critical 株式会社日立製作所
Priority to PCT/JP2018/012316 priority Critical patent/WO2019186677A1/en
Publication of WO2019186677A1 publication Critical patent/WO2019186677A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/26 Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes

Definitions

  • the present invention relates to a robot position / orientation estimation / three-dimensional measurement apparatus.
  • Patent Document 1 states that "each of a plurality of first estimation units and a plurality of second estimation units executes self-position estimation processing independently of the others, based on images captured by cameras associated with it in advance. The game machine 200 further includes a totaling unit 150. The totaling unit 150 totals (in other words, combines) the estimation results of the plurality of first estimation units to generate one estimation result. Likewise, the estimation results of the plurality of second estimation units are totaled to generate one estimation result. Estimation results associated with the same time data are totaled."
  • a representative aspect of the present invention is a robot position / orientation estimation / three-dimensional measurement device comprising a plurality of asynchronous cameras mounted on a robot, a robot position / orientation / synchronization deviation amount estimation unit that estimates the position and orientation of the robot, the synchronization deviation amount of the asynchronous cameras, and the 3D positions of stationary objects existing around the robot from a plurality of images captured by the asynchronous cameras, and an output unit that outputs the estimation results of the robot position / orientation / synchronization deviation amount estimation unit.
  • according to the present invention, it is possible to realize a robot position / orientation estimation / three-dimensional measurement apparatus that can estimate with high accuracy the position and orientation of a robot and the 3D positions of stationary objects and moving objects existing around the robot.
  • FIG. 1 shows the block configuration of the robot position and orientation estimation / three-dimensional measurement apparatus 100, and FIG. 2 shows an example of the robot 200.
  • FIG. 1 is a diagram illustrating a block configuration of a robot position / orientation estimation / three-dimensional measurement apparatus 100.
  • the robot position / orientation estimation / three-dimensional measurement apparatus 100 estimates the position and orientation of the robot 200 from images captured by a plurality of asynchronous cameras 201 mounted on the robot 200, and measures the 3D positions of objects existing around the robot 200.
  • the robot position / orientation estimation / three-dimensional measurement apparatus 100 includes a camera position / orientation / stationary object 3D position estimation unit 101, a robot position / orientation / synchronization deviation amount estimation unit 102, a moving object 3D position / motion estimation unit 103, and an output unit 104.
  • the camera position / orientation / stationary object 3D position estimation unit 101 estimates, from the images captured by each asynchronous camera 201, the position and orientation of that asynchronous camera 201 at the shooting time of each image and the 3D positions of stationary objects existing around the robot 200.
  • the robot position / orientation / synchronization deviation amount estimation unit 102 estimates, from the estimation results of the camera position / orientation / stationary object 3D position estimation units 101, the position and orientation of the robot 200, the synchronization deviation amount of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot 200.
  • the moving object 3D position / motion estimation unit 103 estimates the 3D position and motion of the moving object existing around the robot 200 from the estimation result of the robot position / posture / synchronization amount estimation unit 102.
  • the output unit 104 outputs estimation results of the camera position / posture / stationary object 3D position estimation unit 101, the robot position / posture / synchronization shift amount estimation unit 102, and the moving object 3D position / motion estimation unit 103.
  • FIG. 2 is a diagram illustrating an example of the robot 200.
  • the robot 200 is, for example, a wheeled mobile robot, a bipedal walking robot, an automobile, an electric wheelchair, or the like, and moves in the environment.
  • the robot 200 includes a plurality of asynchronous cameras 201.
  • the identifier of the asynchronous camera 201 is c
  • C is the total number of asynchronous cameras 201. That is, the robot 200 includes C asynchronous cameras 201.
  • the robot 200 includes four asynchronous cameras 201 that photograph the front side, rear side, right side, and left side of the robot 200.
  • the relative position and orientation of each asynchronous camera 201 and the robot 200 are estimated in advance and are assumed to be known. Therefore, the position and orientation of each asynchronous camera 201 and the position and orientation of the robot 200 can be converted to each other.
  • the camera internal parameters such as the focal length, the image center, and the lens distortion parameter of each asynchronous camera 201 are estimated in advance and are assumed to be known.
  • FIG. 3 is a diagram illustrating an example of the shooting time of the asynchronous camera 201.
  • the photographing time is, for example, the time when the asynchronous camera 201 starts exposure or the time when exposure ends.
  • Asynchronous camera 201 captures an image at a predetermined capturing interval l c .
  • the identifier of the image captured by each asynchronous camera 201 is i
  • the total number of images captured by the c-th asynchronous camera 201 is I c .
  • in the example shown in FIG. 3, the shooting time of the first image shot by the 0th asynchronous camera 201 is 300T, the shooting time of the second image is 301T, and the shooting time of the third image is 302T, while the shooting time of the first image shot by the c-th asynchronous camera 201 is 310T and the shooting time of the second image is 311T.
  • that is, the difference between shooting time 300T and shooting time 301T, and between shooting time 301T and shooting time 302T, is the shooting interval l_0, and the difference between shooting time 310T and shooting time 311T is the shooting interval l_c.
  • there is a synchronization shift between the plurality of asynchronous cameras 201. The synchronization deviation amount δ_c of the c-th asynchronous camera 201 can be defined as the difference between the shooting time of an arbitrary reference image shot by an arbitrary reference asynchronous camera 201 and the shooting time of the corresponding reference image shot by the c-th asynchronous camera 201.
  • here, the asynchronous camera 201 serving as the reference is the 0th asynchronous camera 201, and the reference image is the first image captured by each asynchronous camera 201. That is, the difference between shooting time 300T and shooting time 310T is the synchronization deviation amount δ_c.
  • from the shooting interval l_c of the c-th asynchronous camera 201 and the synchronization deviation amount δ_c, the shooting time of the i-th image shot by the c-th asynchronous camera 201 is i·l_c + δ_c.
  • the resolution of the plurality of asynchronous cameras 201 may be different.
  • the asynchronous camera 201 with a long shooting interval and a high resolution may be combined with the asynchronous camera 201 with a short shooting interval and a low resolution.
  • the camera position / posture / stationary object 3D position estimation unit 101 estimates, from the images captured by each asynchronous camera 201 and independently for each asynchronous camera 201, the position and posture of that asynchronous camera 201 at the shooting time of each image and the 3D positions of stationary objects existing around the robot 200.
  • for the estimation, a known Structure from Motion method or Visual Simultaneous Localization and Mapping (vSLAM) method, which estimates the camera position and posture and the 3D positions of feature points by associating feature points between multiple images, can be used.
  • for example, as the vSLAM method, G. Klein and D. Murray, "Parallel Tracking and Mapping for Small AR Workspaces," Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality, pp. 225-234, 2007, can be used.
  • the robot position / posture / synchronization deviation amount estimation unit 102 estimates, from the estimation results of the camera position / posture / stationary object 3D position estimation units 101, the position and posture of the robot 200, the synchronization deviation amount of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot 200. Details of the processing will be described later.
  • the moving object 3D position / motion estimation unit 103 estimates the 3D position and movement of the moving object existing around the robot 200 from the estimation result of the robot position / posture / synchronization amount estimation unit 102.
  • a known method for estimating the 3D position and motion of the moving object from a plurality of images and the image capturing times can be used.
  • for example, Kume, Li, and Miyoshi, "3D measurement of a dynamic environment from multiple images with different shooting times based on reprojection error minimization," Image Recognition and Understanding Symposium (MIRU) Extended Abstracts, PS3-6, 2017, can be used.
  • in that method, the synchronization deviation amount of the asynchronous cameras is estimated in advance using a known pattern; the moving object 3D position / motion estimation unit 103, however, uses the synchronization deviation amount estimated by the robot position / posture / synchronization deviation amount estimation unit 102.
  • the output unit 104 outputs the estimation results of the camera position / posture / stationary object 3D position estimation unit 101, the robot position / posture / synchronization amount estimation unit 102, and the moving object 3D position / motion estimation unit 103.
  • the output unit 104 outputs the estimation result to a control device that controls the robot 200.
  • the output unit 104 may output the estimation result to a storage medium such as an HDD.
  • the connection between the asynchronous camera 201 and the robot position / orientation estimation / three-dimensional measurement apparatus 100 may be a wired connection such as USB, Ethernet, or CAN, or a wireless connection via a wireless network.
  • data stored in a storage medium existing in the asynchronous camera 201 or the robot 200 may be input to the robot position / orientation estimation / three-dimensional measurement apparatus 100 later.
  • the robot position / orientation estimation / three-dimensional measurement apparatus 100 may be provided in the robot 200, or may be provided in a PC or server connected to the robot 200.
  • the robot position / posture / synchronization deviation amount estimation unit 102 estimates, from the estimation results of the camera position / posture / stationary object 3D position estimation units 101, the posture R_ci and position t_ci of the robot 200 at the shooting time of each image shot by each asynchronous camera 201, the 3D position p_j of each feature point, and the synchronization deviation amount δ_c of each asynchronous camera 201, by minimizing the objective function E as shown in (Equation 1).
  • j is an identifier of a feature point
  • J represents the total number of feature points
  • for minimizing (Equation 1), a known nonlinear least squares method such as the Levenberg-Marquardt method or the Gauss-Newton method is used.
  • the position and orientation of each asynchronous camera 201 and the 3D position of the feature point at the shooting time of each image estimated by the camera position and orientation / stationary object 3D position estimation unit 101 are used as initial values for minimization.
  • the position and orientation of each asynchronous camera 201 are converted into the posture R_ci and position t_ci of the robot 200 based on the relative position and orientation of the robot 200 and each asynchronous camera 201 estimated in advance.
  • the objective function E is calculated by (Equation 2).
  • E g is a cost calculated based on the camera geometry
  • E m is a cost calculated based on the motion model
  • ⁇ m is a preset weight.
  • the motion model is, for example, a constant velocity motion, a constant acceleration motion, a constant angular velocity motion, a constant angular acceleration motion, or the like.
  • ν_cij represents the visibility of the feature point; it is 1 when the j-th feature point is detected in the i-th image captured by the c-th asynchronous camera 201, and 0 when it is not detected.
  • x cij is the detection position of the j-th feature point in the i-th image taken by the c-th asynchronous camera 201.
  • x ′ cij is the projection position of the j th feature point in the i th image taken by the c th asynchronous camera 201.
  • x′_cij is calculated from the 3D position p_j of the j-th feature point, the posture R_ci and position t_ci of the robot 200 at the shooting time of the i-th image taken by the c-th asynchronous camera 201, the relative position and orientation of the robot 200 and the c-th asynchronous camera 201, and the camera internal parameters of the c-th asynchronous camera 201, based on camera geometry such as a perspective projection model.
  • the cost E_m calculated based on the motion model is calculated by (Equation 4).
  • E t is a cost related to a position
  • E r is a cost related to a posture
  • ⁇ t and ⁇ r are preset weights.
  • the cost E_t related to the position is calculated based on a motion model such as constant velocity motion or constant acceleration motion.
  • the motion model is set in advance from the expected movement of the robot 200. For example, when constant velocity motion is used as the motion model, the cost E_t related to the position is calculated by (Equation 5).
  • v ci is the speed of the robot 200 at the time when the i-th image is taken by the c-th asynchronous camera 201, and is calculated by (Equation 6).
  • c_n and i_n are the identifiers of the image captured first after the time when the i-th image is captured by the c-th asynchronous camera 201, and are determined from the synchronization deviation amounts δ_c. That is, the image captured next after the i-th image captured by the c-th asynchronous camera 201 is the i_n-th image captured by the c_n-th asynchronous camera 201.
  • FIG. 4 is a diagram illustrating an example of the photographing time when four asynchronous cameras 201 are used.
  • the photographing times 320 of the asynchronous cameras 201 can be arranged in time order using the synchronization deviation amounts δ_0, δ_1, δ_2.
  • FIG. 5 is a diagram illustrating an example of the speed of the robot 200. It shows the position 330 and the speed 340 of the robot 200 when the four asynchronous cameras 201 capture images at the shooting times shown in FIG. 4.
  • for example, the speed v_30 of the robot 200 at the shooting time of the 0th image shot by the third asynchronous camera 201 is calculated from the position t_30 of the robot 200 at that shooting time, the position t_01 of the robot 200 at the shooting time of the first image shot by the 0th asynchronous camera 201, and the shooting times of the two images.
  • a ci is the acceleration of the robot 200 at the time when the i-th image is taken by the c-th asynchronous camera 201, and is calculated by (Equation 8).
  • c_p and i_p are the identifiers of the image captured last before the time when the i-th image is captured by the c-th asynchronous camera 201, and are determined from the synchronization deviation amounts δ_c. That is, the image captured immediately after the i_p-th image captured by the c_p-th asynchronous camera 201 is the i-th image captured by the c-th asynchronous camera 201.
  • the cost Er relating to the posture is calculated based on a motion model such as a constant angular velocity motion or a constant angular acceleration motion.
  • the motion model is set in advance from the expected movement of the robot 200. For example, when a constant angular velocity motion is used as the motion model, the cost Er relating to the posture is calculated by (Equation 9).
  • ω_ci is the angular velocity of the robot 200 at the time when the i-th image is taken by the c-th asynchronous camera 201, and is calculated by (Equation 10).
  • A is a function that converts a rotation matrix into Euler angles.
  • α_ci is the angular acceleration of the robot 200 at the time when the i-th image is taken by the c-th asynchronous camera 201, and is calculated by (Equation 12).
  • the robot position / orientation estimation / three-dimensional measurement apparatus 100 includes a robot position / orientation / synchronization deviation estimation unit 102.
  • the robot position / orientation / synchronization deviation amount estimation unit 102 estimates the position and orientation of the robot 200, the synchronization deviation amount of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot from images taken by the plurality of asynchronous cameras 201 (FIG. 1). Therefore, by considering the synchronization deviation amount of the asynchronous cameras 201, the position and orientation of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy.
  • the robot position / orientation estimation / three-dimensional measurement apparatus 100 includes a moving object 3D position / motion estimation unit 103.
  • the moving object 3D position / motion estimation unit 103 estimates the 3D positions of moving objects existing around the robot 200 based on the position and orientation of the robot 200 and the synchronization deviation amount of the asynchronous cameras 201 estimated by the robot position / posture / synchronization deviation amount estimation unit 102.
  • by using the highly accurate position and orientation of the robot 200 estimated by the robot position / orientation / synchronization deviation estimation unit 102, the 3D positions of moving objects existing around the robot 200 can be estimated with high accuracy.
  • the robot position / orientation / synchronization deviation estimation unit 102 estimates the position and orientation of the robot 200 and the synchronization deviation amount of the asynchronous cameras 201 from the images captured by the plurality of asynchronous cameras 201 by minimizing an objective function composed of a cost calculated based on the camera geometry and a cost calculated based on the motion model (FIGS. 4 and 5, Equations 1 and 2). Therefore, by considering the synchronization deviation amount of the asynchronous cameras 201 and the motion model of the robot 200, the position and orientation of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy.
  • the robot position / posture / synchronization deviation estimation unit 102 estimates the position and posture of the robot 200 at the shooting time of each image taken by each asynchronous camera 201 by minimizing the objective function (FIGS. 4 and 5, Equation 1). Therefore, the position and orientation of the robot 200 can be estimated at a higher cycle than when a plurality of synchronous cameras is used. For example, when four synchronous cameras of 10 fps are used, the position and orientation of the robot 200 are estimated at 10 Hz, whereas when four asynchronous cameras 201 of 10 fps are used, the position and orientation of the robot 200 can be estimated at 40 Hz.
  • the robot 200 may include an asynchronous camera 201 with a long shooting interval and a high resolution and an asynchronous camera 201 with a short shooting interval and a low resolution. In that case, the robot position / orientation / synchronization deviation amount estimation unit 102 can estimate the position and orientation of the robot 200 with high accuracy by using the feature points estimated from the high-resolution images, and can estimate the position and orientation of the robot 200 and the 3D positions of stationary objects existing around the robot at a high cycle by using the images captured at the short shooting interval.
  • the robot position / orientation estimation / three-dimensional measurement apparatus 100 includes a camera position / orientation / stationary object 3D position estimation unit 101.
  • the robot position / posture / synchronization deviation amount estimation unit 102 minimizes the objective function using, as initial values, the position and posture of the asynchronous camera 201 at each image shooting time and the 3D positions of stationary objects existing around the robot 200 estimated by the camera position / posture / stationary object 3D position estimation unit 101 (FIG. 1). Therefore, when the objective function is minimized by the nonlinear least squares method, the initial values are improved, so that the position and orientation of the robot 200 can be estimated with high accuracy and the calculation cost can be reduced.
  • the camera position / posture / stationary object 3D position estimation unit 101 associates feature points independently for each asynchronous camera 201, thereby reducing feature point association calculation costs.
  • Asynchronous camera 201 captures images at a constant capture interval l c .
  • the shooting interval of the asynchronous camera 201 is not limited to this.
  • the asynchronous camera 201 may shoot images at different shooting intervals for each image.
  • a timer for recording the shooting time of each image is provided in the asynchronous camera 201.
  • the asynchronous camera 201 outputs an image and a shooting time at the same time.
  • the robot position / posture / synchronization deviation amount estimation unit 102 estimates the position and posture of the robot 200, the synchronization deviation amount of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot using the shooting time of each image output by each asynchronous camera 201. Therefore, the position and orientation of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy even when using asynchronous cameras 201 whose shooting interval is not constant.
  • the robot position / posture / synchronization amount estimation unit 102 minimizes an objective function including a cost calculated based on the camera geometry and a cost calculated based on a preset motion model, The position and orientation of the robot 200, the amount of synchronization deviation of the asynchronous camera 201, and the 3D position of a stationary object existing around the robot are estimated.
  • the method for setting the motion model is not limited to this.
  • the robot position / posture / synchronization deviation amount estimation unit 102 may independently minimize a plurality of objective functions having different motion models, and use as the estimation result the position and posture of the robot 200 and the 3D positions of stationary objects existing around the robot estimated by the objective function whose cost calculated based on the camera geometry is lowest.
  • in this case, the robot position / posture / synchronization deviation estimation unit 102 automatically selects an optimal motion model. Therefore, the position and orientation of the robot 200 can be estimated with high accuracy for various motion models. Further, even when the motion of the robot 200 changes, the position and orientation of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy.
  • the robot position / posture / synchronization deviation amount estimation unit 102 uses the correspondence between the feature points estimated by the camera position / posture / stationary object 3D position estimation unit 101 to estimate the position and posture of the robot 200, the synchronization deviation amount of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot.
  • the correspondence of the feature points used for estimation is not limited to this.
  • the robot position / posture / synchronization deviation amount estimation unit 102 uses the correspondence between the feature points estimated by the camera position / posture / stationary object 3D position estimation unit 101, the position / posture of the robot 200, the synchronization deviation amount of the asynchronous camera 201, and the robot. After estimating the 3D position of a stationary object existing in the vicinity, additional feature points may be associated. For example, a pair of images with overlapping fields of view is selected from the estimated position and orientation of the robot 200 at each estimated image capturing time, and feature points are associated with each other. After that, by minimizing the objective function again, the position and orientation of the robot 200, the amount of synchronization deviation of the asynchronous camera 201, and the 3D position of a stationary object existing around the robot are updated.
  • the robot position / posture / synchronization deviation estimation unit 102 uses many feature points. Therefore, the position and orientation of the robot 200, the amount of synchronization shift of the asynchronous camera 201, and the 3D position of a stationary object existing around the robot can be estimated with high accuracy.
  • the robot position / posture / synchronization deviation amount estimation unit 102 minimizes the objective function using the position / posture of the robot 200 at the shooting time of all images taken by each asynchronous camera 201 as a parameter (Equation 1).
  • the parameter for minimizing the objective function is not limited to this.
  • the robot position / orientation / synchronization deviation amount estimation unit 102 may select a preset number of the latest images and use only them to minimize the objective function. That is, the objective function may be minimized using, as parameters, the position and orientation of the robot 200 at the shooting times of the selected images, the 3D positions of the feature points detected in the selected images, and the synchronization deviation amount of the asynchronous cameras 201.
  • the robot position / posture / synchronization deviation estimation unit 102 uses only a small number of images. Therefore, the position and orientation of the robot 200 and the 3D position of a stationary object existing around the robot can be estimated with a small calculation cost.
  • in a second embodiment, the robot further includes one or more position and orientation sensors. A position and orientation sensor is a sensor that can acquire the position, velocity, acceleration, posture, angular velocity, angular acceleration, and the like of the robot, such as a GPS receiver, a wheel encoder, an accelerometer, a compass, or a gyroscope.
  • FIG. 6 is a diagram showing a block configuration of the robot position / orientation estimation / three-dimensional measurement apparatus 500.
  • the robot position / orientation estimation / three-dimensional measurement apparatus 500 estimates the position and orientation of the robot 600 and the 3D positions of objects existing around the robot 600 from images taken by a plurality of asynchronous cameras 201 mounted on the robot 600 and the measurement data of one or more position and orientation sensors 602.
  • the robot position / orientation estimation / three-dimensional measurement apparatus 500 includes a camera position / orientation / stationary object 3D position estimation unit 101, a robot position / orientation / synchronization deviation amount estimation unit 502, a moving object 3D position / motion estimation unit 103, and an output unit 104.
  • the identifier of the position / orientation sensor 602 is s, and S is the total number of position / orientation sensors 602. That is, the robot 600 includes S position and orientation sensors 602.
  • the robot position / posture / synchronization deviation amount estimation unit 502 estimates the position and posture of the robot 600, the synchronization deviation amount of the asynchronous cameras 201, the synchronization deviation amount δ_s of each position and orientation sensor 602, and the 3D positions of stationary objects existing around the robot, based on the estimation results of the camera position / posture / stationary object 3D position estimation units 101 and the measurement data of each position and orientation sensor 602.
  • the synchronization deviation amount δ_s of the position / orientation sensor 602 is the difference between the shooting time of the first image captured by the 0th asynchronous camera 201 and the time when data is first acquired by each position / orientation sensor 602.
  • the robot position / posture / synchronization deviation amount estimation unit 502 estimates, from the estimation results of the camera position / posture / stationary object 3D position estimation units 101 and the measurement data of each position and orientation sensor 602, the posture R_ci and position t_ci of the robot 600 at the shooting time of each image shot by each asynchronous camera 201, the 3D position p_j of each feature point, the synchronization deviation amount δ_c of each asynchronous camera 201, and the synchronization deviation amount δ_s of each position and orientation sensor 602, by minimizing the objective function E′.
  • E_s is a cost based on the measurement data of the s-th position and orientation sensor, and λ_s is a preset weight for the s-th position and orientation sensor.
  • the cost E_s based on the measurement data of the s-th position and orientation sensor represents the consistency between the measurement data of the s-th position and orientation sensor and the position and orientation of the robot 600.
  • the cost E_s based on the measurement data of the s-th position and orientation sensor is calculated by (Equation 15); a sketch of one possible form of this objective appears after this list.
  • v′_sci represents the speed acquired by the s-th position and orientation sensor 602 at the time closest to the time when the i-th image is captured by the c-th asynchronous camera 201, and is determined using the synchronization deviation amount δ_s of the s-th position and orientation sensor 602.
  • the robot position / orientation estimation / three-dimensional measurement apparatus 500 includes a robot position / orientation / synchronization deviation estimation unit 502.
  • the robot position / posture / synchronization deviation estimation unit 502 estimates the position and posture of the robot 600, the synchronization deviation amount of the asynchronous cameras 201, the synchronization deviation amount of the position and orientation sensors 602, and the 3D positions of stationary objects existing around the robot, using the images taken by the plurality of asynchronous cameras 201 and the measurement data of one or more position and orientation sensors 602 (FIG. 6). Therefore, the position and orientation of the robot 600 and the 3D positions of objects existing around the robot can be estimated with high accuracy by using the measurement data of the position and orientation sensors 602 while taking their synchronization deviation amounts into consideration.
  • the present invention is not limited to the above-described embodiments, and various modifications are included.
  • the above-described embodiments have been described in detail for easy understanding of the present invention, and the invention is not necessarily limited to embodiments having all the configurations described.
  • Other embodiments conceivable within the scope of the technical idea of the present invention are also included in the scope of the present invention.
  • a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • Each of the above-described configurations, functions, processing units, processing means, and the like may be realized by hardware by designing a part or all of them with, for example, an integrated circuit.
  • Each of the above-described configurations, functions, and the like may be realized by software by interpreting and executing a program that realizes each function by the processor.
  • Information such as programs, tables, and files that realize each function can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
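The equation images for the second embodiment's objective function are not reproduced in the text above. As an assumed sketch only, consistent with the descriptions of E′, E_s, λ_s, and v′_sci but not the patent's verbatim equations, the sensor-augmented objective and the speed-based sensor cost could be written as:

    E′ = E_g + λ_m·E_m + Σ_s λ_s·E_s
    E_s = Σ_c Σ_i ||v′_sci - v_ci||²    (example for a sensor that measures speed)

Here v_ci is the robot speed implied by the estimated positions, as defined around (Equation 6).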

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The purpose of the present invention is to achieve a robot position/posture estimation and 3D measurement device capable of estimating with high accuracy the position and posture of a robot, and the 3D positions of stationary objects and moving objects present around the robot. To this end, this robot position/posture estimation and 3D measurement device is configured to comprise: a plurality of asynchronous cameras mounted on a robot; a robot position/posture and synchronization deviation estimation unit that estimates the position and posture of the robot, the synchronization deviation amount of the asynchronous cameras, and the 3D positions of stationary objects present around the robot, from a plurality of images captured by the asynchronous cameras; and an output unit that outputs estimation results of the robot position/posture and synchronization deviation estimation unit.

Description

Robot position and orientation estimation and 3D measurement device
The present invention relates to a robot position and orientation estimation and three-dimensional measurement apparatus.

In order to realize an autonomous mobile robot, it is necessary to estimate the position and orientation of the robot and to perform 3D measurement of objects around the robot. As a method for estimating the position and orientation of the robot and performing 3D measurement around the robot, there is a method using a plurality of cameras mounted on the robot. Patent Document 1 states that "each of a plurality of first estimation units and a plurality of second estimation units executes self-position estimation processing independently of the others, based on images captured by cameras associated with it in advance. The game machine 200 further includes a totaling unit 150. The totaling unit 150 totals (in other words, combines) the estimation results of the plurality of first estimation units to generate one estimation result. Likewise, the estimation results of the plurality of second estimation units are totaled to generate one estimation result. Estimation results associated with the same time data are totaled."

JP-A-2017-72560

In the invention described in Patent Document 1, by using a plurality of cameras mounted on the robot, the position and orientation of the robot and the 3D positions of surrounding objects can be estimated with higher accuracy than when a single camera is used. However, since it is assumed that the shooting times of the plurality of cameras mounted on the robot are the same, the estimation error of the robot's position and orientation that arises when the shooting times of the cameras differ is not considered. Moreover, the estimation of the 3D positions of objects around the robot targets only stationary objects, and the 3D positions of moving objects are not estimated.

A representative aspect of the present invention is a robot position and orientation estimation and three-dimensional measurement device comprising: a plurality of asynchronous cameras mounted on a robot; a robot position and orientation / synchronization deviation amount estimation unit that estimates the position and orientation of the robot, the synchronization deviation amount of the asynchronous cameras, and the 3D positions of stationary objects existing around the robot from a plurality of images captured by the asynchronous cameras; and an output unit that outputs the estimation results of the robot position and orientation / synchronization deviation amount estimation unit.

According to the present invention, it is possible to realize a robot position and orientation estimation and three-dimensional measurement apparatus that can estimate with high accuracy the position and orientation of a robot and the 3D positions of stationary objects and moving objects existing around the robot.

FIG. 1 shows the block configuration of the robot position and orientation estimation and three-dimensional measurement apparatus 100. FIG. 2 shows an example of the robot 200. FIG. 3 shows an example of the shooting times of the asynchronous camera 201. FIG. 4 shows an example of the shooting times when four asynchronous cameras 201 are used. FIG. 5 shows an example of the speed of the robot 200. FIG. 6 shows the block configuration of the robot position and orientation estimation and three-dimensional measurement apparatus 500.
The first embodiment of the robot position and orientation estimation and three-dimensional measurement apparatus will be described below with reference to FIGS. 1 to 5.
(Block configuration)
FIG. 1 is a diagram illustrating the block configuration of the robot position and orientation estimation and three-dimensional measurement apparatus 100. The robot position and orientation estimation and three-dimensional measurement apparatus 100 estimates the position and orientation of the robot 200 from images captured by a plurality of asynchronous cameras 201 mounted on the robot 200, and measures the 3D positions of objects existing around the robot 200. The robot position and orientation estimation and three-dimensional measurement apparatus 100 includes a camera position and orientation / stationary object 3D position estimation unit 101, a robot position and orientation / synchronization deviation amount estimation unit 102, a moving object 3D position / motion estimation unit 103, and an output unit 104. The camera position and orientation / stationary object 3D position estimation unit 101 estimates, from the images captured by each asynchronous camera 201, the position and orientation of that asynchronous camera 201 at the shooting time of each image and the 3D positions of stationary objects existing around the robot 200. The robot position and orientation / synchronization deviation amount estimation unit 102 estimates, from the estimation results of the camera position and orientation / stationary object 3D position estimation units 101, the position and orientation of the robot 200, the synchronization deviation amount of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot 200. The moving object 3D position / motion estimation unit 103 estimates the 3D positions and motion of moving objects existing around the robot 200 from the estimation results of the robot position and orientation / synchronization deviation amount estimation unit 102. The output unit 104 outputs the estimation results of the camera position and orientation / stationary object 3D position estimation unit 101, the robot position and orientation / synchronization deviation amount estimation unit 102, and the moving object 3D position / motion estimation unit 103.
FIG. 2 is a diagram illustrating an example of the robot 200. The robot 200 is, for example, a wheeled mobile robot, a bipedal walking robot, an automobile, an electric wheelchair, or the like, and moves in the environment. The robot 200 includes a plurality of asynchronous cameras 201. In the following description, the identifier of an asynchronous camera 201 is c, and C is the total number of asynchronous cameras 201; that is, the robot 200 includes C asynchronous cameras 201. In the example of FIG. 2, the robot 200 includes four asynchronous cameras 201 that photograph the front side, rear side, right side, and left side of the robot 200.

The relative position and orientation of each asynchronous camera 201 and the robot 200 are estimated in advance and are assumed to be known. Therefore, the position and orientation of each asynchronous camera 201 and the position and orientation of the robot 200 can be converted to each other. The camera internal parameters of each asynchronous camera 201, such as the focal length, the image center, and the lens distortion parameters, are also estimated in advance and are assumed to be known.

FIG. 3 is a diagram illustrating an example of the shooting times of the asynchronous camera 201. The shooting time is, for example, the time when the asynchronous camera 201 starts exposure or the time when it ends exposure. The asynchronous camera 201 captures images at a predetermined shooting interval l_c. In the following description, the identifier of an image captured by each asynchronous camera 201 is i, and the total number of images captured by the c-th asynchronous camera 201 is I_c. In the example shown in FIG. 3, the shooting time of the first image shot by the 0th asynchronous camera 201 is 300T, the shooting time of the second image is 301T, and the shooting time of the third image is 302T, while the shooting time of the first image shot by the c-th asynchronous camera 201 is 310T and the shooting time of the second image is 311T. That is, the difference between shooting time 300T and shooting time 301T, and between shooting time 301T and shooting time 302T, is the shooting interval l_0, and the difference between shooting time 310T and shooting time 311T is the shooting interval l_c.
There is a synchronization shift between the plurality of asynchronous cameras 201. The synchronization deviation amount δ_c of the c-th asynchronous camera 201 can be defined as the difference between the shooting time of an arbitrary reference image shot by an arbitrary reference asynchronous camera 201 and the shooting time of the corresponding reference image shot by the c-th asynchronous camera 201. In the following description, the asynchronous camera 201 serving as the reference is the 0th asynchronous camera 201, and the reference image is the first image captured by each asynchronous camera 201. That is, the difference between shooting time 300T and shooting time 310T is the synchronization deviation amount δ_c. From the shooting interval l_c of the c-th asynchronous camera 201 and the synchronization deviation amount δ_c, the shooting time of the i-th image shot by the c-th asynchronous camera 201 is i·l_c + δ_c.
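As a concrete illustration of this timing model, the following short sketch (hypothetical Python, not part of the patent; the interval and deviation values are made-up examples) computes the shooting time i·l_c + δ_c of each image of each camera.

    # Hypothetical illustration of the timing model: the shooting time of the
    # i-th image of camera c is i * l[c] + delta[c] (camera 0 is the reference).
    l = [0.100, 0.100, 0.050, 0.200]      # shooting intervals l_c in seconds (example values)
    delta = [0.000, 0.025, 0.040, 0.010]  # synchronization deviation amounts delta_c (example values)

    def shooting_time(c: int, i: int) -> float:
        """Return the shooting time of the i-th image taken by the c-th camera."""
        return i * l[c] + delta[c]

    for c in range(len(l)):
        times = [round(shooting_time(c, i), 3) for i in range(3)]
        print(f"camera {c}: first three shooting times {times}")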
The resolutions of the plurality of asynchronous cameras 201 may be different. For example, an asynchronous camera 201 with a long shooting interval and a high resolution may be combined with an asynchronous camera 201 with a short shooting interval and a low resolution.

The camera position and orientation / stationary object 3D position estimation unit 101 estimates, from the images captured by each asynchronous camera 201 and independently for each asynchronous camera 201, the position and orientation of that asynchronous camera 201 at the shooting time of each image and the 3D positions of stationary objects existing around the robot 200. For the estimation, a known Structure from Motion method or Visual Simultaneous Localization and Mapping (vSLAM) method, which estimates the camera position and orientation and the 3D positions of feature points by associating feature points between multiple images, can be used. For example, as the vSLAM method, G. Klein and D. Murray, "Parallel Tracking and Mapping for Small AR Workspaces," Proc. IEEE and ACM Int. Symp. on Mixed and Augmented Reality, pp. 225-234, 2007, can be used.

The robot position and orientation / synchronization deviation amount estimation unit 102 estimates, from the estimation results of the camera position and orientation / stationary object 3D position estimation units 101, the position and orientation of the robot 200, the synchronization deviation amount of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot 200. Details of the processing will be described later.

The moving object 3D position / motion estimation unit 103 estimates the 3D positions and motion of moving objects existing around the robot 200 from the estimation results of the robot position and orientation / synchronization deviation amount estimation unit 102. For the estimation, a known method for estimating the 3D position and motion of a moving object from a plurality of images and their shooting times can be used. For example, Kume, Li, and Miyoshi, "3D measurement of a dynamic environment from multiple images with different shooting times based on reprojection error minimization," Image Recognition and Understanding Symposium (MIRU) Extended Abstracts, PS3-6, 2017, can be used. In that method, the synchronization deviation amount of the asynchronous cameras is estimated in advance using a known pattern; the moving object 3D position / motion estimation unit 103, however, uses the synchronization deviation amount estimated by the robot position and orientation / synchronization deviation amount estimation unit 102.

The output unit 104 outputs the estimation results of the camera position and orientation / stationary object 3D position estimation unit 101, the robot position and orientation / synchronization deviation amount estimation unit 102, and the moving object 3D position / motion estimation unit 103. For example, the output unit 104 outputs the estimation results to a control device that controls the robot 200. The output unit 104 may also output the estimation results to a storage medium such as an HDD.
The connection between the asynchronous cameras 201 and the robot position and orientation estimation and three-dimensional measurement apparatus 100 may be a wired connection such as USB, Ethernet, or CAN, or a wireless connection via a wireless network. Alternatively, data stored in a storage medium inside the asynchronous camera 201 or the robot 200 may be input to the robot position and orientation estimation and three-dimensional measurement apparatus 100 later. The robot position and orientation estimation and three-dimensional measurement apparatus 100 may be provided in the robot 200, or may be provided in a PC or server connected to the robot 200.
(Operation of the robot position and orientation / synchronization deviation amount estimation unit)
Next, the contents of the processing in the robot position and orientation / synchronization deviation amount estimation unit 102 will be described with reference to FIGS. 4 and 5. The robot position and orientation / synchronization deviation amount estimation unit 102 estimates, from the estimation results of the camera position and orientation / stationary object 3D position estimation units 101, the posture R_ci and position t_ci of the robot 200 at the shooting time of each image shot by each asynchronous camera 201, the 3D position p_j of each feature point, and the synchronization deviation amount δ_c of each asynchronous camera 201, by minimizing the objective function E as shown in (Equation 1).
(Equation 1: equation image not reproduced)
Here, j is the identifier of a feature point and J is the total number of feature points.
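The equation image for (Equation 1) is not available in this text. Based on the surrounding description, and only as an assumed reading rather than the patent's verbatim formula, (Equation 1) expresses a joint minimization over all robot poses, feature-point positions, and synchronization deviation amounts:

    {R_ci, t_ci, p_j, δ_c} = argmin E    (over c = 0..C-1, i = 0..I_c-1, j = 0..J-1)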
For minimizing (Equation 1), a known nonlinear least squares method such as the Levenberg-Marquardt method or the Gauss-Newton method is used. Here, the position and orientation of each asynchronous camera 201 at the shooting time of each image and the 3D positions of the feature points estimated by the camera position and orientation / stationary object 3D position estimation unit 101 are used as initial values for the minimization. The position and orientation of each asynchronous camera 201 are converted into the posture R_ci and position t_ci of the robot 200 based on the relative position and orientation of the robot 200 and each asynchronous camera 201 estimated in advance.
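To make the minimization step concrete, the following is a minimal sketch (not the patent's implementation) of how such a joint estimate could be set up with a Levenberg-Marquardt style nonlinear least squares solver. The state packing, the toy problem sizes, and the placeholder residuals are all hypothetical; a real system would fill in the reprojection and motion-model terms described below.

    # Minimal sketch: jointly optimize robot poses, feature-point positions, and
    # per-camera synchronization deviations with a nonlinear least-squares solver.
    # Illustrative skeleton only, not the patent's implementation.
    import numpy as np
    from scipy.optimize import least_squares

    C, I, J = 2, 3, 5                      # cameras, images per camera, feature points (toy sizes)

    def unpack(x):
        """Split the flat parameter vector into poses, points, and deviations."""
        n_pose = C * I * 6                 # 3 rotation (axis-angle) + 3 translation per image
        poses = x[:n_pose].reshape(C, I, 6)
        points = x[n_pose:n_pose + 3 * J].reshape(J, 3)
        deltas = x[n_pose + 3 * J:]        # one synchronization deviation per camera
        return poses, points, deltas

    def residuals(x):
        """Stack camera-geometry residuals and motion-model residuals."""
        poses, points, deltas = unpack(x)
        res = []
        # A real implementation would append the reprojection residuals
        # x_cij - x'_cij here, and the motion-model residuals (for example,
        # velocity differences between consecutive shooting times ordered by
        # i * l_c + delta_c). Placeholders keep this sketch runnable.
        res.append((poses - 0.1).ravel())
        res.append((points - 0.2).ravel())
        res.append(deltas - 0.05)
        return np.concatenate(res)

    x0 = np.zeros(C * I * 6 + 3 * J + C)   # initial values would come from unit 101
    result = least_squares(residuals, x0, method="lm")
    print("converged:", result.success, "cost:", result.cost)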
The objective function E is calculated by (Equation 2).
(Equation 2: equation image not reproduced)
Here, E_g is a cost calculated based on the camera geometry, E_m is a cost calculated based on the motion model, and λ_m is a preset weight. The motion model is, for example, constant velocity motion, constant acceleration motion, constant angular velocity motion, or constant angular acceleration motion.
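The equation image for (Equation 2) is not available here. From the definitions of E_g, E_m, and λ_m above, a plausible (assumed) form is a weighted sum of the two costs:

    E = E_g + λ_m·E_m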
The cost E_g calculated based on the camera geometry is calculated by (Equation 3).
(Equation 3: equation image not reproduced)
Here, ν_cij represents the visibility of a feature point; it is 1 when the j-th feature point is detected in the i-th image captured by the c-th asynchronous camera 201, and 0 when it is not detected. x_cij is the detected position of the j-th feature point in the i-th image taken by the c-th asynchronous camera 201. x′_cij is the projected position of the j-th feature point in the i-th image taken by the c-th asynchronous camera 201. x′_cij is calculated from the 3D position p_j of the j-th feature point, the posture R_ci and position t_ci of the robot 200 at the shooting time of the i-th image taken by the c-th asynchronous camera 201, the relative position and orientation of the robot 200 and the c-th asynchronous camera 201, and the camera internal parameters of the c-th asynchronous camera 201, based on camera geometry such as a perspective projection model.
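The equation image for (Equation 3) is not available here. Given that E_g measures the discrepancy between detected and projected feature points over all cameras, images, and feature points, a plausible (assumed) form is the reprojection error:

    E_g = Σ_c Σ_i Σ_j ν_cij·||x_cij - x′_cij||²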
The cost E_m calculated based on the motion model is calculated by (Equation 4).
(Equation 4: equation image not reproduced)
Here, E_t is a cost related to the position, E_r is a cost related to the posture, and λ_t and λ_r are preset weights.
The cost E_t related to position is calculated based on a motion model such as constant-velocity motion or constant-acceleration motion. The motion model is set in advance from the expected movement of the robot 200. For example, when constant-velocity motion is used as the motion model, the cost E_t is calculated by (Equation 5).
(Equation 5)   E_t = \sum_{c} \sum_{i} \left\| v_{c_n i_n} - v_{ci} \right\|^2

Here, v_ci is the velocity of the robot 200 at the time the i-th image is captured by the c-th asynchronous camera 201, and is calculated by (Equation 6).
(Equation 6)   v_{ci} = \dfrac{t_{c_n i_n} - t_{ci}}{\tau_{c_n i_n} - \tau_{ci}}

where τ_ci denotes the shooting time of the i-th image captured by the c-th asynchronous camera 201, obtained from the shooting interval and the synchronization deviation amount δ_c. Here, c_n and i_n are the identifiers of the image captured first after the time at which the i-th image is captured by the c-th asynchronous camera 201, and they are determined from the synchronization deviation amounts δ_c. That is, the image captured next after the i-th image of the c-th asynchronous camera 201 is the i_n-th image of the c_n-th asynchronous camera 201.
FIG. 4 is a diagram illustrating an example of the shooting times when four asynchronous cameras 201 are used. Using the synchronization deviation amounts δ_0, δ_1, δ_2, the shooting times 320 of the asynchronous cameras 201 can be arranged in time order. For example, the image captured next after the 0th image of the third asynchronous camera 201 is the 1st image of the 0th asynchronous camera 201; that is, for c = 3 and i = 0, c_n = 0 and i_n = 1.
FIG. 5 is a diagram illustrating an example of the velocity of the robot 200. It shows the position 330 and velocity 340 of the robot 200 when the four asynchronous cameras 201 capture images at the shooting times shown in FIG. 4. For example, the velocity v_30 of the robot 200 at the shooting time of the 0th image captured by the third asynchronous camera 201 is calculated from the position t_30 of the robot 200 at that shooting time, the position t_01 of the robot 200 at the shooting time of the 1st image captured by the 0th asynchronous camera 201, and the shooting times of these two images.
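The ordering of the shooting times and the finite-difference velocities of (Equations 5 and 6) can be sketched as follows. The per-camera shooting intervals, the synchronization deviations, and the assumption that the shooting time of image i of camera c equals i·l_c + δ_c are illustrative values, not data from the embodiment.

```python
# Sketch of the constant-velocity cost (Equations 5 and 6): order all images by
# their shooting times, take finite-difference velocities between consecutive
# robot positions, and penalize changes of velocity.
import numpy as np

intervals = np.array([0.10, 0.10, 0.10, 0.10])     # shooting interval l_c of 4 cameras [s]
deltas    = np.array([0.00, 0.025, 0.05, 0.075])   # synchronization deviations delta_c [s]
num_images = 3                                     # images 0..2 per camera

times = {(c, i): i * intervals[c] + deltas[c]      # shooting time of image i of camera c
         for c in range(len(intervals)) for i in range(num_images)}
order = sorted(times, key=times.get)               # (c, i) pairs arranged in time order

# Hypothetical robot positions t_ci at each shooting time (exact constant velocity here).
positions = {ci: np.array([1.0, 0.0, 0.0]) * times[ci] for ci in order}

def velocity(k):
    """v_ci for the k-th image in time order, using the next image (c_n, i_n)."""
    ci, cn_in = order[k], order[k + 1]
    return (positions[cn_in] - positions[ci]) / (times[cn_in] - times[ci])

E_t = sum(float(np.sum((velocity(k + 1) - velocity(k)) ** 2))
          for k in range(len(order) - 2))
print(order[:5], E_t)   # [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1)] ... and E_t ~ 0.0
```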
When constant-acceleration motion is used as the motion model, the cost E_t related to position is calculated by (Equation 7).
(Equation 7)   E_t = \sum_{c} \sum_{i} \left\| a_{c_n i_n} - a_{ci} \right\|^2

Here, a_ci is the acceleration of the robot 200 at the time the i-th image is captured by the c-th asynchronous camera 201, and is calculated by (Equation 8).
(Equation 8)   a_{ci} = \dfrac{v_{ci} - v_{c_p i_p}}{\tau_{ci} - \tau_{c_p i_p}}

Here, c_p and i_p are the identifiers of the image captured last before the time at which the i-th image is captured by the c-th asynchronous camera 201, and they are determined from the synchronization deviation amounts δ_c. That is, the image captured next after the i_p-th image of the c_p-th asynchronous camera 201 is the i-th image of the c-th asynchronous camera 201.
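A one-dimensional sketch of the constant-acceleration variant of (Equations 7 and 8) is shown below; the uniform timestamps and the exact finite-difference form are assumptions made for the illustration.

```python
# Sketch of the constant-acceleration cost (Equations 7 and 8): accelerations are
# finite differences of consecutive velocities, and changes of acceleration are
# penalized. One-dimensional toy trajectory with constant acceleration a = 2.
import numpy as np

t = np.array([0.0, 0.025, 0.05, 0.075, 0.1, 0.125])   # shooting times in time order
x = 0.5 * 2.0 * t ** 2                                 # positions under constant acceleration

v = np.diff(x) / np.diff(t)                            # velocity toward the next image
a = np.diff(v) / np.diff(t)[:-1]                       # acceleration from the previous velocity

E_t = float(np.sum(np.diff(a) ** 2))                   # penalize change of acceleration
print(a, E_t)                                          # a is [2., 2., 2., 2.], E_t ~ 0.0
```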
The cost E_r related to posture is calculated based on a motion model such as constant-angular-velocity motion or constant-angular-acceleration motion. The motion model is set in advance from the expected movement of the robot 200. For example, when constant-angular-velocity motion is used as the motion model, the cost E_r is calculated by (Equation 9).
(Equation 9)   E_r = \sum_{c} \sum_{i} \left\| \omega_{c_n i_n} - \omega_{ci} \right\|^2

Here, ω_ci is the angular velocity of the robot 200 at the time the i-th image is captured by the c-th asynchronous camera 201, and is calculated by (Equation 10).
(Equation 10)   \omega_{ci} = \dfrac{A(R_{c_n i_n}) - A(R_{ci})}{\tau_{c_n i_n} - \tau_{ci}}

Here, A is a function that converts a rotation matrix into Euler angles.
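A small sketch of the constant-angular-velocity cost of (Equations 9 and 10) follows; the ZYX Euler convention and the use of SciPy's Rotation class as a stand-in for the function A are assumptions made for the illustration.

```python
# Sketch of the constant-angular-velocity cost (Equations 9 and 10): A converts a
# rotation matrix to Euler angles, omega is its finite difference over time, and
# changes of omega are penalized.
import numpy as np
from scipy.spatial.transform import Rotation

def A(R_matrix):
    """Rotation matrix -> Euler angles in radians (stand-in for the function A)."""
    return Rotation.from_matrix(R_matrix).as_euler("zyx")

times = np.array([0.0, 0.025, 0.05, 0.075])            # shooting times in time order
rate = np.array([0.0, 0.0, 0.4])                        # constant angular velocity about z [rad/s]
rotations = [Rotation.from_rotvec(rate * t).as_matrix() for t in times]

omega = np.array([(A(rotations[k + 1]) - A(rotations[k])) / (times[k + 1] - times[k])
                  for k in range(len(times) - 1)])
E_r = float(np.sum((omega[1:] - omega[:-1]) ** 2))
print(omega, E_r)                                       # rows ~ [0.4, 0., 0.], E_r ~ 0.0
```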
When constant-angular-acceleration motion is used as the motion model, the cost E_r related to posture is calculated by (Equation 11).
(Equation 11)   E_r = \sum_{c} \sum_{i} \left\| \alpha_{c_n i_n} - \alpha_{ci} \right\|^2

Here, α_ci is the angular acceleration of the robot 200 at the time the i-th image is captured by the c-th asynchronous camera 201, and is calculated by (Equation 12).
(Equation 12)   \alpha_{ci} = \dfrac{\omega_{ci} - \omega_{c_p i_p}}{\tau_{ci} - \tau_{c_p i_p}}

(Effect)
According to the first embodiment described above, the following operational effects are obtained.
(1) The robot position/posture estimation and three-dimensional measurement apparatus 100 includes the robot position/posture/synchronization deviation estimation unit 102. The robot position/posture/synchronization deviation estimation unit 102 estimates, from images captured by the plurality of asynchronous cameras 201, the position and posture of the robot 200, the synchronization deviation amounts of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot (FIG. 1). Because the synchronization deviation amounts of the asynchronous cameras 201 are taken into account, the position and posture of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy.
(2) The robot position/posture estimation and three-dimensional measurement apparatus 100 includes the moving object 3D position/motion estimation unit 103. The moving object 3D position/motion estimation unit 103 estimates the 3D positions of moving objects existing around the robot 200 from the position and posture of the robot 200 and the synchronization deviation amounts of the asynchronous cameras 201 estimated by the robot position/posture/synchronization deviation estimation unit 102 (FIG. 1). Therefore, the 3D positions of moving objects existing around the robot 200 can be estimated without estimating the synchronization deviation amounts of the asynchronous cameras 201 in advance. In addition, by using the highly accurate position and posture of the robot 200 estimated by the robot position/posture/synchronization deviation estimation unit 102, the 3D positions of moving objects existing around the robot 200 can be estimated with high accuracy.
(3) The robot position/posture/synchronization deviation estimation unit 102 estimates the position and posture of the robot 200 and the synchronization deviation amounts of the asynchronous cameras 201 from the images captured by the plurality of asynchronous cameras 201 by minimizing an objective function composed of a cost calculated based on the camera geometry and a cost calculated based on a motion model (FIGS. 4 and 5, Equations 1 and 2). Therefore, because the synchronization deviation amounts of the asynchronous cameras 201 and the motion model of the robot 200 are taken into account, the position and posture of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy.
(4) The robot position/posture/synchronization deviation estimation unit 102 estimates the position and posture of the robot 200 at the shooting time of each image captured by each asynchronous camera 201 by minimizing the objective function (FIGS. 4 and 5, Equation 1). Therefore, the position and posture of the robot 200 can be estimated at a higher rate than when a plurality of synchronized cameras is used. For example, when four 10-fps synchronized cameras are used, the position and posture of the robot 200 are estimated at 10 Hz, whereas when four 10-fps asynchronous cameras 201 are used, the position and posture of the robot 200 can be estimated at 40 Hz.
(5) The robot 200 includes asynchronous cameras 201 with a long shooting interval and high resolution and asynchronous cameras 201 with a short shooting interval and low resolution. Therefore, the robot position/posture/synchronization deviation estimation unit 102 can estimate the position and posture of the robot 200 with high accuracy by using the feature points estimated from the high-resolution images, and can estimate the position and posture of the robot 200 and the 3D positions of stationary objects existing around the robot at a high rate by using the images with the short shooting interval.
(6) The robot position/posture estimation and three-dimensional measurement apparatus 100 includes the camera position/posture/stationary object 3D position estimation unit 101. The robot position/posture/synchronization deviation estimation unit 102 minimizes the objective function using as initial values the position and orientation of the asynchronous cameras 201 at each image shooting time and the 3D positions of the stationary objects existing around the robot 200 estimated by the camera position/posture/stationary object 3D position estimation unit 101 (FIG. 1). Because the initial values of the minimization by the nonlinear least-squares method are improved, the position and posture of the robot 200 can be estimated with high accuracy and the computational cost can be reduced. In addition, since the camera position/posture/stationary object 3D position estimation unit 101 associates feature points independently for each asynchronous camera 201, the computational cost of feature-point association can be reduced.
(Modification 1)
The asynchronous cameras 201 capture images at a constant shooting interval l_c. However, the shooting interval of the asynchronous cameras 201 is not limited to this.
Each asynchronous camera 201 may capture images at a shooting interval that differs from image to image. In this case, a timer that records the shooting time of each image is provided in the asynchronous camera 201, and the asynchronous camera 201 outputs each image together with its shooting time.
According to Modification 1 described above, the following operational effects are obtained. That is, the robot position/posture/synchronization deviation estimation unit 102 estimates the position and posture of the robot 200, the synchronization deviation amounts of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot using the shooting time output by each asynchronous camera 201 for each image. Therefore, even with asynchronous cameras 201 whose shooting intervals are not constant, the position and posture of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy.
(Modification 2)
The robot position/posture/synchronization deviation estimation unit 102 estimates the position and posture of the robot 200, the synchronization deviation amounts of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot by minimizing an objective function composed of a cost calculated based on the camera geometry and a cost calculated based on a motion model set in advance. However, the method of setting the motion model is not limited to this.
The robot position/posture/synchronization deviation estimation unit 102 may independently minimize a plurality of objective functions with different motion models, and adopt as the estimation result the position and posture of the robot 200 and the 3D positions of stationary objects existing around the robot obtained from the objective function whose camera-geometry cost is smallest.
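As a sketch of this selection strategy, several objectives differing only in their motion-model residuals can be minimized independently and compared by their camera-geometry cost alone. The tiny stand-in residuals below are assumptions made for the example and merely take the place of the reprojection and motion terms described earlier.

```python
# Sketch of Modification 2: minimize one objective per candidate motion model and
# keep the result whose camera-geometry cost E_g is smallest.
import numpy as np
from scipy.optimize import least_squares

def geometry_residuals(params):
    return params - np.array([1.0, 2.0])          # stand-in for reprojection residuals

motion_residuals = {
    "constant_velocity":     lambda p: 0.1 * np.diff(p),
    "constant_acceleration": lambda p: 0.1 * np.diff(p, n=2) if p.size > 2 else np.zeros(1),
}

results = {}
for name, motion in motion_residuals.items():
    stacked = lambda p, m=motion: np.concatenate([geometry_residuals(p), np.atleast_1d(m(p))])
    fit = least_squares(stacked, np.zeros(2))
    results[name] = float(np.sum(geometry_residuals(fit.x) ** 2))

best = min(results, key=results.get)              # smallest camera-geometry cost wins
print(best, results)
```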
According to Modification 2 described above, the following operational effects are obtained. That is, the robot position/posture/synchronization deviation estimation unit 102 automatically selects the optimal motion model. Therefore, the position and posture of robots 200 with various motion models can be estimated with high accuracy. In addition, even when the motion model of the robot 200 changes, the position and posture of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy.
(Modification 3)
The robot position/posture/synchronization deviation estimation unit 102 estimates the position and posture of the robot 200, the synchronization deviation amounts of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot using the feature-point correspondences estimated by the camera position/posture/stationary object 3D position estimation unit 101. However, the feature-point correspondences used for the estimation are not limited to these.
After estimating the position and posture of the robot 200, the synchronization deviation amounts of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot using the feature-point correspondences estimated by the camera position/posture/stationary object 3D position estimation unit 101, the robot position/posture/synchronization deviation estimation unit 102 may perform additional feature-point association. For example, pairs of images with overlapping fields of view are selected based on the estimated position and posture of the robot 200 at each image shooting time, and feature points are associated between them. The objective function is then minimized again to update the position and posture of the robot 200, the synchronization deviation amounts of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot.
According to Modification 3 described above, the following operational effects are obtained. That is, the robot position/posture/synchronization deviation estimation unit 102 uses a larger number of feature points. Therefore, the position and posture of the robot 200, the synchronization deviation amounts of the asynchronous cameras 201, and the 3D positions of stationary objects existing around the robot can be estimated with high accuracy.
(Modification 4)
The robot position/posture/synchronization deviation estimation unit 102 minimizes the objective function using as parameters the position and posture of the robot 200 at the shooting times of all images captured by each asynchronous camera 201 (Equation 1). However, the parameters of the minimization of the objective function are not limited to these.
The robot position/posture/synchronization deviation estimation unit 102 may select a preset number of the most recent images and use only them for minimizing the objective function. That is, the objective function may be minimized using as parameters the position and posture of the robot 200 at the shooting times of the selected images, the 3D positions of the feature points detected in the selected images, and the synchronization deviation amounts of the asynchronous cameras 201.
According to Modification 4 described above, the following operational effects are obtained. That is, the robot position/posture/synchronization deviation estimation unit 102 uses only a small number of images. Therefore, the position and posture of the robot 200 and the 3D positions of stationary objects existing around the robot can be estimated at low computational cost.
Hereinafter, a second embodiment of the robot position/posture estimation and three-dimensional measurement apparatus will be described with reference to FIG. 6. In the following description, the same components as those in the first embodiment are denoted by the same reference numerals, and mainly the differences are described. Points that are not specifically described are the same as in the first embodiment. This embodiment addresses the case where the robot includes one or more position and orientation sensors in addition to the plurality of asynchronous cameras 201. A position and orientation sensor is a sensor that can acquire the position, velocity, acceleration, posture, angular velocity, angular acceleration, or the like of the robot, such as a GPS receiver, a wheel encoder, an accelerometer, a compass, or a gyroscope.
(Block configuration)
FIG. 6 is a diagram showing the block configuration of the robot position/posture estimation and three-dimensional measurement apparatus 500. The robot position/posture estimation and three-dimensional measurement apparatus 500 estimates the position and posture of the robot 600 from images captured by the plurality of asynchronous cameras 201 mounted on the robot 600 and from the measurement data of one or more position and orientation sensors 602, and measures the 3D positions of objects existing around the robot 600. The robot position/posture estimation and three-dimensional measurement apparatus 500 includes the camera position/posture/stationary object 3D position estimation unit 101, a robot position/posture/synchronization deviation estimation unit 502, the moving object 3D position/motion estimation unit 103, and the output unit 104.
In the following description, s denotes the identifier of a position and orientation sensor 602, and S denotes the total number of position and orientation sensors 602. That is, the robot 600 includes S position and orientation sensors 602.
The robot position/posture/synchronization deviation estimation unit 502 estimates, from the estimation results of each camera position/posture/stationary object 3D position estimation unit 101 and the measurement data of each position and orientation sensor 602, the position and posture of the robot 600, the synchronization deviation amounts of the asynchronous cameras 201, the synchronization deviation amount δ_s of each position and orientation sensor 602, and the 3D positions of stationary objects existing around the robot. The synchronization deviation amount δ_s of a position and orientation sensor 602 is the difference between the shooting time of the first image captured by the 0th asynchronous camera 201 and the time at which that position and orientation sensor 602 first acquired data.
(Operation of the robot position/posture/synchronization deviation estimation unit)
The robot position/posture/synchronization deviation estimation unit 502 estimates, from the estimation results of each camera position/posture/stationary object 3D position estimation unit 101, the posture R_ci and position t_ci of the robot 600 at the shooting time of each image captured by each asynchronous camera 201, the 3D position p_j of each feature point, the synchronization deviation amount δ_c of each asynchronous camera 201, and the synchronization deviation amount δ_s of each position and orientation sensor 602 by minimizing the objective function E′ as shown in (Equation 13).
(Equation 13)   \{\hat{R}_{ci},\ \hat{t}_{ci},\ \hat{p}_j,\ \hat{\delta}_c,\ \hat{\delta}_s\} = \operatorname*{arg\,min}_{\{R_{ci}\},\ \{t_{ci}\},\ \{p_j\},\ \{\delta_c\},\ \{\delta_s\}} E'

The objective function E′ is calculated by (Equation 14).
(Equation 14)   E' = E_g + \lambda_m E_m + \sum_{s} \lambda_s E_s

Here, E_s is a cost based on the measurement data of the s-th position and orientation sensor, and λ_s is a preset weight for the s-th position and orientation sensor.
The cost E_s based on the measurement data of the s-th position and orientation sensor represents the consistency between the measurement data of the s-th position and orientation sensor and the position and posture of the robot 600. When a sensor capable of acquiring velocity, such as a wheel encoder, is used, the cost E_s is calculated by (Equation 15).
(Equation 15)   E_s = \sum_{c} \sum_{i} \left\| v'_{sci} - v_{ci} \right\|^2

Here, v′_sci is the velocity acquired by the s-th position and orientation sensor 602 at the time closest to the time at which the i-th image is captured by the c-th asynchronous camera 201, as determined by the synchronization deviation amount δ_s of the s-th position and orientation sensor 602.
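A minimal sketch of this sensor cost follows; the sampling rates, the one-dimensional velocities, and the sign convention used to align the sensor clock with camera 0 are assumptions made for the illustration.

```python
# Sketch of the sensor cost (Equation 15): for each image, take the encoder
# velocity sample whose deviation-corrected timestamp is closest to the image's
# shooting time, and penalize its difference from the robot velocity there.
import numpy as np

image_times  = np.array([0.0, 0.025, 0.05, 0.075, 0.1])   # shooting times of all images
robot_vel    = np.full_like(image_times, 1.0)              # v_ci from the robot poses
sensor_times = np.arange(0.0, 0.12, 0.01)                  # raw encoder timestamps (100 Hz)
sensor_vel   = np.full_like(sensor_times, 1.02)            # encoder velocity samples
delta_s      = 0.004                                       # synchronization deviation of the sensor

corrected = sensor_times + delta_s                          # sensor clock aligned to camera 0
nearest = np.abs(corrected[None, :] - image_times[:, None]).argmin(axis=1)
v_prime = sensor_vel[nearest]                               # v'_sci for every image

E_s = float(np.sum((v_prime - robot_vel) ** 2))
print(nearest, E_s)                                         # E_s = 5 * 0.02**2 = 0.002
```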
When the position and orientation sensor 602 acquires position, acceleration, posture, angular velocity, or angular acceleration, the difference between the position and posture of the robot 600 (or the acceleration, angular velocity, or angular acceleration calculated from that position and posture) and the position, acceleration, angular velocity, or angular acceleration acquired by the position and orientation sensor 602 is used as the cost E_s based on the measurement data of the position and orientation sensor, in the same manner as (Equation 15).
(Effect)
According to the second embodiment described above, the following operational effects are obtained. That is, the robot position/posture estimation and three-dimensional measurement apparatus 500 includes the robot position/posture/synchronization deviation estimation unit 502. The robot position/posture/synchronization deviation estimation unit 502 estimates, from the images captured by the plurality of asynchronous cameras 201 and the measurement data of one or more position and orientation sensors 602, the position and posture of the robot 600, the synchronization deviation amounts of the asynchronous cameras 201, the synchronization deviation amounts of the position and orientation sensors 602, and the 3D positions of stationary objects existing around the robot (FIG. 6). Therefore, by using the measurement data of the position and orientation sensors 602 while taking their synchronization deviation amounts into account, the position and posture of the robot 600 and the 3D positions of objects existing around the robot can be estimated with high accuracy.
The present invention is not limited to the embodiments described above and includes various modifications. For example, the above embodiments have been described in detail for easy understanding of the present invention, and the invention is not necessarily limited to configurations including all of the described components. Other aspects conceivable within the scope of the technical idea of the present invention are also included in the scope of the present invention. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. For part of the configuration of each embodiment, other configurations can be added, deleted, or substituted. Each of the above configurations, functions, processing units, processing means, and the like may be realized in hardware, for example by designing part or all of them as an integrated circuit. Each of the above configurations, functions, and the like may also be realized in software by a processor interpreting and executing a program that implements the respective function. Information such as programs, tables, and files for implementing each function can be stored in a memory, a recording device such as a hard disk or SSD (Solid State Drive), or a recording medium such as an IC card, SD card, or DVD.
DESCRIPTION OF SYMBOLS: 100 ... robot position/posture and three-dimensional measurement apparatus, 101 ... camera position/posture/stationary object 3D position estimation unit, 102 ... robot position/posture/synchronization deviation estimation unit, 103 ... moving object 3D position/motion estimation unit, 104 ... output unit, 200 ... robot, 201 ... asynchronous camera, 500 ... robot position/posture and three-dimensional measurement apparatus, 602 ... position and orientation sensor

Claims (7)

1. A robot position/posture/three-dimensional measurement apparatus comprising:
   a plurality of asynchronous cameras mounted on a robot;
   a robot position/posture/synchronization deviation estimation unit that estimates, from a plurality of images captured by the asynchronous cameras, a position and posture of the robot, synchronization deviation amounts of the asynchronous cameras, and 3D positions of stationary objects existing around the robot; and
   an output unit that outputs an estimation result of the robot position/posture/synchronization deviation estimation unit.
2. The robot position/posture/three-dimensional measurement apparatus according to claim 1, comprising a moving object 3D position/motion estimation unit that estimates 3D positions of moving objects existing around the robot from the position and posture of the robot and the synchronization deviation amounts of the asynchronous cameras estimated by the robot position/posture/synchronization deviation estimation unit.
3. The robot position/posture/three-dimensional measurement apparatus according to claim 1 or 2, wherein the robot position/posture/synchronization deviation estimation unit estimates the position and posture of the robot, the synchronization deviation amounts of the asynchronous cameras, and the 3D positions of the stationary objects existing around the robot by minimizing an objective function composed of a cost calculated based on camera geometry and a cost calculated based on a motion model.
4. The robot position/posture/three-dimensional measurement apparatus according to claim 3, wherein the robot position/posture/synchronization deviation estimation unit estimates the position and posture of the robot at the shooting time of each image captured by the asynchronous cameras by minimizing the objective function.
5. The robot position/posture/three-dimensional measurement apparatus according to any one of claims 1 to 4, wherein the robot position/posture/synchronization deviation estimation unit estimates the position and posture of the robot, the synchronization deviation amounts of the asynchronous cameras, and the 3D positions of the stationary objects existing around the robot from images captured by a high-resolution asynchronous camera and by a low-resolution asynchronous camera whose shooting interval is shorter than that of the high-resolution asynchronous camera.
6. The robot position/posture/three-dimensional measurement apparatus according to any one of claims 1 to 5, comprising a camera position/posture/stationary object 3D position estimation unit that, for each of the plurality of asynchronous cameras, estimates the position and orientation of that asynchronous camera at the shooting time of each image captured by it and the 3D positions of stationary objects existing around the robot.
7. The robot position/posture/three-dimensional measurement apparatus according to any one of claims 1 to 6, wherein the robot position/posture/synchronization deviation estimation unit estimates a synchronization deviation amount of a position and orientation sensor mounted on the robot using data of the position and orientation sensor.
PCT/JP2018/012316 2018-03-27 2018-03-27 Robot position/posture estimation and 3d measurement device WO2019186677A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/012316 WO2019186677A1 (en) 2018-03-27 2018-03-27 Robot position/posture estimation and 3d measurement device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/012316 WO2019186677A1 (en) 2018-03-27 2018-03-27 Robot position/posture estimation and 3d measurement device

Publications (1)

Publication Number Publication Date
WO2019186677A1 true WO2019186677A1 (en) 2019-10-03

Family

ID=68062354

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/012316 WO2019186677A1 (en) 2018-03-27 2018-03-27 Robot position/posture estimation and 3d measurement device

Country Status (1)

Country Link
WO (1) WO2019186677A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021082181A (en) * 2019-11-22 2021-05-27 パナソニックIpマネジメント株式会社 Position estimation device, vehicle, position estimation method and position estimation program
WO2021157116A1 (en) * 2020-02-07 2021-08-12 パナソニックIpマネジメント株式会社 Position measurement device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006226965A (en) * 2005-02-21 2006-08-31 Sumitomo Electric Ind Ltd Image processing system, computer program and image processing method
JP2006338203A (en) * 2005-05-31 2006-12-14 Toshiba Corp Image processor and image processing method
JP2007233523A (en) * 2006-02-28 2007-09-13 Hitachi Ltd Person location estimation method using asynchronous camera image and system therefor
JP2014186004A (en) * 2013-03-25 2014-10-02 Toshiba Corp Measurement device, method and program
US20170113342A1 (en) * 2015-10-21 2017-04-27 F Robotics Acquisitions Ltd. Domestic Robotic System

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006226965A (en) * 2005-02-21 2006-08-31 Sumitomo Electric Ind Ltd Image processing system, computer program and image processing method
JP2006338203A (en) * 2005-05-31 2006-12-14 Toshiba Corp Image processor and image processing method
JP2007233523A (en) * 2006-02-28 2007-09-13 Hitachi Ltd Person location estimation method using asynchronous camera image and system therefor
JP2014186004A (en) * 2013-03-25 2014-10-02 Toshiba Corp Measurement device, method and program
US20170113342A1 (en) * 2015-10-21 2017-04-27 F Robotics Acquisitions Ltd. Domestic Robotic System

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021082181A (en) * 2019-11-22 2021-05-27 パナソニックIpマネジメント株式会社 Position estimation device, vehicle, position estimation method and position estimation program
WO2021100650A1 (en) * 2019-11-22 2021-05-27 パナソニックIpマネジメント株式会社 Position estimation device, vehicle, position estimation method and position estimation program
WO2021157116A1 (en) * 2020-02-07 2021-08-12 パナソニックIpマネジメント株式会社 Position measurement device

Similar Documents

Publication Publication Date Title
JP6734940B2 (en) Three-dimensional measuring device
JP6658001B2 (en) Position estimation device, program, position estimation method
CN106871878B (en) Hand-held range unit and method, the storage medium that spatial model is created using it
Hol et al. Modeling and calibration of inertial and vision sensors
JP2008506953A5 (en)
US20200264011A1 (en) Drift calibration method and device for inertial measurement unit, and unmanned aerial vehicle
CN106170676B (en) Method, equipment and the system of movement for determining mobile platform
JP6782903B2 (en) Self-motion estimation system, control method and program of self-motion estimation system
JP6589636B2 (en) 3D shape measuring apparatus, 3D shape measuring method, and 3D shape measuring program
CN105378794A (en) 3d recording device, method for producing 3d image, and method for setting up 3d recording device
US20180075609A1 (en) Method of Estimating Relative Motion Using a Visual-Inertial Sensor
JP2017187861A5 (en)
Brunetto et al. Fusion of inertial and visual measurements for rgb-d slam on mobile devices
CN108413917A (en) Non-contact three-dimensional measurement system, non-contact three-dimensional measurement method and measurement device
JP6905390B2 (en) Own vehicle position estimation environment map generator, own vehicle position estimation device, own vehicle position estimation environment map generation program, and own vehicle position estimation program
WO2019186677A1 (en) Robot position/posture estimation and 3d measurement device
JP2559939B2 (en) Three-dimensional information input device
JP4227037B2 (en) Imaging system and calibration method
JP2007025863A (en) Photographing system, photographing method, and image processing program
KR101059748B1 (en) Feature point placement method and helmet position estimation method in head tracker using feature point pattern
KR101791166B1 (en) Apparatus and Method for Estimation of Spatial information of an object
EP3392748B1 (en) System and method for position tracking in a virtual reality system
JP2003006618A (en) Method and device for generating three-dimensional model and computer program
WO2018134866A1 (en) Camera calibration device
CN111161357B (en) Information processing method and device, augmented reality device and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18911758

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18911758

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP