WO2014102995A1 - Monitoring system, method, and information-recording medium containing program - Google Patents

Monitoring system, method, and information-recording medium containing program

Info

Publication number
WO2014102995A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
robot
extended
image
point
Prior art date
Application number
PCT/JP2012/084018
Other languages
French (fr)
Japanese (ja)
Inventor
洋登 永吉
義崇 平松
Original Assignee
Hitachi, Ltd. (株式会社日立製作所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi, Ltd. (株式会社日立製作所)
Priority to PCT/JP2012/084018
Priority to JP2014553986A
Publication of WO2014102995A1

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0011 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement
    • G05D1/0038 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement by providing the operator with simple or augmented images from one or more cameras located onboard the vehicle, e.g. tele-operation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Definitions

  • In the two-dimensional map example (FIG. 15), map information 1530, 1540, and 1541 corresponds to the room 130 and the installed objects 140 and 141, respectively.
  • Although the height information is not known in such a map, it can be treated as a three-dimensional map by assuming that each installed object has infinite height.
  • The number of robots 100 need not be one; a plurality of robots 100 may be used.
  • A robot 100 that is equipped with a camera but is not the one being operated can have its direction changed, so it can be used as an environmental camera 101 capable of pan/tilt adjustment.
  • Because the environmental camera can itself be moved when a camera carried by another robot is used as the environmental camera, the shooting time referred to in condition 4 can be made longer; the operator's viewpoint therefore changes less often, and operability is improved compared with the first embodiment.

Abstract

The purpose of the present invention is to expand the field of view, by means of environment cameras, when remote monitoring is performed using a robot provided with a mobile camera. To this end, a spatial information processing unit provided with a map estimates the position of the fixation point of the robot camera from the map and from the position and direction of the robot. An environment camera photographs a predetermined position relative to the fixation point, and its images are displayed on a display screen side by side with the image photographed by the robot, so that what is occurring outside the image photographed by the robot can be presented to the operator.

Description

Monitoring system, method, and information-recording medium containing program
 The present invention relates to a remote monitoring system using a robot.
 For example, with a camera-equipped robot, various places can be photographed and viewed from a remote location in response to requests from the robot's operator. A wide viewing angle is then required to improve both operability and viewability.
 In general, a wide-angle camera, a fish-eye camera, or a wide-viewing-angle camera that can capture the full circumference as a cylindrical image is often used. Naturally, however, a wide-viewing-angle camera is not suited to photographing small or distant objects, and it is preferably combined with a camera that has a narrow field of view but can shoot telephoto images.
 Patent Document 1 discloses a technique for extending the field of view of a vehicle, and Patent Document 2 discloses a method for predicting the route of a robot during robot operation.
Patent Document 1: JP 2008-230558 A
Patent Document 2: JP 2011-53726 A
 Patent Document 1 discloses providing displays on both sides of the rear-view mirror and showing on them the video from cameras mounted on the side mirrors, in order to extend the driver's field of view. However, when two types of cameras cannot be installed on the body in the first place, because of the load on the drive unit, the size of the housing, or the design, this technique cannot extend the robot's field of view.
 In order to solve the above problems, for example, the configurations described in the claims are adopted. The present application includes a plurality of means for solving these problems; one example is a monitoring method using a robot, comprising: a step of capturing a first image with a first camera included in a remotely operable robot; a step of calculating, from the first image, a gazing point that is the center of the first image; a step of calculating an extended gazing point around the gazing point; a step of selecting a second camera that photographs the extended gazing point; and a step of displaying, side by side on a display device for the operator of the robot, the first image and a second image captured by the second camera, wherein the extended gazing point is the center of the second image and the second camera is provided in the space in which the robot moves.
 Alternatively, a monitoring system comprises: a first camera included in a remotely operable robot; a calculation unit that calculates, from a first image captured by the first camera, a gazing point that is the center of the first image and an extended gazing point around the gazing point; a camera selection unit that selects a second camera that photographs the extended gazing point; and a display device, used by the operator who operates the robot, that displays the first image and a second image captured by the second camera side by side, wherein the extended gazing point is the center of the second image and the second camera is provided in the space in which the robot moves.
 By using the system of the present invention, the field of view of the robot can be expanded and the operability of the robot is improved.
FIG. 1 is a diagram showing the environment around the robot and the apparatus in Embodiment 1.
FIG. 2 is a block diagram of Embodiment 1.
FIG. 3 is a diagram showing an example of a three-dimensional map.
FIG. 4 is a diagram showing an example of environmental camera information.
FIG. 5 is a diagram showing the flow of the monitoring method in Embodiment 1.
FIG. 6 is a diagram showing the gazing point and the extended gazing point of the robot.
FIG. 7 is a diagram showing the method of calculating the gazing point position.
FIG. 8 is a diagram showing the rotation of the shooting direction vector.
FIG. 9 is a diagram showing an example of route history information.
FIG. 10 is a diagram showing gazing points and extended gazing points.
FIG. 11 is a diagram showing an example of a display screen for the robot operator.
FIG. 12 is a diagram showing an example of a display screen for the robot operator.
FIG. 13 is a diagram showing information passed between the devices.
FIG. 14 is a diagram showing information passed between the devices.
FIG. 15 is a diagram showing an example of a two-dimensional map.
FIG. 16 is a diagram showing the flow of information between the devices.
FIG. 17 is a diagram showing gazing points and extended gazing points.
 FIG. 1 shows an example of an environment in which the present invention is implemented. A robot 100, an environmental camera 101, an installed object 140, and an installed object 141 are placed in a room 130, and a spatial information processing unit 120, a remote operation processing unit 121, a display device 121, and an input device 122 are installed and connected. It is desirable that there be a plurality of environmental cameras 101.
 FIG. 2 shows the block configuration. The robot 100, the environmental camera 101, the spatial information processing unit 120, and the remote operation processing unit 121 are connected by a network 230. The display device 121 and the input device 122 are connected to the remote operation processing unit 121; the display device 121 presents information to the operator, and the input device 122 accepts input from the operator. The remote operation processing unit 121 generates the images to be shown on the display device 121 in accordance with instructions from the spatial information processing unit 120, and passes information from the input device 122 to the spatial information processing unit 120.
 The robot 100 includes a control unit 210, a camera 211, a position/direction estimation unit 212, a measurement unit 213, and a drive unit 214. The measurement unit 213 is a sensor such as a laser range finder and can measure the distance to surrounding objects. Using the measured distance information, the position/direction estimation unit 212 can estimate the position and direction of the robot with, for example, a technique known as SLAM (Simultaneous Localization and Mapping).
 The measurement unit 213 may also be a sensor that measures the driving state of the drive unit 214; in that case, the position/direction estimation unit 212 can estimate the position and direction of the robot from the robot's driving state.
 The drive unit 214 may further include a mechanism for driving the camera 211 of the robot 100. For example, a humanoid robot often carries the camera 211 on its head; in that case, the drive unit 214 can change the direction of the camera 211 by driving the robot's head. The position/direction estimation unit 212 then measures the driving state of the drive unit 214 with the measurement unit 213 and also estimates the direction of the camera 211.
 The position/direction estimation unit 212 preferably also has information on the height of the camera 211. For example, if the drive unit 214 can change the height of the robot 100 and the height of the camera 211 changes accordingly, the measurement unit 213 should additionally include a sensor that measures the height of the camera 211.
 The spatial information processing unit 120 includes a spatial information DB (database) 221. The spatial information DB 221 holds a map 300 as shown in FIG. 3 and environmental camera information 400 as shown in FIG. 4. The map 300 is a three-dimensional map in which surfaces that block light, such as walls and desk surfaces, are recorded. The map information 330 corresponds to the room 130, the map information 340 to the installed object 140, and the map information 341 to the installed object 141, and their surfaces are recorded, for example, as polygons.
 The map 300 may be built from design drawings, created by hand, or obtained from actual measurements taken with a device such as a laser range finder or a depth camera.
 The environmental camera information 400 includes a camera ID 410 assigned to each camera, a camera position 411 expressed as X, Y, Z on the map 300, a camera direction 412 expressed as φ and θ on the map 300, the current camera angle of view 413, a camera direction adjustment range 414 giving the adjustable range of camera directions, and a camera angle-of-view range 415 giving the adjustable range of camera angles of view. For a fixed camera, the widths of the φ and θ adjustment ranges in the camera direction adjustment range 414 are zero.
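 As a concrete illustration, one row of the environmental camera information 400 could be held in a record such as the following sketch; the field names, types, and example values are assumptions made for illustration and are not taken from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class EnvironmentalCameraInfo:
    """One row of the environmental camera information 400 (illustrative field names)."""
    camera_id: int                                  # camera ID 410
    position: Tuple[float, float, float]            # camera position 411: X, Y, Z on the map 300
    direction: Tuple[float, float]                  # camera direction 412: (phi, theta) in degrees
    angle_of_view: float                            # current camera angle of view 413, degrees
    direction_range: Tuple[Tuple[float, float], Tuple[float, float]]  # range 414: ((phi_min, phi_max), (theta_min, theta_max))
    angle_of_view_range: Tuple[float, float]        # camera angle-of-view range 415: (min, max)

    @property
    def is_fixed(self) -> bool:
        """A fixed camera has zero-width phi and theta adjustment ranges."""
        (p_min, p_max), (t_min, t_max) = self.direction_range
        return p_min == p_max and t_min == t_max

# Example entry (values are made up for illustration only)
cam = EnvironmentalCameraInfo(
    camera_id=1,
    position=(2.0, 5.0, 2.4),
    direction=(120.0, -20.0),
    angle_of_view=60.0,
    direction_range=((90.0, 180.0), (-45.0, 0.0)),
    angle_of_view_range=(30.0, 60.0),
)
```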
 A method of acquiring images for remote monitoring using a robot will be described with reference to FIGS. 5 to 10. Unless otherwise noted, this processing is performed by the spatial information processing unit 120. When processing starts (501), the spatial information processing unit 120 acquires the position and direction of the robot 100 and the height and direction of the camera 211 from the position/direction estimation unit 212 of the robot 100 (502). If the height of the camera 211 cannot be obtained, a predetermined value is used.
 The robot's gazing point 601 and extended gazing point 602 are explained with reference to FIG. 6. The gazing point 601 is the center of the image captured by the robot's camera, and the extended gazing point 602 lies in the vicinity of the gazing point 601 and is the center of the image captured by the environmental camera 101. This makes it possible to acquire images of the surroundings of the gazing point 601. The shooting direction of the camera 211 of the robot 100, which photographs the gazing point, is drawn as a vector 610, and the shooting direction of the environmental camera 101, which photographs the extended gazing point, as a vector 611.
 Next, the method of calculating the gazing point position 721, i.e. the position of the gazing point 601, is explained with reference to FIG. 7. Since the spatial information processing unit 120 has acquired the position and direction of the robot 100 and the height and direction of the camera 211, the camera position 700 and the shooting direction vector 701 of the camera 211 are known on the map 300. The shooting direction vector 701 is extended from the camera position 700 until it hits some surface, in FIG. 7 the map information 340 corresponding to the installed object 140. The point of intersection becomes the gazing point position 721. If no surface is hit, the processing assumes that a surface exists at a predetermined distance. In this way, the spatial information processing unit 120 calculates the position of the robot's gazing point 601 (503).
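 A minimal sketch of this ray cast, assuming the map 300 stores its surfaces as triangles and using the standard Moller-Trumbore intersection test; the function names, the triangle representation, and the fallback distance are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def ray_triangle_intersection(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore test; returns the distance t along the ray, or None if no hit."""
    v0, v1, v2 = (np.asarray(p, dtype=float) for p in tri)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def gaze_point_position(camera_pos, shooting_dir, map_triangles, default_dist=5.0):
    """Extend the shooting direction vector 701 from the camera position 700 until it
    hits a map surface; if nothing is hit, assume a surface at a predetermined distance."""
    origin = np.asarray(camera_pos, dtype=float)
    direction = np.asarray(shooting_dir, dtype=float)
    direction = direction / np.linalg.norm(direction)
    hits = [t for tri in map_triangles
            if (t := ray_triangle_intersection(origin, direction, tri)) is not None]
    t = min(hits) if hits else default_dist
    return origin + t * direction        # gazing point position 721
```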
 Subsequently, the extended gazing point position 722, i.e. the position of the extended gazing point 602, is calculated as follows (504). First, a plane 710 perpendicular to the shooting direction vector 701 is placed at the position 721 of the gazing point 601. Next, the extended gazing point vector 702 is calculated as a rotated copy of the shooting direction vector 701.
 The shooting direction vector 701 is rotated as shown in FIG. 8. First, at the start point of the shooting direction vector 701, a perpendicular plane spanned by a basis 800 and a basis 801 is assumed, where the basis 800 lies in the XY plane and the basis 801 is parallel to the Z axis.
 The rotated vector 702 is obtained by applying a first rotation 810 and a second rotation 811. The first rotation 810 is a rotation, measured from the basis 800, within the plane spanned by the bases 800 and 801. Viewed from the camera position 700 along the shooting direction vector 701, it is a counterclockwise rotation, and it is the parameter that determines in which direction the extended gazing point lies relative to the gazing point. If the extended gazing point should lie horizontally beside the gazing point, the magnitude of the first rotation 810 is set to 0° or 180°.
 The second rotation 811 is a rotation within a plane that is parallel to the shooting direction vector 701 and parallel to the vector obtained by rotating the basis 800 by the first rotation 810. The second rotation 811 is the parameter that determines how far the extended gazing point is placed from the gazing point. Its magnitude should be chosen according to the angle of view of the camera 211 and the angle of view of the environmental camera that photographs the extended gazing point 602. For example, the second rotation 811 may be set to a value at which no blind spot, that is, no unphotographed range, arises between the image captured by the camera 211 of the robot 100 and the image captured by the environmental camera 101. In addition, the more the coverage of the environmental camera 101 differs from that of the camera 211 of the robot 100, the wider the range that can be provided to the robot operator; it is therefore desirable to use the smallest of the values that satisfy the blind-spot condition.
 The extended gazing point vector 702 obtained in this way is extended until it hits the plane 710. The point of intersection becomes the extended gazing point position 722. If it does not hit the plane, the processing assumes that a surface exists at a predetermined distance.
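 The construction of the extended gazing point vector 702 and its intersection with the plane 710 could be sketched as follows; the two rotations are realized with Rodrigues' formula about explicitly chosen axes, which is one possible reading of the description, and all names and the fallback distance are assumptions.

```python
import numpy as np

def rotate_about_axis(v, axis, angle_deg):
    """Rodrigues' rotation of vector v about a unit axis by angle_deg."""
    a = np.deg2rad(angle_deg)
    k = axis / np.linalg.norm(axis)
    return (v * np.cos(a) + np.cross(k, v) * np.sin(a)
            + k * np.dot(k, v) * (1.0 - np.cos(a)))

def extended_gaze_point(camera_pos, shooting_dir, gaze_pos,
                        first_rotation_deg, second_rotation_deg, default_dist=5.0):
    """Compute the extended gazing point position 722.

    first_rotation_deg  (first rotation 810): in which direction around the gazing
        point the extended gazing point lies (0 or 180 gives a horizontal offset).
    second_rotation_deg (second rotation 811): how far the extended gazing point is
        offset from the gazing point.
    """
    d = np.asarray(shooting_dir, float)
    d = d / np.linalg.norm(d)
    # basis 800: horizontal (XY-plane) vector perpendicular to the shooting direction
    up = np.array([0.0, 0.0, 1.0])
    b800 = np.cross(up, d)
    if np.linalg.norm(b800) < 1e-9:          # shooting direction is vertical
        b800 = np.array([1.0, 0.0, 0.0])
    b800 /= np.linalg.norm(b800)
    # first rotation 810: choose the offset direction in the plane perpendicular to d
    offset_axis = rotate_about_axis(b800, d, first_rotation_deg)
    # second rotation 811: tilt the shooting direction toward that offset direction
    v702 = rotate_about_axis(d, np.cross(d, offset_axis), second_rotation_deg)
    # intersect the extended gazing point vector 702 with plane 710
    # (plane through the gazing point position 721, normal = shooting direction)
    p0 = np.asarray(camera_pos, float)
    n, q = d, np.asarray(gaze_pos, float)
    denom = np.dot(n, v702)
    t = np.dot(n, q - p0) / denom if abs(denom) > 1e-9 else default_dist
    if t <= 0:
        t = default_dist                     # fall back to a predetermined distance
    return p0 + t * v702                     # extended gazing point position 722
```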
 Next, the magnitude of the deviation between the predicted route and the predicted gazing point is judged (505). The predicted route is a prediction of the position of the robot 100 after a predetermined time has elapsed; here the value predicted in the past is compared with the current robot position, and if the prediction was wrong, the predicted route is calculated again (506). The predicted gazing point is a prediction of the position of the gazing point 601; here the past predicted value is compared with the current actual gazing point, and if the prediction was wrong, the predicted gazing point is calculated again (507). No prediction exists in the initial state; in that case, judgment 505 treats the condition as true and proceeds to the next step.
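 A minimal sketch of judgment 505, assuming simple distance thresholds stand in for "the prediction was wrong"; the tolerance values and names are illustrative assumptions.

```python
import numpy as np

def prediction_deviates(predicted_pos, current_pos, predicted_gaze, current_gaze,
                        pos_tol=0.5, gaze_tol=0.5):
    """Judgment 505: True if the predicted robot position or the predicted gazing
    point has drifted from the current values, or if no prediction exists yet
    (the initial state), which also forces recalculation."""
    if predicted_pos is None or predicted_gaze is None:
        return True
    pos_err = np.linalg.norm(np.asarray(predicted_pos, float) - np.asarray(current_pos, float))
    gaze_err = np.linalg.norm(np.asarray(predicted_gaze, float) - np.asarray(current_gaze, float))
    return pos_err > pos_tol or gaze_err > gaze_tol
```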
 Next, the calculation of the predicted route (506) is described. Two cases are considered. The first is the case in which a destination has already been given and the route to the destination has already been calculated; this is called planned movement. The second is the case in which the robot operator controls the robot in real time using the input device 122; this is called real-time operated movement. In the case of planned movement the route has already been calculated, so that route can be used. The method disclosed in Patent Document 2 may be used to calculate the route when a destination is given. The route may be calculated by the control unit 210 of the robot 100 or by the spatial information processing unit 120; in either case, because the route itself must be held by the control unit 210, the spatial information processing unit 120 simply acquires the route from the control unit 210. If the height of the camera 211 is obtained from the position/direction estimation unit 212, the predicted route on the map 300 can be expressed as the predicted route 1000 shown in FIG. 10.
 In the case of real-time operated movement, the spatial information DB holds a route history 900, a history of route information as shown in FIG. 9. In FIG. 9, the route history 911 is the route information history for the first case and the route history 912 is the route information history for the second case, stored as a group of two-dimensional coordinates. The two-dimensional coordinate information need not be the coordinates themselves; it may represent discretized block-like positions, for example 50 cm x 50 cm or 1 m x 1 m cells. As a technique for predicting the future destination from the route history 912, a pedestrian flow-line analysis method based on a mixed Markov model is known, among others. The route of the robot 100 is then recalculated on the basis of this destination prediction and the technique of Patent Document 2, and the route obtained in this way becomes the predicted route 1000.
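 As a rough illustration of destination prediction from a discretized route history, the sketch below counts first-order transitions between grid cells and greedily extrapolates the most frequent next cell; this is a deliberate simplification of the mixed-Markov flow-line analysis mentioned above, and the cell size and all names are assumptions.

```python
from collections import Counter, defaultdict

CELL = 0.5  # 50 cm x 50 cm blocks, as one of the discretizations mentioned above

def to_cell(x, y):
    """Discretize a 2-D position into a block-like cell index."""
    return (int(x // CELL), int(y // CELL))

def build_transition_counts(route_history):
    """route_history: list of (x, y) positions in time order (e.g. route history 912)."""
    counts = defaultdict(Counter)
    cells = [to_cell(x, y) for x, y in route_history]
    for a, b in zip(cells, cells[1:]):
        if a != b:
            counts[a][b] += 1
    return counts

def predict_next_cells(counts, current_xy, steps=5):
    """Greedily follow the most frequent observed transition for a few steps."""
    cell = to_cell(*current_xy)
    predicted = []
    for _ in range(steps):
        if not counts[cell]:
            break
        cell = counts[cell].most_common(1)[0][0]
        predicted.append(cell)
    return predicted  # cells along a crude predicted route
```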
 Next, the predicted gazing points are obtained. For planned movement, they are obtained on the assumption that the relative direction of the camera 211 with respect to the traveling direction of the robot 100 is maintained. That is, as shown in FIG. 10, prediction execution points are set on the predicted route 1000 at predetermined intervals, and a predicted camera direction vector 1010 is obtained at each point by rotating the tangent vector of the predicted route 1000 by the relative direction of the camera 211. Each predicted camera direction vector 1010 is extended, and each position where it hits a surface becomes a predicted gazing point position 1020. If no surface is hit, a surface is assumed at a predetermined distance.
 Although not illustrated, in the case of real-time operated movement the operator needs to look at what is in front of the robot 100. Therefore, prediction execution points are again taken on the predicted route 1000 at predetermined intervals, and the tangent vector of the predicted route 1000 at each point is used directly as the predicted camera direction vector 1010. As in the planned-movement case, these vectors are extended, and each position where they hit a surface becomes a predicted gazing point position 1020; if no surface is hit, a surface is assumed at a predetermined distance.
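 Both cases can be covered by one sketch that samples prediction execution points on the route, rotates the tangent vector by the camera's relative yaw (zero for real-time operated movement), and reuses the gaze_point_position helper from the earlier ray-cast sketch; the sampling interval, the level-camera assumption, and all names are illustrative.

```python
import numpy as np

def predicted_gaze_points(route_xy, camera_height, relative_yaw_deg,
                          map_triangles, step=5):
    """Predicted gazing point positions 1020 along a predicted route 1000.

    route_xy: list of (x, y) points of the predicted route, in order.
    relative_yaw_deg: camera direction relative to the travel direction
                      (set to 0 for real-time operated movement).
    """
    pts = np.asarray(route_xy, dtype=float)
    yaw = np.deg2rad(relative_yaw_deg)
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    gaze_positions = []
    for i in range(0, len(pts) - 1, step):          # prediction execution points
        tangent = pts[i + 1] - pts[i]               # tangent of the predicted route
        if np.linalg.norm(tangent) < 1e-9:
            continue
        heading = rot @ (tangent / np.linalg.norm(tangent))
        cam_pos = np.array([pts[i][0], pts[i][1], camera_height])
        cam_dir = np.array([heading[0], heading[1], 0.0])   # level camera assumed
        gaze_positions.append(gaze_point_position(cam_pos, cam_dir, map_triangles))
    return gaze_positions
```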
 The next processing step is the calculation of the predicted extended gazing points (508). As shown in FIG. 17, the predicted extended gazing point positions 1600 are calculated from the predicted gazing point positions 1020 described above, in the same manner as the calculation of the extended gazing point position 722 in FIG. 7.
 The next step is environmental camera selection (509). Four criteria are used:
 Condition 1: the environmental camera 101 in question can photograph the current extended gazing point position 722 by adjusting its pan/tilt.
 Condition 2: the environmental camera 101 can photograph the predicted extended gazing point 1600 by adjusting its pan/tilt.
 Condition 3: the angle formed by the predicted camera direction vector 1010 corresponding to the predicted extended gazing point position 1600 and the vector connecting the predicted extended gazing point position 1600 to the environmental camera 101 is at most a predetermined value.
 Condition 4: the camera satisfies condition 3 for the longest time.
 These four conditions are used.
 Condition 1 must be satisfied for the camera to photograph the point at all, so there is no point in selecting a camera that does not satisfy it. Conditions 2 to 4 serve to prevent the camera from switching frequently, which could confuse the user.
 The time in condition 3 can be obtained as follows. The predicted extended gazing point positions 1600 are arranged in chronological order. For a given environmental camera 101, it is judged for each predicted extended gazing point position 1600, in order, whether the camera can photograph that position by pan/tilt adjustment and whether the angle formed by the corresponding predicted camera direction vector 1010 and the vector connecting the predicted extended gazing point position 1600 to the environmental camera 101 is at most the predetermined value. The environmental camera 101 for which the number of consecutive predicted extended gazing point positions 1600 satisfying these conditions, counted from the present time, is largest can be taken as the camera satisfying condition 4. Since the question is only which environmental camera 101 to select, the absolute time does not necessarily have to be obtained.
 In judging whether these conditions are satisfied, the spatial information processing unit 120 uses the stored environmental camera information 400. If there are multiple extended gazing point positions 722, conditions 1 to 4 are judged for each in turn, and one of the environmental cameras 101 is assigned to each. A camera already assigned to one extended gazing point position 722 is excluded in advance when searching for a camera to assign to another extended gazing point position 722.
 The predetermined value for the angle difference in condition 3 is set to a relatively wide value, for example 60°, when the object to be photographed is planar, and to a narrower value, for example 30°, when the object has a complicated three-dimensional shape. If no more cameras remain to be assigned, camera selection ends at that point.
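 A sketch of the selection logic for conditions 1 to 4, reusing the illustrative EnvironmentalCameraInfo record shown earlier; the pan/tilt angle convention and the choice of taking the connecting vector of condition 3 from the camera toward the predicted extended gazing point are assumptions, not details given in the patent.

```python
import numpy as np

def can_photograph(cam, target):
    """Pan/tilt reachability test used in conditions 1 and 2: the pan/tilt needed to
    aim at the target must fall inside the camera direction adjustment range 414
    (angle wrap-around is ignored for brevity)."""
    d = np.asarray(target, float) - np.asarray(cam.position, float)
    pan = np.degrees(np.arctan2(d[1], d[0]))
    tilt = np.degrees(np.arctan2(d[2], np.linalg.norm(d[:2])))
    (p_min, p_max), (t_min, t_max) = cam.direction_range
    return p_min <= pan <= p_max and t_min <= tilt <= t_max

def angle_between_deg(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def select_camera(cameras, extended_gaze_722, predicted_ext_1600,
                  predicted_dirs_1010, angle_limit_deg=60.0):
    """Choose the environmental camera that satisfies condition 1 now and keeps
    satisfying conditions 2 and 3 for the largest number of consecutive future
    predicted extended gazing points (condition 4)."""
    best_cam, best_run = None, -1
    for cam in cameras:
        if not can_photograph(cam, extended_gaze_722):           # condition 1
            continue
        run = 0
        for p1600, dir1010 in zip(predicted_ext_1600, predicted_dirs_1010):
            to_point = np.asarray(p1600, float) - np.asarray(cam.position, float)
            if (can_photograph(cam, p1600)                        # condition 2
                    and angle_between_deg(dir1010, to_point) <= angle_limit_deg):  # condition 3
                run += 1
            else:
                break                                             # consecutive run ends here
        if run > best_run:                                        # condition 4
            best_cam, best_run = cam, run
    return best_cam
```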
 Next, environmental camera control is performed (510): the environmental camera 101 selected in the environmental camera selection step (509) is controlled so that it faces the extended gazing point position 722.
 The following describes how the image captured by the camera 211 as above and the image captured by the camera aimed at the extended gazing point position 722 are displayed. The images are composed by the remote operation processing unit 121 and shown on the display device 122. For example, as shown in FIG. 11, the image 1100 of the gazing point 601 and the images 1101 and 1102 of the extended gazing points 602 are displayed side by side on the display device 121. From the operator's point of view, the images around the gazing point 601 can be seen together with it, giving a pseudo enlargement of the field of view.
 FIG. 12 shows another display method. In FIG. 11, the images 1100, 1101, and 1102 were displayed simply pasted into a two-dimensional space. However, since the robot 100 and the environmental cameras 101 are at different positions, the images are taken from different viewpoints, so even when characters written on a wall are captured as in FIG. 6, each image is distorted to a different degree. In FIG. 12, on the other hand, the images 1200, 1201, and 1202 are displayed as three-dimensional computer graphics (CG): each image is assumed to lie on a plane perpendicular to the shooting direction of its camera, and the result of perspective-transforming that plane as seen from the robot 100 is displayed. This has the advantage that the operator can intuitively infer in which direction the environmental camera 101 that captured image 1201 or 1202 is located, and, when characters written on a flat surface are captured as in FIG. 6, the characters are distorted to the same degree in every image.
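 The perspective display of FIG. 12 can be illustrated by placing each environmental camera image on a plane perpendicular to that camera's shooting direction and projecting the plane's corners into the robot camera's view with a simple pinhole model; the focal length, plane size, and coordinate conventions below are assumptions, and the sketch shows only the geometry, not the actual image warping.

```python
import numpy as np

def image_plane_corners(cam_pos, cam_dir, distance, width, height):
    """Corners (in world coordinates) of a rectangle centred on the camera's viewing
    ray and lying on a plane perpendicular to the shooting direction
    (a non-vertical shooting direction is assumed)."""
    d = np.asarray(cam_dir, float); d /= np.linalg.norm(d)
    up = np.array([0.0, 0.0, 1.0])
    right = np.cross(d, up); right /= np.linalg.norm(right)
    up2 = np.cross(right, d)
    c = np.asarray(cam_pos, float) + distance * d
    return [c + sx * (width / 2) * right + sy * (height / 2) * up2
            for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]

def project_to_robot_view(points, robot_pos, robot_dir, focal=800.0):
    """Pinhole projection of world points into the robot camera's image plane."""
    d = np.asarray(robot_dir, float); d /= np.linalg.norm(d)
    up = np.array([0.0, 0.0, 1.0])
    right = np.cross(d, up); right /= np.linalg.norm(right)
    up2 = np.cross(right, d)
    uv = []
    for p in points:
        rel = np.asarray(p, float) - np.asarray(robot_pos, float)
        x, y, z = np.dot(rel, right), np.dot(rel, up2), np.dot(rel, d)
        uv.append((focal * x / z, focal * y / z))   # z > 0 assumed (point in front of robot)
    return uv  # the environment-camera image is then warped onto this quadrilateral
```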
FIGS. 13 and 14 show the information passed between the blocks when the images for remote monitoring are generated. FIG. 13 shows the information passed from the robot 100 to the spatial information processing unit 120: the position and direction of the robot 100 and the direction and height of the camera 211 are passed from the position/direction estimation unit 212 of the robot 100. As shown in FIG. 14, the spatial information processing unit 120 passes to the remote operation processing unit 121 the images captured by the environmental cameras 101 together with the positions and directions of those cameras. Based on this information, the remote operation processing unit 121 generates a screen such as those shown in FIGS. 11 and 12 and passes the display screen to the display device 122.
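The information exchanged between the blocks can be pictured as two small message structures, one for FIG. 13 (robot 100 to spatial information processing unit 120) and one for FIG. 14 (spatial information processing unit 120 to remote operation processing unit 121). The field names and types below are illustrative assumptions; only the listed contents come from the text.

```python
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class RobotState:
    """FIG. 13: robot 100 -> spatial information processing unit 120."""
    robot_position: Tuple[float, float]     # position of the robot 100
    robot_direction: float                  # heading of the robot 100
    camera_direction: float                 # direction of the camera 211
    camera_height: float                    # height of the camera 211


@dataclass
class EnvironmentalCameraFrame:
    """FIG. 14: one environmental camera's contribution to the display screen."""
    image: np.ndarray                       # image captured by an environmental camera 101
    camera_position: Tuple[float, float, float]
    camera_direction: Tuple[float, float, float]


@dataclass
class MonitoringUpdate:
    """Everything the remote operation processing unit 121 needs to build FIG. 11/12."""
    robot_view: np.ndarray                  # image from the robot camera 211
    environmental_frames: List[EnvironmentalCameraFrame]
```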
FIG. 16 shows the flow of processing and data in the method described above, mainly the programs that run in the spatial information processing unit 120 and the data exchanged between them. In FIG. 16, the gaze point calculation unit 1603 corresponds to the gaze point calculation step 503, the extended gaze point calculation unit 1604 to the extended gaze point calculation step 504, the prediction deviation determination unit 1605 to the prediction deviation determination step 505, the predicted path calculation unit 1606 to the predicted path calculation step 506, the predicted gaze point calculation unit 1607 to the predicted gaze point calculation step 507, the predicted extended gaze point calculation unit 1608 to the predicted extended gaze point calculation step 508, the environmental camera selection unit 1609 to the environmental camera selection step 509, and the environmental camera control unit 1610 to the environmental camera control step 510; each unit is a program implementing the corresponding step.
Based on the above, the monitoring system for robot control according to the present invention comprises: a first camera mounted on a remotely operable robot; a calculation unit that calculates, from a first image captured by the first camera, a gaze point that is the center of the first image and extended gaze points around the gaze point; a camera selection unit that selects a second camera for capturing an extended gaze point; and a display device, used by an operator who operates the robot, that displays the first image and a second image captured by the second camera side by side, wherein the extended gaze point is the center of the second image and the second camera is provided in the space in which the robot moves.
According to the present invention, the robot operator can be provided with an image whose field of view exceeds the angle of view of the robot's own camera, which improves the operability of the robot.
In the first embodiment, the case of holding a three-dimensional map was described, but the same processing is possible with a two-dimensional map 1500 such as that shown in FIG. 15. In FIG. 15, map information 1530, 1540, and 1541 is shown for the room 130 and the installed objects 140 and 141, respectively. Although height information is not available in this case, the map can be treated as a three-dimensional map by assuming that each installed object has an infinite height.
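A minimal sketch of this idea, assuming the two-dimensional map is a simple occupancy grid, is to ignore the vertical coordinate entirely when testing whether an installed object blocks the line of sight between a camera and a gaze point; this is equivalent to extruding every occupied cell to infinite height. The grid representation, cell size, and function name are assumptions made for illustration only.

```python
import numpy as np


def line_of_sight_2d(occupancy, start_xy, end_xy, cell_size=0.1, step=0.05):
    """Return True if no occupied cell lies between start_xy and end_xy.

    occupancy : 2D boolean array, True where an installed object occupies the cell.
    Because every occupied cell is treated as infinitely tall, the z coordinates
    of the camera and the gaze point can be ignored.
    """
    start = np.asarray(start_xy, dtype=float)
    end = np.asarray(end_xy, dtype=float)
    n_steps = max(int(np.linalg.norm(end - start) / step), 1)
    for i in range(n_steps + 1):
        p = start + (end - start) * i / n_steps
        col, row = int(p[0] / cell_size), int(p[1] / cell_size)
        if not (0 <= row < occupancy.shape[0] and 0 <= col < occupancy.shape[1]):
            return False                      # outside the mapped area
        if occupancy[row, col]:
            return False                      # blocked by an installed object
    return True
```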
In this embodiment, in addition to the effects of the first embodiment, a three-dimensional map does not need to be held, so the load on the information processing unit can be reduced.
In the first embodiment, there was a single robot 100, but a plurality of robots 100 may be used. In this case, by controlling the direction of a camera-equipped robot 100 that is not the operation target, that robot can be treated and used as an environmental camera 101 whose pan and tilt can be adjusted.
In this embodiment, because a camera mounted on another robot is used as an environmental camera, the environmental camera can move, and the shooting time defined in condition 4 can therefore be made longer. This reduces the operator's viewpoint changes and improves operability compared with the first embodiment.
DESCRIPTION OF SYMBOLS
100 Robot
101 Environmental camera
120 Spatial information processing unit
121 Remote operation processing unit
122 Input device
130 Room
140 Installed object
141 Installed object
230 Network
201 Robot control unit
211 Robot camera
212 Position/direction estimation unit
213 Robot measurement unit
214 Robot drive unit
221 Spatial information DB
300 Map
400 Environmental camera information
330 Map information
340 Map information
601 Gaze point
602 Extended gaze point
700 Camera position
701 Shooting direction vector
721 Gaze point position
722 Extended gaze point position
800 Basis
801 Basis
810 First rotation
820 Second rotation
900 Path history
1000 Predicted path
1010 Predicted camera direction vector
1020 Predicted gaze point position
1100 Image of the gaze point
1101 Image of an extended gaze point
1102 Image of an extended gaze point
1200 Three-dimensionally displayed image of the gaze point
1201 Three-dimensionally displayed image of an extended gaze point
1202 Three-dimensionally displayed image of an extended gaze point

Claims (12)

  1.  A monitoring method comprising:
     capturing a first image with a first camera mounted on a remotely operable robot;
     calculating, from the first image, a gaze point that is the center of the first image;
     calculating an extended gaze point around the gaze point;
     selecting a second camera that captures the extended gaze point; and
     displaying the first image and a second image captured by the second camera side by side on a display device for an operator of the robot,
     wherein the extended gaze point is the center of the second image, and
     the second camera is provided in a space in which the robot moves.
  2.  The monitoring method according to claim 1, further comprising:
     predicting a movement path of the robot;
     predicting a movement path of the gaze point based on the predicted movement path of the robot; and
     predicting a movement path of the extended gaze point based on the predicted movement path of the gaze point.
  3.  The monitoring method according to claim 2,
     wherein the step of selecting the second camera selects a camera capable of capturing, for the longest time, the extended gaze point moving along the calculated movement path.
  4.  The monitoring method according to claim 3,
     wherein the second camera is a camera mounted on a robot different from the robot on which the first camera is mounted.
  5.  A monitoring system comprising:
     a first camera mounted on a remotely operable robot;
     a calculation unit that calculates, from a first image captured by the first camera, a gaze point that is the center of the first image and an extended gaze point around the gaze point;
     a camera selection unit that selects a second camera for capturing the extended gaze point; and
     a display device, used by an operator who operates the robot, that displays the first image and a second image captured by the second camera side by side,
     wherein the extended gaze point is the center of the second image, and
     the second camera is provided in a space in which the robot moves.
  6.  The monitoring system according to claim 5, further comprising:
     a first path prediction unit that predicts a movement path of the robot;
     a second path prediction unit that predicts a movement path of the gaze point based on the predicted movement path of the robot; and
     a third path prediction unit that predicts a movement path of the extended gaze point based on the predicted movement path of the gaze point.
  7.  The monitoring system according to claim 6,
     wherein the camera selection unit selects a camera capable of capturing, for the longest time, the extended gaze point moving along the calculated movement path.
  8.  The monitoring system according to claim 7,
     wherein the second camera is a camera mounted on a robot different from the robot on which the first camera is mounted.
  9.  A computer-readable information recording medium on which a program is recorded, the program causing a computer to execute:
     capturing a first image with a first camera mounted on a remotely operable robot;
     calculating, from the first image, a gaze point that is the center of the first image;
     calculating an extended gaze point around the gaze point;
     selecting a second camera, different from the first camera, that captures the extended gaze point; and
     displaying the first image and a second image captured by the second camera side by side on a display device for an operator of the robot,
     wherein the extended gaze point is the center of the second image.
  10.  The information recording medium according to claim 9, on which a program is recorded for further causing the computer to execute:
     predicting a movement path of the robot;
     predicting a movement path of the gaze point based on the predicted movement path of the robot; and
     predicting a movement path of the extended gaze point based on the predicted movement path of the gaze point.
  11.  The information recording medium according to claim 10,
     wherein the step of selecting the second camera selects a camera capable of capturing, for the longest time, the extended gaze point moving along the calculated movement path.
  12.  The information recording medium according to claim 11,
     wherein the second camera is a camera mounted on a robot different from the robot on which the first camera is mounted.
PCT/JP2012/084018 2012-12-28 2012-12-28 Monitoring system, method, and information-recording medium containing program WO2014102995A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2012/084018 WO2014102995A1 (en) 2012-12-28 2012-12-28 Monitoring system, method, and information-recording medium containing program
JP2014553986A JPWO2014102995A1 (en) 2012-12-28 2012-12-28 Information recording medium storing monitoring system, method and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/084018 WO2014102995A1 (en) 2012-12-28 2012-12-28 Monitoring system, method, and information-recording medium containing program

Publications (1)

Publication Number Publication Date
WO2014102995A1 true WO2014102995A1 (en) 2014-07-03

Family

ID=51020146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/084018 WO2014102995A1 (en) 2012-12-28 2012-12-28 Monitoring system, method, and information-recording medium containing program

Country Status (2)

Country Link
JP (1) JPWO2014102995A1 (en)
WO (1) WO2014102995A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002132343A (en) * 2000-10-26 2002-05-10 Kawasaki Heavy Ind Ltd Travel control method for autonomous travel body
JP2006133703A (en) * 2004-11-09 2006-05-25 Sharp Corp Camera and its still image generation method
JP2007098567A (en) * 2006-09-25 2007-04-19 Hitachi Ltd Autonomous control type robot and its control device
JP2009118072A (en) * 2007-11-05 2009-05-28 Ihi Corp Remote control device and remote control method
JP2010183281A (en) * 2009-02-04 2010-08-19 Sumitomo Electric Ind Ltd In-vehicle photographing device, and safety measure method for vehicle
JP2010131751A (en) * 2010-03-15 2010-06-17 Toyota Motor Corp Mobile robot
JP2012171024A (en) * 2011-02-17 2012-09-10 Japan Science & Technology Agency Robot system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016045825A (en) * 2014-08-26 2016-04-04 三菱重工業株式会社 Image display system
JP2017111790A (en) * 2015-12-10 2017-06-22 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Movement control method, autonomous mobile robot, and program
CN107045355A (en) * 2015-12-10 2017-08-15 松下电器(美国)知识产权公司 Control method for movement, autonomous mobile robot
WO2021006459A1 (en) * 2019-07-05 2021-01-14 삼성전자 주식회사 Electronic device, method for reconstructing stereoscopic image by using same, and computer-readable recording medium
WO2022180692A1 (en) * 2021-02-24 2022-09-01 日本電気株式会社 Information processing device, location estimation method, and non-transitory computer-readable medium

Also Published As

Publication number Publication date
JPWO2014102995A1 (en) 2017-01-12

Similar Documents

Publication Publication Date Title
JP5057936B2 (en) Bird's-eye image generation apparatus and method
WO2010109730A1 (en) Camera calibrator
US20170094227A1 (en) Three-dimensional spatial-awareness vision system
JP6450481B2 (en) Imaging apparatus and imaging method
JP6507268B2 (en) Photography support apparatus and photography support method
EP3462733B1 (en) Overhead view video image generation device, overhead view video image generation system, overhead view video image generation method, and program
JP2008254150A (en) Teaching method and teaching device of robot
JP2005106825A (en) Method and apparatus for determining position and orientation of image receiving device
US9154769B2 (en) Parallel online-offline reconstruction for three-dimensional space measurement
WO2015145543A1 (en) Object detection apparatus, object detection method, and mobile robot
JP5007863B2 (en) 3D object position measuring device
WO2014102995A1 (en) Monitoring system, method, and information-recording medium containing program
CN113033280A (en) System and method for trailer attitude estimation
JP2014063411A (en) Remote control system, control method, and program
JP2014165810A (en) Parameter acquisition device, parameter acquisition method and program
JP2011174799A (en) Photographing route calculation device
JP4227037B2 (en) Imaging system and calibration method
TWI726536B (en) Image capturing method and image capturing apparatus
JP7334460B2 (en) Work support device and work support method
JP6368503B2 (en) Obstacle monitoring system and program
JP7138856B2 (en) Bird's eye view presentation system
JP2010231395A (en) Camera calibration device
JP7332403B2 (en) Position estimation device, mobile control system, position estimation method and program
WO2020234912A1 (en) Mobile device, position display method, and position display program
JP5045567B2 (en) Imaging direction determination program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12890789

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014553986

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12890789

Country of ref document: EP

Kind code of ref document: A1