JP2024022545A - Device and method for controlling a robot - Google Patents
Device and method for controlling a robot
- Publication number
- JP2024022545A (application JP2023126350A)
- Authority
- JP
- Japan
- Prior art keywords
- training
- pose
- environment
- observation
- errors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G05D1/0221 — Control of position or course in two dimensions, specially adapted to land vehicles, with means for defining a desired trajectory involving a learning process
- G05D1/0246 — Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means
- G05D1/027 — Control of position or course in two dimensions, specially adapted to land vehicles, using internal positioning means comprising inertial navigation means, e.g. azimuth detector
- G05D1/2469; G05D1/249; G05D1/648
- B25J9/16 — Programme controls (programme-controlled manipulators)
- B25J9/163 — Programme controls characterised by the control loop: learning, adaptive, model based, rule based expert control
- B25J9/1653 — Programme controls characterised by the control loop: parameters identification, estimation, stiffness, accuracy, error analysis
- B25J9/1679 — Programme controls characterised by the tasks executed
- B25J9/1694 — Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control, multi-sensor controlled systems, sensor fusion
- B25J13/08 — Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- G06F18/25 — Pattern recognition: fusion techniques
- G06V10/774 — Image or video recognition using machine learning: generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Image or video recognition using neural networks
- G06V20/10 — Scenes: terrestrial scenes
- G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G05D2101/15; G05D2105/05; G05D2107/90; G05D2109/10
Abstract
Description
The present disclosure relates to a device and a method for controlling a robot.
According to various embodiments, a method for training a control policy for a robot device is provided, the method comprising: obtaining a reference state of an environment of the robot device and a reference observation of the environment for the reference state; for each of a plurality of errors of a pose estimate of the robot device, generating an observation that is disturbed with respect to the reference observation in accordance with the error of the pose estimate, and generating a training data element that contains the generated observation as a training input; and training the control policy using the generated training data elements.
[i - Open-loop selection] Here, the policy outputs an intermediate waypoint to be reached by the bulldozer 100. In this case, the pose-estimation error presents itself as a non-optimal projection from the state to the observation.
[ii - Closed loop] Here, the error in the pose estimate is fed back to the bulldozer's lower-level controller for executing the trajectory. In this case, the error is propagated through the system, causing deviations from the desired path.
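The training-data generation described above (obtain a reference observation, disturb it according to each of a set of pose-estimation errors, and collect the disturbed observations as training inputs) can be sketched roughly as follows. This is a minimal illustrative sketch, not the patent's implementation: the grid-map observation, the noise magnitudes, and all function names (`sample_pose_error`, `perturb_observation`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pose_error(position_sigma=0.1, yaw_sigma=0.05):
    """Draw one pose-estimation error (dx, dy, dyaw); Gaussian is an assumption."""
    return rng.normal(0.0, [position_sigma, position_sigma, yaw_sigma])

def perturb_observation(reference_observation, pose_error, cell_size=0.1):
    """Disturb the reference observation as if the robot believed it were at
    the reference pose plus pose_error.  For a grid-map observation, a
    translational pose error appears as a shift of the map relative to the
    robot; rotation is omitted here for brevity."""
    dx, dy, _ = pose_error
    shift = (int(round(dx / cell_size)), int(round(dy / cell_size)))
    return np.roll(reference_observation, shift, axis=(0, 1))

# Reference state: a toy 2D occupancy-grid observation with one obstacle.
reference_observation = np.zeros((32, 32))
reference_observation[10:14, 10:14] = 1.0

# One training data element per sampled pose-estimation error.
training_set = []
for _ in range(100):
    err = sample_pose_error()
    obs = perturb_observation(reference_observation, err)
    training_set.append({"observation": obs, "pose_error": err})

print(len(training_set))  # 100 training data elements
```

A policy trained on such disturbed observations sees, during training, the same observation mismatches it will encounter at deployment when its pose estimate is imperfect.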
Claims (9)
1. A method for training a control policy for a robot device, the method comprising:
obtaining a reference state of an environment of the robot device and a reference observation of the environment for the reference state;
for each of a plurality of errors of a pose estimate of the robot device, generating an observation that is disturbed with respect to the reference observation in accordance with the error of the pose estimate, and generating a training data element that contains the generated observation as a training input; and
training the control policy using the generated training data elements.
2. The method according to claim 1, wherein each of at least some of the errors is an error between a reference pose and a pose-estimate result formed, in response to sensor measurement data, by a pose-estimation function provided in the robot device, the sensor measurement data being the sensor measurement data that the robot device would acquire at the reference pose if disturbed by respective noise.
3. The method according to claim 2, wherein the sensor measurement data include measurement data of an inertial measurement unit of the robot device and image data from a camera observing the environment, and the pose-estimation function performs sensor fusion to determine the pose-estimate result.
4. The method according to any one of claims 1 to 3, comprising performing a pose estimation that outputs a pose-estimate uncertainty, and generating at least some of the errors by sampling from an error distribution around the pose-estimate result in accordance with the pose-estimate uncertainty.
5. The method according to any one of claims 1 to 4, wherein the robot device is a construction vehicle and the environment is a construction site.
6. A method for controlling a robot device, the method comprising:
training a control policy according to any one of claims 1 to 5;
observing an environment to generate an observation;
determining one or more actions from the observation using the trained control policy;
estimating, by the robot device, a pose of the robot device; and
performing the one or more actions taking into account the estimated pose within the observed environment.
7. A control device configured to carry out the method according to any one of claims 1 to 6.
8. A computer program comprising instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1 to 6.
9. A computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1 to 6.
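Claim 4 generates errors by sampling from an error distribution around the pose-estimate result according to an uncertainty reported by the estimator. A minimal sketch of that idea, assuming a Gaussian error model and a stand-in sensor-fusion estimator (all names and numerical values are illustrative assumptions, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_pose_with_uncertainty():
    """Stand-in for a sensor-fusion pose estimator (e.g. IMU + camera):
    returns a pose estimate (x, y, yaw) and its 3x3 covariance."""
    pose = np.array([1.0, 2.0, 0.1])
    cov = np.diag([0.02**2, 0.02**2, 0.01**2])  # assumed uncertainty
    return pose, cov

pose, cov = estimate_pose_with_uncertainty()

# Sample pose-estimation errors from the distribution around the estimate;
# each row is one (dx, dy, dyaw) error usable for claim 1's data generation.
errors = rng.multivariate_normal(np.zeros(3), cov, size=50)

print(errors.shape)  # (50, 3)
```

Sampling from the estimator's own reported covariance ties the magnitude of the training-time disturbances to the uncertainty the robot will actually experience.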
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102022208089.0 | 2022-08-03 | | |
| DE102022208089.0A (DE102022208089A1) | 2022-08-03 | 2022-08-03 | Vorrichtung und Verfahren zum Steuern eines Roboters (Device and method for controlling a robot) |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| JP7369890B1 | 2023-10-26 |
| JP2024022545A | 2024-02-16 |
Family
ID=88418609
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| JP2023126350A (granted as JP7369890B1) | Device and method for controlling a robot | 2022-08-03 | 2023-08-02 |
Country Status (5)
| Country | Link |
|---|---|
| US | US20240045434A1 |
| JP | JP7369890B1 |
| KR | KR20240019042A |
| CN | CN117506886A |
| DE | DE102022208089A1 |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5168134B2 | 2008-12-26 | 2013-03-21 | Fujitsu Ltd. | Environment map generation program, environment map generation method, and mobile robot |
| WO2017076928A1 | 2015-11-02 | 2017-05-11 | Starship Technologies Oü | Method, device and assembly for map generation |
| KR102165967B1 | 2017-11-17 | 2020-10-15 | Mitsubishi Electric Corporation | Three-dimensional space monitoring device, method, and program |
| US20200156241A1 | 2018-11-21 | 2020-05-21 | Ford Global Technologies, LLC | Automation safety and performance robustness through uncertainty driven learning and control |
| DE102019200435A1 | 2019-01-16 | 2020-07-16 | Robert Bosch GmbH | Method for providing a kinematic model for kinematic working systems |
| DE102019209616A1 | 2019-07-01 | 2021-01-07 | KUKA Deutschland GmbH | Performing a predefined task with the aid of at least one robot |
| CN114269524B | 2019-08-28 | 2023-05-05 | Daily Color Inc. | Robot control device |
| DE102020103854B4 | 2020-02-14 | 2022-06-15 | Franka Emika GmbH | Machine learning of a successfully completed robot application |
| US20210252698A1 | 2020-02-14 | 2021-08-19 | Nvidia Corporation | Robotic control using deep learning |
| CN111890351A | 2020-06-12 | 2020-11-06 | Shenzhen Institutes of Advanced Technology | Robot, control method therefor, and computer-readable storage medium |
| US11878419B2 | 2020-06-26 | 2024-01-23 | Intel Corporation | Affordance-aware, multi-resolution, free-form object manipulation planning |
| DE102020212658A1 | 2020-10-07 | 2022-04-07 | Robert Bosch GmbH | Device and method for controlling a robot device |
| DE102020214177A1 | 2020-11-11 | 2022-05-12 | Robert Bosch GmbH | Device and method for training a control strategy by means of reinforcement learning |

2022
- 2022-08-03: DE application DE102022208089.0A filed (published as DE102022208089A1, active, pending)

2023
- 2023-07-31: US application US18/362,311 filed (published as US20240045434A1, active, pending)
- 2023-08-02: JP application JP2023126350A filed (granted as JP7369890B1, active)
- 2023-08-02: CN application CN202310969071.1A filed (published as CN117506886A, active, pending)
- 2023-08-02: KR application KR1020230100901A filed (published as KR20240019042A)
Also Published As
| Publication Number | Publication Date |
|---|---|
| CN117506886A | 2024-02-06 |
| KR20240019042A | 2024-02-14 |
| DE102022208089A1 | 2024-02-08 |
| JP7369890B1 | 2023-10-26 |
| US20240045434A1 | 2024-02-08 |
Legal Events
| Date | Code | Title |
|---|---|---|
| 2023-09-07 | A521 | Request for written amendment filed (JAPANESE INTERMEDIATE CODE: A523) |
| 2023-09-07 | A621 | Written request for application examination (JAPANESE INTERMEDIATE CODE: A621) |
| 2023-09-07 | A871 | Explanation of circumstances concerning accelerated examination (JAPANESE INTERMEDIATE CODE: A871) |
| | TRDD | Decision of grant or rejection written |
| 2023-10-03 | A01 | Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01) |
| 2023-10-16 | A61 | First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61) |
| | R150 | Certificate of patent or registration of utility model (ref document number: 7369890; country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150) |