JP2020149096A - Inverted pendulum type robot - Google Patents
Inverted pendulum type robot
- Publication number
- JP2020149096A (application JP2019043578A)
- Authority
- JP
- Japan
- Prior art keywords
- moving
- robot
- movement
- image
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators, characterised by motion, path, trajectory planning
- B25J9/1666—Avoiding collision or forbidden zones
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory involving a learning process
- G05D1/0231—Control in two dimensions using optical position detecting means
- G05D1/024—Optical position detection using obstacle or wall sensors in combination with a laser
- G05D1/0246—Optical position detection using a video camera in combination with image processing means
- G05D1/0297—Fleet control by controlling means in a control room
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/04—Neural network architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06V10/764—Image or video recognition using classification, e.g. of video objects
- G06V10/82—Image or video recognition using neural networks
- G06V20/10—Terrestrial scenes
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06N3/044—Recurrent networks, e.g. Hopfield networks
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Aviation & Aerospace Engineering (AREA)
- Automation & Control Theory (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Human Computer Interaction (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Electromagnetism (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Robotics (AREA)
- Optics & Photonics (AREA)
- Mechanical Engineering (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
Abstract
Description
The route determination system 1 shown in FIG. 1, as one embodiment of the present invention, is applied to an inverted pendulum type robot 2 (corresponding to the "mobile device") and determines a route for the robot 2, by a method described below, under conditions in which traffic participants are likely to be present.
When the control device 10 of the robot 2 receives, via the wireless communication device 14, designated-point data transmitted from the server 5, the designated point Pobj represented by that data is read in, and a route to the designated point Pobj is determined.
Next, the configuration of the route determination system 1 of this embodiment and the principle of the route determination method will be described. The learning device 30 shown in FIG. 4 serves to learn the model parameters of the CNN (the weights and bias terms of its connection layers, described below) and comprises a LIDAR 31, a movement route acquisition element 32, a training data acquisition element 33, and a CNN learning element 34. Elements 32 to 34 are implemented by an arithmetic processing unit, a storage device, and the like.
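The division of labor just described (acquire movement routes, pair images with behaviors, learn model parameters) can be sketched as a minimal pipeline. This is an illustrative assumption, not the patent's implementation: the class and method names are invented, and an ordinary least-squares linear map stands in for the CNN whose weights and bias terms the learning device 30 would actually fit.

```python
import numpy as np

class RouteLearningPipeline:
    """Toy stand-in for learning device 30: pairs environmental images
    with behavior parameters and fits a model to the pairs."""

    def __init__(self):
        self.weights = None  # learned parameters (CNN weights/biases in the patent)

    def build_training_data(self, images, behaviors):
        # Role of the training data acquisition element 33:
        # associate each (flattened) environmental image with a behavior parameter.
        X = np.array([np.asarray(img, dtype=float).ravel() for img in images])
        y = np.array(behaviors, dtype=float)
        return X, y

    def learn(self, X, y):
        # Role of the CNN learning element 34, reduced to least squares:
        # fit weights plus a bias term mapping image data to behavior.
        X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
        self.weights, *_ = np.linalg.lstsq(X1, y, rcond=None)

    def predict(self, image):
        # Apply the learned model to a new environmental image.
        x1 = np.append(np.asarray(image, dtype=float).ravel(), 1.0)
        return float(x1 @ self.weights)
```

With this stand-in, images whose varying pixel is linearly related to the behavior parameter recover that relation exactly; the patent's CNN plays the same role for far richer image-to-behavior mappings.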
R = γ1·sx (when sx ≥ 0; R = 255 when R > 255),
G = 127 + γ2·sy (when sy < 0; G = 0 when G < 0),
G = 127 + γ2·sy (when sy ≥ 0; G = 255 when G > 255) … (1).
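Transcribed directly, formula (1) maps a region's time-series displacement (sx, sy) to its R and G channel values, clamped to [0, 255]. A minimal sketch; γ1 and γ2 are left as free gains, and the sx < 0 case and the B channel, which this excerpt does not specify, are deliberately not filled in:

```python
def displacement_to_color(sx, sy, gamma1=1.0, gamma2=1.0):
    """Color channels for a moving-object image region per formula (1).

    Only the cases stated in the excerpt are encoded; the sx < 0 case
    is not given there, so R is returned as None for it.
    """
    r = None
    if sx >= 0:
        r = min(int(gamma1 * sx), 255)        # R = γ1·sx, capped at 255
    if sy < 0:
        g = max(int(127 + gamma2 * sy), 0)    # G = 127 + γ2·sy, floored at 0
    else:
        g = min(int(127 + gamma2 * sy), 255)  # G = 127 + γ2·sy, capped at 255
    return r, g
```

Encoding displacement direction and magnitude into color this way lets a network read a moving object's motion from a single still image.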
According to the route determination system 1 as one embodiment of the present invention, in an environment in which a plurality of second pedestrians M2 (second moving objects) move in accordance with each of a plurality of movement patterns, a plurality of movement routes Rw taken when a first pedestrian M1 (first moving object) moved toward the destination point Po while avoiding interference with the plurality of second pedestrians M2 are recognized (see FIGS. 7 to 13).
In the embodiment above, the autonomously movable robot 2 was adopted as the "mobile device," but in other embodiments a vehicle that moves by rotating one or more wheels, a crawler-type mobile device, a bipedal walking robot, or the like may be adopted instead. The mobile device 2 may also be one that moves by being operated by a human riding on it, or one that moves under remote operation by a human.
Claims (3)
- A route determination method in which a mobile device determines a target movement route to a destination point in a situation where a plurality of moving objects exist around the mobile device, the method comprising:
recognizing a plurality of movement routes of a first moving object, taken when, in a situation where a plurality of second moving objects move in accordance with a plurality of mutually different movement patterns, the first moving object moved to the destination point while avoiding interference with each of the plurality of second moving objects;
generating a plurality of training data in which a behavior parameter representing the behavior of the mobile device is associated with environmental image data containing an environmental image that represents the visual environment around the mobile device as it moved along each of the plurality of movement routes, and in which at least part of a moving-object image region representing a moving object around the mobile device is colored according to the time-series displacement of that image region;
learning, with the plurality of training data and according to a designated learning method, the model parameters of a behavior model that takes the environmental image data as input and the behavior parameter as output, thereby generating a learned model; and
determining the target movement route of the mobile device using the learned model.
- The route determination method according to claim 1, wherein the environmental image data further includes, in addition to the environmental image, at least one of a speed image representing the magnitude of the speed of the mobile device and a direction image representing the direction of the destination point.
- The route determination method according to claim 1 or 2, wherein the plurality of training data are constituted by the environmental image data obtained when a virtual version of the robot moved along each of the plurality of movement routes in a virtual space, and the behavior parameter associated with that environmental image data.
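The training datum the claims describe can be pictured as a record associating environmental image data (the environmental image, plus the optional speed and direction images of claim 2) with a behavior parameter. A hedged sketch; every field and function name here is invented for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrainingSample:
    """One training datum per claim 1: environmental image data
    associated with a behavior parameter of the mobile device."""
    environment_image: list                 # colored environmental image (claim 1)
    behavior: float                         # behavior parameter (e.g. a steering command)
    speed_image: Optional[list] = None      # speed image (claim 2, optional)
    direction_image: Optional[list] = None  # direction image (claim 2, optional)

def make_dataset(env_images, behaviors,
                 speed_images=None, direction_images=None) -> List[TrainingSample]:
    # Associate each environmental image with its behavior parameter, as would
    # be generated from a virtual robot's runs in a virtual space (claim 3).
    n = len(env_images)
    speed_images = speed_images or [None] * n
    direction_images = direction_images or [None] * n
    return [TrainingSample(e, b, s, d)
            for e, b, s, d in zip(env_images, behaviors, speed_images, direction_images)]
```

Gathering the data in a virtual space, as claim 3 does, lets arbitrarily many pedestrian movement patterns be replayed without running the physical robot.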
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019043578A JP7250572B2 (ja) | 2019-03-11 | 2019-03-11 | Inverted pendulum type robot |
CN202010096525.5A CN111673731B (zh) | 2019-03-11 | 2020-02-17 | Route determination method |
US16/812,381 US11693416B2 (en) | 2019-03-11 | 2020-03-09 | Route determination method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019043578A JP7250572B2 (ja) | 2019-03-11 | 2019-03-11 | Inverted pendulum type robot |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2020149096A (ja) | 2020-09-17 |
JP7250572B2 (ja) | 2023-04-03 |
Family
ID=72424670
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2019043578A Active JP7250572B2 (ja) | 2019-03-11 | 2019-03-11 | 倒立振子型ロボット |
Country Status (3)
Country | Link |
---|---|
US (1) | US11693416B2 (ja) |
JP (1) | JP7250572B2 (ja) |
CN (1) | CN111673731B (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005071265A * | 2003-08-27 | 2005-03-17 | Matsushita Electric Ind Co Ltd | Learning device and method, and robot customization method |
JP2008204102A * | 2007-02-19 | 2008-09-04 | Yokohama National Univ | Image processing system |
JP2018124982A * | 2017-01-31 | 2018-08-09 | Panasonic Intellectual Property Corporation of America | Control device and control method |
JP2018190241A * | 2017-05-09 | 2018-11-29 | Omron Corporation | Task execution system, task execution method, and learning device and learning method therefor |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7813870B2 (en) * | 2006-03-03 | 2010-10-12 | Inrix, Inc. | Dynamic time series prediction of future traffic conditions |
JP4576445B2 (ja) | 2007-04-12 | 2010-11-10 | Panasonic Corporation | Autonomous mobile device and program for autonomous mobile device |
JP5402057B2 (ja) | 2009-02-16 | 2014-01-29 | Toyota Motor Corporation | Mobile robot control system, route search method, and route search program |
CN108122243B (zh) * | 2016-11-26 | 2021-05-28 | Shenyang Siasun Robot & Automation Co., Ltd. | Method for a robot to detect moving objects |
US10268200B2 * | 2016-12-21 | 2019-04-23 | Baidu Usa Llc | Method and system to predict one or more trajectories of a vehicle based on context surrounding the vehicle |
JP2019008519A * | 2017-06-23 | 2019-01-17 | Panasonic Intellectual Property Corporation of America | Moving object detection method, moving object learning method, moving object detection device, moving object learning device, moving object detection system, and program |
2019
- 2019-03-11 JP JP2019043578A patent/JP7250572B2/ja active Active
2020
- 2020-02-17 CN CN202010096525.5A patent/CN111673731B/zh active Active
- 2020-03-09 US US16/812,381 patent/US11693416B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
JP7250572B2 (ja) | 2023-04-03 |
US20200293052A1 (en) | 2020-09-17 |
CN111673731B (zh) | 2023-05-30 |
US11693416B2 (en) | 2023-07-04 |
CN111673731A (zh) | 2020-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7130062B2 (ja) | Route determination method | |
Zhang et al. | 2d lidar-based slam and path planning for indoor rescue using mobile robots | |
JP7469850B2 (ja) | Route determination device, robot, and route determination method | |
KR102303432B1 (ko) | Mapless navigation system based on DQN and SLAM considering obstacle characteristics, and processing method thereof | |
Zhu et al. | A hierarchical deep reinforcement learning framework with high efficiency and generalization for fast and safe navigation | |
CN116804879A (zh) | Robot path planning framework method fusing an improved dung beetle algorithm with the DWA algorithm | |
US11467592B2 (en) | Route determination method | |
JP7258046B2 (ja) | Route determination device, robot, and route determination method | |
Kim et al. | An open-source low-cost mobile robot system with an RGB-D camera and efficient real-time navigation algorithm | |
JP7250572B2 (ja) | Inverted pendulum type robot | |
JP7250573B2 (ja) | Inverted pendulum type robot | |
Tao et al. | Fast and Robust Training and Deployment of Deep Reinforcement Learning Based Navigation Policy | |
Kivrak et al. | A multilevel mapping based pedestrian model for social robot navigation tasks in unknown human environments | |
US20220076004A1 (en) | Model parameter learning method and movement mode parameter determination method | |
Afonso et al. | Autonomous Navigation of Wheelchairs in Indoor Environments using Deep Reinforcement Learning and Computer Vision | |
Singh et al. | Architecture and Algorithms for | |
Wang et al. | Embedded navigation system of mechatronics robot by fuzzy control algorithm | |
CN117193328A (zh) | Path planning method for mobile robots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2021-11-26 | A621 | Written request for application examination | Free format text: JAPANESE INTERMEDIATE CODE: A621 |
2022-10-24 | A977 | Report on retrieval | Free format text: JAPANESE INTERMEDIATE CODE: A971007 |
2022-11-01 | A131 | Notification of reasons for refusal | Free format text: JAPANESE INTERMEDIATE CODE: A131 |
2022-12-21 | A521 | Request for written amendment filed | Free format text: JAPANESE INTERMEDIATE CODE: A523 |
| TRDD | Decision of grant or rejection written | |
2023-03-07 | A01 | Written decision to grant a patent or to grant a registration (utility model) | Free format text: JAPANESE INTERMEDIATE CODE: A01 |
2023-03-22 | A61 | First payment of annual fees (during grant procedure) | Free format text: JAPANESE INTERMEDIATE CODE: A61 |
| R150 | Certificate of patent or registration of utility model | Ref document number: 7250572; Country of ref document: JP; Free format text: JAPANESE INTERMEDIATE CODE: R150 |
Ref document number: 7250572 Country of ref document: JP Free format text: JAPANESE INTERMEDIATE CODE: R150 |