CN105955251A - Vision following control method of robot and robot - Google Patents
- Publication number
- CN105955251A CN105955251A CN201610136001.8A CN201610136001A CN105955251A CN 105955251 A CN105955251 A CN 105955251A CN 201610136001 A CN201610136001 A CN 201610136001A CN 105955251 A CN105955251 A CN 105955251A
- Authority
- CN
- China
- Prior art keywords
- robot
- target
- control method
- following
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 31
- 230000004888 barrier function Effects 0.000 claims abstract description 4
- 238000001514 detection method Methods 0.000 claims abstract description 4
- 230000009471 action Effects 0.000 claims description 19
- 230000008859 change Effects 0.000 claims description 9
- 230000003068 static effect Effects 0.000 claims description 4
- 230000008569 process Effects 0.000 abstract description 5
- 238000007405 data analysis Methods 0.000 description 3
- 230000003993 interaction Effects 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 239000000126 substance Substances 0.000 description 2
- 230000007547 defect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000008961 swelling Effects 0.000 description 1
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0242—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Electromagnetism (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
Abstract
The present invention relates to a vision following control method of a robot and a robot. A sensor detection unit is configured to detect the surrounding environment, obtain a depth data map of the environment, and send the obtained depth data map to a data processing control unit. The data processing control unit is configured to process the received data after receiving the depth data map, obtain a real-world coordinate system, identify the people and obstacles in the environment, and perform skeleton and action recognition and tracking of a followed target to obtain an execution command. A motion execution unit is configured to control the robot to track the followed target according to the obtained target information, and to control the robot according to the obtained execution command so as to execute the command. The vision following control method and the robot can rapidly identify the followed target, effectively identify the followed target in an environment containing many people, react rapidly to position changes of the followed target, and are applicable to the vast majority of commercial application scenarios.
Description
Technical field
The present invention relates to the field of robotics, and in particular to a robot vision following control method and a robot.
Background technology
With the continuing application of robot-related technology in production and daily life, people's requirements for human-robot interaction are increasingly high. Robot following technology is an important component of the field of human-computer interaction, and plays an important role in domestic, commercial, and military applications of robots.
Most current robot following technology suffers from two defects: first, it cannot achieve target following in an environment with many people; second, it cannot carry out effective action interaction with a person during following. The present invention was conceived against this background, to achieve following in complex environments and to provide the user with friendly following interaction.
Summary of the invention
It is an object of the present invention to provide a robot vision following control method and a robot, which solve the following problem in robot applications, with emphasis on target following in an environment with many people and on action recognition of the target during following.
The invention provides a robot vision following control method. The robot includes a sensor detection unit, a data processing control unit, and a motion execution unit. The control method includes:
Step 1: the sensor detection unit detects the surrounding environment, obtains a depth data map of the environment, and sends the obtained depth data map to the data processing control unit;
Step 2: after receiving the depth data map, the data processing control unit processes the data, obtains a real-world coordinate system, identifies the people and obstacles in the environment, and performs skeleton and action recognition and tracking of the followed target to obtain an execution command;
Step 3: the motion execution unit controls the robot to track the followed target according to the obtained target information, and controls the robot according to the obtained execution command so as to execute the command.
In Step 1, the sensor detection unit uses a depth camera, which generates an infrared coded pattern by means of an infrared emitter and receiver and finally computes a depth map; a person image and a person skeleton can be obtained from the depth map.
Step 2 includes target localization. Target localization requires target recognition during tracking: each identified person is numbered, and the skeleton proportions of each person are recorded.
In Step 2, after person recognition is completed, the centroid of the tracked target is continuously computed from the person's depth image by a weighted centroid algorithm, and the position change of the person target is judged from the centroid. After the target centroid is obtained, it is converted into the real-world coordinate system, and the distance and deviation angle of the target from the robot are computed, completing the tracking of the followed target's position.
Step 2 includes target action recognition. Target action recognition combines the OpenNI gesture library with the NiTE skeleton, so that target actions can be identified and an execution command obtained according to the identified action.
In Step 3, the robot determines whether an adjustment is needed according to the distance and angle to the followed target: it follows only if the deviation angle exceeds a set angle or the distance exceeds the set following distance.
Before following begins in Step 3, the depth map can be used to generate an environment path map for path planning. During dynamic path planning, a feasible target point search can be carried out according to the robot's possible radius; if no feasible path exists, the robot is controlled to remain stationary; if a feasible path is obtained, the robot is controlled to track the position of the target and to continuously adjust its motion trajectory according to the target's position changes.
In Step 3, the robot completes the corresponding operation according to the user's action, providing good interaction during following.
A robot using the above control method.
The control method and robot of the present invention can quickly identify the followed target, effectively identify the followed target in an environment with many people, react rapidly to position changes of the followed target during following, and respond interactively to the user's gestures, achieving following interaction for the robot in complex environments and adapting to most commercial application scenarios.
Accompanying drawing explanation
Fig. 1 is a flow chart of the control method of the present invention.
Detailed description of the invention
Specific embodiments of the robot vision following control method and robot according to the present invention will be described with reference to the drawings. The following detailed description and the drawings illustrate the principles of the invention by way of example; the invention is not limited to the preferred embodiments described, and the scope of the invention is defined by the claims.
In a robot vision following control method of the present invention, the robot includes a sensor detection unit, a data processing control unit, and a motion execution unit. As shown in Fig. 1, the control method includes:
Step 1: the sensor detection unit detects the surrounding environment, obtains a depth data map of the environment, and sends the obtained depth data map to the data processing control unit. This method is simple to operate, and the data obtained are accurate and reliable.
Step 2: after receiving the depth data map, the data processing control unit processes the data, obtains a real-world coordinate system, identifies the people and obstacles in the environment, and performs skeleton and action recognition and tracking of the followed target to obtain an execution command.
Step 3: the motion execution unit controls the robot to track the followed target according to the obtained target information, and controls the robot according to the obtained execution command so as to execute the command.
In Step 1, the sensor detection unit uses a depth camera, which generates an infrared coded pattern by means of an infrared emitter and receiver and finally computes a depth map; a person image and a person skeleton can be obtained from the depth map. The depth camera uses an Xtion, which generates the infrared coded pattern through its infrared emitter and receiver and finally computes the depth map; after the depth map is processed by OpenNI and NiTE, the person image and person skeleton can be obtained.
Step 2 includes target localization. Target localization requires target recognition during tracking: each person identified by OpenNI is numbered, and the person's skeleton proportions are recorded, so that person targets can be distinguished in a multi-person environment. After the tracked target disappears from the robot's field of view and returns, target tracking can be resumed by comparing skeleton proportions. This ensures stability in an environment with many people, and solves the problem that most robots cannot continue following after the target leaves their limited field of view.
After person recognition is completed, the centroid of the tracked target is continuously computed from the person's depth image by a weighted centroid algorithm, and the position change of the person target is judged from the centroid. After the target centroid is obtained, it is converted into the real-world coordinate system, and the distance and deviation angle of the target from the robot are computed, completing the tracking of the followed target's position. Target action recognition combines the OpenNI gesture library with the NiTE skeleton, so that target actions such as raising a hand or waving, as well as numerous gestures, can be recognized; the identified action is passed to the motion execution part for corresponding processing.
In Step 3, the robot determines whether an adjustment is needed according to the distance and angle to the followed target: it follows only if the deviation angle exceeds a set angle or the distance exceeds the set following distance. Before following begins, the depth map can be used to generate an environment path map for path planning. During dynamic path planning, a feasible target point search can be carried out according to the robot's possible radius; if no feasible path exists, the robot remains stationary; if a feasible path is obtained, the robot tracks the position of the target and continuously adjusts its motion trajectory according to the target's position changes. The robot completes the corresponding operation according to the user's action, providing good interaction during following. The movement commands the robot needs during following include turning left, turning right, moving forward, moving forward-left, and moving forward-right; a movement command can control the specific time and distance of the robot's motion.
A robot using the above control method includes a sensor detection unit, a data processing control unit, and a motion execution unit.
In the sensor detection unit, a depth camera is used. The depth camera of the sensor uses an Xtion, which generates an infrared coded pattern through its infrared emitter and receiver and finally computes a depth map; after the depth map is processed by OpenNI and NiTE, a person image and person skeleton can be obtained.
In the data processing control unit, the data analysis layer completes the tracking of the target's position and the recognition of the target's actions. Target position tracking first requires target recognition: each person identified by OpenNI is numbered, and the person's skeleton proportions are recorded, so that person targets can be distinguished in a multi-person environment. After the tracked target disappears from the robot's field of view and returns, target tracking can be resumed by comparing skeleton proportions, ensuring stability in an environment with many people and solving the problem that most robots cannot continue following after the target leaves their limited field of view. After person recognition is completed, the centroid of the tracked target is continuously computed from the person's depth image by a weighted centroid algorithm, and the position change of the person target is judged from the centroid. After the target centroid is obtained, it is converted into the real-world coordinate system, and the distance and deviation angle of the target from the robot are computed, completing the tracking of the followed target's position. The target action recognition part combines the OpenNI gesture library with the NiTE skeleton, so that actions such as the target raising a hand or waving, as well as numerous gestures, can be recognized; the identified action is passed to the motion execution unit for corresponding processing.
The motion execution unit controls the robot with movement commands according to the person position information and user actions obtained from the data analysis layer. The movement commands the robot needs during following include turning left, turning right, moving forward, moving forward-left, and moving forward-right; a movement command can control the specific time and distance of the robot's motion. The robot first determines whether an adjustment is needed according to the distance and angle to the followed target: it follows only if the deviation angle exceeds 10 degrees or the distance exceeds the set following distance. Before following begins, the depth map can be used to generate an environment path map for path planning. During dynamic path planning, a feasible target point search can be carried out according to the robot's possible radius; if no feasible path exists, the robot remains stationary; if a feasible path is obtained, the robot tracks the position of the target and continuously adjusts its motion trajectory according to the target's position changes. The motion layer of the robot receives the user action recognition sent by the data analysis layer, and according to the user's action can complete operations such as starting motion, stopping, adjusting direction, and adjusting speed, providing good interaction during following.
Although exemplary embodiments of the present invention have been described above with reference to the drawings, the invention is not limited to the specific embodiments described, and many other embodiments are possible; the scope of the invention should be defined by the claims and their equivalents.
Claims (9)
1. A robot vision following control method, characterized in that the robot includes a sensor detection unit, a data processing control unit, and a motion execution unit, and the control method includes:
Step 1: the sensor detection unit detects the surrounding environment, obtains a depth data map of the environment, and sends the obtained depth data map to the data processing control unit;
Step 2: after receiving the depth data map, the data processing control unit processes the data, obtains a real-world coordinate system, identifies the people and obstacles in the environment, and performs skeleton and action recognition and tracking of the followed target to obtain an execution command;
Step 3: the motion execution unit controls the robot to track the followed target according to the obtained target information, and controls the robot according to the obtained execution command so as to execute the command.
2. The robot vision following control method according to claim 1, characterized in that: in Step 1, the sensor detection unit uses a depth camera, which generates an infrared coded pattern by means of an infrared emitter and receiver and finally computes a depth map, from which a person image and a person skeleton can be obtained.
3. The robot vision following control method according to claim 1, characterized in that: Step 2 includes target localization; target localization requires target recognition during tracking, each identified person is numbered, and the skeleton proportions of each person are recorded.
4. The robot vision following control method according to claim 3, characterized in that: in Step 2, after person recognition is completed, the centroid of the tracked target is continuously computed from the person's depth image by a weighted centroid algorithm, the position change of the person target is judged from the centroid, and after the target centroid is obtained it is converted into the real-world coordinate system, and the distance and deviation angle of the target from the robot are computed, completing the tracking of the followed target's position.
5. The robot vision following control method according to claim 3, characterized in that: Step 2 includes target action recognition, which combines the OpenNI gesture library with the NiTE skeleton, so that target actions can be identified and an execution command obtained according to the identified action.
6. The robot vision following control method according to claim 1, characterized in that: in Step 3, the robot determines whether an adjustment is needed according to the distance and angle to the followed target, and follows only if the deviation angle exceeds a set angle or the distance exceeds the set following distance.
7. The robot vision following control method according to claim 6, characterized in that: before following begins in Step 3, the depth map can be used to generate an environment path map for path planning; during dynamic path planning a feasible target point search is carried out according to the robot's possible radius; if no feasible path exists the robot is controlled to remain stationary, and if a feasible path is obtained the robot is controlled to track the position of the target and to continuously adjust its motion trajectory according to the target's position changes.
8. The robot vision following control method according to claim 6, characterized in that: in Step 3, the robot completes the corresponding operation according to the user's action, providing good interaction during following.
9. A robot using the control method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610136001.8A CN105955251A (en) | 2016-03-11 | 2016-03-11 | Vision following control method of robot and robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610136001.8A CN105955251A (en) | 2016-03-11 | 2016-03-11 | Vision following control method of robot and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105955251A true CN105955251A (en) | 2016-09-21 |
Family
ID=56917440
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610136001.8A Pending CN105955251A (en) | 2016-03-11 | 2016-03-11 | Vision following control method of robot and robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105955251A (en) |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106426180A (en) * | 2016-11-24 | 2017-02-22 | 深圳市旗瀚云技术有限公司 | Robot capable of carrying out intelligent following based on face tracking |
CN106502418A (en) * | 2016-11-09 | 2017-03-15 | 南京阿凡达机器人科技有限公司 | A kind of vision follower method based on monocular gesture identification |
CN107097256A (en) * | 2017-04-21 | 2017-08-29 | 河海大学常州校区 | Model-free method for tracking target of the view-based access control model nonholonomic mobile robot under polar coordinates |
CN107272744A (en) * | 2017-05-27 | 2017-10-20 | 芜湖星途机器人科技有限公司 | The robot active system for tracking being engaged with the number of taking machine |
CN107390721A (en) * | 2017-07-26 | 2017-11-24 | 歌尔科技有限公司 | Robot retinue control method, device and robot |
CN107398900A (en) * | 2017-05-27 | 2017-11-28 | 芜湖星途机器人科技有限公司 | Active system for tracking after robot identification human body |
CN107608392A (en) * | 2017-09-19 | 2018-01-19 | 浙江大华技术股份有限公司 | The method and apparatus that a kind of target follows |
CN107807652A (en) * | 2017-12-08 | 2018-03-16 | 灵动科技(北京)有限公司 | Merchandising machine people, the method for it and controller and computer-readable medium |
CN108107884A (en) * | 2017-11-20 | 2018-06-01 | 北京理工华汇智能科技有限公司 | Robot follows the data processing method and its intelligent apparatus of navigation |
CN108151742A (en) * | 2017-11-20 | 2018-06-12 | 北京理工华汇智能科技有限公司 | The data processing method and its intelligent apparatus of robot navigation |
CN108153180A (en) * | 2016-12-05 | 2018-06-12 | 苏州新世得机电设备有限公司 | Intellect service robot system |
CN108170166A (en) * | 2017-11-20 | 2018-06-15 | 北京理工华汇智能科技有限公司 | The follow-up control method and its intelligent apparatus of robot |
CN108536145A (en) * | 2018-04-10 | 2018-09-14 | 深圳市开心橙子科技有限公司 | A kind of robot system intelligently followed using machine vision and operation method |
CN108566535A (en) * | 2018-04-23 | 2018-09-21 | 苏州中科先进技术研究院有限公司 | Intelligent mobile camera and intelligent mobile monitoring system |
CN108717302A (en) * | 2018-05-14 | 2018-10-30 | 平安科技(深圳)有限公司 | Robot follows personage's method, apparatus and storage medium, robot |
CN108814444A (en) * | 2018-06-29 | 2018-11-16 | 炬大科技有限公司 | A kind of sweeping robot leg follows cleaning method and device |
CN108897236A (en) * | 2018-07-26 | 2018-11-27 | 佛山市高明曦逻科技有限公司 | Gesture identification intelligent electric appliance control system based on mobile detection |
WO2018228254A1 (en) * | 2017-06-12 | 2018-12-20 | 炬大科技有限公司 | Mobile electronic device and method for use in mobile electronic device |
CN109460031A (en) * | 2018-11-28 | 2019-03-12 | 科大智能机器人技术有限公司 | A kind of system for tracking of the automatic tractor based on human bioequivalence |
CN109960145A (en) * | 2017-12-22 | 2019-07-02 | 天津工业大学 | Mobile robot mixes vision track following strategy |
CN110147091A (en) * | 2018-02-13 | 2019-08-20 | 深圳市优必选科技有限公司 | Motion planning and robot control method, apparatus and robot |
CN110320523A (en) * | 2019-07-05 | 2019-10-11 | 齐鲁工业大学 | Follow the target locating set and method of robot |
CN110355758A (en) * | 2019-07-05 | 2019-10-22 | 北京史河科技有限公司 | A kind of machine follower method, equipment and follow robot system |
CN110515384A (en) * | 2019-09-09 | 2019-11-29 | 深圳市三宝创新智能有限公司 | A kind of the human body follower method and robot of view-based access control model mark |
CN111052025A (en) * | 2017-09-13 | 2020-04-21 | 日本电产株式会社 | Mobile robot system |
WO2020077608A1 (en) * | 2018-10-19 | 2020-04-23 | 深圳新物种科技有限公司 | Object recognition method and apparatus, electronic device, and computer readable storage medium |
CN111290377A (en) * | 2018-11-21 | 2020-06-16 | 富士施乐株式会社 | Autonomous moving apparatus and computer readable medium |
CN111352432A (en) * | 2018-12-20 | 2020-06-30 | 北京石头世纪科技股份有限公司 | Intelligent cleaning device, control method thereof and readable medium |
CN111820822A (en) * | 2020-07-30 | 2020-10-27 | 睿住科技有限公司 | Sweeping robot, illuminating method thereof and computer readable storage medium |
CN112346460A (en) * | 2020-11-05 | 2021-02-09 | 泉州装备制造研究所 | Automatic following method of mobile robot suitable for multi-person scene |
CN113495490A (en) * | 2020-04-07 | 2021-10-12 | 深圳爱根斯通科技有限公司 | Device control method, device, electronic device and storage medium |
CN113741550A (en) * | 2020-05-15 | 2021-12-03 | 北京机械设备研究所 | Mobile robot following method and system |
CN113885519A (en) * | 2021-10-27 | 2022-01-04 | 北京小乔机器人科技发展有限公司 | Method for controlling robot to automatically follow |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0221643A2 (en) * | 1985-08-30 | 1987-05-13 | Texas Instruments Incorporated | Vision navigation system for free-roaming mobile robots |
CN102411371A (en) * | 2011-11-18 | 2012-04-11 | 浙江大学 | Multi-sensor service-based robot following system and method |
CN104375504A (en) * | 2014-09-12 | 2015-02-25 | 中山大学 | Running accompanying robot and tracking control strategy and movement control method for running accompanying robot |
CN104589356A (en) * | 2014-11-27 | 2015-05-06 | 北京工业大学 | Dexterous hand teleoperation control method based on Kinect human hand motion capturing |
CN104950887A (en) * | 2015-06-19 | 2015-09-30 | 重庆大学 | Transportation device based on robot vision system and independent tracking system |
- 2016-03-11: Application CN201610136001.8A filed (publication CN105955251A, status pending)
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106502418A (en) * | 2016-11-09 | 2017-03-15 | 南京阿凡达机器人科技有限公司 | A kind of vision follower method based on monocular gesture identification |
CN106502418B (en) * | 2016-11-09 | 2019-04-16 | 南京阿凡达机器人科技有限公司 | A kind of vision follower method based on monocular gesture identification |
CN106426180A (en) * | 2016-11-24 | 2017-02-22 | 深圳市旗瀚云技术有限公司 | Robot capable of carrying out intelligent following based on face tracking |
CN108153180A (en) * | 2016-12-05 | 2018-06-12 | 苏州新世得机电设备有限公司 | Intellect service robot system |
CN107097256A (en) * | 2017-04-21 | 2017-08-29 | 河海大学常州校区 | Model-free method for tracking target of the view-based access control model nonholonomic mobile robot under polar coordinates |
CN107398900A (en) * | 2017-05-27 | 2017-11-28 | 芜湖星途机器人科技有限公司 | Active system for tracking after robot identification human body |
CN107272744A (en) * | 2017-05-27 | 2017-10-20 | 芜湖星途机器人科技有限公司 | The robot active system for tracking being engaged with the number of taking machine |
WO2018228254A1 (en) * | 2017-06-12 | 2018-12-20 | 炬大科技有限公司 | Mobile electronic device and method for use in mobile electronic device |
CN107390721A (en) * | 2017-07-26 | 2017-11-24 | 歌尔科技有限公司 | Robot retinue control method, device and robot |
CN107390721B (en) * | 2017-07-26 | 2021-05-18 | 歌尔科技有限公司 | Robot following control method and device and robot |
CN111052025A (en) * | 2017-09-13 | 2020-04-21 | 日本电产株式会社 | Mobile robot system |
CN107608392A (en) * | 2017-09-19 | 2018-01-19 | 浙江大华技术股份有限公司 | The method and apparatus that a kind of target follows |
CN108170166A (en) * | 2017-11-20 | 2018-06-15 | 北京理工华汇智能科技有限公司 | The follow-up control method and its intelligent apparatus of robot |
CN108151742A (en) * | 2017-11-20 | 2018-06-12 | 北京理工华汇智能科技有限公司 | The data processing method and its intelligent apparatus of robot navigation |
CN108107884A (en) * | 2017-11-20 | 2018-06-01 | 北京理工华汇智能科技有限公司 | Robot follows the data processing method and its intelligent apparatus of navigation |
CN107807652A (en) * | 2017-12-08 | 2018-03-16 | 灵动科技(北京)有限公司 | Merchandising machine people, the method for it and controller and computer-readable medium |
CN109960145B (en) * | 2017-12-22 | 2022-06-14 | 天津工业大学 | Mobile robot mixed vision trajectory tracking strategy |
CN109960145A (en) * | 2017-12-22 | 2019-07-02 | 天津工业大学 | Mobile robot mixes vision track following strategy |
CN110147091B (en) * | 2018-02-13 | 2022-06-28 | 深圳市优必选科技有限公司 | Robot motion control method and device and robot |
CN110147091A (en) * | 2018-02-13 | 2019-08-20 | 深圳市优必选科技有限公司 | Motion planning and robot control method, apparatus and robot |
CN108536145A (en) * | 2018-04-10 | 2018-09-14 | Shenzhen Kaixin Chengzi Technology Co., Ltd. | Intelligent following robot system using machine vision and operation method |
CN108566535A (en) * | 2018-04-23 | 2018-09-21 | Suzhou Zhongke Advanced Technology Research Institute Co., Ltd. | Intelligent mobile camera and intelligent mobile monitoring system |
CN108717302B (en) * | 2018-05-14 | 2021-06-25 | Ping An Technology (Shenzhen) Co., Ltd. | Method and device for robot to follow person, storage medium and robot |
CN108717302A (en) * | 2018-05-14 | 2018-10-30 | Ping An Technology (Shenzhen) Co., Ltd. | Method and device for robot to follow a person, storage medium, and robot |
CN108814444B (en) * | 2018-06-29 | 2021-01-29 | Juda Technology Co., Ltd. | Sweeping robot leg following sweeping method and device |
CN108814444A (en) * | 2018-06-29 | 2018-11-16 | Juda Technology Co., Ltd. | Leg-following sweeping method and device for sweeping robot |
CN108897236A (en) * | 2018-07-26 | 2018-11-27 | Foshan Gaoming Xiluo Technology Co., Ltd. | Gesture recognition intelligent appliance control system based on motion detection |
WO2020077608A1 (en) * | 2018-10-19 | 2020-04-23 | Shenzhen New Species Technology Co., Ltd. | Object recognition method and apparatus, electronic device, and computer readable storage medium |
US11960275B2 (en) | 2018-11-21 | 2024-04-16 | Fujifilm Business Innovation Corp. | Autonomous moving apparatus and non-transitory computer readable medium |
CN111290377A (en) * | 2018-11-21 | 2020-06-16 | Fuji Xerox Co., Ltd. | Autonomous moving apparatus and computer readable medium |
CN111290377B (en) * | 2018-11-21 | 2023-10-10 | 富士胶片商业创新有限公司 | Autonomous mobile apparatus and computer readable medium |
CN109460031A (en) * | 2018-11-28 | 2019-03-12 | CSG Smart Robot Technology Co., Ltd. | Tracking system for an automatic tractor based on human body recognition |
CN111352432A (en) * | 2018-12-20 | 2020-06-30 | Beijing Roborock Technology Co., Ltd. | Intelligent cleaning device, control method thereof and readable medium |
CN111352432B (en) * | 2018-12-20 | 2023-09-15 | Beijing Roborock Technology Co., Ltd. | Intelligent cleaning device, control method thereof and readable medium |
CN110355758A (en) * | 2019-07-05 | 2019-10-22 | Beijing Shihe Technology Co., Ltd. | Machine following method, device, and following robot system |
CN110320523A (en) * | 2019-07-05 | 2019-10-11 | Qilu University of Technology | Target locating device and method for following robot |
CN110515384A (en) * | 2019-09-09 | 2019-11-29 | Shenzhen Sanbao Innovation Intelligence Co., Ltd. | Human body following method and robot based on visual markers |
CN113495490A (en) * | 2020-04-07 | 2021-10-12 | Shenzhen Aigenstone Technology Co., Ltd. | Device control method, device, electronic device and storage medium |
CN113741550A (en) * | 2020-05-15 | 2021-12-03 | Beijing Institute of Mechanical Equipment | Mobile robot following method and system |
CN113741550B (en) * | 2020-05-15 | 2024-02-02 | Beijing Institute of Mechanical Equipment | Mobile robot following method and system |
CN111820822A (en) * | 2020-07-30 | 2020-10-27 | Ruizhu Technology Co., Ltd. | Sweeping robot, illumination method thereof, and computer-readable storage medium |
CN112346460A (en) * | 2020-11-05 | 2021-02-09 | Quanzhou Institute of Equipment Manufacturing | Automatic following method of mobile robot suitable for multi-person scenes |
CN112346460B (en) * | 2020-11-05 | 2022-08-09 | Quanzhou Institute of Equipment Manufacturing | Automatic following method of mobile robot suitable for multi-person scenes |
CN113885519A (en) * | 2021-10-27 | 2022-01-04 | Beijing Xiaoqiao Robot Technology Development Co., Ltd. | Method for controlling robot to follow automatically |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105955251A (en) | Vision following control method of robot and robot | |
Leigh et al. | Person tracking and following with 2d laser scanners | |
US9201425B2 (en) | Human-tracking method and robot apparatus for performing the same | |
Tölgyessy et al. | Foundations of visual linear human–robot interaction via pointing gesture navigation | |
WO2017166767A1 (en) | Information processing method, mobile device, and computer storage medium | |
Tsalatsanis et al. | Vision based target tracking and collision avoidance for mobile robots | |
JP6825715B2 (en) | Mobile vehicle | |
CN108170166A (en) | Robot following control method and intelligent apparatus | |
CN109164802A (en) | Robot maze traversal method, device, and robot | |
CN114326732A (en) | Robot autonomous following system and autonomous following control method | |
KR20220047637A (en) | Interactive attraction system and method for associating an object with a user | |
Oh et al. | A system for traded control teleoperation of manipulation tasks using intent prediction from hand gestures | |
Vincze et al. | Edge-projected integration of image and model cues for robust model-based object tracking | |
Jia et al. | Recent developments in vision based target tracking for autonomous vehicles navigation | |
Zhang et al. | An intelligent wheelchair based on automated navigation and BCI techniques | |
CN111673729A (en) | Path determining method | |
Do Hoang et al. | The reliable recovery mechanism for person-following robot in case of missing target | |
Nguyen et al. | Gesture Recognition Model with Multi-Tracking Capture System for Human-Robot Interaction | |
Ali et al. | Human tracking by a mobile robot using 3D features | |
Takaoka et al. | 3D map building for a humanoid robot by using visual odometry | |
Huber et al. | A behavior-based approach to active stereo vision for mobile robots | |
KR102426744B1 (en) | Operation method of the automatic driving following mobility utilizing the TOF camera and machine learning algorithm | |
Wang et al. | Agv navigation based on apriltags2 auxiliary positioning | |
Majer et al. | A precise teach and repeat visual navigation system based on the convergence theorem | |
Song et al. | Autonomous obstacle avoidance scheme using monocular vision applied to mobile robots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
DD01 | Delivery of document by public notice | Addressee: Beijing Krund Artificial Intelligence Technology Co., Ltd. Document name: First Notification of an Office Action |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20160921 |