CN109352654A - ROS-based intelligent robot following system and method
- Publication number
- CN109352654A (application CN201811404095.8A)
- Authority
- CN
- China
- Prior art keywords
- target
- binocular camera
- intelligent robot
- omni
- binocular
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Multimedia (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an ROS-based intelligent robot following system and method. The system comprises a binocular camera, a personal computer and an omnidirectional mobile platform, wherein the personal computer and the binocular camera are mounted on the omnidirectional mobile platform. The binocular camera captures the target to be followed; the personal computer processes the data acquired by the binocular camera; and the omnidirectional mobile platform receives the motion control signals processed by the personal computer and responds with the corresponding motion. The method comprises a target tracking algorithm that fuses the KCF algorithm with SGBM binocular ranging under the ROS system, together with a following motion control method. Through the steps of capturing target image information, image data processing, data communication and motion control signal processing, the invention makes full use of the characteristics of the ROS system, fuses the KCF algorithm with binocular ranging and combines them with chassis motion control, thereby providing an intelligent robot following system and method with good following performance, which can follow the target in real time and prevent the target from being lost.
Description
Technical field
The invention belongs to the technical field of intelligent mobile robot applications, and in particular relates to an ROS-based intelligent robot following system and method.
Background art
Although robots have a history of less than 50 years, they have already penetrated every aspect of human life and play an important role in many fields. Robotics is also a highly interdisciplinary frontier, involving mechanics, biology, electronic engineering, control theory and control engineering, computer science, artificial intelligence, social science and many other disciplines; it is a synthesis of many frontier technologies of today's world.
The era of artificial intelligence has quietly arrived, and intelligent robots play an increasingly important role in daily life. A complete intelligent robot should possess three elements: sensing elements, for perceiving the surrounding environment; actuating elements, for responding to the outside world; and thinking elements, for deciding what actions to take based on the information obtained by the sensing elements. An intelligent robot is therefore an integrated intelligent system that combines environment perception, dynamic decision-making and planning, behaviour control and other functions. With the various sensors it carries, such as vision, hearing, touch and smell, it can flexibly adjust its working state according to the feedback from these sensors, complete various tasks in different scenarios and achieve various goals.
As new-generation information technologies such as big data, cloud computing and the mobile Internet merge with intelligent robot technology at an accelerating pace, it is widely predicted that the "robot revolution" will open up a multi-billion-dollar market. The progress and application of robotics research are among the most convincing achievements of automatic control in the 21st century and represent automation of the highest contemporary significance. Clearly, intelligent robots will play an increasingly important role in industry, the service sector, the military, aerospace and other fields; in current industrial manufacturing in particular, robotics has already achieved great success. Robots have developed rapidly in recent years and have been applied in public places to serve people. Among them, following robots, which serve a designated target, are a popular research field: a following robot can help people complete various assigned tasks on many occasions, for example carrying luggage for people at airports or supermarkets, or moving goods in warehouses. The mainstream following technologies at present are following based on image processing, following based on infrared and following based on ultrasound. Vision-based following is currently the most popular direction; however, it involves many technical difficulties, is highly comprehensive and complex, and is difficult to realize, so a simple and stable solution to this problem is urgently needed.
Summary of the invention
To solve the above problems, the invention proposes an ROS-based intelligent robot following system and method.
The technical solution adopted by the system of the invention is as follows:
An ROS-based intelligent robot following system, characterized in that it comprises three main hardware modules: a binocular camera, a computer controller and an omnidirectional mobile platform, wherein the computer controller and the binocular camera are mounted on the omnidirectional mobile platform. The binocular camera is mainly used to capture the target to be followed and to acquire target image information. The computer controller runs the ROS system; it receives the target image information transmitted by the binocular camera and calculates motion control signals from the target image data. The omnidirectional mobile platform receives the motion control signals issued by the computer controller and makes the corresponding movement to follow the target.
As an improvement, the binocular camera is a binocular camera whose intrinsic and extrinsic parameters have been calibrated.
As an improvement, the omnidirectional mobile platform comprises motors, drivers, a battery, wheels and a control board; the motors are DC motors, the battery is a DC power supply matched to the motors, the wheels are Mecanum wheels, and the control board is an Arduino-series microcontroller board which receives the motion control signals issued by the computer controller and controls the Mecanum wheels to move and follow the target.
An ROS-based intelligent robot following method, characterized by comprising the following steps:
Step 1: acquiring image information;
Step 2: selecting the target to follow;
Step 3: calculating the target position information;
Step 4: computing the motion control signal;
Step 5: publishing the computed result in the ROS network; the omnidirectional mobile platform of the intelligent robot moves according to the computed result to keep following the target.
As an improvement, in step 1, the image information is acquired with the binocular camera: the left and right cameras of the binocular camera acquire image information separately, that is, each of the two cameras acquires the picture within its own detection range.
As an improvement, in step 2, the target selection is performed after the image information has been acquired: the picture acquired by a single camera is taken as the reference image, the target to be followed is selected from the image acquired by that camera, and the target to be followed is marked.
As an improvement, the selection of the target to follow is specifically implemented by the following sub-steps:
Step 2.1: determining the target to be followed in the image acquired by a single camera;
Step 2.2: selecting mark points according to the position of the target in the image;
Step 2.3: drawing a rectangular frame containing the target with the mark points as its corner points, so that the target is explicitly marked; following the target is then equivalent to following the rectangular frame.
As an improvement, in step 3, the target position information calculation mainly obtains, after the target has been selected, the position of the target relative to the binocular camera, and is specifically divided into the following three sub-steps:
Step 3.1: tracking the position of the target in the image with the KCF algorithm to obtain the target pixel coordinates; here the pixel coordinates of the geometric center of the rectangular frame are taken as the equivalent pixel coordinates of the target;
Step 3.2: obtaining the target depth information, i.e. the distance between the target and the binocular camera, with the SGBM algorithm; here the average of the depth values of several pixels within the rectangular frame region is taken as the target depth information;
Step 3.3: combining the equivalent pixel coordinates and the target depth information of the two preceding steps to calculate the three-dimensional coordinates of the target in the world coordinate system, which are the position of the target relative to the binocular camera.
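As a sketch, assuming a standard pinhole stereo model with focal length f, principal point (c_x, c_y) and baseline B (these symbols are not defined in the original text and are introduced here only for illustration), the depth Z obtained from the SGBM disparity d and the reprojection of the equivalent pixel coordinate (u, v) in steps 3.1 to 3.3 can be written as:

```latex
Z = \frac{f\,B}{d}, \qquad
X = \frac{(u - c_x)\,Z}{f}, \qquad
Y = \frac{(v - c_y)\,Z}{f}
```

The resulting (X, Y, Z) is the position of the target relative to the binocular camera that is used in step 4.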
As an improvement, in step 4, the motion control signal is computed after the position information of the target has been calculated: according to the position of the target in the image and its position in the world coordinate system, the motion control signal of the omnidirectional mobile platform is calculated so as to adjust the distance and bearing of the binocular camera, thereby ensuring that the target is followed and preventing the target from being lost.
As an improvement, the motion control signal computation specifically comprises the following three sub-steps:
Step 4.1: calculating the deflection angle of the binocular camera according to the position of the target in the image and a set reference position, so as to ensure that the target is at the center of the image after the deflection;
Step 4.2: after the deflection angle has been determined, calculating the moving distance of the binocular camera according to the position of the target in the world coordinate system and a set reference position, i.e. how far to move forwards or backwards so that the distance between the target and the binocular camera equals a preset value;
Step 4.3: after the deflection angle and the moving distance have been determined, calculating the motion control signal of the omnidirectional mobile platform so that the omnidirectional mobile platform can adjust the pose of the binocular camera and thereby follow the target.
Compared with the prior art, the advantages and beneficial effects of the present invention are mainly reflected in the following aspects:
(1) The present invention places no special requirements on the binocular camera or the personal computer; conventional, standard models and performance meet the requirements;
(2) The present invention has certain requirements on the model and performance of the omnidirectional mobile platform, but these fully conform to the current mainstream;
(3) The characteristics of the ROS distributed system are fully exploited: the functions of the various parts are modularized into nodes, so that the structure of the whole system is clear, concise and easy to implement. The binocular camera is responsible for image acquisition, the personal computer for data processing and the omnidirectional mobile chassis for motion control; the parts do not interfere with each other, they communicate through the ROS network, and a single Master node coordinates the whole system;
(4) The fusion of the KCF algorithm and the SGBM binocular ranging algorithm is realized under the ROS system, which overcomes the real-time problems of visual ranging algorithms, so that target following can be realized stably.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the intelligent robot following system in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the ROS distributed network in the embodiment of the present invention.
Fig. 3-1 to Fig. 3-3 are hardware circuit diagrams of the omnidirectional mobile platform in the embodiment of the present invention, in which Fig. 3-1 is a schematic diagram of the power supply and power interface, Fig. 3-2 is a schematic diagram of the driver, and Fig. 3-3 is a schematic diagram of the switch and the MCU.
Fig. 4 is a flowchart of the following method in the embodiment of the present invention.
Fig. 5 is a schematic diagram of the software implementation of the following motion in the embodiment of the present invention.
Fig. 6 is a schematic diagram of the motion computation of the omnidirectional mobile platform in the embodiment of the present invention.
Fig. 7 is a schematic diagram of the single-camera picture in the embodiment of the present invention.
Fig. 8 is a comparison diagram of the following effect when the intelligent robot following system of the present invention follows a target moving in a straight line.
Fig. 9 is a comparison diagram of the following effect when the intelligent robot following system of the present invention follows a target moving along a broken-line path.
Fig. 10 is a schematic diagram of the target motion path in Fig. 9.
Detailed description of the embodiments
In order to facilitate the understanding and implementation of the present invention by those of ordinary skill in the art, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are only intended to illustrate and explain the present invention and are not intended to limit it.
The ROS-based intelligent robot following system provided by the invention comprises a binocular camera, a personal computer and an omnidirectional mobile platform, wherein the personal computer and the binocular camera are mounted on the omnidirectional mobile platform. The binocular camera captures the target to be followed; the personal computer processes the image data acquired by the binocular camera and calculates the motion control signals; and the omnidirectional mobile platform receives the processed motion control signals from the personal computer and responds by adjusting the pose of the binocular camera, ensuring that the robot follows the target in real time and preventing the target from being lost. The method comprises a target tracking algorithm that fuses the KCF algorithm with SGBM binocular ranging under the ROS system, together with a following motion control method.
Referring to Fig. 1, the following system of the embodiment of the present invention comprises a binocular camera, a personal computer and an omnidirectional mobile platform, wherein the personal computer and the binocular camera are mounted on the omnidirectional mobile platform. The binocular camera captures the target to be followed; the personal computer processes the image data acquired by the binocular camera and calculates the motion control signals; and the omnidirectional mobile platform receives the processed motion control signals from the personal computer and responds by adjusting the pose of the binocular camera, ensuring that the robot follows the target in real time and preventing the target from being lost.
Referring to Fig. 2, the embodiment of the present invention uses an ROS distributed network and makes full use of its characteristics: the functions of the various parts are modularized into nodes, so that the structure of the whole system is clear, concise and easy to implement. The binocular camera is responsible for image acquisition, the personal computer for data processing, and the omnidirectional mobile chassis for motion control (adjusting the pose of the binocular camera); the parts do not interfere with each other, they communicate with one another through the ROS network, and a single Master node coordinates the whole system.
Referring to Fig. 3-1 to Fig. 3-3, the embodiment of the present invention uses the omnidirectional mobile platform as the motion controller; it comprises motors, drivers, a battery, wheels and a control board. The motors should be 12 V/24 V DC motors; there is no special requirement on the drivers; the battery should be a 12 V/24 V DC power supply matched to the motors; the wheels should be Mecanum wheels; and the control board should be an Arduino-series microcontroller board.
Referring to Fig. 4, the embodiment of the present invention uses an ROS-based intelligent robot following method comprising the following steps:
Step 1: acquiring image information. The left and right cameras of the binocular camera acquire image information separately, that is, each of the two cameras acquires the picture within its own detection range.
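A minimal C++ sketch of step 1 using OpenCV, assuming the left and right cameras of the binocular rig appear as video devices 0 and 1 (the device indices and window names are illustrative only):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Open the left and right cameras of the binocular camera
    // (the device indices depend on the actual hardware).
    cv::VideoCapture capLeft(0), capRight(1);
    if (!capLeft.isOpened() || !capRight.isOpened()) return -1;

    cv::Mat left, right;
    while (capLeft.read(left) && capRight.read(right)) {
        // Each camera delivers the picture within its own detection range;
        // the left picture is later used as the reference image (step 2).
        cv::imshow("left", left);
        cv::imshow("right", right);
        if (cv::waitKey(1) == 27) break;   // press ESC to stop acquisition
    }
    return 0;
}
```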
Step 2: selecting the target to follow. After the image information has been acquired, the picture acquired by the left camera is taken as the reference image and the target to be followed is selected from this image, that is, the target to be followed is circled on the image with an adjustable rectangular frame. The specific implementation comprises the following three sub-steps:
Step 2.1: determining the target to be followed in the picture;
Step 2.2: selecting suitable mark points according to the position of the target in the picture, namely mark points distributed around the target;
Step 2.3: drawing a rectangular frame containing the target with the mark points as its corner points, so that the target is explicitly marked; following the target is thereby reduced to following the rectangular frame.
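A minimal sketch of this interactive selection using OpenCV's built-in ROI selector, assuming the left (reference) picture is available as a cv::Mat (the window name is illustrative):

```cpp
#include <opencv2/opencv.hpp>

// Let the operator circle the target in the reference image with an adjustable
// rectangle; following the target then reduces to following this rectangle.
cv::Rect selectTarget(const cv::Mat& leftFrame) {
    // selectROI opens an interactive window: drag a rectangle around the
    // target, then press ENTER or SPACE to confirm the selection.
    cv::Rect box = cv::selectROI("select target", leftFrame,
                                 /*showCrosshair=*/true, /*fromCenter=*/false);
    cv::destroyWindow("select target");
    return box;
}
```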
Step 3: calculating the target position information. After the target to follow has been selected in step 2 and reduced to the chosen rectangular frame, the position information of the target can be calculated. This is specifically divided into the following three sub-steps:
Step 3.1: tracking the position of the target in the picture (the left camera picture) with the KCF algorithm to obtain the target pixel coordinates; here the pixel coordinates of the geometric center of the rectangular frame are used as the equivalent pixel coordinates of the target.
Step 3.2: obtaining the target depth information (in the left camera picture) with the SGBM algorithm, i.e. the distance between the target and the binocular camera. For the depth information, the present embodiment takes the average of the depth values at the four points located at the one-third and two-thirds positions of the long sides of the rectangular frame and at the center point; averaging in this way improves the ranging accuracy and confidence.
Step 3.3: combining the equivalent pixel coordinates and the target distance information of the two preceding steps to calculate the three-dimensional coordinates of the target in the world coordinate system, which are the position of the target relative to the binocular camera.
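A condensed C++ sketch of steps 3.1 to 3.3 using OpenCV's TrackerKCF (tracking contrib module) and StereoSGBM; the intrinsics, baseline and SGBM parameters are placeholders rather than values given in this specification, the five sample points only approximate the averaging described above, and the Tracker signatures shown are the OpenCV 4.5+ form (older versions use cv::Rect2d):

```cpp
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>   // TrackerKCF (opencv_contrib tracking module)

// Assumed calibration values; in the real system they come from the
// calibrated binocular camera, not from this sketch.
const double fx = 700.0, cx = 320.0, cy = 240.0;   // pinhole intrinsics (pixels)
const double baseline = 0.06;                      // stereo baseline (metres)

// KCF tracker: tracker->init(firstLeftFrame, initialBox) must have been called
// with the rectangle chosen in step 2 before locateTarget() is used.
cv::Ptr<cv::TrackerKCF> tracker = cv::TrackerKCF::create();
// SGBM matcher with typical default parameters.
cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 96, 9);

// Computes the target position (X, Y, Z) relative to the camera.
// Returns false if the tracker loses the target or no valid disparity is found.
bool locateTarget(const cv::Mat& left, const cv::Mat& right,
                  cv::Rect& box, cv::Point3d& target) {
    if (!tracker->update(left, box)) return false;   // step 3.1: KCF tracking

    cv::Mat grayL, grayR, disp;
    cv::cvtColor(left,  grayL, cv::COLOR_BGR2GRAY);
    cv::cvtColor(right, grayR, cv::COLOR_BGR2GRAY);
    sgbm->compute(grayL, grayR, disp);               // CV_16S, 16 * disparity

    // Step 3.2: average the depth at the centre and four interior points of
    // the rectangle (in the spirit of the five-point averaging above).
    std::vector<cv::Point> pts = {
        {box.x + box.width / 2, box.y + box.height / 2},
        {box.x + box.width / 3, box.y + box.height / 3},
        {box.x + 2 * box.width / 3, box.y + box.height / 3},
        {box.x + box.width / 3, box.y + 2 * box.height / 3},
        {box.x + 2 * box.width / 3, box.y + 2 * box.height / 3}};
    double zSum = 0.0; int n = 0;
    for (const auto& p : pts) {
        double d = disp.at<short>(p) / 16.0;         // true disparity in pixels
        if (d > 0) { zSum += fx * baseline / d; ++n; }
    }
    if (n == 0) return false;

    // Step 3.3: reproject the equivalent pixel coordinate (rectangle centre);
    // fy is assumed equal to fx for simplicity.
    double Z = zSum / n;
    double u = box.x + box.width / 2.0, v = box.y + box.height / 2.0;
    target = cv::Point3d((u - cx) * Z / fx, (v - cy) * Z / fx, Z);
    return true;
}
```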
Step 4: computing the motion control signal. After the position information of the target has been calculated, the motion control signal of the omnidirectional mobile platform is calculated from the position of the target in the picture (the left camera picture) and its position in the world coordinate system, so as to adjust the pose of the binocular camera, thereby ensuring that the target is followed and preventing it from being lost. This specifically comprises the following three sub-steps:
Step 4.1: calculating the deflection angle of the camera from the position of the target in the picture and the set reference position (the target is always to be kept at the center of the picture), i.e. by how many degrees the camera must rotate so that the target is at the center of the picture.
Step 4.2: after the deflection angle has been determined, calculating the moving distance of the binocular camera from the position of the target in the world coordinate system and the set reference position (for example, the distance between the target and the binocular camera is always to be kept at 1 meter), i.e. how many meters it must move forwards or backwards so that the distance between the target and the binocular camera is 1 meter.
Step 4.3: after the deflection angle and the moving distance have been determined, calculating the motion control signal of the omnidirectional mobile platform so that it can adjust the pose of the binocular camera and thereby follow the target.
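A small sketch of how steps 4.1 to 4.3 could be expressed in code, assuming the pinhole intrinsics from the previous sketch, the 1 m reference distance given above as an example, illustrative proportional gains and a geometry_msgs/Twist message as the form of the motion control signal:

```cpp
#include <cmath>
#include <geometry_msgs/Twist.h>

// u: horizontal pixel coordinate of the target (equivalent pixel coordinate).
// Z: distance between the target and the binocular camera in metres.
geometry_msgs::Twist computeCommand(double u, double Z) {
    const double fx = 700.0, cx = 320.0;   // assumed left-camera intrinsics
    const double refDistance = 1.0;        // keep the target 1 m away (step 4.2)
    const double kAng = 1.5, kLin = 0.8;   // hypothetical proportional gains

    // Step 4.1: deflection angle needed to bring the target back to the centre.
    double deflection = std::atan2(u - cx, fx);
    // Step 4.2: distance still to be covered forwards (+) or backwards (-).
    double moveDistance = Z - refDistance;

    // Step 4.3: compose the motion control signal for the mobile platform.
    geometry_msgs::Twist cmd;
    cmd.angular.z = -kAng * deflection;    // rotate so that the target is centred
    cmd.linear.x  =  kLin * moveDistance;  // close the gap to the preset distance
    return cmd;
}
```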
Step 5: publishing the computed result in the ROS network. After the motion control signal of the omnidirectional mobile platform has been calculated, it must be passed to the omnidirectional mobile platform so that the platform makes the corresponding response, i.e. changes the pose of the binocular camera. The information transfer to the mobile platform is realized through the ROS network: the calculated result is published into the ROS network, the chassis subscribes to the message to obtain the information and performs the corresponding action, and target following is thereby realized.
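A minimal roscpp sketch of step 5, assuming an illustrative node name "follower" and topic name "/cmd_vel" to which the chassis subscribes; in the full system the published message would come from the computation of step 4 rather than the placeholder values used here:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "follower");            // illustrative node name
    ros::NodeHandle nh;
    // The chassis subscribes to the same topic to receive the control signal.
    ros::Publisher cmdPub = nh.advertise<geometry_msgs::Twist>("/cmd_vel", 10);

    ros::Rate rate(20);                           // publish at 20 Hz
    while (ros::ok()) {
        geometry_msgs::Twist cmd;                 // in the real system: result of step 4
        cmd.linear.x  = 0.2;                      // placeholder forward speed (m/s)
        cmd.angular.z = 0.0;                      // placeholder yaw rate (rad/s)
        cmdPub.publish(cmd);                      // published into the ROS network
        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
```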
Referring to Fig. 5, the following motion in the embodiment of the present invention is realized by the omnidirectional mobile platform, whose hardware composition is illustrated in Fig. 3. The omnidirectional mobile platform is also a part of the ROS network: first, the personal computer calculates the motion control signal from the image information and publishes it into the ROS network, and the chassis subscribes to the message to obtain the control signal; then the Arduino-series microcontroller board in the omnidirectional mobile platform receives the signal, performs velocity synthesis and decomposition and converts it into four PWM signals that are output to the drivers; finally, the drivers drive the motors and the following motion is completed.
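A sketch of the velocity decomposition on the control board side, assuming standard Mecanum-wheel inverse kinematics, illustrative wheel geometry and an 8-bit PWM range; the wheel numbering, sign convention and scaling depend on how the wheels and drivers are actually mounted:

```cpp
#include <array>
#include <cmath>

// Decompose the body velocity (vx forward, vy sideways, wz yaw rate) of a
// four-Mecanum-wheel platform into wheel angular speeds (rad/s).
// r is the wheel radius; L and W are the half length and half width of the
// wheel base. All numeric values here are assumptions of this sketch.
std::array<double, 4> mecanumDecompose(double vx, double vy, double wz) {
    const double r = 0.05, L = 0.15, W = 0.20;    // hypothetical geometry (m)
    const double k = L + W;
    return {
        (vx - vy - k * wz) / r,   // front-left wheel
        (vx + vy + k * wz) / r,   // front-right wheel
        (vx + vy - k * wz) / r,   // rear-left wheel
        (vx - vy + k * wz) / r    // rear-right wheel
    };
}

// Scale a wheel speed into an 8-bit PWM value for the corresponding driver
// channel (the maximum wheel speed wMax is an assumed limit).
int speedToPwm(double w, double wMax = 20.0) {
    double duty = std::fmin(std::fabs(w) / wMax, 1.0);
    return static_cast<int>(duty * 255);
}
```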
Referring to Fig. 7, the embodiment of the present invention generates a visualization interface, programmed in C++, while following the target; the interface can display the tracked target and its position information in real time.
Referring to Fig. 8 and Fig. 9, the embodiment of the present invention achieves an ideal following effect; the feasibility and effectiveness of the following system and method were verified in real scenarios by two experiments in which the target moved along a straight line and along a broken-line path.
The present invention can provide the user with the following:
(1) With a certain amount of specialist knowledge and the necessary equipment, the user can quickly and simply build a usable following robot;
(2) The present invention uses the ROS system on the robot, and all of the robot code is written and integrated as modules within it; since ROS is an easily portable system, the system of the invention can also be quickly ported to other robots of the same type;
(3) The present invention uses vision-based following technology, which is currently a very active direction, and therefore has further research and development value and a clear development prospect;
(4) With the following technology of the present invention, the robot can carry things for people, assist with work and provide other kinds of functionality, offering the user efficient help in daily life;
(5) The present invention realizes visualization of the following effect: through the host computer mounted on the robot or a remote host computer on the local area network, the user can monitor the target-following situation in real time.
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should be understood that the above description of the preferred embodiments is relatively detailed and therefore should not be regarded as limiting the scope of patent protection of the present invention. Those skilled in the art, under the inspiration of the present invention and without departing from the scope protected by the claims of the present invention, may also make substitutions or modifications, all of which fall within the protection scope of the present invention; the claimed scope of the present invention shall be determined by the appended claims.
Claims (10)
1. An ROS-based intelligent robot following system, characterized in that: it comprises three main hardware modules, namely a binocular camera, a computer controller and an omnidirectional mobile platform, wherein the computer controller and the binocular camera are mounted on the omnidirectional mobile platform; the binocular camera is mainly used to capture the target to be followed and acquire target image information; the computer controller runs the ROS system, receives the target image information transmitted by the binocular camera and calculates motion control signals from the target image data; and the omnidirectional mobile platform receives the motion control signals issued by the computer controller and makes the corresponding movement to follow the target.
2. The ROS-based intelligent robot following system according to claim 1, characterized in that: the binocular camera is a binocular camera whose intrinsic and extrinsic parameters have been calibrated.
3. The ROS-based intelligent robot following system according to claim 1, characterized in that: the omnidirectional mobile platform comprises motors, drivers, a battery, wheels and a control board; the motors are DC motors, the battery is a DC power supply matched to the motors, the wheels are Mecanum wheels, and the control board is an Arduino-series microcontroller board which receives the motion control signals issued by the computer controller and controls the Mecanum wheels to move and follow the target.
4. An ROS-based intelligent robot following method, characterized by comprising the following steps:
Step 1: acquiring image information;
Step 2: selecting the target to follow;
Step 3: calculating the target position information;
Step 4: computing the motion control signal;
Step 5: publishing the computed result in the ROS network, the omnidirectional mobile platform of the intelligent robot moving according to the computed result to keep following the target.
5. The intelligent robot following method according to claim 4, characterized in that: in step 1, the image information is acquired with the binocular camera, the left and right cameras of the binocular camera acquiring image information separately, that is, each of the two cameras acquiring the picture within its own detection range.
6. The intelligent robot following method according to claim 5, characterized in that: in step 2, after the image information has been acquired, the picture acquired by a single camera is taken as the reference image, the target to be followed is selected from the image acquired by that camera, and the target to be followed is marked.
7. The intelligent robot following method according to claim 6, characterized in that the selection of the target to follow is specifically implemented by the following sub-steps:
Step 2.1: determining the target to be followed in the image acquired by a single camera;
Step 2.2: selecting mark points according to the position of the target in the image;
Step 2.3: drawing a rectangular frame containing the target with the mark points as its corner points, so that the target is explicitly marked, following the target then being equivalent to following the rectangular frame.
8. The intelligent robot following method according to claim 4, characterized in that: in step 3, the target position information calculation mainly obtains, after the target has been selected, the position of the target relative to the binocular camera, and is specifically divided into the following three sub-steps:
Step 3.1: tracking the position of the target in the image with the KCF algorithm to obtain the target pixel coordinates, the pixel coordinates of the geometric center of the rectangular frame being used as the equivalent pixel coordinates of the target;
Step 3.2: obtaining the target depth information, i.e. the distance between the target and the binocular camera, with the SGBM algorithm, the average of the depth values of several pixels within the rectangular frame region being used as the target depth information;
Step 3.3: combining the equivalent pixel coordinates and the target depth information of the two preceding steps to calculate the three-dimensional coordinates of the target in the world coordinate system, which are the position of the target relative to the binocular camera.
9. The intelligent robot following method according to claim 4, characterized in that: in step 4, the motion control signal is computed after the position information of the target has been calculated: according to the position of the target in the image and its position in the world coordinate system, the motion control signal of the omnidirectional mobile platform is calculated so as to adjust the distance and bearing of the binocular camera, thereby ensuring that the target is followed and preventing the target from being lost.
10. The intelligent robot following method according to claim 9, characterized in that the motion control signal computation specifically comprises the following three sub-steps:
Step 4.1: calculating the deflection angle of the binocular camera according to the position of the target in the image and a set reference position, so as to ensure that the target is at the center of the image after the deflection;
Step 4.2: after the deflection angle has been determined, calculating the moving distance of the binocular camera according to the position of the target in the world coordinate system and a set reference position, i.e. how far to move forwards or backwards so that the distance between the target and the binocular camera equals a preset value;
Step 4.3: after the deflection angle and the moving distance have been determined, calculating the motion control signal of the omnidirectional mobile platform so that the omnidirectional mobile platform can adjust the pose of the binocular camera and thereby follow the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811404095.8A CN109352654A (en) | 2018-11-23 | 2018-11-23 | A kind of intelligent robot system for tracking and method based on ROS |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811404095.8A CN109352654A (en) | 2018-11-23 | 2018-11-23 | A kind of intelligent robot system for tracking and method based on ROS |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109352654A true CN109352654A (en) | 2019-02-19 |
Family
ID=65338500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811404095.8A Pending CN109352654A (en) | 2018-11-23 | 2018-11-23 | A kind of intelligent robot system for tracking and method based on ROS |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109352654A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110024086A (en) * | 2009-09-01 | 2011-03-09 | 한국전자통신연구원 | Method for transferring/acquiring operating right of moving robot in a multi-operator multi-robot environment and system therefor |
CN106485735A (en) * | 2015-09-01 | 2017-03-08 | 南京理工大学 | Human body target recognition and tracking method based on stereovision technique |
CN107030693A (en) * | 2016-12-09 | 2017-08-11 | 南京理工大学 | A kind of hot line robot method for tracking target based on binocular vision |
CN107818587A (en) * | 2017-10-26 | 2018-03-20 | 吴铁成 | A kind of machine vision high-precision locating method based on ROS |
CN108646759A (en) * | 2018-07-09 | 2018-10-12 | 武汉科技大学 | Intelligent dismountable moving robot system based on stereoscopic vision and control method |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028267A (en) * | 2019-12-25 | 2020-04-17 | 郑州大学 | Monocular vision following system and following method for mobile robot |
CN111028267B (en) * | 2019-12-25 | 2023-04-28 | 郑州大学 | Monocular vision following system and method for mobile robot |
CN111070180A (en) * | 2019-12-30 | 2020-04-28 | 上海海事大学 | Post-disaster rescue channel detection robot based on ROS |
CN111897997A (en) * | 2020-06-15 | 2020-11-06 | 济南浪潮高新科技投资发展有限公司 | Data processing method and system based on ROS operating system |
CN112223278A (en) * | 2020-09-09 | 2021-01-15 | 山东省科学院自动化研究所 | Detection robot following method and system based on depth visual information |
CN112738022A (en) * | 2020-12-07 | 2021-04-30 | 浙江工业大学 | Attack method for ROS message of robot operating system |
CN112738022B (en) * | 2020-12-07 | 2022-05-03 | 浙江工业大学 | Attack method for ROS message of robot operating system |
CN112936276A (en) * | 2021-02-05 | 2021-06-11 | 华南理工大学 | ROS system-based humanoid robot joint multistage control device and method |
CN112936276B (en) * | 2021-02-05 | 2023-07-18 | 华南理工大学 | Multi-stage control device and method for joint of humanoid robot based on ROS system |
CN113381667A (en) * | 2021-06-25 | 2021-09-10 | 哈尔滨工业大学 | Seedling searching walking system and method based on ROS and image processing |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190219