CN102902271A - Binocular vision-based robot target identifying and gripping system and method - Google Patents


Info

Publication number: CN102902271A
Application number: CN2012104056933A
Authority: CN (China)
Prior art keywords: target object, robot, coordinate, binocular, target
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 晁衍凯, 徐昱琳, 周勇飞, 吕晓梦, 王明
Current assignee: University of Shanghai for Science and Technology (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: University of Shanghai for Science and Technology
Application filed by University of Shanghai for Science and Technology
Priority to CN2012104056933A
Publication of CN102902271A


Abstract

The invention discloses a binocular vision-based robot target identification and grasping system and method. The system comprises a binocular image acquisition module, an RFID (Radio Frequency Identification) transceiver module, a chassis movement module, an obstacle avoidance module and a mechanical arm control module. The method comprises the following steps: (1) calibrating the binocular camera; (2) establishing and converting coordinate systems; (3) identifying and locating the target; (4) navigating autonomously and avoiding obstacles; and (5) controlling the mechanical arms to grasp the target object. The invention lets the robot interact intelligently with its environment and enhances its intelligent grasping capability. Beyond identifying and locating the target object, a navigation database is established so that, once the target has been positioned, the robot can autonomously reach the area where the object is located; by controlling human-like mechanical arms with a specific anthropomorphic grasping path, the robot's service becomes more intelligent and human-friendly.

Description

Robot target identification and grasping system and method based on binocular vision
Technical field
The invention belongs to the field of robotics, and specifically relates to a binocular vision-based robot target identification and grasping system and method.
Background technology
Since the 1960s, robot applications have been a research focus of universities, enterprises and institutions. In the early stage of development, robots mainly worked in structured environments, helping people finish dangerous or highly repetitive simple work according to fixed patterns. With technological progress and the growing needs of daily life, however, robots face the challenges of unstructured and more complex environments. As computer and control technology have developed, the intelligence of robots has risen: they can independently finish some work in unstructured environments, identify a target object, reach the area where it is located after positioning it, and grasp it.
Original service robots have relatively limited functions, cannot satisfy the application demands of unstructured home environments, and have certain technical deficiencies. First, their functions are limited and they can often only finish part of a task. Second, they cannot reach a target area by autonomous navigation and often need a human-set path. Third, they do not use mechanical arms with many degrees of freedom, their control algorithms are simple, and they cannot finish relatively complex actions.
Summary of the invention
In view of the defects of the prior art, the purpose of the present invention is to provide a binocular vision-based service robot target identification and grasping system and method that let the robot provide more intelligent service in an unstructured environment. The robot of the present invention interacts intelligently with the external environment through vision, radio-frequency and ultrasonic sensors; it can identify and locate a target object, reach the position of the target autonomously via the navigation system, and control the mechanical arm to finish the task of grasping the target object.
To achieve the above purpose, the technical scheme of the present invention is:
A binocular vision-based service robot target identification and grasping system, characterized in that the service robot control platform connects a binocular image acquisition module, an RFID (Radio Frequency Identification) transceiver module, a chassis movement module, an obstacle avoidance module and a mechanical arm control module.
The binocular image acquisition module is the binocular stereo vision system of this system. The RFID transceiver device uses passive radio-frequency tags that contain information such as the position and features of articles. The chassis movement module adopts two-wheel differential drive, with two color mark sensors installed at the bottom. The obstacle avoidance module adopts multiple ultrasonic and photoelectric sensors, installed at the robot's belly and at the lower skirt of the chassis respectively, to detect obstacles at different heights. The humanoid mechanical arm of the mechanical arm control module has 6+1 degrees of freedom and reaches a movement position through forward- and inverse-kinematics algorithms; the arm can realize actions such as grasping, dancing and dual-arm coordination.
A binocular vision-based service robot target identification and grasping method, using the above system, is characterized by the following operation steps:
1) Calibration of the binocular camera. The plane template method is used to calibrate the binocular camera and obtain its internal parameters, so that the coordinates of the target object in the image coordinate system can be converted into coordinates in a space coordinate system whose origin is the right eye of the binocular camera.
2) Establishment and conversion between the robot coordinate system and the target object coordinate system. A space coordinate system is set up with the robot's right shoulder as the origin, and the target object's coordinates in the right-eye coordinate system are transformed into coordinates in this system.
3) Target identification and location. The target object is identified by a color-based image segmentation method; after the two coordinate conversions above, the coordinates of the target object in the robot coordinate system are obtained.
4) Moving to the target object region. A spatial navigation system is set up with RFID, and the position and feature information of the target object are added to the navigation database. The robot locates the target object by searching the database, drives the chassis movement module to reach the target object region autonomously, and avoids obstacles with the obstacle avoidance module while walking.
5) Controlling the mechanical arm to grasp the target object. After arriving at the target object region, the robot adjusts its distance to the target so that the target's coordinates in the robot coordinate system become more accurate. For the gripper at the end of the mechanical arm to grasp the target, the gripper must reach the target's coordinates; the angles that each joint of the mechanical arm must rotate are obtained through the inverse solution of the arm.
In step 1), the binocular camera is calibrated with the plane template method. The concrete steps are as follows:
1. According to the parallel optical axis theory, set up the binocular stereo vision model and solve it mathematically to obtain the formula for computing the target object's coordinates in the world coordinate system.
2. Using an 8×8 plane chessboard template, finish the calibration of the binocular camera with the open-source image processing library OpenCV and obtain the internal parameters of the cameras.
3. According to the formula from step 1 and the internal parameters of the binocular camera obtained from calibration, convert the target object's coordinates in the image into coordinates in a world coordinate system whose origin is the right eye of the binocular camera.
The calibration result of the binocular camera is seriously affected by environmental factors such as light and temperature, so several calibrations must be averaged; before each test, the binocular camera must be calibrated again.
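Under the parallel-optical-axis model of step 1, depth follows from the disparity between the two images once the intrinsics are known. A minimal sketch of the resulting back-projection into the right-eye coordinate system; the focal length, baseline and principal-point values below are illustrative assumptions, not parameters from the patent:

```python
def triangulate(xl, yl, xr, f=800.0, baseline=60.0, cx=320.0, cy=240.0):
    """Back-project a matched pixel pair (left x, shared y, right x) into
    (X, Y, Z) in the frame of the right camera, assuming parallel optical
    axes, a focal length f in pixels and a baseline in millimetres."""
    d = xl - xr                # disparity; positive for points in front of the rig
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * baseline / d       # depth from similar triangles
    X = (xr - cx) * Z / f      # offset from the right camera's optical axis
    Y = (yl - cy) * Z / f
    return X, Y, Z
```

With these illustrative defaults, a 20-pixel disparity places the point 2.4 m in front of the right eye.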
In step 2), the robot coordinate system is established and the target object's coordinates are converted into it, obtaining the coordinates of the target object in the robot coordinate system in preparation for the grasping of the target object. The concrete steps are as follows:
1. According to the task characteristics, set up a standard space coordinate system with the robot's right shoulder as the origin and, combining the physical dimensions of the robot, obtain the coordinates of each component of the robot in this coordinate system.
2. Accurately measure the distance from the head axis to the right shoulder, the distance from the line between the two eyes to the line between the shoulders, and the distance from the right eye to the head axis.
3. Using the three physical dimensions obtained in step 2 and the positional relationship between the right-eye world coordinate system and the robot coordinate system, transform the target object's coordinates from the right-eye coordinate system into the robot coordinate system.
During target identification and grasping the robot's head may rotate or swing up and down as required; the relationship between the two coordinate systems must then be recomputed from the head's rotation and swing angles, and the target object's coordinates in the robot coordinate system recalculated from the new transformation. In this step the rotation and swing angles are obtained from two angle sensors installed in the head.
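The conversion above amounts to a rotation by the head's rotation and swing angles followed by a translation built from the three measured distances. A sketch under assumed axis conventions (vertical pan axis, lateral tilt axis); the offset vector is whatever the three measurements yield, passed in here as a parameter:

```python
import numpy as np

def rot_z(a):
    """Rotation about the vertical axis (head rotation/pan)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    """Rotation about the lateral axis (head swing up/down)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def eye_to_shoulder(p_eye, pan, tilt, eye_origin_in_shoulder):
    """Map a target coordinate from the right-eye frame into the
    right-shoulder (robot) frame. pan and tilt come from the two head
    angle sensors; eye_origin_in_shoulder is the right eye's position in
    the shoulder frame, derived from the three measured distances."""
    R = rot_z(pan) @ rot_x(tilt)
    return R @ np.asarray(p_eye, float) + np.asarray(eye_origin_in_shoulder, float)
```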
In step 3), the target object is identified and located: within the field of view of the binocular camera, the target object is identified by the color-based segmentation method and its coordinates in the robot space coordinate system are calculated, finishing the location of the object. The concrete steps are as follows:
1. Because pixels in the RGB (Red-Green-Blue) color space are seriously affected by factors such as the kind of light source and the illumination intensity, the present invention identifies the target object in the HSV (Hue-Saturation-Value) color space. Before the experiment, the HSV threshold information of the target object's color must be learned offline and recorded.
2. Conversion of color space: because the binocular camera adopted by the present invention outputs colors in the RGB color space, the pixels must be converted from the RGB color space to the HSV color space. The conversion from RGB to HSV effectively decouples chromaticity from brightness; the concrete conversion is shown in formula (1):
    V = max(R, G, B)
    S = (V − min(R, G, B)) / V            (S = 0 when V = 0)
    H = 60·(G − B) / (V − min(R, G, B))          when V = R
    H = 120 + 60·(B − R) / (V − min(R, G, B))    when V = G      (1)
    H = 240 + 60·(R − G) / (V − min(R, G, B))    when V = B
with R, G and B scaled to [0, 1] and H taken modulo 360.
In formula (1), H, S and V denote the hue, saturation and value components of the HSV pixel space, and R, G and B denote the red, green and blue components of the RGB pixel space.
3. Identification of the target object: after receiving the instruction to seek the target object, the robot continually collects images with the binocular camera and converts the RGB values of each frame's pixels into HSV values in real time, comparing them with the HSV thresholds of the target object recorded by offline learning. If the target object is in the frame, its contour is extracted; if not, the robot rotates its head and keeps seeking the target object.
4. Location of the target object: after the robot identifies the target object, it first obtains the target's contour in the image through image processing and calculates the coordinates of the contour's center in the image coordinate system. It then computes the target's coordinates in the space coordinate system with the binocular right eye as the origin, using the conversion formula obtained after camera calibration. Finally, the chassis movement module rotates the robot to face the target object, and the coordinates are converted into the robot coordinate system.
Because the identification of the target object is finished in the HSV color space, the influence of factors such as the object's color category and the ambient lighting intensity is reduced and the success rate of identification is improved. After the target is located, the next action is determined by the target's position in the robot space coordinate system: if the target is far away, the robot must reach the target object region autonomously via the RFID navigation system; if it is close, the robot can go directly to the target object and finish the grasping task.
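Formula (1) and the per-pixel threshold test can be sketched in plain Python; the threshold window below is illustrative (roughly a blue object), not a value learned in the patent's experiments:

```python
def rgb_to_hsv(r, g, b):
    """Formula (1): one pixel from RGB (0-255 per channel) to HSV,
    with H in degrees [0, 360) and S, V in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    v = max(r, g, b)
    delta = v - min(r, g, b)
    s = 0.0 if v == 0 else delta / v
    if delta == 0:
        h = 0.0                                  # achromatic: hue undefined, use 0
    elif v == r:
        h = (60.0 * (g - b) / delta) % 360.0
    elif v == g:
        h = 60.0 * (b - r) / delta + 120.0
    else:
        h = 60.0 * (r - g) / delta + 240.0
    return h, s, v

def matches_target(h, s, v, h_window=(220.0, 260.0), s_min=0.4, v_min=0.3):
    """Compare one converted pixel with the offline-learned HSV threshold
    window of the target object's colour (window values illustrative)."""
    return h_window[0] <= h <= h_window[1] and s >= s_min and v >= v_min
```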
In step 4), the robot reaches the target object region autonomously via the navigation system and avoids obstacles with the obstacle avoidance module while traveling. The concrete steps are as follows:
1. Install the RFID receiver at the bottom of the robot, lay passive RFID tags on the floor according to the room layout, assign a region of competence to each tag and establish the navigation system. Add the feature information of the objects in each tag's region to the navigation database, and establish the routing information for reaching the next tag position from each tag position.
2. After the robot finishes locating the target object, it looks up the region where the target is located in the navigation database according to the target's color feature information. Combining this with the region where the robot currently is, it records the tag IDs it must pass to reach the target region, i.e. the path the robot will travel.
3. According to the indication information of the tag in its current region, the robot drives the chassis movement module to reach the position of the next tag autonomously, and so on until it arrives at the tag of the target object region. While walking, it processes the information from the belly and lower-skirt sensors in real time to avoid obstacles on the path.
The present invention adopts the color image segmentation method, i.e. identification by color, so only the color feature information of objects is recorded in the navigation database. Because the room contains only isolated columnar obstacles, the road conditions are fairly simple: the robot only needs to perform three right-angle turns, following the direction of the next tag, to avoid an obstacle smoothly and return to the original path.
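The navigation database and routing of the steps above reduce to two lookup tables: one mapping each tag's region of competence to the colour features of the objects stored there, and one giving the next tag on the way to a goal. A toy sketch with invented tag IDs and colours:

```python
# Objects (by colour feature) known to lie in each tag's region of competence.
TAG_OBJECTS = {
    "tag_A": {"red", "yellow"},
    "tag_B": set(),
    "tag_C": {"blue"},
}

# Next hop toward a goal tag; in the patent this is the stored routing
# information between neighbouring tag positions.
NEXT_TAG = {
    ("tag_A", "tag_C"): "tag_B", ("tag_B", "tag_C"): "tag_C",
    ("tag_C", "tag_A"): "tag_B", ("tag_B", "tag_A"): "tag_A",
    ("tag_A", "tag_B"): "tag_B", ("tag_C", "tag_B"): "tag_B",
}

def find_region(colour):
    """Look up which tag region holds an object with this colour feature."""
    for tag, colours in TAG_OBJECTS.items():
        if colour in colours:
            return tag
    return None

def route(start, goal):
    """Hop from tag to tag until the goal region is reached."""
    path = [start]
    while path[-1] != goal:
        path.append(NEXT_TAG[(path[-1], goal)])
    return path
```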
In step 5), the mechanical arm is controlled to grasp the target object. After the robot locates the target object, it reaches the target's region autonomously via the navigation system and then controls the mechanical arm to finish the grasping task. The concrete steps are as follows:
1. Model the 6+1 degree-of-freedom mechanical arm of the robot with the D-H (Denavit-Hartenberg) method and obtain the inverse solution formula of the arm model, i.e. the angle each joint must rotate for the end of the arm to reach a specified position in space.
2. After the robot reaches the region of the target object via the navigation system, it must go through the process of step 3) again to re-identify the target object and calculate its coordinates in the robot coordinate system.
3. According to the calculated coordinates, drive the chassis movement module to walk the robot to the position of the target object. From the experience accumulated in experiments, the target's coordinates are most accurate when the distance between the robot and the target is within a certain range; the size threshold of the target contour in the image obtained at this distance is learned offline and recorded. When approaching the target, the robot keeps adjusting its distance until the contour size of the target in the image is within the threshold range, and then calculates the target's coordinates in the robot coordinate system.
4. For the robot to finish the grasping task, the coordinates of the gripper at the end of the mechanical arm must equal the coordinates of the target object. Substituting these coordinates into the inverse solution formula of the arm model yields the angle each joint must rotate for the gripper to finish the grasp. After the gripper reaches the target position, it is driven closed, finishing the grasping task.
Because there is no pressure sensor on the gripper, the robot can only grasp objects of a specified width under open-loop control. After the robot grasps the target object successfully, it can be instructed to carry out the next action as required, such as returning to the starting point or putting the target object at a specified place.
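The D-H modelling of step 1 builds the arm's forward map, whose inversion gives the joint angles. The patent does not publish the arm's actual D-H parameters, so the sketch below only shows the standard link transform and its chaining, with invented parameters for a one-link check:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link:
    joint angle theta, link offset d, link length a, link twist alpha."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, link_params):
    """Chain the link transforms; the last column of the result is the
    gripper position that the inverse solution must match to the target."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, link_params):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T
```

A numeric inverse solution would then search for the joint angles that make `forward_kinematics(...)[0:3, 3]` equal the target object's coordinates in the robot frame.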
Compared with the prior art, the present invention has the following obvious substantive features and notable technical progress:
The robot of the present invention interacts with the external environment through vision, radio-frequency and ultrasonic sensors; it can identify and locate a target object, reach the position of the target autonomously via the navigation system, and control the mechanical arm to finish the task of grasping the target object. For a specific robot, the parameters accumulated in experiments are used to revise the operation, so the robot finishes the grasping task more accurately. The humanoid mechanical arm is controlled with a specific anthropomorphic grasping path, making the robot's service more intelligent and human-friendly.
Description of drawings
Fig. 1 is a system architecture diagram of the present invention.
Fig. 2 is an external view of the robot of the present invention.
Fig. 3 is a program flow chart of the present invention.
Fig. 4 is a diagram of the imaging arrangement of the binocular camera of the present invention.
Fig. 5 is a D-H modeling structure diagram of the mechanical arm of the present invention.
Figs. 6 and 7 are diagrams of the experimental results of the present invention.
Embodiment
The preferred embodiments of the present invention are elaborated below in conjunction with the accompanying drawings:
Embodiment one:
As shown in Figure 1, the binocular vision-based service robot target identification and grasping system has a service robot control platform (1) that connects a binocular image acquisition module (2), an RFID transceiver module (3), a chassis movement module (6), an obstacle avoidance module (4) and a mechanical arm control module (5).
The binocular image acquisition module (2) is the binocular stereo vision system of this system. The RFID transceiver module (3) uses passive radio-frequency tags that contain information such as the position and features of articles. The chassis movement module (6) adopts two-wheel differential drive, with two color mark sensors installed at the bottom. The obstacle avoidance module (4) adopts multiple ultrasonic and photoelectric sensors, installed at the robot's belly and at the lower skirt of the chassis respectively, to detect obstacles at different heights. The humanoid mechanical arm of the mechanical arm control module (5) has 6+1 degrees of freedom and reaches a movement position through forward- and inverse-kinematics algorithms; the arm can realize actions such as grasping, dancing and dual-arm coordination.
As shown in Figure 2, the experimental platform robot of this example has a binocular vision camera, 3 front ultrasonic sensors, 2 side ultrasonic sensors, 7 chassis obstacle-avoidance sensors, 2 loudspeakers, 2 mechanical arms and 1 touch screen; the user can control the robot through the buttons of the human-machine interface. The user can attach an external microphone and talk with the robot directly, and can design the dialogue content. In addition, a remote control can be used for functions such as robot motion and the selection of information and entertainment.
Embodiment two:
As shown in Figure 3, the binocular vision-based service robot target identification and grasping method operates with the above system and is characterized by the following operation steps:
1) Calibration of the binocular camera. The plane template method is used to calibrate the binocular camera of the binocular image acquisition module (2) and obtain its internal parameters, so that the coordinates of the target object in the image coordinate system can be converted into coordinates in a space coordinate system whose origin is the right eye of the binocular camera.
2) Establishment and conversion of coordinate systems. A space coordinate system is set up with the robot's right shoulder as the origin, and the target object's coordinates in the right-eye coordinate system are transformed into coordinates in this system.
3) Target identification and location. The target object is identified by the color-based image segmentation method; after the two coordinate conversions above, the coordinates of the target object in the robot coordinate system are obtained.
4) Autonomous navigation and obstacle avoidance. A spatial navigation system is set up with RFID, and the position and feature information of the target object are added to the navigation database. The robot locates the target object by searching the database, drives the chassis movement module (6) to reach the target object region autonomously, and avoids obstacles with the obstacle avoidance module (4) while walking.
5) Controlling the mechanical arm to grasp the target object. After arriving at the target object region, the robot adjusts its distance to the target so that the target's coordinates in the robot coordinate system become more accurate. For the gripper at the end of the mechanical arm to grasp the target, the gripper must reach the target's coordinates; the angles that each joint of the mechanical arm must rotate are obtained through the inverse solution of the arm.
Step 1) is the calibration of the binocular camera with the plane template method. The concrete steps are as follows:
1. As shown in Figure 4, according to the parallel optical axis theory, set up the binocular stereo vision model and solve it mathematically to obtain the formula for computing the target object's coordinates in the world coordinate system.
2. Using an 8×8 plane chessboard template, finish the calibration of the binocular camera with the open-source image processing library OpenCV and obtain the internal parameters of the cameras.
3. According to the formula from step 1 and the internal parameters of the binocular camera obtained from calibration, convert the target object's coordinates in the image into coordinates in a world coordinate system whose origin is the right eye of the binocular camera.
The calibration result of the binocular camera is seriously affected by environmental factors such as light and temperature, so several calibrations must be averaged; before each test, the binocular camera must be calibrated again.
Step 2) is the establishment of the robot coordinate system and the conversion of the target object's coordinates into it, obtaining the coordinates of the target object in the robot coordinate system in preparation for the grasping of the target object. The concrete steps are as follows:
1. According to the task characteristics, set up a standard space coordinate system with the robot's right shoulder as the origin and, combining the physical dimensions of the robot, obtain the coordinates of each component of the robot in this coordinate system.
2. Accurately measure the distance from the head axis to the right shoulder, the distance from the line between the two eyes to the line between the shoulders, and the distance from the right eye to the head axis.
3. Using the three physical dimensions obtained in step 2 and the positional relationship between the right-eye world coordinate system and the robot coordinate system, transform the target object's coordinates from the right-eye coordinate system into the robot coordinate system.
During target identification and grasping the robot's head may rotate or swing up and down as required; the relationship between the two coordinate systems must then be recomputed from the head's rotation and swing angles, and the target object's coordinates in the robot coordinate system recalculated from the new transformation. In this step the rotation and swing angles are obtained from two angle sensors installed in the head.
Step 3) is the identification and location of the target object. Within the field of view of the binocular camera, the present invention identifies the target object by the color-based segmentation method and calculates its coordinates in the robot space coordinate system, finishing the location of the object. The concrete steps are as follows:
1. Because pixels in the RGB color space are seriously affected by factors such as the kind of light source and the illumination intensity, the present invention identifies the target object in the HSV color space. Before the experiment, the HSV threshold information of the target object's color must be learned offline and recorded.
2. Conversion of color space. Because the binocular camera adopted by the present invention outputs colors in the RGB color space, the pixels must be converted from the RGB color space to the HSV color space. The conversion from RGB to HSV effectively decouples chromaticity from brightness; the concrete conversion is shown in formula (2):
    V = max(R, G, B)
    S = (V − min(R, G, B)) / V            (S = 0 when V = 0)
    H = 60·(G − B) / (V − min(R, G, B))          when V = R
    H = 120 + 60·(B − R) / (V − min(R, G, B))    when V = G      (2)
    H = 240 + 60·(R − G) / (V − min(R, G, B))    when V = B
with R, G and B scaled to [0, 1] and H taken modulo 360.
3. Identification of the target object. After receiving the instruction to seek the target object, the robot continually collects images with the binocular camera and converts the RGB values of each frame's pixels into HSV values in real time, comparing them with the HSV thresholds of the target object recorded by offline learning. If the target object is in the frame, its contour is extracted; if not, the robot rotates its head and keeps seeking the target object.
4. Location of the target object. After the robot identifies the target object, it first obtains the target's contour in the image through image processing and calculates the coordinates of the contour's center in the image coordinate system. It then computes the target's coordinates in the space coordinate system with the binocular right eye as the origin, using the conversion formula obtained after camera calibration. Finally, the chassis movement module rotates the robot to face the target object, and the coordinates are converted into the robot coordinate system.
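The "center of the object contour" in step 4 is typically computed from the zeroth and first image moments of the segmented region. A plain-Python sketch over a binary mask; a real system would run this on the segmented camera frames:

```python
def centroid(mask):
    """Centroid of a binary mask (rows of 0/1) from the image moments
    m00, m10, m01; returns (x, y) in image coordinates, or None if the
    mask is empty."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                m00 += 1      # area (zeroth moment)
                m10 += x      # first moment in x
                m01 += y      # first moment in y
    if m00 == 0:
        return None
    return m10 / m00, m01 / m00
```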
Because the identification of the target object is finished in the HSV color space, the influence of factors such as the object's color category and the ambient lighting intensity is reduced and the success rate of identification is improved. After the target is located, the next action is determined by the target's position in the robot space coordinate system: if the target is far away, the robot must reach the target object region autonomously via the RFID navigation system; if it is close, the robot can go directly to the target object and finish the grasping task.
Step 4) is the robot autonomously reaching the region of the target object via the navigation system while avoiding obstacles with the obstacle avoidance module during travel. Its concrete steps are as follows:
① An RFID receiver is installed at the bottom of the robot, and passive RFID tags are laid on the floor according to the room layout; each tag is assigned a jurisdiction area, establishing the navigation system. The characteristic information of the objects within each tag's jurisdiction area is added to the navigation system database, together with the routing information for travelling from each tag position to the next.
② After the robot finishes locating the target object, it searches the navigation system database for the region containing the target object according to the object's color characteristics. Combining this with the robot's current region, it records the tag IDs to be traversed on the way to the target region, i.e., the path the robot will follow.
③ According to the indication information of the tag governing its current region, the robot drives the chassis movement module to reach the position of the next tag autonomously, and so on until it reaches the tag in the region of the target object. While walking, it processes the information from the belly and chassis sensors in real time to avoid obstacles in the path.
The present invention identifies the target object by color-based image segmentation, i.e., color recognition, so only the color characteristics of objects need to be recorded in the navigation system database. Because the room contains only isolated columnar obstacles, the road conditions are simple: to avoid an obstacle, the robot only needs to make three right-angle turns, guided by the direction of the next tag, and then return to its original path.
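The database lookup in steps ① and ② can be sketched as a pair of mappings, one from object color to region tag and one from (current tag, goal tag) to the tag path; every tag ID, color and route below is a hypothetical example, not taken from the patent:

```python
# Hypothetical navigation database: each RFID tag governs a region, the
# database records which object colors live in which region, and precomputed
# routes give the sequence of tag IDs to traverse between two tags.
nav_db = {
    "objects": {"red": "tag_3", "blue": "tag_5"},   # color -> region tag
    "routes": {
        ("tag_1", "tag_3"): ["tag_1", "tag_2", "tag_3"],
        ("tag_1", "tag_5"): ["tag_1", "tag_4", "tag_5"],
    },
}

def plan_route(current_tag, target_color, db):
    """Look up the target region by color, then the tag path leading to it."""
    goal = db["objects"][target_color]
    return db["routes"][(current_tag, goal)]

print(plan_route("tag_1", "red", nav_db))  # ['tag_1', 'tag_2', 'tag_3']
```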
Step 5) is controlling the mechanical arm to grasp the target object. After the robot locates the target object and autonomously reaches its region via the navigation system, it controls the mechanical arm to complete the grasping task. Its concrete steps are as follows:
① As shown in Figure 5, the 6+1 degree-of-freedom mechanical arm of the robot is modeled with the D-H method to obtain the inverse kinematics formula of the arm model, i.e., the angle through which each joint must rotate so that the end of the arm reaches a specified position in space.
② After the robot autonomously reaches the region of the target object via the navigation system, it must again identify the target object through the procedure of step 3) described in claim 2 and compute the coordinates of the target object in the robot coordinate system.
③ According to the computed coordinates, the chassis movement module drives the robot to walk to the position of the target object. Experimental experience shows that the computed coordinates of the target object are most accurate when the distance between the robot and the object lies within a certain range; the threshold on the contour size of the object in the image acquired at that distance is learned and recorded offline. As the robot approaches the target object, it continuously adjusts its distance until the contour size of the object in the acquired image falls within the threshold range, and only then computes the coordinates of the target object in the robot coordinate system.
④ To grasp the target object, the coordinates of the gripper at the end of the mechanical arm must equal the coordinates of the target object. Substituting these coordinates into the inverse kinematics formula of the arm model yields the angle each joint must rotate for the gripper to complete the grasp. Once the gripper reaches the position of the target object, it is driven closed, completing the grasping task.
Because no pressure sensor is installed on the gripper, the robot can only grasp objects of a specified width under open-loop control. After the robot successfully grasps the target object, it can be instructed as required to perform the next action, such as returning to the starting point or placing the object at a designated position.
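The inverse-kinematics idea of step ① (a gripper coordinate goes in, joint angles come out) can be illustrated on the classic two-link planar arm; the patent's actual 6+1 degree-of-freedom D-H inverse solution is more involved, and the link lengths here are assumed values:

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Return (shoulder, elbow) joint angles placing the end effector at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))        # clamp against rounding error
    theta2 = math.acos(c2)              # elbow-down solution
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def forward(theta1, theta2, l1=0.3, l2=0.25):
    """Forward kinematics, used to check the inverse solution."""
    return (l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2),
            l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2))

t1, t2 = two_link_ik(0.3, 0.25)
print(forward(t1, t2))  # recovers (approximately) the requested point
```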
Embodiment three:
This embodiment simulates a home environment in which the robot intelligently serves a human. It comprises the following steps:
In the first step, under the experimental environment of this embodiment, the binocular camera of the robot is calibrated with the planar template method to obtain the internal parameters of the cameras. The threshold of the target object's color in the HSV color space is learned offline, as is the threshold on the object's contour size in the image at the distance that yields accurate coordinates. After system initialization, the search for the target object is started.
In the second step, the robot begins searching for the target object from an arbitrary indoor position, continuously acquiring images with the binocular camera and converting the pixels of each image to the HSV color space. When the target object is found, the robot begins navigating to the region where it is located; when it is not found in the current image, the head rotates and the search continues.
In the third step, as shown in Figure 6, after finding the target object the robot looks up the region where it is located in the navigation system database. Starting from the nearest RFID tag, it travels step by step to the target region, using the obstacle avoidance module to evade obstacles along the way.
In the fourth step, as shown in Figure 7, upon reaching the target region the robot identifies the target object again, moves next to it, obtains its accurate coordinates by adjusting the distance between the robot and the object, and then controls the mechanical arm to complete the grasping task. After grasping, it waits for the next command.
This embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and concrete operating process, but the protection scope of the present invention is not limited to the above embodiment.

Claims (7)

1. A binocular-vision-based service robot target identification and grasping system, characterized in that: a service robot control platform (1) connects a binocular image acquisition module (2), an RFID transceiver module (3), a chassis movement module (6), an obstacle avoidance module (4) and a mechanical arm control module (5);
The binocular image acquisition module is the binocular stereo vision system of the present system; the RFID transceiver module (3) uses passive radio-frequency tags that contain the position and characteristic information of articles; the chassis movement module (6) adopts a two-wheel differential drive, with two color-mark sensors installed at the bottom; the obstacle avoidance module (4) adopts multi-channel ultrasonic and multi-channel photoelectric sensors, installed at the belly and the bottom chassis of the robot respectively, realizing obstacle detection at different heights; the humanoid mechanical arms of the mechanical arm control module (5) have 6+1 degrees of freedom and reach movement positions through forward and inverse kinematics; the arms can perform grasping, dancing and dual-arm coordination actions.
2. A binocular-vision-based service robot target identification and grasping method, operating with the binocular-vision-based service robot target identification and grasping system according to claim 1, characterized by comprising the following operation steps:
1) Binocular camera calibration: the binocular camera of the binocular image acquisition module (2) is calibrated with the planar template method to obtain the internal parameters of the camera, which convert the coordinates of the target object in the image coordinate system into coordinates in a spatial coordinate system whose origin is the right eye of the binocular camera;
2) Coordinate system construction and conversion: a spatial coordinate system is established with the right shoulder of the robot as the origin, and the coordinates of the target object in the coordinate system with the right eye as origin are transformed into this coordinate system;
3) Target identification and location: the target object is identified by a color-based image segmentation method, and through the above two-stage coordinate conversion the coordinates of the target object in the robot coordinate system are obtained;
4) Autonomous navigation and obstacle avoidance: a spatial navigation system is established by RFID to reach the region of the target object, and the position and characteristic information of the target object are added to the navigation system database; the robot locates the target object by searching the database, autonomously reaches the region of the target object by driving the chassis movement module (6), and avoids obstacles with the obstacle avoidance module (4) while walking;
5) Controlling the mechanical arm to grasp the target object: after reaching the region of the target object, the robot adjusts the distance between itself and the object to make the computed coordinates of the object in the robot coordinate system more accurate; for the gripper at the end of the mechanical arm to grasp the target object, the gripper must reach the coordinates of the object, and the angle each joint of the arm must rotate is obtained through the inverse kinematics of the arm.
3. The binocular-vision-based service robot target identification and grasping method according to claim 2, characterized in that the binocular camera calibration of step 1) adopts the planar template method with the following concrete steps:
① A binocular stereo vision model is established according to the parallel optical axis theory, and the model is solved mathematically to obtain the computation formula of the target object's coordinates in the world coordinate system;
② Using an 8×8 planar chessboard template, the calibration of the binocular camera is completed with the open-source image processing library OpenCV, obtaining the internal parameters of the camera;
③ According to the computation formula of the target object's coordinates in the world coordinate system obtained in step ①, combined with the internal parameters of the binocular camera obtained after calibration, the coordinates of the target object in the image are converted into coordinates in the world coordinate system whose origin is the right eye of the binocular camera;
The calibration result of the binocular camera is strongly affected by environmental factors such as light and temperature, so the calibration must be repeated several times and the results averaged; before each test, the binocular camera must be calibrated again.
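The note above on repeating the calibration and averaging can be sketched minimally; the 3×3 intrinsic matrices below are made-up example values standing in for the outputs of repeated OpenCV chessboard calibrations:

```python
import numpy as np

# Hypothetical intrinsic matrices from two calibration runs (focal lengths
# and principal points in pixels); real values would come from OpenCV's
# chessboard calibration, which drifts with light and temperature.
runs = [
    np.array([[801.0, 0.0, 319.5], [0.0, 799.0, 239.8], [0.0, 0.0, 1.0]]),
    np.array([[799.0, 0.0, 320.5], [0.0, 801.0, 240.2], [0.0, 0.0, 1.0]]),
]
K_avg = np.mean(runs, axis=0)  # element-wise average over the runs
print(K_avg[0, 0], K_avg[1, 1], K_avg[0, 2], K_avg[1, 2])
```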
4. The binocular-vision-based service robot target identification and grasping method according to claim 2, characterized in that step 2), the construction of the robot coordinate system and the conversion of the target object coordinates, obtains the coordinates of the target object in the robot coordinate system in preparation for the subsequent grasping of the target object, with the following concrete steps:
① According to the task characteristics, a standard spatial coordinate system is established with the right shoulder of the robot as the origin, and the coordinates of each component of the robot in this coordinate system are obtained from the physical dimensions of the robot;
② Three distances are accurately measured: from the head axis to the right shoulder, from the line connecting the two eyes to the line connecting the two shoulders, and from the right eye to the head axis;
③ According to the three physical dimensions obtained in step ②, combined with the positional relationship between the world coordinate system with the right eye of the binocular camera as origin and the robot coordinate system, the coordinates of the target object in the coordinate system with the right eye as origin are transformed into coordinates in the robot coordinate system;
During target identification and grasping, the head of the robot may rotate or swing up and down as required; the relationship between the two spatial coordinate systems must then be recomputed according to the rotation and swing angles of the head, and the coordinates of the target object in the robot coordinate system are computed from this conversion relationship. The rotation and swing angles of the head are obtained from two angle sensors installed in the head.
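The recomputation described above can be sketched as a head rotation (pan about the vertical axis, tilt about the lateral axis, both read from the head angle sensors) followed by the fixed eye-to-shoulder offsets; the rotation-axis convention and all lengths are illustrative assumptions, not the patent's actual frame definitions:

```python
import numpy as np

def eye_to_shoulder(p_eye, pan, tilt, offset):
    """Transform a point from the right-eye frame to the right-shoulder frame.

    pan/tilt are the head rotation and swing angles in radians; offset is the
    fixed eye-to-shoulder translation built from the three measured distances.
    """
    Rz = np.array([[np.cos(pan), -np.sin(pan), 0.0],
                   [np.sin(pan),  np.cos(pan), 0.0],
                   [0.0, 0.0, 1.0]])                      # pan about z
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(tilt), -np.sin(tilt)],
                   [0.0, np.sin(tilt),  np.cos(tilt)]])   # tilt about x
    return Rz @ Rx @ np.asarray(p_eye) + np.asarray(offset)

# Head facing straight ahead: the transform reduces to the fixed offset.
p = eye_to_shoulder([0.0, 0.0, 1.0], pan=0.0, tilt=0.0,
                    offset=[-0.15, 0.05, 0.30])
print(p)
```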
5. The binocular-vision-based service robot target identification and grasping method according to claim 2, characterized in that the identification and location of the target object in step 3) identifies the target object within the field of view of the binocular camera by the color-based segmentation method and computes the coordinates of the target object in the robot spatial coordinate system, completing the location of the object, with the following concrete steps:
① Because pixels in the RGB color space are strongly affected by factors such as the type of light source and the illumination intensity, the identification of the target object is completed in the HSV color space; the HSV threshold information of the target object's color must be learned and recorded offline before the experiment;
② Color space conversion: because the binocular camera in use outputs colors in the RGB color space, the pixels must be converted from the RGB color space to the HSV color space; the conversion from RGB to HSV effectively decouples the color components, and the concrete conversion is given by formula (1):
V = max(R, G, B);
S = (V − min(R, G, B)) / V, with S = 0 when V = 0;
H = 60 × (G − B) / (V − min(R, G, B)), when V = R;
H = 120 + 60 × (B − R) / (V − min(R, G, B)), when V = G;
H = 240 + 60 × (R − G) / (V − min(R, G, B)), when V = B;
H = H + 360, when H < 0.    (1)
In formula (1), H, S and V denote the H, S and V components of the HSV pixel space, and R, G and B denote the R, G and B components of the RGB pixel space;
③ Identification of the target object: after receiving the instruction to search for the target object, the robot continuously acquires images with the binocular camera, converts the RGB value of each pixel of each frame into its HSV value in real time, and compares it against the HSV threshold of the target object recorded offline; if the target object is in the frame, its contour is extracted; if not, the head of the robot rotates and the search for the target object continues;
④ Location of the target object: after the robot identifies the target object, it first extracts the contour of the target object from the image through image processing and computes the coordinates of the contour center in the image coordinate system; using the conversion formula obtained from camera calibration, it computes the coordinates of the target object in the spatial coordinate system whose origin is the right eye of the binocular camera; then the chassis movement module (6) is driven to rotate the robot until it directly faces the target object, and the coordinates are converted into the robot coordinate system through the coordinate system conversion;
Because the identification of the target object is completed in the HSV color space, the influence of factors such as the color category of the target object and the ambient illumination intensity is reduced, improving the success rate of identification; after the location of the target object is completed, the next action is determined according to the position of the target object in the robot spatial coordinate system: if the object is far away, the robot must reach the region of the target object autonomously via the RFID navigation system; if it is close, the robot moves directly next to the target object and completes the grasping task.
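The conversion of step ② corresponds to the standard RGB-to-HSV formula; a minimal implementation (H in degrees, S and V normalized to [0, 1]) follows. The function name is illustrative, and in practice the per-pixel conversion would be done by an image-processing library such as OpenCV:

```python
def rgb_to_hsv(r, g, b):
    """Standard RGB (0-255 per channel) to HSV (H in degrees, S and V in [0, 1])."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0                                   # achromatic: hue undefined
    elif mx == r:
        h = (60 * (g - b) / (mx - mn)) % 360      # wraps negative hues to [0, 360)
    elif mx == g:
        h = 60 * (b - r) / (mx - mn) + 120
    else:
        h = 60 * (r - g) / (mx - mn) + 240
    return h, s, v

print(rgb_to_hsv(255, 0, 0))  # (0.0, 1.0, 1.0)
```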
6. The binocular-vision-based service robot target identification and grasping method according to claim 2, characterized in that in step 4) the robot autonomously reaches the region of the target object via the navigation system while avoiding obstacles with the obstacle avoidance module (4) during travel, with the following concrete steps:
① An RFID receiver is installed at the bottom of the robot, and passive RFID tags are laid on the floor according to the room layout; each tag is assigned a jurisdiction area, establishing the navigation system; the characteristic information of the objects within each tag's jurisdiction area is added to the navigation system database, together with the routing information for travelling from each tag position to the next;
② After the robot finishes locating the target object, it searches the navigation system database for the region containing the target object according to the object's color characteristics; combining this with the robot's current region, it records the tag IDs to be traversed on the way to the target region, i.e., the path the robot will follow;
③ According to the indication information of the tag governing its current region, the robot drives the chassis movement module (6) to reach the position of the next tag autonomously, and so on until it reaches the tag in the region of the target object; while walking, it processes the information from the belly and chassis sensors in real time to avoid obstacles in the path;
The color-based image segmentation method, i.e., color recognition, identifies the target object, so only the color characteristics of objects are recorded in the navigation system database; because the room contains only isolated columnar obstacles, the road conditions are simple, and the robot only needs to make three right-angle turns, guided by the direction of the next tag, to smoothly avoid an obstacle and return to its original path.
7. The binocular-vision-based service robot target identification and grasping method according to claim 2, characterized in that step 5) is controlling the mechanical arm to grasp the target object: after the robot locates the target object and autonomously reaches its region via the navigation system, it controls the mechanical arm to complete the grasping task, with the following concrete steps:
① The 6+1 degree-of-freedom mechanical arm of the robot is modeled with the D-H method to obtain the inverse kinematics formula of the arm model, i.e., the angle through which each joint must rotate so that the end of the arm reaches a specified position in space;
② After the robot autonomously reaches the region of the target object via the navigation system, it must again identify the target object through the procedure of step 3) and compute the coordinates of the target object in the robot coordinate system;
③ According to the computed coordinates, the chassis movement module (6) drives the robot to walk to the position of the target object; experimental experience shows that the computed coordinates of the target object are most accurate when the distance between the robot and the object lies within a certain range, and the threshold on the contour size of the object in the image acquired at that distance is learned and recorded offline; as the robot approaches the target object, it continuously adjusts its distance until the contour size of the object in the acquired image falls within the threshold range, and only then computes the coordinates of the target object in the robot coordinate system;
④ To grasp the target object, the coordinates of the gripper at the end of the mechanical arm must equal the coordinates of the target object; substituting these coordinates into the inverse kinematics formula of the arm model yields the angle each joint must rotate for the gripper to complete the grasp; once the gripper reaches the position of the target object, it is driven closed, completing the grasping task;
Because no pressure sensor is installed on the gripper, the robot can only grasp objects of a specified width under open-loop control; after the robot successfully grasps the target object, it is instructed as required to perform the next action, such as returning to the starting point or placing the object at a designated position.
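The distance adjustment of step ③ (approach until the object's contour size in the image falls inside the offline-learned threshold band) can be sketched as a simple closed loop; the area band and the toy area-versus-distance model below are assumptions for illustration only:

```python
LOW, HIGH = 4000, 6000  # contour-area band learned offline (assumed, in px)

def approach(measure_area, step_forward, step_back, max_steps=50):
    """Adjust the robot-object distance until the contour area is in band."""
    for _ in range(max_steps):
        area = measure_area()
        if LOW <= area <= HIGH:
            return True              # distance now in the accurate range
        if area < LOW:
            step_forward()           # contour too small: too far away
        else:
            step_back()              # contour too large: too close
    return False

class SimRobot:
    """Toy stand-in for chassis and camera: area shrinks with distance."""
    def __init__(self, dist):
        self.dist = dist
    def area(self):
        return 12000.0 / self.dist
    def forward(self):
        self.dist -= 0.1
    def back(self):
        self.dist += 0.1

bot = SimRobot(dist=6.0)             # starts too far away: area = 2000
ok = approach(bot.area, bot.forward, bot.back)
print(ok)  # True
```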
CN2012104056933A 2012-10-23 2012-10-23 Binocular vision-based robot target identifying and gripping system and method Pending CN102902271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012104056933A CN102902271A (en) 2012-10-23 2012-10-23 Binocular vision-based robot target identifying and gripping system and method

Publications (1)

Publication Number Publication Date
CN102902271A true CN102902271A (en) 2013-01-30

Family

ID=47574567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012104056933A Pending CN102902271A (en) 2012-10-23 2012-10-23 Binocular vision-based robot target identifying and gripping system and method

Country Status (1)

Country Link
CN (1) CN102902271A (en)

Cited By (72)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103453889A (en) * 2013-09-17 2013-12-18 深圳市创科自动化控制技术有限公司 Calibrating and aligning method of CCD (Charge-coupled Device) camera
CN103481288A (en) * 2013-08-27 2014-01-01 浙江工业大学 5-joint robot end-of-arm tool pose controlling method
CN103529856A (en) * 2013-08-27 2014-01-22 浙江工业大学 5-joint robot end tool position and posture control method
CN103707305A (en) * 2013-12-31 2014-04-09 上海交通大学 Service robot grabbing system based on cloud information library and control method thereof
CN103753585A (en) * 2014-01-10 2014-04-30 南通大学 Method for intelligently adjusting manipulator and grasping force on basis of visual image analysis
CN104199452A (en) * 2014-09-26 2014-12-10 上海未来伙伴机器人有限公司 Mobile robot, mobile robot system as well as mobile and communication method
CN104515502A (en) * 2013-09-28 2015-04-15 沈阳新松机器人自动化股份有限公司 Robot hand-eye stereo vision measurement method
CN104570938A (en) * 2015-01-06 2015-04-29 常州先进制造技术研究所 Double-arm robot system in plug-in mounting production and intelligent control method of double-arm robot system
CN105033997A (en) * 2015-09-15 2015-11-11 北京理工大学 Visual-sense-based rapid working whole-body planning and control method of humanoid robot
CN105589459A (en) * 2015-05-19 2016-05-18 中国人民解放军国防科学技术大学 Unmanned vehicle semi-autonomous remote control method
CN105598965A (en) * 2015-11-26 2016-05-25 哈尔滨工业大学 Robot under-actuated hand autonomous grasping method based on stereoscopic vision
CN105798924A (en) * 2016-05-16 2016-07-27 苏州金建达智能科技有限公司 Back device of humanoid robot
CN106094516A (en) * 2016-06-08 2016-11-09 南京大学 A kind of robot self-adapting grasping method based on deeply study
CN106610666A (en) * 2015-10-22 2017-05-03 沈阳新松机器人自动化股份有限公司 Assistant robot based on binocular vision, and control method of assistant robot
CN106843213A (en) * 2017-02-10 2017-06-13 中国东方电气集团有限公司 The method that a kind of movement and courses of action based on mobile robot are planned automatically
CN106851095A (en) * 2017-01-13 2017-06-13 深圳拓邦股份有限公司 A kind of localization method, apparatus and system
CN106990777A (en) * 2017-03-10 2017-07-28 江苏物联网研究发展中心 Robot local paths planning method
CN107168110A (en) * 2016-12-09 2017-09-15 陈胜辉 A kind of material grasping means and system
CN107234625A (en) * 2017-07-07 2017-10-10 中国科学院自动化研究所 The method that visual servo is positioned and captured
CN107234619A (en) * 2017-06-02 2017-10-10 南京金快快无人机有限公司 A kind of service robot grasp system positioned based on active vision
CN107305378A (en) * 2016-04-20 2017-10-31 上海慧流云计算科技有限公司 A kind of method that image procossing follows the trail of the robot of object and follows the trail of object
CN107571263A (en) * 2017-07-19 2018-01-12 江汉大学 It is a kind of to pick up the robot for dragging for function with looking for something
WO2018014420A1 (en) * 2016-07-21 2018-01-25 深圳曼塔智能科技有限公司 Light-emitting target recognition-based unmanned aerial vehicle tracking control system and method
CN107767423A (en) * 2017-10-10 2018-03-06 大连理工大学 A kind of mechanical arm target positioning grasping means based on binocular vision
CN107911687A (en) * 2017-12-11 2018-04-13 中国科学院长春光学精密机械与物理研究所 Teleoperation of robot auxiliary system based on binocular stereo vision
CN107917666A (en) * 2016-10-09 2018-04-17 上海铼钠克数控科技股份有限公司 Binocular vision device and coordinate scaling method
CN107972026A (en) * 2016-10-25 2018-05-01 深圳光启合众科技有限公司 Robot, mechanical arm and its control method and device
CN108392269A (en) * 2017-12-29 2018-08-14 广州布莱医疗科技有限公司 A kind of operation householder method and auxiliary robot of performing the operation
CN108453739A (en) * 2018-04-04 2018-08-28 北京航空航天大学 Stereoscopic vision positioning mechanical arm grasping system and method based on automatic shape fitting
CN108656107A (en) * 2018-04-04 2018-10-16 北京航空航天大学 A kind of mechanical arm grasping system and method based on image procossing
CN108942929A (en) * 2018-07-10 2018-12-07 广州供电局有限公司 The method and device of mechanical arm positioning crawl based on binocular stereo vision
CN109070330A (en) * 2016-04-08 2018-12-21 Groove X 株式会社 The autonomous humanoid robot of behavior shy with strangers
CN109089102A (en) * 2018-09-05 2018-12-25 华南智能机器人创新研究院 A kind of robotic article method for identifying and classifying and system based on binocular vision
CN109108966A (en) * 2018-08-14 2019-01-01 中民筑友科技投资有限公司 A kind of reinforced mesh crawl control method of view-based access control model identification
WO2019037013A1 (en) * 2017-08-24 2019-02-28 深圳蓝胖子机器人有限公司 Method for stacking goods by means of robot and robot
CN109483573A (en) * 2017-09-12 2019-03-19 发那科株式会社 Machine learning device, robot system and machine learning method
CN109540105A (en) * 2017-09-22 2019-03-29 北京印刷学院 A kind of courier packages' grabbing device and grasping means based on binocular vision
CN109597318A (en) * 2017-09-30 2019-04-09 北京柏惠维康科技有限公司 A kind of method and apparatus of robot space registration
CN109615658A (en) * 2018-12-04 2019-04-12 广东拓斯达科技股份有限公司 The article of robot is taken method, apparatus, computer equipment and storage medium
CN109753054A (en) * 2017-11-03 2019-05-14 财团法人资讯工业策进会 Unmanned self-propelled vehicle and its control method
CN109947093A (en) * 2019-01-24 2019-06-28 广东工业大学 A kind of intelligent barrier avoiding algorithm based on binocular vision
CN110110245A (en) * 2019-05-06 2019-08-09 山东大学 Dynamic article searching method and device under a kind of home environment
CN110163056A (en) * 2018-08-26 2019-08-23 国网江苏省电力有限公司物资分公司 Intelligent vision identifies sweep cable disc centre coordinate system
CN110223350A (en) * 2019-05-23 2019-09-10 汕头大学 A kind of building blocks automatic sorting method and system based on binocular vision
CN110587600A (en) * 2019-08-20 2019-12-20 南京理工大学 Point cloud-based autonomous path planning method for live working robot
CN110640733A (en) * 2019-10-10 2020-01-03 科大讯飞(苏州)科技有限公司 Control method and device for process execution and beverage selling system
CN110722556A (en) * 2019-10-17 2020-01-24 苏州恒辉科技有限公司 Movable mechanical arm control system and method based on reinforcement learning
CN111145257A (en) * 2019-12-27 2020-05-12 深圳市越疆科技有限公司 Article grabbing method and system and article grabbing robot
CN111267095A (en) * 2020-01-14 2020-06-12 大连理工大学 Mechanical arm grabbing control method based on binocular vision
CN111322963A (en) * 2018-12-17 2020-06-23 中国科学院沈阳自动化研究所 Dynamic arrangement method for parts based on binocular image processing
CN111340884A (en) * 2020-02-24 2020-06-26 天津理工大学 Binocular heterogeneous camera and RFID dual target positioning and identity identification method
CN111360821A (en) * 2020-02-21 2020-07-03 海南大学 Picking control method, device and equipment and computer scale storage medium
CN111591650A (en) * 2020-04-30 2020-08-28 南京理工大学 Intelligent clamping type AGV (automatic guided vehicle) cargo handling auxiliary device and method
CN111908155A (en) * 2020-09-10 2020-11-10 佛山科学技术学院 Automatic loading and unloading system of container robot
CN112047057A (en) * 2019-06-05 2020-12-08 西安瑞德宝尔智能科技有限公司 Safety monitoring method and system for material conveying equipment
CN112051853A (en) * 2020-09-18 2020-12-08 哈尔滨理工大学 Intelligent obstacle avoidance system and method based on machine vision
CN112099513A (en) * 2020-11-09 2020-12-18 天津联汇智造科技有限公司 Method and system for accurately taking materials by mobile robot
CN112408281A (en) * 2020-09-28 2021-02-26 亿嘉和科技股份有限公司 Bucket adjusting operation guiding method of bucket arm vehicle based on visual tracking
CN112577509A (en) * 2020-12-28 2021-03-30 炬星科技(深圳)有限公司 Robot operation navigation method and robot
CN112589809A (en) * 2020-12-03 2021-04-02 武汉理工大学 Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method
CN112621750A (en) * 2020-12-07 2021-04-09 合肥阿格德信息科技有限公司 Automatic control system of industrial robot
CN112947407A (en) * 2021-01-14 2021-06-11 华南理工大学 Multi-agent finite-time formation path tracking control method and system
CN113084817A (en) * 2021-04-15 2021-07-09 中国科学院自动化研究所 Object searching and grabbing control method of underwater bionic robot in turbulent flow environment
CN113296516A (en) * 2021-05-24 2021-08-24 淮阴工学院 Robot control method for automatically lifting automobile
CN113290552A (en) * 2020-02-24 2021-08-24 株式会社理光 Article placement system and article placement method
WO2021253629A1 (en) * 2020-06-16 2021-12-23 大连理工大学 Multi-arm robot for realizing sitting and lying posture switching and carrying of user
CN114029997A (en) * 2021-12-16 2022-02-11 广州城市理工学院 Working method of mechanical arm
CN114210589A (en) * 2021-11-01 2022-03-22 中国工商银行股份有限公司保定分行 Automatic sorting system of intelligent vault
CN114633248A (en) * 2020-12-16 2022-06-17 北京极智嘉科技股份有限公司 Robot and positioning method
CN114820619A (en) * 2022-06-29 2022-07-29 深圳市信润富联数字科技有限公司 Tuber plant sorting method, system, computer device and storage medium
CN115026836A (en) * 2022-07-21 2022-09-09 深圳市华成工业控制股份有限公司 Control method, device and equipment of five-axis manipulator and storage medium
CN115218918A (en) * 2022-09-20 2022-10-21 上海仙工智能科技有限公司 Intelligent blind guiding method and blind guiding equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080215184A1 (en) * 2006-12-07 2008-09-04 Electronics And Telecommunications Research Institute Method for searching target object and following motion thereof through stereo vision processing and home intelligent service robot using the same
US20090135188A1 (en) * 2007-11-26 2009-05-28 Tsinghua University Method and system of live detection based on physiological motion on human face
CN101559600A (en) * 2009-05-07 2009-10-21 上海交通大学 Service robot grasp guidance system and method thereof
CN101625573A (en) * 2008-07-09 2010-01-13 中国科学院自动化研究所 Digital signal processor based inspection robot monocular vision navigation system
CN101661098A (en) * 2009-09-10 2010-03-03 上海交通大学 Multi-robot automatic locating system for robot restaurant
CN102323817A (en) * 2011-06-07 2012-01-18 上海大学 Service robot control platform system and multimode intelligent interaction and intelligent behavior realizing method thereof

Cited By (104)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103481288A (en) * 2013-08-27 2014-01-01 浙江工业大学 5-joint robot end-of-arm tool pose controlling method
CN103529856A (en) * 2013-08-27 2014-01-22 浙江工业大学 5-joint robot end tool position and posture control method
CN103529856B (en) * 2013-08-27 2016-04-13 浙江工业大学 5-joint robot end tool position and posture control method
CN103453889B (en) * 2013-09-17 2016-02-17 深圳市创科自动化控制技术有限公司 Ccd video camera calibration alignment method
CN103453889A (en) * 2013-09-17 2013-12-18 深圳市创科自动化控制技术有限公司 Calibrating and aligning method of CCD (Charge-coupled Device) camera
CN104515502A (en) * 2013-09-28 2015-04-15 沈阳新松机器人自动化股份有限公司 Robot hand-eye stereo vision measurement method
CN103707305B (en) * 2013-12-31 2016-02-10 上海交通大学 Service robot grabbing system based on cloud information library and control method thereof
CN103707305A (en) * 2013-12-31 2014-04-09 上海交通大学 Service robot grabbing system based on cloud information library and control method thereof
CN103753585A (en) * 2014-01-10 2014-04-30 南通大学 Method for intelligently adjusting manipulator and grasping force on basis of visual image analysis
CN104199452A (en) * 2014-09-26 2014-12-10 上海未来伙伴机器人有限公司 Mobile robot, mobile robot system as well as mobile and communication method
CN104570938A (en) * 2015-01-06 2015-04-29 常州先进制造技术研究所 Double-arm robot system in plug-in mounting production and intelligent control method of double-arm robot system
CN105589459A (en) * 2015-05-19 2016-05-18 中国人民解放军国防科学技术大学 Unmanned vehicle semi-autonomous remote control method
CN105033997A (en) * 2015-09-15 2015-11-11 北京理工大学 Visual-sense-based rapid working whole-body planning and control method of humanoid robot
CN105033997B (en) * 2015-09-15 2017-06-13 北京理工大学 Vision-based rapid working whole-body planning and control method of humanoid robot
CN106610666A (en) * 2015-10-22 2017-05-03 沈阳新松机器人自动化股份有限公司 Assistant robot based on binocular vision, and control method of assistant robot
CN105598965A (en) * 2015-11-26 2016-05-25 哈尔滨工业大学 Robot under-actuated hand autonomous grasping method based on stereoscopic vision
CN105598965B (en) * 2015-11-26 2018-03-16 哈尔滨工业大学 Robot under-actuated hand autonomous grasping method based on stereoscopic vision
CN109070330A (en) * 2016-04-08 2018-12-21 Groove X 株式会社 Autonomously acting robot exhibiting shyness toward strangers
US11192257B2 (en) 2016-04-08 2021-12-07 Groove X, Inc. Autonomously acting robot exhibiting shyness
CN107305378A (en) * 2016-04-20 2017-10-31 上海慧流云计算科技有限公司 Object-tracking robot and object-tracking method based on image processing
CN105798924A (en) * 2016-05-16 2016-07-27 苏州金建达智能科技有限公司 Back device of humanoid robot
CN105798924B (en) * 2016-05-16 2018-03-16 南京创集孵化器管理有限公司 Back device of humanoid robot
CN106094516A (en) * 2016-06-08 2016-11-09 南京大学 Robot adaptive grasping method based on deep reinforcement learning
WO2018014420A1 (en) * 2016-07-21 2018-01-25 深圳曼塔智能科技有限公司 Light-emitting target recognition-based unmanned aerial vehicle tracking control system and method
CN107917666A (en) * 2016-10-09 2018-04-17 上海铼钠克数控科技股份有限公司 Binocular vision device and coordinate scaling method
CN107972026B (en) * 2016-10-25 2021-05-04 河北亿超机械制造股份有限公司 Robot, mechanical arm and control method and device thereof
CN107972026A (en) * 2016-10-25 2018-05-01 深圳光启合众科技有限公司 Robot, mechanical arm and its control method and device
CN107168110A (en) * 2016-12-09 2017-09-15 陈胜辉 Material grasping method and system
CN106851095A (en) * 2017-01-13 2017-06-13 深圳拓邦股份有限公司 Positioning method, apparatus and system
CN106851095B (en) * 2017-01-13 2019-12-24 深圳拓邦股份有限公司 Positioning method, device and system
CN106843213B (en) * 2017-02-10 2020-01-10 中国东方电气集团有限公司 Method for automatically planning movement and operation paths based on mobile robot
CN106843213A (en) * 2017-02-10 2017-06-13 中国东方电气集团有限公司 Method for automatically planning movement and operation paths based on mobile robot
CN106990777A (en) * 2017-03-10 2017-07-28 江苏物联网研究发展中心 Robot local path planning method
CN107234619A (en) * 2017-06-02 2017-10-10 南京金快快无人机有限公司 Service robot grasping system based on active vision positioning
CN107234625B (en) * 2017-07-07 2019-11-26 中国科学院自动化研究所 Visual servo positioning and grasping method
CN107234625A (en) * 2017-07-07 2017-10-10 中国科学院自动化研究所 Visual servo positioning and grasping method
CN107571263A (en) * 2017-07-19 2018-01-12 江汉大学 Robot with object-finding and pick-up functions
CN107571263B (en) * 2017-07-19 2019-07-23 江汉大学 Robot with object-finding and pick-up functions
WO2019037013A1 (en) * 2017-08-24 2019-02-28 深圳蓝胖子机器人有限公司 Method for stacking goods by means of robot and robot
CN109483573B (en) * 2017-09-12 2020-07-31 发那科株式会社 Machine learning device, robot system, and machine learning method
CN109483573A (en) * 2017-09-12 2019-03-19 发那科株式会社 Machine learning device, robot system and machine learning method
CN109540105A (en) * 2017-09-22 2019-03-29 北京印刷学院 Courier package grabbing device and grasping method based on binocular vision
CN109597318A (en) * 2017-09-30 2019-04-09 北京柏惠维康科技有限公司 Method and apparatus for robot space registration
CN107767423B (en) * 2017-10-10 2019-12-06 大连理工大学 Mechanical arm target positioning and grabbing method based on binocular vision
CN107767423A (en) * 2017-10-10 2018-03-06 大连理工大学 Mechanical arm target positioning and grabbing method based on binocular vision
CN109753054A (en) * 2017-11-03 2019-05-14 财团法人资讯工业策进会 Unmanned self-propelled vehicle and its control method
CN107911687B (en) * 2017-12-11 2020-04-10 中国科学院长春光学精密机械与物理研究所 Robot teleoperation auxiliary system based on binocular stereo vision
CN107911687A (en) * 2017-12-11 2018-04-13 中国科学院长春光学精密机械与物理研究所 Robot teleoperation auxiliary system based on binocular stereo vision
CN108392269A (en) * 2017-12-29 2018-08-14 广州布莱医疗科技有限公司 Operation assisting method and operation assisting robot
CN108392269B (en) * 2017-12-29 2021-08-03 广州布莱医疗科技有限公司 Operation assisting method and operation assisting robot
CN108656107A (en) * 2018-04-04 2018-10-16 北京航空航天大学 A kind of mechanical arm grasping system and method based on image procossing
CN108453739A (en) * 2018-04-04 2018-08-28 北京航空航天大学 Stereoscopic vision positioning mechanical arm grasping system and method based on automatic shape fitting
CN108656107B (en) * 2018-04-04 2020-06-26 北京航空航天大学 Mechanical arm grabbing system and method based on image processing
CN108942929A (en) * 2018-07-10 2018-12-07 广州供电局有限公司 The method and device of mechanical arm positioning crawl based on binocular stereo vision
CN109108966B (en) * 2018-08-14 2021-07-30 中民筑友科技投资有限公司 Reinforcing mesh grabbing control method based on visual identification
CN109108966A (en) * 2018-08-14 2019-01-01 中民筑友科技投资有限公司 Reinforcing mesh grabbing control method based on visual recognition
CN110163056A (en) * 2018-08-26 2019-08-23 国网江苏省电力有限公司物资分公司 Intelligent vision-based recognition of cable reel center coordinates
CN110163056B (en) * 2018-08-26 2020-09-29 国网江苏省电力有限公司物资分公司 Intelligent vision-based recognition of cable reel center coordinates
CN109089102A (en) * 2018-09-05 2018-12-25 华南智能机器人创新研究院 A kind of robotic article method for identifying and classifying and system based on binocular vision
CN109615658A (en) * 2018-12-04 2019-04-12 广东拓斯达科技股份有限公司 Robot article-picking method, apparatus, computer equipment and storage medium
CN111322963A (en) * 2018-12-17 2020-06-23 中国科学院沈阳自动化研究所 Dynamic arrangement method for parts based on binocular image processing
CN109947093A (en) * 2019-01-24 2019-06-28 广东工业大学 Intelligent obstacle avoidance algorithm based on binocular vision
CN110110245A (en) * 2019-05-06 2019-08-09 山东大学 Dynamic article searching method and device under a kind of home environment
CN110110245B (en) * 2019-05-06 2021-03-16 山东大学 Dynamic article searching method and device in home environment
CN110223350A (en) * 2019-05-23 2019-09-10 汕头大学 A kind of building blocks automatic sorting method and system based on binocular vision
CN112047057A (en) * 2019-06-05 2020-12-08 西安瑞德宝尔智能科技有限公司 Safety monitoring method and system for material conveying equipment
CN110587600B (en) * 2019-08-20 2022-04-19 南京理工大学 Point cloud-based autonomous path planning method for live working robot
CN110587600A (en) * 2019-08-20 2019-12-20 南京理工大学 Point cloud-based autonomous path planning method for live working robot
CN110640733A (en) * 2019-10-10 2020-01-03 科大讯飞(苏州)科技有限公司 Control method and device for process execution and beverage selling system
CN110640733B (en) * 2019-10-10 2021-10-26 科大讯飞(苏州)科技有限公司 Control method and device for process execution and beverage selling system
CN110722556A (en) * 2019-10-17 2020-01-24 苏州恒辉科技有限公司 Movable mechanical arm control system and method based on reinforcement learning
CN111145257B (en) * 2019-12-27 2024-01-05 深圳市越疆科技有限公司 Article grabbing method and system and article grabbing robot
CN111145257A (en) * 2019-12-27 2020-05-12 深圳市越疆科技有限公司 Article grabbing method and system and article grabbing robot
CN111267095B (en) * 2020-01-14 2022-03-01 大连理工大学 Mechanical arm grabbing control method based on binocular vision
CN111267095A (en) * 2020-01-14 2020-06-12 大连理工大学 Mechanical arm grabbing control method based on binocular vision
CN111360821A (en) * 2020-02-21 2020-07-03 海南大学 Picking control method, device and equipment and computer-readable storage medium
CN113290552A (en) * 2020-02-24 2021-08-24 株式会社理光 Article placement system and article placement method
CN113290552B (en) * 2020-02-24 2022-09-16 株式会社理光 Article placement system and article placement method
CN111340884A (en) * 2020-02-24 2020-06-26 天津理工大学 Binocular heterogeneous camera and RFID dual target positioning and identity identification method
CN111591650A (en) * 2020-04-30 2020-08-28 南京理工大学 Intelligent clamping type AGV (automatic guided vehicle) cargo handling auxiliary device and method
US11701781B2 (en) 2020-06-16 2023-07-18 Dalian University Of Technology Multi-arm robot for realizing conversion between sitting and lying posture of patients and carrying patients to different positions
WO2021253629A1 (en) * 2020-06-16 2021-12-23 大连理工大学 Multi-arm robot for realizing sitting and lying posture switching and carrying of user
CN111908155A (en) * 2020-09-10 2020-11-10 佛山科学技术学院 Automatic loading and unloading system of container robot
CN112051853A (en) * 2020-09-18 2020-12-08 哈尔滨理工大学 Intelligent obstacle avoidance system and method based on machine vision
CN112408281A (en) * 2020-09-28 2021-02-26 亿嘉和科技股份有限公司 Bucket adjusting operation guiding method of bucket arm vehicle based on visual tracking
CN112099513A (en) * 2020-11-09 2020-12-18 天津联汇智造科技有限公司 Method and system for accurately taking materials by mobile robot
CN112589809A (en) * 2020-12-03 2021-04-02 武汉理工大学 Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method
CN112621750A (en) * 2020-12-07 2021-04-09 合肥阿格德信息科技有限公司 Automatic control system of industrial robot
CN114633248B (en) * 2020-12-16 2024-04-12 北京极智嘉科技股份有限公司 Robot and positioning method
CN114633248A (en) * 2020-12-16 2022-06-17 北京极智嘉科技股份有限公司 Robot and positioning method
WO2022127541A1 (en) * 2020-12-16 2022-06-23 北京极智嘉科技股份有限公司 Robot and localization method
CN112577509A (en) * 2020-12-28 2021-03-30 炬星科技(深圳)有限公司 Robot operation navigation method and robot
CN112947407A (en) * 2021-01-14 2021-06-11 华南理工大学 Multi-agent finite-time formation path tracking control method and system
CN113084817B (en) * 2021-04-15 2022-08-19 中国科学院自动化研究所 Object searching and grabbing control method of underwater robot in turbulent flow environment
CN113084817A (en) * 2021-04-15 2021-07-09 中国科学院自动化研究所 Object searching and grabbing control method of underwater bionic robot in turbulent flow environment
CN113296516B (en) * 2021-05-24 2022-07-12 淮阴工学院 Robot control method for automatically lifting automobile
CN113296516A (en) * 2021-05-24 2021-08-24 淮阴工学院 Robot control method for automatically lifting automobile
CN114210589A (en) * 2021-11-01 2022-03-22 中国工商银行股份有限公司保定分行 Automatic sorting system of intelligent vault
CN114210589B (en) * 2021-11-01 2024-03-15 中国工商银行股份有限公司保定分行 Automatic sorting system of intelligent vault
CN114029997A (en) * 2021-12-16 2022-02-11 广州城市理工学院 Working method of mechanical arm
CN114820619A (en) * 2022-06-29 2022-07-29 深圳市信润富联数字科技有限公司 Tuber plant sorting method, system, computer device and storage medium
CN115026836A (en) * 2022-07-21 2022-09-09 深圳市华成工业控制股份有限公司 Control method, device and equipment of five-axis manipulator and storage medium
CN115218918B (en) * 2022-09-20 2022-12-27 上海仙工智能科技有限公司 Intelligent blind guiding method and blind guiding equipment
CN115218918A (en) * 2022-09-20 2022-10-21 上海仙工智能科技有限公司 Intelligent blind guiding method and blind guiding equipment

Similar Documents

Publication Publication Date Title
CN102902271A (en) Binocular vision-based robot target identifying and gripping system and method
US8577126B2 (en) System and method for cooperative remote vehicle behavior
CN108885459B (en) Navigation method, navigation system, mobile control system and mobile robot
US20090180668A1 (en) System and method for cooperative remote vehicle behavior
CN100360204C (en) Control system of intelligent perform robot based on multi-processor cooperation
CN109262623B (en) Traction navigation autonomous mobile robot
Van den Bergh et al. Real-time 3D hand gesture interaction with a robot for understanding directions from humans
Monajjemi et al. UAV, come to me: End-to-end, multi-scale situated HRI with an uninstrumented human and a distant UAV
CN114080583B (en) Visual teaching and repetitive movement manipulation system
US10913151B1 (en) Object hand-over between robot and actor
CN109571513B (en) Immersive mobile grabbing service robot system
TWI694904B (en) Robot speech control system and method
CN110147106A (en) Has the intelligent Mobile Service robot of laser and vision fusion obstacle avoidance system
CN102323817A (en) Service robot control platform system and multimode intelligent interaction and intelligent behavior realizing method thereof
CN102848388A (en) Service robot locating and grabbing method based on multiple sensors
CN106354161A (en) Robot motion path planning method
US20190184569A1 (en) Robot based on artificial intelligence, and control method thereof
US20170348858A1 (en) Multiaxial motion control device and method, in particular control device and method for a robot arm
JP5145569B2 (en) Object identification method and apparatus
Gromov et al. Proximity human-robot interaction using pointing gestures and a wrist-mounted IMU
JP6134895B2 (en) Robot control system, robot control program, and explanation robot
JP6134894B2 (en) Robot control system and robot
CN110744544A (en) Service robot vision grabbing method and service robot
CN114505840B (en) Intelligent service robot for independently operating box type elevator
KR20210026595A (en) Method of moving in administrator mode and robot of implementing thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130130