CN105729468A - Enhanced robot workbench based on multiple depth cameras - Google Patents


Info

Publication number
CN105729468A
CN105729468A (application CN201610056940.1A)
Authority
CN
China
Prior art keywords
coordinate system
target object
camera
mechanical hand
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610056940.1A
Other languages
Chinese (zh)
Other versions
CN105729468B (en)
Inventor
李石坚
杨莎
陶海
焦文均
叶振宇
潘纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zheda Xitou Brain Computer Intelligent Technology Co.,Ltd.
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201610056940.1A priority Critical patent/CN105729468B/en
Publication of CN105729468A publication Critical patent/CN105729468A/en
Application granted granted Critical
Publication of CN105729468B publication Critical patent/CN105729468B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40005Vision, analyse image at one station during manipulation at next station

Abstract

The invention discloses an enhanced robot workbench based on multiple depth cameras. The workbench comprises a work platform, a manipulator mounted on the platform, several depth cameras arranged around the platform, and a control processor. The multiple depth cameras enhance the visual perception of the robot workbench, and an automatic calibration method between the depth cameras and the robot makes target locating more convenient and rapid. Because several depth cameras observe the scene from different angles, the three-dimensional coordinates of a target object can be recognized and located accurately even when the object is occluded. During target recognition, narrowing the image matching region greatly improves recognition efficiency.

Description

Robotic workstation enhanced by multiple depth cameras
Technical field
The invention belongs to the field of computer intelligence technology, and specifically relates to a robotic workstation enhanced by multiple depth cameras.
Background technology
With the rapid development of intelligent robots, they are being applied widely in industry, medical care, services and other fields. The manipulator plays a vital role in completing robot tasks: given the features of its structure and its degrees of freedom, it can complete specific tasks such as moving to the position of a target object and grasping it. To make manipulators more intelligent, external sensors are installed; non-contact sensors such as cameras and laser scanners are important for a robot's perception of the external environment. Vision sensors let the manipulator perceive its surroundings better, so that it can interact with people more naturally and provide more services.
Vision already has mature applications in object recognition, detection and tracking. Vision systems are widely used in intelligent and mobile robots; they enable a robot to perceive its surroundings better and provide positional information. For example, the Willow Garage PR2 robot can recognize objects with its binocular camera and then grasp them.
Monocular and binocular cameras are widely used in robot vision systems. In an actual indoor environment, a Pioneer 3 robot based on monocular vision can achieve global localization of a mobile robot. A monocular camera is simple in structure but cannot obtain three-dimensional information about an object. A binocular camera can obtain an object's three-dimensional position, providing positional information for target matching and for mobile-robot path planning. One home-service robot uses the Bumblebee2 from Point Grey (Canada) as a binocular vision sensor: it performs HSV (hue-saturation-value) threshold segmentation on the color information of the target object to obtain its three-dimensional coordinates. To obtain three-dimensional information, however, a binocular camera must be calibrated and rectified, and calibration errors propagate into the subsequent object matching, affecting the accuracy of the object's three-dimensional coordinates. Moreover, in binocular or monocular vision systems the camera is mounted on the robot and perceives only the environment the robot faces; the environment behind or around the robot cannot be observed.
Summary of the invention
In view of the above technical problems in the prior art, the invention provides a robotic workstation enhanced by multiple depth cameras. It achieves automatic calibration between the depth cameras and the robot; the robot identifies and locates the accurate position of a target object using several depth cameras arranged at different angles on the work platform, then moves to the target's position and grasps it.
A robotic workstation enhanced by multiple depth cameras comprises: a work platform, a manipulator mounted on the work platform, several depth cameras arranged around the work platform, and a control processor. The manipulator carries a gripper whose palm center bears a QR code; the code encodes the positions, in the robot coordinate system, of four non-coplanar fixed points that lie within the manipulator's reachable space.
Each depth camera acquires images of the work platform and supplies them to the control processor. For each depth camera, the control processor processes the acquired images to identify the target object on the platform and determine its three-dimensional position in that camera's coordinate system; it also automatically calibrates the camera coordinate system against the robot coordinate system, computes the target's three-dimensional position in the robot coordinate system by coordinate transformation, and then drives the manipulator to the vicinity of the target and controls the gripper to grasp it.
The number of depth cameras is at least three, and Kinect cameras are used as the depth cameras.
The control processor identifies the target object and determines its three-dimensional position in the camera coordinate system as follows:
First, multiple templates of the target object are obtained;
Then, the image acquired by the depth camera is cropped to obtain a region of interest (ROI);
Finally, the target object is searched for and matched in the ROI according to the templates, and its three-dimensional position in the camera coordinate system is determined.
Cropping the image acquired by the depth camera means removing the spatial region of the image that the manipulator cannot reach.
The control processor uses the Affine-SIFT algorithm to search for and match the target object in the ROI according to the templates.
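The patent relies on Affine-SIFT for the template search; as a minimal, hypothetical stand-in that illustrates the crop-then-match idea (not the Affine-SIFT algorithm itself), the sketch below crops the ROI and locates a grayscale template by normalized cross-correlation in NumPy:

```python
import numpy as np

def crop_roi(image, x0, y0, x1, y1):
    """Restrict the search to the region the manipulator can reach."""
    return image[y0:y1, x0:x1]

def match_template(roi, template):
    """Return ((row, col), score): top-left corner of the window in `roi`
    with the highest normalized cross-correlation against `template`."""
    th, tw = template.shape
    rh, rw = roi.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(rh - th + 1):
        for c in range(rw - tw + 1):
            w = roi[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * np.linalg.norm(t)
            score = (wz * t).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

In practice a feature-based matcher such as Affine-SIFT tolerates viewpoint change far better than this brute-force correlation; the sketch only shows why shrinking the search region reduces work.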
Automatically calibrating the camera coordinate system against the robot coordinate system means computing the rotation matrix and translation vector between the two.
The control processor automatically calibrates the camera coordinate system against the robot coordinate system as follows:
First, the control processor decodes the QR code appearing in the image to obtain the positions of the four fixed points in the robot coordinate system, and then directs the manipulator to these four points one by one;
After the manipulator arrives at a fixed point, the control processor uses the depth camera to acquire the current position of the QR code's center in the camera coordinate system and, combining it with the fixed point's position in the robot coordinate system, computes the current position of the QR code's center in the robot coordinate system; the four fixed points are traversed in this way;
Finally, from the four resulting pairs of QR-code center positions in the camera and robot coordinate systems, the rotation matrix and translation vector between the two coordinate systems are computed.
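Obtaining the QR-code center's 3-D camera-frame position from its detected 2-D image position requires the depth value and the camera intrinsics. A minimal pinhole back-projection sketch (the intrinsic values in the test are hypothetical, Kinect-like numbers, not values from the patent):

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into camera coordinates
    using the pinhole model with intrinsics fx, fy, cx, cy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```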
The manipulator is considered to have arrived at a fixed point when the connection point between the manipulator and the gripper coincides with that point.
With multiple depth cameras, the control processor computes several estimates of the target object's three-dimensional position in the robot coordinate system. It compares the estimates, rejects those whose error exceeds an acceptable range, and takes any one of the remaining estimates, or their average, as the final three-dimensional position of the target in the robot coordinate system; the manipulator then moves to the vicinity of the target and the gripper grasps it.
The invention enhances the visual perception of the robotic workstation with multiple depth cameras; the automatic calibration method between the depth cameras and the robot makes calibration more convenient and rapid; multiple depth cameras allow the three-dimensional coordinates of the target to be identified and located accurately even when the object is occluded; and during target recognition, narrowing the image matching region greatly improves recognition efficiency.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the robotic workstation of the invention.
Fig. 2 is a schematic diagram of the workflow of the robotic workstation of the invention.
Fig. 3 is a schematic flowchart of the automatic calibration of the robotic workstation of the invention.
Fig. 4 is a schematic diagram of the QR code on the gripper.
Detailed description of the invention
To describe the invention more specifically, its technical scheme is described in detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the robotic workstation enhanced by multiple depth cameras comprises several depth cameras, a work platform, a manipulator, a gripper and a control processor, wherein:
The depth cameras are fixed around the robot; in this embodiment there are three, placed at the front, left and right. They acquire images of the robot's surroundings in real time.
The control processor recognizes the target object from the images acquired by the depth cameras, converts its three-dimensional coordinates in the camera coordinate system into the robot coordinate system, then drives the manipulator to the target's position and adjusts the direction, angle and mode of the gripper to grasp the target.
In this embodiment the depth cameras are Kinect cameras, the manipulator is an EPSON robot, and the gripper is a Robotiq 3-Finger Adaptive Gripper.
As shown in Fig. 2, the robotic workstation enhanced by multiple depth cameras works as follows:
First, the control processor calibrates the depth cameras against the robot automatically; the concrete steps are shown in Fig. 3.
A QR code (Fig. 4) identifying the gripper's center point is attached to the center of the gripper at the end of the robot. The content of the QR code is the set of index points to which the manipulator moves automatically; four points that are not coplanar in the robot coordinate system are chosen. The robot controls the orientation of the gripper by adjusting the end-effector angles (U, V, W), which denote rotations of the coordinate system about the Z, Y and X axes respectively; the total rotation matrix is R = Rz * Ry * Rx,
where:

R_z = [  cos(U)   sin(U)    0
        -sin(U)   cos(U)    0
           0        0       1  ]

R_y = [  cos(V)     0    -sin(V)
           0        1       0
         sin(V)     0     cos(V)  ]

R_x = [    1        0       0
           0     cos(W)   sin(W)
           0    -sin(W)   cos(W)  ]
Therefore, given the coordinates of the four index points set in the robot coordinate system, each index point's coordinate in the robot coordinate system is obtained as C_robot = M * R + C_tool, where C_robot is the index point's coordinate in the robot coordinate system, M is the vector from the end of the arm to the index point, and C_tool is the coordinate of the arm's end. The positions of the four index points are encoded into the QR code.
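The rotation composition and the index-point formula C_robot = M * R + C_tool can be sketched in NumPy as follows (M and C_tool treated as row vectors, matching the left-multiplication in the formula; function names are illustrative):

```python
import numpy as np

def rot_z(u):
    c, s = np.cos(u), np.sin(u)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def rot_y(v):
    c, s = np.cos(v), np.sin(v)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def rot_x(w):
    c, s = np.cos(w), np.sin(w)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def marker_in_robot(M, U, V, W, C_tool):
    """Index-point coordinate in the robot frame: C_robot = M*R + C_tool,
    with total rotation R = Rz(U) * Ry(V) * Rx(W)."""
    R = rot_z(U) @ rot_y(V) @ rot_x(W)
    return M @ R + C_tool
```

Each factor is orthogonal with determinant 1, so the composed R is a valid rotation regardless of the angle values.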
The control processor recognizes the QR code in the image acquired by a depth camera and decodes the positions of the four index points, then directs the robot to move to the four index points in turn. In the images acquired in real time, the control processor detects the position of the QR code's center, takes it as the index point's position in the image, and converts the center's two-dimensional coordinates into three-dimensional coordinates in the camera coordinate system, denoted C_camera. From C_camera = R * (C_robot - T), the rotation matrix R and translation vector T relating the camera coordinate system to the robot coordinate system are obtained. Each of the three Kinects is calibrated separately, yielding its own rotation matrix R and translation vector T.
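The patent does not spell out how R and T are solved from the four point pairs; a standard choice for the model C_camera = R * (C_robot - T) is the SVD-based Kabsch method, sketched here under that assumption:

```python
import numpy as np

def calibrate(robot_pts, cam_pts):
    """Estimate R, T with cam = R @ (robot - T) from paired 3-D points
    (Kabsch/SVD). The patent uses four non-coplanar fixed points;
    any >= 3 non-collinear pairs determine the transform."""
    P = np.asarray(robot_pts, float)   # points in the robot frame, (N, 3)
    Q = np.asarray(cam_pts, float)     # the same points in the camera frame
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    H = Pc.T @ Qc                      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # rotation mapping robot -> camera
    T = P.mean(0) - R.T @ Q.mean(0)    # from centroid relation q̄ = R(p̄ - T)
    return R, T
```

Using four non-coplanar points, as the patent specifies, makes the cross-covariance full rank so the rotation is uniquely determined.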
So that the target object can still be seen by a Kinect when it is occluded by other objects, several Kinects are fixed at several angles around the robot. In the experiment, three Kinects were mounted at the front, left and right of the robot, recognizing and locating the target from three angles. To match the target more accurately, for each Kinect the control processor selects multiple images of the target under different viewing angles as matching templates, and matches them against the Kinect's acquired image with the Affine-SIFT algorithm during recognition.
To improve matching efficiency, the acquired image is reduced to the region the manipulator can reach, which avoids extracting and matching redundant feature points and greatly improves recognition efficiency. The control processor takes the center of the recognized feature points as the target's position in the image and, using the Kinect's registration between its color and depth images, obtains the center's three-dimensional coordinates in the camera coordinate system from its two-dimensional coordinates. Through the coordinate transformation between the camera and robot coordinate systems, the target's three-dimensional coordinates in the robot coordinate system are obtained. The coordinates obtained by the three Kinects are then cross-checked: if the error among the three values is within the accepted threshold, the point is accepted as the object's coordinate; if not, the coordinate point is discarded and exception handling is performed.
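The cross-check over the per-Kinect estimates can be sketched as median-based outlier rejection followed by averaging. The median rule is an assumption for illustration; the patent only requires that the errors stay within an accepted threshold:

```python
import numpy as np

def fuse_estimates(points, threshold):
    """Given per-camera 3-D estimates of the same object (robot frame),
    discard estimates farther than `threshold` from the per-axis median
    and return the mean of the rest; None if no estimate survives."""
    pts = np.asarray(points, float)
    med = np.median(pts, axis=0)
    keep = np.linalg.norm(pts - med, axis=1) <= threshold
    if not keep.any():
        return None           # all estimates disagree: trigger exception handling
    return pts[keep].mean(axis=0)
```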
The above description of the embodiments is intended to help those skilled in the art understand and apply the invention. Those skilled in the art can obviously make various modifications to these embodiments and apply the general principles described herein to other embodiments without creative labor. Therefore, the invention is not limited to the embodiments above; improvements and modifications made according to this disclosure shall fall within the protection scope of the invention.

Claims (9)

1. A robotic workstation enhanced by multiple depth cameras, characterized by comprising: a work platform, a manipulator mounted on the work platform, several depth cameras arranged around the work platform, and a control processor; the manipulator carries a gripper whose palm center bears a QR code, and the QR code encodes the positions, in the robot coordinate system, of four non-coplanar fixed points within the manipulator's reachable space;
each depth camera acquires images of the work platform and supplies them to the control processor; for each depth camera, the control processor processes the acquired images to identify the target object on the platform and determine its three-dimensional position in that camera's coordinate system, automatically calibrates the camera coordinate system against the robot coordinate system, computes the target's three-dimensional position in the robot coordinate system by coordinate transformation, and then drives the manipulator to the vicinity of the target and controls the gripper to grasp it.
2. The robotic workstation according to claim 1, characterized in that the number of depth cameras is at least three and the depth cameras are Kinect cameras.
3. The robotic workstation according to claim 1, characterized in that the control processor identifies the target object and determines its three-dimensional position in the camera coordinate system as follows:
First, multiple templates of the target object are obtained;
Then, the image acquired by the depth camera is cropped to obtain a region of interest (ROI);
Finally, the target object is searched for and matched in the ROI according to the templates, and its three-dimensional position in the camera coordinate system is determined.
4. The robotic workstation according to claim 3, characterized in that cropping the image acquired by the depth camera means removing the spatial region of the image that the manipulator cannot reach.
5. The robotic workstation according to claim 3, characterized in that the control processor uses the Affine-SIFT algorithm to search for and match the target object in the ROI according to the templates.
6. The robotic workstation according to claim 1, characterized in that automatically calibrating the camera coordinate system against the robot coordinate system means computing the rotation matrix and translation vector between the two.
7. The robotic workstation according to claim 1, characterized in that the control processor automatically calibrates the camera coordinate system against the robot coordinate system as follows:
First, the control processor decodes the QR code appearing in the image to obtain the positions of the four fixed points in the robot coordinate system, and then directs the manipulator to these four points one by one;
After the manipulator arrives at a fixed point, the control processor uses the depth camera to acquire the current position of the QR code's center in the camera coordinate system and, combining it with the fixed point's position in the robot coordinate system, computes the current position of the QR code's center in the robot coordinate system; the four fixed points are traversed in this way;
Finally, from the four resulting pairs of QR-code center positions in the camera and robot coordinate systems, the rotation matrix and translation vector between the two coordinate systems are computed.
8. The robotic workstation according to claim 7, characterized in that the manipulator is considered to have arrived at a fixed point when the connection point between the manipulator and the gripper coincides with that point.
9. The robotic workstation according to claim 1, characterized in that, with multiple depth cameras, the control processor computes several estimates of the target object's three-dimensional position in the robot coordinate system, compares them, rejects those whose error exceeds an acceptable range, and takes any one of the remaining estimates, or their average, as the final three-dimensional position of the target in the robot coordinate system; the manipulator then moves to the vicinity of the target and the gripper grasps it.
CN201610056940.1A 2016-01-27 2016-01-27 A kind of robotic workstation based on the enhancing of more depth cameras Active CN105729468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610056940.1A CN105729468B (en) 2016-01-27 2016-01-27 A kind of robotic workstation based on the enhancing of more depth cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610056940.1A CN105729468B (en) 2016-01-27 2016-01-27 A kind of robotic workstation based on the enhancing of more depth cameras

Publications (2)

Publication Number Publication Date
CN105729468A true CN105729468A (en) 2016-07-06
CN105729468B CN105729468B (en) 2018-01-09

Family

ID=56247763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610056940.1A Active CN105729468B (en) 2016-01-27 2016-01-27 A kind of robotic workstation based on the enhancing of more depth cameras

Country Status (1)

Country Link
CN (1) CN105729468B (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106272478A (en) * 2016-09-30 2017-01-04 河海大学常州校区 A kind of full-automatic shopping robot and using method
CN106291278A (en) * 2016-08-03 2017-01-04 国网山东省电力公司电力科学研究院 A kind of partial discharge of switchgear automatic testing method based on many visual systemes
CN106504321A (en) * 2016-11-07 2017-03-15 达理 Method using the method for photo or video reconstruction three-dimensional tooth mould and using RGBD image reconstructions three-dimensional tooth mould
CN106553195A (en) * 2016-11-25 2017-04-05 中国科学技术大学 Object 6DOF localization method and system during industrial robot crawl
CN106774309A (en) * 2016-12-01 2017-05-31 天津工业大学 A kind of mobile robot is while visual servo and self adaptation depth discrimination method
CN107170011A (en) * 2017-04-24 2017-09-15 杭州司兰木科技有限公司 A kind of robot vision tracking and system
CN107291811A (en) * 2017-05-18 2017-10-24 浙江大学 A kind of sense cognition enhancing robot system based on high in the clouds knowledge fusion
CN108074264A (en) * 2017-11-30 2018-05-25 深圳市智能机器人研究院 A kind of classification multi-vision visual localization method, system and device
CN108115688A (en) * 2017-12-29 2018-06-05 深圳市越疆科技有限公司 Crawl control method, system and the mechanical arm of a kind of mechanical arm
WO2018108098A1 (en) * 2016-12-14 2018-06-21 国网江苏省电力公司常州供电公司 Autonomous operation method for live working robot based on multi-sensor information fusion
CN108453739A (en) * 2018-04-04 2018-08-28 北京航空航天大学 Stereoscopic vision positioning mechanical arm grasping system and method based on automatic shape fitting
CN109886278A (en) * 2019-01-17 2019-06-14 柳州康云互联科技有限公司 A kind of characteristics of image acquisition method based on ARMarker
CN110032922A (en) * 2017-12-13 2019-07-19 虚拟现实软件 The method and system of augmented reality is provided for mining
CN110253575A (en) * 2019-06-17 2019-09-20 深圳前海达闼云端智能科技有限公司 Robot grabbing method, terminal and computer readable storage medium
CN110900581A (en) * 2019-12-27 2020-03-24 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN110936378A (en) * 2019-12-04 2020-03-31 中科新松有限公司 Robot hand-eye relation automatic calibration method based on incremental compensation
CN111331604A (en) * 2020-03-23 2020-06-26 北京邮电大学 Machine vision-based valve screwing flexible operation method
CN111339957A (en) * 2020-02-28 2020-06-26 广州中智融通金融科技有限公司 Image recognition-based cashbox bundle state detection method, system and medium
CN111880522A (en) * 2020-06-01 2020-11-03 东莞理工学院 Novel autonomous assembly robot path planning autonomous navigation system and method
CN112207839A (en) * 2020-09-15 2021-01-12 西安交通大学 Mobile household service robot and method
CN112659133A (en) * 2020-12-31 2021-04-16 软控股份有限公司 Glue grabbing method, device and equipment based on machine vision
CN112757300A (en) * 2020-12-31 2021-05-07 广东美的白色家电技术创新中心有限公司 Robot protection system and method
CN113813170A (en) * 2021-08-30 2021-12-21 中科尚易健康科技(北京)有限公司 Target point conversion method between cameras of multi-camera physiotherapy system
CN114750155A (en) * 2022-04-26 2022-07-15 广东天太机器人有限公司 Object classification control system and method based on industrial robot
CN114800508A (en) * 2022-04-24 2022-07-29 广东天太机器人有限公司 Grabbing control system and method of industrial robot
CN115026822A (en) * 2022-06-14 2022-09-09 广东天太机器人有限公司 Industrial robot control system and method based on feature point docking
CN116919391A (en) * 2023-07-25 2023-10-24 凝动万生医疗科技(武汉)有限公司 Movement disorder assessment method and apparatus
CN111339957B (en) * 2020-02-28 2024-04-26 广州运通科金技术有限公司 Method, system and medium for detecting state of money bundle in vault based on image recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1293752A (en) * 1999-03-19 2001-05-02 松下电工株式会社 Three-D object recognition method and pin picking system using the method
JP2013154457A (en) * 2012-01-31 2013-08-15 Asahi Kosan Kk Workpiece transfer system, workpiece transfer method, and program
CN103500321A (en) * 2013-07-03 2014-01-08 无锡信捷电气股份有限公司 Visual guidance welding robot weld joint fast recognition technology based on double dynamic windows
CN103707300A (en) * 2013-12-20 2014-04-09 上海理工大学 Manipulator device
CN104842361A (en) * 2014-02-13 2015-08-19 通用汽车环球科技运作有限责任公司 Robotic system with 3d box location functionality
JP2015182144A (en) * 2014-03-20 2015-10-22 キヤノン株式会社 Robot system and calibration method of robot system


Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106291278A (en) * 2016-08-03 2017-01-04 国网山东省电力公司电力科学研究院 A kind of partial discharge of switchgear automatic testing method based on many visual systemes
CN106291278B (en) * 2016-08-03 2019-01-15 国网山东省电力公司电力科学研究院 A kind of partial discharge of switchgear automatic testing method based on more vision systems
CN106272478A (en) * 2016-09-30 2017-01-04 河海大学常州校区 A kind of full-automatic shopping robot and using method
CN106504321A (en) * 2016-11-07 2017-03-15 达理 Method using the method for photo or video reconstruction three-dimensional tooth mould and using RGBD image reconstructions three-dimensional tooth mould
CN106553195A (en) * 2016-11-25 2017-04-05 中国科学技术大学 Object 6DOF localization method and system during industrial robot crawl
CN106553195B (en) * 2016-11-25 2018-11-27 中国科学技术大学 Object 6DOF localization method and system during industrial robot crawl
CN106774309A (en) * 2016-12-01 2017-05-31 天津工业大学 A kind of mobile robot is while visual servo and self adaptation depth discrimination method
CN106774309B (en) * 2016-12-01 2019-09-17 天津工业大学 A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously
WO2018108098A1 (en) * 2016-12-14 2018-06-21 国网江苏省电力公司常州供电公司 Autonomous operation method for live working robot based on multi-sensor information fusion
CN107170011B (en) * 2017-04-24 2019-12-17 杭州艾芯智能科技有限公司 robot vision tracking method and system
CN107170011A (en) * 2017-04-24 2017-09-15 杭州司兰木科技有限公司 A kind of robot vision tracking and system
CN107291811A (en) * 2017-05-18 2017-10-24 浙江大学 A kind of sense cognition enhancing robot system based on high in the clouds knowledge fusion
CN107291811B (en) * 2017-05-18 2019-11-29 浙江大学 A kind of sense cognition enhancing robot system based on cloud knowledge fusion
CN108074264A (en) * 2017-11-30 2018-05-25 深圳市智能机器人研究院 A kind of classification multi-vision visual localization method, system and device
CN110032922A (en) * 2017-12-13 2019-07-19 虚拟现实软件 The method and system of augmented reality is provided for mining
CN108115688A (en) * 2017-12-29 2018-06-05 深圳市越疆科技有限公司 Crawl control method, system and the mechanical arm of a kind of mechanical arm
CN108453739A (en) * 2018-04-04 2018-08-28 北京航空航天大学 Stereoscopic vision positioning mechanical arm grasping system and method based on automatic shape fitting
CN109886278A (en) * 2019-01-17 2019-06-14 柳州康云互联科技有限公司 Image feature acquisition method based on ARMarker
CN110253575B (en) * 2019-06-17 2021-12-24 达闼机器人有限公司 Robot grabbing method, terminal and computer readable storage medium
CN110253575A (en) * 2019-06-17 2019-09-20 深圳前海达闼云端智能科技有限公司 Robot grabbing method, terminal and computer readable storage medium
CN110936378A (en) * 2019-12-04 2020-03-31 中科新松有限公司 Robot hand-eye relation automatic calibration method based on incremental compensation
CN110900581B (en) * 2019-12-27 2023-12-22 福州大学 Four-degree-of-freedom mechanical arm visual servo control method and device based on RealSense camera
CN110900581A (en) * 2019-12-27 2020-03-24 福州大学 Four-degree-of-freedom mechanical arm visual servo control method and device based on RealSense camera
CN111339957A (en) * 2020-02-28 2020-06-26 广州中智融通金融科技有限公司 Image recognition-based cashbox bundle state detection method, system and medium
CN111339957B (en) * 2020-02-28 2024-04-26 广州运通科金技术有限公司 Method, system and medium for detecting state of money bundle in vault based on image recognition
CN111331604A (en) * 2020-03-23 2020-06-26 北京邮电大学 Machine vision-based valve screwing flexible operation method
CN111880522A (en) * 2020-06-01 2020-11-03 东莞理工学院 Path planning and autonomous navigation system and method for a novel autonomous assembly robot
CN112207839A (en) * 2020-09-15 2021-01-12 西安交通大学 Mobile household service robot and method
CN112659133A (en) * 2020-12-31 2021-04-16 软控股份有限公司 Glue grabbing method, device and equipment based on machine vision
CN112757300A (en) * 2020-12-31 2021-05-07 广东美的白色家电技术创新中心有限公司 Robot protection system and method
CN113813170A (en) * 2021-08-30 2021-12-21 中科尚易健康科技(北京)有限公司 Target point conversion method between cameras of multi-camera physiotherapy system
CN113813170B (en) * 2021-08-30 2023-11-24 中科尚易健康科技(北京)有限公司 Method for converting target points among cameras of multi-camera physiotherapy system
CN114800508B (en) * 2022-04-24 2022-11-18 广东天太机器人有限公司 Grabbing control system and method of industrial robot
CN114800508A (en) * 2022-04-24 2022-07-29 广东天太机器人有限公司 Grabbing control system and method of industrial robot
CN114750155A (en) * 2022-04-26 2022-07-15 广东天太机器人有限公司 Object classification control system and method based on industrial robot
CN115026822A (en) * 2022-06-14 2022-09-09 广东天太机器人有限公司 Industrial robot control system and method based on feature point docking
CN116919391A (en) * 2023-07-25 2023-10-24 凝动万生医疗科技(武汉)有限公司 Movement disorder assessment method and apparatus

Also Published As

Publication number Publication date
CN105729468B (en) 2018-01-09

Similar Documents

Publication Publication Date Title
CN105729468A (en) Enhanced robot workbench based on multiple depth cameras
CN111089569B (en) Large box body measuring method based on monocular vision
CN107914272B (en) Method for grabbing target object by seven-degree-of-freedom mechanical arm assembly
WO2021109575A1 (en) Global vision and local vision integrated robot vision guidance method and device
CN107186708B (en) Hand-eye servo robot grabbing system and method based on deep learning image segmentation technology
CN108182689B (en) Three-dimensional identification and positioning method for plate-shaped workpiece applied to robot carrying and polishing field
CN110751691B (en) Automatic pipe fitting grabbing method based on binocular vision
Jiang et al. An overview of hand-eye calibration
CN106041937A (en) Control method of manipulator grabbing control system based on binocular stereoscopic vision
CN111476841B (en) Point cloud and image-based identification and positioning method and system
EP3510562A1 (en) Method and system for calibrating multiple cameras
JPH08136220A (en) Method and device for detecting position of article
JP2018128897A (en) Detection method and detection program for detecting attitude and the like of object
CN112518748B (en) Automatic grabbing method and system for visual mechanical arm for moving object
WO2021103558A1 (en) Rgb-d data fusion-based robot vision guiding method and apparatus
CN115629066A (en) Method and device for automatic wiring based on visual guidance
Wan et al. High-precision six-degree-of-freedom pose measurement and grasping system for large-size object based on binocular vision
CN111598172B (en) Dynamic target grabbing gesture rapid detection method based on heterogeneous depth network fusion
Shen et al. A multi-view camera-projector system for object detection and robot-human feedback
Wang et al. GraspFusionNet: a two-stage multi-parameter grasp detection network based on RGB–XYZ fusion in dense clutter
Fan et al. An automatic robot unstacking system based on binocular stereo vision
KR102452315B1 (en) Apparatus and method of robot control through vision recognition using deep learning and marker
CN107020545A (en) Apparatus and method for recognizing mechanical workpiece pose
Ren et al. Vision based object grasping of robotic manipulator
Kheng et al. Stereo vision with 3D coordinates for robot arm application guide

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200706

Address after: 310013 3/F, Building C, National University Science Park, Zhejiang University, 525 Xixi Road, Hangzhou, Zhejiang Province

Patentee after: Zhejiang University Holding Group Co., Ltd

Address before: 310027 No. 38, Zhejiang Road, Xihu District, Hangzhou, Zhejiang Province

Patentee before: ZHEJIANG University

TR01 Transfer of patent right

Effective date of registration: 20210723

Address after: Room 801-804, building 1, Zhihui Zhongchuang center, Xihu District, Hangzhou City, Zhejiang Province, 310013

Patentee after: Zhejiang Zheda Xitou Brain Computer Intelligent Technology Co.,Ltd.

Address before: 3/F, Building C, National University Science Park, Zhejiang University, 525 Xixi Road, Hangzhou, Zhejiang 310013

Patentee before: Zhejiang University Holding Group Co., Ltd
