CN105729468B - Robot workstation enhanced by multiple depth cameras - Google Patents

Robot workstation enhanced by multiple depth cameras

Info

Publication number
CN105729468B
CN105729468B CN201610056940.1A
Authority
CN
China
Prior art keywords
coordinate system
target object
camera
control processor
manipulator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610056940.1A
Other languages
Chinese (zh)
Other versions
CN105729468A (en)
Inventor
李石坚
杨莎
陶海
焦文均
叶振宇
潘纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zheda Xitou Brain Computer Intelligent Technology Co.,Ltd.
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN201610056940.1A
Publication of CN105729468A
Application granted
Publication of CN105729468B
Legal status: Active
Anticipated expiration


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/40 Robotics, robotics mapping to robotics vision
    • G05B2219/40005 Vision, analyse image at one station during manipulation at next station

Abstract

The invention discloses a robot workstation enhanced by multiple depth cameras, comprising a worktable, a manipulator mounted on the worktable, multiple depth cameras arranged around the worktable, and a control processor. The invention strengthens the visual perception of the robot workstation with multiple depth cameras; its method for automatic calibration between the depth cameras and the robot makes calibration more convenient and faster. Using multiple depth cameras, the invention can accurately recognize and locate the three-dimensional coordinates of a target object even when the object is occluded. During target recognition, shrinking the image region to be matched greatly improves the efficiency of image recognition.

Description

Robot workstation enhanced by multiple depth cameras
Technical field
The invention belongs to the field of computer intelligence technology, and in particular relates to a robot workstation enhanced by multiple depth cameras.
Background technology
With the rapid development of intelligent robots, they are now widely applied in industry, medical care, services, and many other fields. The manipulator plays a vital role in completing robot tasks: given the characteristics of its structure and its degrees of freedom, it can complete specific tasks, for example moving to the position of a target object and grasping it. To make manipulators more intelligent, external sensors are installed on them; non-contact sensors such as cameras and laser scanners play an important role in the robot's perception of the external environment. Vision sensors allow a manipulator to better perceive its surroundings, so that it can interact better with people and provide more services.
Vision already has very mature applications in object recognition, detection, and tracking. Vision systems are widely used in intelligent robots and mobile robots, where they help a robot better perceive its surroundings and provide it with position information. For example, the Willow Garage PR2 robot can recognize objects with its built-in binocular camera and thereby grasp them.
Monocular and binocular cameras are widely used in robot vision systems. A Pioneer 3 robot based on monocular vision achieves global localization of a mobile robot within an environment map of an indoor environment. A monocular camera has a simple structure but cannot obtain three-dimensional information about objects. A binocular camera can obtain an object's three-dimensional position, thereby providing position information for target matching and for mobile-robot path planning. One home-service robot uses the Bumblebee2 from the Canadian company Point Grey as its binocular vision sensor, performing HSV (hue-saturation-value) threshold segmentation on the target object's color information to obtain its three-dimensional coordinates. When obtaining an object's three-dimensional information with a binocular camera, the camera must be calibrated and rectified; calibration errors affect subsequent object matching and, in turn, the accuracy of the object's three-dimensional coordinates. Moreover, in monocular or binocular vision systems the camera is mounted on the robot, so only the environment the robot faces is perceived, and the environment behind or around the robot cannot be obtained.
Summary of the invention
In view of the above technical problems in the prior art, the invention provides a robot workstation enhanced by multiple depth cameras. It can perform automatic calibration between the depth cameras and the robot; the robot recognizes and locates the precise position of a target object using multiple depth cameras arranged at different angles around the worktable, then moves to the target object's position and grasps it.
A robot workstation enhanced by multiple depth cameras comprises: a worktable, a manipulator mounted on the worktable, multiple depth cameras arranged around the worktable, and a control processor. The manipulator carries a gripper whose palm center bears a QR code; the QR code encodes the positions, in the robot coordinate system, of four non-coplanar calibration points within the spatial range reachable by the manipulator.
The depth cameras collect images of the worktable and supply them to the control processor. For each depth camera, the control processor processes the collected image, recognizes the target object on the worktable to determine its three-dimensional position in that camera's coordinate system, and automatically calibrates the camera coordinate system against the robot coordinate system. Through coordinate conversion it computes the target object's three-dimensional position in the robot coordinate system, then moves the manipulator near the target object and controls the gripper to grasp it.
The number of depth cameras is three or more, and each depth camera is a Kinect camera.
The control processor recognizes the target object and determines its three-dimensional position in the camera coordinate system as follows:
First, multiple templates of the target object are obtained;
Then, the image collected by the depth camera is cropped to obtain the ROI region;
Finally, the ROI region is searched according to the templates to find and match the target object, thereby determining the target object's three-dimensional position in the camera coordinate system.
The control processor crops the image collected by the depth camera by removing the spatial regions in the image that the manipulator cannot reach.
The control processor searches the ROI region for the target object using the Affine-SIFT algorithm with the templates.
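As a concrete illustration only (not part of the patent), the sketch below implements this ROI cropping and Affine-SIFT template search in Python with OpenCV, assuming OpenCV >= 4.5.1, where cv2.AffineFeature wraps a SIFT backend into ASIFT; the function names, the workspace rectangle, and the Lowe ratio threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Affine-SIFT (ASIFT): cv2.AffineFeature simulates affine viewpoint changes
# around a SIFT backend (available in OpenCV >= 4.5.1).
asift = cv2.AffineFeature.create(cv2.SIFT_create())
matcher = cv2.BFMatcher(cv2.NORM_L2)

def crop_roi(image, bounds_px):
    # Remove the image regions the manipulator cannot reach; bounds_px is the
    # pixel rectangle (x0, y0, x1, y1) of the reachable workspace, assumed
    # precomputed once from the calibration.
    x0, y0, x1, y1 = bounds_px
    return image[y0:y1, x0:x1]

def find_object(roi, templates, min_matches=10):
    # Try each template (one per viewing angle) against the ROI; return the
    # matched keypoint positions in ROI coordinates, or None if nothing fits.
    kp_r, des_r = asift.detectAndCompute(roi, None)
    if des_r is None:
        return None
    for tpl in templates:
        kp_t, des_t = asift.detectAndCompute(tpl, None)
        if des_t is None:
            continue
        pairs = matcher.knnMatch(des_t, des_r, k=2)
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < 0.75 * n.distance]   # Lowe ratio test
        if len(good) >= min_matches:
            return np.float32([kp_r[m.trainIdx].pt for m in good])
    return None
```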
Automatically calibrating the camera coordinate system against the robot coordinate system means that the control processor computes the rotation matrix and translation vector between the two.
The control processor performs the automatic calibration of the camera coordinate system and the robot coordinate system as follows:
First, the control processor parses the QR code appearing in the image to obtain the positions of the four calibration points in the robot coordinate system, and then controls the manipulator to reach the four calibration points one by one;
After the manipulator reaches a calibration point, the control processor acquires an image with the depth camera to obtain the current position of the QR code center in the camera coordinate system and, combining this with the calibration point's position in the robot coordinate system, determines the current QR code center's position in the robot coordinate system; all four calibration points are traversed in this way;
Finally, from the four resulting pairs of QR-code-center positions in the camera coordinate system and the robot coordinate system, the rotation matrix and translation vector between the camera coordinate system and the robot coordinate system are computed.
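The patent does not name a solver for this computation; one standard choice, sketched below, is the SVD-based (Kabsch) rigid alignment applied to the four point pairs, written for the model Ccamera = R(Crobot - T) that appears later in the description. Function and variable names are illustrative.

```python
import numpy as np

def calibrate(p_robot, p_camera):
    # p_robot, p_camera: (N, 3) arrays of corresponding QR-center positions
    # (here N = 4 non-coplanar calibration points). Returns R, T such that
    # p_camera ~= R @ (p_robot - T).
    cr, cc = p_robot.mean(axis=0), p_camera.mean(axis=0)
    H = (p_robot - cr).T @ (p_camera - cc)   # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = cr - R.T @ cc                        # from the centroid relation cc = R (cr - T)
    return R, T
```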
The manipulator is deemed to have reached a calibration point when the connection point between the manipulator and the gripper coincides with that calibration point.
With multiple depth cameras, the control processor computes multiple estimates of the target object's three-dimensional position in the robot coordinate system. Combining the estimates, it rejects those whose error exceeds an acceptable range, and from the remaining ones either picks one or averages them to determine the target object's final three-dimensional position in the robot coordinate system; the manipulator is then moved near the target object and the gripper grasps it.
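A minimal sketch of one way to implement this fusion step, assuming a median-distance rejection rule and averaging of the survivors (the patent only requires rejecting estimates whose error exceeds an acceptable range, then picking one or averaging):

```python
import numpy as np

def fuse_estimates(points, tol=0.02):
    # points: per-camera 3D estimates of the same object in the robot frame.
    # tol: acceptance threshold (meters, assumed). Returns the average of the
    # consistent estimates, or None to trigger exception handling.
    pts = np.asarray(points, dtype=float)
    med = np.median(pts, axis=0)
    keep = pts[np.linalg.norm(pts - med, axis=1) <= tol]
    return keep.mean(axis=0) if len(keep) else None
```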
The invention strengthens the visual perception of the robot workstation with multiple depth cameras; its method for automatic calibration between the depth cameras and the robot makes calibration more convenient and faster. Using multiple depth cameras, the invention can accurately recognize and locate the three-dimensional coordinates of a target object even when the object is occluded. During target recognition, shrinking the image region to be matched greatly improves the efficiency of image recognition.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the robot workstation of the invention.
Fig. 2 is a schematic diagram of the workflow of the robot workstation of the invention.
Fig. 3 is a flow diagram of the automatic calibration of the robot workstation of the invention.
Fig. 4 is a schematic diagram of the QR code on the gripper.
Embodiment
To describe the invention more specifically, its technical solution is described in detail below with reference to the accompanying drawings and an embodiment.
As shown in Fig. 1, the robot workstation enhanced by multiple depth cameras comprises multiple depth cameras, a worktable, a manipulator, a gripper, and a control processor, wherein:
Multiple depth cameras are fixed around the robot; taking three as an example, they are located above the robot and on its left and right sides, and each depth camera collects images of the robot's surroundings in real time.
From the images collected by the depth cameras, the control processor recognizes the target object, converts its three-dimensional coordinate in the camera coordinate system into a three-dimensional coordinate in the robot coordinate system, then moves the manipulator to the target object's position and adjusts the gripper's direction, angle, and grasp pattern to grasp the target object.
The depth cameras used in this embodiment are Kinect cameras; the manipulator is an EPSON robot, and the gripper is a Robotiq 3-Finger Adaptive Gripper.
As shown in Fig. 2, the robot workstation of this embodiment works as follows:
First, the control processor automatically calibrates the depth cameras against the robot; the specific steps are shown in Fig. 3.
A QR code identifying the gripper's center point is affixed to the center of the gripper at the end of the robot (see Fig. 4). The QR code encodes the calibration points to which the manipulator moves automatically; four non-coplanar points in the robot coordinate system are chosen as calibration points. The robot controls the orientation of the gripper by adjusting its end angles (U, V, W), which denote rotations of the coordinate system about the Z-, Y-, and X-axes by angles U, V, and W respectively; the total rotation matrix is R = Rz*Ry*Rx, where:

Rx(W) = [[1, 0, 0], [0, cos W, -sin W], [0, sin W, cos W]]
Ry(V) = [[cos V, 0, sin V], [0, 1, 0], [-sin V, 0, cos V]]
Rz(U) = [[cos U, -sin U, 0], [sin U, cos U, 0], [0, 0, 1]]

Hence the coordinates of the calibration points in the robot coordinate system can be obtained from the four points set in that coordinate system: Crobot = M*R + Ctool, where Crobot is the coordinate of a calibration point in the robot coordinate system, M is the vector from the manipulator end to the calibration point, and Ctool is the coordinate of the manipulator end. The positions of the four calibration points are encoded into the QR code.
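As an illustration under stated assumptions, the fragment below builds R = Rz*Ry*Rx from (U, V, W) as defined above and evaluates Crobot = M*R + Ctool for each calibration point in the patent's row-vector form; encoding the result as JSON is an assumption, since the patent does not specify the QR payload format.

```python
import json
import numpy as np

def rotation(u, v, w):
    # Total rotation R = Rz(U) @ Ry(V) @ Rx(W), angles in radians.
    cu, su = np.cos(u), np.sin(u)
    cv, sv = np.cos(v), np.sin(v)
    cw, sw = np.cos(w), np.sin(w)
    Rz = np.array([[cu, -su, 0], [su, cu, 0], [0, 0, 1]])
    Ry = np.array([[cv, 0, sv], [0, 1, 0], [-sv, 0, cv]])
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    return Rz @ Ry @ Rx

def qr_payload(offsets, R, c_tool):
    # offsets: the four vectors M from the manipulator end to the calibration
    # points (row vectors); Crobot = M*R + Ctool per the formula above.
    c_robot = [(np.asarray(m) @ R + np.asarray(c_tool)).tolist() for m in offsets]
    return json.dumps({"calibration_points": c_robot})  # JSON format is assumed
```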
The control processor recognizes the QR code in the image collected by the depth camera and decodes the positions of the four calibration points; it then controls the robot to move to each of the four positions in turn. At each position, the control processor locates the QR code center in the image the depth camera collects in real time; the recognized center serves as the calibration point's position in the image, and its two-dimensional coordinate is converted into the center point's three-dimensional coordinate in the camera coordinate system, denoted Ccamera. From Ccamera = R(Crobot - T), the rotation matrix R and translation vector T from the camera coordinate system to the robot coordinate system are obtained. The three Kinects are calibrated independently, each yielding the rotation matrix R and translation vector T from its own camera coordinate system to the robot coordinate system.
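The QR decoding and center extraction in this step can be sketched with OpenCV's built-in detector (cv2.QRCodeDetector, available since OpenCV 4.0); reading a JSON payload assumes the illustrative encoding from the sketch above.

```python
import cv2
import json
import numpy as np

detector = cv2.QRCodeDetector()

def read_qr(image):
    # Decode the QR payload and take the mean of the four corner points
    # returned by the detector as the code's center pixel.
    payload, corners, _ = detector.detectAndDecode(image)
    if not payload or corners is None:
        return None, None
    center_uv = corners.reshape(-1, 2).mean(axis=0)
    return json.loads(payload)["calibration_points"], center_uv
```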
So that the target object can still be recognized even when occluded by other objects, multiple Kinects are fixed at multiple angles around the robot. In the experimental setup, three Kinects are mounted at the front and on the left and right sides of the robot, and the target object is recognized and located from the three angles separately. To match the target object more accurately, the control processor selects multiple images of the target object taken from different angles as matching templates and, for each Kinect, matches them against the collected image using the Affine-SIFT algorithm.
To improve the efficiency of the matching algorithm, the collected image is reduced to the region the manipulator can reach, which avoids extracting and matching redundant feature points and greatly improves the efficiency of object recognition. The control processor takes the center of the recognized feature points as the position of the target object; using the registration between the Kinect's color image and depth image, the center point's two-dimensional coordinate yields its three-dimensional coordinate in the camera coordinate system. Through the conversion between the camera coordinate system and the robot coordinate system, the control processor obtains the target object's three-dimensional coordinate in the robot coordinate system, and then aggregates the three object coordinates obtained by the three Kinects for analysis: if the error among the three coordinate values is within the acceptance threshold, the point is accepted as the object's coordinate; if not, the coordinate is discarded and exception handling is performed.
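For concreteness, a sketch of the two conversions this paragraph relies on: pinhole back-projection of the matched center pixel with its depth into the camera coordinate system, and mapping into the robot coordinate system by inverting the calibration model Ccamera = R(Crobot - T); the default intrinsics are illustrative placeholders, not Kinect factory values.

```python
import numpy as np

def pixel_to_camera(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    # Pinhole back-projection of pixel (u, v) at depth depth_m (meters) into
    # the camera frame; fx, fy, cx, cy are the depth camera's intrinsics.
    return np.array([(u - cx) * depth_m / fx,
                     (v - cy) * depth_m / fy,
                     depth_m])

def camera_to_robot(p_camera, R, T):
    # Invert Ccamera = R (Crobot - T)  =>  Crobot = R^T Ccamera + T.
    return R.T @ p_camera + T
```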
The above description of the embodiment is intended to help those skilled in the art understand and use the invention. Those skilled in the art can obviously make various modifications to the above embodiment and apply the general principles described here to other embodiments without creative effort. The invention is therefore not limited to the above embodiment; improvements and modifications made by those skilled in the art according to this disclosure should all fall within the protection scope of the invention.

Claims (7)

  1. A robot workstation enhanced by multiple depth cameras, characterized by comprising: a worktable, a manipulator mounted on the worktable, multiple depth cameras arranged around the worktable, and a control processor; the manipulator carries a gripper, the palm center of the gripper bears a QR code, and the QR code encodes the positions, in the robot coordinate system, of four non-coplanar calibration points within the spatial range reachable by the manipulator;
    The depth cameras collect images of the worktable and supply them to the control processor; for each depth camera, the control processor processes the collected image, recognizes the target object on the worktable to determine its three-dimensional position in that camera's coordinate system, and automatically calibrates the camera coordinate system against the robot coordinate system; through coordinate conversion it computes the target object's three-dimensional position in the robot coordinate system, and then moves the manipulator near the target object and controls the gripper to grasp it;
    The control processor performs the automatic calibration of the camera coordinate system and the robot coordinate system as follows: first, the control processor parses the QR code appearing in the image to obtain the positions of the four calibration points in the robot coordinate system, and then controls the manipulator to reach the four calibration points one by one;
    After the manipulator reaches a calibration point, the control processor acquires an image with the depth camera to obtain the current position of the QR code center in the camera coordinate system and, combining this with the calibration point's position in the robot coordinate system, determines the current QR code center's position in the robot coordinate system; all four calibration points are traversed in this way;
    Finally, from the four resulting pairs of QR-code-center positions in the camera coordinate system and the robot coordinate system, the rotation matrix and translation vector between the camera coordinate system and the robot coordinate system are computed;
    The manipulator is deemed to have reached a calibration point when the connection point between the manipulator and the gripper coincides with that calibration point.
  2. The robot workstation according to claim 1, characterized in that: the number of depth cameras is three or more, and each depth camera is a Kinect camera.
  3. The robot workstation according to claim 1, characterized in that: the control processor recognizes the target object and determines its three-dimensional position in the camera coordinate system as follows:
    First, multiple templates of the target object are obtained;
    Then, the image collected by the depth camera is cropped to obtain the ROI region;
    Finally, the ROI region is searched according to the templates to find and match the target object, thereby determining the target object's three-dimensional position in the camera coordinate system.
  4. The robot workstation according to claim 3, characterized in that: the control processor crops the image collected by the depth camera by removing the spatial regions in the image that the manipulator cannot reach.
  5. The robot workstation according to claim 3, characterized in that: the control processor searches the ROI region for the target object using the Affine-SIFT algorithm with the templates.
  6. The robot workstation according to claim 1, characterized in that: the control processor's automatic calibration of the camera coordinate system and the robot coordinate system consists of computing the rotation matrix and translation vector between the two.
  7. The robot workstation according to claim 1, characterized in that: with multiple depth cameras, the control processor computes multiple estimates of the target object's three-dimensional position in the robot coordinate system; combining the estimates, it rejects those whose error exceeds an acceptable range, and from the remaining ones either picks one or averages them to determine the target object's final three-dimensional position in the robot coordinate system; the manipulator is then moved near the target object and the gripper grasps it.
CN201610056940.1A 2016-01-27 2016-01-27 Robot workstation enhanced by multiple depth cameras Active CN105729468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610056940.1A CN105729468B (en) 2016-01-27 2016-01-27 Robot workstation enhanced by multiple depth cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610056940.1A CN105729468B (en) 2016-01-27 2016-01-27 Robot workstation enhanced by multiple depth cameras

Publications (2)

Publication Number Publication Date
CN105729468A CN105729468A (en) 2016-07-06
CN105729468B (en) 2018-01-09

Family

ID=56247763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610056940.1A Active CN105729468B (en) Robot workstation enhanced by multiple depth cameras

Country Status (1)

Country Link
CN (1) CN105729468B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106291278B (en) * 2016-08-03 2019-01-15 国网山东省电力公司电力科学研究院 A kind of partial discharge of switchgear automatic testing method based on more vision systems
CN106272478A (en) * 2016-09-30 2017-01-04 河海大学常州校区 A kind of full-automatic shopping robot and using method
CN106504321A (en) * 2016-11-07 2017-03-15 达理 Method using the method for photo or video reconstruction three-dimensional tooth mould and using RGBD image reconstructions three-dimensional tooth mould
CN106553195B (en) * 2016-11-25 2018-11-27 中国科学技术大学 Object 6DOF localization method and system during industrial robot crawl
CN106774309B (en) * 2016-12-01 2019-09-17 天津工业大学 A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously
CN106426186B (en) * 2016-12-14 2019-02-12 国网江苏省电力公司常州供电公司 One kind being based on hot line robot AUTONOMOUS TASK method combined of multi-sensor information
CN107170011B (en) * 2017-04-24 2019-12-17 杭州艾芯智能科技有限公司 robot vision tracking method and system
CN107291811B (en) * 2017-05-18 2019-11-29 浙江大学 A kind of sense cognition enhancing robot system based on cloud knowledge fusion
CN108074264A (en) * 2017-11-30 2018-05-25 深圳市智能机器人研究院 A kind of classification multi-vision visual localization method, system and device
EP3499438A1 (en) * 2017-12-13 2019-06-19 My Virtual Reality Software AS Method and system providing augmented reality for mining operations
CN108115688B (en) * 2017-12-29 2020-12-25 深圳市越疆科技有限公司 Grabbing control method and system of mechanical arm and mechanical arm
CN108453739B (en) * 2018-04-04 2021-03-12 北京航空航天大学 Stereoscopic vision positioning mechanical arm grabbing system and method based on automatic shape fitting
CN109886278A (en) * 2019-01-17 2019-06-14 柳州康云互联科技有限公司 A kind of characteristics of image acquisition method based on ARMarker
CN110253575B (en) * 2019-06-17 2021-12-24 达闼机器人有限公司 Robot grabbing method, terminal and computer readable storage medium
CN110936378B (en) * 2019-12-04 2021-09-03 中科新松有限公司 Robot hand-eye relation automatic calibration method based on incremental compensation
CN110900581B (en) * 2019-12-27 2023-12-22 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN111339957A (en) * 2020-02-28 2020-06-26 广州中智融通金融科技有限公司 Image recognition-based cashbox bundle state detection method, system and medium
CN111331604A (en) * 2020-03-23 2020-06-26 北京邮电大学 Machine vision-based valve screwing flexible operation method
CN111880522A (en) * 2020-06-01 2020-11-03 东莞理工学院 Novel autonomous assembly robot path planning autonomous navigation system and method
CN112207839A (en) * 2020-09-15 2021-01-12 西安交通大学 Mobile household service robot and method
CN112757300A (en) * 2020-12-31 2021-05-07 广东美的白色家电技术创新中心有限公司 Robot protection system and method
CN112659133A (en) * 2020-12-31 2021-04-16 软控股份有限公司 Glue grabbing method, device and equipment based on machine vision
CN113813170B (en) * 2021-08-30 2023-11-24 中科尚易健康科技(北京)有限公司 Method for converting target points among cameras of multi-camera physiotherapy system
CN114800508B (en) * 2022-04-24 2022-11-18 广东天太机器人有限公司 Grabbing control system and method of industrial robot
CN114750155B (en) * 2022-04-26 2023-04-07 广东天太机器人有限公司 Object classification control system and method based on industrial robot
CN115026822B (en) * 2022-06-14 2023-03-24 广东天太机器人有限公司 Industrial robot control system and method based on feature point docking
CN116919391A (en) * 2023-07-25 2023-10-24 凝动万生医疗科技(武汉)有限公司 Movement disorder assessment method and apparatus

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10080012B4 (en) * 1999-03-19 2005-04-14 Matsushita Electric Works, Ltd., Kadoma Three-dimensional method of detecting objects and system for picking up an object from a container using the method
JP2013154457A (en) * 2012-01-31 2013-08-15 Asahi Kosan Kk Workpiece transfer system, workpiece transfer method, and program
CN103500321B (en) * 2013-07-03 2016-12-07 无锡信捷电气股份有限公司 Vision guide welding robot weld seam method for quickly identifying based on double dynamic windows
CN103707300A (en) * 2013-12-20 2014-04-09 上海理工大学 Manipulator device
US9233469B2 (en) * 2014-02-13 2016-01-12 GM Global Technology Operations LLC Robotic system with 3D box location functionality
JP6429473B2 (en) * 2014-03-20 2018-11-28 キヤノン株式会社 Robot system, robot system calibration method, program, and computer-readable recording medium

Also Published As

Publication number Publication date
CN105729468A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
CN105729468B (en) Robot workstation enhanced by multiple depth cameras
US11049280B2 (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
CN107914272B (en) Method for grabbing target object by seven-degree-of-freedom mechanical arm assembly
CN109344882B (en) Convolutional neural network-based robot control target pose identification method
CN113524194B (en) Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN107186708B (en) Hand-eye servo robot grabbing system and method based on deep learning image segmentation technology
CN111089569B (en) Large box body measuring method based on monocular vision
CN110580725A (en) Box sorting method and system based on RGB-D camera
US9233469B2 (en) Robotic system with 3D box location functionality
JP2018169403A5 (en)
WO2021109575A1 (en) Global vision and local vision integrated robot vision guidance method and device
CN110751691B (en) Automatic pipe fitting grabbing method based on binocular vision
US20040172164A1 (en) Method and apparatus for single image 3D vision guided robotics
CN113781561B (en) Target pose estimation method based on self-adaptive Gaussian weight quick point feature histogram
US20200098118A1 (en) Image processing apparatus and image processing method
CN112518748B (en) Automatic grabbing method and system for visual mechanical arm for moving object
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
CN111598172B (en) Dynamic target grabbing gesture rapid detection method based on heterogeneous depth network fusion
Han et al. Target positioning method in binocular vision manipulator control based on improved canny operator
Lin et al. Vision based object grasping of industrial manipulator
Fan et al. An automatic robot unstacking system based on binocular stereo vision
CN113538459B (en) Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection
Celik et al. Development of a robotic-arm controller by using hand gesture recognition
Kheng et al. Stereo vision with 3D coordinates for robot arm application guide
CN114266822A (en) Workpiece quality inspection method and device based on binocular robot, robot and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200706

Address after: 310013 3 / F, building C, National University Science Park, Zhejiang University, 525 Xixi Road, Hangzhou, Zhejiang Province

Patentee after: Zhejiang University Holding Group Co., Ltd

Address before: 310027 No. 38 Zheda Road, Xihu District, Hangzhou, Zhejiang Province

Patentee before: ZHEJIANG University

TR01 Transfer of patent right

Effective date of registration: 20210723

Address after: Room 801-804, building 1, Zhihui Zhongchuang center, Xihu District, Hangzhou City, Zhejiang Province, 310013

Patentee after: Zhejiang Zheda Xitou Brain Computer Intelligent Technology Co.,Ltd.

Address before: 3 / F, building C, National University Science Park, Zhejiang University, 525 Xixi Road, Hangzhou, Zhejiang 310013

Patentee before: Zhejiang University Holding Group Co., Ltd

TR01 Transfer of patent right