CN108161931A - Vision-based workpiece automatic identification and intelligent grabbing system - Google Patents

Vision-based workpiece automatic identification and intelligent grabbing system

Info

Publication number
CN108161931A
CN108161931A CN201611116892.7A CN201611116892A CN108161931A
Authority
CN
China
Prior art keywords
robot
module
workpiece
image
personal computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611116892.7A
Other languages
Chinese (zh)
Inventor
覃争鸣
梁鹏
周健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yingbo Intelligent Technology Co., Ltd.
Original Assignee
Guangzhou Yingbo Intelligent Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yingbo Intelligent Technology Co., Ltd.
Priority to CN201611116892.7A priority Critical patent/CN108161931A/en
Publication of CN108161931A publication Critical patent/CN108161931A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/02: Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type
    • B25J9/04: Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type, by rotating at least one arm, excluding the head movement itself, e.g. cylindrical coordinate type or polar coordinate type
    • B25J9/046: Revolute coordinate type
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators, characterised by motion, path, trajectory planning
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a vision-based workpiece automatic identification and intelligent grabbing system, which comprises an image acquisition module, an industrial personal computer module and a robot module; the image acquisition module is connected with the industrial personal computer module, and the industrial personal computer module is connected with the robot module. The scheme of the invention gives a conversion algorithm from image coordinates to robot coordinates by establishing a parameterized model of the grabbing system, carries out secondary development in a specified environment using the image preprocessing methods provided by specific software, realizes the two basic functions of target positioning and robot control, and finally controls the robot to complete the grabbing of the target workpiece.

Description

Workpiece automatic identification and intelligent grabbing system based on vision
Technical Field
The invention belongs to the field of machine vision positioning, and relates to a workpiece automatic identification and intelligent grabbing system based on vision.
Background
Workpiece recognition and grabbing are important applications of industrial robots on a production line. At present, most industrial robots on production lines execute preset command actions programmed by pre-teaching or off-line programming; once the working environment or the target object changes, the robot cannot adapt in time and grabbing fails. This working mode therefore greatly limits the flexibility and working efficiency of industrial robots.
Machine vision is fast and non-contact. Introducing machine vision technology into the field of industrial robots, so that vision guides the robot in task operations such as grabbing and carrying, is of great significance for improving the automation level of a production line and widening the application range of robots.
The invention patent application with application publication number CN105905560A discloses a full-automatic control system for dynamic grabbing and storing and a control method thereof, in which one PLC controller controls a plurality of manipulators and simultaneously controls material transport and material-tray movement to realize dynamic material grabbing. However, that invention requires a PLC controller and continuous operation and control according to the photographing area, so its cost is high and its implementation is complex.
The paper "Target recognition and localization based on binocular stereo vision" (Shang Qian, Ruan Qiuqi, Li Xiaoli. CAAI Transactions on Intelligent Systems, 2011, 6(4):303-311) realizes target recognition and localization with binocular stereo vision. Such a system mainly comprises four modules: camera calibration, image segmentation, stereo matching and 3-D distance measurement. Stereo matching is the most critical step of binocular vision positioning, but accurate stereo matching of a target area is difficult to achieve; inaccurate matching directly biases the acquired depth information, and real-time performance is the biggest challenge for binocular and multi-view positioning systems.
The invention patent application with application publication number CN104369188B discloses a workpiece grabbing device and method based on machine vision and an ultrasonic sensor. However, the device is composed of hardware such as a camera, a sensor, a liquid crystal display, and a PLC, and therefore, the device requires a large amount of hardware and is expensive.
Disclosure of Invention
The invention aims to provide a vision-based workpiece automatic identification and intelligent grabbing system. By establishing a parameterized model of the grabbing system, it gives a conversion algorithm from image coordinates to robot coordinates; using the image preprocessing methods provided by specific software, it carries out secondary development in a specified environment; it realizes the two basic functions of target positioning and robot control, and finally controls the robot to complete the grabbing of the target workpiece. This effectively solves the problem that an existing robot cannot adapt in time to changes of the working environment or the target object, which causes operation failure and prevents the requirements of a flexible production system from being met.
In order to solve the technical problems, the invention adopts the following technical scheme: a vision-based workpiece automatic identification and intelligent grasping system, comprising: the system comprises an image acquisition module, an industrial personal computer module and a robot module; the image acquisition module is connected with the industrial personal computer module; the industrial personal computer module is connected with the robot module.
Furthermore, the image acquisition module consists of a camera, a lens, a light source and a workpiece and is used for rapidly acquiring image data information with low noise and high precision.
Furthermore, the industrial personal computer module adopts an industrial personal computer from Advantech of Taiwan and is responsible for receiving the image information acquired by the image acquisition module, completing workpiece identification with an image processing algorithm, and converting the result into a robot control signal so as to control the actual position of the robot end effector.
Further, the robot module is composed of a driving device and a robot body and is used for executing corresponding operations according to the received control command.
Compared with the prior art, the invention has the following beneficial effects:
according to the scheme of the invention, a conversion algorithm from image coordinates to robot coordinates is given by establishing a parameterized model of the grabbing system, secondary development is carried out under a specified environment by using an image preprocessing method provided by specific software, two basic functions of target positioning and robot control are realized, and finally the robot is controlled to complete grabbing of a target workpiece.
Drawings
Fig. 1 is a block diagram of a vision-based workpiece automatic identification and intelligent grasping system.
Fig. 2 is a schematic diagram of a vision-based workpiece automatic identification and intelligent gripping system.
Fig. 3 is the pinhole imaging model.
Fig. 4 is a vision-based workpiece automatic identification and intelligent grasping system parameterized model.
Detailed Description
The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
Referring to fig. 1, the invention relates to a vision-based workpiece automatic identification and intelligent grasping system, which comprises: the system comprises an image acquisition module, an industrial personal computer module and a robot module; the image acquisition module is connected with the industrial personal computer module; the industrial personal computer module is connected with the robot module.
(1) The image acquisition module consists of a camera, a lens, a light source and a workpiece and is used for quickly acquiring image data information with low noise and high precision; wherein,
the camera is a Basler industrial CCD camera, model acA2500-14gm; it communicates with the computer over Gigabit Ethernet and is mounted directly above the conveyor belt;
the lens is a COMPUTAR fixed-focus lens, model M0814-MP2, with a focal length of 8 mm and a maximum imaging size of 8.8 mm × 6.6 mm, which meets the design requirement;
the light source is a CCS LED ring light; the LED lighting system responds quickly and yields high-quality, high-contrast images.
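As a quick plausibility check on the lens choice (not part of the patent text), the thin-lens approximation relates the quoted sensor size and focal length to the area covered on the belt; the 600 mm working distance below is an assumed example value, not from the patent.

```python
# Thin-lens field-of-view estimate for the 8 mm lens and the 8.8 mm x 6.6 mm
# maximum imaging size quoted above. The 600 mm working distance is an
# assumed example value, not taken from the patent.

def field_of_view(sensor_mm, focal_mm, distance_mm):
    """Approximate linear field of view at a given working distance."""
    return sensor_mm * distance_mm / focal_mm

fov_w = field_of_view(8.8, 8.0, 600.0)   # horizontal coverage, about 660 mm
fov_h = field_of_view(6.6, 8.0, 600.0)   # vertical coverage, about 495 mm
```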
(2) The industrial personal computer module adopts an industrial personal computer from Advantech of Taiwan and is responsible for receiving the image information acquired by the CCD camera, completing workpiece identification with an image processing algorithm, and converting the result into a robot control signal so as to control the actual position of the robot end effector.
(3) The robot module consists of a driving device and a robot body and is used for executing the corresponding operation according to the received control command. The robot is an IRB120 from ABB of Switzerland, with 6 rotary joints driven by AC servo motors and an end repeated positioning accuracy of 0.01 mm; it is simple to control, convenient to program, and suitable for grabbing operations on a production line.
Referring to fig. 2, the camera of the system is fixed above the conveyor belt, which runs continuously; the workpiece enters the camera's field of view from one end of the belt. A timer triggers the camera to capture one frame every 0.5 s; each image is 640 × 480. The centroid position of the workpiece is determined by template matching, and the moving speed of the workpiece is calculated from its displacement along the moving direction between two frames and the time interval between the two frames. Finally, the pose information is converted, through the inverse solution of robot kinematics, into the joint-angle control information familiar to the industrial robot, so that the vision-guided robot accurately grabs the workpiece.
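The two-frame speed estimate described above is a simple finite difference; a minimal sketch follows (function and variable names are illustrative, not from the patent), for frames captured 0.5 s apart.

```python
# Estimate workpiece speed from two centroid fixes, then extrapolate to a
# future grab position, as described above. All values are illustrative.

def estimate_speed(c1, c2, dt):
    """(vx, vy) in pixels per second from centroids c1, c2 taken dt s apart."""
    return ((c2[0] - c1[0]) / dt, (c2[1] - c1[1]) / dt)

def predict_position(c, v, t_ahead):
    """Extrapolate centroid c forward by t_ahead seconds at constant velocity v."""
    return (c[0] + v[0] * t_ahead, c[1] + v[1] * t_ahead)

v = estimate_speed((100.0, 240.0), (140.0, 240.0), 0.5)  # belt moves along x
p = predict_position((140.0, 240.0), v, 1.0)             # where to grab in 1 s
```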
The upstream limit is the position of the workpiece at the moment it just enters the grabbing area of the robot; to reduce the robot's waiting time, the upstream limit is set as far forward as possible. The downstream limit is the position of the workpiece when it leaves the grabbing area. The workpiece must be grabbed by the robot within the grabbing area; otherwise the grabbing task fails and the robot gives up tracking the workpiece.
First, establishment of the parameterized model of the grabbing system
1. Camera calibration
Referring to fig. 3, a pinhole imaging model is used (from "Robot Vision Measurement and Control", Xu De, Tan Min, Li Yuan. Beijing: National Defense Industry Press, 2011). A coordinate system O_C-xyz is established at the optical center of the camera, with the z axis along the optical axis and the x axis along the direction in which the horizontal image coordinate increases. In this camera coordinate system, let the coordinates of a point P be (X, Y, Z) and the coordinates of its projection point p on the image plane be (x, y, z), where z = f and f is the focal length of the camera.
The following proportional relationship is obtained from the pinhole imaging principle:
x = f·X/Z,  y = f·Y/Z  (1)
As known from the CCD imaging principle, the image on the imaging plane is sampled to obtain a digital image, so an image point (x, y) on the imaging plane is converted into a pixel point (u, v). Let (u_0, v_0) be the pixel coordinates of the intersection of the optical-axis center line with the imaging plane; then:
u = s_x·x + u_0,  v = s_y·y + v_0  (2)
in the formula: d_x and d_y are the physical sizes of a pixel in the x and y directions respectively, and s_x = 1/d_x and s_y = 1/d_y are the sampling frequencies in the x and y directions, i.e. the number of pixels per unit length.
Substituting formula (1) into formula (2) and rewriting in matrix form:
Z·[u, v, 1]^T = [f_x, 0, u_0; 0, f_y, v_0; 0, 0, 1] · [X, Y, Z]^T  (3)
in the formula: f_x = f·s_x and f_y = f·s_y are defined as the equivalent focal lengths in the x and y directions. The four parameters f_x, f_y, u_0, v_0 depend only on the internal structure of the camera and are therefore called the internal parameters of the camera. The external parameter model of the camera describes the world coordinate system in the camera coordinate system: expressing the world coordinate system O-X_wY_wZ_w in the camera coordinate system O_C-xyz constitutes the external parameter matrix of the camera:
[X, Y, Z, 1]^T = [R, t; 0^T, 1] · [X_w, Y_w, Z_w, 1]^T  (4)
therefore, the relation between world coordinates and image coordinates is established through a camera coordinate system; substituting formula (4) into formula (3) to obtain:
2. hand-eye coordinate calibration
In this system the camera and the robot are installed at the two ends of the conveyor belt, so the relative pose between the workpiece and the robot cannot be determined by the traditional hand-eye calibration method. Two reference coordinate systems, ref1 and ref2, are therefore established on the conveyor belt: ref1 within the field of view of the camera and ref2 within the working space of the robot. Referring to fig. 4, the internal and external parameters of the camera are calibrated by a planar target method, and the reference coordinate system ref1 is established from one of the calibration-plate images, giving the pose of ref1 relative to the camera coordinate system cam, written cam_H_ref1. Between ref1 and ref2 there is only a translation in the X direction, with pose relation ref1_H_ref2. The pose relation base_H_ref2 between the reference coordinate system ref2 and the robot base coordinate system is calibrated by a three-point method similar to workpiece coordinate-system calibration.
Through the two reference coordinate systems, the relation between the camera coordinate system cam and the robot base coordinate system base is established: base_H_cam = base_H_ref2 · ref2_H_ref1 · (cam_H_ref1)^(-1). With cam_H_obj obtained by target location, the pose transformation matrix of the target workpiece in the robot base coordinate system is base_H_obj = base_H_cam · cam_H_obj. This establishes the connection between the target workpiece and the robot.
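The chaining of homogeneous transforms above can be sketched directly. The pose values below are invented pure translations for illustration, not calibrated data; the underscore notation cam_H_ref1 stands for the corresponding matrix in the text.

```python
import numpy as np

def hom(t):
    """4x4 homogeneous transform with identity rotation and translation t."""
    H = np.eye(4)
    H[:3, 3] = t
    return H

# Invented illustrative poses; ref1 -> ref2 is a pure X translation, as in the text.
cam_H_ref1  = hom([0.10, 0.20, 0.80])   # calibration plate seen by the camera
ref1_H_ref2 = hom([1.50, 0.00, 0.00])   # belt travel between the two references
base_H_ref2 = hom([0.30, -0.10, 0.00])  # three-point calibration at the robot
cam_H_obj   = hom([0.15, 0.25, 0.80])   # target located in the camera frame

ref2_H_ref1 = np.linalg.inv(ref1_H_ref2)
base_H_cam = base_H_ref2 @ ref2_H_ref1 @ np.linalg.inv(cam_H_ref1)
base_H_obj = base_H_cam @ cam_H_obj     # workpiece pose in the robot base frame
```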
Second, template matching algorithm based on gray level correlation
Feature extraction and template matching are important links in moving-target tracking. The contour, shape, gray values, color histogram and so on of the target image can serve as matching criteria, and various geometric or gray features can be used together to track the target. After the target features are extracted, a suitable search-and-match algorithm is selected to locate the target. To meet the high real-time requirement of the operation, the image processing algorithm must run fast enough and be sufficiently robust to illumination changes and environmental factors.
Commonly used template matching algorithms fall into two classes: gray-correlation-based and geometric-feature-based. A gray-correlation-based algorithm matches directly on image gray values as feature parameters; it is mature, simple in principle and easy to implement. The invention therefore adopts a template matching algorithm based on gray correlation.
The similarity measure computes the sum of squared differences (SSD) of the gray values of all pixels between the template image and the image to be searched. With a template of M × N pixels, the similarity function between the template and the image to be matched is:
D(i, j) = Σ_m Σ_n [S(i+m, j+n) − T(m, n)]²
in the formula: t (m, n) and S (i + m, j + n) are the grayscale values of the template image and the image to be searched at the (m, n) coordinate and the (i + m, j + n) coordinate, respectively, and whether the image has the same or similar target as the template is determined by calculating the similarity function value at each position. The normalization cross correlation coefficient NCC of template matching is obtained by normalizing the formula:
the NCC coefficient size represents the matching degree of the template and the image to be searched at the position (i, j), the value of the NCC coefficient size is 0-1, the NCC-1 represents that the identical example with the template is found in the image to be searched, and the position of the NCC maximum value, namely the matched target, is found after all searches are completed in the image to be searched. And by utilizing the similarity measurement function, the template image is translated and rotated to find one or more examples of the template in the image to be searched, and the position coordinates and the rotation angle of the examples are determined, so that reliable information is provided for the subsequent grabbing plan of the robot.
Third, moving target tracking algorithm
1. Kalman filtering
Camera calibration and template matching give the position of the target on the conveyor belt at the moment of photographing, but the target keeps moving, and the robot needs a certain time interval to execute the grabbing action. The position where the target is likely to appear must therefore be predicted in advance, so that the robot moves ahead of time and arrives at the predicted position at the same moment as the target to complete the grab. The target pose is predicted with a Kalman filter (see the paper "Basic principle and application of Kalman filtering", Peng Dingcong. Software Guide, 2009, 11(8):32-34). The Kalman filter is a linear filter: the estimate of the current state can be computed from just the estimate of the previous state and the observation of the current state, so no history of observations or estimates needs to be recorded. Kalman filters are widely used in visual tracking systems; they can accurately estimate the future state of a moving object and thus guide the robot to complete a dynamic grabbing task.
2. Establishment of motion model
A workpiece on the conveyor belt generally moves in a uniform straight line. The motion state parameters of the target are its position and velocity at a given moment. During tracking, the time interval between two adjacent frames is short and the change in the target's motion state is small, so the target can be assumed to move at uniform speed within a unit time interval; the velocity parameters then sufficiently reflect the motion trend of the target. The system state is defined by the four-dimensional variable x_k = (xs_k, ys_k, xv_k, yv_k), representing the position and velocity of the target in the x and y directions, with the state equation:
x_k = A·x_{k−1} + w_{k−1}
thus, for the present system, the system model is built as follows: dt is tk-1And tkThe time interval of (c).
Only the position of the target can be observed in the image, so the observation model z_k is:
z_k = H·x_k + v_k,  with H = [1, 0, 0, 0; 0, 1, 0, 0]
detecting the position (xs) of the workpiece at the moment of photographing through template matching0,ys0) Starting velocity (xv) of the workpiece00), is comprised of the first two framesThe image of the workpiece is obtained by calculating the displacement of the center of the workpiece in the moving direction and dividing the displacement by the time for shooting the 2 frames of images, and then the initial state of the system is obtainedAnd the system initial error covariance matrix P0Initializing a Kalman filter (10. eye (4) (eye (4) represents a 4-order diagonal matrix), recording the time of the current image, and obtaining the current motion state by calculating the time interval dt between two frames and substituting the time interval dt into a state prediction equation before carrying out mode matching on the next frame of imageAnd will beThe central Region is used as the ROI (Region of interest) of the current pattern matching, and the best matching of the template is searched in the ROI to obtain (x)1,y1) And recording the time of the current image. Will z1=(x1,y1) And substituting the observation vector into a state updating equation, and updating the state of the filter to obtain the estimated values of the position and the speed of the moving target at each moment. The Kalman filter is adopted to predict the possible position of the workpiece in the image, thereby avoiding searching and matching the whole image, greatly accelerating the speed of template matching and improving the real-time performance of the system. After such estimation-correction process, the position (t) of the target after a certain time Δ t is estimated by using the Kalman filter2And the moment) and planning the motion track and the speed of the robot according to the moment), and controlling the robot to complete the grabbing action through the control cabinet by the generated control instruction.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A vision-based workpiece automatic identification and intelligent grasping system, characterized in that the system comprises: an image acquisition module, an industrial personal computer module and a robot module; the image acquisition module is connected with the industrial personal computer module; the industrial personal computer module is connected with the robot module.
2. The vision-based system for automatically identifying and intelligently grabbing workpieces according to claim 1, wherein the image acquisition module consists of a camera, a lens, a light source and a workpiece and is used for rapidly acquiring image data information with low noise and high precision.
3. The vision-based automatic workpiece identification and intelligent workpiece grabbing system of claim 1, wherein the industrial personal computer module adopts an industrial personal computer from Advantech of Taiwan and is responsible for receiving image information acquired by the image acquisition module, completing workpiece identification with an image processing algorithm, and converting it into a robot control signal so as to control the actual position of the robot end effector.
4. The vision-based automatic workpiece identification and intelligent workpiece grabbing system of claim 1, wherein the robot module is composed of a driving device and a robot body and is used for executing corresponding operations according to received control commands.
CN201611116892.7A 2016-12-07 2016-12-07 Vision-based workpiece automatic identification and intelligent grabbing system Pending CN108161931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611116892.7A CN (en) 2016-12-07 2016-12-07 Vision-based workpiece automatic identification and intelligent grabbing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611116892.7A CN (en) 2016-12-07 2016-12-07 Vision-based workpiece automatic identification and intelligent grabbing system

Publications (1)

Publication Number Publication Date
CN108161931A true CN108161931A (en) 2018-06-15

Family

ID=62526636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611116892.7A Pending CN108161931A (en) 2016-12-07 2016-12-07 Vision-based workpiece automatic identification and intelligent grabbing system

Country Status (1)

Country Link
CN (1) CN108161931A (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107942949A (en) * 2017-03-31 2018-04-20 沈机(上海)智能系统研发设计有限公司 A kind of lathe vision positioning method and system, lathe
CN108709621A (en) * 2018-08-02 2018-10-26 河北工业大学 A kind of special-shaped workpiece detection grabbing device based on supersonic array
CN108827154A (en) * 2018-07-09 2018-11-16 深圳辰视智能科技有限公司 A kind of robot is without teaching grasping means, device and computer readable storage medium
CN108972556A (en) * 2018-08-14 2018-12-11 广东工业大学 Conducting wire grasping system and method on small and special electric machine production line under complex illumination environment
CN109015653A (en) * 2018-08-30 2018-12-18 黄河科技学院 Grab control method, device, storage medium and electronic equipment
CN109048918A (en) * 2018-09-25 2018-12-21 华南理工大学 A kind of visual guide method of wheelchair arm robot
CN109454501A (en) * 2018-10-19 2019-03-12 江苏智测计量技术有限公司 A kind of lathe on-line monitoring system
CN109454638A (en) * 2018-10-31 2019-03-12 昆山睿力得软件技术有限公司 A kind of robot grasping system of view-based access control model guidance
CN109623821A (en) * 2018-12-26 2019-04-16 深圳市越疆科技有限公司 The visual guide method of mechanical hand crawl article
CN109848998A (en) * 2019-03-29 2019-06-07 砚山永盛杰科技有限公司 One kind being used for 3C industry vision four axis flexible robot
CN109911549A (en) * 2019-01-25 2019-06-21 东华大学 A kind of the Robotic Dynamic tracking grasping system and method for fragile goods
CN110102490A (en) * 2019-05-23 2019-08-09 北京阿丘机器人科技有限公司 The assembly line packages device and electronic equipment of view-based access control model technology
CN110480685A (en) * 2019-05-15 2019-11-22 青岛科技大学 A kind of Agricultural vehicle wheel automatic production line vision manipulator
CN110509273A (en) * 2019-08-16 2019-11-29 天津职业技术师范大学(中国职业培训指导教师进修中心) The robot mechanical arm of view-based access control model deep learning feature detects and grasping means
CN110640739A (en) * 2018-11-07 2020-01-03 宁波赛朗科技有限公司 Grabbing industrial robot with center position recognition function
CN110980061A (en) * 2019-12-12 2020-04-10 重庆铁马专用车有限公司 Novel intelligence major possession refuse treatment system
CN111112885A (en) * 2019-11-26 2020-05-08 福尼斯智能装备(珠海)有限公司 Welding system with vision system for feeding and discharging workpieces and self-adaptive positioning of welding seams
CN111216099A (en) * 2018-11-27 2020-06-02 发那科株式会社 Robot system and coordinate conversion method
CN111447366A (en) * 2020-04-27 2020-07-24 Oppo(重庆)智能科技有限公司 Transportation method, transportation device, electronic device, and computer-readable storage medium
CN111815718A (en) * 2020-07-20 2020-10-23 四川长虹电器股份有限公司 Method for quickly switching stations of industrial screw robot based on vision
CN111989540A (en) * 2018-07-13 2020-11-24 深圳配天智能技术研究院有限公司 Workpiece tracking method and system and robot
CN112296999A (en) * 2019-11-12 2021-02-02 太原科技大学 Irregular workpiece machining path generation method based on machine vision
WO2021042376A1 (en) * 2019-09-06 2021-03-11 罗伯特·博世有限公司 Calibration method and apparatus for industrial robot, three-dimensional environment modeling method and device for industrial robot, computer storage medium, and industrial robot operating platform
CN113043334A (en) * 2021-02-23 2021-06-29 上海埃奇机器人技术有限公司 Robot-based photovoltaic cell string positioning method
CN113808197A (en) * 2021-09-17 2021-12-17 山西大学 Automatic workpiece grabbing system and method based on machine learning
CN114347033A (en) * 2022-01-27 2022-04-15 达闼机器人有限公司 Robot article grabbing method and device, robot and storage medium
US20220130066A1 (en) * 2019-01-25 2022-04-28 Sony Interactive Entertainment Inc. Robot controlling system
CN114454177A (en) * 2022-03-15 2022-05-10 浙江工业大学 Robot tail end position compensation method based on binocular stereo vision
CN111216099B (en) * 2018-11-27 2024-09-24 发那科株式会社 Robot system and coordinate conversion method

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107942949A (en) * 2017-03-31 2018-04-20 沈机(上海)智能系统研发设计有限公司 Machine tool vision positioning method and system, and machine tool
CN107942949B (en) * 2017-03-31 2019-01-25 沈机(上海)智能系统研发设计有限公司 Machine tool vision positioning method and system, and machine tool
CN108827154A (en) * 2018-07-09 2018-11-16 深圳辰视智能科技有限公司 Teaching-free robot grasping method and device, and computer-readable storage medium
CN111989540A (en) * 2018-07-13 2020-11-24 深圳配天智能技术研究院有限公司 Workpiece tracking method and system and robot
CN111989540B (en) * 2018-07-13 2022-04-15 深圳配天智能技术研究院有限公司 Workpiece tracking method and system and robot
CN108709621A (en) * 2018-08-02 2018-10-26 河北工业大学 Special-shaped workpiece detection and grasping device based on ultrasonic array
CN108709621B (en) * 2018-08-02 2024-04-26 河北工业大学 Abnormal workpiece detection grabbing device based on ultrasonic array
CN108972556A (en) * 2018-08-14 2018-12-11 广东工业大学 Wire grasping system and method for micro special motor production lines under complex illumination environments
CN108972556B (en) * 2018-08-14 2021-07-09 广东工业大学 Wire grabbing system and method in complex illumination environment on micro special motor production line
CN109015653A (en) * 2018-08-30 2018-12-18 黄河科技学院 Grasping control method and device, storage medium, and electronic device
CN109048918A (en) * 2018-09-25 2018-12-21 华南理工大学 Visual guidance method for a wheelchair robotic arm
CN109048918B (en) * 2018-09-25 2022-02-22 华南理工大学 Visual guide method for wheelchair mechanical arm robot
CN109454501A (en) * 2018-10-19 2019-03-12 江苏智测计量技术有限公司 Machine tool online monitoring system
CN109454638A (en) * 2018-10-31 2019-03-12 昆山睿力得软件技术有限公司 Vision-guided robot grasping system
CN110640739A (en) * 2018-11-07 2020-01-03 宁波赛朗科技有限公司 Grabbing industrial robot with center position recognition function
CN111216099B (en) * 2018-11-27 2024-09-24 发那科株式会社 Robot system and coordinate conversion method
CN111216099A (en) * 2018-11-27 2020-06-02 发那科株式会社 Robot system and coordinate conversion method
CN109623821B (en) * 2018-12-26 2022-04-01 日照市越疆智能科技有限公司 Visual guide method for grabbing articles by mechanical arm
CN109623821A (en) * 2018-12-26 2019-04-16 深圳市越疆科技有限公司 Visual guidance method for robotic grasping of articles
US20220130066A1 (en) * 2019-01-25 2022-04-28 Sony Interactive Entertainment Inc. Robot controlling system
CN109911549A (en) * 2019-01-25 2019-06-21 东华大学 Robot dynamic tracking and grasping system and method for fragile goods
CN109848998A (en) * 2019-03-29 2019-06-07 砚山永盛杰科技有限公司 Vision-guided four-axis flexible robot for the 3C industry
CN110480685A (en) * 2019-05-15 2019-11-22 青岛科技大学 Vision manipulator for an automatic agricultural vehicle wheel production line
CN110102490B (en) * 2019-05-23 2021-06-01 北京阿丘机器人科技有限公司 Assembly line parcel sorting device based on vision technology and electronic equipment
CN110102490A (en) * 2019-05-23 2019-08-09 北京阿丘机器人科技有限公司 Assembly line parcel sorting device based on vision technology and electronic equipment
CN110509273A (en) * 2019-08-16 2019-11-29 天津职业技术师范大学(中国职业培训指导教师进修中心) Robot manipulator detection and grabbing method based on visual deep learning features
CN110509273B (en) * 2019-08-16 2022-05-06 天津职业技术师范大学(中国职业培训指导教师进修中心) Robot manipulator detection and grabbing method based on visual deep learning features
WO2021042376A1 (en) * 2019-09-06 2021-03-11 罗伯特·博世有限公司 Calibration method and apparatus for industrial robot, three-dimensional environment modeling method and device for industrial robot, computer storage medium, and industrial robot operating platform
CN114390963B (en) * 2019-09-06 2024-08-09 罗伯特·博世有限公司 Calibration method and device for industrial robot, three-dimensional environment modeling method and device, computer storage medium and industrial robot operation platform
CN114390963A (en) * 2019-09-06 2022-04-22 罗伯特·博世有限公司 Calibration method and device for industrial robot, three-dimensional environment modeling method and device, computer storage medium and industrial robot operating platform
CN112296999B (en) * 2019-11-12 2022-07-08 太原科技大学 Irregular workpiece machining path generation method based on machine vision
CN112296999A (en) * 2019-11-12 2021-02-02 太原科技大学 Irregular workpiece machining path generation method based on machine vision
CN111112885A (en) * 2019-11-26 2020-05-08 福尼斯智能装备(珠海)有限公司 Welding system with vision system for feeding and discharging workpieces and self-adaptive positioning of welding seams
CN110980061A (en) * 2019-12-12 2020-04-10 重庆铁马专用车有限公司 Novel intelligence major possession refuse treatment system
CN111447366A (en) * 2020-04-27 2020-07-24 Oppo(重庆)智能科技有限公司 Transportation method, transportation device, electronic device, and computer-readable storage medium
CN111815718A (en) * 2020-07-20 2020-10-23 四川长虹电器股份有限公司 Method for quickly switching stations of industrial screw robot based on vision
CN113043334A (en) * 2021-02-23 2021-06-29 上海埃奇机器人技术有限公司 Robot-based photovoltaic cell string positioning method
CN113808197A (en) * 2021-09-17 2021-12-17 山西大学 Automatic workpiece grabbing system and method based on machine learning
WO2023143408A1 (en) * 2022-01-27 2023-08-03 达闼机器人股份有限公司 Article grabbing method for robot, device, robot, program, and storage medium
CN114347033B (en) * 2022-01-27 2023-12-08 达闼机器人股份有限公司 Robot article grabbing method and device, robot and storage medium
CN114347033A (en) * 2022-01-27 2022-04-15 达闼机器人有限公司 Robot article grabbing method and device, robot and storage medium
CN114454177A (en) * 2022-03-15 2022-05-10 浙江工业大学 Robot tail end position compensation method based on binocular stereo vision

Similar Documents

Publication Publication Date Title
CN108161931A (en) Vision-based automatic workpiece identification and intelligent grasping system
CN107618030B (en) Robot dynamic tracking grabbing method and system based on vision
CN109308693B (en) Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
CN108182689B (en) Three-dimensional identification and positioning method for plate-shaped workpiece applied to robot carrying and polishing field
Chen et al. Applying a 6-axis mechanical arm combine with computer vision to the research of object recognition in plane inspection
CN111462154B (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
US8244402B2 (en) Visual perception system and method for a humanoid robot
CN111347411B (en) Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
CN111645074A (en) Robot grabbing and positioning method
CN109202912A (en) A method of objective contour point cloud is registrated based on monocular depth sensor and mechanical arm
CN109454638A (en) Vision-guided robot grasping system
CN113146172B (en) Multi-vision-based detection and assembly system and method
CN103895042A (en) Industrial robot workpiece positioning grabbing method and system based on visual guidance
CN113267452A (en) Engine cylinder surface defect detection method and system based on machine vision
CN112775959A (en) Method and system for determining grabbing pose of manipulator and storage medium
Hsu et al. Development of a faster classification system for metal parts using machine vision under different lighting environments
CN113689509A (en) Binocular vision-based disordered grabbing method and system and storage medium
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
CN116749233A (en) Mechanical arm grabbing system and method based on visual servoing
Buerkle et al. Vision-based closed-loop control of mobile microrobots for microhandling tasks
CN114770461B (en) Mobile robot based on monocular vision and automatic grabbing method thereof
Ruan et al. Feature-based autonomous target recognition and grasping of industrial robots
Cong Visual servoing control of 4-DOF palletizing robotic arm for vision based sorting robot system
Grammatikopoulou et al. Three-dimensional pose estimation of optically transparent microrobots
Guo et al. The research of material sorting system based on Machine Vision

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180615