CN111462154B - Target positioning method and device based on depth vision sensor and automatic grabbing robot


Info

Publication number
CN111462154B
Authority
CN
China
Prior art keywords
target
image
depth
planning
mechanical arm
Prior art date
Legal status
Active
Application number
CN202010122558.2A
Other languages
Chinese (zh)
Other versions
CN111462154A (en
Inventor
汪喆远
游家兴
成春晟
Current Assignee
China Electric Rice Information System Co ltd
Original Assignee
China Electric Rice Information System Co ltd
Priority date
Filing date
Publication date
Application filed by China Electric Rice Information System Co ltd
Priority to CN202010122558.2A
Publication of CN111462154A
Application granted
Publication of CN111462154B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G06T 7/136: Image analysis; segmentation; edge detection involving thresholding
    • B25J 9/12: Programme-controlled manipulators characterised by positioning means for manipulator elements; electric
    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators; characterised by motion, path, trajectory planning
    • G05D 1/02: Control of position or course in two dimensions
    • G06T 7/215: Analysis of motion; motion-based segmentation
    • G06V 10/751: Image or video pattern matching; comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a target positioning method and device based on a depth vision sensor, and an automatic grabbing robot, in the technical field of machine vision. The target positioning method uses the depth vision sensor together with a depth segmentation step to filter out interference from a complex background, locates the target object in the scene, and obtains its spatial position. The automatic grabbing robot performs kinematic planning of the mechanical-arm trajectory from the target's spatial position, transmits motion instructions to the single-chip microcontroller that controls the arm, and the end gripper finally grasps the target autonomously. Because depth segmentation identifies the target object quickly and suppresses irrelevant background information, positioning accuracy is improved. The invention gives the mechanical arm perception capability: the arm plans its motion trajectory autonomously from the target point position and grasps target objects anywhere within its working range, so a change in the target's position does not require the trajectory to be reprogrammed.

Description

Target positioning method and device based on depth vision sensor and automatic grabbing robot
Technical Field
The invention relates to target identification and positioning and to mechanical-arm kinematic planning, and in particular to a target identification method based on a depth vision sensor and to the structural design of an autonomous grabbing robot.
Background
Gripping or manipulating objects is a task frequently performed by robotic arms. For a robotic arm, recognizing and locating the target object is the precondition for grasping it successfully. However, a typical intelligent robot cannot recognize and locate a target in a complex environment as readily as a human can: object features are usually extracted by image-processing algorithms for recognition, and accurate, fast recognition of a target in a complex environment has long been a difficult problem.
In addition, a conventional mechanical arm usually performs grasping according to a fixed, preset procedure and cannot take in external information; when the target position changes, the control program has to be reset, which reduces the arm's working efficiency. Such an arm likewise has no capability for autonomous grasping.
Disclosure of Invention
Purpose of the invention: the invention aims to provide a target positioning method and device based on a depth vision sensor which, by combining the depth vision sensor with a well-matched processing algorithm, can locate a target at high speed.
A further aim of the invention is to provide an automatic grabbing robot, overcoming the limitation that a conventional mechanical arm cannot grasp autonomously and must be reprogrammed repeatedly.
Technical scheme: according to a first aspect of the invention, there is provided a target positioning method based on a depth vision sensor, comprising the following steps:
acquiring, with a depth vision sensor, a scene image of the space in which the target object is located;
screening the depth value of each pixel of the depth-augmented RGB image acquired by the depth vision sensor, retaining the pixels whose depth values lie within the effective recognition depth range, and forming a depth-segmented image;
performing graying, image filtering and threshold segmentation on the depth-segmented RGB image to obtain a clean ("ideal") image in which every pixel keeps its depth information;
traversal-matching a template of the target to be grasped against the ideal image with a template matching method improved by contour Hu-moment features, thereby locating the target in the image and obtaining its two-dimensional coordinates in the image coordinate system;
and combining the depth values of the target's pixels with the extracted two-dimensional image coordinates and using the transformation between the image coordinate system and the world coordinate system to calculate the spatial coordinates of the target.
According to a second aspect of the present invention, there is provided a target positioning device based on a depth vision sensor, comprising:
an image acquisition module, which acquires, with a depth vision sensor, a scene image of the space in which the target object is located;
an image processing module, which screens the depth value of each pixel of the depth-augmented RGB image acquired by the depth vision sensor, retains the pixels whose depth values lie within the effective recognition depth range, and forms a depth-segmented image;
a feature extraction module, which performs graying, image filtering and threshold segmentation on the depth-segmented RGB image to obtain an ideal image in which every pixel keeps its depth information;
a recognition and positioning module, which traversal-matches a template of the target to be grasped against the ideal image with a template matching method improved by contour Hu-moment features, locates the target in the image and obtains its two-dimensional coordinates in the image coordinate system;
and a coordinate output module, which combines the depth values of the target's pixels with the extracted two-dimensional image coordinates and uses the transformation between the image coordinate system and the world coordinate system to calculate the spatial coordinates of the target.
According to a third aspect of the present invention, there is provided an automatic grabbing robot comprising the target positioning device of the second aspect, a motion planning device and an execution device. The target positioning device computes the three-dimensional coordinates of the target in space from a scene image of the space in which the target object is located; the motion planning device obtains these spatial coordinates from the target positioning device, performs motion planning and issues motion instructions to the execution device; and the execution device receives the motion instructions and performs the grasping task.
Further, the execution device comprises a six-degree-of-freedom mechanical arm, an arm controller and joint servo motors. The motion planning device uses the acquired spatial position of the target to plan the motion trajectory of the end of the six-degree-of-freedom arm in joint space; the arm controller receives motion instructions from the motion planning device and drives the joint servo motors so that the gripper moves to the target point and grasps it autonomously, while feeding the gripper's pose back to the motion planning device for the next round of planning.
The beneficial effects are that:
1. according to the target positioning method, the interference of irrelevant background information is reduced through depth segmentation, so that a target object can be identified more quickly and accurately.
2. The invention gives the mechanical arm perception capability: the arm plans its motion trajectory autonomously from the target point position and grasps the target object anywhere within its working range, so a change in the target's position does not require the trajectory to be reprogrammed.
Drawings
FIG. 1 is a general block diagram of a system of the present invention;
FIG. 2 is a diagram of the operation of the visual recognition system of the present invention;
FIG. 3 is a diagram of the operation of the motion control system of the present invention;
FIG. 4 is a block diagram of a robotic arm of the present invention;
FIG. 5 is a software node configuration diagram of the entire system of the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
In one embodiment, the overall structure of the system of the present invention is shown in fig. 1, and is mainly divided into an upper computer and a bottom hardware system:
1) The upper computer is the control center of the whole system. It receives image information from the sensor and pose feedback from the mechanical arm, and performs tasks such as image processing, target identification and positioning, and mechanical-arm motion planning; it is a computer with strong computing capability.
2) The bottom-level hardware is the actuating part of the whole system. It comprises the vision sensor that collects image information of the target, the mechanical arm that performs the grasping work, the arm controller that drives each joint motor of the arm, and the communication module that carries information between the controller and the upper computer. The depth vision sensor is fixed directly facing the mechanical arm's spatial coordinate system, which makes it possible both to locate the spatial position of a target in the scene and to grasp the target autonomously. The mechanical arm must have six degrees of freedom so that its end can reach any point in the workspace.
In this embodiment, the upper computer performs the computation and planning functions of the target positioning device and the motion planning device, while the hardware other than the vision sensor performs the function of the execution device. Functionally, the system can be divided into a visual recognition system and a motion control system:
1) The visual recognition system recognizes and locates the target object. The image acquired by the depth vision sensor is first processed by a sequence of operations, graying, filtering, threshold segmentation and depth segmentation, to extract the features of the target object; the target is then identified with a template matching method improved by contour Hu-moment features, the coordinates of the target's grasp point are extracted, and positioning of the target object is completed by combining these with the depth information acquired by the depth vision sensor. The working process is shown in figure 2.
Specifically, the visual recognition system works as follows:
a) Screen the depth value of each pixel of the depth-augmented RGB image acquired by the depth vision sensor, retain the pixels whose depth values lie within the effective recognition depth range, and form a depth-segmented image.
The image information transmitted by the depth vision sensor contains the R, G, B values and the depth value of every pixel. The robot's working area is the effective recognition area, and its depth distance from the vision sensor is a bounded interval. By setting the depth range of the working area and screening every pixel by its depth information, the pixel information of most irrelevant regions is filtered out, which effectively raises the recognition rate.
b) Perform graying, image filtering and threshold segmentation on the depth-segmented RGB image to obtain an ideal image in which every pixel keeps its depth information, as follows.
The RGB colour information is complex and unfavourable for extracting morphological features, so the image must first be processed. First, the RGB image is converted into a grayscale image with a three-channel weighted graying method, which allows the target to be segmented and its features extracted from the gray values. The three-channel weighted graying formula is:
Gray = 0.114 B + 0.587 G + 0.299 R
Next, the grayed image is filtered. The main purpose of image filtering is to suppress interference noise while preserving the original detail of the image as far as possible. Median filtering, mean filtering and Gaussian filtering were compared, and the system adopts median filtering, which removes interference noise most thoroughly while keeping the target's edge features.
Finally, threshold segmentation is applied to the filtered image. Threshold segmentation partitions the image into different regions according to a chosen criterion and selects the target region. After comparing common methods such as OTSU (Otsu) segmentation, the minimum-error method and the iterative threshold method, the system uses the Otsu method, which separates the target from the background cleanly, with a clear edge contour and no noise points. Graying, image filtering and threshold segmentation of the depth-segmented RGB image thus yield an ideal image in which every pixel keeps its depth information.
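A minimal sketch of this preprocessing chain, using OpenCV for illustration (the median-filter kernel size is an assumed value; cv2.cvtColor applies the same weighted-graying formula quoted above):

```python
import cv2

def preprocess(segmented_bgr):
    """Graying -> median filtering -> Otsu thresholding, as described in the text.

    OpenCV stores colour images in BGR order; cv2.cvtColor(..., COLOR_BGR2GRAY)
    applies the 0.114*B + 0.587*G + 0.299*R weighting quoted above.
    """
    gray = cv2.cvtColor(segmented_bgr, cv2.COLOR_BGR2GRAY)   # three-channel weighted graying
    denoised = cv2.medianBlur(gray, 5)                        # median filter; kernel size is an assumed value
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu threshold segmentation
    return binary
```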
c) Traversal-match a template of the target to be grasped against the ideal image with a template matching method improved by contour Hu-moment features, locate the target in the image and obtain its two-dimensional coordinates in the image coordinate system.
A contour-based template matching method confirms the position of the target in the acquired image by traversing the image with the preset target template, but the matching similarity is strongly affected by the target's placement position and orientation against the background. To overcome this, the system's detection method additionally extracts Hu-moment features from the target contour. Hu-moment features are invariant to scaling, translation and rotation, so the target's position in the ideal image is determined by comparing the similarity between the Hu-moment features of the template contour and those of the contours found in the searched image. This improved template matching method effectively avoids losing the target when its placement position or orientation differs from the template. Traversal matching of the target template against the ideal image confirms that the target is present, locates it in the image, and yields its two-dimensional coordinates in the image coordinate system.
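The contour-based Hu-moment comparison can be illustrated as follows. This is a sketch under assumptions: the template contour is taken to have been extracted beforehand from the template image, cv2.matchShapes is used as the Hu-moment similarity measure, and the rejection threshold is an arbitrary illustrative value.

```python
import cv2

def locate_target(binary_image, template_contour, max_distance=0.1):
    """Return the pixel coordinates (u, v) of the contour that best matches the template.

    cv2.matchShapes compares two contours through their Hu moment invariants,
    which are insensitive to translation, scale and rotation.  template_contour
    is assumed to have been extracted beforehand from the template image;
    max_distance is an illustrative rejection threshold.
    """
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV >= 4 signature
    best, best_score = None, float("inf")
    for contour in contours:
        score = cv2.matchShapes(template_contour, contour, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best, best_score = contour, score
    if best is None or best_score > max_distance:
        return None                                           # no sufficiently similar contour
    m = cv2.moments(best)
    if m["m00"] == 0:
        return None
    u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]           # contour centroid as the grasp point
    return int(round(u)), int(round(v))
```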
d) Calculate the spatial coordinates of the target by combining the depth values of the target's pixels with the extracted two-dimensional image coordinates, using the transformation between the image coordinate system and the world coordinate system.
The two-dimensional coordinates of the target in the image coordinate system only describe where the target lies in the picture, whereas the mechanical arm needs the target's coordinates in the arm coordinate system and the world coordinate system. The two-dimensional image coordinates must therefore be converted according to the relative positions of the vision sensor and the mechanical arm in the world coordinate system. The camera's intrinsic parameters are obtained with Zhang's (Zhang Zhengyou) calibration method, and the extrinsic parameters from the relative pose of the vision sensor and the mechanical arm. The conversion formula is:

$$ Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \,[\, R \mid t \,]\, \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} $$

where (u, v) are the pixel coordinates of the target in the picture; (X_W, Y_W, Z_W) are the coordinates of the object point in the world coordinate system; Z_C is the distance from the image coordinate plane to the object, i.e. the depth value; K is the camera intrinsic parameter matrix; and [R | t] is the extrinsic parameter matrix, composed of the rotation matrix R and the translation vector t.
Combining the depth value Z_C of each target pixel with the extracted two-dimensional image coordinates (u, v), the spatial coordinates (X_W, Y_W, Z_W) of the target are computed from this transformation between the image coordinate system and the world coordinate system.
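For illustration, the back-projection implied by the formula above can be written directly from K, R and t. The function below is a sketch of that calculation, not the patented implementation itself; variable names are chosen to mirror the symbols in the text.

```python
import numpy as np

def pixel_to_world(u, v, z_c, K, R, t):
    """Back-project an image point with known depth into world coordinates.

    Implements Z_C [u, v, 1]^T = K (R X_W + t): first recover the point in the
    camera frame, then invert the rigid extrinsic transform.  K is the 3x3
    intrinsic matrix (e.g. from Zhang's calibration), R and t the rotation
    matrix and translation vector of the extrinsic parameters.
    """
    pixel = np.array([u, v, 1.0])
    x_cam = z_c * np.linalg.inv(K) @ pixel   # point in the camera frame
    x_world = R.T @ (x_cam - t)              # R is orthonormal, so R.T inverts it
    return x_world                           # (X_W, Y_W, Z_W)
```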
2) After the motion control system receives the coordinates of the target point, the upper computer performs kinematic planning for the mechanical arm: using the acquired target spatial position, it plans the motion trajectory of the end of the six-degree-of-freedom arm in joint space and sends motion instructions downward; the bottom-level hardware receives the instructions and executes the grasping task, moving the gripper to the target point for autonomous grasping while feeding the pose back to the upper computer for the next round of planning. The system uses a fifth-order (quintic) polynomial planning method in joint space: given the three-dimensional target coordinates computed by the vision processing system, a planning algorithm running on the ROS platform calculates the angle, angular velocity and angular acceleration of each joint of the arm, forming the motion trajectory and completing trajectory planning. Because the vision recognition system updates the target position within the field of view in real time, a change in the target object's position changes the coordinates fed to the grasping system, the system re-plans the trajectory, and autonomous grasping is achieved without reprogramming. The working process of the motion control system is shown in fig. 3, and the structure of the mechanical arm in fig. 4. The arm imitates the human arm and consists of 1 a base joint, 2 a shoulder joint, 3 an elbow joint, 4 an arm joint, 5 a wrist joint and 6 a gripper (finger clamp) joint; the net height from the arm base to the gripping device is 430 mm and the diameter of the base disc is 120 mm.
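As an illustration of the fifth-order polynomial planning mentioned above, the sketch below solves the quintic boundary-value problem for a single joint and samples angle, angular velocity and angular acceleration along the trajectory. The zero boundary velocities and accelerations are assumed defaults, not values prescribed by the patent.

```python
import numpy as np

def quintic_coeffs(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    """Coefficients c0..c5 of q(t) = c0 + c1*t + ... + c5*t^5 for one joint,
    meeting position, velocity and acceleration constraints at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],         # q(0)   = q0
        [0, 1, 0,    0,       0,        0],         # q'(0)  = v0
        [0, 0, 2,    0,       0,        0],         # q''(0) = a0
        [1, T, T**2, T**3,    T**4,     T**5],      # q(T)   = qf
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],    # q'(T)  = vf
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],   # q''(T) = af
    ], dtype=float)
    return np.linalg.solve(A, np.array([q0, v0, a0, qf, vf, af], dtype=float))

def sample_trajectory(coeffs, T, n=50):
    """Angle, angular velocity and angular acceleration along the planned path."""
    t = np.linspace(0.0, T, n)
    q   = sum(coeffs[i] * t**i for i in range(6))
    qd  = sum(i * coeffs[i] * t**(i - 1) for i in range(1, 6))
    qdd = sum(i * (i - 1) * coeffs[i] * t**(i - 2) for i in range(2, 6))
    return t, q, qd, qdd
```

In practice one such polynomial would be computed per joint, with the final joint angles obtained from the inverse kinematics of the target point.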
The above process is developed and integrated on the ROS platform; the connections between the software nodes of each subsystem are shown in fig. 5. The ROS platform contains a large number of nodes, messages, services, toolkits and library files, and its file system manages this code efficiently. The visual recognition and arm motion planning in the invention involve many software nodes, which ROS can manage clearly and unambiguously. The iai-Kinect2 node initializes the vision sensor and collects images, which are passed to the cv_processing_positioning node for image processing; that node identifies the target, outputs the target coordinates and passes them to the control_node node, which issues a planning request to the move_group motion planning node. The move_group node publishes joint trajectory information to the traject_client client, which forwards it to the arm controller, and the planning result is fed back to control_node through traject_client at the same time. The move_group node also publishes the joint trajectory to the rviz simulation node so that the arm model is displayed synchronously in ROS; the model likewise receives the state information fed back by the arm via the robot_state_publisher state publishing node, keeping the physical arm and the simulation in step.
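A skeleton of the cv_processing_positioning node described above might look as follows. This is a hedged sketch, not the patented implementation: the topic names depend on the actual iai_kinect2 launch configuration, the output message type (geometry_msgs/PointStamped) is an assumption, and the recognition pipeline itself is elided.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped
from cv_bridge import CvBridge

class PositioningNode(object):
    """Consume registered colour and depth images, run the recognition
    pipeline, and publish the target's world coordinates."""

    def __init__(self):
        self.bridge = CvBridge()
        # Topic names are illustrative; they depend on the iai_kinect2 launch configuration.
        rospy.Subscriber("/kinect2/qhd/image_color_rect", Image, self.on_color)
        rospy.Subscriber("/kinect2/qhd/image_depth_rect", Image, self.on_depth)
        self.pub = rospy.Publisher("/target_position", PointStamped, queue_size=1)
        self.depth = None

    def on_depth(self, msg):
        self.depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")

    def on_color(self, msg):
        if self.depth is None:
            return
        bgr = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        # ... depth segmentation, preprocessing, Hu-moment matching and
        # back-projection (see the sketches above) would run here ...
        point = PointStamped()
        point.header.stamp = rospy.Time.now()
        point.header.frame_id = "world"
        self.pub.publish(point)   # point.point would carry the computed (X_W, Y_W, Z_W)

if __name__ == "__main__":
    rospy.init_node("cv_processing_positioning")
    PositioningNode()
    rospy.spin()
```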

Claims (5)

1. A target positioning method based on a depth vision sensor, characterized by comprising the following steps:
acquiring, with a depth vision sensor, a scene image of the space in which the target object is located;
screening the depth value of each pixel of the depth-augmented RGB image acquired by the depth vision sensor, retaining the pixels whose depth values lie within the effective recognition depth range, and forming a depth-segmented image;
performing graying, image filtering and threshold segmentation on the depth-segmented RGB image to obtain an ideal image in which every pixel keeps its depth information, wherein the graying converts the RGB image into a grayscale image with a three-channel weighted graying method whose formula is Gray = 0.114 B + 0.587 G + 0.299 R; the image filtering filters the grayed image with a median filter; and the threshold segmentation thresholds the filtered image with the OTSU (Otsu) method;
traversal-matching a template of the target to be grasped against the ideal image with a template matching method improved by contour Hu-moment features, locating the target in the image and obtaining its two-dimensional coordinates in the image coordinate system, wherein the improved template matching method determines the target's position in the ideal image by comparing the similarity between the Hu-moment features of the template contour and the contour Hu-moment features in the searched image;
calculating the spatial coordinates of the target by combining the depth values of the target's pixels with the extracted two-dimensional image coordinates, using the transformation between the image coordinate system and the world coordinate system:

$$ Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \,[\, R \mid t \,]\, \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} $$

wherein (u, v) are the pixel coordinates of the target in the picture; (X_W, Y_W, Z_W) are the coordinates of the object point in the world coordinate system; Z_C is the distance from the image coordinate plane to the object, i.e. the depth value; K is the camera intrinsic parameter matrix; and [R | t] is the extrinsic parameter matrix, composed of the rotation matrix R and the translation vector t;
and planning, from the spatial coordinates of the target, the motion trajectory of the end of a six-degree-of-freedom mechanical arm in joint space and sending motion instructions to the execution mechanism, so that the execution mechanism moves to the target point and grasps it autonomously while feeding its pose back for the next round of planning.
2. A target positioning device based on a depth vision sensor, characterized by comprising:
an image acquisition module, which acquires, with a depth vision sensor, a scene image of the space in which the target object is located;
an image processing module, which screens the depth value of each pixel of the depth-augmented RGB image acquired by the depth vision sensor, retains the pixels whose depth values lie within the effective recognition depth range, and forms a depth-segmented image;
a feature extraction module, which performs graying, image filtering and threshold segmentation on the depth-segmented RGB image to obtain an ideal image in which every pixel keeps its depth information, wherein the graying converts the RGB image into a grayscale image with a three-channel weighted graying method whose formula is Gray = 0.114 B + 0.587 G + 0.299 R; the image filtering filters the grayed image with a median filter; and the threshold segmentation thresholds the filtered image with the OTSU (Otsu) method;
a recognition and positioning module, which traversal-matches a template of the target to be grasped against the ideal image with a template matching method improved by contour Hu-moment features, locates the target in the image and obtains its two-dimensional coordinates in the image coordinate system, wherein the improved template matching method determines the target's position in the ideal image by comparing the similarity between the Hu-moment features of the template contour and the contour Hu-moment features in the searched image;
and a coordinate output module, which calculates the spatial coordinates of the target by combining the depth values of the target's pixels with the extracted two-dimensional image coordinates, using the transformation between the image coordinate system and the world coordinate system:

$$ Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \,[\, R \mid t \,]\, \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} $$

wherein (u, v) are the pixel coordinates of the target in the picture; (X_W, Y_W, Z_W) are the coordinates of the object point in the world coordinate system; Z_C is the distance from the image coordinate plane to the object, i.e. the depth value; K is the camera intrinsic parameter matrix; and [R | t] is the extrinsic parameter matrix, composed of the rotation matrix R and the translation vector t.
3. An automatic grabbing robot, characterized by comprising a target positioning device, a motion planning device and an execution device, wherein the target positioning device computes the three-dimensional coordinates of the target in space from a scene image of the space in which the target object is located; the motion planning device obtains the target's spatial coordinates from the target positioning device, performs motion planning and issues motion instructions to the execution device; and the execution device receives the motion instructions and performs the grasping task.
4. The automatic grabbing robot according to claim 3, characterized in that the execution device comprises a six-degree-of-freedom mechanical arm, an arm controller and joint servo motors; the motion planning device uses the acquired spatial position of the target to plan the motion trajectory of the end of the six-degree-of-freedom arm in joint space, and the arm controller receives motion instructions from the motion planning device and drives the joint servo motors so that the gripper moves to the target point and grasps it autonomously, while feeding the gripper's pose back to the motion planning device for the next round of planning.
5. The automatic grabbing robot according to claim 4, characterized in that the motion planning device uses a fifth-order (quintic) polynomial planning method in joint space: from the three-dimensional coordinates of the target point output by the target positioning device, a planning algorithm running on the ROS platform calculates the angle, angular velocity and angular acceleration of each joint of the mechanical arm, forming the motion trajectory and completing trajectory planning.
Application CN202010122558.2A, filed 2020-02-27 (priority date 2020-02-27): Target positioning method and device based on depth vision sensor and automatic grabbing robot; granted as CN111462154B (Active)

Priority Applications (1)

CN202010122558.2A (CN111462154B): Target positioning method and device based on depth vision sensor and automatic grabbing robot; priority date 2020-02-27, filing date 2020-02-27

Applications Claiming Priority (1)

CN202010122558.2A (CN111462154B): Target positioning method and device based on depth vision sensor and automatic grabbing robot; priority date 2020-02-27, filing date 2020-02-27

Publications (2)

Publication number and publication date
CN111462154A: 2020-07-28
CN111462154B: 2024-01-23

Family

ID=71679995

Family Applications (1)

Application number, title, priority date, filing date
CN202010122558.2A (Active; granted as CN111462154B): Target positioning method and device based on depth vision sensor and automatic grabbing robot; priority date 2020-02-27, filing date 2020-02-27

Country Status (1)

Country Link
CN (1) CN111462154B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112834764B (en) * 2020-12-28 2024-05-31 深圳市人工智能与机器人研究院 Sampling control method and device for mechanical arm and sampling system
CN113065392A (en) * 2021-02-24 2021-07-02 苏州盈科电子有限公司 Robot tracking method and device
CN113510718A (en) * 2021-05-11 2021-10-19 江苏师范大学 Intelligent meal selling robot based on machine vision and use method thereof
CN113696186B (en) * 2021-10-09 2022-09-30 东南大学 Mechanical arm autonomous moving and grabbing method based on visual-touch fusion under complex illumination condition
CN114290327B (en) * 2021-11-25 2023-05-30 江苏集萃智能制造技术研究所有限公司 Six-axis mechanical arm control system based on first-order variable gain ADRC
CN113997292B (en) * 2021-11-30 2023-05-09 国网四川省电力公司南充供电公司 Operation method of mechanical arm based on machine vision, medium and electronic equipment
CN114505869A (en) * 2022-02-17 2022-05-17 西安建筑科技大学 Chemical reagent intelligent distribution machine control system
CN114355953B (en) * 2022-03-18 2022-07-12 深圳市朗宇芯科技有限公司 High-precision control method and system of multi-axis servo system based on machine vision
CN114851209B (en) * 2022-06-21 2024-04-19 上海大学 Industrial robot working path planning optimization method and system based on vision
CN115786054B (en) * 2022-11-07 2024-07-12 山西万立科技有限公司 Ground jar unstrained spirits material extracting system of eight robots of suspension type


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2882265C (en) * 2007-08-17 2017-01-03 Zimmer, Inc. Implant design analysis suite

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107414832A (en) * 2017-08-08 2017-12-01 华南理工大学 A kind of mobile mechanical arm crawl control system and method based on machine vision
CN109035200A (en) * 2018-06-21 2018-12-18 北京工业大学 A kind of bolt positioning and position and posture detection method based on the collaboration of single binocular vision
CN110648367A (en) * 2019-08-15 2020-01-03 大连理工江苏研究院有限公司 Geometric object positioning method based on multilayer depth and color visual information

Also Published As

Publication number and publication date
CN111462154A: 2020-07-28

Similar Documents

Publication Publication Date Title
CN111462154B (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
CN111421539A (en) Industrial part intelligent identification and sorting system based on computer vision
WO2023056670A1 (en) Mechanical arm autonomous mobile grabbing method under complex illumination conditions based on visual-tactile fusion
CN111347411B (en) Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
Song et al. CAD-based pose estimation design for random bin picking using a RGB-D camera
CN111496770A (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
CN110480637B (en) Mechanical arm part image recognition and grabbing method based on Kinect sensor
CN111515945A (en) Control method, system and device for mechanical arm visual positioning sorting and grabbing
CN110666801A (en) Grabbing industrial robot for matching and positioning complex workpieces
CN111085997A (en) Capturing training method and system based on point cloud acquisition and processing
CN112102368B (en) Deep learning-based robot garbage classification and sorting method
CN107220601B (en) Target capture point prediction method based on online confidence degree discrimination
CN113814986B (en) Method and system for controlling SCARA robot based on machine vision
CN112906797A (en) Plane grabbing detection method based on computer vision and deep learning
CN112926503B (en) Automatic generation method of grabbing data set based on rectangular fitting
CN110640741A (en) Grabbing industrial robot with regular-shaped workpiece matching function
CN114882109A (en) Robot grabbing detection method and system for sheltering and disordered scenes
CN115070781B (en) Object grabbing method and two-mechanical-arm cooperation system
CN112257293A (en) Non-standard object grabbing method and device based on ROS
CN114463244A (en) Vision robot grabbing system and control method thereof
US20230173660A1 (en) Robot teaching by demonstration with visual servoing
CN117841041B (en) Mechanical arm combination device based on multi-arm cooperation
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
Zhao et al. POSITIONING AND GRABBING TECHNOLOGY OF INDUSTRIAL ROBOT BASED ON VISION.
CN116749233A (en) Mechanical arm grabbing system and method based on visual servoing

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant