CN108381549B - Binocular vision guide robot rapid grabbing method and device and storage medium - Google Patents


Info

Publication number
CN108381549B
Authority
CN
China
Prior art keywords
point
grabbing
binocular vision
matching
depth information
Prior art date
Legal status
Active
Application number
CN201810076349.1A
Other languages
Chinese (zh)
Other versions
CN108381549A (en)
Inventor
陈力
Current Assignee
Guangdong Sansan Technology Co ltd
Original Assignee
Guangdong Sansan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Sansan Technology Co ltd filed Critical Guangdong Sansan Technology Co ltd
Priority to CN201810076349.1A priority Critical patent/CN108381549B/en
Publication of CN108381549A publication Critical patent/CN108381549A/en
Application granted granted Critical
Publication of CN108381549B publication Critical patent/CN108381549B/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/10 Programme-controlled manipulators characterised by positioning means for manipulator elements
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/04 Viewing devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a binocular-vision-guided robot rapid grabbing method, device and storage medium. The method comprises: rectifying a left camera and a right camera; obtaining the coordinate position of a target point in the left view with a matching algorithm based on edge contours; solving, for that target point, the matching result in the right view with an adaptive-weight matching algorithm to obtain a disparity value; converting the disparity value into depth information; and calculating and outputting the spatial coordinates of the target grabbing point from the depth information, guiding the robot to complete the rapid positioning and grabbing action. The embodiment of the invention also discloses a corresponding device and storage medium. The method reduces interference from ambient light, runs 3 to 6 times faster than traditional binocular-vision three-dimensional positioning algorithms, and reaches a precision of 0.1 mm in the horizontal direction and within 1 mm in the vertical direction, which is particularly suitable for the grabbing precision requirements of a SCARA robot.

Description

Binocular vision guide robot rapid grabbing method and device and storage medium
Technical Field
The invention relates to a machine vision processing method, and in particular to a binocular-vision-guided robot rapid grabbing method, device and storage medium.
Background
Binocular stereo vision acquires the depth information of a measured object by imitating the way human eyes process a scene, and from it derives the object's spatial position; it offers non-destructive, real-time measurement and is therefore widely applied across industries. While preserving advantages such as non-contact operation and real-time performance, the technology can accurately obtain the spatial position of a target point, and can be applied to three-dimensional high-precision servo control, high-precision spatial positioning, high-precision three-dimensional measurement, and the like. Machining and positioning accuracy directly affects the accuracy of a finished part, and for precision parts the positioning accuracy directly determines whether the machined part is usable; since the machining and positioning of precision and ultra-precision parts usually must be non-contact, accurate positioning methods for precision parts based on binocular stereo vision have emerged.
In traditional binocular-vision-guided robot grabbing, matching and positioning are slow: positioning alone often takes 1 to 2 s or more, and the whole grabbing action cycle usually exceeds 4 s, so grabbing efficiency is hard to improve.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a method, an apparatus and a storage medium for rapid grabbing by a binocular-vision-guided robot, which can rapidly identify the depth information of a grabbing target and rapidly carry out workpiece grabbing.
In order to solve the technical problem, the embodiment of the invention provides a binocular vision guided robot rapid grabbing method, which comprises the following steps:
Step 1: after the left camera and the right camera are rectified, obtain the coordinate position of a target point in the left view using a matching algorithm based on edge contours;
Step 2: for the coordinate position of the target point, obtain its matching result in the right view through an adaptive-weight matching algorithm, and obtain a disparity value;
Step 3: convert the disparity value into depth information;
Step 4: calculate and output the spatial coordinates of the target grabbing point from the depth information, and guide the robot to complete the rapid positioning and grabbing action.
Further, step 1 comprises the following steps:
searching for the contour edge of the target object using a Canny algorithm;
storing the contour data: the x- and y-direction gradient information of each coordinate point on the edge contour is stored as a template model, and the coordinate points are rearranged into a point set starting from the barycenter coordinate.
Still further, step 1 comprises the following step:
comparing the template model with the search image at all positions using a similarity measure.
Still further, step 2 comprises the following step:
searching the right view for the matching point corresponding to the target point using an adaptive-weight binocular stereo matching algorithm, and calculating the disparity value from the matching point and the target point.
Further, step 3 specifically comprises calculating the depth information by the following formula:

Z = f · T / d

where Z is the depth information, f is the camera focal length, T is the baseline distance between the left and right cameras, and d is the disparity value.
Correspondingly, an embodiment of the invention also provides a device for binocular-vision-guided robot rapid grabbing, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
Accordingly, an embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
The embodiment of the invention has the following beneficial effects: by adopting a matching algorithm based on edge contours, the invention is only slightly affected by ambient-light interference; its stereo matching is 3 to 6 times faster than traditional binocular-vision three-dimensional positioning algorithms, with a precision of 0.1 mm in the horizontal direction and within 1 mm in the vertical direction, which is particularly suitable for the grabbing precision requirements of a SCARA robot.
Drawings
FIG. 1 is a schematic view of the overall flow of the method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
Reference is made to the flow chart shown in figure 1.
The binocular vision guided robot rapid grabbing method provided by the embodiment of the invention is carried out through the following steps.
Before starting, the left and right cameras of the binocular camera are each preprocessed: the camera calibration intrinsic parameters are read, and epipolar rectification is performed on the left and right images.
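As a concrete illustration of this preprocessing, the following minimal Python/OpenCV sketch rectifies one left/right pair. It assumes the calibration results (intrinsics K1, K2, distortion coefficients D1, D2, and the rotation R and translation T between the cameras) were obtained offline; the helper name and parameters are illustrative, not the patent's.

    import cv2

    def rectify_pair(left, right, K1, D1, K2, D2, R, T):
        """Epipolar-rectify a left/right image pair (illustrative helper)."""
        size = (left.shape[1], left.shape[0])  # (width, height)
        R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
        m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
        m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
        left_r = cv2.remap(left, m1x, m1y, cv2.INTER_LINEAR)
        right_r = cv2.remap(right, m2x, m2y, cv2.INTER_LINEAR)
        return left_r, right_r, Q  # Q can reproject (u, v, d) to 3-D in step 4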
A region of interest (ROI) is created within the rectified left view.
First, creating the edge-based template.
1. Searching for the contour edge of the target object using the Canny algorithm.
Step 1: smooth the image with a Gaussian filter.

The formula:

H(x, y) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²))

G(x, y) = f(x, y) * H(x, y)
Step 2: compute the image gradient with first-order finite differences of the partial derivatives:

P(x, y) ≈ [f(x+1, y) - f(x, y) + f(x+1, y+1) - f(x, y+1)] / 2

Q(x, y) ≈ [f(x, y+1) - f(x, y) + f(x+1, y+1) - f(x+1, y)] / 2

Image gradient magnitude:

M(x, y) = √(P(x, y)² + Q(x, y)²)

Image gradient direction:

θ(x, y) = arctan(Q(x, y) / P(x, y))
Step 3: non-maximum suppression.

At each point, the central pixel M of the neighborhood is compared with the two pixels along its gradient line. If the gradient value of M is not larger than the gradient values of the two adjacent pixels along the gradient line, set M to 0. Concretely:

- let the gradient map obtained in the previous step be G(x, y), and initialize N(x, y) = G(x, y);
- find n pixel points along the gradient direction and the anti-gradient direction; if G(x, y) is not the maximum among these points, set N(x, y) to 0, otherwise keep N(x, y) unchanged.
Step 4: dual thresholding.

A typical way to reduce the number of false edge segments is to apply a threshold to N[i, j]: all values below the threshold are set to zero. A dual-threshold algorithm is used here: two thresholds T1 and T2, with 2T1 ≈ T2, are applied to the non-maximum-suppressed image, yielding two thresholded edge images N1[i, j] and N2[i, j]. Since N2[i, j] is obtained with the high threshold, it contains few false edges, but its contours may have gaps (they are not closed). The dual-threshold method links edges into contours in N2[i, j]; whenever the end point of a contour is reached, the algorithm looks among the 8-neighbor positions in N1[i, j] for edges that can be joined to the contour, and keeps collecting edges from N1[i, j] in this way until the contour in N2[i, j] is closed.
2. Preserving the contour data.

The x- and y-direction gradient information of each coordinate point on the edge contour is stored as the template model. The coordinate points are rearranged into a point set whose starting point is the barycenter coordinate.
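A minimal Python/OpenCV sketch of this template-creation step follows. OpenCV's Canny and Sobel operators stand in for Steps 1-4 above; the function name and the thresholds (chosen so that 2·T1 ≈ T2, as described) are illustrative assumptions.

    import cv2
    import numpy as np

    def create_edge_template(template_img, t1=50, t2=100):
        """Gradient per edge point, re-origined at the barycenter (sketch)."""
        # template_img: grayscale image of the taught object
        edges = cv2.Canny(template_img, t1, t2)         # dual-threshold edge map
        gx = cv2.Sobel(template_img, cv2.CV_32F, 1, 0)  # x-direction gradient
        gy = cv2.Sobel(template_img, cv2.CV_32F, 0, 1)  # y-direction gradient
        ys, xs = np.nonzero(edges)                      # edge point coordinates
        cx, cy = xs.mean(), ys.mean()                   # barycenter of the contour
        points = np.stack([xs - cx, ys - cy], axis=1)   # point set from barycenter
        grads = np.stack([gx[ys, xs], gy[ys, xs]], axis=1)
        grads /= np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12  # unit d_i
        return points, grads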
Secondly, searching for the template in the single view to obtain the position of the target object.
1. Model similarity measurement and optimization acceleration strategy
Let the contour template point set be:

P = {p_i = (x_i, y_i)^T, i = 1, …, n}

and let the gradient in the x and y directions at each point be:

d_i = (t_i, u_i)^T

where i indexes the pixel sequence of the template contour.
The gradient of the image to be searched is obtained as:

e(u, v) = (e_x(u, v), e_y(u, v))^T

where u is the row coordinate and v is the column coordinate of the image to be searched.
In the matching process, the template model is compared with the search image at every position using a similarity measure. The idea behind the similarity measure is to sum, over all points of the model data set, the normalized dot products of the template gradient vectors and the gradient vectors of the search image; this produces a score value at each point of the search image. The formula is as follows:

S(u, v) = (1/n) · Σ_{i=1..n} ⟨d_i, e((u, v) + p_i)⟩ / (|d_i| · |e((u, v) + p_i)|)

If there is a perfect match between the template model and the search image, this function returns a score of 1; the score corresponds to the portion of the object visible in the search image, and if the object is not present in the search image the score is 0.
Preferably, in order to speed up the search, a minimum score Smin is set for the similarity measure, so that not all points of the template model need to be evaluated. To decide whether the score S(u, v) can still be reached after evaluating part of the points, the partial sum Sm must be computed. Sm after the first m points is defined as follows:
Sm = (1/n) · Σ_{i=1..m} ⟨d_i, e((u, v) + p_i)⟩ / (|d_i| · |e((u, v) + p_i)|)
Obviously, the remaining terms of this sum contribute at most 1 - m/n, so the evaluation at the current position needs to continue only while the following condition is met:

Sm > Smin - 1 + m/n
another criterion for a fast search is that the local score at any point should be greater than the minimum score. Namely:
Sm>Smin*m/n
when this condition is used, the matching efficiency will be very fast.
However, if the inspected target object has part of its edge missing, the partial sum will be low and the object would be treated as a matching failure. This situation is improved by combining the two criteria: the first part of the template model is checked with the relatively safe stopping criterion, and the remainder with the stricter criterion Smin · m/n. The user may specify a greediness parameter g that sets the fraction of the template model checked with the strict criterion: if g = 1, all points of the template model are checked with the strict criterion; if g = 0, all points are checked with only the safe criterion. This procedure is formulated as follows.
The evaluation of the partial score thus converges on the stopping criterion of continuing only while:

Sm > Smin - 1 + m/n, for m ≤ (1 - g) · n;
Sm > Smin · m/n, for m > (1 - g) · n.
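The following sketch evaluates the score at one search position with this piecewise early-termination rule. It assumes points and grads come from the template sketch above and that sgx and sgy are the unit-normalized x/y gradient fields of the search image; all names and the default Smin and g values are illustrative.

    def score_at(points, grads, sgx, sgy, row, col, s_min=0.8, g=0.9):
        """Partial-sum similarity S_m at one position, with early stopping."""
        n = len(points)
        s = 0.0
        for m, ((dx, dy), (tx, ty)) in enumerate(zip(points, grads), start=1):
            r, c = int(round(row + dy)), int(round(col + dx))
            if 0 <= r < sgx.shape[0] and 0 <= c < sgx.shape[1]:
                s += (tx * sgx[r, c] + ty * sgy[r, c]) / n  # one score term
            # safe bound for the first (1 - g) fraction, strict bound after
            bound = s_min - 1 + m / n if m <= (1 - g) * n else s_min * m / n
            if s < bound:
                return 0.0  # this position can no longer reach s_min
        return s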
this edge-based object matching approach has several advantages:
(1) The matching efficiency is higher than that of gray-scale-based template matching algorithms (about 2 to 3 times).
(2) It is invariant to non-linear illumination changes, because all gradient vectors are normalized; and since no segmentation is performed on the edge-filtering result, it remains truly invariant under arbitrary illumination changes.
(3) This similarity measure is robust when the object is partially visible or mixed with other objects.
2. Outputting the point set of the search matching results:

ML = {(x, y) | x = arg max_x S(x, y), y = arg max_y S(x, y)}
Thirdly, performing binocular stereo matching and calculating the height information of the material grabbing point.
1. Binocular stereo matching: the adaptive-weight algorithm (AW algorithm).
The material center point found by template matching serves as the search point in the left view, and an adaptive-weight binocular stereo matching algorithm searches for the corresponding matching point in the right view. The depth information of the point is obtained by computing the disparity of this single point against the right view alone, which greatly reduces the amount of search and matching computation and improves the efficiency of the algorithm.
In the epipolar-rectified left view, a support window is established centered on the matched target center pixel p found by the edge-based template matching of the previous step; for any point q inside the window, w(p, q) denotes the cross-correlation weight between the two points p and q.
w(p, q) is calculated as:

w(p, q) = exp(-(Δc_pq / γc + Δg_pq / γp))

where Δc_pq denotes the Euclidean distance between the two points p and q in color space, Δg_pq denotes their Euclidean distance in geometric (image) space, and γc and γp are the coefficients controlling the color-distance and space-distance weights. Δc_pq and Δg_pq are calculated as follows:

Δc_pq = ‖c_p - c_q‖ (the norm of the color difference between p and q)

Δg_pq = √((x_p - x_q)² + (y_p - y_q)²)
and establishing a related window with the same size at the same coordinate position in the right view, and moving the window to the left by a single pixel step within the parallax search range, wherein the moving distance is set as d. Then, the similarity degree of two windows in the left image and the right image is calculated simultaneously, and the similarity calculation method of the windows in the left view and the right view comprises the following steps:
Cost(pL, pL-d) = Σ_{qL ∈ WL, qL-d ∈ WL-d} w(pL, qL) · w(pL-d, qL-d) · e(qL, qL-d) / Σ_{qL ∈ WL, qL-d ∈ WL-d} w(pL, qL) · w(pL-d, qL-d)
where pL ∈ ML; pL denotes the target center pixel obtained from the left view by template matching in the previous steps, qL denotes the remaining pixels in the window centered on pL, pL-d denotes the center pixel of the right-view window, qL-d denotes the remaining pixels of the right-view window, WL and WL-d denote the support windows centered on the respective matching points, and e(qL, qL-d) denotes the matching cost of the two points.
When the window reaches the position where the similarity of the two support windows in the left and right images is highest, the center pixels of the two windows are considered successfully matched. The disparity value d_LAW is then obtained according to the WTA (Winner-Take-All) criterion:

d_LAW(pL) = arg min_d Cost(pL, pL-d)
During cost aggregation over the matching window, the adaptive-weight matching algorithm jointly considers the Euclidean distance in color space and the Euclidean distance in pixel (image) space: when either distance parameter is large, the weight of that pixel decays exponentially and its share in the cost-aggregation step becomes small. Compared with traditional local stereo matching algorithms, the adaptive-weight matching algorithm achieves higher matching precision and a lower mismatching rate, handles disparity at edges better, and yields smoother disparity maps. Such stability and accuracy are clearly critical for guiding a SCARA robot to complete the grabbing action.
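A minimal sketch of the adaptive-weight disparity computation for the single matched point follows. It assumes epipolar-rectified left/right color images as NumPy arrays and a target pixel far enough from the image border that every window and shift stays inside the image; the window radius, disparity range, γc and γp defaults, and the absolute-difference cost are illustrative choices, not values fixed by the patent.

    import numpy as np

    def support_weights(img, row, col, rad, gamma_c=7.0, gamma_p=36.0):
        """w(p, q) = exp(-(dc/gamma_c + dg/gamma_p)) over the support window."""
        win = img[row - rad:row + rad + 1, col - rad:col + rad + 1].astype(float)
        dc = np.linalg.norm(win - img[row, col].astype(float), axis=2)  # color dist.
        yy, xx = np.mgrid[-rad:rad + 1, -rad:rad + 1]
        dg = np.sqrt(yy ** 2 + xx ** 2)                                 # spatial dist.
        return np.exp(-(dc / gamma_c + dg / gamma_p))

    def disparity_at(left, right, row, col, rad=16, d_max=64):
        """WTA disparity of the single target pixel (row, col) in the left view."""
        wl = support_weights(left, row, col, rad)
        win_l = left[row - rad:row + rad + 1, col - rad:col + rad + 1].astype(float)
        costs = []
        for d in range(d_max):                      # right window slides left by d
            wr = support_weights(right, row, col - d, rad)
            win_r = right[row - rad:row + rad + 1,
                          col - d - rad:col - d + rad + 1].astype(float)
            e = np.abs(win_l - win_r).sum(axis=2)   # matching cost e(qL, qL-d)
            costs.append((wl * wr * e).sum() / (wl * wr).sum())
        return int(np.argmin(costs))                # winner-take-all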
2. Computing target depth information
The depth information of the target is calculated from the disparity value obtained by stereo matching in the previous step.

Depth Z calculation formula:

Z = f · T / d

where Z is the pixel depth information, f is the camera focal length, T is the baseline distance of the binocular camera, and d is the center-pixel disparity value calculated in the previous step.
Fourthly, converting the coordinates, outputting the three-dimensional space coordinates of the grabbing point, and guiding the robot to complete the grab.
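A short sketch of this final step, assuming a pinhole model with focal length f and principal point (cx, cy) in pixels from the calibration, and baseline T; the X/Y back-projection is standard pinhole geometry rather than a formula quoted from the patent, and the hand-eye transform into the robot frame is left as a separate calibration step.

    import numpy as np

    def grasp_point_3d(u, v, d, f, T, cx, cy):
        """Pixel (u, v) with disparity d -> camera-frame (X, Y, Z)."""
        Z = f * T / d          # depth from Z = f*T/d above; units follow T
        X = (u - cx) * Z / f   # horizontal back-projection
        Y = (v - cy) * Z / f   # vertical back-projection
        return np.array([X, Y, Z])

    # Example: f = 1200 px, T = 60 mm, d = 48 px  ->  Z = 1200*60/48 = 1500 mm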
An embodiment of the invention also provides a device for binocular-vision-guided robot rapid grabbing, which may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that this is merely an example of such a device and does not limit it: the device may include more or fewer components than listed, combine certain components, or use different components; for example, it may further include input and output devices, network access devices, buses, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and so on. A general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the device for binocular-vision-guided robot rapid grabbing, and connects the parts of the whole device through various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the device for binocular-vision-guided robot rapid grabbing by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to use (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If the modules/units integrated in the device for binocular-vision-guided robot rapid grabbing are implemented as software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method of the above embodiments may also be realized by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments above. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code: a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the contents of the computer-readable medium may be extended or restricted as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunications signals.
The embodiment of the invention has the following advantages:
1) The image edge gradient is computed with the Canny algorithm, the two-dimensional position of the matched target is obtained by the edge-contour-based matching algorithm, the adaptive-weight stereo matching algorithm (AW) is then applied at the matched target center point to obtain the disparity value, and a coordinate transformation yields the three-dimensional space coordinates, completing the rapid positioning task.
2) Only one point needs to be searched and matched to obtain the depth information of the target grabbing point, which greatly reduces the binocular matching search range over the image. Traditional algorithms search and match all pixels of the whole image to obtain a full disparity map, and only then extract the depth information of the target grabbing point; the efficiency of the invention is therefore much higher. The effect is even more pronounced at high resolutions, and this is the key to realizing rapid three-dimensional positioning.
3) Being based on visual image matching search, the invention retains the high precision of two-dimensional matching, and its horizontal positioning precision is far higher than that of other traditional three-dimensional methods (such as laser triangulation).
4) By combining the contour matching method with binocular-vision depth calculation, the invention not only keeps the high-precision positioning of two-dimensional images but also optimizes binocular-vision matching efficiency, realizing rapid three-dimensional positioning of the target workpiece. It is particularly suitable for guiding a SCARA robot to complete spatial grabbing tasks.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (4)

1. A binocular vision guide robot rapid grabbing method is characterized by comprising the following steps:
step 1: after rectifying a left camera and a right camera, obtaining the coordinate position of a target point in the left view using a matching algorithm based on edge contours: searching for the contour edge of the target object with a Canny algorithm, storing the contour data, storing the x- and y-direction gradient information of each coordinate point on the edge contour as a template model, rearranging the coordinate points into a point set starting from the barycenter coordinate, and comparing the template model with the search image at all positions using a similarity measure;
step 2: for the coordinate position of the target point, searching the right view for the matching point corresponding to the target point using an adaptive-weight binocular stereo matching algorithm, and calculating a disparity value from the matching point and the target point;
step 3: converting the disparity value into depth information;
step 4: calculating and outputting the spatial coordinates of the target grabbing point from the depth information, and guiding the robot to complete the rapid positioning and grabbing action.
2. The binocular vision guided robot rapid grabbing method according to claim 1, wherein the step3 specifically comprises:
the depth information is calculated by the following formula:
Z = f · T / d
where Z is depth information, f is the camera focal length, T is the baseline distance of the left and right cameras, and d is the disparity value.
3. An apparatus for binocular vision guided robotic fast grabbing, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to claim 1 or 2 are implemented when the computer program is executed by the processor.
4. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to claim 1 or 2.
CN201810076349.1A 2018-01-26 2018-01-26 Binocular vision guide robot rapid grabbing method and device and storage medium Active CN108381549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810076349.1A CN108381549B (en) 2018-01-26 2018-01-26 Binocular vision guide robot rapid grabbing method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810076349.1A CN108381549B (en) 2018-01-26 2018-01-26 Binocular vision guide robot rapid grabbing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN108381549A CN108381549A (en) 2018-08-10
CN108381549B true CN108381549B (en) 2021-12-14

Family

ID=63077475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810076349.1A Active CN108381549B (en) 2018-01-26 2018-01-26 Binocular vision guide robot rapid grabbing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN108381549B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685141B (en) * 2018-12-25 2022-10-04 合肥哈工慧拣智能科技有限公司 Robot article sorting visual detection method based on deep neural network
CN109887019B (en) * 2019-02-19 2022-05-24 北京市商汤科技开发有限公司 Binocular matching method and device, equipment and storage medium
CN111452036B (en) * 2019-03-19 2023-08-04 北京伟景智能科技有限公司 Workpiece grabbing method based on line laser binocular stereoscopic vision
CN111768449B (en) * 2019-03-30 2024-05-14 北京伟景智能科技有限公司 Object grabbing method combining binocular vision with deep learning
CN111145254B (en) * 2019-12-13 2023-08-11 上海新时达机器人有限公司 Door valve blank positioning method based on binocular vision
CN111515149B (en) * 2020-04-26 2020-12-29 广东弓叶科技有限公司 Man-machine cooperation sorting system and robot grabbing position obtaining method thereof
CN111539973B (en) * 2020-04-28 2021-10-01 北京百度网讯科技有限公司 Method and device for detecting pose of vehicle
CN114913223A (en) * 2021-02-09 2022-08-16 北京盈迪曼德科技有限公司 Positive direction identification method and system of visual sweeper
CN113034526B (en) * 2021-03-29 2024-01-16 深圳市优必选科技股份有限公司 Grabbing method, grabbing device and robot
CN114029946A (en) * 2021-10-14 2022-02-11 五邑大学 Method, device and equipment for guiding robot to position and grab based on 3D grating
CN116524010B (en) * 2023-04-25 2024-02-02 北京云中未来科技有限公司 Unmanned crown block positioning method, system and storage medium for bulk material storage

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325120A (en) * 2013-06-30 2013-09-25 西南交通大学 Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN103817699A (en) * 2013-09-25 2014-05-28 浙江树人大学 Quick hand-eye coordination method for industrial robot
CN104626169A (en) * 2014-12-24 2015-05-20 四川长虹电器股份有限公司 Robot part grabbing method based on vision and mechanical comprehensive positioning
CN105894499A (en) * 2016-03-25 2016-08-24 华南理工大学 Binocular-vision-based rapid detection method for three-dimensional information of space object
CN106003036A (en) * 2016-06-16 2016-10-12 哈尔滨工程大学 Object grabbing and placing system based on binocular vision guidance

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3002550B1 (en) * 2014-10-03 2017-08-30 Ricoh Company, Ltd. Information processing system and information processing method for distance measurement
CN104794713B (en) * 2015-04-15 2017-07-11 同济大学 Chamber crop digitalized image method based on ARM and binocular vision
CN107392929B (en) * 2017-07-17 2020-07-10 河海大学常州校区 Intelligent target detection and size measurement method based on human eye vision model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325120A (en) * 2013-06-30 2013-09-25 西南交通大学 Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN103817699A (en) * 2013-09-25 2014-05-28 浙江树人大学 Quick hand-eye coordination method for industrial robot
CN104626169A (en) * 2014-12-24 2015-05-20 四川长虹电器股份有限公司 Robot part grabbing method based on vision and mechanical comprehensive positioning
CN105894499A (en) * 2016-03-25 2016-08-24 华南理工大学 Binocular-vision-based rapid detection method for three-dimensional information of space object
CN106003036A (en) * 2016-06-16 2016-10-12 哈尔滨工程大学 Object grabbing and placing system based on binocular vision guidance

Also Published As

Publication number Publication date
CN108381549A (en) 2018-08-10

Similar Documents

Publication Publication Date Title
CN108381549B (en) Binocular vision guide robot rapid grabbing method and device and storage medium
CN111783820B (en) Image labeling method and device
US11636604B2 (en) Edge detection method and device, electronic equipment, and computer-readable storage medium
CN109479082B (en) Image processing method and apparatus
CN110688947B (en) Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
US10803615B2 (en) Object recognition processing apparatus, object recognition processing method, and program
CN109784250B (en) Positioning method and device of automatic guide trolley
US11145080B2 (en) Method and apparatus for three-dimensional object pose estimation, device and storage medium
US11475593B2 (en) Methods and apparatus for processing image data for machine vision
US11657630B2 (en) Methods and apparatus for testing multiple fields for machine vision
CN108573471B (en) Image processing apparatus, image processing method, and recording medium
JPH0528273A (en) Method and device for processing picture
CN112734652B (en) Near-infrared blood vessel image projection correction method based on binocular vision
CN112085033A (en) Template matching method and device, electronic equipment and storage medium
US10810761B2 (en) Position and orientation estimation apparatus, position and orientation estimation method, and program
CN114919584A (en) Motor vehicle fixed point target distance measuring method and device and computer readable storage medium
CN114638891A (en) Target detection positioning method and system based on image and point cloud fusion
KR101741501B1 (en) Apparatus and Method for Estimation of Distance between Camera and Object
CN114782529A (en) High-precision positioning method and system for line grabbing point of live working robot and storage medium
CN114494857A (en) Indoor target object identification and distance measurement method based on machine vision
CN114511894A (en) System and method for acquiring pupil center coordinates
Choe et al. Vision-based estimation of bolt-hole location using circular hough transform
JP2011175347A (en) Information processing apparatus and method
CN112991179B (en) Method, apparatus, device and storage medium for outputting information
CN113012281B (en) Determination method and device for human body model, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant