CN108381549A - Binocular vision guided robot rapid grabbing method, device and storage medium - Google Patents

Binocular vision guided robot rapid grabbing method, device and storage medium Download PDF

Info

Publication number
CN108381549A
CN108381549A CN201810076349.1A CN201810076349A CN 108381549 A
Authority
CN
China
Prior art keywords
binocular vision
guided robot
point
vision guided
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810076349.1A
Other languages
Chinese (zh)
Other versions
CN108381549B (en)
Inventor
陈力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong 33 Intelligent Technology Co Ltd
Original Assignee
Guangdong 33 Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong 33 Intelligent Technology Co Ltd
Priority to CN201810076349.1A
Publication of CN108381549A
Application granted
Publication of CN108381549B
Legal status: Active
Anticipated expiration

Links

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/04Viewing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/10Programme-controlled manipulators characterised by positioning means for manipulator elements
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a rapid grabbing method, device and storage medium for a binocular-vision-guided robot. After the left and right cameras are rectified, the coordinate position of the target point is obtained from the left view using a matching algorithm based on edge contours; for that coordinate position, its matching result in the right view is found by an adaptive-weight matching algorithm, and the disparity value is obtained; the disparity value is converted into depth information; the spatial coordinates of the target grab point are computed from the depth information and output, guiding the robot to complete a rapid positioning and grabbing action. The embodiment of the invention also discloses a device and a storage medium for rapid grabbing by a binocular-vision-guided robot. With the present invention, interference from ambient light can be reduced, the method runs 3-6 times faster than traditional binocular stereo positioning algorithms, and precision reaches 0.1 mm in the horizontal direction and within 1 mm in the vertical direction, which is especially suitable for the grabbing precision requirements of SCARA robots.

Description

Binocular vision guided robot rapid grabbing method, device and storage medium
Technical field
The present invention relates to machine vision processing methods, and more particularly to a rapid grabbing method, device and storage medium for a binocular-vision-guided robot.
Background technology
Binocular stereo vision mimics the processing of the human eyes and can obtain depth information of an object under test, and hence the spatial position of the measured object. It also offers the advantages of non-destructive and real-time measurement, and is widely used across many industries. While guaranteeing non-contact, real-time operation, binocular stereo vision can accurately obtain the spatial position of a target point. It can be applied to three-dimensional high-precision servo control, high-precision spatial positioning, high-precision three-dimensional measurement, and so on. The positioning accuracy of a part on the worktable directly affects its machining accuracy; for precision parts, positioning accuracy may directly determine whether the machined part is usable at all, and the machining and positioning of ultra-precision parts often requires a non-contact approach. Precision-part positioning methods based on binocular stereo vision therefore came into being.
Traditional binocular-vision-guided robot grabbing suffers from slow matching and positioning: positioning generally takes 1-2 s or more, and the whole grabbing cycle usually exceeds 4 s, making it difficult to improve grabbing efficiency.
Summary of the invention
The technical problem to be solved by the embodiments of the present invention is to provide a rapid grabbing method, device and storage medium for a binocular-vision-guided robot that can quickly identify the depth information of the grab target and quickly grab the workpiece.
To solve the above technical problem, an embodiment of the present invention provides a rapid grabbing method for a binocular-vision-guided robot, comprising the following steps:
Step 1: After the left and right cameras are rectified, obtain the coordinate position of the target point from the left view using a matching algorithm based on edge contours;
Step 2: For that coordinate position, find its matching result in the right view by an adaptive-weight matching algorithm, and obtain the disparity value;
Step 3: Convert the disparity value into depth information;
Step 4: Compute and output the spatial coordinates of the target grab point from the depth information, guiding the robot to complete the rapid positioning and grabbing action.
Further, step 1 also comprises the following steps:
Searching for the target object's contour edges using the Canny algorithm;
Saving the contour data, storing the x and y coordinates and gradient information of each point on the edge contour as a template model, and rearranging the coordinate points into a point set with the centroid as the origin.
Further, step 1 also comprises the following step:
Comparing the template model with the search image at all positions using a similarity measure.
Further, step 2 also comprises the following step:
Searching the right view for the match point corresponding to the target point using the adaptive-weight binocular stereo matching algorithm, and computing the disparity value from the match point and the target point.
Further, step 3 specifically comprises:
Computing the depth information by the following formula:
Z = f · T / d
where Z is the depth information, f is the camera focal length, T is the baseline distance between the left and right cameras, and d is the disparity value.
Correspondingly, an embodiment of the present invention also provides a device for rapid grabbing by a binocular-vision-guided robot, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor implementing the steps of the above method when executing the computer program.
Correspondingly, an embodiment of the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
Implementing the embodiments of the present invention has the following beneficial effects: the present invention uses a matching algorithm based on edge contours, which is little affected by ambient light interference; stereo matching runs 3-6 times faster than traditional binocular stereo positioning algorithms; precision reaches 0.1 mm in the horizontal direction and within 1 mm in the vertical direction, which is especially suitable for the grabbing precision requirements of SCARA robots.
Description of the drawings
Fig. 1 is a schematic diagram of the overall flow of the method of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
Refer to the flow chart shown in Fig. 1.
A rapid grabbing method for a binocular-vision-guided robot according to an embodiment of the present invention is carried out by the following steps.
Before starting, the left and right cameras of the binocular camera are each pre-processed: the camera calibration intrinsics are read, and epipolar rectification is applied to the left and right images.
A target region of interest (ROI) is created in the rectified left camera's view.
First step: creation of the edge template.
1. Search for the target object's contour edges using the Canny algorithm.
Step 1: Smooth the image with a Gaussian filter:
G(x, y) = f(x, y) * H(x, y)
where f(x, y) is the input image, H(x, y) is the Gaussian kernel, and * denotes convolution.
Step 2: Compute the image gradient magnitude and direction with first-order finite differences:
magnitude: M(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2)
direction: θ(x, y) = arctan(Gy(x, y) / Gx(x, y))
Step 3: Apply non-maximum suppression.
For each point, the center pixel M of the neighborhood is compared with the two pixels along the gradient line.
If the gradient magnitude of M is not larger than those of both adjacent pixels along the gradient line, set M = 0.
Let the gradient map obtained in the previous step be G(x, y), and initialize N(x, y) = G(x, y). Examine n pixels along the gradient direction and along the opposite direction, respectively. If G(x, y) is not the maximum among these points, set N(x, y) to 0; otherwise keep N(x, y) unchanged.
Step 4: Double-threshold (hysteresis) processing.
A typical method to reduce the number of false edge segments is to apply a threshold to N[i, j]: all values below the threshold are set to zero.
The double-threshold algorithm applies two thresholds T1 and T2, with T2 ≈ 2·T1, to the non-maximum-suppressed image, yielding two thresholded edge images N1[i, j] and N2[i, j]. Since N2[i, j] is obtained with the high threshold, it contains few false edges but has gaps (is not closed). The double-threshold method links the edges in N2[i, j] into contours; whenever the end of a contour is reached, the algorithm searches the 8-neighborhood positions in N1[i, j] for an edge that can be connected to the contour. In this way the algorithm keeps collecting edges from N1[i, j] until the gaps in N2[i, j] are bridged.
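As a minimal illustrative sketch of Steps 2 and 4 above (not the patent's implementation), finite-difference gradients and the two thresholds T1 and T2 ≈ 2·T1 can be computed as follows; the contour-linking step is omitted for brevity:

```python
import numpy as np

def gradient_magnitude(img):
    """First-order finite-difference gradients and their magnitude (Step 2)."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # d/dx by central difference
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0  # d/dy by central difference
    return gx, gy, np.hypot(gx, gy)

def double_threshold(mag, t1):
    """Step 4: thresholds T1 and T2 ~= 2*T1 give a low- and a high-threshold edge map."""
    t2 = 2.0 * t1
    n1 = mag >= t1  # many edges, including some false ones
    n2 = mag >= t2  # few false edges, but with gaps
    return n1, n2
```

A vertical intensity step, for example, produces its largest gradient magnitude at the columns straddling the step, and only the low threshold keeps those edges if T2 is set above the step's gradient.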
2. Save the contour data.
The x and y coordinates and gradient information of each point on the edge contour are saved as the template model. These coordinates are rearranged into a point set with the centroid as the origin.
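A possible sketch of this template representation (function and variable names are hypothetical, not the patent's code): each contour point is stored as an offset from the centroid, together with its normalized gradient vector.

```python
import numpy as np

def build_edge_template(points, grads):
    """points: (n, 2) contour coordinates; grads: (n, 2) gradient vectors at those points.
    Returns offsets from the centroid and unit gradient directions."""
    pts = np.asarray(points, dtype=float)
    g = np.asarray(grads, dtype=float)
    centroid = pts.mean(axis=0)
    offsets = pts - centroid  # point set re-expressed with the centroid as origin
    dirs = g / np.linalg.norm(g, axis=1, keepdims=True)  # normalization gives illumination invariance
    return offsets, dirs
```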
Second step: search for the template in a single view and obtain the target object's position.
1. Model similarity measure and accelerated-search strategy.
Let the contour template point set be p_i = (x_i, y_i), i = 1, …, n, with gradient in the x and y directions d_i = (t_i, u_i), where i indexes the pixel data of the template contour.
The gradient of the image to be searched is e_{u,v} = (v_{u,v}, w_{u,v}), where u is the row coordinate and v is the column coordinate of the search image.
During matching, the template model is compared with the search image at all positions using the similarity measure. The idea behind the similarity measure is to sum, over all points of the model data set, the normalized dot products of the template gradient vectors and the search-image gradient vectors. This produces a score at every point of the search image:
S(u, v) = (1/n) · Σ_i ⟨d_i, e_{u+x_i, v+y_i}⟩ / (‖d_i‖ · ‖e_{u+x_i, v+y_i}‖)
If there is a perfect match between the template model and the search image, this function returns a score of 1. The score corresponds to the portion of the object visible in the search image; if the object is not in the search image, the score is 0.
Preferably, to accelerate the search, a minimum score Smin is set for the similarity measure, so that not all points of the template model need to be evaluated.
To decide whether the score S(u, v) at a given position can still reach Smin, define the partial score Sm as the partial sum of the first m normalized dot products divided by n. Obviously this partial sum is at most 1, so the evaluation at a position can be abandoned as soon as the following condition is violated:
Sm > Smin − 1 + m/n
Another criterion for fast search requires the partial score at every point to exceed the proportional minimum score, i.e.:
Sm > Smin · m/n
With this condition, matching becomes much faster.
If part of the target object's edge is missing, the partial sums will be very low and the object would be judged as unmatched. This is mitigated by a mixed strategy: a relatively safe stopping criterion is used to check the first part of the template model, and the stricter criterion Smin·m/n is used for the remainder. The user can specify a greediness parameter g that sets the fraction of template points checked with the strict criterion: with g = 1, all points of the template model are checked with the strict criterion; with g = 0, all points are checked only with the safe criterion. The partial-score stopping criterion is formulated accordingly.
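As an illustrative sketch (not the patent's implementation), the similarity score with the safe early-termination test might look like this; the array layouts and the default minimum score are assumptions:

```python
import numpy as np

def score_at(offsets, t_dirs, image_dirs, u, v, s_min=0.7):
    """Partial-sum similarity score at search position (u, v).
    offsets: n integer (dx, dy) template offsets; t_dirs: n unit template gradients;
    image_dirs: (H, W, 2) image gradient vectors. Returns 0.0 if rejected early."""
    n = len(offsets)
    s = 0.0
    for m, ((dx, dy), d) in enumerate(zip(offsets, t_dirs), start=1):
        e = image_dirs[v + int(dy), u + int(dx)]
        norm_e = np.linalg.norm(e)
        if norm_e > 0.0:
            s += float(np.dot(d, e)) / norm_e
        # safe criterion: even if every remaining point contributed 1,
        # the final score could no longer reach s_min -> abandon this position
        if s / n < s_min - 1.0 + m / n:
            return 0.0
    return s / n
```

With gradients that agree everywhere the score is 1; with orthogonal gradients the position is rejected after the first point.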
This edge-based target matching method has several advantages:
(1) Matching efficiency is higher than gray-value-based template matching (about 2-3 times).
(2) It is invariant to non-linear illumination changes, because all gradient vectors are normalized; the edge filtering therefore shows true invariance to arbitrary illumination changes.
(3) The similarity measure is robust when the object is only partially visible or mixed with other objects.
2. Output the point set of positions where matching succeeds:
ML = {(x, y) | x = arg max_x Sm(x, y), y = arg max_y Sm(x, y)}
Third step: binocular stereo matching to compute the height information of the material grab point.
1. Binocular stereo matching algorithm: the adaptive weight algorithm (AW algorithm).
The center point of the material found by template matching is used as the search point in the left view, and the adaptive-weight binocular stereo matching algorithm searches for the corresponding match point in the right view. Only the disparity of this single point relative to the right view needs to be computed to obtain its depth information. This greatly reduces the search and matching workload and improves the efficiency of the algorithm.
In the epipolar-rectified left view, a correlation window is created centered on the target center pixel p matched by the edge-based template matching algorithm of the previous step. Let q be a point in this window, and w(p, q) the cross-correlation weight of the pair of points p and q.
w(p, q) is computed as:
w(p, q) = exp(−(Δc_pq / γ_c + Δg_pq / γ_p))
where Δc_pq is the Euclidean distance between p and q in color space, Δg_pq is the Euclidean distance between p and q in geometric (image) space, and γ_c and γ_p are coefficients controlling the relative influence of the color distance and the spatial distance. Δc_pq and Δg_pq are computed as the Euclidean norms of the color difference and of the coordinate difference between the two points, respectively.
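A minimal sketch of this weight computation, assuming the adaptive support-weight form above; the default γ values are illustrative only:

```python
import numpy as np

def support_weight(color_p, color_q, pos_p, pos_q, gamma_c=7.0, gamma_p=36.0):
    """Adaptive support weight of pixel q relative to window center p:
    w(p,q) = exp(-(dc/gamma_c + dg/gamma_p))."""
    dc = np.linalg.norm(np.asarray(color_p, float) - np.asarray(color_q, float))  # color distance
    dg = np.linalg.norm(np.asarray(pos_p, float) - np.asarray(pos_q, float))      # spatial distance
    return float(np.exp(-(dc / gamma_c + dg / gamma_p)))
```

The weight is 1 when q coincides with p in both color and position, and decays exponentially as either distance grows.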
A correlation window of the same size is created at the same coordinate position in the right view and shifted left pixel by pixel within the disparity search range; let the shift distance be d. The similarity of the two windows in the left and right images is then computed as the aggregated matching cost:
Cost(p_L, p_{L−d}) = Σ_{q_L∈W_L, q_{L−d}∈W_{L−d}} w(p_L, q_L) · w(p_{L−d}, q_{L−d}) · e(q_L, q_{L−d}) / Σ_{q_L∈W_L, q_{L−d}∈W_{L−d}} w(p_L, q_L) · w(p_{L−d}, q_{L−d}), with p_L ∈ ML,
where p_L is the target object's center pixel obtained from the left view by template matching in the steps above, q_L is any other pixel in the left-view window centered on p_L, p_{L−d} is the center pixel of the right-view window, q_{L−d} is the corresponding remaining pixel of the right-view window, W_L and W_{L−d} are the support windows centered on the match points, and e(q_L, q_{L−d}) is the matching cost of the pixel pair q_L, q_{L−d}.
When the window reaches the position where the similarity of the corresponding left and right correlation windows is highest, the center pixels of the two windows are considered successfully matched. The disparity result d_AW is then selected by the Winner-Take-All (WTA) criterion:
d_AW(p_L) = arg min_d Cost(p_L, p_{L−d})
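The weighted aggregation and the WTA selection can be sketched as follows; the data shapes and function names are illustrative assumptions, not the patent's code:

```python
import numpy as np

def aggregated_cost(w_left, w_right, raw_cost):
    """Normalized weighted sum of raw per-pixel costs e over the support window."""
    w = w_left * w_right
    return float(np.sum(w * raw_cost) / np.sum(w))

def wta_disparity(costs_per_d):
    """Winner-Take-All: the disparity candidate with the lowest aggregated cost wins."""
    return int(np.argmin(costs_per_d))
```

For a uniformly weighted window the aggregated cost reduces to the mean raw cost, and `wta_disparity` simply returns the index of the best disparity candidate.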
During cost aggregation over the matching window, the adaptive weight matching algorithm takes both the Euclidean distance in color space and the Euclidean distance in image space into account; when either distance parameter is large, the weight of that pixel decays exponentially, so its share in the cost aggregation becomes very small. Compared with traditional local stereo matching algorithms, the adaptive-weight matching algorithm achieves higher matching accuracy and a lower mismatch rate, handles disparity at edges better, and produces smoother disparity maps. The stability and accuracy of the system are clearly key to guiding a SCARA robot through the grabbing action.
2. Compute the target depth information.
From the disparity obtained by stereo matching in the previous step, compute the depth information of the target.
Depth Z is calculated as:
Z = f · T / d
where Z is the pixel depth information, f is the camera focal length, T is the baseline distance of the binocular camera, and d is the center-pixel disparity computed in the previous steps.
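The triangulation formula can be checked with a tiny helper; the units below (focal length and disparity in pixels, baseline in millimetres) are illustrative assumptions:

```python
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    """Z = f*T/d: triangulation depth for a rectified binocular pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return f_px * baseline_mm / disparity_px
```

For example, an 800 px focal length, a 60 mm baseline and a 40 px disparity place the point 1200 mm from the cameras; halving the disparity doubles the depth.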
Fourth step: coordinate transformation. Output the three-dimensional coordinates of the grab point and guide the robot to complete the grab.
An embodiment of the present invention also provides a device for rapid grabbing by a binocular-vision-guided robot, which may be a computing device such as a desktop computer, notebook, palmtop computer or cloud server. The device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the above is merely an example of such a device and does not limit it; the device may include more or fewer components than illustrated, combine certain components, or use different components; for example, it may also include input/output devices, network access devices, a bus, and so on.
The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or any conventional processor. The processor is the control center of the device for rapid grabbing by a binocular-vision-guided robot, connecting its various parts through various interfaces and lines.
The memory may be used to store the computer program and/or modules; the processor implements the various functions of the device by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to use of the device (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, memory, plug-in hard disk, smart media card (SMC), secure digital (SD) card, flash card, at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
If the integrated modules/units of the device for rapid grabbing by a binocular-vision-guided robot are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the above method embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of each of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.
The embodiments of the invention have the following advantages:
1) The Canny algorithm is first used to compute image edge gradients; the two-dimensional position of the matched target is obtained with the edge-contour-based matching algorithm; on this basis, the adaptive weight stereo matching algorithm (AW) is applied to the matched target's center point to obtain the disparity value, which is converted into three-dimensional spatial coordinates through the coordinate transformation relation, completing the rapid positioning task.
2) An advantage of the present invention is that the depth information of the target grab point is obtained by search-matching only a single point, which greatly reduces the binocular ranging search range. Traditional algorithms must first search and match every pixel of the whole image to obtain the full disparity map, and then analyze and extract the depth of the target grab point. The efficiency of the present invention is therefore significantly higher than that of traditional algorithms, especially at high resolutions, and is the key to achieving fast stereo positioning.
3) The present invention has the high-precision characteristics of vision-based image matching search in two-dimensional space; its horizontal positioning accuracy is far higher than other conventional three-dimensional matching approaches (such as laser triangulation).
4) By combining the edge-contour-based method with binocular depth computation, the present invention not only retains the high-accuracy positioning advantage of two-dimensional images but also optimizes binocular matching efficiency, achieving fast stereo positioning of the target workpiece. It is particularly suitable for guiding a SCARA robot through spatial grabbing tasks.
The above discloses only preferred embodiments of the present invention, which of course cannot limit the scope of the claims; equivalent changes made in accordance with the claims of the present invention still fall within the scope of the present invention.

Claims (7)

1. A rapid grabbing method for a binocular-vision-guided robot, characterized by comprising the following steps:
Step 1: after the left and right cameras are rectified, obtaining the coordinate position of the target point from the left view using a matching algorithm based on edge contours;
Step 2: for that coordinate position, finding its matching result in the right view by an adaptive-weight matching algorithm, and obtaining the disparity value;
Step 3: converting the disparity value into depth information;
Step 4: computing and outputting the spatial coordinates of the target grab point from the depth information, guiding the robot to complete the rapid positioning and grabbing action.
2. The rapid grabbing method for a binocular-vision-guided robot according to claim 1, characterized in that step 1 further comprises the following steps:
searching for the target object's contour edges using the Canny algorithm;
saving the contour data, storing the x and y coordinates and gradient information of each point on the edge contour as a template model, and rearranging the coordinate points into a point set with the centroid as the origin.
3. The rapid grabbing method for a binocular-vision-guided robot according to claim 2, characterized in that step 1 further comprises the following step:
comparing the template model with the search image at all positions using a similarity measure.
4. The rapid grabbing method for a binocular-vision-guided robot according to any one of claims 1 to 3, characterized in that step 2 further comprises the following step:
searching the right view for the match point corresponding to the target point using the adaptive-weight binocular stereo matching algorithm, and computing the disparity value from the match point and the target point.
5. The rapid grabbing method for a binocular-vision-guided robot according to claim 4, characterized in that step 3 specifically comprises:
computing the depth information by the following formula:
Z = f · T / d
where Z is the depth information, f is the camera focal length, T is the baseline distance between the left and right cameras, and d is the disparity value.
6. A device for rapid grabbing by a binocular-vision-guided robot, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor implements the steps of the method of claim 1 or 5 when executing the computer program.
7. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of claim 1 or 5.
CN201810076349.1A 2018-01-26 2018-01-26 Binocular vision guide robot rapid grabbing method and device and storage medium Active CN108381549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810076349.1A CN108381549B (en) 2018-01-26 2018-01-26 Binocular vision guide robot rapid grabbing method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810076349.1A CN108381549B (en) 2018-01-26 2018-01-26 Binocular vision guide robot rapid grabbing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN108381549A true CN108381549A (en) 2018-08-10
CN108381549B CN108381549B (en) 2021-12-14

Family

ID=63077475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810076349.1A Active CN108381549B (en) 2018-01-26 2018-01-26 Binocular vision guide robot rapid grabbing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN108381549B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685141A (en) * 2018-12-25 2019-04-26 哈工大机器人(合肥)国际创新研究院 A kind of robotic article sorting visible detection method based on deep neural network
CN109887019A (en) * 2019-02-19 2019-06-14 北京市商汤科技开发有限公司 A kind of binocular ranging method and device, equipment and storage medium
CN111145254A (en) * 2019-12-13 2020-05-12 上海新时达机器人有限公司 Door valve blank positioning method based on binocular vision
CN111452036A (en) * 2019-03-19 2020-07-28 北京伟景智能科技有限公司 Workpiece grabbing method based on line laser binocular stereo vision
CN111539973A (en) * 2020-04-28 2020-08-14 北京百度网讯科技有限公司 Method and device for detecting pose of vehicle
CN111768449A (en) * 2019-03-30 2020-10-13 北京伟景智能科技有限公司 Object grabbing method combining binocular vision with deep learning
CN113034526A (en) * 2021-03-29 2021-06-25 深圳市优必选科技股份有限公司 Grabbing method, grabbing device and robot
WO2021217922A1 (en) * 2020-04-26 2021-11-04 广东弓叶科技有限公司 Human-robot collaboration sorting system and robot grabbing position obtaining method therefor
CN114029946A (en) * 2021-10-14 2022-02-11 五邑大学 Method, device and equipment for guiding robot to position and grab based on 3D grating
CN114913223A (en) * 2021-02-09 2022-08-16 北京盈迪曼德科技有限公司 Positive direction identification method and system of visual sweeper
CN116524010A (en) * 2023-04-25 2023-08-01 北京云中未来科技有限公司 Unmanned crown block positioning method, system and storage medium for bulk material storage

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325120A (en) * 2013-06-30 2013-09-25 西南交通大学 Rapid adaptive support-weight binocular vision stereo matching method
CN103817699A (en) * 2013-09-25 2014-05-28 浙江树人大学 Quick hand-eye coordination method for industrial robot
CN104626169A (en) * 2014-12-24 2015-05-20 四川长虹电器股份有限公司 Robot part grabbing method based on vision and mechanical comprehensive positioning
CN104794713A (en) * 2015-04-15 2015-07-22 同济大学 Greenhouse crop digital-imaging method based on ARM and binocular vision
US20160098841A1 (en) * 2014-10-03 2016-04-07 Hiroyoshi Sekiguchi Information processing system and information processing method
CN105894499A (en) * 2016-03-25 2016-08-24 华南理工大学 Binocular-vision-based rapid detection method for three-dimensional information of space object
CN106003036A (en) * 2016-06-16 2016-10-12 哈尔滨工程大学 Object grabbing and placing system based on binocular vision guidance
CN107392929A (en) * 2017-07-17 2017-11-24 河海大学常州校区 Intelligent target detection and dimension measurement method based on a human vision model

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685141B (en) * 2018-12-25 2022-10-04 合肥哈工慧拣智能科技有限公司 Robot article sorting visual detection method based on deep neural network
CN109685141A (en) * 2018-12-25 2019-04-26 哈工大机器人(合肥)国际创新研究院 Robot article sorting visual detection method based on deep neural network
CN109887019A (en) * 2019-02-19 2019-06-14 北京市商汤科技开发有限公司 Binocular ranging method and device, equipment and storage medium
CN111452036A (en) * 2019-03-19 2020-07-28 北京伟景智能科技有限公司 Workpiece grabbing method based on line laser binocular stereo vision
CN111452036B (en) * 2019-03-19 2023-08-04 北京伟景智能科技有限公司 Workpiece grabbing method based on line laser binocular stereoscopic vision
CN111768449A (en) * 2019-03-30 2020-10-13 北京伟景智能科技有限公司 Object grabbing method combining binocular vision with deep learning
CN111768449B (en) * 2019-03-30 2024-05-14 北京伟景智能科技有限公司 Object grabbing method combining binocular vision with deep learning
CN111145254A (en) * 2019-12-13 2020-05-12 上海新时达机器人有限公司 Door valve blank positioning method based on binocular vision
CN111145254B (en) * 2019-12-13 2023-08-11 上海新时达机器人有限公司 Door valve blank positioning method based on binocular vision
WO2021217922A1 (en) * 2020-04-26 2021-11-04 广东弓叶科技有限公司 Human-robot collaboration sorting system and robot grabbing position obtaining method therefor
CN111539973A (en) * 2020-04-28 2020-08-14 北京百度网讯科技有限公司 Method and device for detecting pose of vehicle
CN114913223A (en) * 2021-02-09 2022-08-16 北京盈迪曼德科技有限公司 Forward-direction identification method and system for a visual sweeping robot
CN113034526B (en) * 2021-03-29 2024-01-16 深圳市优必选科技股份有限公司 Grabbing method, grabbing device and robot
CN113034526A (en) * 2021-03-29 2021-06-25 深圳市优必选科技股份有限公司 Grabbing method, grabbing device and robot
CN114029946A (en) * 2021-10-14 2022-02-11 五邑大学 Method, device and equipment for guiding robot to position and grab based on 3D grating
CN116524010A (en) * 2023-04-25 2023-08-01 北京云中未来科技有限公司 Unmanned crown block positioning method, system and storage medium for bulk material storage
CN116524010B (en) * 2023-04-25 2024-02-02 北京云中未来科技有限公司 Unmanned crown block positioning method, system and storage medium for bulk material storage

Also Published As

Publication number Publication date
CN108381549B (en) 2021-12-14

Similar Documents

Publication Publication Date Title
CN108381549A (en) Binocular vision guided robot rapid grabbing method, device and storage medium
EP3449466B1 (en) Pallet detection using units of physical length
CN109993793B (en) Visual positioning method and device
US8340400B2 (en) Systems and methods for extracting planar features, matching the planar features, and estimating motion from the planar features
US10466797B2 (en) Pointing interaction method, apparatus, and system
US20170337701A1 (en) Method and system for 3d capture based on structure from motion with simplified pose detection
JP6649796B2 (en) Object state specifying method, object state specifying apparatus, and carrier
Yuan et al. 3D point cloud matching based on principal component analysis and iterative closest point algorithm
KR20160003776A (en) Posture estimation method and robot
CN112070782B (en) Method, device, computer readable medium and electronic equipment for identifying scene contour
CN111652047B (en) Human body gesture recognition method based on color image and depth image and storage medium
CN104766309A (en) Plane feature point navigation and positioning method and device
Fan et al. A simple calibration method of structured light plane parameters for welding robots
CN114081536B (en) Nasopharyngeal swab sampling method, nasopharyngeal swab sampling device, electronic equipment and storage medium
WO2019228471A1 (en) Fingerprint recognition method and device, and computer-readable storage medium
CN111354029B (en) Gesture depth determination method, device, equipment and storage medium
Songhui et al. Objects detection and location based on mask RCNN and stereo vision
CN114004899A (en) Pallet pose identification method, storage medium and equipment
JPH07103715A (en) Method and apparatus for recognizing three-dimensional position and attitude based on visual sense
An et al. High speed robust image registration and localization using optimized algorithm and its performances evaluation
Nakano Stereo vision based single-shot 6d object pose estimation for bin-picking by a robot manipulator
Peng et al. Real time and robust 6D pose estimation of RGBD data for robotic bin picking
CN114494857A (en) Indoor target object identification and distance measurement method based on machine vision
CN112347860B (en) Gradient-based eye state detection method and computer-readable storage medium
Srivastava Method of determining dynamic distance of an object from camera devices using ML and its applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant