CN110253575A - Robot grabbing method, terminal and computer readable storage medium - Google Patents

Robot grabbing method, terminal and computer readable storage medium

Info

Publication number
CN110253575A
CN110253575A (application CN201910522372.3A)
Authority
CN
China
Prior art keywords
grasp
image
grasp position
similarity
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910522372.3A
Other languages
Chinese (zh)
Other versions
CN110253575B (en)
Inventor
杜国光 (Du Guoguang)
王恺 (Wang Kai)
廉士国 (Lian Shiguo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201910522372.3A
Publication of CN110253575A
Application granted
Publication of CN110253575B
Legal status: Active
Anticipated expiration


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J15/00Gripping heads and other end effectors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1692Calibration of manipulator
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention relate to the field of computer vision and disclose a robot grasping method, a terminal, and a computer-readable storage medium. The robot grasping method comprises the following steps: acquiring first grasp-position information of an object to be grasped in a first image and second grasp-position information of the same object in a second image, where the second image is acquired within a preset radius centered on the acquisition position of the first image; judging, according to the first and second grasp-position information, whether the first grasp position and the second grasp position are at the same location; and, if they are, performing the grasping operation. The embodiments ensure the stability of the predicted grasp position and improve the probability of a successful grasp.

Description

Robot grasping method, terminal, and computer-readable storage medium
Technical field
Embodiments of the present invention relate to the field of computer vision, and in particular to a robot grasping method, a terminal, and a computer-readable storage medium.
Background technique
With the continuous progress of science and technology, intelligent robots have appeared. An intelligent robot usually needs to interact with its environment, for example by grasping objects in the surrounding environment.
Common robot grasping approaches mainly include geometric-analysis methods and data-driven methods. A geometric-analysis method analyzes the geometry of the object in the image, randomly samples candidate grasp positions, and checks whether each candidate satisfies the force-closure condition, thereby determining a reasonable grasp position. A data-driven method relies on data with known grasp positions and infers the grasp position of the current object, directly or indirectly, through a machine-learning algorithm. For an object already present in a model library, three-dimensional registration can be used to obtain the pose of the target object and transfer an existing grasp position to the current object. For an object similar to one in the model library, correspondences between the library object and the current object can be found, and the grasp position of the current object obtained by three-dimensional correspondence. For an unknown object, a convolutional neural network based on deep learning can directly or indirectly estimate the grasp position that maximizes the probability of a successful grasp.
The inventors have found at least the following problem in the related art: at present, a robot determines the grasp position from the data of a single acquired image and grasps directly at that position. Owing to objective causes, the acquired image may be inaccurate, so the robot cannot grasp the object stably and the grasp fails.
Summary of the invention
Embodiments of the present invention aim to provide a robot grasping method, a terminal, and a computer-readable storage medium that ensure the stability of the predicted grasp position and improve the probability of a successful grasp.
To solve the above technical problem, embodiments of the present invention provide a robot grasping method, comprising: obtaining first grasp-position information of an object to be grasped in a first image and second grasp-position information of the same object in a second image, where the second image is acquired within a preset radius centered on the acquisition position of the first image; judging, according to the first and second grasp-position information, whether the first grasp position and the second grasp position are at the same location; and, if they are determined to be at the same location, performing the grasping operation.
Embodiments of the present invention also provide a terminal, comprising: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the robot grasping method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the robot grasping method described above.
Compared with the prior art, embodiments of the present invention obtain a first grasp position from a first image and a second grasp position from a second image, and judge whether the two are at the same location. If they are, the determined first and second grasp positions are accurate, which ensures that performing the grasping operation at the first grasp position grasps the object accurately and improves the grasp success rate. Moreover, because the second image is acquired after the first image, at a position within a preset radius centered on the acquisition position of the first image, the two images are acquired from different viewpoints. Verifying the grasp positions determined under different viewpoints ensures the stability of the determined grasp position and thereby improves the grasp success rate.
In addition, if it is determined that the first grasp position and the second grasp position are not at the same location, the robot grasping method further comprises: updating the first image and the second image, and reacquiring the first and second grasp-position information until the reacquired first grasp position and the reacquired second grasp position are at the same location. By updating the two images, and hence the two grasp positions, and stopping the updates only once the positions coincide, the repeated judgment ensures that the determined grasp position is accurate, improving the grasp success rate.
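The update-and-retry logic described above can be sketched as follows. This is a minimal sketch, not the claimed implementation: the helper names `acquire_image`, `predict_grasp`, and `same_location` are hypothetical placeholders for the image acquisition, model inference, and similarity test described in the text, and the bounded retry count is an added assumption.

```python
def verified_grasp_position(acquire_image, predict_grasp, same_location,
                            max_rounds=10):
    """Re-acquire image pairs until the two predicted grasp positions
    coincide. All three callables are hypothetical placeholders."""
    first = acquire_image()
    second = acquire_image()          # taken within the preset radius
    for _ in range(max_rounds):
        p1, p2 = predict_grasp(first), predict_grasp(second)
        if same_location(p1, p2):
            return p1                 # stable position: grasp here
        first, second = second, acquire_image()   # update both images
    return None                       # fall back to error handling
```

Returning `None` corresponds to the error branch (step 104 below), where the process terminates or prompts for manual intervention.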
In addition, judging from the first and second grasp-position information whether the two grasp positions are at the same location specifically includes: determining, according to the first and second grasp-position information, the similarity between the image corresponding to the first grasp position and the image corresponding to the second grasp position; comparing the similarity with a preset similarity threshold; and, if the similarity is greater than or equal to the threshold, determining that the first and second grasp positions are at the same location, and otherwise that they are at different locations. A high similarity between the two grasp-position images indicates that the two positions are the same, so comparing the image of the first grasp position with that of the second quickly determines whether the two positions coincide.
In addition, determining the similarity between the image corresponding to the first grasp position and the image corresponding to the second grasp position specifically includes: computing, according to the first and second grasp-position information, the two-dimensional similarity between the 2D images of the first and second grasp positions, and the three-dimensional similarity between their 3D images; then fusing the two similarities according to their respective weights and taking the fused value as the similarity between the two grasp-position images. Since the images comprise both 2D and 3D data, using only the 2D similarity between the 2D images, or only the 3D similarity between the 3D images, would reduce the accuracy of the similarity; combining the similarity of the 3D images with that of the 2D images effectively improves the accuracy of the computed similarity.
In addition, computing the two-dimensional similarity between the 2D images of the first and second grasp positions specifically includes: determining the first feature vector of the center point of the 2D image of the first grasp position and the second feature vector of the center point of the 2D image of the second grasp position; determining the first angle between the two feature vectors; and determining the 2D similarity from that angle. If the two grasp positions are at the same location, their center points are identical; the larger the angle between the center-point feature vectors, the more the two grasp positions differ and the higher the probability that they are at different locations. The angle between the center-point feature vectors of the two grasp positions therefore quickly reflects the 2D similarity between their images.
In addition, computing the three-dimensional similarity between the 3D images of the first and second grasp positions specifically includes: determining the third feature vector of the center point of the 3D image of the first grasp position and the fourth feature vector of the center point of the 3D image of the second grasp position; determining the second angle between them; and determining the 3D similarity from that angle. The similarity between the 3D images of the two grasp positions is determined on the same principle as the similarity between their 2D images, which makes it convenient to fuse the two similarities and reduces fusion bias.
In addition, updating the first and second images specifically includes: obtaining the first acquisition position at which the second image was acquired; choosing a second acquisition position within the preset radius centered on the first acquisition position; acquiring a third image at the second acquisition position; and taking the second image as the updated first image and the third image as the updated second image. The updated second image is obtained after moving the acquisition position; reacquiring grasp positions from images under different viewpoints helps find an accurate grasp position quickly.
In addition, obtaining the first grasp-position information of the object to be grasped in the first image and the second grasp-position information of the object in the second image specifically includes: inputting the first image into a preset grasp-position determination model to obtain the first grasp-position information, the model having been trained from training image data and the grasp-position information of the object to be grasped in each training image; and inputting the second image into the same model to obtain the second grasp-position information. The grasp-position determination model makes it possible to determine grasp positions quickly.
Description of the drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These exemplary illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements. Unless otherwise stated, the figures in the drawings are not drawn to scale.
Fig. 1 is a flowchart of a robot grasping method according to the first embodiment of the present invention;
Fig. 2 is a schematic representation of the grasp position of an object to be grasped according to the first embodiment of the present invention;
Fig. 3 is a schematic diagram of the neural-network structure of the grasp-position determination model according to the first embodiment of the present invention;
Fig. 4 is a schematic diagram of a specific implementation of determining the two-dimensional similarity according to the first embodiment of the present invention;
Fig. 5 is a schematic diagram of a specific implementation of determining the three-dimensional similarity according to the first embodiment of the present invention;
Fig. 6 is a schematic diagram of the local frame formed by the source point and the target point in the PFH algorithm according to the first embodiment of the present invention;
Fig. 7 is a flowchart of a robot grasping method according to the second embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a terminal according to the third embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are explained in detail below with reference to the drawings. Those skilled in the art will understand, however, that many technical details are set forth in the embodiments to help the reader better understand the application; the technical solution claimed in this application can still be realized even without these technical details and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a robot grasping method. The method is applied to a robot, which may be a standalone robotic arm or an intelligent robot equipped with a grasping arm. The detailed flow of the robot grasping method is shown in Fig. 1 and comprises:
Step 101: obtain first grasp-position information of the object to be grasped in the first image and second grasp-position information of the object in the second image, where the second image is acquired within a preset radius centered on the acquisition position of the first image.
Specifically, a first image containing the object to be grasped is acquired. After the first image is obtained, the second image is acquired within the preset radius centered on the acquisition position of the first image. To ensure the accuracy of judging whether the first and second grasp positions are at the same location, the preset radius should not be too large; for example, it may be 1 centimeter. The preset-radius range centered on the acquisition position of the first image may be, for example, a cube centered on that position with a radius of 1 centimeter.
The second image may be acquired within the preset radius by randomly moving the acquisition position within that range, acquiring an image at the new position, and taking it as the second image; alternatively, the acquisition position may be kept unchanged, i.e., an image is acquired again at the acquisition position of the first image and taken as the second image. To avoid a large difference between the two acquired images, which would affect the judgment of the grasp positions, in this embodiment the second image is the image acquired after randomly moving the acquisition position within the preset radius.
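As a minimal sketch of the random move within the preset radius, assuming a Cartesian acquisition position and the 1-centimeter cube mentioned above (the helper name and the use of meters are assumptions, not from the patent):

```python
import random

def sample_second_position(first_pos, radius=0.01):
    """Randomly offset each coordinate within a cube of half-width
    `radius` (meters, 1 cm here) centered on the first acquisition
    position, giving the acquisition position of the second image."""
    return tuple(c + random.uniform(-radius, radius) for c in first_pos)
```

For example, `sample_second_position((0.10, 0.20, 0.30))` yields a nearby position whose every coordinate differs from the original by at most 1 cm.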
In one concrete implementation, the first image is input into a preset grasp-position determination model to obtain the first grasp-position information; the model is trained from training image data and the grasp-position information of the object to be grasped in each training image. The second image is input into the same model to obtain the second grasp-position information.
Specifically, the grasp-position determination model can be trained using deep learning, e.g., a convolutional neural network (Convolutional Neural Network, CNN). The training image data may be color (RGB) images and depth images of different objects acquired under different viewpoints, or may be RGB and depth images rendered from multiple angles around stored textured 3D models of different objects by projection. The latter method yields a large amount of training image data, and abundant training data makes the grasp-position determination model more accurate.
It should be noted that the grasp-position information consists of the location of a grasp rectangle and the grasp angle, usually expressed as a five-tuple (x, y, w, h, θ), where (x, y) are the coordinates of the center of the grasp rectangle, w is the length of the grasp rectangle, i.e., the parallel opening width of the grasping assembly (such as a gripper), h is the width of the grasp rectangle, and θ is the angle between the grasp rectangle and the horizontal axis. Fig. 2 shows one representation of the grasp-position information of an object to be grasped: the rectangle in Fig. 2 is the grasp rectangle, and reference numeral 10 denotes the object to be grasped.
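A short illustration of the five-tuple representation, converting (x, y, w, h, θ) to the four corner points of the grasp rectangle (the helper name and the use of radians for θ are assumptions for the sketch, not from the patent):

```python
import math

def grasp_corners(x, y, w, h, theta):
    """Corner points of the (x, y, w, h, theta) grasp rectangle:
    a w-by-h rectangle rotated by theta (radians) about its center."""
    c, s = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        corners.append((x + dx * c - dy * s, y + dx * s + dy * c))
    return corners
```

With θ = 0 the rectangle is axis-aligned, matching the unrotated case in Fig. 2.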
Each training image in the training data is annotated by marking the center and extent (x, y, w, h) of the most suitable grasp rectangle and the angle θ between the grasping assembly and the horizontal axis; the annotated training image data is then used to train the grasp-position determination model. The model may use the neural-network structure shown in Fig. 3, which has 7 network layers. At run time, the input RGB image of the object to be grasped is scaled to a preset size (e.g., 227×227 pixels) and fed into the model, which predicts the grasp-position information of the object in the current image.
Step 102: according to the first and second grasp-position information, judge whether the first grasp position and the second grasp position are at the same location. If they are determined to be at the same location, execute step 103; if they are determined to be at different locations, execute step 104.
In one concrete implementation, the similarity between the image corresponding to the first grasp position and the image corresponding to the second grasp position is determined from the first and second grasp-position information. The similarity is compared with a preset similarity threshold; if the similarity is greater than or equal to the threshold, the first and second grasp positions are determined to be at the same location, and otherwise at different locations.
Specifically, the preset similarity threshold can be configured according to actual needs; for example, it may be set to 90%. To grasp the object accurately, the acquired first and second images generally include an RGB image and a depth image. Therefore, the first grasp-position information includes the two-dimensional position information and the three-dimensional position information of the first grasp position, and the second grasp-position information includes the two-dimensional position information and the three-dimensional position information of the second grasp position.
In one concrete implementation, the detailed process of determining the similarity between the image corresponding to the first grasp position and the image corresponding to the second grasp position is: according to the first and second grasp-position information, compute the two-dimensional similarity between the 2D images of the first and second grasp positions, and the three-dimensional similarity between the 3D images of the first and second grasp positions; then fuse the 2D and 3D similarities according to their respective weights, and take the fused value as the similarity between the two grasp-position images.
Specifically, the first and second images each include a 2D image and a 3D image. If only the 2D similarity between the 2D image of the first grasp position in the first image and the 2D image of the second grasp position in the second image is considered, or only the 3D similarity between the corresponding 3D images is computed, i.e., the similarity between the two grasp-position images is characterized by the 2D similarity or the 3D similarity alone, the accuracy of the similarity is low. In this embodiment, fusing the 2D and 3D similarities effectively improves the accuracy of the similarity.
In this embodiment the similarity may be fused as in formula (1):

Sim(p1, p2) = α · sim2D + β · sim3D    (1)

where p1 denotes the first grasp position, p2 the second grasp position, sim2D the two-dimensional similarity, sim3D the three-dimensional similarity, α the weight of the two-dimensional similarity, and β the weight of the three-dimensional similarity, with the two weights satisfying α + β = 1. In this embodiment, α is set to 0.5 and β is set to 0.5.
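Formula (1) and the threshold comparison from step 102 can be sketched together as follows. The function name is an assumption; the 0.5/0.5 weights and the 0.9 threshold default follow the values given in the text, and the check that the weights sum to 1 is an added assumption made explicit:

```python
def fused_similarity(sim2d, sim3d, alpha=0.5, beta=0.5, threshold=0.9):
    """Formula (1): Sim = alpha*sim2D + beta*sim3D, then compare the
    fused value against the preset similarity threshold (90% here)."""
    assert abs(alpha + beta - 1.0) < 1e-9, "weights must sum to 1"
    sim = alpha * sim2d + beta * sim3d
    return sim, sim >= threshold
```

For example, `fused_similarity(0.95, 0.93)` gives a fused similarity of 0.94, above the 90% threshold, so the two grasp positions would be judged to be at the same location.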
The determination of the two-dimensional similarity and the determination of the three-dimensional similarity are introduced separately below.
The calculation of the two-dimensional similarity includes the sub-steps shown in Fig. 4.
Sub-step 1021: determine the first feature vector of the center point of the 2D image of the first grasp position and the second feature vector of the center point of the 2D image of the second grasp position.
Specifically, the feature descriptor of the center point of the 2D image of the first grasp position is obtained. The feature descriptor may be obtained using features such as the Scale-Invariant Feature Transform ("SIFT"), Speeded-Up Robust Features ("SURF"), or Oriented FAST and Rotated BRIEF ("ORB"). In this embodiment, SIFT is used to obtain the center-point feature descriptor. Let the feature descriptor of the center point of the 2D image of the first grasp position be the first feature vector, denoted Vec2d_p1. The second feature vector is obtained in the same way and denoted Vec2d_p2.
Sub-step 1022: determine the first angle between the first feature vector and the second feature vector.
Specifically, a trigonometric function can be used to measure the first angle; for example, cos(θ), obtained from the cosine formula, can represent the size of the first angle.
Sub-step 1023: determine the two-dimensional similarity from the first angle.
Specifically, if Vec2d_p1 and Vec2d_p2 are identical, then cos(θ) = 1; and since cos(θ) ∈ [-1, 1], the closer cos(θ) is to 1, the more similar the first and second feature vectors are. Therefore, in this embodiment the value of cos(θ) is taken as the two-dimensional similarity, that is, sim2D = cos(θ) = (Vec2d_p1 · Vec2d_p2) / (‖Vec2d_p1‖ · ‖Vec2d_p2‖).
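The cosine measure used here can be written directly from the definitions. This is a plain-Python sketch; extracting the descriptor vectors themselves (e.g., with SIFT) is outside its scope, and the same function applies unchanged to the 3D descriptors described further below:

```python
import math

def cosine_similarity(u, v):
    """cos(theta) between two descriptor vectors, used as the 2D
    similarity (and, with PFH descriptors, as the 3D similarity)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Identical vectors give 1, orthogonal vectors give 0, and the value always lies in [-1, 1], matching the reasoning above.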
The calculation of the three-dimensional similarity includes the sub-steps shown in Fig. 5.
Sub-step 1031: determine the third feature vector of the center point of the 3D image of the first grasp position and the fourth feature vector of the center point of the 3D image of the second grasp position.
Specifically, the depth image is converted into a 3D point cloud. The feature description vector of the center point of the 3D image of the first grasp position is taken as the third feature vector, and that of the center point of the 3D image of the second grasp position as the fourth feature vector. The feature descriptor of a 3D point can be determined using features such as the Point Feature Histogram ("PFH"), Spin Images, or Signatures of Histograms of Orientations ("SHOT"). This embodiment uses the PFH feature-extraction algorithm to determine the third and fourth feature vectors. PFH characterizes the feature distribution of the neighborhood of a 3D point p. All points contained in the ball of radius r centered on p are selected, and the normal vector of each point is computed. For each pair of points p_i and p_j (i ≠ j, j < i) in the neighborhood of p, one of them is chosen as the source point p_s and the other as the target point p_t, such that the angle between the normal of the source point and the line connecting the two points is the smaller one. The local frame formed by the source and target points is shown in Fig. 6, which uses a UVW coordinate system; n_s and n_t are the normal vectors of the source point and the target point, respectively.
The relationship between the source point p_s and the target point p_t can be defined by the following three features: α = v · n_t, φ = u · (p_t − p_s) / ‖p_t − p_s‖, and θ = arctan(w · n_t, u · n_t). These three features are computed for every pair of points in the neighborhood of p. The value range of each feature is divided into b bucket intervals, so the three features together define b³ bucket intervals. The feature triple computed for each pair of points falls into exactly one bucket; the ratio of the number of point pairs in each bucket to the total number of point pairs is taken as the description of that bucket. Concatenated together, these ratios form a b³-dimensional vector, which is the PFH descriptor of the point. Typically b = 4 is set, giving a 64-dimensional feature descriptor.
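The pair features and the binning described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the UVW frame convention (u = n_s, v = u × d̂, w = u × v) and the bucket boundaries are assumptions following the standard PFH formulation.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def unit(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

def pfh_pair_features(p_s, n_s, p_t, n_t):
    """The three PFH features (alpha, phi, theta) for one source/target
    pair, using the local UVW frame: u = n_s, v = u x d_hat, w = u x v.
    Degenerate pairs (u parallel to d_hat) are not handled in this sketch."""
    d = sub(p_t, p_s)
    d_hat = unit(d)
    u = unit(n_s)
    v = unit(cross(u, d_hat))
    w = cross(u, v)
    alpha = dot(v, n_t)
    phi = dot(u, d_hat)
    theta = math.atan2(dot(w, n_t), dot(u, n_t))
    return alpha, phi, theta

def bin_index(alpha, phi, theta, b=4):
    """Map one feature triple into one of the b**3 buckets; b = 4 gives
    the 64-dimensional descriptor mentioned in the text."""
    def idx(value, lo, hi):
        k = int((value - lo) / (hi - lo) * b)
        return min(max(k, 0), b - 1)   # clamp boundary values into range
    ia = idx(alpha, -1.0, 1.0)
    ip = idx(phi, -1.0, 1.0)
    it = idx(theta, -math.pi, math.pi)
    return (ia * b + ip) * b + it
```

Counting how many neighborhood pairs land in each of the 64 buckets and normalizing by the total number of pairs then yields the PFH descriptor vector described above.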
Using the PFH algorithm, the 3D descriptor of the central point of the three-dimensional image of the first grab position is taken as the third feature vector, denoted Vec3d_p1, and that of the second grab position as the fourth feature vector, denoted Vec3d_p2.
Sub-step 1032: determine the second angle between the third feature vector and the fourth feature vector.
Specifically, in the same way as for the first angle, a trigonometric function can be used to measure the size of the second angle.
Sub-step 1033: determine the three-dimensional similarity according to the second angle.
Specifically, if Vec3d_p1 and Vec3d_p2 are identical, then the cosine of the second angle equals 1. Since this cosine lies in [-1, 1], the closer its value is to 1, the more similar the third feature vector is to the fourth feature vector. Therefore, in this embodiment the cosine of the second angle is used as the three-dimensional similarity, that is, (Vec3d_p1 · Vec3d_p2) / (|Vec3d_p1| |Vec3d_p2|).
Step 103: execute the grabbing operation.
Specifically, once it is determined that the first grab position and the second grab position are at the same position, the grabbing operation can be executed.
Step 104: execute the emergency operation corresponding to an incorrectly determined grab position.
Specifically, the emergency operation may be terminating the grabbing process, or sending a prompt message indicating that the grab position was determined incorrectly so that a grab position can be set manually.
Compared with the prior art, the embodiment of the present invention obtains the first grab position from the first image and the second grab position from the second image, and judges whether the first grab position and the second grab position are at the same position. If they are, the determination of both grab positions is shown to be accurate, which ensures that executing the grabbing operation at the first grab position can accurately grab the object to be grabbed, improving the grabbing success rate. Moreover, because the second image is acquired after the first image, and its acquisition position lies within a preset radius centered on the acquisition position of the first image, the first image and the second image are acquired from different viewpoints. Verifying the grab positions determined under different viewpoints ensures the stability of the determined grab position and thus further improves the grabbing success rate.
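The verification step above reduces to fusing the two similarities and comparing against a threshold, as in claim 5. A minimal sketch follows; the specific weights and threshold values are illustrative placeholders, not values given in the patent.

```python
def positions_match(sim_2d, sim_3d, w_2d=0.5, w_3d=0.5, threshold=0.9):
    """Fuse the two-dimensional and three-dimensional similarities with
    their respective weights and compare the result against a preset
    similarity threshold.  Returns True when the two grab positions are
    judged to be at the same position."""
    fused = w_2d * sim_2d + w_3d * sim_3d
    return fused >= threshold

# The grabbing operation is executed only when the two views agree closely.
print(positions_match(0.98, 0.95))  # -> True  (fused 0.965 >= 0.9)
print(positions_match(0.60, 0.70))  # -> False (fused 0.65  <  0.9)
```

When `positions_match` returns `False`, the first embodiment falls back to the emergency operation of step 104, while the second embodiment instead updates the image pair and retries.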
The second embodiment of the present invention relates to a robot grabbing method. The method includes: obtaining first grab position information of an object to be grabbed in a first image and second grab position information of the object to be grabbed in a second image; judging, according to the first grab position information and the second grab position information, whether the first grab position and the second grab position are at the same position; and, if it is determined that they are at the same position, executing the grabbing operation.
The second embodiment is a further improvement on the first embodiment. The main improvement is that the method further includes: if it is determined that the first grab position and the second grab position are at different positions, re-determining the first grab position and the second grab position. The detailed flow of the method is shown in Figure 7.
Step 201: obtain the first grab position information of the object to be grabbed in the first image and the second grab position information of the object to be grabbed in the second image.
Step 202: judge, according to the first grab position information and the second grab position information, whether the first grab position and the second grab position are at the same position. If it is determined that they are at the same position, execute step 203; if it is determined that they are at different positions, execute step 204.
Step 203: execute the grabbing operation.
Step 204: update the first image and the second image, reacquire the first grab position information and the second grab position information, and return to step 202.
Specifically, if after step 202 it is determined that the first grab position and the second grab position are at the same position, step 203 is executed; otherwise step 204 is executed repeatedly until the reacquired first grab position and the reacquired second grab position are at the same position.
The detailed process of updating the first image and the second image is as follows: obtain the first acquisition position at which the second image was acquired; choose a second acquisition position within the preset radius centered on the first acquisition position; acquire a third image at the second acquisition position; and take the second image as the updated first image and the third image as the updated second image. The preset radius is essentially the same as in the first embodiment; that is, it should not be set too large, to avoid the acquired third image differing greatly from the second image. The first grab position information and the second grab position information are reacquired essentially as in the first embodiment: the updated first image is input into the grab position determination model to reobtain the first grab position information, and the updated second image is input into the model to reobtain the second grab position information. Then the flow returns to step 202 to judge whether the first grab position and the second grab position are at the same position.
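One round of this update can be sketched as follows. The helpers `acquire_image` and `choose_nearby_position` are hypothetical stand-ins for the camera and motion planning steps, and sampling uniformly inside a disc of the preset radius is our assumption:

```python
import math
import random

def acquire_image(position):
    """Hypothetical stand-in for the camera acquisition step."""
    return {"acquired_at": position}

def choose_nearby_position(center, radius):
    """Choose a new acquisition position within the preset radius of the
    given center (uniform sampling inside a disc is an assumption)."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    dist = random.uniform(0.0, radius)
    return (center[0] + dist * math.cos(angle),
            center[1] + dist * math.sin(angle))

def update_images(second_image, second_acq_position, radius):
    """One round of step 204: the old second image becomes the new first
    image, and a freshly acquired third image becomes the new second image."""
    new_position = choose_nearby_position(second_acq_position, radius)
    third_image = acquire_image(new_position)
    new_first, new_second = second_image, third_image
    return new_first, new_second, new_position
```

Keeping the radius small, as the text advises, bounds how far successive viewpoints drift apart, so consecutive images stay comparable.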
With the robot grabbing method provided in this embodiment, the first image and the second image are updated, thereby updating the first grab position and the second grab position, and the updating stops once the first grab position and the second grab position are at the same position. Through this repeated judgment, the determined grab position is guaranteed to be accurate, which improves the grabbing success rate.
The division of the above methods into steps is only for clarity of description. In implementation, steps may be merged into one step, or a step may be split into multiple steps; as long as the same logical relationship is included, the implementation falls within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, the algorithm or flow, without changing the core design of the algorithm and flow, also falls within the protection scope of this patent.
The third embodiment of the present invention relates to a terminal whose structure is shown in Figure 8, including: at least one processor 301; and a memory 302 communicatively connected to the at least one processor 301. The memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301 so that the at least one processor 301 can execute the robot grabbing method of the first embodiment or the second embodiment.
The memory 302 and the processor 301 are connected by a bus. The bus may include any number of interconnected buses and bridges, linking together various circuits of the one or more processors and the memory. The bus may also link together various other circuits such as peripheral devices, voltage regulators, and power management circuits, all of which are well known in the art and therefore not further described herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor 301 is transmitted over a wireless medium through an antenna; further, the antenna also receives data and forwards it to the processor 301.
The processor 301 is responsible for managing the bus and for general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 302 may be used to store data used by the processor 301 when performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program that, when executed by a processor, implements the robot grabbing method of the first embodiment or the second embodiment.
Those skilled in the art can understand that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), or a magnetic or optical disk.
Those skilled in the art should understand that the above embodiments are specific embodiments for realizing the present invention, and that in practical applications various changes in form and detail may be made to them without departing from the spirit and scope of the present invention.

Claims (11)

1. A robot grabbing method, comprising:
obtaining first grab position information of an object to be grabbed in a first image and second grab position information of the object to be grabbed in a second image, wherein the second image is acquired within a preset radius centered on the acquisition position of the first image;
judging, according to the first grab position information and the second grab position information, whether the first grab position and the second grab position are at the same position;
if it is determined that the first grab position and the second grab position are at the same position, executing a grabbing operation.
2. The robot grabbing method according to claim 1, wherein, if it is determined that the first grab position and the second grab position are at different positions, the method further comprises:
updating the first image and the second image, and reacquiring the first grab position information and the second grab position information, until the reacquired first grab position and the reacquired second grab position are at the same position.
3. The robot grabbing method according to claim 1 or 2, wherein judging, according to the first grab position information and the second grab position information, whether the first grab position and the second grab position are at the same position specifically comprises:
determining, according to the first grab position information and the second grab position information, the similarity between the image corresponding to the first grab position and the image corresponding to the second grab position;
comparing the similarity with a preset similarity threshold; if it is determined that the similarity is greater than or equal to the similarity threshold, determining that the first grab position and the second grab position are at the same position; otherwise, determining that the first grab position and the second grab position are at different positions.
4. The robot grabbing method according to claim 3, wherein the first grab position information includes two-dimensional position information and three-dimensional position information of the first grab position, and the second grab position information includes two-dimensional position information and three-dimensional position information of the second grab position.
5. The robot grabbing method according to claim 4, wherein determining the similarity between the image corresponding to the first grab position and the image corresponding to the second grab position specifically comprises:
calculating, according to the first grab position information and the second grab position information, the two-dimensional similarity between the two-dimensional image of the first grab position and the two-dimensional image of the second grab position, and the three-dimensional similarity between the three-dimensional image of the first grab position and the three-dimensional image of the second grab position;
fusing the two-dimensional similarity and the three-dimensional similarity according to their respective weights, and taking the combined similarity as the similarity between the image corresponding to the first grab position and the image corresponding to the second grab position.
6. The robot grabbing method according to claim 5, wherein calculating the two-dimensional similarity between the two-dimensional image of the first grab position and the two-dimensional image of the second grab position specifically comprises:
determining, respectively, a first feature vector of the central point of the two-dimensional image of the first grab position and a second feature vector of the central point of the two-dimensional image of the second grab position;
determining a first angle between the first feature vector and the second feature vector;
determining the two-dimensional similarity according to the first angle.
7. The robot grabbing method according to claim 5, wherein calculating the three-dimensional similarity between the three-dimensional image of the first grab position and the three-dimensional image of the second grab position specifically comprises:
determining, respectively, a third feature vector of the central point of the three-dimensional image of the first grab position and a fourth feature vector of the central point of the three-dimensional image of the second grab position;
determining a second angle between the third feature vector and the fourth feature vector;
determining the three-dimensional similarity according to the second angle.
8. The robot grabbing method according to claim 2, wherein updating the first image and the second image specifically comprises:
obtaining a first acquisition position at which the second image was acquired;
choosing a second acquisition position within the preset radius centered on the first acquisition position;
acquiring a third image at the second acquisition position;
taking the second image as the updated first image and the third image as the updated second image.
9. The robot grabbing method according to any one of claims 1 to 8, wherein obtaining the first grab position information of the object to be grabbed in the first image and the second grab position information of the object to be grabbed in the second image specifically comprises:
inputting the first image into a preset grab position determination model to obtain the first grab position information, the grab position determination model being trained on training image data and the grab position information of the object to be grabbed in each item of training image data;
inputting the second image into the preset grab position determination model to obtain the second grab position information.
10. A terminal, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the robot grabbing method according to any one of claims 1 to 9.
11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the robot grabbing method according to any one of claims 1 to 9.
CN201910522372.3A 2019-06-17 2019-06-17 Robot grabbing method, terminal and computer readable storage medium Active CN110253575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910522372.3A CN110253575B (en) 2019-06-17 2019-06-17 Robot grabbing method, terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN110253575A true CN110253575A (en) 2019-09-20
CN110253575B CN110253575B (en) 2021-12-24

Family

ID=67918606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910522372.3A Active CN110253575B (en) 2019-06-17 2019-06-17 Robot grabbing method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110253575B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1293752A (en) * 1999-03-19 2001-05-02 松下电工株式会社 Three-D object recognition method and pin picking system using the method
CN103302666A (en) * 2012-03-09 2013-09-18 佳能株式会社 Information processing apparatus and information processing method
CN104048607A (en) * 2014-06-27 2014-09-17 上海朗煜电子科技有限公司 Visual identification and grabbing method of mechanical arms
CN105729468A (en) * 2016-01-27 2016-07-06 浙江大学 Enhanced robot workbench based on multiple depth cameras
WO2018043525A1 (en) * 2016-09-02 2018-03-08 倉敷紡績株式会社 Robot system, robot system control device, and robot system control method
CN108044627A (en) * 2017-12-29 2018-05-18 深圳市越疆科技有限公司 Detection method, device and the mechanical arm of crawl position
CN108074264A (en) * 2017-11-30 2018-05-25 深圳市智能机器人研究院 A kind of classification multi-vision visual localization method, system and device
CN109377444A (en) * 2018-08-31 2019-02-22 平安科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
US20190061162A1 (en) * 2017-08-24 2019-02-28 Seiko Epson Corporation Robot System


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111243085A (en) * 2020-01-20 2020-06-05 北京字节跳动网络技术有限公司 Training method and device for image reconstruction network model and electronic equipment
CN111243085B (en) * 2020-01-20 2021-06-22 北京字节跳动网络技术有限公司 Training method and device for image reconstruction network model and electronic equipment

Also Published As

Publication number Publication date
CN110253575B (en) 2021-12-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210203

Address after: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.
