CN110428465A - Vision- and tactile-based robotic arm grasping method, system, and device - Google Patents

Vision- and tactile-based robotic arm grasping method, system, and device Download PDF

Info

Publication number
CN110428465A
CN110428465A
Authority
CN
China
Prior art keywords
target
grasped
robotic arm
pose
tactile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910629058.5A
Other languages
Chinese (zh)
Inventor
李玉苹 (Li Yuping)
蒋应元 (Jiang Yingyuan)
乔红 (Qiao Hong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation, Chinese Academy of Sciences
Original Assignee
Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation, Chinese Academy of Sciences
Priority to CN201910629058.5A
Publication of CN110428465A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the field of industrial robots, and in particular relates to a vision- and tactile-based robotic arm grasping method, system, and device, intended to solve the problem of the low success rate of robotic arm grasping of industrial parts under different illumination conditions. The method of the present invention comprises: acquiring the illumination intensity; if the illumination intensity is within the set threshold range, extracting the image of the target to be grasped and matching it against the corresponding global model to obtain a first pose of the target; removing shadows from the target image; based on the first pose, using the iterative closest point and Gauss-Newton algorithms to obtain from the global model a final recognized pose matching the shadow-free target image; if the illumination intensity is outside the threshold range, acquiring a tactile image of the target and matching it against a pre-built knowledge base of tactile-image-to-pose correspondences for the target to obtain the final recognized pose; and grasping with the robotic arm according to the final recognized pose and position information. The present invention improves the success rate with which a robotic arm grasps industrial parts under different illumination conditions.

Description

Vision- and tactile-based robotic arm grasping method, system, and device
Technical field
The invention belongs to the field of industrial robots, and in particular relates to a vision- and tactile-based robotic arm grasping method, system, and device.
Background art
With the rapid development of robotics, industrial robots are used ever more widely in manufacturing. Robots play an important role in automated production across fields such as automobile and auto-parts manufacturing, machining, electrical production, rubber and plastics manufacturing, food processing, and timber and furniture manufacturing. Grasping industrial parts is a common robot task in automated manufacturing. At present, vision-based guidance and localization have become the main means by which industrial robots acquire information about their working environment.
Although visual guidance and localization are widely used, they still have some shortcomings. For example, a binocular vision system has a strong ability to recover three-dimensional information, but its measurement accuracy is closely tied to the calibration accuracy of the cameras. Moreover, under unfavorable illumination the target may be lost because the lighting is insufficient or too strong. Therefore, beyond computer vision, other sensors are needed to complement visual guidance and localization. The present invention uses vision techniques to build a multi-angle spherical view model of the industrial part to be processed and to dynamically remove shadows in real time, and additionally uses a tactile sensor to compensate for the cases where the part pose cannot be captured because the scene is too dark, or where strong illumination creates interfering reflections on the part surface. By considering these possible situations comprehensively, the invention is of practical significance for grasping industrial parts in complex environments.
Summary of the invention
To solve the above problem in the prior art, namely the low success rate of robotic arm grasping of industrial parts under different illumination conditions, a first aspect of the present invention proposes a vision- and tactile-based robotic arm grasping method, comprising:
Step S10: acquire the illumination intensity of the target to be grasped; if the illumination intensity is within a set threshold range, execute step S20, otherwise execute step S50;
Step S20: extract the target image from a captured image of the target to be grasped to obtain a first image, and obtain the pose of the target to be grasped by view matching against the global model corresponding to the target, as a first pose;
Step S30: perform shadow removal on the first image to obtain a second image;
Step S40: taking the first pose as the initial pose, use the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, based on the global model corresponding to the target, a second pose matching the second image; take the second pose as the final recognized pose and execute step S60;
Step S50: acquire a tactile image of the target to be grasped; based on a pre-built knowledge base of tactile-image-to-pose correspondences for the target, obtain a third pose of the target by tactile image matching, and take the third pose as the final recognized pose;
Step S60: the robotic arm grasps the target according to the final recognized pose and the acquired position information of the target.
In some preferred embodiments, in step S20, "obtain the pose of the target to be grasped by view matching against the corresponding global model" is performed as follows: based on the global model corresponding to the target to be grasped, obtain the set of 2D projection views from the different viewpoints generated by a virtual view sphere, find the view matching the first image by image matching, and take the pose corresponding to that view as the pose of the target to be grasped.
In some preferred embodiments, "perform shadow removal on the first image" in step S30 is performed as follows: compute the local variance of the first image, and treat points whose variance value is below a set threshold as shadow points and remove them.
In some preferred embodiments, the variance is computed as:

V(x, y) = (1 / N_V^2) * Σ_{(i,j) ∈ W_V(x,y)} [I(i, j) - g(i, j)]^2

where V(x, y) is the variance value of pixel (x, y), g(x, y) denotes the average gray value of pixel (x, y), I(x, y) denotes the gray value of a specific pixel, W_V(x, y) is the N_V × N_V window centered at (x, y), N_V is the side length of the variance window, and x, y are the two-dimensional coordinates of the pixel.
In some preferred embodiments, the average gray value is computed as:

g(x, y) = (1 / N_A^2) * Σ_{(i,j) ∈ W_A(x,y)} I(i, j)

where W_A(x, y) is the N_A × N_A window centered at (x, y) and N_A is the side length of the window over which the average gray value is computed.
In some preferred embodiments, in step S40, "taking the first pose as the initial pose, use the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, based on the global model corresponding to the target to be grasped, a second pose matching the second image" is performed by fusing the iterative closest point algorithm with the Gauss-Newton algorithm to obtain the following formula, which is iterated until a preset convergence condition is reached:

p_{t+1} = p_t + Δp,    Δp = -(J_ε^T J_ε)^{-1} J_ε^T ε

where p is the pose estimate, Δp is the update vector, ε is the error vector, J_ε is the Jacobian matrix of ε with respect to p, t and t+1 are time indices denoting a given iteration and the next, and T is the iteration period.
In some preferred embodiments, the tactile image in step S50 is obtained by placing the target to be grasped on the surface of a tactile sensor.
In some preferred embodiments, the tactile sensor is an array tactile sensor.
A second aspect of the present invention proposes a vision- and tactile-based robotic arm grasping system comprising an image acquisition device, a tactile sensor device, a placement platform, a processor, and a robotic arm;
the image acquisition device is mounted at a set position above the robotic arm and is configured to capture images of the target to be grasped on the placement platform;
the tactile sensor device is arranged at the top of the placement platform and is configured to acquire the tactile image of the target placed on the platform;
the placement platform is located at a set position within the grasping radius of the robotic arm and is used to hold the target to be grasped;
the processor is configured to generate grasping instructions for the robotic arm by the above vision- and tactile-based robotic arm grasping method, based on the captured image of the target acquired by the image acquisition device and/or the tactile image of the target acquired by the tactile sensor device;
the robotic arm grasps the target on the placement platform according to the grasping instructions output by the processor.
In some preferred embodiments, the robotic arm grasping system further includes a display device for showing the image of the target to be grasped on the placement platform.
A third aspect of the present invention proposes a storage device storing a plurality of programs, the programs being suitable to be loaded and executed by a processor to implement the above vision- and tactile-based robotic arm grasping method.
A fourth aspect of the present invention proposes a processing device comprising a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; and the programs are suitable to be loaded and executed by the processor to implement the above vision- and tactile-based robotic arm grasping method.
Beneficial effects of the present invention:
The present invention improves the success rate with which a robotic arm grasps industrial parts under different illumination conditions. Under normal illumination, the invention builds a global model library in a spherical coordinate system, enabling real-time localization and grasping of industrial parts in 3D, and dynamically removes shadows from the acquired images, which improves matching accuracy. A tactile sensor is also introduced: when lighting conditions are unfavorable, the tactile sensor is activated to obtain contact images and position information of the industrial part, which are transmitted to the computer, processed, and matched by the corresponding algorithms, so that the part can still be accurately localized and grasped, improving the grasping success rate.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings.
Fig. 1 is a flow diagram of the vision- and tactile-based robotic arm grasping method according to an embodiment of the present invention;
Fig. 2 illustrates the virtual view-sphere modeling principle of the vision- and tactile-based robotic arm grasping method according to an embodiment of the present invention;
Fig. 3 is an example plot of the relationship between average gray value and variance for the vision- and tactile-based robotic arm grasping method according to an embodiment of the present invention;
Fig. 4 shows tactile sensor matching examples for the vision- and tactile-based robotic arm grasping method according to an embodiment of the present invention;
Fig. 5 shows an example hardware configuration of the vision- and tactile-based robotic arm grasping system according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the present invention.
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the related invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the related invention.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features therein may be combined with each other.
The vision- and tactile-based robotic arm grasping method of the present invention, as shown in Fig. 1, comprises the following steps:
Step S10: acquire the illumination intensity of the target to be grasped; if the illumination intensity is within the set threshold range, execute step S20, otherwise execute step S50.
Step S20: extract the target image from a captured image of the target to be grasped to obtain a first image, and obtain the pose of the target by view matching against the global model corresponding to the target, as the first pose.
Step S30: perform shadow removal on the first image to obtain a second image.
Step S40: taking the first pose as the initial pose, use the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, based on the global model corresponding to the target, a second pose matching the second image; take the second pose as the final recognized pose and execute step S60.
Step S50: acquire a tactile image of the target to be grasped; based on the pre-built knowledge base of tactile-image-to-pose correspondences for the target, obtain a third pose of the target by tactile image matching, and take the third pose as the final recognized pose.
Step S60: the robotic arm grasps the target according to the final recognized pose and the acquired position information of the target.
To explain the vision- and tactile-based robotic arm grasping method of the present invention more clearly, each step of an embodiment of the method is described in detail below with reference to the drawings.
Step S10: acquire the illumination intensity of the target to be grasped; if the illumination intensity is within the set threshold range, execute step S20, otherwise execute step S50.
The primary purpose of the present invention is to provide a method by which, under a variety of complex lighting environments, a robotic arm obtains the pose information of an industrial part from a tactile sensor or a visual sensor and grasps the part according to that pose information. In this embodiment, the illumination of the current grasping environment is judged first by measuring the illumination intensity of the target to be grasped: if the intensity is within the set threshold range, the illumination is considered normal, and the visual sensor is used to model the industrial part to be processed from multiple spherical viewpoints and to dynamically remove shadows in real time as preprocessing. If the scene is too dark for the part pose to be captured, or strong illumination creates interfering reflections on the part surface that prevent localization, the pose information of the part is obtained from the tactile sensor instead. These possible situations are thus all taken into account.
Step S20: extract the target image from a captured image of the target to be grasped to obtain a first image, and obtain the pose of the target by view matching against the global model corresponding to the target, as the first pose.
Model-based pose estimation requires generating, from the model (a 3D shape model), a global view library containing the two-dimensional projections of the three-dimensional object as seen from different viewpoints. In this embodiment, a virtual view sphere is used to generate the 2D views of a given 3D object. Virtual cameras are placed around the object model, the three-dimensional object model is projected onto the image plane at each camera position, and an image is obtained. The parameters of the virtual cameras equal the intrinsic parameters of the input camera. The 2D shape representations of all views are stored in the 3D shape model. The virtual view sphere of a part limits the allowed pose range, thereby minimizing the number of 2D projections that must be computed and stored in the 3D shape model. To specify the pose range, a sphere is set up around the object, positioned by placing its center at the center of the object's bounding box. As shown in Fig. 2, the xz-plane of the object-centered coordinate system defines the equatorial plane of the sphere. The north pole lies on the negative y-axis; latitude and longitude both range over [-90, 90] degrees, and "pose range" denotes the allowed range of poses. A camera placed on the surface of the sphere observes the object. In addition, the minimum and maximum camera-to-object distances must be specified, i.e. the radii of the spheres on which the cameras lie. Besides defining the pose range, the allowed range of the camera's roll angle, the rotation of the virtual camera about its z-axis, must also be set.
The offline global model library of each industrial part is generated by setting up a 360-degree virtual view sphere. In this embodiment, the target image is extracted from the captured image of the target to be grasped, and the preset global model library is used to obtain the first pose information of the industrial part, i.e. a rough pose estimate of the part in the 2D image.
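As a rough illustration of how such a view library can be enumerated, the sketch below samples virtual-camera poses on spheres around the object center with NumPy; the latitude/longitude convention follows Fig. 2 (poles on the y-axis), while the step size, the radii, and the renderer that would turn each pose into a 2D view are assumptions made for this sketch.

```python
import numpy as np

def sample_view_sphere(lat_range=(-90.0, 90.0), lon_range=(-90.0, 90.0),
                       radii=(0.3, 0.5), step_deg=10.0):
    """Enumerate virtual-camera poses on spheres around the object center.

    Returns (position, R) pairs, where R rotates world coordinates into the
    camera frame with the optical axis pointing at the origin (the center of
    the object's bounding box). Rendering the object model at each pose with
    the real camera's intrinsics yields the 2D projection library.
    """
    poses = []
    lats = np.deg2rad(np.arange(lat_range[0], lat_range[1] + 1e-9, step_deg))
    lons = np.deg2rad(np.arange(lon_range[0], lon_range[1] + 1e-9, step_deg))
    for r in radii:                      # min/max camera-to-object distances
        for lat in lats:
            for lon in lons:
                # spherical -> Cartesian; the north pole lies on the negative y-axis
                pos = r * np.array([np.cos(lat) * np.sin(lon),
                                    -np.sin(lat),
                                    np.cos(lat) * np.cos(lon)])
                z = -pos / np.linalg.norm(pos)          # optical axis toward origin
                up = np.array([0.0, -1.0, 0.0])
                if abs(np.dot(up, z)) > 0.999:          # near a pole: pick another up
                    up = np.array([1.0, 0.0, 0.0])
                x = np.cross(up, z); x /= np.linalg.norm(x)
                y = np.cross(z, x)
                poses.append((pos, np.stack([x, y, z])))
    return poses
```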
Step S30: perform shadow removal on the first image to obtain a second image.
In this embodiment, the local variance of the first image is computed, points whose variance value is below the set threshold are treated as shadow points and removed, and the second image is obtained.
First, the average gray value of the gray-level image corresponding to the two-dimensional image of the industrial part is computed. Comparatively, the gray value of the background is very high, while the gray values of the object and shadow regions are relatively low; in effect this computation is a smoothing process. The average gray value is computed as in formula (1):

g(x, y) = (1 / N_A^2) * Σ_{(i,j) ∈ W_A(x,y)} I(i, j)    (1)

where g(x, y) denotes the average gray value of pixel (x, y) over its N_A × N_A neighborhood W_A(x, y), I(x, y) denotes the gray value of the current pixel, N_A is the side length of the averaging window, and x, y are the two-dimensional coordinates of the pixel.
For each pixel of the image, the variance over its neighborhood is then computed. Intuitively, if a region of the image is smooth, its variance will be very low; if a region is rough, its variance will be large. Since shadow regions are smooth, their variance can be expected to be small. The variance is computed as in formula (2):

V(x, y) = (1 / N_V^2) * Σ_{(i,j) ∈ W_V(x,y)} [I(i, j) - g(i, j)]^2    (2)

where V(x, y) is the variance value of pixel (x, y) and N_V is the side length of the variance window W_V(x, y).
As shown in Fig. 3, the horizontal axis (Variance) is the variance of the pixels in the image and the vertical axis (Average Gray) is their average gray value. Each point in the plot corresponds to pixels of the 2D image, and its intensity encodes how many pixels share the current pair of feature values: the more pixels with a given feature pair, the brighter the point. Because pixels inside a shadow share the same features, they cluster into a point set in the plot, and the same holds for pixels in the object region and in the background. Three regions are marked in the plot with rectangles and an ellipse: points in the left rectangle correspond to the background (Background), which is bright and smooth; points in the lower-left rectangle correspond to the shadow (Shadow), which is dark and smooth; and points in the right ellipse correspond to the object (Object), whose region is dark and rough.
In an industrial environment, the background is relatively simple: industrial images usually have a bright background, and sufficient illumination is enough; here, a nearly pure-white background is used. Removing a detected shadow is therefore very simple: the gray values of the shadow region merely need to be set to white. This is the preferred way to remove shadows in an industrial environment in this embodiment; other shadow removal approaches are also applicable in the present invention.
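A compact Python sketch of this shadow-removal step, following formulas (1) and (2), is given below; the window sizes and both thresholds are illustrative values, the local variance is computed with the standard E[I^2] - E[I]^2 identity, and the dark-and-smooth criterion combines the variance test with the gray-value clusters of Fig. 3.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def remove_shadows(gray, n_a=7, n_v=7, var_thresh=40.0, mean_thresh=120.0):
    """Shadow removal by local statistics (formulas (1) and (2)).

    gray: 2-D uint8 grayscale image. Window sizes and thresholds are
    illustrative values, not taken from the patent. Shadow pixels are both
    smooth (low local variance) and dark (low local mean), which separates
    them from the bright, smooth background and the dark, rough object
    regions shown in Fig. 3.
    """
    img = gray.astype(np.float64)
    mean = uniform_filter(img, size=n_a)                      # formula (1): g(x, y)
    # formula (2) via the identity Var = E[I^2] - E[I]^2 over the N_V window
    var = uniform_filter(img ** 2, size=n_v) - uniform_filter(img, size=n_v) ** 2
    shadow = (var < var_thresh) & (mean < mean_thresh)        # smooth and dark
    out = gray.copy()
    out[shadow] = 255      # paint shadow pixels white, matching the background
    return out
```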
Step S40: taking the first pose as the initial pose, use the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, based on the global model corresponding to the target, a second pose matching the second image; take the second pose as the final recognized pose and execute step S60.
In this embodiment, a local-optimization method is needed to refine the pose estimate and obtain a more accurate pose. A combination of iterative closest point (ICP) and Gauss-Newton optimization is used.
Registration unifies two or more sets of point-cloud data expressed in different coordinate systems into a single reference frame through a rotation and translation transform. This process can be accomplished with a set of correspondences, and the method that recovers the camera pose from two sets of 3D points is commonly called iterative closest point (ICP). The Gauss-Newton algorithm approximates a nonlinear regression model with a Taylor series expansion and then iteratively corrects the regression coefficients so that they approach the optimal coefficients of the nonlinear regression model, finally minimizing the residual sum of squares of the original model.
The optimization process is as follows. The initial state is p_0, i.e. the initial pose. Fusing the iterative closest point algorithm with the Gauss-Newton algorithm yields formula (3), which is iterated until the preset convergence condition is reached, i.e. until the pose matching the second image is obtained from the global model corresponding to the target to be grasped:

p_{t+1} = p_t + Δp    (3)

The update vector Δp can be expressed as formula (4):

Δp = -(J_ε^T J_ε)^{-1} J_ε^T ε    (4)

where ε is the error vector, J_ε is the Jacobian matrix of ε with respect to p, t and t+1 are time indices denoting a given iteration and the next, T is the iteration period, and p is the pose estimate. Similarly to the ICP algorithm, the correspondence and minimization problems are solved iteratively until convergence; the pose matching the second image obtained from the global model corresponding to the target, i.e. the second pose, is taken as the final recognized pose.
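For concreteness, the refinement loop of formulas (3) and (4) can be sketched in Python as follows; the residual callable, which stands in for the ICP correspondence search between projected model points and image points, and the finite-difference Jacobian are assumptions made for this sketch.

```python
import numpy as np

def refine_pose(p0, residual, max_iters=50, tol=1e-6, h=1e-6):
    """Gauss-Newton pose refinement implementing formulas (3) and (4).

    p0:       initial pose estimate (e.g. a 6-vector) from view matching.
    residual: callable returning the error vector eps(p) between projected
              model points and their nearest image points (the ICP
              correspondence step); supplied by the caller.
    """
    p = np.asarray(p0, dtype=np.float64)
    for _ in range(max_iters):
        eps = residual(p)
        # finite-difference Jacobian J_eps of eps with respect to p
        J = np.empty((eps.size, p.size))
        for k in range(p.size):
            dp = np.zeros_like(p)
            dp[k] = h
            J[:, k] = (residual(p + dp) - eps) / h
        # formula (4): delta_p = -(J^T J)^{-1} J^T eps
        delta = -np.linalg.solve(J.T @ J, J.T @ eps)
        p = p + delta                       # formula (3): p_{t+1} = p_t + delta_p
        if np.linalg.norm(delta) < tol:     # preset convergence condition
            break
    return p
```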
Step S50: acquire a tactile image of the target to be grasped; based on the pre-built knowledge base of tactile-image-to-pose correspondences for the target, obtain a third pose of the target by tactile image matching, and take the third pose as the final recognized pose.
In this embodiment, a knowledge base of contact surfaces is built in advance from a large amount of collected contact-surface shape information; it stores the correspondence between tactile images and poses of the target. When illumination conditions are unfavorable, the industrial part is placed on the surface of the tactile sensor, which senses an image of the contact surface. The contact information is analyzed to judge the pose information of the industrial part, and the image is passed back to the computer via the relevant equipment. Fig. 4 shows the form of the returned images, giving contact-surface examples for two different types of industrial part: panel 1 shows the contact-surface images formed by two placements of the first type of part; panel 2 shows the contact surface formed when the second type of part is placed upright; and panel 3 shows the contact-surface information formed when the second type of part is placed at an angle. The computer processes the returned image and matches it against the knowledge base, including target extraction and acquisition of pixel-coordinate information, to obtain the third pose of the industrial part, which serves as the final recognized pose.
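As an illustration of this matching step, the sketch below performs a nearest-neighbor lookup of a returned tactile image in the knowledge base; the (template, pose) storage format and the cosine-similarity measure are assumptions made for this sketch, not details specified by the patent.

```python
import numpy as np

def match_tactile(tactile_img, knowledge_base):
    """Nearest-neighbor lookup in the tactile knowledge base (step S50).

    knowledge_base: iterable of (template_image, pose) pairs built offline
    from contact-surface recordings. Returns the pose of the most similar
    stored contact-surface image.
    """
    q = tactile_img.astype(np.float64).ravel()
    q /= (np.linalg.norm(q) + 1e-12)              # normalize the pressure pattern
    best_pose, best_score = None, -np.inf
    for template, pose in knowledge_base:
        t = template.astype(np.float64).ravel()
        t /= (np.linalg.norm(t) + 1e-12)
        score = float(q @ t)                      # cosine similarity
        if score > best_score:
            best_score, best_pose = score, pose
    return best_pose
```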
Step S60: the robotic arm grasps the target according to the final recognized pose and the acquired position information of the target.
In this embodiment, if illumination is normal, vision techniques are used to model the industrial part to be processed from multiple spherical viewpoints and to dynamically remove shadows in real time as preprocessing, yielding accurate pose information as the final recognized pose; if the scene is too dark for the pose to be captured, or strong illumination causes interfering reflections that prevent localization on the part surface, the exact pose information of the part's contact surface is obtained from the tactile sensor as the final recognized pose. The final recognized pose is expressed in camera coordinates; since there is a fixed transformation matrix between the world coordinate system and the camera coordinate system, the computer automatically converts image coordinates into actual coordinates after each acquisition, and the robotic arm localizes and grasps the industrial part according to the converted position information and the final recognized pose.
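The coordinate conversion mentioned here amounts to applying that fixed homogeneous transform; a minimal sketch follows, in which the name T_world_cam for the calibrated 4x4 extrinsic matrix is an assumption made for this sketch.

```python
import numpy as np

def camera_to_world(p_cam, T_world_cam):
    """Map a 3-D point from camera coordinates to world (robot-base)
    coordinates using the fixed transformation matrix between the two
    frames, as obtained from calibration."""
    p_h = np.append(np.asarray(p_cam, dtype=np.float64), 1.0)  # homogeneous coords
    return (T_world_cam @ p_h)[:3]
```

The converted point, together with the final recognized pose, is then handed to the arm as its grasp target.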
A second embodiment of the present invention is a vision- and tactile-based robotic arm grasping system comprising an image acquisition device, a tactile sensor device, a placement platform, a processor, and a robotic arm.
Fig. 5 shows the hardware configuration of the system, including a part placement table 1, a robotic arm 2, a camera 3, a part-status display screen 4, a tactile sensor 5, and an industrial part 6; there is also a remote control computer, not marked in the figure. The part placement table is a conveyor belt with a tactile sensor on its surface, so the pose of the target to be grasped can be recognized in advance and its position then tracked; when the target reaches the grasping position, the grasping action is performed.
The image acquisition device, i.e. the camera, is mounted at a set position above the robotic arm and captures images of the target to be grasped on the placement platform. The tactile sensor device, i.e. the tactile sensor, is arranged at the top of the placement platform; "top" may mean the upper surface, or below the surface on which targets are placed, as long as the tactile image of the target can be detected, and in some other embodiments the tactile image may also be obtained by other means; it is used to acquire the tactile image of the target placed on the platform. The placement platform, i.e. the part placement table, is located at a set position within the grasping radius of the robotic arm and holds the target to be grasped. The processor, i.e. the remote control computer, generates grasping instructions for the robotic arm by the vision- and tactile-based robotic arm grasping method, based on the captured image of the target acquired by the image acquisition device and/or the tactile image of the target acquired by the tactile sensor device. The robotic arm grasps the target on the placement platform according to the grasping instructions output by the processor. The camera, the remote control computer, and the robotic arm are electrically connected in sequence.
The grasping system further includes a display device, i.e. the part-status display screen, for showing the image of the target to be grasped on the placement platform.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the system described above and the related explanations may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
It should be noted that the vision- and tactile-based robotic arm grasping system of the above embodiment is illustrated only in terms of the division into the above functional modules. In practical applications, the above functions may be allocated to different functional modules as needed, i.e. the modules or steps of the embodiments of the present invention may be decomposed or recombined; for example, the modules of the above embodiment may be merged into one module, or further split into multiple sub-modules, to accomplish all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps and are not to be construed as improper limitations of the present invention.
A third embodiment of the present invention is a storage device storing a plurality of programs, the programs being suitable to be loaded and executed by a processor to implement the above vision- and tactile-based robotic arm grasping method.
A fourth embodiment of the present invention is a processing device comprising a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; and the programs are suitable to be loaded and executed by the processor to implement the above vision- and tactile-based robotic arm grasping method.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes and related explanations of the storage device and processing device described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
Those skilled in the art should recognize that the modules and method steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two, and that the programs corresponding to software modules and method steps can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disks, removable disks, CD-ROMs, or any other form of storage medium known in the technical field. To clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described above generally in terms of their functionality. Whether these functions are implemented in electronic hardware or in software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The terms "first", "second", and the like are used to distinguish similar objects, not to describe or indicate a particular order or sequence.
The term "comprising" or any other similar term is intended to cover a non-exclusive inclusion, so that a process, method, article, or device/apparatus comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device/apparatus.
The technical solution of the present invention has thus been described with reference to the preferred embodiments shown in the drawings. However, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will fall within the protection scope of the present invention.

Claims (12)

1. A vision- and tactile-based robotic arm grasping method, characterized in that the method comprises:
step S10: acquiring the illumination intensity of the target to be grasped; if the illumination intensity is within a set threshold range, executing step S20, otherwise executing step S50;
step S20: extracting the target image from a captured image of the target to be grasped to obtain a first image, and obtaining the pose of the target to be grasped by view matching against the global model corresponding to the target, as a first pose;
step S30: performing shadow removal on the first image to obtain a second image;
step S40: taking the first pose as the initial pose, using the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, based on the global model corresponding to the target to be grasped, a second pose matching the second image, taking the second pose as the final recognized pose, and executing step S60;
step S50: acquiring a tactile image of the target to be grasped; based on a pre-built knowledge base of tactile-image-to-pose correspondences for the target, obtaining a third pose of the target by tactile image matching, and taking the third pose as the final recognized pose;
step S60: grasping, by a robotic arm, the target to be grasped according to the obtained final recognized pose and the acquired position information of the target.
2. The vision- and tactile-based robotic arm grasping method according to claim 1, characterized in that, in step S20, "obtaining the pose of the target to be grasped by view matching against the global model corresponding to the target" is performed as follows: based on the global model corresponding to the target to be grasped, obtaining the set of 2D projection views from the different viewpoints generated by a virtual view sphere, finding the view matching the first image by image matching, and taking the pose corresponding to that view as the pose of the target to be grasped.
3. The vision- and tactile-based robotic arm grasping method according to claim 1, characterized in that "performing shadow removal on the first image" in step S30 is performed as follows: computing the local variance of the first image, and treating points whose variance value is below a set threshold as shadow points and removing them.
4. The vision- and tactile-based robotic arm grasping method according to claim 3, characterized in that the variance is computed as:

V(x, y) = (1 / N_V^2) * Σ_{(i,j) ∈ W_V(x,y)} [I(i, j) - g(i, j)]^2

wherein V(x, y) is the variance value of pixel (x, y), g(x, y) denotes the average gray value of pixel (x, y), I(x, y) denotes the gray value of a specific pixel, W_V(x, y) is the N_V × N_V window centered at (x, y), N_V is the side length of the variance window, and x, y are the two-dimensional coordinates of the pixel.
5. The vision- and tactile-based robotic arm grasping method according to claim 4, characterized in that the average gray value is computed as:

g(x, y) = (1 / N_A^2) * Σ_{(i,j) ∈ W_A(x,y)} I(i, j)

wherein W_A(x, y) is the N_A × N_A window centered at (x, y) and N_A is the side length of the window over which the average gray value is computed.
6. The vision- and tactile-based robotic arm grasping method according to claim 1, characterized in that, in step S40, "taking the first pose as the initial pose, using the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, based on the global model corresponding to the target to be grasped, a second pose matching the second image" is performed by fusing the iterative closest point algorithm with the Gauss-Newton algorithm to obtain the following formula, which is iterated until a preset convergence condition is reached:

p_{t+1} = p_t + Δp,    Δp = -(J_ε^T J_ε)^{-1} J_ε^T ε

wherein p is the pose estimate, Δp is the update vector, ε is the error vector, J_ε is the Jacobian matrix of ε with respect to p, t and t+1 are time indices denoting a given iteration and the next, and T is the iteration period.
7. The vision- and tactile-based robotic arm grasping method according to claim 1, characterized in that the tactile image in step S50 is obtained by placing the target to be grasped on the surface of a tactile sensor.
8. The vision- and tactile-based robotic arm grasping method according to claim 7, characterized in that the tactile sensor is an array tactile sensor.
9. A vision- and tactile-based robotic arm grasping system, characterized in that the grasping system comprises an image acquisition device, a tactile sensor device, a placement platform, a processor, and a robotic arm;
the image acquisition device is mounted at a set position above the robotic arm and is configured to capture images of the target to be grasped on the placement platform;
the tactile sensor device is arranged at the top of the placement platform and is configured to acquire the tactile image of the target to be grasped placed on the platform;
the placement platform is located at a set position within the grasping radius of the robotic arm and is used to hold the target to be grasped;
the processor is configured to generate grasping instructions for the robotic arm by the vision- and tactile-based robotic arm grasping method according to any one of claims 1-8, based on the captured image of the target acquired by the image acquisition device and/or the tactile image of the target acquired by the tactile sensor device;
the robotic arm grasps the target to be grasped on the placement platform according to the grasping instructions output by the processor.
10. The vision- and tactile-based robotic arm grasping system according to claim 9, characterized in that the grasping system further comprises a display device for showing the image of the target to be grasped on the placement platform.
11. A storage device storing a plurality of programs, characterized in that the programs are suitable to be loaded and executed by a processor to implement the vision- and tactile-based robotic arm grasping method according to any one of claims 1-8.
12. A processing device, comprising a processor and a storage device, the processor being adapted to execute programs and the storage device being adapted to store a plurality of programs, characterized in that the programs are suitable to be loaded and executed by the processor to implement the vision- and tactile-based robotic arm grasping method according to any one of claims 1-8.
CN201910629058.5A 2019-07-12 2019-07-12 Vision- and tactile-based robotic arm grasping method, system, and device Pending CN110428465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910629058.5A CN110428465A (en) 2019-07-12 2019-07-12 Vision- and tactile-based robotic arm grasping method, system, and device


Publications (1)

Publication Number Publication Date
CN110428465A (en) 2019-11-08

Family

ID=68410466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910629058.5A 2019-07-12 2019-07-12 Vision- and tactile-based robotic arm grasping method, system, and device

Country Status (1)

Country Link
CN (1) CN110428465A (en)



Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101097131A (en) * 2006-06-30 2008-01-02 廊坊智通机器人系统有限公司 Method for marking workpieces coordinate system
US20080027580A1 (en) * 2006-07-28 2008-01-31 Hui Zhang Robot programming method and apparatus with both vision and force
US20100131235A1 (en) * 2008-11-26 2010-05-27 Canon Kabushiki Kaisha Work system and information processing method
CN102622763A (en) * 2012-02-21 2012-08-01 芮挺 Method for detecting and eliminating shadow
US20140277588A1 (en) * 2013-03-15 2014-09-18 Eli Robert Patt System and method for providing a prosthetic device with non-tactile sensory feedback
US9579801B2 (en) * 2013-06-11 2017-02-28 Somatis Sensor Solutions LLC Systems and methods for sensing objects
CN103530886A (en) * 2013-10-22 2014-01-22 上海安奎拉信息技术有限公司 Low-calculation background removing method for video analysis
CN107921622A (en) * 2015-08-25 2018-04-17 川崎重工业株式会社 Robot system
CN205121556U (en) * 2015-10-12 2016-03-30 中国科学院自动化研究所 Robot grasping system
CN105930854A (en) * 2016-04-19 2016-09-07 东华大学 Manipulator visual system
CN106845354A (en) * 2016-12-23 2017-06-13 中国科学院自动化研究所 Partial view base construction method, part positioning grasping means and device
CN108537841A (en) * 2017-03-03 2018-09-14 株式会社理光 A kind of implementation method, device and the electronic equipment of robot pickup
CN107234625A (en) * 2017-07-07 2017-10-10 中国科学院自动化研究所 The method that visual servo is positioned and captured
CN107972069A (en) * 2017-11-27 2018-05-01 胡明建 The design method that a kind of computer vision and Mechanical Touch are mutually mapped with the time
CN108297083A (en) * 2018-02-09 2018-07-20 中国科学院电子学研究所 Mechanical arm system
CN108638054A (en) * 2018-04-08 2018-10-12 河南科技学院 A kind of intelligence explosive-removal robot five-needle pines blister rust control method

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Chao Ma et al., "Flexible Robotic Grasping Strategy with Constrained Region in Environment", International Journal of Automation and Computing *
Di Guo et al., "Robotic grasping using visual and tactile sensing", Information Sciences *
J. Li et al., "Slip Detection with Combined Tactile and Visual Information", 2018 IEEE International Conference on Robotics and Automation (ICRA) *
Wu Yutian et al., "Medical Ultrasound Equipment: Principles, Design, and Applications", Science and Technology Literature Press, 30 April 2012 *
Lu Danling, "Research on Robotic Arm Target Grasping Based on Visual-Tactile Fusion", China Master's Theses Full-text Database, Information Science and Technology *
Sun Shuifa et al., "Video Foreground Detection and Its Application in Hydropower Engineering Monitoring", National Defense Industry Press, 31 December 2014 *
Luo Shiguang et al., "Experimental Design and Data Processing", China Railway Press, 30 April 2018 *
Guo Yingda, "Research on Visual-Tactile Fusion Technology in Robot Grasping", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111055279B (en) * 2019-12-17 2022-02-15 清华大学深圳国际研究生院 Multi-mode object grabbing method and system based on combination of touch sense and vision
CN111055279A (en) * 2019-12-17 2020-04-24 清华大学深圳国际研究生院 Multi-mode object grabbing method and system based on combination of touch sense and vision
CN111204476A (en) * 2019-12-25 2020-05-29 上海航天控制技术研究所 Vision-touch fusion fine operation method based on reinforcement learning
CN111204476B (en) * 2019-12-25 2021-10-29 上海航天控制技术研究所 Vision-touch fusion fine operation method based on reinforcement learning
CN111913204B (en) * 2020-07-16 2024-05-03 西南大学 Mechanical arm guiding method based on RTK positioning
CN111913204A (en) * 2020-07-16 2020-11-10 西南大学 Mechanical arm guiding method based on RTK positioning
CN112809679A (en) * 2021-01-25 2021-05-18 清华大学深圳国际研究生院 Method and device for grabbing deformable object and computer readable storage medium
CN113808198A (en) * 2021-11-17 2021-12-17 季华实验室 Method and device for labeling suction surface, electronic equipment and storage medium
CN113808198B (en) * 2021-11-17 2022-03-08 季华实验室 Method and device for labeling suction surface, electronic equipment and storage medium
CN114851227A (en) * 2022-06-22 2022-08-05 上海大学 Device based on machine vision and sense of touch fuse perception
CN114851227B (en) * 2022-06-22 2024-02-27 上海大学 Device based on machine vision and touch sense fusion perception
CN114872054A (en) * 2022-07-11 2022-08-09 深圳市麦瑞包装制品有限公司 Method for positioning robot hand for industrial manufacturing of packaging container
CN115147411A (en) * 2022-08-30 2022-10-04 启东赢维数据信息科技有限公司 Labeller intelligent positioning method based on artificial intelligence
CN115760805A (en) * 2022-11-24 2023-03-07 中山大学 Positioning method for processing surface depression of element based on visual touch sense
CN115760805B (en) * 2022-11-24 2024-02-09 中山大学 Positioning method for processing element surface depression based on visual touch sense
CN115625713A (en) * 2022-12-05 2023-01-20 开拓导航控制技术股份有限公司 Manipulator grabbing method based on touch-vision fusion perception and manipulator

Similar Documents

Publication Publication Date Title
CN110428465A (en) Vision- and tactile-based robotic arm grasping method, system, and device
CN109255813B (en) Man-machine cooperation oriented hand-held object pose real-time detection method
CN108555908B (en) Stacked workpiece posture recognition and pickup method based on RGBD camera
US12008796B2 (en) Systems and methods for pose detection and measurement
Marton et al. General 3D modelling of novel objects from a single view
Saeedi et al. Vision-based 3-D trajectory tracking for unknown environments
CN109816704A (en) The 3 D information obtaining method and device of object
CN103196370B (en) Measuring method and measuring device of conduit connector space pose parameters
JP2011138490A (en) Method for determining pose of object in scene
Agrawal et al. Vision-guided robot system for picking objects by casting shadows
CN110375765B (en) Visual odometer method, system and storage medium based on direct method
CN109318227B (en) Dice-throwing method based on humanoid robot and humanoid robot
CN112102342B (en) Plane contour recognition method, plane contour recognition device, computer equipment and storage medium
CN111080685A (en) Airplane sheet metal part three-dimensional reconstruction method and system based on multi-view stereoscopic vision
Smith et al. Eye-in-hand robotic tasks in uncalibrated environments
CN114494150A (en) Design method of monocular vision odometer based on semi-direct method
CN112348890A (en) Space positioning method and device and computer readable storage medium
CN112700505B (en) Binocular three-dimensional tracking-based hand and eye calibration method and device and storage medium
Wang et al. Human foot reconstruction from multiple camera images with foot shape database
CN109815966A (en) A kind of mobile robot visual odometer implementation method based on improvement SIFT algorithm
Wang et al. Human behavior imitation for a robot to play table tennis
Walck et al. Automatic observation for 3d reconstruction of unknown objects using visual servoing
Chiu et al. Class-specific grasping of 3D objects from a single 2D image
Qiu et al. Single view based nonlinear vision pose estimation from coplanar points
Wang et al. RGBD object recognition and flat area analysis method for manipulator grasping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20191108