CN116385437B - Multi-view multi-image fusion method and device - Google Patents

Info

Publication number
CN116385437B
CN116385437B (application CN202310651022.3A)
Authority
CN
China
Prior art keywords
target object
grabbing
mechanical gripper
picture
workbench
Prior art date
Legal status
Active
Application number
CN202310651022.3A
Other languages
Chinese (zh)
Other versions
CN116385437A (en)
Inventor
姚军亭
陈国栋
贾风光
李志锋
丁斌
古缘
Current Assignee
Shandong Zhongqing Intelligent Technology Co ltd
Original Assignee
Shandong Zhongqing Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Zhongqing Intelligent Technology Co ltd
Priority to CN202310651022.3A
Publication of CN116385437A
Application granted
Publication of CN116385437B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0007 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E 30/00 Energy generation of nuclear origin
    • Y02E 30/10 Nuclear fusion reactors

Abstract

The application discloses a multi-view multi-image fusion method and device. An optical assembly is fixed on each grabbing finger of a mechanical gripper; each assembly comprises an illumination optical fiber for projecting an indication light spot onto a workbench and a finger-dividing camera for shooting the workbench, so that the current picture shot by the finger-dividing camera contains the target object and every indication light spot. The current picture in which the indication light spot coincides with the target object is taken as a first picture, and the current picture in which the indication light spot no longer coincides with the target object is taken as a second picture. By comparing the change in the size of the indication light spot between the first picture and the second picture, the control device can calculate the height of the target object, so that the longitudinal travel of the mechanical gripper when grabbing the target object is adjusted accurately and wear of the mechanical gripper is avoided.

Description

Multi-view multi-image fusion method and device
Technical Field
The application relates to the field of intelligent robots, in particular to a multi-view multi-image fusion method and device.
Background
At present, various automation industries are developing rapidly. For operations that are simple and repetitive yet demand high precision, manual operation introduces a certain deviation, so a high-precision mechanical arm is required. Machine vision is a rapidly developing branch of artificial intelligence; in short, it uses a machine instead of the human eye to measure and judge. When an industrial mechanical arm grabs an object based on machine vision, a camera must acquire a picture of the object, determine the object's position information, and calibrate the relative position between the mechanical arm and the object, so that the mechanical arm can be controlled to perform the related operations and grab the object.
However, because position information of the target object in the depth direction is lacking, a grabbing instruction issued to the mechanical arm often fails to grab the target object stably, and the mechanical arm is likely to touch other obstacles in the depth direction while grabbing, so that the mechanical gripper is worn.
Disclosure of Invention
The present application aims to provide a multi-view multi-image fusion method and device that can alleviate the above problems.
Embodiments of the present application are implemented as follows:
in a first aspect, the present application provides a multi-view multi-image fusion apparatus comprising:
the mechanical gripper comprises a gripping driving piece and at least two gripping fingers, wherein the gripping driving piece is used for driving the at least two gripping fingers to realize gripping actions;
the motion driving assembly is used for driving the mechanical gripper to move in a three-dimensional space;
the distance detector is arranged on the mechanical gripper and between the at least two gripping fingers;
the optical assemblies are mounted on the corresponding grabbing fingers, each optical assembly fixes an illumination optical fiber towards the extending direction of its grabbing finger, and each optical assembly is further provided with a finger-dividing camera;
a laser, the output end of which is connected with the head end of each illumination optical fiber;
and the control device is respectively and electrically connected with the laser, the finger-dividing camera, the motion driving assembly and the grabbing driving piece.
It can be appreciated that the application discloses a multi-view multi-image fusion device in which an optical assembly is fixed on each grabbing finger of the mechanical gripper. Each optical assembly comprises an illumination optical fiber for projecting an indication light spot onto the workbench and a finger-dividing camera for shooting the workbench, so that the current picture shot by the finger-dividing camera contains the target object and every indication light spot. The current picture in which the indication light spot coincides with the target object is taken as a first picture, and the current picture in which the indication light spot no longer coincides with the target object is taken as a second picture. By comparing the change in the size of the indication light spot between the two pictures, the control device can calculate the height of the target object, so that the longitudinal travel of the mechanical gripper when grabbing the target object is adjusted accurately and wear of the mechanical gripper is avoided.
In an optional embodiment of the present application, a central camera is further disposed on the mechanical gripper and disposed between the at least two gripping fingers, and the central camera is electrically connected with the control device.
It can be understood that the central camera arranged between the grabbing fingers is used for shooting the workbench, and the current picture shot by the central camera comprises a target object and each indication light spot; the control device can rapidly and accurately calibrate the relative position between the mechanical arm and the target object by analyzing the current picture, so as to adjust the mechanical gripper to move towards the target object in the horizontal direction.
In an alternative embodiment of the application, the optical assembly comprises an optical barrel, the illumination optical fiber, a fixing piece and a beam expanding optical element; the tail end of the illumination optical fiber is inserted into the optical barrel and fixed to the optical barrel by the fixing piece, and the beam expanding optical element is arranged on the output light path of the illumination optical fiber and is used for expanding the outgoing beam of the illumination optical fiber under the control of the control device.
It can be understood that a beam expanding optical element is arranged on the output light path of the illumination optical fiber. It may be a lens assembly formed of several lenses and is used to expand the outgoing beam of the illumination optical fiber so as to enlarge the area of the indication light spot projected on the workbench. The beam expanding optical element may be electrically controlled: it expands the outgoing beam of the illumination optical fiber under the control of the control device, and simply transmits the beam unchanged when no control command from the control device is received.
In an optional embodiment of the present application, the finger-dividing camera is disposed on the optical barrel, and the finger-dividing camera and the illumination optical fiber are both disposed towards the extending direction of the capturing finger.
In a second aspect, the present application discloses a multi-view multi-image fusion method applied to the control device of the multi-view multi-image fusion device according to any one of the first aspect, the method comprising the steps of:
S1, sending a start instruction to the laser so that the laser projects indication light spots onto the workbench through the illumination optical fibers, and then obtaining a current picture shot by the finger-dividing camera;
S2, sending a first instruction to the grabbing driving piece to drive the mechanical gripper to adjust the grabbing finger angle so that the grabbing fingers are perpendicular to the workbench; sending a first motion instruction to the motion driving assembly to drive the mechanical gripper to move towards the target object on a plane at a preset height from the workbench; and taking the current picture as a first picture when, in the current picture, the indication light spot overlaps the target object and is tangent to the outline of the target object;
S3, sending a second instruction to the grabbing driving piece to drive the mechanical gripper to adjust the grabbing finger angle so that the grabbing fingers rotate towards the side away from the target object, and taking the current picture as a second picture when, in the current picture, the indication light spot no longer overlaps the target object and is tangent to the outline of the target object;
s4, calculating the height of the target object according to the size change of the indication light spots in the first picture and the second picture.
The application discloses a multi-view multi-image fusion method for the control device of the multi-view multi-image fusion device described above, which mainly comprises the following steps: after the indication light spots are projected onto the workbench, acquiring a current picture, shot by the finger-dividing camera, that contains the target object and each indication light spot; adjusting the angle of the grabbing fingers through the grabbing driving piece, taking the current picture in which the grabbing fingers are perpendicular to the workbench and the indication light spot coincides with the target object as a first picture, and taking the current picture in which the grabbing fingers are no longer perpendicular to the workbench and the indication light spot no longer coincides with the target object as a second picture; and calculating the height of the target object by comparing the change in the size of the indication light spots between the first picture and the second picture, so that the longitudinal travel of the mechanical gripper when grabbing the target object can be adjusted accurately and wear of the mechanical gripper avoided.
The labels S1, S2, etc. are only step identifiers; they do not require the steps to be executed in ascending order. For example, step S2 may be performed before step S1; the present application does not limit the execution order.
In an alternative embodiment of the present application, the method further comprises:
s5, sending a second motion instruction to the motion driving assembly, and driving the mechanical gripper to move towards the target object on a plane with a preset height from the workbench until the mechanical gripper is positioned right above the target object;
S6, calculating the target descent distance according to the following formula: d = H - h, wherein d represents the target descent distance, H represents the preset height, and h represents the height of the target object;
s7, sending a descending instruction to the motion driving assembly, and driving the mechanical gripper to move the target descending distance to the target object in the direction perpendicular to the workbench;
S8, after the mechanical gripper has moved down by the target descent distance, sending a grabbing instruction to the grabbing driving piece to drive the mechanical gripper to grab the target object.
It can be understood that the target descent distance is calculated from the height of the target object, and the mechanical gripper descends by exactly this distance. This prevents the gripper from rubbing against the workbench through an excessive descent, allows the target object to be grabbed around its middle region, and helps keep it balanced during grabbing.
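The descent calculation of steps S6 to S8 can be sketched as follows; the function name and the range check are illustrative additions, not part of the patent.

```python
def target_descent_distance(preset_height: float, object_height: float) -> float:
    """Step S6: the gripper hovers at the preset height H above the workbench
    and must descend d = H - h to stop at the top of an object of height h,
    rather than descending all the way to the table surface."""
    if not 0.0 <= object_height <= preset_height:
        raise ValueError("object height must lie between 0 and the preset height")
    return preset_height - object_height
```

For example, with a preset height of 30 cm and an object 12 cm tall, the gripper should descend 18 cm before the grabbing instruction of S8 is issued.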
In an alternative embodiment of the present application, step S5 includes:
s51, after the indicating light spots are projected on the workbench, acquiring a central current picture shot by the central camera;
s52, identifying a target object in the central current picture, and determining a target center point of the target object in the central current picture;
s53, identifying the indication light spots in the central current picture, sequentially connecting the indication light spots through straight lines to form a grabbing range profile, and determining a profile center point of the grabbing range profile;
S54, sending a second motion instruction to the motion driving assembly to drive the mechanical gripper to move towards the target object on a plane at a preset height from the workbench until the profile center point coincides with the target center point.
It can be understood that, after the indication light spots are projected onto the workbench, a current picture containing the target object and each indication light spot is obtained from the central camera; the target object and the indication light spots are identified in that picture; the indication light spots are connected in sequence to form the grabbing range profile, and its profile center point is determined. The relative position between the mechanical arm and the target object can then be calibrated rapidly and accurately from the relation between the profile center point and the target center point of the target object. When the two points coincide, the mechanical gripper is judged to be directly above the target object, which is the most suitable position for grabbing; the motion driving assembly is therefore controlled to drive the mechanical gripper towards the target object until the profile center point coincides with the target center point.
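The center-alignment logic of S51 to S54 can be sketched as below. The shoelace-formula centroid and the helper names are illustrative assumptions; the patent does not specify how the center points are computed.

```python
def polygon_centroid(points):
    """Center point of the grabbing range profile formed by joining the
    indication light spots in order (S53). The shoelace formula gives the
    true area centroid rather than just the mean of the vertices."""
    n = len(points)
    a = cx = cy = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # signed twice-area of the edge triangle
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

def horizontal_offset(profile_center, target_center):
    """Remaining horizontal move for S54: (0, 0) once the profile center
    point coincides with the target center point."""
    return (target_center[0] - profile_center[0],
            target_center[1] - profile_center[1])
```

With four spots at the corners of a square, the profile center is the square's middle, and the offset to an object detected at that same point is zero.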
In an alternative embodiment of the present application, step S4 includes:
s41, taking the diameter of the indication light spot in the first picture as a first diameter, taking the diameter of the indication light spot in the second picture as a second diameter, and calculating the diameter ratio of the first diameter to the second diameter;
S42, calculating the height of the target object according to the following formula:
h = H × (1 - k / cos θ),
wherein h represents the height of the target object, H represents the preset height of the mechanical gripper from the workbench, α represents the divergence angle of the light beam emerging from the optical component (it enters the spot diameters r1 = 2(H - h)·tan(α/2) and r2 = 2H·tan(α/2)/cos θ and cancels in their ratio), θ represents the angle between the grabbing finger and the direction perpendicular to the workbench, and k = r1/r2 represents the diameter ratio.
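A minimal Python sketch of the S42 height computation. The closed form below is derived from the spot geometry described in steps S2 and S3 (spot on top of the object with the finger vertical, then on the workbench with the finger tilted by θ); it is a reconstruction from that geometry, not quoted verbatim from the patent, and the beam divergence angle cancels in the diameter ratio.

```python
import math

def object_height(preset_height: float, tilt_angle_rad: float,
                  diameter_ratio: float) -> float:
    """Recover the target object's height from the spot-size change (S4).

    Assumed spot model (a reconstruction):
      first picture  (finger vertical, spot on top of the object):
          r1 = 2 * (H - h) * tan(alpha / 2)
      second picture (finger tilted by theta, spot on the workbench):
          r2 = 2 * H * tan(alpha / 2) / cos(theta)
    The divergence angle alpha cancels in k = r1 / r2, which leaves
          h = H * (1 - k / cos(theta))
    """
    return preset_height * (1.0 - diameter_ratio / math.cos(tilt_angle_rad))
```

For instance, with H = 30, θ = 0.3 rad and a measured ratio consistent with a 12-unit-tall object, the function recovers h = 12.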
In an alternative embodiment of the present application, the method further comprises: acquiring the current height from the workbench detected by the distance detector. The step of driving the mechanical gripper to move towards the target object on a plane at a preset height from the workbench then comprises: sending a first motion instruction to the motion driving assembly to drive the mechanical gripper to move towards the target object on the plane where the current height equals the preset height.
In an optional embodiment of the present application, in a case where the current height is equal to the preset height, an area of the indication light spot in the current picture is greater than or equal to an area threshold.
It will be appreciated that, when the profile center point coincides with the target center point, it can be determined that the mechanical gripper is directly above the target object, but this alone does not determine the gripper's remaining travel in the longitudinal direction. The distance between the mechanical gripper and the target object in the longitudinal direction can therefore be judged from the spot area of the indication light spot. The area threshold may be set by a person skilled in the art according to circumstances; its purpose is to serve as the criterion for whether the mechanical gripper has moved, in the longitudinal direction, to the position at the preset height from the workbench.
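A minimal sketch of the longitudinal check described above, assuming a circular indication spot measured in image pixels; the threshold value itself is left to the practitioner, as the text notes.

```python
import math

def at_preset_height(spot_diameter_px: float, area_threshold_px2: float) -> bool:
    """Judge whether the gripper has reached the preset height above the
    workbench: the indication spot's image area must meet the threshold."""
    spot_area = math.pi * spot_diameter_px ** 2 / 4.0  # area of a circular spot
    return spot_area >= area_threshold_px2
```

A 10-pixel-diameter spot has an area of about 78.54 square pixels, so it passes a threshold of 78.5 but fails one of 80.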
In a third aspect, the present application provides a device comprising an interconnected processor and memory; the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of the second aspect.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program comprising program instructions which when executed by a processor implement the steps of any of the methods of the second aspect.
The beneficial effects are that: the application discloses a multi-view multi-image fusion device, wherein each grabbing finger of a mechanical grabbing hand is fixedly provided with an optical component, the optical component comprises an illumination optical fiber for projecting indication light spots to a workbench and a finger dividing camera for shooting the workbench, and a current picture shot by the finger dividing camera comprises a target object and all indication light spots. The current picture when the indication light spot is combined with the target object is taken as a first picture, the current picture when the indication light spot is not combined with the target object is taken as a second picture, the control device compares the size changes of the indication light spot in the first picture and the second picture, and the height of the target object can be calculated, so that the longitudinal moving distance when the mechanical gripper grabs the target object is accurately adjusted, and the mechanical gripper is prevented from being worn.
The application discloses a multi-view multi-image fusion method, which is used for the control device of the multi-view multi-image fusion device, and mainly comprises the following steps: after the indicating light spots are projected on the workbench, acquiring a current picture which is shot by the finger dividing camera and comprises a target object and all the indicating light spots; the angle of the grabbing finger is adjusted through the grabbing driving piece, a current picture when the grabbing finger is perpendicular to the workbench and indicates that the light spot is in weight with the target object is used as a first picture, and a current picture when the grabbing finger is no longer perpendicular to the workbench and indicates that the light spot is no longer in weight with the target object is used as a second picture; the height of the target object can be calculated by comparing the size changes of the indication light spots in the first picture and the second picture, so that the longitudinal moving distance of the mechanical gripper when grabbing the target object can be accurately adjusted, and the mechanical gripper is prevented from being worn.
In order to make the above objects, features and advantages of the present application more comprehensible, alternative embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be regarded as limiting its scope; other related drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a multi-view multi-image fusion apparatus according to the present application;
FIG. 2 is a schematic diagram of the optical assembly shown in FIG. 1;
FIG. 3 is a schematic diagram of a portion of a first picture according to the present application;
FIG. 4 is a schematic view of the mechanical gripper of FIG. 1 rotated so that the grabbing fingers point away from the target object;
FIG. 5 is a schematic diagram of a portion of a second picture according to the present application;
fig. 6 is a schematic diagram of the calculation principle of the height of the object disclosed in the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In a first aspect, as shown in fig. 1, the present application provides a multi-view multi-image fusion apparatus, comprising: the mechanical gripper 1, which comprises a grabbing driving piece 10, a first grabbing finger 11 and a second grabbing finger 12, wherein the grabbing driving piece 10 is used for driving the first grabbing finger 11 and the second grabbing finger 12 to realize grabbing actions; a motion driving assembly (not shown) for driving the mechanical gripper to move in three-dimensional space; the distance detector 2, which is arranged on the mechanical gripper 1 between the first grabbing finger 11 and the second grabbing finger 12; a first optical assembly 31 and a second optical assembly 32, the first optical assembly 31 being mounted on the first grabbing finger 11 and the second optical assembly 32 on the second grabbing finger 12, wherein the first optical assembly 31 fixes a first illumination optical fiber 41 towards the extending direction of the first grabbing finger 11, the second optical assembly 32 fixes a second illumination optical fiber 42 towards the extending direction of the second grabbing finger 12, the first optical assembly 31 is further provided with a first finger-dividing camera 51, and the second optical assembly 32 is further provided with a second finger-dividing camera 52; a laser (not shown in the figure), the output end of which is connected with the head end of each illumination optical fiber; and a control device (not shown), which is electrically connected with the laser, each finger-dividing camera, the motion driving assembly and the grabbing driving piece 10, respectively.
In the embodiment of the present application, fig. 1 shows the case where the mechanical gripper is a two-finger gripper; in practice the number of grabbing fingers may also be any positive integer greater than 2, for example a three-finger gripper or a four-finger gripper. Correspondingly, the numbers of optical assemblies and illumination optical fibers equal the number of grabbing fingers, and one optical assembly with one illumination optical fiber is fixed on each grabbing finger by its fixing piece.
It can be appreciated that the application discloses a multi-view multi-image fusion device in which an optical assembly is fixed on each grabbing finger of the mechanical gripper. Each optical assembly comprises an illumination optical fiber for projecting an indication light spot onto the workbench and a finger-dividing camera for shooting the workbench, so that the current picture shot by the finger-dividing camera contains the target object and every indication light spot. The current picture in which the indication light spot coincides with the target object is taken as a first picture, and the current picture in which the indication light spot no longer coincides with the target object is taken as a second picture. By comparing the change in the size of the indication light spot between the two pictures, the control device can calculate the height of the target object, so that the longitudinal travel of the mechanical gripper when grabbing the target object is adjusted accurately and wear of the mechanical gripper is avoided.
In an alternative embodiment of the present application, a central camera (not shown in the figure) is further disposed on the mechanical gripper, and is disposed between at least two gripping fingers, and the central camera is electrically connected with the control device.
It can be understood that the central camera arranged between the grabbing fingers is used for shooting the workbench, and the current picture shot by the central camera comprises a target object and each indication light spot; the control device can rapidly and accurately calibrate the relative position between the mechanical arm and the target object by analyzing the current picture, so as to adjust the mechanical gripper to move towards the target object in the horizontal direction.
In an alternative embodiment of the present application, as shown in fig. 2, taking the first optical assembly 31 as an example, the optical assembly comprises an optical barrel 311, the illumination optical fiber 41, a fixing piece 312 and a beam expanding optical element 313; the tail end of the illumination optical fiber is inserted into the optical barrel 311 and fixed to the optical barrel 311 by the fixing piece 312, and the beam expanding optical element 313 is arranged on the output light path of the illumination optical fiber and is used for expanding the outgoing beam of the illumination optical fiber under the control of the control device.
It can be understood that a beam expanding optical element is arranged on the output light path of the illumination optical fiber. It may be a lens assembly formed of several lenses and is used to expand the outgoing beam of the illumination optical fiber so as to enlarge the area of the indication light spot projected on the workbench. The beam expanding optical element may be electrically controlled: it expands the outgoing beam of the illumination optical fiber under the control of the control device, and simply transmits the beam unchanged when no control command from the control device is received.
In an alternative embodiment of the present application, taking the first optical component 31 as an example, the finger-dividing camera 51 is disposed on the optical barrel 311, and the finger-dividing camera 51 and the illumination optical fiber 41 are both disposed towards the extending direction of the capturing finger 11.
In a second aspect, the present application discloses a multi-view multi-image fusion method applied to the control device of the multi-view multi-image fusion device according to any one of the first aspects, the method comprising the steps of:
s1, sending a starting instruction to the laser, and obtaining a current picture shot by the finger-dividing camera after the laser projects an indication light spot to the workbench through the illumination optical fiber.
It can be understood that after the indicating light spots are projected on the workbench, the current picture shot by the finger-dividing camera comprises the target object and each indicating light spot.
S2, sending a first instruction to the grabbing driving piece to drive the mechanical gripper to adjust the grabbing finger angle so that the grabbing fingers are perpendicular to the workbench; sending a first motion instruction to the motion driving assembly to drive the mechanical gripper to move towards the target object on a plane at a preset height from the workbench; and taking the current picture as the first picture when, in the current picture, the indication light spot overlaps the target object and is tangent to the outline of the target object.
As shown in fig. 3, the indication spot 201 is inscribed in the outline of the target object 200 in the current picture; that is, the indication spot 201 coincides with the target object 200 and touches its edge, and r1 denotes the current diameter of the indication spot 201.
S3, sending a second instruction to the grabbing driving piece to drive the mechanical gripper to adjust the grabbing finger angle so that the grabbing fingers rotate towards the side away from the target object, and taking the current picture as the second picture when, in the current picture, the indication light spot no longer overlaps the target object and is tangent to the outline of the target object.
As shown in fig. 4, the mechanical gripper is driven to adjust the angle of the grabbing fingers so that they rotate by an angle θ towards the side away from the target object; the grabbing fingers are then no longer perpendicular to the workbench.
As shown in fig. 5, the indication spot 201 is circumscribed about the outline of the target object 200 in the current picture; that is, the indication spot 201 no longer coincides with the target object 200 but still touches its edge, and r2 denotes the current diameter of the indication spot 201.
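The inscribed and circumscribed conditions illustrated in figs. 3 and 5 can be checked as follows for the simplified case of a circular object outline; the circular approximation and the tolerance parameter are illustrative assumptions, since the patent does not restrict the object's shape.

```python
def spot_object_tangency(spot_center, spot_radius,
                         object_center, object_radius, tol=1e-6):
    """Classify the spot/object relation for a circular object outline:
    'inscribed'     - spot inside the outline, touching it (first picture, S2)
    'circumscribed' - spot outside the outline, touching it (second picture, S3)
    'none'          - neither tangency holds."""
    dx = spot_center[0] - object_center[0]
    dy = spot_center[1] - object_center[1]
    d = (dx * dx + dy * dy) ** 0.5  # center-to-center distance
    if abs(d - (object_radius - spot_radius)) <= tol:
        return "inscribed"
    if abs(d - (object_radius + spot_radius)) <= tol:
        return "circumscribed"
    return "none"
```

A unit-radius spot centered 3 units from a radius-4 object is inscribed; moved to 5 units away it becomes circumscribed, matching the transition from the first picture to the second.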
S4, calculating the height of the target object according to the size change of the indicated light spots in the first picture and the second picture.
The application discloses a multi-view multi-image fusion method, used by the control device of a multi-view multi-image fusion device according to any one of the above, which mainly comprises the following steps: after the indication light spots are projected onto the workbench, acquiring a current picture, shot by the finger-dividing camera, that includes the target object and all the indication light spots; adjusting the grabbing finger angle through the grabbing driving piece, taking the current picture in which the grabbing finger is perpendicular to the workbench and the indication light spot coincides with the target object as a first picture, and taking the current picture in which the grabbing finger is no longer perpendicular to the workbench and the indication light spot no longer coincides with the target object as a second picture; the height of the target object can then be calculated by comparing the size change of the indication light spot between the first picture and the second picture, so that the longitudinal moving distance of the mechanical gripper when grabbing the target object can be accurately adjusted and wear of the mechanical gripper avoided.
The labels S1, S2, etc. are only step identifiers; the steps are not necessarily executed in ascending order. For example, step S2 may be performed before step S1, which is not limited by the present application.
In an alternative embodiment of the application, the method further comprises:
S5, sending a second motion instruction to the motion driving assembly, and driving the mechanical gripper to move towards the target object on a plane with a preset height from the workbench until the mechanical gripper is located right above the target object.
S6, calculating the target descent distance according to the following formula: d = H − h/2, where d represents the target descent distance, H represents the preset height, and h represents the height of the target object.
And S7, sending a descending instruction to the motion driving assembly, and driving the mechanical gripper to move toward the target object by the target descent distance in the direction perpendicular to the workbench.
S8, after the mechanical gripper has moved by the target descent distance, sending a grabbing instruction to the grabbing driving piece to drive the mechanical gripper to grab the target object.
It can be understood that the target descent distance is calculated from the height of the target object, and the mechanical gripper descends by exactly this distance, so that friction with the workbench caused by an excessive descent is avoided, the target object can be grabbed in its middle area, and balance can be maintained during grabbing.
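A minimal sketch of the descent computation in S6 through S8; the closed form used here, d = H − h/2, is an assumption (it stops the fingers level with the middle of the object, consistent with the stated aim of grabbing the middle area):

```python
def target_descent_distance(preset_height: float, object_height: float) -> float:
    """Target descent distance for the mechanical gripper.

    Assumption: d = H - h/2, i.e. descend from the preset height H until the
    gripping fingers are level with the middle of an object of height h --
    far enough to grab the middle area, not so far as to rub the workbench.
    """
    return preset_height - object_height / 2.0
```

For example, with a preset height of 100 mm and a 40 mm tall object, the gripper would descend 80 mm before the grabbing instruction is sent.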
In an alternative embodiment of the present application, step S5 includes:
S51, after the indication light spot is projected on the workbench, a central current picture shot by the central camera is acquired.
S52, identifying a target object in the central current picture, and determining a target center point of the target object in the central current picture.
S53, identifying indication light spots in the central current picture, sequentially connecting the indication light spots through straight lines to form a grabbing range profile, and determining a profile center point of the grabbing range profile.
S54, sending a second motion instruction to the motion driving assembly, and driving the mechanical gripper to move towards the target object on a plane with a preset height from the workbench, so that the contour center point coincides with the target center point.
It can be understood that after the indication light spots are projected onto the workbench, a current picture, shot by the central camera, that includes the target object and each indication light spot is acquired; the target object and the indication light spots in the current picture are identified; the indication light spots are connected to form a grabbing range profile, and the contour center point of the grabbing range profile is determined; the relative position between the mechanical gripper and the target object is then rapidly and accurately calibrated through the positional relation between the contour center point and the target center point of the target object. When the contour center point coincides with the target center point, the mechanical gripper is judged to have moved directly above the target object, which is the most suitable position for the grabbing operation; the motion driving assembly is therefore controlled to drive the mechanical gripper to move toward the target object until the contour center point coincides with the target center point.
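The alignment step reduces to computing the in-plane offset between the two center points; a sketch (function names and the pixel tolerance are illustrative assumptions):

```python
def alignment_offset(contour_center, target_center):
    """Offset (dx, dy) the motion driving assembly should apply on the plane
    at the preset height so the contour center point of the grabbing range
    coincides with the target center point of the object."""
    return (target_center[0] - contour_center[0],
            target_center[1] - contour_center[1])

def is_aligned(contour_center, target_center, tol: float = 1.0) -> bool:
    """Coincidence test within a pixel tolerance."""
    dx, dy = alignment_offset(contour_center, target_center)
    return (dx * dx + dy * dy) ** 0.5 <= tol
```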
In an alternative embodiment of the present application, step S52 includes:
S521, identifying a target object in the current picture, and determining the object contour of the target object in the current picture.
S522, calculating an edge distance difference value corresponding to each pixel point in the object outline.
S523, finding out the pixel point with the smallest edge distance difference value in the object outline as the target center point of the target object.
It can be understood that the pixel point with the smallest edge distance difference in the object contour is the target center point of the target object: the difference between its distances to the edge of the object contour along opposite directions is the smallest. When force is applied uniformly inward along the edge of the object contour, the force at this center point is the most balanced.
In an alternative embodiment of the present application, step S522 includes:
S5221, taking each pixel point in the object contour as the target pixel point one by one.
S5222, generating at least one group of direction line groups by taking the target pixel point as an intersection point, wherein the direction line groups comprise two direction lines which are perpendicular to each other.
S5223, two intersection points of the direction line and the object contour are respectively taken as a first intersection point and a second intersection point, a distance between the first intersection point and the target pixel point is taken as a first distance value, and a distance between the second intersection point and the target pixel point is taken as a second distance value.
S5224, calculating an average value of the first distance values in each group of direction line groups corresponding to the target pixel point to be used as a first average distance value, and calculating an average value of the second distance values in each group of direction line groups corresponding to the target pixel point to be used as a second average distance value.
And S5225, taking the absolute value of the difference value between the first average distance value and the second average distance value as the edge distance difference value corresponding to the target pixel point.
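Steps S5221 through S5225 can be sketched on a binary object mask as follows. Two simplifications are assumed: a single direction-line group (horizontal and vertical) rather than "at least one group", and a per-line absolute difference, a slightly more robust variant of averaging first and second distances before taking the difference:

```python
import numpy as np

def ray_distance(mask: np.ndarray, y: int, x: int, dy: int, dx: int) -> int:
    """March from (y, x) along (dy, dx) until the ray leaves the object mask;
    the distance travelled approximates the pixel-to-contour distance."""
    h, w = mask.shape
    d = 0
    while 0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy, x + dx]:
        y, x, d = y + dy, x + dx, d + 1
    return d

def target_center_point(mask: np.ndarray):
    """Pixel inside the object contour where the distances to the contour
    along opposite directions differ the least (edge distance difference)."""
    best, best_diff = None, float("inf")
    # one group of two mutually perpendicular direction lines:
    # horizontal (first: +x, second: -x) and vertical (first: +y, second: -y)
    pairs = [((0, 1), (0, -1)), ((1, 0), (-1, 0))]
    for y, x in zip(*np.nonzero(mask)):
        diff = sum(abs(ray_distance(mask, y, x, *a) - ray_distance(mask, y, x, *b))
                   for a, b in pairs) / len(pairs)
        if diff < best_diff:
            best, best_diff = (int(y), int(x)), diff
    return best
```

For a solid rectangular mask the minimizer is the geometric center, where the opposite-direction distances balance exactly.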
In an alternative embodiment of the present application, step S53 includes:
S531, identifying the indication light spots in the current picture, and determining the geometric center point of each indication light spot as its spot center point.
S532, under the condition that the number of the indication light spots in the current picture is 2, the two indication light spots are connected through a straight line to form a grabbing line segment.
S533, taking the geometric center point of the grabbing line segment as the contour center point.
In an alternative embodiment of the present application, step S53 includes:
S534, identifying the indication light spots in the current picture, and determining the geometric center point of each indication light spot as its spot center point.
And S535, under the condition that the number of the indication light spots in the current picture is 3, connecting the spot center points of the indication light spots in sequence through straight lines to form a grabbing triangle profile.
S536, taking the geometric gravity center point of the grabbing triangle profile as the profile center point.
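Both spot-count cases reduce to the arithmetic mean of the spot center coordinates: the midpoint of a two-spot segment and the centroid (geometric gravity center) of a three-spot triangle. A sketch:

```python
def contour_center_point(spot_centers):
    """Contour center of the grabbing range: the midpoint of a 2-spot line
    segment or the centroid of a 3-spot triangle -- both equal the
    arithmetic mean of the spot center coordinates."""
    n = len(spot_centers)
    if n not in (2, 3):
        raise ValueError("expected 2 or 3 indication spots")
    sx = sum(p[0] for p in spot_centers)
    sy = sum(p[1] for p in spot_centers)
    return (sx / n, sy / n)
```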
In an alternative embodiment of the present application, step S4 includes:
S41, taking the diameter of the indication light spot in the first picture as a first diameter, taking the diameter of the indication light spot in the second picture as a second diameter, and calculating the diameter ratio of the first diameter to the second diameter;
S42, calculating the height of the target object according to the following formula: h = H × (1 − k / cos θ), where h represents the height of the target object, H represents the preset height of the mechanical gripper from the workbench, φ represents the divergence angle of the beam emerging from the optical component, θ represents the angle between the grabbing finger and the direction perpendicular to the workbench, and k represents the diameter ratio.
As shown in fig. 6, r1' and r2' represent the actual diameters of the indication spots in the first picture and the second picture respectively, and H represents the preset height.
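A sketch of the height computation. The closed form used here, h = H × (1 − k / cos θ), is an assumed reading of the geometry: with the finger vertical the beam travels H − h to the object top, and tilted by θ it travels roughly H / cos θ to the workbench, so the divergence angle cancels in the ratio k = r1'/r2':

```python
import math

def target_object_height(preset_height: float, diameter_ratio: float,
                         theta_rad: float) -> float:
    """Height of the target object from the spot-diameter ratio k = r1/r2.

    Assumed relation: k = (H - h) * cos(theta) / H, hence
    h = H * (1 - k / cos(theta)).
    """
    return preset_height * (1.0 - diameter_ratio / math.cos(theta_rad))
```

For instance, target_object_height(100.0, 0.8, 0.0) evaluates to 20.0.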
In an alternative embodiment of the application, the method further comprises: acquiring the current height from the workbench detected by the distance detector; wherein sending a first motion instruction to the motion driving assembly to drive the mechanical gripper to move toward the target object on a plane at a preset height from the workbench comprises: sending a first motion instruction to the motion driving assembly, and driving the mechanical gripper to move toward the target object on the plane where the current height is equal to the preset height.
In an alternative embodiment of the present application, in the case where the current height is equal to the preset height, the area of the indication spot in the current picture is greater than or equal to the area threshold.
It will be appreciated that when the contour center point coincides with the target center point, it can be determined that the mechanical gripper has moved directly above the target object, but the movement distance of the mechanical gripper in the longitudinal direction cannot yet be determined. Therefore, the current longitudinal distance between the mechanical gripper and the target object can be judged from the spot area of the indication light spot. The above area threshold may be set by a person skilled in the art according to circumstances; it serves as a criterion for determining whether the mechanical gripper has moved in the longitudinal direction to a position at the preset height from the workbench.
In a third aspect, the present application provides a terminal device comprising one or more processors and a memory. The processor and the memory are connected through a bus. The memory is configured to store a computer program comprising program instructions, and the processor is configured to execute the program instructions stored in the memory. The processor is configured to invoke the program instructions to perform the operations of any of the methods of the second aspect.
It should be appreciated that in embodiments of the present application, the processor may be a central processing unit (Central Processing Unit, CPU), or another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may include read only memory and random access memory and provide instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type.
In a fourth aspect, the present invention provides a computer readable storage medium storing a computer program comprising program instructions which when executed by a processor implement the steps of any of the methods of the second aspect.
The computer readable storage medium may be an internal storage unit of the terminal device of any of the foregoing embodiments, for example, a hard disk or a memory of the terminal device. The computer readable storage medium may be an external storage device of the terminal device, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, which are provided in the terminal device. Further, the computer-readable storage medium may further include both an internal storage unit and an external storage device of the terminal device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal device. The above-described computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In several embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention is essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method in the various embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The terms "first," "second," "the first," or "the second," as used in various embodiments of the present disclosure, may modify various components regardless of order and/or importance, but these terms do not limit the corresponding components. These terms are used only to distinguish one element from another. For example, a first user device and a second user device represent different user devices, although both are user devices. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being "coupled" (operatively or communicatively) to "another element (e.g., a second element) or" connected "to another element (e.g., a second element), it is understood that the one element is directly connected to the other element or the one element is indirectly connected to the other element via yet another element (e.g., a third element). In contrast, it will be understood that when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (a second element), then no element (e.g., a third element) is interposed therebetween.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the element defined by the phrase "comprising one … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element, and furthermore, elements having the same name in different embodiments of the application may have the same meaning or may have different meanings, the particular meaning of which is to be determined by its interpretation in this particular embodiment or by further combining the context of this particular embodiment.
The above description is only of alternative embodiments of the application and of illustrations of the technical principles applied. It will be appreciated by persons skilled in the art that the scope of the application referred to in the present application is not limited to the specific combinations of the technical features described above, but also covers other technical features formed by any combination of the technical features described above or their equivalents without departing from the inventive concept described above. Such as the above-mentioned features and the technical features disclosed in the present application (but not limited to) having similar functions are replaced with each other.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to a determination" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
The above description is only of alternative embodiments of the present application and is not intended to limit the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (5)

1. A multi-view multi-image fusion method is applied to a control device of a multi-view multi-image fusion device, and the multi-view multi-image fusion device comprises:
the mechanical gripper comprises a gripping driving piece and at least two gripping fingers, wherein the gripping driving piece is used for driving the at least two gripping fingers to realize gripping actions;
the motion driving assembly is used for driving the mechanical gripper to move in a three-dimensional space;
the distance detector is arranged on the mechanical gripper and between the at least two gripping fingers;
the optical components are arranged on the corresponding grabbing fingers, the optical components fix the illumination optical fibers towards the extending direction of the grabbing fingers, and the optical components are also provided with finger dividing cameras;
the output end of the laser is connected with the head end of the illumination optical fiber;
the control device is respectively and electrically connected with the laser, the finger dividing camera, the motion driving assembly and the grabbing driving piece;
the mechanical gripper is also provided with a central camera which is arranged between the at least two gripping fingers, and the central camera is electrically connected with the control device;
The optical component comprises an optical cylinder, the illumination optical fiber, a fixing piece and a beam expanding optical element;
the tail end of the illumination optical fiber is inserted into the light cylinder and is fixed on the light cylinder through the fixing piece, the beam expanding optical element is further arranged on the output light path of the illumination optical fiber and is used for expanding the outgoing beam of the illumination optical fiber under the control of the control device;
the finger dividing camera is arranged on the light cylinder, and the finger dividing camera and the illumination optical fiber are both arranged towards the extending direction of the grabbing finger;
characterized by comprising the following steps:
sending a starting instruction to the laser so that the laser projects an indication light spot onto a workbench through the illumination optical fiber, and then acquiring a current picture shot by the finger-dividing camera;
sending a first instruction to the grabbing driving piece, driving the mechanical gripper to adjust the grabbing finger angle, enabling the grabbing finger to be perpendicular to a workbench, then sending a first motion instruction to the motion driving assembly, driving the mechanical gripper to move towards a target object on a plane with a preset height away from the workbench, and taking the current picture as a first picture under the condition that the indication light spot coincides with the target object and the indication light spot is tangential to the outline of the target object in the current picture;
sending a second instruction to the grabbing driving piece, driving the mechanical gripper to adjust the grabbing finger angle so that the grabbing finger rotates toward the side away from the target object, and taking the current picture as a second picture under the condition that the indication light spot no longer overlaps the target object and the indication light spot is tangential to the outline of the target object in the current picture;
calculating the height of the target object according to the size change of the indication light spots in the first picture and the second picture;
the calculating the height of the target object according to the size change of the indication light spot in the first picture and the second picture includes:
taking the diameter of the indication light spot in the first picture as a first diameter, taking the diameter of the indication light spot in the second picture as a second diameter, and calculating the diameter ratio of the first diameter to the second diameter;
calculating the height of the target object according to the following formula: h = H × (1 − k / cos θ), wherein h represents the height of the target object, H represents the preset height of the mechanical gripper from the workbench, φ represents the divergence angle of the light beam emerging from said optical component, θ represents the angle between the gripping finger and the direction perpendicular to the workbench, and k represents the diameter ratio.
2. The multi-view multi-image fusion method of claim 1, wherein,
the method further comprises the steps of:
sending a second motion instruction to the motion driving assembly, and driving the mechanical gripper to move towards the target object on a plane with a preset height from the workbench until the mechanical gripper is positioned right above the target object;
the target descent distance is calculated according to the following formula: d = H − h/2, wherein d represents the target descent distance, H represents said preset height, and h represents the height of the target object;
sending a descending instruction to the motion driving assembly, and driving the mechanical gripper to move the target object by the target descending distance in the direction perpendicular to the workbench;
after the mechanical gripper moves the target descending distance, a grabbing instruction is sent to the grabbing driving piece, and the mechanical gripper is driven to grab the target object.
3. The multi-view multi-image fusion method of claim 2, wherein,
the step of sending a second motion instruction to the motion driving assembly, driving the mechanical gripper to move towards the target object on a plane with a preset height from the workbench until the mechanical gripper is positioned right above the target object, comprises the following steps:
after the indication light spots are projected onto the workbench, acquiring a central current picture shot by the central camera;
identifying a target object in the central current picture, and determining a target center point of the target object in the central current picture;
identifying the indication light spots in the central current picture, sequentially connecting the indication light spots through straight lines to form a grabbing range profile, and determining a profile center point of the grabbing range profile;
and sending a second motion instruction to the motion driving assembly, and driving the mechanical gripper to move towards the target object on a plane with a preset height from the workbench, so that the contour center point coincides with the target center point.
4. The multi-view multi-image fusion method of claim 1, further comprising: acquiring the current height from the workbench detected by the distance detector;
the step of sending a first motion instruction to the motion driving assembly to drive the mechanical gripper to move to the target object on a plane with a preset height from the workbench comprises the following steps:
and sending a first motion instruction to the motion driving assembly, and driving the mechanical gripper to move towards the target object on the plane with the current height equal to the preset height.
5. The multi-view multi-image fusion method of claim 4, wherein,
and under the condition that the current height is equal to the preset height, the area of the indication light spot in the current picture is larger than or equal to an area threshold value.
CN202310651022.3A 2023-06-05 2023-06-05 Multi-view multi-image fusion method and device Active CN116385437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310651022.3A CN116385437B (en) 2023-06-05 2023-06-05 Multi-view multi-image fusion method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310651022.3A CN116385437B (en) 2023-06-05 2023-06-05 Multi-view multi-image fusion method and device

Publications (2)

Publication Number Publication Date
CN116385437A CN116385437A (en) 2023-07-04
CN116385437B true CN116385437B (en) 2023-08-25

Family

ID=86971519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310651022.3A Active CN116385437B (en) 2023-06-05 2023-06-05 Multi-view multi-image fusion method and device

Country Status (1)

Country Link
CN (1) CN116385437B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018076776A1 (en) * 2016-10-25 2018-05-03 深圳光启合众科技有限公司 Robot, robotic arm and control method and device thereof
CN109269421A (en) * 2018-09-14 2019-01-25 李刚 Omnipotent shooting measuring scale
CN112192603A (en) * 2020-09-30 2021-01-08 中石化四机石油机械有限公司 Minor repair platform oil pipe pushing and supporting manipulator device and using method
CN113093356A (en) * 2021-03-18 2021-07-09 北京空间机电研究所 Large-scale block optical component assembling method based on mechanical arm
CN114046768A (en) * 2021-11-10 2022-02-15 重庆紫光华山智安科技有限公司 Laser ranging method and device, laser ranging equipment and storage medium
CN114526680A (en) * 2022-01-27 2022-05-24 太原理工大学 Thin ice thickness measuring device and method based on reflected light spot image recognition
CN114693590A (en) * 2020-12-29 2022-07-01 深圳市光鉴科技有限公司 Distance detection method, system, equipment and storage medium based on light spot image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI127908B (en) * 2015-09-22 2019-05-15 Teknologian Tutkimuskeskus Vtt Oy Method and apparatus for measuring the height of a surface

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018076776A1 (en) * 2016-10-25 2018-05-03 深圳光启合众科技有限公司 Robot, robotic arm and control method and device thereof
CN109269421A (en) * 2018-09-14 2019-01-25 李刚 Omnipotent shooting measuring scale
CN112192603A (en) * 2020-09-30 2021-01-08 中石化四机石油机械有限公司 Minor repair platform oil pipe pushing and supporting manipulator device and using method
CN114693590A (en) * 2020-12-29 2022-07-01 深圳市光鉴科技有限公司 Distance detection method, system, equipment and storage medium based on light spot image
CN113093356A (en) * 2021-03-18 2021-07-09 北京空间机电研究所 Large-scale block optical component assembling method based on mechanical arm
CN114046768A (en) * 2021-11-10 2022-02-15 重庆紫光华山智安科技有限公司 Laser ranging method and device, laser ranging equipment and storage medium
CN114526680A (en) * 2022-01-27 2022-05-24 太原理工大学 Thin ice thickness measuring device and method based on reflected light spot image recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Monocular camera accurate ranging technology and implementation method; Fan Lu; Equipment Manufacturing Technology (03); full text *

Also Published As

Publication number Publication date
CN116385437A (en) 2023-07-04

Similar Documents

Publication Publication Date Title
US10055805B2 (en) Device for measuring positions and postures of plurality of articles, and robot system having the device
US20190193947A1 (en) Article transfer apparatus, robot system, and article transfer method
WO2019114339A1 (en) Method and device for correcting motion of robotic arm
CN113146172B (en) Multi-vision-based detection and assembly system and method
JP6879238B2 (en) Work picking device and work picking method
JP2015213973A (en) Picking device and picking method
JP2011007632A (en) Information processing apparatus, information processing method and program
JPH0798208A (en) Method and system for recognizing three-dimensional position and attitude on the basis of sense of sight
CN112276936A (en) Three-dimensional data generation device and robot control system
EA038279B1 (en) Method and system for grasping an object by means of a robotic device
Kirschner et al. YuMi, come and play with Me! A collaborative robot for piecing together a tangram puzzle
CN112836558A (en) Mechanical arm tail end adjusting method, device, system, equipment and medium
CN113269835A (en) Industrial part pose identification method and device based on contour features and electronic equipment
US20200398420A1 (en) Robot teaching device and robot system
JP6838833B2 (en) Gripping device, gripping method, and program
JPH0798214A (en) Method and device for three dimensional position and attitude recognition method based on sense of sight
CN116385437B (en) Multi-view multi-image fusion method and device
CN112947458B (en) Robot accurate grabbing method based on multi-mode information and computer readable medium
CN114092428A (en) Image data processing method, image data processing device, electronic equipment and storage medium
CN111389750B (en) Vision measurement system and measurement method
Martinez et al. Automated 3D vision guided bin picking process for randomly located industrial parts
CN116766183B (en) Mechanical arm control method and device based on visual image
JP2015007639A (en) Information processing apparatus, information processing method and program
US20230123629A1 (en) 3d computer-vision system with variable spatial resolution
WO2023082417A1 (en) Grabbing point information obtaining method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant