WO2022222515A1 - Robot vision-based method and apparatus for gluing the surface of an article, device and medium - Google Patents

Robot vision-based method and apparatus for gluing the surface of an article, device and medium

Info

Publication number: WO2022222515A1 (application PCT/CN2021/138582)
Authority: WIPO (PCT)
Prior art keywords: gluing, point cloud, image, information, point
Priority date: 2021-04-20
Filing date: 2021-12-15
Other languages: English (en), Chinese (zh)
Inventors: 李辉, 魏海永, 丁有爽, 邵天兰
Original assignee: 梅卡曼德(北京)机器人科技有限公司
Application filed by 梅卡曼德(北京)机器人科技有限公司
Publication of WO2022222515A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts (manipulating 3D models or images for computer graphics, G06T 19/00)
    • G06T 3/067: Reshaping or unfolding 3D tree structures onto 2D planes (topological mapping of higher dimensional structures onto lower dimensional surfaces, G06T 3/06; geometric image transformations in the plane of the image, G06T 3/00)
    • G06T 7/13: Edge detection (image analysis, G06T 7/00; segmentation and edge detection, G06T 7/10)
    • G06T 2207/10004: Still image; Photographic image (indexing scheme for image analysis or enhancement, G06T 2207/00; image acquisition modality, G06T 2207/10)
    • G06T 2207/10012: Stereo images

Definitions

  • the present application relates to the field of intelligent robots, and more particularly, to a method for gluing an object surface based on robot vision, a device for gluing an object surface based on robot vision, an electronic device and a storage medium.
  • the present invention has been proposed in order to overcome the above-mentioned problems or at least partially solve the above-mentioned problems.
  • one of the innovations of the present invention is that, in order to overcome the problem that incomplete point cloud information of the item prevents the robot gluing trajectory points from being obtained correctly, the applicant proposes a method that obtains the complete 3D point cloud information of the item through matching, replaces the incomplete 3D point cloud information with the complete 3D point cloud information, performs image processing operations at the 2D image level to obtain 2D gluing track points, and then converts them into 3D gluing track points.
  • the point cloud information is thus not used directly to obtain the gluing trajectory points, which solves the above technical problems, so that no matter what kind of point cloud information is obtained, the gluing trajectory points can be obtained based on the outline of the item.
  • the second innovation of the present invention is that the applicant found that existing 3D image matching algorithms need to compute too many pixel points, so their matching efficiency is not high enough, while 2D image matching algorithms have difficulty obtaining accurate pose information of the item. Therefore, based on the characteristics of the robot gluing application scenario, the applicant developed a method for matching items based on 3D contour information. Compared with traditional methods, this method can greatly improve matching efficiency without losing matching accuracy.
  • the present application provides a method for gluing the surface of an article based on robot vision, a device for gluing the surface of an article based on robot vision, an electronic device and a storage medium.
  • the glue is applied based on the mapped 3D glue track points.
  • the 3D point cloud information includes a 3D contour point cloud
  • the acquiring 3D point cloud information of the item includes:
  • the mapping of the 3D point cloud of the item into the 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud to a 2D image.
  • the mapping of the 3D image information to the 2D image information includes: using an orthogonal projection method to map the matched 3D image information to the 2D image information.
  • the generating 2D gluing track points based on the 2D image information includes:
  • the entire contour is traversed at predetermined intervals to generate 2D gluing track points.
  • the start and end points of the 2D trajectory points coincide.
  • the value range of the predetermined interval includes 50mm-100mm.
  • the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
  • 3D point cloud acquisition module used to acquire 3D point cloud information of items
  • a 3D image determination module configured to determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information
  • a 2D image mapping module for mapping the 3D image information to 2D image information
  • a trajectory point generation module for generating 2D gluing trajectory points based on the 2D image information
  • a 3D track point mapping module for mapping the 2D gluing track points to 3D gluing track points
  • Glue module for gluing based on mapped 3D glue track points.
  • the 3D point cloud information includes a 3D contour point cloud
  • the 3D point cloud acquisition module is further configured to:
  • the mapping of the 3D point cloud of the item into the 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud to a 2D image.
  • the 2D image mapping module is specifically configured to: map the matched 3D image information into 2D image information by using an orthogonal projection method.
  • the trajectory point generation module is specifically used for:
  • the trajectory point generation module is further configured to: shrink the contour, traverse the entire contour at predetermined intervals, and generate 2D trajectory points.
  • the start and end points of the 2D trajectory points coincide.
  • the value range of the predetermined interval includes 50mm-100mm.
  • the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
  • An electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, it implements the robot vision-based method for gluing an object surface of any of the above-mentioned embodiments.
  • the computer-readable storage medium of the embodiment of the present application has a computer program stored thereon, and when the computer program is executed by the processor, implements the robot vision-based method for gluing the surface of an article in any of the foregoing embodiments.
  • FIG. 1 is a schematic flowchart of a method for gluing an object surface based on robot vision according to some embodiments of the present application
  • FIG. 2 is a schematic flowchart of a method for obtaining 3D image information of an item based on robot vision according to some embodiments of the present application;
  • FIG. 3 is a schematic structural diagram of a robot vision-based object surface gluing device according to some embodiments of the present application.
  • FIG. 4 is a schematic structural diagram of a device for obtaining 3D image information of an item based on robot vision according to some embodiments of the present application;
  • FIG. 5 is a schematic diagram of the missing point cloud and the complete point cloud of the item according to some embodiments of the present application;
  • FIG. 6 is a schematic diagram of the gluing process of certain embodiments of the present application.
  • FIG. 7 is a schematic diagram of the outline shape of an article to be glued according to some embodiments of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • FIG. 1 shows a method for gluing the surface of an article according to an embodiment of the present invention, including:
  • Step S100: acquiring 3D point cloud information of the item;
  • Step S110: determining the 3D image information of the item based on the 3D point cloud information and preset 3D image template information;
  • Step S120: mapping the 3D image information to 2D image information;
  • Step S130: generating 2D gluing track points based on the 2D image information;
  • Step S140: mapping the 2D gluing track points to 3D gluing track points;
  • Step S150: gluing based on the mapped 3D gluing track points.
  • in step S100, the point cloud information can be obtained through a 3D industrial camera.
  • the 3D industrial camera is generally equipped with two lenses, which capture the group of objects to be grasped from different angles; after processing, a three-dimensional image of the objects can be displayed. The group of objects to be grasped is placed under the vision sensor and photographed with both lenses at the same time, and a general binocular stereo vision algorithm is used to calculate the X, Y and Z coordinate values and the orientation of each point of the object to be glued, which are then converted into the point cloud data of the item group to be grasped.
  • components such as laser detectors, visible light detectors such as LEDs, infrared detectors, and radar detectors can also be used to generate point clouds, and the present invention does not limit the specific implementation.
  • the point cloud can be further processed, for example by point cloud clustering and outlier removal, as sketched below.
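  • As an illustrative sketch only (the patent names no library, and the eps, min_points and std_ratio values below are assumptions), such pre-processing could look as follows using the open-source Open3D library:

```python
import numpy as np
import open3d as o3d

def preprocess_point_cloud(points_xyz: np.ndarray) -> o3d.geometry.PointCloud:
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # Statistical outlier removal: drop points far from their neighbours.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # DBSCAN clustering: keep the largest cluster, assumed to be the item.
    labels = np.asarray(pcd.cluster_dbscan(eps=5.0, min_points=10))  # eps in mm, assumed
    if labels.max() >= 0:
        largest = np.argmax(np.bincount(labels[labels >= 0]))
        pcd = pcd.select_by_index(np.where(labels == largest)[0])
    return pcd
```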
  • the point cloud data acquired in the above manner is three-dimensional data.
  • the acquired three-dimensional point cloud data can be orthographically mapped onto a two-dimensional plane.
  • a depth map corresponding to the orthographic projection can also be generated.
  • a two-dimensional color map corresponding to a three-dimensional object region, and a depth map corresponding to that color map, can be acquired along a depth direction perpendicular to the object.
  • the two-dimensional color map corresponds to an image of a plane area perpendicular to the preset depth direction; each pixel in the depth map corresponds one-to-one with a pixel in the two-dimensional color map, and the value of each depth-map pixel is the depth value of that pixel.
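  • A minimal sketch of such an orthographic projection onto a 2D grid with an accompanying depth map is given below; the pixel_size value is an illustrative assumption, and the color map is omitted for brevity:

```python
import numpy as np

def orthographic_depth_map(points_xyz: np.ndarray, pixel_size: float = 1.0) -> np.ndarray:
    """Project 3D points straight down the depth (Z) axis onto a 2D grid.

    Returns a depth map whose pixels correspond one-to-one with an
    orthographic 2D view of the item. pixel_size (mm per pixel) is assumed.
    """
    xy_min = points_xyz[:, :2].min(axis=0)
    cols, rows = (np.ceil((points_xyz[:, :2].max(axis=0) - xy_min) / pixel_size)
                  .astype(int) + 1)
    depth = np.full((rows, cols), np.nan)
    u = ((points_xyz[:, 0] - xy_min[0]) / pixel_size).astype(int)
    v = ((points_xyz[:, 1] - xy_min[1]) / pixel_size).astype(int)
    depth[v, u] = points_xyz[:, 2]  # overlapping points keep the last-written Z
    return depth
```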
  • the conversion between 3D points and 2D pixels can be performed based on the camera's intrinsic parameters.
  • the camera intrinsics are parameters related only to the camera's internal properties (such as focal length, resolution, pixel size and lens distortion).
  • three-dimensional points in the camera coordinate system can be transformed into the imaging plane coordinate system and, after corrections such as lens distortion compensation, further transformed into two-dimensional pixel points in the image pixel coordinate system; there is therefore a mapping relationship between projection points in the image pixel coordinate system and three-dimensional points in the camera coordinate system.
  • the process of obtaining this relationship is called camera intrinsic calibration, and based on this mapping relationship the conversion between 3D and 2D images can be completed.
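  • The following sketch illustrates this 3D-to-2D and 2D-to-3D conversion with the standard pinhole model (lens distortion correction omitted); K is the intrinsic matrix obtained from calibration:

```python
import numpy as np

def project_3d_to_2d(pts_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Camera-frame 3D points (n, 3) -> pixel coordinates (n, 2)."""
    uvw = (K @ pts_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective division by depth

def backproject_2d_to_3d(pixels: np.ndarray, depths: np.ndarray,
                         K: np.ndarray) -> np.ndarray:
    """Pixels (n, 2) plus per-pixel depth (n,) -> camera-frame 3D points."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (pixels[:, 0] - cx) * depths / fx
    y = (pixels[:, 1] - cy) * depths / fy
    return np.column_stack([x, y, depths])
```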
  • in step S110, when the point cloud information of the item is acquired, the acquired point cloud may be missing parts due to reflection of visible light on the surface of the item or because of the item's material; such a missing point cloud loses a lot of information, and the robot's gluing path cannot be planned correctly based on it.
  • Figure 5 shows a situation where the point cloud is missing: the solid line is the outline formed from the collected point cloud, and the dashed line is the actual outline of the item. It can be clearly seen from Figure 5 that the solid outline formed by the point cloud collected by the camera breaks off at the left edge of the actual outline and forms a non-existent contour inside the object, which is not the actual contour of the object.
  • the contour depicted by the dashed line in Figure 5, in contrast to the solid line, is the actual contour of the item.
  • the pre-stored actual contour only has the contour information of the item and no position information.
  • with the actual contour corresponding to the item, the acquired incomplete contour can be discarded and the pre-stored actual contour placed at the item's position, so that the correct gluing route can be planned based on the complete item contour.
  • since existing 2D image matching algorithms have difficulty obtaining the accurate pose of the item, and 3D image matching algorithms involve many pixels with complex point attributes and thus low matching efficiency, in order to obtain the complete and correct point cloud information of the item faster when the point cloud is missing, the applicant developed a method that matches based on the 3D contour point cloud of the item and obtains the complete 3D image information of the item, including the matched 3D image and the pose information of the item; this is also one of the key points of the present invention.
  • FIG. 2 shows a method for acquiring 3D image information according to an embodiment of the present invention, including:
  • Step S200: obtaining the 3D point cloud of the item;
  • Step S210: mapping the 3D point cloud of the item into a 2D image, and obtaining a 2D outline of the item based on the 2D image;
  • Step S220: mapping the 2D contour to a 3D contour point cloud;
  • Step S230: determining, based on the 3D contour point cloud and preset 3D image template information, the 3D image template information matching the item and the pose information of the item.
  • in step S200, a method similar to step S100 may be used to obtain the 3D point cloud of the item, which will not be repeated here.
  • the 2D image includes a color map and a depth map; each pixel of the 2D depth map corresponds one-to-one with a pixel of the color map, and the depth map also stores depth information for each pixel, which may be the distance between the captured point and the camera used for capturing.
  • the conversion of the 2D image and the 3D image may be performed based on the camera internal parameters. For details, reference may be made to the relevant content in step S100, which will not be repeated here.
  • the image template information corresponding to an item of a given model can be input to the robot in advance according to the model of the item to be glued, and the item models and corresponding templates can be set arbitrarily according to actual needs: models may be designated by shapes such as circles and arcs, or by letter names such as A, B, C; templates can likewise use corresponding designations such as template A, template B, template C. Every type of item should have a corresponding template. Templates can be off-the-shelf, or a 3D image template of the item to be glued can be generated from the item itself.
  • the image template matching the point cloud of the item and the pose information of the item can be determined based on a feature point matching algorithm and a point set registration method such as the iterative closest point algorithm.
  • the feature point matching algorithm extracts key points in the two images, that is, finds pixel points with certain features in each image, and then computes feature descriptors for these feature points from the detected key point positions.
  • a feature descriptor is usually a vector, and the distance between two descriptors reflects their degree of similarity, that is, whether the two feature points are the same; depending on the descriptor, different distance metrics can be chosen.
  • the feature points can then be matched by finding, within the set of feature points, the pairs with the most similar descriptors.
  • the feature point matching algorithm can efficiently and accurately match the same object in two images from different perspectives.
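  • The patent does not name a particular feature detector; purely as an illustration, the matching described above could be sketched with OpenCV's ORB keypoints and a brute-force matcher:

```python
import cv2

def match_features(img1, img2):
    """Sketch of 2D feature-point matching: ORB keypoints and descriptors,
    brute-force Hamming matching, sorted by descriptor distance."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```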
  • the iterative closest point algorithm enables point cloud data in different coordinate systems (for example, different point cloud images) to be merged into the same coordinate system; in effect, it finds the rigid transformation from coordinate system 1 (point cloud image 1) to coordinate system 2 (point cloud image 2), which reflects how one point cloud image is rotated and translated to obtain the other.
  • the algorithm is essentially an optimal registration method based on least squares: corresponding point pairs are selected repeatedly and the optimal rigid body transformation is recomputed until the convergence accuracy required for correct registration is met. In other words, the method iteratively minimizes the distance between corresponding points of the source data and the target data to achieve precise alignment.
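  • A minimal sketch of this registration step using Open3D's point-to-point ICP is shown below; the max_dist threshold is an assumed value:

```python
import open3d as o3d

def register_contours(source_pcd: o3d.geometry.PointCloud,
                      target_pcd: o3d.geometry.PointCloud,
                      max_dist: float = 10.0):
    """Point-to-point ICP: find the rigid transform aligning the acquired
    3D contour point cloud (source) to a template contour (target)."""
    result = o3d.pipelines.registration.registration_icp(
        source_pcd, target_pcd, max_dist,
        estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 rigid transform, i.e. the item pose
```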
  • this image information can then be used to replace the original incomplete point cloud information, and the subsequent path planning steps can be performed.
  • in step S120, in order to facilitate image morphological operations such as contour extraction and indentation, the registered complete 3D image information is mapped to 2D image information.
  • the mapping can be in the form of perspective projection or orthographic projection.
  • perspective projection is the more commonly used mapping method, but when the object is placed obliquely or the camera's viewing angle is inclined, perspective mapping may introduce distortion errors into the mapped image; the present invention therefore preferably uses orthographic projection to map the 3D image information to 2D image information. The conversion between 2D and 3D images may be performed based on the camera intrinsics; for details, refer to the relevant content of step S100, which will not be repeated here.
  • in step S130, in order to obtain the 2D trajectory points, the contour of the 2D image must be obtained first; after the image contour is obtained, the contour is indented by a certain distance according to the requirements of the process and the model of the item.
  • Figure 6 shows examples of different gluing processes: in process 1, glue is applied evenly on all four edges, so the contours of all four edge positions must be obtained; in process 2, two layers of glue are applied on a specific edge, so that edge needs two contours with different indentation distances; in process 3, a specific edge is not glued, so its contour need not be obtained; in process 4, the glue path of a specific edge is indented relative to the other edges, so the indentation distance of that edge's contour differs from the others.
  • Figure 7 shows examples of different item types.
  • for the first shape, the contours of the four sides can be indented by the same distance; the preferred indentation distance is 8mm or 10mm.
  • for the second shape, the whole is a rectangle but one corner is an arc; a different indentation distance from the rectangular sections can be used at the arc section. If the rectangular sections are indented by 8mm or 10mm, the arc segment can be indented by 15mm or 20mm.
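  • The patent does not specify the offsetting algorithm; one standard way to indent a contour, sketched below with the shapely library, is a negative polygon buffer (the 8/10mm and 15/20mm distances above would be passed as distance_mm):

```python
from shapely.geometry import Polygon

def indent_contour(contour_xy, distance_mm):
    """Offset a closed 2D contour inward by a fixed distance.

    contour_xy: list of (x, y) vertices in mm; assumes a simple polygon
    whose inward offset stays in one piece (otherwise buffer() may return
    a MultiPolygon and this sketch would need extending).
    """
    inner = Polygon(contour_xy).buffer(-distance_mm, join_style=2)  # 2 = mitred corners
    return list(inner.exterior.coords)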
  • the sampling interval (also called the traversal interval) can be set according to the actual situation. The smaller the sampling interval, the denser the glue is applied, and vice versa.
  • the sampling interval can be a distance, such as 50mm to 100mm, or a number of track points; for example, it can be set to extract 100 points, or 150 points, from the outline of the entire item to form the gluing track points.
  • if the gluing trajectory points were merely connected end to end, that is, if the robot started from the starting point and stopped at a position close to but not beyond the starting point when reaching the end point, such trajectory points could leave the glue density near the end point insufficient.
  • therefore, the position of the end point can be set to coincide with the starting point or to pass beyond it; in this way, the robot completely covers all positions that need to be glued without any omission.
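  • A sketch of this traversal follows: the contour is walked at a fixed arc-length interval and the last track point is forced to coincide with the first (the interval value and input format are illustrative assumptions):

```python
import numpy as np

def sample_track_points(contour_xy, interval: float = 50.0) -> np.ndarray:
    """Walk a closed contour at a fixed arc-length interval and emit 2D
    gluing track points; the last point coincides with the first so the
    robot closes the loop without leaving a gap."""
    pts = np.asarray(contour_xy, dtype=float)
    closed = np.vstack([pts, pts[:1]])                 # close the polygon
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    stations = np.arange(0.0, s[-1], interval)
    track = np.column_stack([np.interp(stations, s, closed[:, 0]),
                             np.interp(stations, s, closed[:, 1])])
    return np.vstack([track, track[:1]])               # end point == start point
```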
  • the points of a 2D image are not points of the real world; only the points of a 3D image can express the real world. For example, if there are small protrusions on the surface of the item, the 2D points cannot convey this information to the robot, so when moving over these protrusions the nozzle height is not raised, the distance between the nozzle and the item becomes inappropriate, and the spraying effect is poor. The robot therefore preferably applies glue based on 3D trajectory points, so after the 2D gluing track points are obtained, they need to be mapped to 3D gluing track points.
  • the conversion of the 2D image and the 3D image may be performed based on the camera internal parameters. For details, reference may be made to the relevant content in step S100, which will not be repeated here.
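  • Illustratively, and assuming the depth map generated together with the 2D image is available, lifting the 2D track points back to 3D could be sketched as follows:

```python
import numpy as np

def track_points_2d_to_3d(track_uv: np.ndarray, depth_map: np.ndarray,
                          K: np.ndarray) -> np.ndarray:
    """Lift 2D gluing track points to 3D using the depth map produced
    together with the 2D image, plus the camera intrinsic matrix K."""
    u = np.clip(track_uv[:, 0].round().astype(int), 0, depth_map.shape[1] - 1)
    v = np.clip(track_uv[:, 1].round().astype(int), 0, depth_map.shape[0] - 1)
    z = depth_map[v, u]                      # per-point depth lookup
    x = (track_uv[:, 0] - K[0, 2]) * z / K[0, 0]
    y = (track_uv[:, 1] - K[1, 2]) * z / K[1, 1]
    return np.column_stack([x, y, z])
```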
  • a robot can be used for gluing.
  • it is necessary to first plan the robot's movement trajectory (that is, the gluing trajectory points obtained in the above steps), its moving speed and the glue discharge rate; the robot then moves over the surface of the object along the planned path at the planned speed, and applies glue to the surface according to the planned glue discharge rate.
  • the present invention obtains the complete point cloud information of the item through template matching and uses it, in place of the acquired point cloud information, for contour acquisition, trajectory point calculation, and conversion between 2D and 3D images; therefore, even when the acquired point cloud information of the object to be glued is incomplete, the correct robot gluing trajectory points can still be calculated.
  • the present invention further proposes a method for obtaining the 3D image information of an item based on image matching. Instead of the traditional 2D image or 3D image matching methods, this method matches using the contour of the 3D image; the matching is accurate and fast, and it is especially suitable for industrial gluing scenes. It can be seen that the present invention solves the prior-art problems that glue cannot be applied correctly because the acquired point cloud information is incomplete, as well as the inaccuracy and inefficiency of existing item matching methods.
  • the part of the glass to be glued that rests on the conveyor belt is easily disturbed by the belt, so interference points appear there.
  • the edge of the lifted part is not disturbed. Therefore, in the process of straight line fitting based on the two-dimensional contour points, for an edge that is partly in contact with the conveyor belt, the straight line corresponding to that edge can be determined from the points of the edge that are not in contact with the belt.
  • the Z coordinate of the contour point cloud corresponding to each contour point on this edge can be used to determine which contour points on the edge are in contact with the conveyor belt and which are not.
  • the coordinate system corresponding to the glass to be glued is established on the conveyor belt, that is to say, the origin of the above coordinate system lies in the plane of the conveyor belt.
  • the above coordinate system can also be established in other forms.
  • a point cloud screening rule can be set in advance based on the shape of the non-standard flat glass to be glued, and the points corresponding to the better part can be selected, that is, the points corresponding to the part of the non-standard flat glass that is not in contact with the conveyor belt.
  • the object of the straight line fitting operation is preferably the result after noise removal and smoothing.
  • it can also be contour points without noise removal and smoothing, which is not limited here.
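  • A minimal sketch of this selective line fitting is given below; belt_z and clearance are assumed values, the edge is assumed not to be vertical in the XY plane, and the coordinate origin is taken to lie in the conveyor-belt plane as described above:

```python
import numpy as np

def fit_edge_line(edge_points_xyz: np.ndarray, belt_z: float = 0.0,
                  clearance: float = 1.0):
    """Fit the edge line only through contour points lifted clear of the
    belt (Z above a threshold), since points touching the belt may be
    interference points."""
    lifted = edge_points_xyz[edge_points_xyz[:, 2] > belt_z + clearance]
    # Least-squares line y = a*x + b through the undisturbed points.
    a, b = np.polyfit(lifted[:, 0], lifted[:, 1], deg=1)
    return a, b
```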
  • the robots in various embodiments of the present invention may be industrial robot arms, and these robot arms may be general-purpose or specialized for applying glue to objects.
  • the initial point of the gluing trajectory can be set at the position on the gluing path that is closest to the initial pose of the robot, for example in the middle of the edge close to the robot. That is to say, after the initial pose of the robot is determined, the middle point on the gluing path of the edge closest to the robot's initial pose can be used as the initial point of the gluing trajectory, and the robot then moves from its initial pose to that point to begin gluing.
  • the gluing track point information may include, but is not limited to, the coordinates of the gluing track points, the initial track point of the gluing trajectory, and the direction of the gluing track points (i.e., the order of the gluing track points).
  • the gluing track point information can be sent to the robot by means of communication.
  • after the robot receives the gluing trajectory point information, it can control its own glue nozzle to apply glue to the glass to be glued based on that information.
  • the gluing trajectory point information is generated on the gluing path, which includes:
  • determining the walking sequence of the gluing track points to obtain the gluing track point information.
  • the corners and straight segments in the gluing path can be determined from the relationship between the coordinate values of adjacent points on the path: at a corner, both the X and Y coordinates of adjacent points differ, while adjacent points on a straight segment share the same X coordinate or the same Y coordinate.
  • for example, when the shape of the glass to be glued is a rectangle, the X and Y coordinates of adjacent points at the four corners both differ; adjacent points on the upper line share the same Y coordinate but differ in X; adjacent points on the lower line share the same Y coordinate (smaller in value than that of the upper line) but differ in X; adjacent points on the left line share the same X coordinate but differ in Y; and adjacent points on the right line share the same X coordinate (smaller in value than that of the left line) but differ in Y, as sketched below.
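  • The coordinate test described in the preceding example can be sketched as follows (an axis-aligned rectangular path is assumed, and tol is an assumed tolerance):

```python
def classify_path_points(path_xy, tol: float = 1e-6):
    """Label each point of a closed rectangular gluing path as 'corner' or
    'straight': on a straight segment, adjacent points share an X or a Y
    coordinate; at a corner, both coordinates change."""
    labels = []
    n = len(path_xy)
    for i in range(n):
        prev_pt, next_pt = path_xy[i - 1], path_xy[(i + 1) % n]
        same_axis_prev = (abs(prev_pt[0] - path_xy[i][0]) < tol
                          or abs(prev_pt[1] - path_xy[i][1]) < tol)
        same_axis_next = (abs(next_pt[0] - path_xy[i][0]) < tol
                          or abs(next_pt[1] - path_xy[i][1]) < tol)
        labels.append('straight' if (same_axis_prev and same_axis_next) else 'corner')
    return labels
```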
  • when the robot is applying glue to the glass, it controls the glue head based on a certain glue discharge rate.
  • the glue discharge rate, as an inherent property of the robot, affects the gluing effect in this embodiment.
  • the glue discharge rate of the robot can therefore be determined first.
  • the spacing between the gluing track points set at the corners of the gluing path may be made larger than the spacing between the gluing track points set on the straight segments, in order to balance the movement speed on the straight segments against the movement speed at the corners, and thereby avoid the glue stacking that corners may otherwise cause.
  • in addition, a minimum distance can be set on the straight segments to limit the spacing of the gluing track points there, so as to prevent the robot from jamming and stacking glue due to an excessive number of track points on a straight segment.
  • the walking sequence of the gluing track points is determined, so as to obtain the gluing track point information.
  • the initial point of the trajectory is set as a point close to the initial pose of the robot, for example the trajectory point corresponding to the middle part of the side of the glass to be glued closest to the robot. That is to say, after the initial pose of the robot is determined, the trajectory point corresponding to the middle point on the gluing path of the edge closest to the robot's initial pose (or the trajectory point closest to that point) is taken as the initial trajectory point, after which the other trajectory points can be traversed clockwise or counterclockwise.
  • the glue application track point information may specifically include the glue application track point coordinates, the initial track point coordinates, the position sequence of the glue application track points, the motion speed parameters of the glue application track points, and the like.
  • the glue application track point information further includes: normal direction information corresponding to the contour points.
  • the normal information can be the angle of the normal vector corresponding to each contour point cloud relative to a fixed reference, or the deflection angle of each contour point cloud relative to the previous point cloud in the position sequence.
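  • As an illustration of the second variant (deflection relative to the previous point in the sequence), the per-point heading and turn angles could be computed as follows:

```python
import numpy as np

def heading_angles(track_xy):
    """Direction information for each track point on a closed path: the
    heading of the segment to the next point, and the deflection (turn)
    relative to the previous segment."""
    pts = np.asarray(track_xy, dtype=float)
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)      # closed-path segments
    heading = np.degrees(np.arctan2(d[:, 1], d[:, 0]))  # absolute direction
    turn = np.diff(np.concatenate([heading[-1:], heading]))
    return heading, (turn + 180.0) % 360.0 - 180.0      # wrap turn to [-180, 180)
```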
  • FIG. 3 shows a schematic structural diagram of a device for gluing the surface of an article based on robot vision according to another embodiment of the present invention, the device includes:
  • the 3D point cloud acquisition module 300 is used to acquire the 3D point cloud information of the item, that is, to implement step S100;
  • a 3D image determination module 310 configured to determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information, that is, to implement step S110;
  • a 2D image mapping module 320 configured to map the 3D image information to 2D image information, that is, to implement step S120;
  • a trajectory point generation module 330 configured to generate 2D glue application trajectory points based on the 2D image information, that is, to implement step S130;
  • the 3D track point mapping module 340 is used to map the 2D gluing track points to the 3D gluing track points, that is, for implementing step S140;
  • the gluing module 350 is used for gluing based on the mapped 3D gluing track points, that is, for implementing step S150.
  • FIG. 4 shows a schematic structural diagram of a device for acquiring 3D image information of an item based on robot vision according to another embodiment of the present invention, and the device includes:
  • the 3D point cloud acquisition module 400 is used to acquire the 3D point cloud of the item, that is, to implement step S200;
  • the 2D contour obtaining module 410 is used to map the 3D point cloud of the item into a 2D image, and obtain the 2D contour of the item based on the 2D image, that is, to implement step S210;
  • the 3D contour obtaining module 420 is used for mapping the 2D contour into a 3D contour point cloud, that is, for implementing step S220;
  • the 3D image determination module 430 is configured to determine, based on the 3D contour point cloud and preset 3D image template information, the 3D image template information matching the item and the pose information of the item, that is, for implementing step S230.
  • the 3D point cloud acquisition module 300 is used to implement the method of step S100; however, according to actual needs, the 3D point cloud acquisition module 300 can also be used to implement the methods of steps S200, S300 or S400, or parts of those methods.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the method of any one of the foregoing embodiments.
  • the computer program stored in the computer-readable storage medium of the embodiments of the present application may be executed by the processor of the electronic device.
  • the computer-readable storage medium may be a storage medium built into the electronic device, or a removable storage medium that can be plugged into the electronic device. Therefore, the computer-readable storage medium of the embodiments of the present application has high flexibility and reliability.
  • FIG. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
  • the electronic device may include: a processor (processor) 902 , a communication interface (Communications Interface) 904 , a memory (memory) 906 , and a communication bus 908 .
  • the processor 902 , the communication interface 904 , and the memory 906 communicate with each other through the communication bus 908 .
  • the communication interface 904 is used to communicate with network elements of other devices such as clients or other servers.
  • the processor 902 is configured to execute the program 910, and specifically may execute the relevant steps in the foregoing method embodiments.
  • the program 910 may include program code including computer operation instructions.
  • the processor 902 may be a central processing unit (CPU), or an application specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the one or more processors included in the electronic device may be the same type of processors, such as one or more CPUs; or may be different types of processors, such as one or more CPUs and one or more ASICs.
  • the memory 906 is used to store the program 910 .
  • Memory 906 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
  • the program 910 may specifically be used to cause the processor 902 to perform various operations in the foregoing method embodiments.
  • the content of the present invention includes:
  • a method for gluing the surface of objects based on robot vision comprising:
  • the glue is applied based on the mapped 3D glue track points.
  • the 3D point cloud information includes a 3D contour point cloud
  • the acquiring the 3D point cloud information of the item includes: mapping the 3D point cloud of the item into a 2D image; acquiring the 2D contour of the item based on the 2D image; 2D contours are mapped to 3D contour point clouds.
  • the mapping of the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
  • the mapping of the 3D image information to the 2D image information includes: using an orthogonal projection method to map the matched 3D image information to the 2D image information.
  • the generating 2D gluing track points based on the 2D image information includes: generating a 2D contour based on the 2D image information; traversing the entire contour at predetermined intervals and generating the 2D gluing track points.
  • the entire contour is traversed at predetermined intervals to generate 2D gluing track points.
  • the start point and the end point of the 2D trajectory point coincide.
  • the value range of the predetermined interval includes 50mm-100mm.
  • the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
  • a device for gluing the surface of objects based on robot vision comprising:
  • 3D point cloud acquisition module used to acquire 3D point cloud information of items
  • a 3D image determination module configured to determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information
  • a 2D image mapping module for mapping the 3D image information to 2D image information
  • a trajectory point generation module for generating 2D gluing trajectory points based on the 2D image information
  • a 3D track point mapping module for mapping the 2D gluing track points to 3D gluing track points
  • Glue module for gluing based on mapped 3D glue track points.
  • the 3D point cloud information includes a 3D contour point cloud
  • the 3D point cloud acquisition module is further used for:
  • the mapping of the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
  • the 2D image mapping module is specifically configured to: map the matched 3D image information into 2D image information by using an orthogonal projection method.
  • the trajectory point generating module is specifically configured to: generate a 2D contour based on 2D image information; traverse the entire contour according to a predetermined interval and generate 2D gluing trajectory points.
  • the trajectory point generating module is further configured to: shrink the contour, traverse the entire contour at predetermined intervals, and generate 2D trajectory points.
  • the start point and the end point of the 2D trajectory point coincide.
  • the value range of the predetermined interval includes 50mm-100mm.
  • the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
  • a method for acquiring 3D image information of objects based on robot vision comprising:
  • the 3D image template information matching the item and the pose information of the item are determined.
  • a feature point-based matching algorithm and/or an iterative closest point algorithm to determine the 3D image template information that matches the item and the pose information of the item
  • the mapping of the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
  • the pre-acquiring of the complete point cloud contour of the item includes: selecting a standard part from similar items and acquiring its point cloud; if the point cloud is incomplete, acquiring the point cloud of the standard part again and combining it with the previously acquired point cloud; if the combined point cloud is still incomplete, repeating the acquisition and combination steps until a complete point cloud is acquired; and further acquiring the contour of the complete point cloud as the complete point cloud contour of the item.
  • a device for acquiring 3D image information of objects based on robot vision comprising:
  • 3D point cloud acquisition module used to acquire 3D point cloud of items
  • a 2D contour acquisition module used to map the 3D point cloud of the item into a 2D image, and obtain the 2D contour of the item based on the 2D image;
  • 3D contour acquisition module for mapping 2D contours to 3D contour point clouds
  • the 3D image determination module is configured to determine the 3D image template information matched with the item and the pose information of the item based on the 3D contour point cloud and the preset 3D image template information.
  • the 3D image determination module is also used to: determine the 3D image template information matched with the item and the pose information of the item based on the feature point matching algorithm and/or the iterative closest point algorithm.
  • the 2D contour acquisition module is further configured to: after acquiring the 3D point cloud of the item, first perform point cloud clustering and/or outlier removal processing, and then map the processed 3D point cloud into a 2D image.
  • the pre-acquiring of the complete point cloud contour of the item includes: selecting a standard part from similar items and acquiring its point cloud; if the point cloud is incomplete, acquiring the point cloud of the standard part again and combining it with the previously acquired point cloud; if the combined point cloud is still incomplete, repeating the acquisition and combination steps until a complete point cloud is acquired; and further acquiring the contour of the complete point cloud as the complete point cloud contour of the item.
  • any description of a process or method in the flowcharts or otherwise described herein may be understood to represent a module, segment or portion of code comprising one or more executable instructions for implementing specified logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
  • a "computer-readable medium” can be any device that can contain, store, communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or apparatus.
  • computer readable media include the following: an electrical connection with one or more wires (electronic device), a portable computer disk cartridge (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), fiber optic devices, and portable compact disc read-only memory (CD-ROM).
  • the computer readable medium may even be paper or another suitable medium on which the program is printed, since the paper or other medium may be optically scanned, then edited, interpreted, or otherwise processed as necessary, to obtain the program electronically and then store it in computer memory.
  • the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The present application discloses a robot vision-based method for gluing the surface of an article, a robot vision-based apparatus for gluing the surface of an article, an electronic device and a storage medium. The robot vision-based method for gluing the surface of an article of the present application comprises: acquiring 3D point cloud information of an article; determining 3D image information of the article on the basis of the 3D point cloud information and preset 3D image template information; mapping the 3D image information to 2D image information; generating 2D gluing trajectory points on the basis of the 2D image information; mapping the 2D gluing trajectory points to 3D gluing trajectory points; and performing gluing on the basis of the mapped 3D gluing trajectory points. According to the present invention, complete 3D point cloud information of the article is acquired by a matching method, incomplete 3D point cloud information is replaced with the complete 3D point cloud information, an image processing operation at the 2D image level is performed to obtain the 2D gluing trajectory points, and the 2D gluing trajectory points are then converted into 3D gluing trajectory points, such that the gluing trajectory points can still be obtained on the basis of the contour of the article even when the camera acquires incomplete point cloud information. The present application further relates to a robot vision-based method for acquiring 3D image information of an article, in which item matching is performed on the basis of 3D contour information; compared with a conventional method, this method can considerably improve matching efficiency without losing matching accuracy.
PCT/CN2021/138582 2021-04-20 2021-12-15 Robot vision-based method and apparatus for gluing the surface of an article, device and medium WO2022222515A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110426175.9A CN112967368A (zh) 2021-04-20 2021-04-20 基于机器人视觉的物品表面涂胶方法、装置、电子设备和存储介质
CN202110426175.9 2021-04-20

Publications (1)

Publication Number Publication Date
WO2022222515A1

Family

ID=76280904

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138582 (WO2022222515A1) 2021-04-20 2021-12-15 Robot vision-based method and apparatus for gluing the surface of an article, device and medium

Country Status (2)

Country Link
CN (1) CN112967368A (fr)
WO (1) WO2022222515A1 (fr)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967368A (zh) * 2021-04-20 2021-06-15 梅卡曼德(北京)机器人科技有限公司 基于机器人视觉的物品表面涂胶方法、装置、电子设备和存储介质
WO2022237544A1 (fr) * 2021-05-11 2022-11-17 梅卡曼德(北京)机器人科技有限公司 Procédé et appareil de génération de trajectoire, et dispositif électronique et support d'enregistrement
CN113199479B (zh) * 2021-05-11 2023-02-10 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机
CN113420641B (zh) * 2021-06-21 2024-06-14 梅卡曼德(北京)机器人科技有限公司 图像数据处理方法、装置、电子设备和存储介质
CN113976400B (zh) * 2021-09-30 2022-09-20 歌尔股份有限公司 一种涂胶方法、装置、设备及系统
CN114637562B (zh) * 2022-03-01 2024-02-02 杭州优工品科技有限公司 涂胶零部件可视化展示处理方法、装置、终端及存储介质
CN115570573B (zh) * 2022-12-07 2023-03-17 广东省科学院智能制造研究所 一种机器人高性能涂胶轨迹规划方法、介质及系统
CN116958129B (zh) * 2023-09-18 2023-12-26 华侨大学 一种石板的刷胶路径规划装置


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019139441A1 (fr) * 2018-01-12 2019-07-18 삼성전자 주식회사 Dispositif et procédé de traitement d'image
CN111695486A (zh) * 2020-06-08 2020-09-22 武汉中海庭数据技术有限公司 一种基于点云的高精度方向标志牌目标提取方法
CN111744706A (zh) * 2020-06-23 2020-10-09 梅卡曼德(北京)机器人科技有限公司 物件的喷胶方法、装置、电子设备及存储介质
CN111815706A (zh) * 2020-06-23 2020-10-23 熵智科技(深圳)有限公司 面向单品类拆垛的视觉识别方法、装置、设备及介质
CN112967368A (zh) * 2021-04-20 2021-06-15 梅卡曼德(北京)机器人科技有限公司 基于机器人视觉的物品表面涂胶方法、装置、电子设备和存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115969144A (zh) * 2023-01-09 2023-04-18 东莞市智睿智能科技有限公司 一种鞋底喷胶轨迹生成方法、系统、设备及存储介质
CN117670864A (zh) * 2023-12-28 2024-03-08 北汽利戴工业技术服务(北京)有限公司 基于工业相机的图像识别系统
CN117670864B (zh) * 2023-12-28 2024-06-11 北汽利戴工业技术服务(北京)有限公司 基于工业相机的图像识别系统

Also Published As

Publication number Publication date
CN112967368A (zh) 2021-06-15

Similar Documents

Publication Publication Date Title
WO2022222515A1 (fr) Procédé et appareil de collage de surface d'article basés sur vision robotique, dispositif et support
CN113344769B (zh) 基于机器视觉的物品3d图像信息获取方法、装置、介质
WO2022237166A1 (fr) Procédé et appareil de génération de trajectoire, dispositif électronique, support d'enregistrement et caméra 3d
Lysenkov et al. Recognition and pose estimation of rigid transparent objects with a kinect sensor
Lysenkov et al. Pose estimation of rigid transparent objects in transparent clutter
US20160321838A1 (en) System for processing a three-dimensional (3d) image and related methods using an icp algorithm
CN113199479B (zh) 轨迹生成方法、装置、电子设备、存储介质和3d相机
Muñoz-Bañón et al. Targetless camera-LiDAR calibration in unstructured environments
CN112132876B (zh) 2d-3d图像配准中的初始位姿估计方法
CN111784655A (zh) 一种水下机器人回收定位方法
CN111429344B (zh) 基于感知哈希的激光slam闭环检测方法及系统
US20220405506A1 (en) Systems and methods for a vision guided end effector
CN113189934A (zh) 轨迹生成方法、装置、电子设备、存储介质和3d相机
CN111523547A (zh) 一种3d语义分割的方法及终端
Bileschi Fully automatic calibration of lidar and video streams from a vehicle
CN115641366A (zh) 面向机器人搽胶的鞋面边墙线配准方法及装置
Natarajan et al. Robust stereo-vision based 3D modelling of real-world objects for assistive robotic applications
WO2022222513A1 (fr) Procédé et appareil permettant de remplir des rainures sur la base de la régulation de la vitesse de déplacement d'un robot
Guo et al. PCAOT: A Manhattan point cloud registration method towards large rotation and small overlap
WO2022222934A1 (fr) Procédé de revêtement d'adhésif pour verre, appareil de revêtement d'adhésif pour verre, dispositif électronique et support de stockage
JP2018156412A (ja) 画像処理装置、画像処理システムおよびプログラム
JP7365567B2 (ja) 計測システム、計測装置、計測方法及び計測プログラム
JP3279610B2 (ja) 環境認識装置
CN113223030A (zh) 玻璃涂胶方法及装置、电子设备和储存介质
Zang et al. Camera localization by CAD model matching

Legal Events

Code | Description
121 | Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21937737; Country of ref document: EP; Kind code of ref document: A1)
NENP | Non-entry into the national phase (Ref country code: DE)
122 | Ep: pct application non-entry in european phase (Ref document number: 21937737; Country of ref document: EP; Kind code of ref document: A1)