WO2022222515A1 - Article surface gluing method and apparatus based on robot vision, device, and medium - Google Patents


Publication number
WO2022222515A1
Authority
WO
WIPO (PCT)
Prior art keywords: gluing, point cloud, image, information, point
Application number
PCT/CN2021/138582
Other languages
French (fr)
Chinese (zh)
Inventor
李辉
魏海永
丁有爽
邵天兰
Original Assignee
梅卡曼德(北京)机器人科技有限公司
Application filed by 梅卡曼德(北京)机器人科技有限公司
Publication of WO2022222515A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 15/00 3D [Three Dimensional] image rendering
          • G06T 19/00 Manipulating 3D models or images for computer graphics
            • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
          • G06T 3/00 Geometric image transformations in the plane of the image
            • G06T 3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
              • G06T 3/067 Reshaping or unfolding 3D tree structures onto 2D planes
          • G06T 7/00 Image analysis
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/13 Edge detection
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10004 Still image; Photographic image
              • G06T 2207/10012 Stereo images

Definitions

  • The present application relates to the field of intelligent robots, and more particularly to a method for gluing an object surface based on robot vision, a device for gluing an object surface based on robot vision, an electronic device, and a storage medium.
  • the present invention has been proposed in order to overcome the above-mentioned problems or at least partially solve the above-mentioned problems.
  • One of the innovations of the present invention is that, in order to overcome the problem that incomplete point cloud information of an item prevents the robot gluing trajectory points from being obtained correctly, the applicant proposes a method that obtains the complete 3D point cloud information of the item through matching.
  • The incomplete 3D point cloud information is replaced with the complete 3D point cloud information, image processing operations are performed at the 2D image level to obtain 2D gluing track points, and these are then converted into 3D gluing track points.
  • The acquired point cloud information is thus not used directly to obtain the gluing trajectory points, which solves the above technical problem: no matter what kind of point cloud information is acquired, the gluing trajectory points can be obtained based on the outline of the item.
  • The second innovation of the present invention is the applicant's finding that existing 3D image matching algorithms must process too many pixel points, so their matching efficiency is not high enough, while 2D image matching algorithms struggle to obtain accurate pose information of the item. Therefore, based on the characteristics of the robot gluing application scenario, the applicant developed a method for matching items based on 3D contour information. Compared with traditional methods, this method can greatly improve matching efficiency without losing matching accuracy.
  • the present application provides a method for gluing the surface of an article based on robot vision, a device for gluing the surface of an article based on robot vision, an electronic device and a storage medium.
  • the glue is applied based on the mapped 3D glue track points.
  • the 3D point cloud information includes a 3D contour point cloud
  • the acquiring 3D point cloud information of the item includes:
  • The mapping of the 3D point cloud of the item into the 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or removing outliers, and then mapping the processed 3D point cloud to a 2D image.
  • the mapping of the 3D image information to the 2D image information includes: using an orthogonal projection method to map the matched 3D image information to the 2D image information.
  • the generating 2D gluing track points based on the 2D image information includes:
  • the entire contour is traversed at predetermined intervals to generate 2D gluing track points.
  • the start and end points of the 2D trajectory points coincide.
  • the value range of the predetermined interval includes 50 mm to 100 mm.
  • the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
  • 3D point cloud acquisition module used to acquire 3D point cloud information of items
  • a 3D image determination module configured to determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information
  • a 2D image mapping module for mapping the 3D image information to 2D image information
  • a trajectory point generation module for generating 2D gluing trajectory points based on the 2D image information
  • a 3D track point mapping module for mapping the 2D gluing track points to 3D gluing track points
  • a gluing module for gluing based on the mapped 3D gluing track points.
  • the 3D point cloud information includes a 3D contour point cloud
  • the 3D point cloud acquisition module is further configured to:
  • The mapping of the 3D point cloud of the item into the 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or removing outliers, and then mapping the processed 3D point cloud to a 2D image.
  • the 2D image mapping module is specifically configured to: map the matched 3D image information into 2D image information by using an orthogonal projection method.
  • the trajectory point generation module is specifically used for:
  • the trajectory point generation module is further configured to: shrink the contour, traverse the entire contour at predetermined intervals, and generate 2D trajectory points.
  • the start and end points of the 2D trajectory points coincide.
  • the value range of the predetermined interval includes 50 mm to 100 mm.
  • the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
  • An electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, it implements the robot-vision-based method for gluing the surface of an article of any of the above-mentioned embodiments.
  • the computer-readable storage medium of the embodiment of the present application has a computer program stored thereon, and when the computer program is executed by the processor, implements the robot vision-based method for gluing the surface of an article in any of the foregoing embodiments.
  • FIG. 1 is a schematic flowchart of a method for gluing an object surface based on robot vision according to some embodiments of the present application
  • FIG. 2 is a schematic flowchart of a method for obtaining 3D image information of an item based on robot vision according to some embodiments of the present application;
  • FIG. 3 is a schematic structural diagram of a robot vision-based object surface gluing device according to some embodiments of the present application.
  • FIG. 4 is a schematic structural diagram of a device for obtaining 3D image information of an item based on robot vision according to some embodiments of the present application;
  • FIG. 5 is a schematic diagram of the missing point cloud and the complete point cloud of the item according to some embodiments of the present application;
  • FIG. 6 is a schematic diagram of the gluing process of certain embodiments of the present application.
  • FIG. 7 is a schematic diagram of the outline shape of an article to be glued according to some embodiments of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • FIG. 1 shows a method for gluing the surface of an article according to an embodiment of the present invention, including:
  • Step S100 acquiring 3D point cloud information of the item
  • Step S110 based on the 3D point cloud information and the preset 3D image template information, determine the 3D image information of the item;
  • Step S120 mapping the 3D image information to 2D image information
  • Step S130 generating 2D gluing track points based on the 2D image information
  • Step S140 mapping the 2D gluing track points to 3D gluing track points
  • Step S150 gluing is performed based on the mapped 3D gluing track points.
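Steps S100 to S150 can be summarized as a minimal pipeline sketch. This is illustrative only and not part of the claimed method; the function name is hypothetical, and the template-matching step S110 is reduced to a pass-through that simply adopts the matched template as the complete point cloud.

```python
import numpy as np

def glue_pipeline(points_3d, template_3d):
    """Sketch of steps S100-S150 (illustrative; matching is a pass-through
    here, whereas the patent uses 3D-contour template matching)."""
    # S110: determine complete 3D info (here: adopt the matched template
    # as the complete point cloud of the item)
    complete_3d = template_3d
    # S120: map 3D -> 2D by orthographic projection (drop the Z coordinate)
    image_2d = complete_3d[:, :2]
    # S130: generate 2D gluing track points (here: every point, in order)
    track_2d = image_2d
    # S140: map 2D track points back to 3D using the stored Z values
    track_3d = np.column_stack([track_2d, complete_3d[:, 2]])
    # S150: the robot would now follow track_3d
    return track_3d
```

Even when `points_3d` is missing data, the output trajectory is derived from the complete template, matching the document's central idea.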
  • step S100 point cloud information can be obtained through a 3D industrial camera.
  • The 3D industrial camera is generally equipped with two lenses, which capture the group of objects to be grasped from different angles; after processing, a three-dimensional image of the objects can be produced. The group of objects to be grasped is placed under the vision sensor and photographed with both lenses at the same time.
  • A general binocular stereo vision algorithm is used to calculate the X, Y, and Z coordinate values and the coordinate orientation of each point of the objects, which are then converted into the point cloud data of the item group to be grasped.
  • components such as laser detectors, visible light detectors such as LEDs, infrared detectors, and radar detectors can also be used to generate point clouds, and the present invention does not limit the specific implementation.
  • the point cloud can be further processed such as point cloud clustering and outlier removal.
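The outlier-removal step mentioned above is not prescribed in detail by the embodiment; a common scheme is statistical outlier removal, sketched below. The neighbour count `k` and `std_ratio` threshold are illustrative assumptions.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the global mean (one
    common outlier-removal scheme; the patent does not fix a specific one)."""
    diff = points[:, None, :] - points[None, :, :]      # pairwise vectors
    dist = np.linalg.norm(diff, axis=-1)                # (N, N) distances
    dist.sort(axis=1)                                   # ascending per row
    mean_knn = dist[:, 1:k + 1].mean(axis=1)            # skip self (0.0)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

The brute-force distance matrix keeps the sketch short; a k-d tree would be used for large clouds.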
  • the point cloud data acquired in the above manner is three-dimensional data.
  • the acquired three-dimensional point cloud data can be orthographically mapped onto a two-dimensional plane.
  • a depth map corresponding to the orthographic projection can also be generated.
  • a two-dimensional color map corresponding to a three-dimensional object region and a depth map corresponding to the two-dimensional color map can be acquired in a depth direction perpendicular to the object.
  • The two-dimensional color map corresponds to an image of a plane area perpendicular to the preset depth direction; each pixel in the depth map corresponding to the two-dimensional color map is in one-to-one correspondence with each pixel in the two-dimensional color map, and the value of each pixel in the depth map is the depth value of that pixel.
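The depth-map construction described above can be sketched as follows. This is a simplified stand-in: points are projected along the Z axis onto a grid, one value per pixel; the pixel size and the keep-the-nearest tie-breaking rule are assumptions.

```python
import numpy as np

def orthographic_depth_map(points, pixel_size=1.0):
    """Project 3D points along the depth (Z) direction onto a 2D grid:
    each cell stores the Z value of the point falling into it, keeping
    the point nearest the camera (largest Z) when cells collide."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / pixel_size).astype(int)   # pixel indices
    w, h = ij.max(axis=0) + 1                               # extent in x, y
    depth = np.full((h, w), np.nan)                         # NaN = no point
    for (cx, cy), z in zip(ij, points[:, 2]):
        if np.isnan(depth[cy, cx]) or z > depth[cy, cx]:
            depth[cy, cx] = z
    return depth
```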
  • the conversion between 3D pixels and 2D pixels can be performed based on the camera's intrinsic parameters.
  • The camera internal parameters are intrinsic to the camera and are related only to its internal properties (such as focal length, resolution, pixel size, and lens distortion).
  • the three-dimensional space points in the camera coordinate system can be transformed into the imaging plane coordinate system, and after correction processes such as lens distortion, they can be further transformed into two-dimensional pixel points in the image pixel coordinate system. Therefore, there is a mapping relationship between the projection point in the image pixel coordinate system and the three-dimensional space point in the camera coordinate system.
  • The process of obtaining this relationship is called camera intrinsic calibration, and based on this mapping relationship, conversion between 3D images and 2D images can be performed.
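The intrinsic-parameter mapping described above can be illustrated with the standard pinhole model. The intrinsic matrix `K` below is a made-up example, and lens distortion is omitted for brevity.

```python
import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy and principal point cx, cy.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_to_pixel(p_cam, K):
    """Camera-frame 3D point -> image pixel (u, v), ignoring distortion."""
    u = K[0, 0] * p_cam[0] / p_cam[2] + K[0, 2]
    v = K[1, 1] * p_cam[1] / p_cam[2] + K[1, 2]
    return np.array([u, v])

def backproject_to_3d(uv, depth, K):
    """Pixel plus known depth -> camera-frame 3D point (inverse mapping)."""
    x = (uv[0] - K[0, 2]) * depth / K[0, 0]
    y = (uv[1] - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])
```

The depth value carried by the depth map is what makes the 2D-to-3D direction well defined.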
  • In step S110, when the point cloud information of the item is acquired, the acquired point cloud may be missing data due to reflection of visible light on the surface of the item or because of the item's material; such missing point cloud data loses a lot of information, and the robot's gluing path cannot be correctly planned based on it.
  • Figure 5 shows a situation where the point cloud is missing: the solid line is the outline formed from the collected point cloud, and the dashed line is the actual outline of the item. It can be clearly seen from Figure 5 that the solid outline formed from the point cloud collected by the camera is broken at the left edge of the actual outline and forms a non-existent contour inside the object, which is not the actual contour of the object.
  • The contour depicted by the dashed line in Figure 5, in contrast to the solid line, is the actual contour of the item.
  • The pre-stored actual contour contains only the contour information of the item and no position information.
  • Once the actual contour corresponding to the item is found, the acquired incomplete contour can be discarded and the pre-stored actual contour placed at the item's position, so that the correct gluing route can be planned based on the complete item contour.
  • Since existing 2D image matching algorithms struggle to obtain the accurate pose of the item, and 3D image matching algorithms deal with many pixels with complex point attributes and therefore match inefficiently, in order to obtain the item's complete and correct point cloud information faster when the point cloud is missing, the applicant developed a method that matches based on the 3D contour point cloud of the item and obtains the complete 3D image information of the item, including the matched 3D image and the pose information of the item. This is also one of the key points of the present invention.
  • FIG. 2 shows a method for acquiring 3D image information according to an embodiment of the present invention, including:
  • Step S200 obtaining the 3D point cloud of the item
  • Step S210 mapping the 3D point cloud of the item into a 2D image, and obtaining a 2D outline of the item based on the 2D image;
  • Step S220 mapping the 2D contour to a 3D contour point cloud
  • Step S230 based on the 3D contour point cloud and the preset 3D image template information, determine the 3D image template information matching the item and the pose information of the item.
  • step S200 a method similar to step S100 may be used to obtain the 3D point cloud of the item, which will not be repeated here.
  • The 2D image includes a color map and a depth map; each pixel of the 2D depth map is in one-to-one correspondence with each pixel of the color map, and the depth map also stores the depth information of each pixel.
  • The depth information may be the distance between the pixels of the captured picture and the camera used for capturing.
  • the conversion of the 2D image and the 3D image may be performed based on the camera internal parameters. For details, reference may be made to the relevant content in step S100, which will not be repeated here.
  • The image template information corresponding to an item of a given model can be input to the robot in advance according to the model of the item to be glued, and the models of items and the corresponding templates can be set arbitrarily according to actual needs. Models can be named with letters or words, such as A, B, C; templates can use corresponding designations, such as template A, template B, template C. Every type of item should have a corresponding template. Templates can be off-the-shelf, or a 3D image template of the item to be glued can be generated from the item itself.
  • the image template matching the point cloud of the item and the pose information of the item can be determined based on the feature point matching algorithm and the point set registration method of the iterative closest point algorithm.
  • The feature point matching algorithm extracts the key points in the two images, that is, it finds the pixel points with certain features in the image, and then calculates the feature descriptors of these feature points from the obtained key point positions.
  • A feature descriptor is usually a vector, and the distance between two descriptors can reflect their degree of similarity, that is, whether the two feature points are the same. Depending on the descriptor, different distance metrics can be chosen.
  • the feature points can be matched by finding the most similar feature points in the set of feature points.
  • the feature point matching algorithm can efficiently and accurately match the same object in two images from different perspectives.
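The "find the most similar feature points" step can be sketched as brute-force nearest-neighbour matching with a ratio test. The toy descriptors and the 0.8 ratio are illustrative assumptions; real systems would use SIFT- or ORB-style descriptors.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test: keep a
    match only when the best distance is clearly smaller than the
    second-best, which rejects ambiguous feature points."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```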
  • The iterative closest point algorithm enables point cloud data in different coordinate systems (for example, different point cloud images) to be merged into the same coordinate system; in effect, it finds the rigid transformation from coordinate system 1 (point cloud image 1) to coordinate system 2 (point cloud image 2), which reflects how one point cloud image is rotated and translated to obtain the other.
  • The algorithm is essentially an optimal registration method based on least squares. Corresponding point pairs are selected repeatedly and the optimal rigid body transformation is recomputed until the convergence accuracy requirement for correct registration is met. In other words, the method minimizes the distance between corresponding points of the source data and the target data by continuous iteration, to achieve precise registration.
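The iterative procedure just described can be sketched as a bare-bones ICP: brute-force nearest-neighbour pairing followed by an SVD-based least-squares rigid transform, repeated. A fixed iteration count stands in for the convergence test mentioned in the text.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst for known
    correspondences -- the inner step of each ICP iteration (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Minimal ICP: re-pair each src point with its nearest dst point,
    re-estimate the rigid transform, and apply it, repeatedly."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None, :], axis=-1)
        pairs = dst[d.argmin(axis=1)]          # nearest-neighbour pairing
        R, t = best_rigid_transform(cur, pairs)
        cur = cur @ R.T + t
    return cur
```

With correct pairings (small initial misalignment), a single iteration recovers the exact rigid transform; real implementations add a convergence threshold and outlier rejection.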
  • these image information can be used to replace the original incomplete point cloud information, and the subsequent path planning steps can be performed.
  • step S120 in order to facilitate performing image morphological operations such as indentation or contouring, the registered complete 3D image information is mapped to 2D image information.
  • the mapping can be in the form of perspective projection or orthographic projection.
  • Perspective projection is the more commonly used mapping method, but when the object is placed obliquely or the camera's viewing angle is inclined, perspective mapping may introduce distortion errors in the mapped image, so the present invention preferably uses orthographic projection to map the 3D image information to 2D image information.
  • The conversion between the 2D image and the 3D image may be performed based on the camera internal parameters; for details, reference may be made to the relevant content in step S100, which will not be repeated here.
  • In step S130, in order to obtain the 2D trajectory points, the contour of the 2D image must be obtained first; after the image contour is obtained, the contour is indented by a certain distance according to the requirements of the process and the model of the item.
  • Figure 6 shows examples of different gluing processes. In process 1, glue is applied evenly on all four edges, so the contours of all 4 edge positions must be obtained. In process 2, two layers of glue are applied on a specific edge, so two contours with different indentation distances must be generated for that edge. In process 3, a specific edge is not glued, so its contour need not be obtained. In process 4, the glue path of a specific edge is indented by a certain amount relative to the other edges, so the indentation distance of that edge's contour should differ from the others.
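The contour-indentation step can be illustrated for a convex contour by offsetting each edge inward and intersecting adjacent offset edges. This is a sketch under the assumption of a convex, counter-clockwise polygon; the per-edge distances that processes 2 to 4 require would take one distance per edge instead of a single `d`.

```python
import numpy as np

def _cross(u, v):
    """2D scalar cross product."""
    return u[0] * v[1] - u[1] * v[0]

def _intersect(a1, u, a2, v):
    """Intersection of lines a1 + t*u and a2 + s*v."""
    t = _cross(a2 - a1, v) / _cross(u, v)
    return a1 + t * u

def inset_convex_polygon(verts, d):
    """Indent a convex, counter-clockwise polygon contour by distance d:
    shift every edge along its inward normal, then intersect consecutive
    shifted edges to obtain the new vertices."""
    n = len(verts)
    offs, dirs = [], []
    for i in range(n):
        p, q = verts[i], verts[(i + 1) % n]
        e = q - p
        inward = np.array([-e[1], e[0]]) / np.linalg.norm(e)
        offs.append(p + d * inward)            # a point on the shifted edge
        dirs.append(e)                         # edge direction is unchanged
    out = []
    for i in range(n):
        j = (i - 1) % n                        # edge ending at vertex i
        out.append(_intersect(offs[j], dirs[j], offs[i], dirs[i]))
    return np.array(out)
```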
  • Figure 7 shows examples of different item types.
  • For the first shape, the contours of the four sides can be indented by the same distance; the preferred indentation distance is 8mm or 10mm.
  • The second shape is a rectangle overall, but one corner is an arc; an indentation distance different from that of the rectangular segments can be used at the arc segment. If the rectangular segments are indented by 8mm or 10mm, the arc segment can be indented by 15mm or 20mm.
  • the sampling interval (also called the traversal interval) can be set according to the actual situation. The smaller the sampling interval, the denser the glue is applied, and vice versa.
  • The sampling interval can be a distance, such as 50 mm to 100 mm, or a number of track points; for example, it can be set to extract 100 points, or 150 points, from the outline of the entire item to form the gluing track points.
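Traversing the contour at a predetermined interval can be sketched as fixed-arc-length resampling of a closed polyline, with the final point forced back onto the start so the track closes without a gap. The interval is in the same units as the contour coordinates; the closing rule follows the coinciding start and end points described in the text.

```python
import numpy as np

def sample_track_points(contour, interval):
    """Walk a closed contour (vertex list) and emit a point every
    `interval` of arc length; the final point repeats the start so the
    gluing track is closed end to end."""
    pts = np.vstack([contour, contour[:1]])      # close the loop
    seg = np.diff(pts, axis=0)
    seglen = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seglen)])
    targets = np.arange(0.0, cum[-1], interval)  # arc-length positions
    out = []
    for s in targets:
        i = np.searchsorted(cum, s, side='right') - 1
        f = (s - cum[i]) / seglen[i]             # fraction along segment i
        out.append(pts[i] + f * seg[i])
    out.append(pts[0])                           # end coincides with start
    return np.array(out)
```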
  • Ideally, the complete gluing trajectory points should be connected end to end, that is, the robot should start from the starting point and, when it reaches the end, stop at a position close to but not beyond the starting point.
  • However, such trajectory points may leave the glue density near the end point insufficient.
  • Therefore, the end point can be set to coincide with the starting point or to pass slightly beyond it. In this way, the robot walks through all positions that need to be glued without any omission.
  • The points of the 2D image are not points of the real world; only the points of the 3D image can express the real world.
  • For example, the surface of the item may have small protrusions whose height the 2D points cannot convey to the robot; when moving over these positions, the nozzle height would not be raised, resulting in an inappropriate distance between the nozzle and the item and a poor spraying effect. The robot is therefore best guided by 3D trajectory points when applying glue, so after the 2D gluing track points are obtained, they must be mapped to 3D gluing track points.
  • the conversion of the 2D image and the 3D image may be performed based on the camera internal parameters. For details, reference may be made to the relevant content in step S100, which will not be repeated here.
  • a robot can be used for gluing.
  • It is necessary to first plan the movement trajectory of the robot (that is, the gluing trajectory points obtained in the above steps), its moving speed, and the glue discharge rate.
  • The robot then moves over the surface of the object along the planned path at the planned speed, and applies glue to the surface of the object at the planned glue discharge rate.
  • In summary, the present invention obtains the complete point cloud information of the item through template matching and uses it in place of the acquired point cloud information for contour acquisition, trajectory point calculation, and 2D/3D image conversion. Therefore, even when the acquired point cloud information of the object to be glued is incomplete, the correct gluing trajectory points of the robot can still be calculated starting from that incomplete point cloud information.
  • In addition, the present invention proposes a method for obtaining the 3D image information of an item based on image matching. This method does not use traditional 2D image or 3D image matching; instead it matches using the contour of the 3D image.
  • The matching is accurate and fast, and it is especially suitable for industrial gluing scenes. It can be seen that the present invention solves the problem in the prior art that glue cannot be applied correctly because the acquired point cloud information is incomplete, as well as the inaccuracy and inefficiency of existing item matching methods.
  • the part of the glass to be glued on the conveyor belt is easily disturbed by the conveyor belt, and then interference points appear.
  • The edge of the lifted part will not be disturbed. Therefore, in the process of straight-line fitting based on the two-dimensional contour points, for an edge that is in contact with the conveyor belt, the straight line corresponding to that edge can be determined from the points of the edge that are not in contact with the conveyor belt.
  • The Z coordinate of the contour point cloud corresponding to each contour point on the edge can be used to determine which contour points on the edge are in contact with the conveyor belt and which are not.
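The Z-coordinate screening described above can be sketched as a simple threshold split. The conveyor plane at Z = 0 (matching the coordinate origin placed on the belt plane in the text) and the tolerance are assumptions.

```python
import numpy as np

def split_by_belt_contact(contour_3d, belt_z=0.0, tol=1e-3):
    """Separate contour points lying on the conveyor plane (Z close to
    belt_z) from lifted points; line fitting would then use only the
    lifted, undisturbed points."""
    on_belt = np.abs(contour_3d[:, 2] - belt_z) <= tol
    return contour_3d[~on_belt], contour_3d[on_belt]
```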
  • the coordinate system corresponding to the glass to be glued is established by adhering to the conveyor belt, that is to say, the origin of the above coordinate system is located on the plane where the conveyor belt is located.
  • the above coordinate system can also be established in other forms.
  • A point cloud screening rule can be set in advance based on the shape of the non-standard flat glass to be glued, and the points corresponding to the better part selected, that is, the points corresponding to the part of the non-standard flat glass that is not in contact with the conveyor belt.
  • the object of the straight line fitting operation is preferably the result after noise removal and smoothing.
  • it can also be contour points without noise removal and smoothing, which is not limited here.
  • the robots in various embodiments of the present invention may be industrial robot arms, and these robot arms may be general-purpose or specialized for applying glue to objects.
  • The initial point of the gluing trajectory can be set at the position on the gluing path closest to the initial pose of the robot, for example, in the middle of the edge close to the robot. That is to say, after the initial pose of the robot is determined, the middle point on the gluing path of the edge closest to the robot's initial pose can be used as the initial point of the gluing trajectory, and the robot then begins gluing from that point according to its initial pose.
  • The gluing track point information may include, but is not limited to, the coordinates of the gluing track points, the initial track point, and the direction of the gluing track points (i.e., the order of the gluing track points), etc.
  • the gluing track point information can be sent to the robot by means of communication.
  • After the robot receives the glue application trajectory point information, it can control its own glue spray nozzle to apply glue to the glass to be glued based on that information.
  • the glue application trajectory point information is generated on the glue application path, including:
  • the walking sequence of the gluing track points is determined to obtain the gluing track point information.
  • The corners and straight segments in the gluing path can be determined based on the relationship between the coordinate values of adjacent points on the gluing path.
  • At a corner, both the X and Y coordinates of adjacent points differ, while adjacent points on a straight segment share the same X coordinate or the same Y coordinate.
  • For example, when the shape of the glass to be glued is a rectangle:
  • at the four corners, both the X and Y coordinates of adjacent points differ;
  • on the upper line, adjacent points share the same Y coordinate but differ in X;
  • on the lower line, adjacent points share the same Y coordinate but differ in X, and that Y value is smaller than the upper line's;
  • on the left line, adjacent points share the same X coordinate but differ in Y;
  • on the right line, adjacent points share the same X coordinate but differ in Y, and that X value is larger than the left line's.
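The coordinate-comparison rule above can be sketched for an axis-aligned rectangular path: a point whose two neighbours share a coordinate lies on a straight run, while a point whose neighbours differ in both coordinates is a corner. The point ordering and tolerance are assumptions.

```python
import numpy as np

def classify_rect_path_points(path, tol=1e-6):
    """Label each point of a closed path along an axis-aligned rectangle
    as 'corner' or 'straight' by comparing its neighbours' coordinates:
    on a straight run one coordinate stays constant; at a corner both
    change."""
    labels = []
    n = len(path)
    for i in range(n):
        prev_pt, next_pt = path[i - 1], path[(i + 1) % n]
        same_x = abs(prev_pt[0] - next_pt[0]) <= tol
        same_y = abs(prev_pt[1] - next_pt[1]) <= tol
        labels.append('straight' if (same_x or same_y) else 'corner')
    return labels
```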
  • When the robot is applying glue to the glass, it controls the glue head to apply glue at a certain glue discharge rate.
  • The glue dispensing rate, as an inherent property of the robot, affects the gluing effect in this embodiment.
  • the glue dispensing rate of the robot can be determined.
  • The spacing between the gluing track points set at the corners of the gluing path can be made larger than the spacing between those set on the straight segments, in order to balance the movement speed at the straight segments against the movement speed at the corners, thereby avoiding the glue stacking that corners may otherwise cause.
  • a minimum distance can be set at the straight line to limit the distance between the glue application track points at the straight line, so as to prevent the robot from jamming and stacking glue due to the excessive number of track points at the straight line.
  • the walking sequence of the gluing track points is determined, so as to obtain the gluing track point information.
  • The initial trajectory point is set as a point close to the initial pose of the robot, for example the trajectory point corresponding to the middle of the side of the glass to be glued closest to the robot. That is to say, after the initial pose of the robot is determined, the trajectory point corresponding to the middle point on the gluing path of the edge closest to the robot's initial pose (or the trajectory point closest to that point) is taken as the initial trajectory point, after which the remaining trajectory points can be traversed clockwise or counterclockwise.
  • the glue application track point information may specifically include the glue application track point coordinates, the initial track point coordinates, the position sequence of the glue application track points, the motion speed parameters of the glue application track points, and the like.
  • the glue application track point information further includes: normal direction information corresponding to the contour points.
  • The normal information can be the angle of the normal vector corresponding to each contour point cloud relative to a fixed reference, or the deviation angle of each point cloud, in the position sequence, relative to the previous point cloud.
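Both readings of the normal information above can be sketched from successive track points: a heading angle for each segment relative to a fixed reference (the +X axis here), and the turn (deviation) angle of each segment relative to the previous one. Degrees and the angle-wrapping rule are assumptions.

```python
import numpy as np

def track_point_angles(track):
    """Per-point direction info for a closed 2D track: the heading of the
    segment leaving each point (relative to +X), and the turn angle from
    the previous segment, wrapped to (-180, 180] degrees."""
    nxt = np.roll(track, -1, axis=0)
    seg = nxt - track                              # segment vectors
    heading = np.degrees(np.arctan2(seg[:, 1], seg[:, 0]))
    turn = (np.diff(heading, prepend=heading[-1]) + 180.0) % 360.0 - 180.0
    return heading, turn
```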
  • FIG. 3 shows a schematic structural diagram of a device for gluing the surface of an article based on robot vision according to another embodiment of the present invention, the device includes:
  • the 3D point cloud acquisition module 300 is used to acquire the 3D point cloud information of the item, that is, to implement step S100;
  • a 3D image determination module 310 configured to determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information, that is, to implement step S110;
  • a 2D image mapping module 320 configured to map the 3D image information to 2D image information, that is, to implement step S120;
  • a trajectory point generation module 330 configured to generate 2D glue application trajectory points based on the 2D image information, that is, to implement step S130;
  • the 3D track point mapping module 340 is used to map the 2D gluing track points to the 3D gluing track points, that is, for implementing step S140;
  • the gluing module 350 is used for gluing based on the mapped 3D gluing track points, that is, for implementing step S150.
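The data flow through the six modules above (steps S100 to S150) can be sketched as a simple pipeline. All stage names and stubs below are illustrative stand-ins, not the actual module implementations.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GluingPipeline:
    """Sketch of the six-stage data flow (steps S100-S150). Each stage is a
    caller-supplied function; this class only fixes the order of the flow."""
    acquire_point_cloud: Callable     # S100: item -> 3D point cloud
    match_template: Callable          # S110: cloud + templates -> 3D image info
    project_to_2d: Callable           # S120: 3D image info -> 2D image info
    generate_2d_trajectory: Callable  # S130: 2D image info -> 2D glue points
    lift_to_3d: Callable              # S140: 2D glue points -> 3D glue points
    apply_glue: Callable              # S150: 3D glue points -> robot commands

    def run(self, item):
        cloud = self.acquire_point_cloud(item)
        info_3d = self.match_template(cloud)
        info_2d = self.project_to_2d(info_3d)
        pts_2d = self.generate_2d_trajectory(info_2d)
        pts_3d = self.lift_to_3d(pts_2d)
        return self.apply_glue(pts_3d)

# Tracing stubs show the order in which the stages fire.
pipe = GluingPipeline(*[lambda x, tag=t: x + [tag] for t in
                        ("S100", "S110", "S120", "S130", "S140", "S150")])
trace = pipe.run([])
```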
  • FIG. 4 shows a schematic structural diagram of a device for acquiring 3D image information of an item based on robot vision according to another embodiment of the present invention, and the device includes:
  • the 3D point cloud acquisition module 400 is used to acquire the 3D point cloud of the item, that is, to implement step S200;
  • the 2D contour obtaining module 410 is used to map the 3D point cloud of the item into a 2D image, and obtain the 2D contour of the item based on the 2D image, that is, to implement step S210;
  • the 3D contour obtaining module 420 is used for mapping the 2D contour into a 3D contour point cloud, that is, for implementing step S220;
  • the 3D image determination module 430 is configured to determine, based on the 3D contour point cloud and preset 3D image template information, the 3D image template information matching the item and the pose information of the item, that is, for implementing step S230.
  • the 3D point cloud acquisition module 300 is used to implement the method of step S100; however, depending on the actual situation, it can also be used to implement the methods of steps S200, S300 or S400, or parts of those methods.
  • the present application also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method of any one of the foregoing embodiments is implemented.
  • the computer program stored in the computer-readable storage medium of the embodiments of the present application may be executed by the processor of the electronic device.
  • the computer-readable storage medium may be a storage medium built into the electronic device, or a storage medium that can be plugged into and removed from the electronic device. Therefore, the computer-readable storage medium of the embodiments of the present application has high flexibility and reliability.
  • FIG. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
  • the electronic device may include: a processor (processor) 902 , a communication interface (Communications Interface) 904 , a memory (memory) 906 , and a communication bus 908 .
  • the processor 902 , the communication interface 904 , and the memory 906 communicate with each other through the communication bus 908 .
  • the communication interface 904 is used to communicate with network elements of other devices such as clients or other servers.
  • the processor 902 is configured to execute the program 910, and specifically may execute the relevant steps in the foregoing method embodiments.
  • the program 910 may include program code including computer operation instructions.
  • the processor 902 may be a central processing unit (CPU), or an application specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the one or more processors included in the electronic device may be the same type of processors, such as one or more CPUs; or may be different types of processors, such as one or more CPUs and one or more ASICs.
  • the memory 906 is used to store the program 910 .
  • Memory 906 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
  • the program 910 may specifically be used to cause the processor 902 to perform various operations in the foregoing method embodiments.
  • the content of the present invention includes:
  • a method for gluing the surface of objects based on robot vision, comprising: acquiring 3D point cloud information of an item; determining the 3D image information of the item based on the 3D point cloud information and preset 3D image template information; mapping the 3D image information to 2D image information; generating 2D gluing track points based on the 2D image information; mapping the 2D gluing track points to 3D gluing track points; and applying glue based on the mapped 3D gluing track points.
  • the 3D point cloud information includes a 3D contour point cloud.
  • the acquiring of the 3D point cloud information of the item includes: mapping the 3D point cloud of the item into a 2D image; acquiring the 2D contour of the item based on the 2D image; and mapping the 2D contour into a 3D contour point cloud.
  • the mapping of the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
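The projection step named here can be sketched as a minimal orthographic rasterisation onto the XY plane. The function name and the `pixel_size` parameter are illustrative, and the clustering/outlier-removal preprocessing is omitted.

```python
import numpy as np

def cloud_to_mask(points, pixel_size=1.0):
    """Orthographically project a 3D point cloud (N, 3) onto the XY plane
    and rasterise it into a binary 2D image. `pixel_size` sets the
    resolution; clustering/outlier removal would normally happen first."""
    xy = np.asarray(points, dtype=float)[:, :2]
    origin = xy.min(axis=0)                       # corner of the pixel grid
    ij = np.floor((xy - origin) / pixel_size).astype(int)
    h, w = ij.max(axis=0) + 1
    mask = np.zeros((int(h), int(w)), dtype=np.uint8)
    mask[ij[:, 0], ij[:, 1]] = 1                  # mark occupied pixels
    return mask, origin

pts = np.array([[0.0, 0.0, 5.0], [2.0, 1.0, 5.2], [1.0, 1.0, 4.9]])
mask, origin = cloud_to_mask(pts, pixel_size=1.0)
```

The 2D contour of the item can then be extracted from `mask` with standard image-processing tools, which is what makes the later 2D trajectory generation possible.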
  • the mapping of the 3D image information to the 2D image information includes: using an orthogonal projection method to map the matched 3D image information to the 2D image information.
  • the generating 2D gluing track points based on the 2D image information includes: generating a 2D contour based on the 2D image information; traversing the entire contour at predetermined intervals and generating the 2D gluing track points.
  • after the contour is shrunk inward, the entire contour is traversed at predetermined intervals to generate the 2D gluing track points.
  • the start point and the end point of the 2D trajectory coincide.
  • the value range of the predetermined interval includes 50mm-100mm.
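A minimal sketch of traversing a closed contour at a fixed interval (here 50 mm, within the stated 50mm-100mm range), repeating the first point at the end so that the start and end of the glue path coincide. The function name is illustrative.

```python
import numpy as np

def resample_closed_contour(contour, step):
    """Walk a closed 2D contour (N, 2) and emit trajectory points every
    `step` millimetres of arc length; the first point is repeated at the
    end so that the start and end of the glue path coincide."""
    pts = np.asarray(contour, dtype=float)
    closed = np.vstack([pts, pts[:1]])                 # close the loop
    seg = np.diff(closed, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # arc length at vertices
    targets = np.arange(0.0, cum[-1], step)
    out = np.empty((len(targets), 2))
    for k, s in enumerate(targets):
        i = np.searchsorted(cum, s, side="right") - 1  # segment containing s
        t = (s - cum[i]) / seg_len[i]
        out[k] = closed[i] + t * seg[i]
    return np.vstack([out, out[:1]])                   # end point == start point

square = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
track = resample_closed_contour(square, step=50.0)     # 50 mm spacing
```

Shrinking the contour inward first (as in the variant above) would simply mean offsetting `contour` before calling this function.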
  • the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
  • a device for gluing the surface of objects based on robot vision comprising:
  • 3D point cloud acquisition module used to acquire 3D point cloud information of items
  • a 3D image determination module configured to determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information
  • a 2D image mapping module for mapping the 3D image information to 2D image information
  • a trajectory point generation module for generating 2D gluing trajectory points based on the 2D image information
  • a 3D track point mapping module for mapping the 2D gluing track points to 3D gluing track points
  • Glue module for gluing based on mapped 3D glue track points.
  • the 3D point cloud information includes a 3D contour point cloud.
  • the 3D point cloud acquisition module is further used for: mapping the 3D point cloud of the item into a 2D image; acquiring the 2D contour of the item based on the 2D image; and mapping the 2D contour into a 3D contour point cloud.
  • the mapping of the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
  • the 2D image mapping module is specifically configured to: map the matched 3D image information into 2D image information by using an orthogonal projection method.
  • the trajectory point generating module is specifically configured to: generate a 2D contour based on 2D image information; traverse the entire contour according to a predetermined interval and generate 2D gluing trajectory points.
  • the trajectory point generating module is further configured to: shrink the contour, traverse the entire contour at predetermined intervals, and generate 2D trajectory points.
  • the start point and the end point of the 2D trajectory coincide.
  • the value range of the predetermined interval includes 50mm-100mm.
  • the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
  • a method for acquiring 3D image information of objects based on robot vision, comprising: acquiring the 3D point cloud of an item; mapping the 3D point cloud of the item into a 2D image and obtaining the 2D contour of the item based on the 2D image; mapping the 2D contour into a 3D contour point cloud; and determining, based on the 3D contour point cloud and preset 3D image template information, the 3D image template information matching the item and the pose information of the item.
  • a feature point-based matching algorithm and/or an iterative closest point algorithm is used to determine the 3D image template information that matches the item and the pose information of the item.
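The iterative closest point step could look roughly like the following minimal rigid-ICP sketch (nearest-neighbour pairing plus a Kabsch/SVD fit). This is an assumption-laden illustration, not the patented matcher: it omits the feature-point initialisation, outlier rejection, and convergence tests a production matcher would need.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_sketch(source, target, iters=50):
    """Minimal rigid ICP: pair each source point with its nearest target
    point, solve the best-fit rotation/translation via SVD (Kabsch), apply
    it, and repeat. Returns (R, t) such that source @ R.T + t ~= target."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)           # nearest-neighbour correspondence
        matched = tgt[idx]
        src_c, m_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - m_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                 # proper rotation (det = +1)
        t = m_c - R @ src_c
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Recover a small known rigid transform applied to a random cloud.
rng = np.random.default_rng(0)
pts = rng.random((100, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
tgt = pts @ Rz.T + np.array([0.02, -0.03, 0.01])
R_est, t_est = icp_sketch(pts, tgt)
```

The recovered `(R_est, t_est)` is exactly the pose information the matching step is expected to produce: how the observed contour point cloud is rotated and translated relative to the stored template.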
  • the mapping of the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
  • the pre-acquiring of the complete point cloud contour of the item includes: selecting a standard part from similar items and acquiring its point cloud; if the point cloud is incomplete, acquiring the point cloud of the standard part again and combining it with the previously acquired point cloud; if the combined point cloud is still incomplete, repeating the acquisition and combination steps until a complete point cloud is obtained; and then acquiring the contour of the complete point cloud as the complete point cloud contour of the item.
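The capture-and-combine loop described here might be sketched as follows; `capture` and `is_complete` are hypothetical hooks standing in for the camera driver and the coverage check.

```python
import numpy as np

def acquire_complete_cloud(capture, is_complete, max_tries=10):
    """Sketch of the template-building loop: repeatedly capture the standard
    part and merge each new (N, 3) cloud into the accumulated one until the
    completeness check passes."""
    merged = np.empty((0, 3))
    for _ in range(max_tries):
        merged = np.vstack([merged, capture()])   # combine with earlier shots
        if is_complete(merged):
            return merged
    raise RuntimeError("no complete point cloud after %d captures" % max_tries)

# Fake camera that reveals a different part of the object on each capture.
views = iter([np.zeros((2, 3)), np.ones((2, 3)), np.full((2, 3), 2.0)])
cloud = acquire_complete_cloud(lambda: next(views), lambda c: len(c) >= 6)
```

In practice `is_complete` would compare the merged cloud's contour against the expected extent of the part rather than just counting points.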
  • a device for acquiring 3D image information of objects based on robot vision comprising:
  • 3D point cloud acquisition module used to acquire 3D point cloud of items
  • a 2D contour acquisition module used to map the 3D point cloud of the item into a 2D image, and obtain the 2D contour of the item based on the 2D image;
  • 3D contour acquisition module for mapping 2D contours to 3D contour point clouds
  • the 3D image determination module is configured to determine the 3D image template information matched with the item and the pose information of the item based on the 3D contour point cloud and the preset 3D image template information.
  • the 3D image determination module is also used to: determine the 3D image template information matched with the item and the pose information of the item based on the feature point matching algorithm and/or the iterative closest point algorithm.
  • the 2D contour acquisition module is further configured to: after acquiring the 3D point cloud of the item, first perform point cloud clustering and/or outlier removal processing, and then map the processed 3D point cloud into a 2D image.
  • the pre-acquiring of the complete point cloud contour of the item includes: selecting a standard part from similar items and acquiring its point cloud; if the point cloud is incomplete, acquiring the point cloud of the standard part again and combining it with the previously acquired point cloud; if the combined point cloud is still incomplete, repeating the acquisition and combination steps until a complete point cloud is obtained; and then acquiring the contour of the complete point cloud as the complete point cloud contour of the item.
  • any description of a process or method in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specified logical functions or steps of the process. The scope of the preferred embodiments of the present application includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending upon the functionality involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
  • a "computer-readable medium” can be any device that can contain, store, communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or apparatus.
  • more specific examples of computer-readable media include the following: an electrical connection with one or more wires (an electronic device), a portable computer disk cartridge (a magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in computer memory.
  • the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The present application discloses an article surface gluing method based on robot vision, an article surface gluing apparatus based on robot vision, an electronic device, and a storage medium. The method comprises: acquiring 3D point cloud information of an article; determining 3D image information of the article on the basis of the 3D point cloud information and preset 3D image template information; mapping the 3D image information into 2D image information; generating 2D gluing trajectory points on the basis of the 2D image information; mapping the 2D gluing trajectory points into 3D gluing trajectory points; and performing gluing on the basis of the mapped 3D gluing trajectory points. According to the present invention, complete 3D point cloud information of the article is acquired by a matching method, incomplete 3D point cloud information is replaced with the complete information, image processing at the 2D image level yields the 2D gluing trajectory points, and these are then converted into 3D gluing trajectory points, so that the gluing trajectory points can still be obtained from the contour of the article even when the camera acquires incomplete point cloud information. The present invention further provides a robot-vision-based method for acquiring 3D image information of an article, in which article matching is performed on the basis of 3D contour information; compared with conventional methods, it can greatly improve matching efficiency without loss of matching accuracy.

Description

Article surface gluing method, apparatus, device and medium based on robot vision
Claim of priority
This application claims priority to Chinese invention patent application No. CN202110426175.9, filed on April 20, 2021 and entitled "Article surface gluing method and apparatus based on robot vision, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of intelligent robots (class B25), and more particularly to an article surface gluing method based on robot vision, an article surface gluing apparatus based on robot vision, an electronic device, and a storage medium.
Background
At present, with the widespread adoption of intelligent program-controlled robots, gluing operations on object surfaces can already be performed with their help. In the prior art, after the complete point cloud information of an article is acquired, the contour of the article is extracted and the trajectory points are computed from the contour; alternatively, trajectory points for different article models are taught in advance, and after the contour of an article is acquired, the pre-taught trajectory points are invoked according to the article's placement pose and incoming position. Both approaches require complete point cloud information of the article. In industrial scenes, however, lighting conditions are complex and variable and glass materials differ, so point clouds collected with a depth camera often suffer severe loss. In particular, when the point cloud is so incomplete that the contour point cloud of the article is missing, correct trajectory points cannot be derived from the contour point cloud, and gluing fails.
Summary of the invention
In view of the above problems, the present invention is proposed in order to overcome, or at least partially solve, them. Specifically, one of the innovations of the present invention is that, to overcome the problem that incomplete point cloud information of an item prevents the robot's gluing trajectory points from being obtained correctly, the applicant proposes a method that obtains the complete 3D point cloud information of the item by matching, replaces the incomplete 3D point cloud information with the complete information, performs image processing operations at the 2D image level to obtain 2D gluing trajectory points, and then converts them into 3D gluing trajectory points. In this method, after the 3D point cloud of the item is acquired, that point cloud is not used directly to derive the gluing trajectory points, which solves the above technical problem: whatever point cloud information is obtained, the gluing trajectory points can be derived based on the contour of the item.
The second innovation of the present invention is the applicant's finding that existing 3D image matching algorithms must compute over too many pixels, so their matching efficiency is not high enough, while 2D image matching algorithms struggle to obtain accurate pose information of the item. Based on the characteristics of the robot gluing application scenario, the applicant therefore developed a method for matching items based on 3D contour information; compared with traditional methods, it can greatly improve matching efficiency without losing matching accuracy.
All solutions disclosed in the claims and description of the present application have one or more of the above innovations and, correspondingly, can solve one or more of the above technical problems.
Specifically, the present application provides an article surface gluing method based on robot vision, an article surface gluing apparatus based on robot vision, an electronic device, and a storage medium.
The robot-vision-based article surface gluing method of the embodiments of the present application comprises:
acquiring 3D point cloud information of an item;
determining the 3D image information of the item based on the 3D point cloud information and preset 3D image template information;
mapping the 3D image information to 2D image information;
generating 2D gluing track points based on the 2D image information;
mapping the 2D gluing track points to 3D gluing track points;
applying glue based on the mapped 3D gluing track points.
In some embodiments, the 3D point cloud information includes a 3D contour point cloud, and acquiring the 3D point cloud information of the item includes:
mapping the 3D point cloud of the item into a 2D image;
acquiring the 2D contour of the item based on the 2D image;
mapping the 2D contour into a 3D contour point cloud.
In some embodiments, mapping the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
In some embodiments, mapping the 3D image information to 2D image information includes: using an orthographic projection method to map the matched 3D image information into 2D image information.
In some embodiments, generating 2D gluing track points based on the 2D image information includes:
generating a 2D contour based on the 2D image information;
traversing the entire contour at a predetermined interval and generating the 2D gluing track points.
In some embodiments, after the contour is shrunk inward, the entire contour is traversed at the predetermined interval to generate the 2D gluing track points.
In some embodiments, the start point and the end point of the 2D trajectory coincide.
In some embodiments, the value range of the predetermined interval includes 50 mm to 100 mm.
In some embodiments, the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
The robot-vision-based article surface gluing apparatus of the embodiments of the present application comprises:
a 3D point cloud acquisition module for acquiring 3D point cloud information of an item;
a 3D image determination module for determining the 3D image information of the item based on the 3D point cloud information and preset 3D image template information;
a 2D image mapping module for mapping the 3D image information to 2D image information;
a trajectory point generation module for generating 2D gluing track points based on the 2D image information;
a 3D track point mapping module for mapping the 2D gluing track points to 3D gluing track points;
a gluing module for applying glue based on the mapped 3D gluing track points.
In some embodiments, the 3D point cloud information includes a 3D contour point cloud, and the 3D point cloud acquisition module is further used for:
mapping the 3D point cloud of the item into a 2D image;
acquiring the 2D contour of the item based on the 2D image;
mapping the 2D contour into a 3D contour point cloud.
In some embodiments, mapping the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
In some embodiments, the 2D image mapping module is specifically used for: mapping the matched 3D image information into 2D image information using an orthographic projection method.
In some embodiments, the trajectory point generation module is specifically used for:
generating a 2D contour based on the 2D image information;
traversing the entire contour at a predetermined interval and generating the 2D gluing track points.
In some embodiments, the trajectory point generation module is further used for: shrinking the contour inward, then traversing the entire contour at the predetermined interval and generating the 2D track points.
In some embodiments, the start point and the end point of the 2D trajectory coincide.
In some embodiments, the value range of the predetermined interval includes 50 mm to 100 mm.
In some embodiments, the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
The electronic device of the embodiments of the present application comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the robot-vision-based article surface gluing method of any of the above embodiments.
The computer-readable storage medium of the embodiments of the present application stores a computer program which, when executed by a processor, implements the robot-vision-based article surface gluing method of any of the above embodiments.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the present application.
Description of drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a robot-vision-based article surface gluing method according to some embodiments of the present application;
FIG. 2 is a schematic flowchart of a robot-vision-based method for acquiring 3D image information of an article according to some embodiments of the present application;
FIG. 3 is a schematic structural diagram of a robot-vision-based article surface gluing apparatus according to some embodiments of the present application;
FIG. 4 is a schematic structural diagram of a robot-vision-based apparatus for acquiring 3D image information of an article according to some embodiments of the present application;
FIG. 5 is a schematic diagram of missing point clouds of an article and of a complete point cloud according to some embodiments of the present application;
FIG. 6 is a schematic diagram of the gluing process according to some embodiments of the present application;
FIG. 7 is a schematic diagram of the contour shape of an article to be glued according to some embodiments of the present application;
FIG. 8 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
Detailed description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly, and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
FIG. 1 shows an article surface gluing method according to an embodiment of the present invention, comprising:
step S100, acquiring 3D point cloud information of an item;
step S110, determining the 3D image information of the item based on the 3D point cloud information and preset 3D image template information;
step S120, mapping the 3D image information to 2D image information;
step S130, generating 2D gluing track points based on the 2D image information;
step S140, mapping the 2D gluing track points to 3D gluing track points;
step S150, applying glue based on the mapped 3D gluing track points.
在步骤S100中,可以通过3D工业相机获取点云信息,3D工业相机一般装配有两个镜头,分别从不同的角度捕捉待抓取物品组,经过处理后能够实现物体的三维图像的展示。将待抓取物品组置于视觉传感器的下方,两个镜头同时拍摄,根据所得到的两个图像的相对姿态参数,使用通用的双目立体视觉算法计算出待填充物体的各点的X、Y、Z坐标值及各点的坐标朝向,进而转变为待抓取物品组的点云数据。具体实施时,也可以使用激光探测器、LED等可见光探测器、红外探测器以及雷达探测器等元件生成点云,本发明对具体实现方式不作限定。In step S100, point cloud information can be obtained through a 3D industrial camera. The 3D industrial camera is generally equipped with two lenses, which capture the group of objects to be grasped from different angles, and can display a three-dimensional image of the object after processing. Place the group of objects to be grasped under the vision sensor, and shoot with two lenses at the same time. According to the obtained relative pose parameters of the two images, use a general binocular stereo vision algorithm to calculate the X, X, and X of each point of the object to be filled. The Y, Z coordinate values and the coordinate orientation of each point are then converted into the point cloud data of the item group to be grasped. During specific implementation, components such as laser detectors, visible light detectors such as LEDs, infrared detectors, and radar detectors can also be used to generate point clouds, and the present invention does not limit the specific implementation.
After the 3D point cloud of the object is acquired, further point cloud processing such as clustering and outlier removal may be performed on it.
The point cloud data acquired in the above manner is three-dimensional. To facilitate data processing and improve efficiency, the acquired three-dimensional point cloud data may be orthographically projected onto a two-dimensional plane.
As an example, a depth map corresponding to the orthographic projection may also be generated. A two-dimensional color map corresponding to the three-dimensional article region, and a depth map corresponding to that color map, can be acquired along the depth direction perpendicular to the article. The two-dimensional color map corresponds to the image of the plane region perpendicular to the preset depth direction; each pixel of the depth map corresponds one-to-one to a pixel of the color map, and the value of each pixel is the depth value at that pixel.
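The orthographic projection with a depth map can be sketched as follows; the grid resolution, the nearest-point rule, and the sample cloud are illustrative assumptions, not details fixed by the patent:

```python
# Each 3D point keeps its (x, y) as the 2D grid position and contributes its z
# as the depth value; the nearest (smallest-z) point wins per grid cell.
def orthographic_depth_map(points, cell=1.0):
    """Project points straight down onto the XY plane, keeping the nearest
    depth per grid cell."""
    depth = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in depth or z < depth[key]:
            depth[key] = z
    return depth

cloud = [(0.2, 0.3, 5.0), (0.4, 0.1, 4.0), (2.5, 2.5, 7.0)]
print(orthographic_depth_map(cloud))  # {(0, 0): 4.0, (2, 2): 7.0}
```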
The conversion between 3D points and 2D pixel points can be performed based on the camera intrinsics. The camera intrinsics are the camera's internal parameters, which depend only on internal properties of the camera (such as focal length, resolution, pixel size and lens distortion). Using the intrinsics, a three-dimensional point in the camera coordinate system can be transformed into the imaging-plane coordinate system and, after correction steps such as lens-distortion compensation, further transformed into a two-dimensional pixel in the image pixel coordinate system. There is therefore a mapping between projection points in the image pixel coordinate system and three-dimensional points in the camera coordinate system; the process of obtaining this mapping is called intrinsic calibration, and based on this mapping the conversion between 3D images and 2D images can be completed.
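The intrinsics-based conversion above can be sketched with the standard pinhole model (distortion omitted); the intrinsic values `FX, FY, CX, CY` are illustrative, not calibration results from the patent:

```python
# Hedged sketch of the 2D <-> 3D conversion via camera intrinsics.
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0  # assumed intrinsics

def project(point3d):
    """Camera-frame 3D point -> image pixel (u, v)."""
    x, y, z = point3d
    return (FX * x / z + CX, FY * y / z + CY)

def backproject(u, v, z):
    """Image pixel plus known depth -> camera-frame 3D point."""
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)

u, v = project((0.1, -0.05, 2.0))
print(backproject(u, v, 2.0))  # recovers (0.1, -0.05, 2.0)
```

Note that back-projection needs the depth `z` as an extra input, which is exactly why the method keeps a depth map alongside the 2D image.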
In step S110, when the point cloud information of the article is acquired, reflection of visible light on the article's surface or the article's material may cause parts of the point cloud to be missing. Missing point cloud data loses a great deal of information, and a correct gluing path for the robot cannot be planned from it. FIG. 5 shows a case of a missing point cloud: the solid line is the contour formed from the collected point cloud, and the dashed line is the actual contour of the article. As can be clearly seen from FIG. 5, the solid-line contour formed by the point cloud collected by the camera breaks off at the left edge of the article's actual contour and forms a non-existent contour inside the article, which is not the article's actual contour. If the robot's preset gluing process is to apply glue along the contour inset by a certain distance, planning the gluing route along the solid line in FIG. 5 would obviously cause the robot to apply glue to the interior of the article rather than near the edge. The contour depicted by the dashed line in FIG. 5 is the article's actual contour. The pre-stored actual contour contains only the article's contour information, without position information; if the position of the article to be glued and the actual contour corresponding to it can be successfully obtained, the acquired incomplete contour can be discarded, and the pre-stored actual contour can instead be used at the article's location according to its placement pose, so that a correct gluing route is planned from the complete article contour. Since it is difficult for existing 2D image matching algorithms to obtain the accurate pose of an article, and 3D image matching algorithms have low matching efficiency because of their many points and complex point attributes, in order to obtain the complete and correct point cloud information of an article relatively quickly when part of the point cloud is missing, the applicant developed a method that matches based on the article's 3D contour point cloud and obtains the article's complete 3D image information, including the matched 3D image and the article's pose information. This is one of the key points of the present invention.
FIG. 2 shows a method for acquiring 3D image information according to an embodiment of the present invention, including:
Step S200, acquiring the 3D point cloud of the article;
Step S210, mapping the 3D point cloud of the article to a 2D image, and acquiring the 2D contour of the article based on the 2D image;
Step S220, mapping the 2D contour to a 3D contour point cloud;
Step S230, determining, based on the 3D contour point cloud and preset 3D image template information, the 3D image template information matching the article and the pose information of the article.
For step S200, the 3D point cloud of the article may be acquired in a manner similar to step S100, which is not repeated here.
For steps S210 and S220, the 2D image includes a color map and a depth map. Each pixel of the 2D depth map corresponds one-to-one to a pixel of the color map, and the depth map additionally carries depth information for each pixel; this depth information may be the distance between the pixel of the captured picture and the camera used for capturing. The conversion between 2D and 3D images may be performed based on the camera intrinsics; for details, refer to the relevant content of step S100, which is not repeated here.
For step S230, image template information corresponding to the model of the article to be glued may be input to the robot in advance. The article model and its corresponding template can be set arbitrarily according to the actual situation: for example, the model may be rectangular, circular or arc-shaped, or named with letters such as type A, type B and type C; templates may likewise use corresponding designations, such as template A, template B and template C. Every type of article should have a corresponding template. Templates may be ready-made, or a 3D image template may be generated from the article to be glued. To generate a 3D image template for a certain type of article, a complete and standard article (i.e., a standard part) is selected from multiple articles of that type and photographed at different angles and under different lighting conditions to obtain multiple sets of point cloud information; the sets are then combined and de-duplicated to obtain the image template for that type of article.
After the (possibly incomplete) point cloud of the article is acquired, the image template matching the article's point cloud, together with the article's pose information, can be determined by a point-set registration method based on a feature point matching algorithm and the iterative closest point algorithm. The feature point matching algorithm extracts key points from the two images, i.e., finds pixels with certain features in each image, and then computes feature descriptors for these key points from their positions. A feature descriptor is usually a vector, and the distance between two descriptors reflects their degree of similarity, i.e., whether the two feature points are the same. Depending on the descriptor, different distance metrics can be chosen: for floating-point descriptors the Euclidean distance can be used, while for binary descriptors the Hamming distance can be used (the Hamming distance between two binary strings is the number of bit positions in which they differ). By computing descriptor similarity and finding, within the set of feature points, the point most similar to a given one, feature points can be matched. The feature point matching algorithm can efficiently and accurately match the same object in two images taken from different viewpoints.
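The binary-descriptor matching by Hamming distance described above can be sketched as follows; the 8-bit descriptors are toy values, not real ORB/BRIEF output:

```python
# Hedged sketch of binary-descriptor matching via Hamming distance.
def hamming(a, b):
    """Number of differing bits between two equal-length binary descriptors."""
    return bin(a ^ b).count("1")

def best_match(query, candidates):
    """Return the index of the candidate descriptor closest to the query."""
    return min(range(len(candidates)), key=lambda i: hamming(query, candidates[i]))

descriptors = [0b10110010, 0b01101101, 0b10110011]
print(best_match(0b10110110, descriptors))  # 0 (differs from the query in only 1 bit)
```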
The iterative closest point algorithm merges point cloud data expressed in different coordinate frames (for example, different point cloud images) into the same coordinate system. In effect, it finds the rigid transformation from coordinate frame 1 (point cloud image 1) to coordinate frame 2 (point cloud image 2), a transformation that describes how one point cloud image must be rotated and translated to obtain the other. The algorithm is essentially an optimal registration method based on least squares: it repeatedly selects pairs of corresponding points and computes the optimal rigid-body transformation until the convergence accuracy required for correct registration is met. In other words, the method achieves precise alignment by iteratively minimizing the distance between corresponding points of the source data and the target data.
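The least-squares rigid transformation at the core of each ICP iteration can be sketched in 2D with known correspondences (a full ICP run would add the nearest-neighbour search and iterate); the closed-form rotation formula is standard, and the square-alignment data is a toy example:

```python
# Hedged sketch of one least-squares rigid alignment step (2D, known pairs).
import math

def rigid_align_2d(src, dst):
    """Best rotation angle and translation mapping src[i] onto dst[i]."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    # Closed-form optimal rotation about the centroids
    num = sum((a[0]-sx)*(b[1]-dy) - (a[1]-sy)*(b[0]-dx) for a, b in zip(src, dst))
    den = sum((a[0]-sx)*(b[0]-dx) + (a[1]-sy)*(b[1]-dy) for a, b in zip(src, dst))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = dx - (c*sx - s*sy); ty = dy - (s*sx + c*sy)
    return theta, (tx, ty)

# A unit square rotated 90 degrees and shifted by (1, 2)
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(1, 2), (1, 3), (0, 3), (0, 2)]
theta, t = rigid_align_2d(src, dst)
print(round(math.degrees(theta)))  # 90
```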
After the complete point cloud contour information of the article and its pose information are obtained, this image information can be used in place of the original incomplete point cloud information for the subsequent path-planning steps.
In step S120, to facilitate image-morphology operations such as insetting or contour extraction, the registered complete 3D image information is mapped to 2D image information. The mapping may use perspective projection or orthographic projection. Although perspective projection is the more commonly used mapping, when the object is placed at an angle or the camera's viewing angle is tilted, perspective mapping may introduce distortion errors into the mapped image; the present invention therefore preferably uses orthographic projection to map the 3D image information to 2D image information. The conversion between 2D and 3D images may be performed based on the camera intrinsics; for details, refer to the relevant content of step S100, which is not repeated here.
For step S130, to obtain the 2D trajectory points, the contour of the 2D image must be obtained first; once the image contour is obtained, it is inset by a certain distance according to the requirements of the process and the model of the article. FIG. 6 shows examples of different gluing processes: in process 1, glue is applied evenly along all four edges, so the contour at all four edge positions must be obtained; in process 2, two layers of glue are applied on a specific edge, so that edge needs two contours with different inset distances; in process 3, a specific edge is not glued, so the contour of that edge need not be obtained; in process 4, the gluing path of a specific edge is inset relative to the other edges, so the inset distance of that edge's contour should differ from the others. FIG. 7 shows examples of different article models: for the first article, a regular rectangle, the contours of all four edges can be inset by the same distance, preferably 8 mm or 10 mm; for the second, more unusual shape, a rectangle with one arc-shaped corner, a different inset distance can be used for the arc segment than for the rectangular segments, e.g., if the rectangular segments are inset by 8 mm or 10 mm, the arc segment can be inset by 15 mm or 20 mm.
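For the simple rectangular case above, the inward offset reduces to arithmetic on the corner coordinates; general shapes (such as the arc-cornered rectangle) would need a polygon-offset algorithm. The dimensions below are illustrative:

```python
# Hedged sketch of insetting an axis-aligned rectangular contour before gluing.
def inset_rectangle(x0, y0, x1, y1, d):
    """Shrink the rectangle (x0, y0)-(x1, y1) inward by distance d on every edge."""
    if x1 - x0 <= 2 * d or y1 - y0 <= 2 * d:
        raise ValueError("inset distance too large for this rectangle")
    return (x0 + d, y0 + d, x1 - d, y1 - d)

# A 500 mm x 300 mm pane, glue path inset 10 mm from the edge
print(inset_rectangle(0, 0, 500, 300, 10))  # (10, 10, 490, 290)
```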
After the inset is complete, the inset contour points are sampled at a certain interval, traversing the whole contour in this way to obtain the complete set of 2D trajectory points. The sampling interval (also called the traversal interval) can be set as needed: the smaller the interval, the denser the glue; the larger, the sparser. The sampling interval may be a distance, e.g., 50 mm to 100 mm, or a number of trajectory points, e.g., 100 or 150 points to be extracted from the whole article contour to form the gluing trajectory points. The complete set of gluing trajectory points should join end to end: the robot should start from the starting point and, on reaching the end, stop at a position close to but not beyond the starting point. In industrial scenarios, however, such trajectory points may leave the glue density at the end point insufficient. To solve this, the position of the end point can be set to coincide with or go beyond the starting point, so that the robot passes completely over every position that needs glue without any omission.
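The fixed-interval sampling of a closed contour can be sketched as follows, with the final point forced back onto the start so the path closes without a gap, as the text requires; the square contour and step size are illustrative:

```python
# Hedged sketch of sampling a closed contour at a fixed interval.
import math

def sample_closed_contour(vertices, step):
    """Walk the closed polyline and emit a point every 'step' units, ending
    exactly on the starting point so no glue position is missed."""
    points, carried = [vertices[0]], 0.0
    ring = vertices + [vertices[0]]                  # close the loop
    for (x0, y0), (x1, y1) in zip(ring, ring[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        dist = step - carried
        while dist <= seg:
            t = dist / seg
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            dist += step
        carried = seg - (dist - step)                # leftover into next segment
    if points[-1] != vertices[0]:
        points.append(vertices[0])                   # overlap the start
    return points

square = [(0, 0), (100, 0), (100, 100), (0, 100)]
print(len(sample_closed_contour(square, 50)))  # 9: start, then samples every 50 units back around to the start
```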
For step S140, points of the 2D image are not real-world points; only the points of the 3D image express the real world. For example, when the article has small protrusions, the 2D points cannot convey this information to the robot, so the robot would not raise its nozzle when moving over these protrusions, making the distance between nozzle and article unsuitable and the spraying effect poor. The robot should therefore apply glue based on 3D movement trajectory points. Consequently, after the 2D gluing trajectory points are obtained, they need to be mapped to 3D gluing trajectory points. The conversion between 2D and 3D images may be performed based on the camera intrinsics; for details, refer to the relevant content of step S100, which is not repeated here.
For step S150, a robot may be used for gluing in the present invention. To this end, the robot's movement trajectory (i.e., the gluing trajectory points obtained in the above steps), movement speed and glue discharge rate are planned first; the robot then moves over the object's surface along the planned path at the planned speed, and applies glue to the article's surface at the planned discharge rate.
According to the above embodiments, first, the present invention obtains the complete point cloud information of the article through template matching, and uses the complete point cloud information in place of the acquired point cloud information for operations such as contour extraction, trajectory point computation, and conversion between 2D and 3D images; thus, even when the acquired point cloud information of the article to be glued is incomplete, correct gluing trajectory points for the robot can still be computed from it. Second, the present invention proposes a method for obtaining 3D image information of an article based on image matching; instead of traditional 2D image or 3D image matching, it matches on the contour of the 3D image, which is accurate and fast, and is particularly suitable for industrial gluing scenarios. The present invention therefore solves the problem in the prior art that glue cannot be applied correctly when robot gluing relies on incomplete point cloud information, as well as the inaccuracy and inefficiency of existing article matching methods.
In addition, those skilled in the art can make various changes and modifications to the above embodiments:
In the process of collecting the point cloud of the glass to be glued, for glass that is not a standard plane, the part resting on the conveyor belt is easily disturbed by the belt, producing interference points, while the edge of the lifted part is undisturbed. Therefore, when fitting straight lines based on the two-dimensional contour points, for an edge in contact with the conveyor belt, the line corresponding to that edge can be determined from the points of the edge that are not in contact with the belt. Which contour points on the edge are in contact with the belt and which are not can be determined from the Z coordinate of the contour point cloud corresponding to each contour point: if the Z coordinate is zero, the contour point is in contact with the conveyor belt; if it is non-zero, it is not. Line fitting can then be based on the contour points not in contact with the belt, yielding a more accurate contour edge for the glass to be glued. It should be noted that in this embodiment the coordinate system corresponding to the glass to be glued is established flush with the conveyor belt, i.e., the origin of the coordinate system lies in the plane of the belt.
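The Z-coordinate screening described above can be sketched as a simple filter; the tolerance value and the sample edge points are illustrative assumptions:

```python
# Hedged sketch of keeping only lifted contour points (Z > 0 in the
# belt-aligned coordinate system) for the line fit, since points on the belt
# plane may be interference points.
def lifted_points(contour_points, z_tol=1e-6):
    """Keep contour points whose Z coordinate is meaningfully above the
    conveyor-belt plane (Z = 0)."""
    return [p for p in contour_points if p[2] > z_tol]

edge = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 0.0, 1.5), (30.0, 0.0, 2.1)]
print(lifted_points(edge))  # only the two lifted points remain
```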
It is worth mentioning that the above coordinate system can also be established in other ways; in that case a point cloud screening rule can be set in advance according to the shape of the non-standard flat glass to be glued, and the points corresponding to the better part described above can be selected, i.e., the points corresponding to the part of the non-standard flat glass not in contact with the conveyor belt. In this embodiment, the input to the line-fitting operation is preferably the result after noise removal and smoothing; of course, it may also be contour points without noise removal and smoothing, which is not limited here.
The robots in the various embodiments of the present invention may be industrial robot arms, which may be general-purpose or dedicated to gluing articles. To make the robot travel fewer redundant trajectories, the initial gluing trajectory point can be set at the position on the gluing path closest to the robot's initial pose, for example in the middle of the edge nearest the robot. That is, after the robot's initial pose is determined, the middle point on the gluing path of the edge closest to that pose can be taken as the initial gluing trajectory point, after which the other gluing trajectory points can be set on the gluing path according to the robot's inherent properties, yielding the gluing trajectory point information for the glass. It is worth mentioning that this information may include, but is not limited to, the coordinates of the gluing trajectory points, the initial trajectory point, and the direction of travel (i.e., the visiting order of the trajectory points). After the gluing trajectory point information of the glass is obtained, it can be sent to the robot via communication; on receiving it, the robot can control its glue spray nozzle to glue the glass based on that information.
In some embodiments, generating the gluing trajectory point information on the gluing path according to the robot's inherent properties and the robot's initial pose includes:
determining the corners and straight segments in the gluing path;
setting gluing trajectory points at corresponding densities at corners and on straight segments according to the robot's glue discharge rate and movement speed;
determining the visiting order of the gluing trajectory points according to the robot's initial pose, to obtain the gluing trajectory point information.
Specifically, the corners and straight segments in the gluing path can be determined based on the relationship between the coordinate values of the points on the path. Neighboring points at a corner differ in both X and Y coordinates, while neighboring points on a straight segment may share the same X coordinate or the same Y coordinate. For example, assuming the glass to be glued is rectangular, in its gluing path the neighboring points at each of the four corners differ in both X and Y; on the top straight segment, neighboring points share a Y coordinate but differ in X; on the bottom straight segment, neighboring points likewise share a Y coordinate (smaller in value than on the top segment) and differ in X; on the left straight segment, neighboring points share an X coordinate and differ in Y; and on the right straight segment, neighboring points share an X coordinate (larger in value than on the left segment) and differ in Y.
When gluing the glass, the robot controls the glue head based on a certain glue discharge rate. As an inherent property of the robot, the discharge rate affects the gluing result in this embodiment. To conveniently set gluing trajectory points on the gluing path with reference to the robot's discharge rate and so avoid glue build-up, the robot's glue discharge rate can be determined.
Another inherent property of robot motion is that, if the robot is given the same speed parameter at corners and on straight segments, its actual speed at the two will differ: it moves more slowly at corners than on straight segments. In practice the robot's other inherent property, the glue discharge rate, is constant, so discharge rate and speed parameters suitable for the straight segments will cause glue build-up at the corners. In some embodiments, provided the robot still moves along the determined gluing path, the spacing of the gluing trajectory points set at corners can be made larger than the spacing set on straight segments, balancing the movement speed on straight segments against that at corners and thereby resolving the glue build-up that corners may cause. A minimum spacing can also be set on straight segments to limit the spacing of trajectory points there, preventing stutter and glue build-up caused by an excessive number of trajectory points. Alternatively, speed parameters with different values can be set for straight segments and corners to achieve the same balance and solve the build-up problem caused by these inherent properties.
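One simple way to realize the denser/sparser spacing above is to scale the point spacing by the ratio of straight-segment speed to the expected local speed, so that where the robot slows down (corners) the points are spaced farther apart; the proportionality rule, function name, and numbers below are illustrative assumptions, not the patent's formula:

```python
# Hedged sketch of adjusting trajectory-point spacing to compensate for the
# robot slowing down at corners while the glue discharge rate stays constant.
def point_spacing(base_spacing_mm, straight_speed, local_speed):
    """Scale the base spacing by straight/local speed ratio: slower local
    motion gets wider-spaced points."""
    return base_spacing_mm * straight_speed / local_speed

straight = point_spacing(5.0, 200.0, 200.0)   # on a straight segment
corner = point_spacing(5.0, 200.0, 100.0)     # robot slows to half speed
print(straight, corner)  # 5.0 10.0
```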
The visiting order of the gluing trajectory points is determined according to the robot's initial pose, to obtain the gluing trajectory point information. Understandably, to make the robot travel fewer redundant trajectories, the initial trajectory point is set to a point close to the robot's initial pose, for example the trajectory point corresponding to the middle of the edge of the glass nearest the robot. That is, after the robot's initial pose is determined, the trajectory point corresponding to the middle point on the gluing path of the edge closest to the initial pose (or the trajectory point nearest that point) can be taken as the initial gluing trajectory point, after which the other trajectory points can be visited either clockwise or counterclockwise.
In some embodiments, the gluing trajectory point information may specifically include the coordinates of the gluing trajectory points, the coordinates of the initial trajectory point, the visiting order of the trajectory points, the movement speed parameters of the trajectory points, and the like.
In some embodiments, the gluing trajectory point information further includes normal direction information corresponding to the contour points.
Specifically, the normal information may be the angle of the normal vector corresponding to each contour point cloud relative to a fixed reference, or the deviation angle of each point cloud, taken in visiting order, relative to the point cloud preceding it.
FIG. 3 shows a schematic structural diagram of a robot-vision-based article surface gluing apparatus according to yet another embodiment of the present invention. The apparatus includes:
a 3D point cloud acquisition module 300, configured to acquire the 3D point cloud information of the article, i.e., to implement step S100;
a 3D image determination module 310, configured to determine the 3D image information of the article based on the 3D point cloud information and preset 3D image template information, i.e., to implement step S110;
a 2D image mapping module 320, configured to map the 3D image information to 2D image information, i.e., to implement step S120;
a trajectory point generation module 330, configured to generate 2D gluing trajectory points based on the 2D image information, i.e., to implement step S130;
a 3D trajectory point mapping module 340, configured to map the 2D gluing trajectory points to 3D gluing trajectory points, i.e., to implement step S140;
a gluing module 350, configured to perform gluing based on the mapped 3D gluing trajectory points, i.e., to implement step S150.
图4示出了根据本发明又一个实施例的基于机器人视觉的物品3D图像信息获取装置的结构示意图,该装置包括:FIG. 4 shows a schematic structural diagram of a device for acquiring 3D image information of an item based on robot vision according to another embodiment of the present invention, and the device includes:
3D点云获取模块400,用于获取物品的3D点云,即用于实现步骤S200;The 3D point cloud acquisition module 400 is used to acquire the 3D point cloud of the item, that is, to implement step S200;
2D轮廓获取模块410,用于将物品的3D点云映射为2D图像,并基于所述2D图像获取物品的2D轮廓,即用于实现步骤S210;The 2D contour obtaining module 410 is used to map the 3D point cloud of the item into a 2D image, and obtain the 2D contour of the item based on the 2D image, that is, to implement step S210;
3D轮廓获取模块420,用于将2D轮廓映射为3D轮廓点云,即用于实现步骤S220;The 3D contour obtaining module 420 is used for mapping the 2D contour into a 3D contour point cloud, that is, for implementing step S220;
3D图像确定模块430,用于基于所述3D轮廓点云以及预置的3D图像模板信息,确定与物品匹配的3D图像模板信息以及物品的位姿信息,即用于实现步骤S230。The 3D image determination module 430 is configured to determine, based on the 3D contour point cloud and preset 3D image template information, the 3D image template information matching the item and the pose information of the item, that is, for implementing step S230.
上述图3-图4所示的装置实施例中，仅描述了模块的主要功能，各个模块的全部功能与方法实施例中相应步骤相对应，各个模块的工作原理同样可以参照方法实施例中相应步骤的描述，此处不再赘述。另外，虽然上述实施例中限定了功能模块的功能与方法的对应关系，然而本领域技术人员能够理解，功能模块的功能并不局限于上述对应关系，即特定的功能模块还能够实现其他方法步骤或方法步骤的一部分。例如，上述实施例描述了3D点云获取模块300用于实现步骤S100的方法，然而根据实际情况的需要，3D点云获取模块300也可以用于实现步骤S200、S300或S400的方法或方法的一部分。In the apparatus embodiments shown in FIG. 3 and FIG. 4 above, only the main functions of the modules are described; the full functions of each module correspond to the respective steps in the method embodiments, and for the working principle of each module reference may likewise be made to the description of the corresponding steps in the method embodiments, which is not repeated here. In addition, although the above embodiments define a correspondence between the functions of the functional modules and the methods, those skilled in the art will understand that the functions of the functional modules are not limited to that correspondence; that is, a given functional module may also implement other method steps or parts of method steps. For example, the above embodiments describe the 3D point cloud acquisition module 300 as implementing the method of step S100; however, as the actual situation requires, the 3D point cloud acquisition module 300 may also be used to implement all or part of the methods of steps S200, S300 or S400.
本申请还提供了一种计算机可读存储介质，其上存储有计算机程序，该程序被处理器执行时实现上述任一实施方式的方法。需要指出的是，本申请实施方式的计算机可读存储介质存储的计算机程序可以被电子设备的处理器执行，此外，计算机可读存储介质可以是内置在电子设备中的存储介质，也可以是能够插拔地插接在电子设备的存储介质，因此，本申请实施方式的计算机可读存储介质具有较高的灵活性和可靠性。The present application further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method of any one of the foregoing embodiments is implemented. It should be noted that the computer program stored in the computer-readable storage medium of the embodiments of the present application may be executed by the processor of an electronic device; moreover, the computer-readable storage medium may be a storage medium built into the electronic device, or a removable storage medium that can be plugged into the electronic device. Therefore, the computer-readable storage medium of the embodiments of the present application has high flexibility and reliability.
图8示出了根据本发明实施例的一种电子设备的结构示意图,本发明具体实施例并不对电子设备的具体实现做限定。FIG. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention. The specific embodiment of the present invention does not limit the specific implementation of the electronic device.
如图8所示,该电子设备可以包括:处理器(processor)902、通信接口(Communications Interface)904、存储器(memory)906、以及通信总线908。As shown in FIG. 8 , the electronic device may include: a processor (processor) 902 , a communication interface (Communications Interface) 904 , a memory (memory) 906 , and a communication bus 908 .
其中：Wherein:
处理器902、通信接口904、以及存储器906通过通信总线908完成相互间的通信。The processor 902 , the communication interface 904 , and the memory 906 communicate with each other through the communication bus 908 .
通信接口904,用于与其它设备比如客户端或其它服务器等的网元通信。The communication interface 904 is used to communicate with network elements of other devices such as clients or other servers.
处理器902,用于执行程序910,具体可以执行上述方法实施例中的相关步骤。The processor 902 is configured to execute the program 910, and specifically may execute the relevant steps in the foregoing method embodiments.
具体地,程序910可以包括程序代码,该程序代码包括计算机操作指令。Specifically, the program 910 may include program code including computer operation instructions.
处理器902可能是中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路。电子设备包括的一个或多个处理器,可以是同一类型的处理器,如一个或多个CPU;也可以是不同类型的处理器,如一个或多个CPU以及一个或多个ASIC。The processor 902 may be a central processing unit (CPU), or an application specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the electronic device may be the same type of processors, such as one or more CPUs; or may be different types of processors, such as one or more CPUs and one or more ASICs.
存储器906,用于存放程序910。存储器906可能包含高速RAM存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。The memory 906 is used to store the program 910 . Memory 906 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
程序910具体可以用于使得处理器902执行上述方法实施例中的各项操作。The program 910 may specifically be used to cause the processor 902 to perform various operations in the foregoing method embodiments.
概括地说,本发明的发明内容包括:In general, the content of the present invention includes:
一种基于机器人视觉的物品表面涂胶方法,包括:A method for gluing the surface of objects based on robot vision, comprising:
获取物品的3D点云信息;Get the 3D point cloud information of the item;
基于所述3D点云信息以及预置的3D图像模板信息,确定物品的3D图像信息;Determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information;
将所述3D图像信息映射为2D图像信息;mapping the 3D image information to 2D image information;
基于所述2D图像信息生成2D涂胶轨迹点;generating 2D gluing track points based on the 2D image information;
将所述2D涂胶轨迹点映射为3D涂胶轨迹点;mapping the 2D gluing track points to 3D gluing track points;
基于映射的3D涂胶轨迹点进行涂胶。The glue is applied based on the mapped 3D glue track points.
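The fifth step inverts the third: 2D glue track points are lifted back to the 3D surface they were projected from. A minimal numpy sketch of that round trip, assuming an orthographic projection onto the XY plane and a per-pixel index back to the source points (the names and the pixel-index scheme are illustrative assumptions, not the patent's specific implementation):

```python
import numpy as np

def project_orthographic(points, pixel_size=1.0):
    """3D -> 2D sketch: orthographic projection onto the XY plane.

    Each 3D point becomes an integer 2D pixel; an index map records which
    source point produced each pixel so the track can be lifted back to 3D.
    """
    uv = np.floor(points[:, :2] / pixel_size).astype(int)
    uv -= uv.min(axis=0)                               # shift pixels to start at (0, 0)
    index = {tuple(p): i for i, p in enumerate(uv)}    # pixel -> source point index
    return uv, index

def lift_to_3d(track2d, index, points):
    """2D -> 3D sketch: map 2D glue track pixels back to their source 3D points."""
    return np.array([points[index[tuple(p)]] for p in track2d])
```

Because the index map is kept from the projection step, the lift is an exact inverse for points that were projected, which is what lets the 2D track planning stay consistent with the 3D surface.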
可选的，所述3D点云信息包括3D轮廓点云，所述获取物品的3D点云信息，包括：将物品3D点云映射为2D图像；基于所述2D图像获取物品的2D轮廓；将2D轮廓映射为3D轮廓点云。Optionally, the 3D point cloud information includes a 3D contour point cloud, and acquiring the 3D point cloud information of the item includes: mapping the 3D point cloud of the item into a 2D image; acquiring the 2D contour of the item based on the 2D image; and mapping the 2D contour into a 3D contour point cloud.
可选的，所述将物品3D点云映射为2D图像包括：获取物品3D点云后，先执行点云聚类和/或去除离群点处理，再将处理后的3D点云映射为2D图像。Optionally, mapping the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
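One common way to realize the outlier-removal step is statistical filtering on nearest-neighbour distances. A brute-force numpy sketch follows; the thresholding rule is an assumption, and production code would typically use a point cloud library's built-in filter instead:

```python
import numpy as np

def remove_outliers(points, k=3, n_std=2.0):
    """Drop points whose mean distance to their k nearest neighbours is anomalously large."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # pairwise distances
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)   # skip column 0 (distance to self)
    keep = knn <= knn.mean() + n_std * knn.std()
    return points[keep]
```

The O(n²) distance matrix is only suitable for small clouds; a KD-tree would replace it at scale.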
可选的,所述将所述3D图像信息映射为2D图像信息,包括:采用正交投影方法将匹配的3D图像信息映射为2D图像信息。Optionally, the mapping of the 3D image information to the 2D image information includes: using an orthogonal projection method to map the matched 3D image information to the 2D image information.
可选的,所述基于所述2D图像信息生成2D涂胶轨迹点,包括:基于2D图像信息生成2D轮廓;按照预定间隔遍历整个轮廓并生成2D涂胶轨迹点。Optionally, the generating 2D gluing track points based on the 2D image information includes: generating a 2D contour based on the 2D image information; traversing the entire contour at predetermined intervals and generating the 2D gluing track points.
可选的,将轮廓内缩后,再按照预定间隔遍历整个轮廓并生成2D涂胶轨迹点。Optionally, after the contour is indented, the entire contour is traversed at predetermined intervals to generate 2D gluing track points.
可选的,所述2D轨迹点的起点与终点重合。Optionally, the start point and the end point of the 2D trajectory point coincide.
可选的,所述预定间隔的取值范围包括50mm-100mm。Optionally, the value range of the predetermined interval includes 50mm-100mm.
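The track generation described above — walk the closed 2D contour, emit a glue point every fixed interval (e.g. 50mm-100mm), and make the start and end points coincide — can be sketched with arc-length resampling. The helper below is hypothetical, and the optional inward offset of the contour is omitted (in practice that would use a polygon-offset routine):

```python
import numpy as np

def sample_track_points(contour, interval):
    """Walk a closed 2D contour and emit a glue track point every `interval`
    of arc length; the first point is repeated at the end so the track closes."""
    closed = np.vstack([contour, contour[:1]])          # close the polygon
    seg_len = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])   # arc length at each vertex
    s = np.arange(0.0, cum[-1], interval)               # sample positions along the contour
    x = np.interp(s, cum, closed[:, 0])
    y = np.interp(s, cum, closed[:, 1])
    pts = np.column_stack([x, y])
    return np.vstack([pts, pts[:1]])                    # start point == end point
```

Sampling by arc length rather than by vertex keeps the glue bead spacing uniform regardless of how densely the contour itself is polygonized.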
可选的,所述物品的3D图像信息包括与物品匹配的3D图像模板信息和/或物品的位姿信息。Optionally, the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
一种基于机器人视觉的物品表面涂胶装置,包括:A device for gluing the surface of objects based on robot vision, comprising:
3D点云获取模块,用于获取物品的3D点云信息;3D point cloud acquisition module, used to acquire 3D point cloud information of items;
3D图像确定模块,用于基于所述3D点云信息以及预置的3D图像模板信息,确定物品的3D图像信息;a 3D image determination module, configured to determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information;
2D图像映射模块,用于将所述3D图像信息映射为2D图像信息;a 2D image mapping module for mapping the 3D image information to 2D image information;
轨迹点生成模块,用于基于所述2D图像信息生成2D涂胶轨迹点;a trajectory point generation module for generating 2D gluing trajectory points based on the 2D image information;
3D轨迹点映射模块,用于将所述2D涂胶轨迹点映射为3D涂胶轨迹点;a 3D track point mapping module for mapping the 2D gluing track points to 3D gluing track points;
涂胶模块,用于基于映射的3D涂胶轨迹点进行涂胶。Glue module for gluing based on mapped 3D glue track points.
可选的,所述3D点云信息包括3D轮廓点云,所述3D点云获取模块还用于:Optionally, the 3D point cloud information includes a 3D contour point cloud, and the 3D point cloud acquisition module is further used for:
将物品3D点云映射为2D图像;Map the 3D point cloud of the item into a 2D image;
基于所述2D图像获取物品的2D轮廓;obtaining a 2D outline of the item based on the 2D image;
将2D轮廓映射为3D轮廓点云。Map 2D contours to 3D contour point clouds.
可选的，所述将物品3D点云映射为2D图像包括：获取物品3D点云后，先执行点云聚类和/或去除离群点处理，再将处理后的3D点云映射为2D图像。Optionally, mapping the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
可选的,所述2D图像映射模块具体用于:采用正交投影方法将匹配的3D图像信息映射为2D图像信息。Optionally, the 2D image mapping module is specifically configured to: map the matched 3D image information into 2D image information by using an orthogonal projection method.
可选的,所述轨迹点生成模块具体用于:基于2D图像信息生成2D轮廓;按照预定间隔遍历整个轮廓并生成2D涂胶轨迹点。Optionally, the trajectory point generating module is specifically configured to: generate a 2D contour based on 2D image information; traverse the entire contour according to a predetermined interval and generate 2D gluing trajectory points.
可选的,所述轨迹点生成模块还用于:将轮廓内缩后,再按照预定间隔遍历整个轮廓并生成2D轨迹点。Optionally, the trajectory point generating module is further configured to: shrink the contour, traverse the entire contour at predetermined intervals, and generate 2D trajectory points.
可选的,所述2D轨迹点的起点与终点重合。Optionally, the start point and the end point of the 2D trajectory point coincide.
可选的,所述预定间隔的取值范围包括50mm-100mm。Optionally, the value range of the predetermined interval includes 50mm-100mm.
可选的，所述物品的3D图像信息包括与物品匹配的3D图像模板信息和/或物品的位姿信息。Optionally, the 3D image information of the item includes 3D image template information matched with the item and/or pose information of the item.
一种基于机器人视觉的物品3D图像信息获取方法,包括:A method for acquiring 3D image information of objects based on robot vision, comprising:
获取物品的3D点云;Get the 3D point cloud of the item;
将物品的3D点云映射为2D图像,并基于所述2D图像获取物品的2D轮廓;mapping the 3D point cloud of the item into a 2D image, and obtaining a 2D outline of the item based on the 2D image;
将2D轮廓映射为3D轮廓点云;Map 2D contours to 3D contour point clouds;
基于所述3D轮廓点云以及预置的3D图像模板信息,确定与物品匹配的3D图像模板信息以及物品的位姿信息。Based on the 3D contour point cloud and the preset 3D image template information, the 3D image template information matching the item and the pose information of the item are determined.
可选的，基于特征点的匹配算法和/或迭代最近点算法确定与物品匹配的3D图像模板信息以及物品的位姿信息。Optionally, the 3D image template information matching the item and the pose information of the item are determined based on a feature-point matching algorithm and/or an iterative closest point algorithm.
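The iterative closest point option repeats two moves: match each source point to its nearest target point, then solve for the best-fit rigid transform. One such iteration in numpy, using the generic Kabsch/SVD solution (a textbook formulation, not the patent's specific implementation):

```python
import numpy as np

def icp_step(source, target):
    """One point-to-point ICP iteration: nearest-neighbour matching, then the
    best-fit rigid transform (R, t) aligning source to its matches via SVD."""
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=-1)
    matched = target[d.argmin(axis=1)]                  # nearest target point per source point
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Running this step to convergence yields the pose of the item's contour cloud relative to the stored template.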
可选的，所述将物品3D点云映射为2D图像包括：获取物品3D点云后，先执行点云聚类和/或去除离群点处理，再将处理后的3D点云映射为2D图像。Optionally, mapping the 3D point cloud of the item into a 2D image includes: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
可选的,预先获取物品完整的点云轮廓作为物品的3D图像模板信息。Optionally, obtain the complete point cloud outline of the item in advance as the 3D image template information of the item.
可选的，所述预先获取物品完整的点云轮廓包括：从同类物品中选取一标准件，获取该标准件的点云，如果点云不完整，再次获取该标准件的点云，并与上次获得的点云组合，如果仍不完整，重复点云获取和组合的步骤，直到获取到完整的点云后，进一步获取该完整点云的轮廓作为所述物品完整的点云轮廓。Optionally, pre-acquiring the complete point cloud contour of the item includes: selecting a standard part from items of the same kind and acquiring the point cloud of the standard part; if the point cloud is incomplete, acquiring the point cloud of the standard part again and combining it with the previously obtained point cloud; if the result is still incomplete, repeating the acquisition and combination steps until a complete point cloud is obtained; and then further extracting the contour of the complete point cloud as the complete point cloud contour of the item.
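The acquire-and-combine loop above can be sketched as follows. Here `acquire` and `is_complete` are caller-supplied placeholders, since the text does not prescribe how a capture is taken or how completeness is judged:

```python
import numpy as np

def build_template_cloud(acquire, is_complete, max_tries=10):
    """Repeatedly capture the standard part and merge the captures until the
    combined cloud is judged complete; returns the merged template cloud."""
    cloud = acquire()
    tries = 1
    while not is_complete(cloud) and tries < max_tries:
        cloud = np.unique(np.vstack([cloud, acquire()]), axis=0)  # merge, drop duplicate points
        tries += 1
    return cloud
```

The contour of the returned cloud would then be extracted and stored as the item's 3D image template.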
可选的,使用相机获取物品的3D点云,并基于相机的内参将物品的3D点云映射为2D图像。Optionally, use the camera to obtain the 3D point cloud of the item, and map the 3D point cloud of the item into a 2D image based on the camera's internal parameters.
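Mapping camera-frame 3D points to image pixels with the camera's internal parameters is the standard pinhole projection. A sketch with an assumed intrinsic matrix (the values are illustrative, not from the patent):

```python
import numpy as np

def project_with_intrinsics(points, K):
    """Pinhole projection: map 3D camera-frame points to 2D pixels using the
    intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    uvw = (K @ points.T).T
    return uvw[:, :2] / uvw[:, 2:3]       # divide by depth to get pixel coordinates
```

With the intrinsics known, the same K also defines the inverse mapping used to lift 2D contour pixels back onto the 3D cloud.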
一种基于机器人视觉的物品3D图像信息获取装置,包括:A device for acquiring 3D image information of objects based on robot vision, comprising:
3D点云获取模块,用于获取物品的3D点云;3D point cloud acquisition module, used to acquire 3D point cloud of items;
2D轮廓获取模块,用于将物品的3D点云映射为2D图像,并基于所述2D图像获取物品的2D轮廓;a 2D contour acquisition module, used to map the 3D point cloud of the item into a 2D image, and obtain the 2D contour of the item based on the 2D image;
3D轮廓获取模块,用于将2D轮廓映射为3D轮廓点云;3D contour acquisition module for mapping 2D contours to 3D contour point clouds;
3D图像确定模块,用于基于所述3D轮廓点云以及预置的3D图像模板信息,确定与物品匹配的3D图像模板信息以及物品的位姿信息。The 3D image determination module is configured to determine the 3D image template information matched with the item and the pose information of the item based on the 3D contour point cloud and the preset 3D image template information.
可选的，所述3D图像确定模块还用于：基于特征点的匹配算法和/或迭代最近点算法确定与物品匹配的3D图像模板信息以及物品的位姿信息。Optionally, the 3D image determination module is further configured to: determine the 3D image template information matching the item and the pose information of the item based on a feature-point matching algorithm and/or an iterative closest point algorithm.
可选的，所述2D轮廓获取模块还用于：获取物品3D点云后，先执行点云聚类和/或去除离群点处理，再将处理后的3D点云映射为2D图像。Optionally, the 2D contour acquisition module is further configured to: after acquiring the 3D point cloud of the item, first perform point cloud clustering and/or outlier removal, and then map the processed 3D point cloud into a 2D image.
可选的,预先获取物品完整的点云轮廓作为物品的3D图像模板信息。Optionally, obtain the complete point cloud outline of the item in advance as the 3D image template information of the item.
可选的，所述预先获取物品完整的点云轮廓包括：从同类物品中选取一标准件，获取该标准件的点云，如果点云不完整，再次获取该标准件的点云，并与上次获得的点云组合，如果仍不完整，重复点云获取和组合的步骤，直到获取到完整的点云后，进一步获取该完整点云的轮廓作为所述物品完整的点云轮廓。Optionally, pre-acquiring the complete point cloud contour of the item includes: selecting a standard part from items of the same kind and acquiring the point cloud of the standard part; if the point cloud is incomplete, acquiring the point cloud of the standard part again and combining it with the previously obtained point cloud; if the result is still incomplete, repeating the acquisition and combination steps until a complete point cloud is obtained; and then further extracting the contour of the complete point cloud as the complete point cloud contour of the item.
可选的,使用相机获取物品的3D点云,并基于相机的内参将物品的3D点云映射为2D图像。Optionally, use the camera to obtain the 3D point cloud of the item, and map the 3D point cloud of the item into a 2D image based on the camera's internal parameters.
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。In the description of this specification, reference to the terms "one embodiment," "some embodiments," "exemplary embodiment," "example," "specific example," or "some examples" or the like is meant to be used in conjunction with the described embodiments. A particular feature, structure, material, or characteristic described in a manner or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为，表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分，并且本申请的优选实施方式的范围包括另外的实现，其中可以不按所示出或讨论的顺序，包括根据所涉及的功能按基本同时的方式或按相反的顺序，来执行功能，这应被本申请的实施例所属技术领域的技术人员所理解。Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
在流程图中表示或在此以其他方式描述的逻辑和/或步骤，例如，可以被认为是用于实现逻辑功能的可执行指令的定序列表，可以具体实现在任何计算机可读介质中，以供指令执行系统、装置或设备(如基于计算机的系统、包括处理模块的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用，或结合这些指令执行系统、装置或设备而使用。就本说明书而言，"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下：具有一个或多个布线的电连接部(电子装置)，便携式计算机盘盒(磁装置)，随机存取存储器(RAM)，只读存储器(ROM)，可擦除可编辑只读存储器(EPROM或闪速存储器)，光纤装置，以及便携式光盘只读存储器(CDROM)。另外，计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质，因为可以例如通过对纸或其他介质进行光学扫描，接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序，然后将其存储在计算机存储器中。The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processing module, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection portion (electronic device) having one or more wirings, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, otherwise suitably processing it, and then stored in a computer memory.
处理器可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。The processor may be a Central Processing Unit (CPU), other general-purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable processor Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
应当理解，本申请的实施方式的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中，多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如，如果用硬件来实现，和在另一实施方式中一样，可用本领域公知的下列技术中的任一项或他们的组合来实现：具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路，具有合适的组合逻辑门电路的专用集成电路，可编程门阵列(PGA)，现场可编程门阵列(FPGA)等。It should be understood that parts of the embodiments of the present application may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following techniques known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成，所述的程序可以存储于一种计算机可读存储介质中，该程序在执行时，包括方法实施例的步骤之一或其组合。Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
此外,在本申请的各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。In addition, each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
上述提到的存储介质可以是只读存储器,磁盘或光盘等。The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.
尽管上面已经示出和描述了本申请的实施例，可以理解的是，上述实施例是示例性的，不能理解为对本申请的限制，本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present application.

Claims (20)

  1. 一种基于机器人视觉的物品表面涂胶方法,其特征在于,包括:A method for gluing the surface of objects based on robot vision, comprising:
    获取物品的3D点云信息;Get the 3D point cloud information of the item;
    基于所述3D点云信息以及预置的3D图像模板信息,确定物品的3D图像信息;Determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information;
    将所述3D图像信息映射为2D图像信息;mapping the 3D image information to 2D image information;
    基于所述2D图像信息生成2D涂胶轨迹点;generating 2D gluing track points based on the 2D image information;
    将所述2D涂胶轨迹点映射为3D涂胶轨迹点;mapping the 2D gluing track points to 3D gluing track points;
    基于映射的3D涂胶轨迹点进行涂胶。The glue is applied based on the mapped 3D glue track points.
  2. 根据权利要求1所述的物品表面涂胶方法,其特征在于,所述3D点云信息包括3D轮廓点云,所述获取物品的3D点云信息,包括:The method for gluing the surface of an item according to claim 1, wherein the 3D point cloud information includes a 3D contour point cloud, and the acquiring the 3D point cloud information of the item includes:
    将物品3D点云映射为2D图像;Map the 3D point cloud of the item into a 2D image;
    基于所述2D图像获取物品的2D轮廓;obtaining a 2D outline of the item based on the 2D image;
    将2D轮廓映射为3D轮廓点云。Map 2D contours to 3D contour point clouds.
  3. 根据权利要求2所述的物品表面涂胶方法，其特征在于，所述将物品3D点云映射为2D图像包括：获取物品3D点云后，先执行点云聚类和/或去除离群点处理，再将处理后的3D点云映射为2D图像。The method for gluing the surface of an item according to claim 2, wherein mapping the 3D point cloud of the item into a 2D image comprises: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
  4. 根据权利要求1所述的物品表面涂胶方法,其特征在于,所述将所述3D图像信息映射为2D图像信息,包括:采用正交投影方法将匹配的3D图像信息映射为2D图像信息。The method for gluing the surface of an article according to claim 1, wherein the mapping the 3D image information to the 2D image information comprises: using an orthogonal projection method to map the matched 3D image information to the 2D image information.
  5. 根据权利要求1所述的物品表面涂胶方法,其特征在于,所述基于所述2D图像信息生成2D涂胶轨迹点,包括:The method for gluing the surface of an article according to claim 1, wherein the generating a 2D gluing track point based on the 2D image information comprises:
    基于2D图像信息生成2D轮廓;Generate 2D contours based on 2D image information;
    按照预定间隔遍历整个轮廓并生成2D涂胶轨迹点。Traverse the entire contour at predetermined intervals and generate 2D gluing trajectory points.
  6. 根据权利要求5所述的物品表面涂胶方法,其特征在于:将轮廓内缩后,再按照预定间隔遍历整个轮廓并生成2D涂胶轨迹点。The method for gluing the surface of an article according to claim 5, wherein: after shrinking the contour, the entire contour is traversed at predetermined intervals to generate 2D gluing track points.
  7. 根据权利要求5所述的物品表面涂胶方法,其特征在于:所述2D轨迹点的起点与终点重合。The method for gluing the surface of an article according to claim 5, wherein the starting point and the ending point of the 2D trajectory point coincide.
  8. 根据权利要求5所述的物品表面涂胶方法，其特征在于：所述预定间隔的取值范围包括50mm-100mm。The method for gluing the surface of an article according to claim 5, wherein the value range of the predetermined interval includes 50mm-100mm.
  9. 根据权利要求1-8中任一项所述的物品表面涂胶方法,其特征在于,所述物品的3D图像信息包括与物品匹配的3D图像模板信息和/或物品的位姿信息。The method for gluing the surface of an article according to any one of claims 1 to 8, wherein the 3D image information of the article includes 3D image template information matched with the article and/or pose information of the article.
  10. 一种基于机器人视觉的物品表面涂胶装置,其特征在于,包括:A device for gluing an object surface based on robot vision, characterized in that it includes:
    3D点云获取模块,用于获取物品的3D点云信息;3D point cloud acquisition module, used to acquire 3D point cloud information of items;
    3D图像确定模块,用于基于所述3D点云信息以及预置的3D图像模板信息,确定物品的3D图像信息;a 3D image determination module, configured to determine the 3D image information of the item based on the 3D point cloud information and the preset 3D image template information;
    2D图像映射模块,用于将所述3D图像信息映射为2D图像信息;a 2D image mapping module for mapping the 3D image information to 2D image information;
    轨迹点生成模块,用于基于所述2D图像信息生成2D涂胶轨迹点;a trajectory point generation module for generating 2D gluing trajectory points based on the 2D image information;
    3D轨迹点映射模块,用于将所述2D涂胶轨迹点映射为3D涂胶轨迹点;a 3D track point mapping module for mapping the 2D gluing track points to 3D gluing track points;
    涂胶模块,用于基于映射的3D涂胶轨迹点进行涂胶。Glue module for gluing based on mapped 3D glue track points.
  11. 根据权利要求10所述的基于机器人视觉的物品表面涂胶装置，其特征在于，所述3D点云信息包括3D轮廓点云，所述3D点云获取模块还用于：The robot vision-based article surface gluing apparatus according to claim 10, wherein the 3D point cloud information includes a 3D contour point cloud, and the 3D point cloud acquisition module is further configured to:
    将物品3D点云映射为2D图像;Map the 3D point cloud of the item into a 2D image;
    基于所述2D图像获取物品的2D轮廓;obtaining a 2D outline of the item based on the 2D image;
    将2D轮廓映射为3D轮廓点云。Map 2D contours to 3D contour point clouds.
  12. 根据权利要求11所述的物品表面涂胶装置，其特征在于，所述将物品3D点云映射为2D图像包括：获取物品3D点云后，先执行点云聚类和/或去除离群点处理，再将处理后的3D点云映射为2D图像。The article surface gluing apparatus according to claim 11, wherein mapping the 3D point cloud of the item into a 2D image comprises: after acquiring the 3D point cloud of the item, first performing point cloud clustering and/or outlier removal, and then mapping the processed 3D point cloud into a 2D image.
  13. 根据权利要求10所述的物品表面涂胶装置，其特征在于，所述2D图像映射模块具体用于：采用正交投影方法将匹配的3D图像信息映射为2D图像信息。The article surface gluing apparatus according to claim 10, wherein the 2D image mapping module is specifically configured to: map the matched 3D image information into 2D image information by using an orthogonal projection method.
  14. 根据权利要求10所述的物品表面涂胶装置，其特征在于，所述轨迹点生成模块具体用于：The article surface gluing apparatus according to claim 10, wherein the trajectory point generation module is specifically configured to:
    基于2D图像信息生成2D轮廓;Generate 2D contours based on 2D image information;
    按照预定间隔遍历整个轮廓并生成2D涂胶轨迹点。Traverse the entire contour at predetermined intervals and generate 2D gluing trajectory points.
  15. 根据权利要求14所述的物品表面涂胶装置，其特征在于，所述轨迹点生成模块还用于：将轮廓内缩后，再按照预定间隔遍历整个轮廓并生成2D轨迹点。The article surface gluing apparatus according to claim 14, wherein the trajectory point generation module is further configured to: shrink the contour inward, then traverse the entire contour at predetermined intervals and generate 2D trajectory points.
  16. 根据权利要求14所述的物品表面涂胶装置，其特征在于：所述2D轨迹点的起点与终点重合。The article surface gluing apparatus according to claim 14, wherein the start point and the end point of the 2D trajectory points coincide.
  17. 根据权利要求14所述的物品表面涂胶装置，其特征在于：所述预定间隔的取值范围包括50mm-100mm。The article surface gluing apparatus according to claim 14, wherein the value range of the predetermined interval includes 50mm-100mm.
  18. 根据权利要求10-17中任一项所述的物品表面涂胶装置，其特征在于：所述物品的3D图像信息包括与物品匹配的3D图像模板信息和/或物品的位姿信息。The article surface gluing apparatus according to any one of claims 10-17, wherein the 3D image information of the article includes 3D image template information matching the article and/or pose information of the article.
  19. 一种电子设备，其特征在于，包括：存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序，所述处理器执行所述计算机程序时实现权利要求1至9中任一项所述的基于机器人视觉的物品表面涂胶方法。An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the processor executes the computer program, the robot-vision-based method for gluing the surface of an article according to any one of claims 1 to 9 is implemented.
  20. 一种计算机可读存储介质，其上存储有计算机程序，其特征在于，所述计算机程序被处理器执行时实现权利要求1至9中任一项所述的基于机器人视觉的物品表面涂胶方法。A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the robot-vision-based method for gluing the surface of an article according to any one of claims 1 to 9 is implemented.
PCT/CN2021/138582 2021-04-20 2021-12-15 Article surface gluing method and apparatus based on robot vision, device, and medium WO2022222515A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110426175.9 2021-04-20
CN202110426175.9A CN112967368A (en) 2021-04-20 2021-04-20 Object surface gluing method and device based on robot vision, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2022222515A1 true WO2022222515A1 (en) 2022-10-27

Family

ID=76280904

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138582 WO2022222515A1 (en) 2021-04-20 2021-12-15 Article surface gluing method and apparatus based on robot vision, device, and medium

Country Status (2)

Country Link
CN (1) CN112967368A (en)
WO (1) WO2022222515A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115969144A (en) * 2023-01-09 2023-04-18 东莞市智睿智能科技有限公司 Sole glue spraying track generation method, system, equipment and storage medium
CN117670864A (en) * 2023-12-28 2024-03-08 北汽利戴工业技术服务(北京)有限公司 Image recognition system based on industrial camera

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967368A (en) * 2021-04-20 2021-06-15 梅卡曼德(北京)机器人科技有限公司 Object surface gluing method and device based on robot vision, electronic equipment and storage medium
CN113199479B (en) * 2021-05-11 2023-02-10 梅卡曼德(北京)机器人科技有限公司 Track generation method and device, electronic equipment, storage medium and 3D camera
WO2022237544A1 (en) * 2021-05-11 2022-11-17 梅卡曼德(北京)机器人科技有限公司 Trajectory generation method and apparatus, and electronic device and storage medium
CN113420641A (en) * 2021-06-21 2021-09-21 梅卡曼德(北京)机器人科技有限公司 Image data processing method, image data processing device, electronic equipment and storage medium
CN113976400B (en) * 2021-09-30 2022-09-20 歌尔股份有限公司 Gluing method, device, equipment and system
CN114637562B (en) * 2022-03-01 2024-02-02 杭州优工品科技有限公司 Visual display processing method and device for gluing parts, terminal and storage medium
CN115570573B (en) * 2022-12-07 2023-03-17 广东省科学院智能制造研究所 Robot high-performance gluing track planning method, medium and system
CN116958129B (en) * 2023-09-18 2023-12-26 华侨大学 Stone plate brushing path planning device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019139441A1 (en) * 2018-01-12 2019-07-18 삼성전자 주식회사 Image processing device and method
CN111695486A (en) * 2020-06-08 2020-09-22 武汉中海庭数据技术有限公司 High-precision direction signboard target extraction method based on point cloud
CN111744706A (en) * 2020-06-23 2020-10-09 梅卡曼德(北京)机器人科技有限公司 Glue spraying method and device for object, electronic equipment and storage medium
CN111815706A (en) * 2020-06-23 2020-10-23 熵智科技(深圳)有限公司 Visual identification method, device, equipment and medium for single-article unstacking
CN112967368A (en) * 2021-04-20 2021-06-15 梅卡曼德(北京)机器人科技有限公司 Object surface gluing method and device based on robot vision, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112967368A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
WO2022222515A1 (en) Article surface gluing method and apparatus based on robot vision, device, and medium
CN113344769A (en) Method, device and medium for acquiring 3D image information of article based on machine vision
CN109816703B (en) Point cloud registration method based on camera calibration and ICP algorithm
Lysenkov et al. Recognition and pose estimation of rigid transparent objects with a kinect sensor
WO2022237166A1 (en) Trajectory generation method and apparatus, electronic device, storage medium, and 3d camera
CN110084243B (en) File identification and positioning method based on two-dimensional code and monocular camera
CN113199479B (en) Track generation method and device, electronic equipment, storage medium and 3D camera
Munoz-Banon et al. Targetless camera-lidar calibration in unstructured environments
CN111429344B (en) Laser SLAM closed loop detection method and system based on perceptual hashing
CN111784655A (en) Underwater robot recovery positioning method
CN112132876B (en) Initial pose estimation method in 2D-3D image registration
CN113189934A (en) Trajectory generation method and apparatus, electronic device, storage medium, and 3D camera
CN111523547A (en) 3D semantic segmentation method and terminal
Bileschi Fully automatic calibration of lidar and video streams from a vehicle
Natarajan et al. Robust stereo-vision based 3D modelling of real-world objects for assistive robotic applications
WO2022222513A1 (en) Method and apparatus for filling grooves on basis of controlling moving speed of robot
Guo et al. PCAOT: A Manhattan point cloud registration method towards large rotation and small overlap
JP6915326B2 (en) Image processing equipment, image processing systems and programs
WO2022222934A1 (en) Glass adhesive coating method, glass adhesive coating apparatus, electronic device, and storage medium
US20220405506A1 (en) Systems and methods for a vision guided end effector
JP7365567B2 (en) Measurement system, measurement device, measurement method and measurement program
CN113223030A (en) Glass gluing method and device, electronic equipment and storage medium
Zang et al. Camera localization by CAD model matching
CN113223029A (en) Glass gluing method, glass gluing device, electronic equipment and storage medium
Mkhitaryan et al. RGB-D sensor data correction and enhancement by introduction of an additional RGB view

Legal Events

Code  Description
121   Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21937737; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122   Ep: pct application non-entry in european phase (Ref document number: 21937737; Country of ref document: EP; Kind code of ref document: A1)