WO2022237166A1 - Trajectory generation method, device, electronic equipment, storage medium and 3D camera - Google Patents

Trajectory generation method, device, electronic equipment, storage medium and 3D camera

Info

Publication number
WO2022237166A1
WO2022237166A1 (PCT/CN2021/138581, CN2021138581W)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
trajectory
point
points
item
Prior art date
Application number
PCT/CN2021/138581
Other languages
English (en)
French (fr)
Inventor
李辉
魏海永
丁有爽
邵天兰
Original Assignee
梅卡曼德(北京)机器人科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 梅卡曼德(北京)机器人科技有限公司
Publication of WO2022237166A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B05SPRAYING OR ATOMISING IN GENERAL; APPLYING FLUENT MATERIALS TO SURFACES, IN GENERAL
    • B05BSPRAYING APPARATUS; ATOMISING APPARATUS; NOZZLES
    • B05B12/00Arrangements for controlling delivery; Arrangements for controlling the spray area
    • B05B12/08Arrangements for controlling delivery; Arrangements for controlling the spray area responsive to condition of liquid or other fluent material to be discharged, of ambient medium or of target ; responsive to condition of spray devices or of supply means, e.g. pipes, pumps or their drive means
    • B05B12/12Arrangements for controlling delivery; Arrangements for controlling the spray area responsive to condition of liquid or other fluent material to be discharged, of ambient medium or of target ; responsive to condition of spray devices or of supply means, e.g. pipes, pumps or their drive means responsive to conditions of ambient medium or target, e.g. humidity, temperature position or movement of the target relative to the spray apparatus
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Definitions

  • the present application relates to the field of image processing, and more specifically, to a trajectory generation method, device, electronic equipment, storage medium and 3D camera.
  • Intelligent industrial robots can already replace humans in various fields, such as spraying, grabbing, and handling.
  • the control methods of such robots can be roughly divided into two categories. In the first, the movement path and operation mode are pre-planned, and the robot operates strictly according to the pre-planned behavior; the robot cannot adapt when its work objects change, so the items to be sprayed/grabbed/handled must be placed in strict accordance with the pre-planned position and posture. The second is based on the robot's on-site perception: vision technology, for example, enables the robot to perceive the position and placement of items on the industrial site and to plan its path and behavior accordingly.
  • after a two-dimensional trajectory is planned for the robot, the robot performs operations such as spraying based on that trajectory. This is feasible for items with little or no height difference, but the items to be sprayed are not always regular: in some industrial scenes it may be necessary to spray tube-shaped or line-shaped items that extend in all directions, or to spray such items stacked together.
  • when spraying in the above scene, if the robot only moves at a fixed height, or only moves in the XY plane, it cannot adjust along the direction of object extension on the Z axis, that is, in height. Abnormalities such as uneven spraying and collisions between the nozzle and the object will then occur.
  • the present invention has been proposed in order to overcome the above problems or at least partly solve the above problems.
  • one of the innovations of the present invention is that, in order to enable the robot system based on visual recognition to correctly operate on objects with three-dimensional characteristics, the applicant proposes a method for obtaining the two-dimensional trajectory of the robot and then making it three-dimensional.
  • the trajectory points of the robot's operation are first obtained based on the two-dimensional image, then the height information of the trajectory points is obtained, and the two-dimensional trajectory points are combined with their height information to obtain the three-dimensional trajectory points of the robot operation.
  • the robot performs spraying and other operations based on three-dimensional trajectory points, which can greatly improve the accuracy of spraying objects to be sprayed with large height differences.
  • the second innovation of the present invention is that, for a specific industrial scene in which the items to be sprayed extend in all directions, such as steel pipes twisted along the XYZ axes, the spraying requirement is to spray along the twisted steel pipes.
  • the robot performs spraying and other operations based on the three-dimensional trajectory point, which can greatly improve the accuracy of spraying in this specific scene.
  • the third innovation of the present invention is that, for another specific industrial scene, the items to be sprayed are multiple items extending in various directions placed together, for example, multiple twisted steel pipes as described above stacked together; the spraying requirement is only that the surface of the pile of steel pipes be covered, without considering whether paint falls on areas other than the steel pipes.
  • the method first generates the circumscribing rectangle of the pile of items based on its two-dimensional image, generates two-dimensional spraying trajectory points that fully cover the entire circumscribing rectangle, and then, based on the height information of the items, converts the two-dimensional trajectory points into trajectory points that bend in 3D space and fully cover the entire surface.
  • the robot performs operations such as spraying based on the three-dimensional trajectory point, which can effectively avoid collisions with objects during spraying in this specific scene.
  • the present application provides a trajectory generation method, device, electronic equipment, storage medium and 3D camera.
  • the three-dimensional trajectory point is generated based on the two-dimensional trajectory point and the acquired height information of the two-dimensional trajectory point.
  • the acquiring the two-dimensional image of the item includes: mapping the three-dimensional point cloud of the item along the direction perpendicular to the object surface to generate a two-dimensional image.
  • the acquiring the height information of the two-dimensional track point includes: acquiring the height information of the track point according to the depth information of the item pixel at the track point.
  • the generated three-dimensional trajectory points are smoothed in height.
  • the generating the two-dimensional trajectory points based on the circumscribing rectangle includes: generating tangent lines to one side of the circumscribing rectangle at predetermined intervals, using the line segments between each tangent line's intersection points with the two sides of the circumscribing rectangle as dividing lines, and generating two-dimensional trajectory points based on the dividing lines.
  • the generating the two-dimensional trajectory points based on the dividing lines includes: starting from any corner of the circumscribed rectangle, using the corners of the circumscribed rectangle and the intersections of the dividing lines with the sides as inflection points, and traversing the dividing lines and the sides parallel to them in zigzag, so as to generate the two-dimensional trajectory points.
  • when traversing the dividing lines and the sides parallel to them, trajectory points are generated according to a specific distance.
  • the specific distance is calculated based on a preset distance between track points and/or a preset total number of track points.
  • the two-dimensional image acquisition module is used to acquire the two-dimensional image of the item
  • a circumscribing rectangle calculation module configured to calculate the circumscribing rectangle of the two-dimensional image
  • a two-dimensional trajectory point generating module configured to generate two-dimensional trajectory points based on the circumscribed rectangle
  • a height information acquisition module configured to obtain height information of two-dimensional track points
  • the three-dimensional track point generation module is used to generate three-dimensional track points based on the two-dimensional track points and the acquired height information of the two-dimensional track points.
  • the two-dimensional image acquisition module is specifically configured to map the three-dimensional point cloud of the item along the vertical direction of the object surface and generate a two-dimensional image.
  • the height information acquiring module is specifically configured to acquire the height information of the track point according to the depth information of the item pixel at the track point.
  • the altitude information acquisition module is further configured to smooth the height of the generated three-dimensional trajectory points.
  • the two-dimensional trajectory point generation module is specifically configured to generate tangent lines to one side of the circumscribed rectangle at predetermined intervals, use the line segments between each tangent line's intersection points with the two sides of the circumscribed rectangle as dividing lines, and generate two-dimensional trajectory points based on the dividing lines.
  • the generating the two-dimensional trajectory points based on the dividing lines includes: starting from any corner of the circumscribed rectangle, using the corners of the circumscribed rectangle and the intersections of the dividing lines with the sides as inflection points, and traversing the dividing lines and the sides parallel to them in zigzag, so as to generate the two-dimensional trajectory points.
  • when traversing the dividing lines and the sides parallel to them, trajectory points are generated according to a specific distance.
  • the specific distance is calculated based on a preset distance between track points and/or a preset total number of track points.
  • the electronic device of the embodiment of the present application includes a memory, a processor, and a computer program stored on the memory and operable on the processor; when executing the computer program, the processor implements the trajectory generation method of any of the above embodiments.
  • the computer-readable storage medium of the embodiments of the present application stores a computer program thereon, and when the computer program is executed by a processor, the trajectory generation method of any of the above-mentioned embodiments is realized.
  • the 3D camera in the embodiment of the present application includes a memory, a processor, and a computer program stored on the memory and operable on the processor; when executing the computer program, the processor implements the trajectory generation method of any of the above-mentioned embodiments.
  • FIG. 1 is a schematic flow diagram of a trajectory generation method in some embodiments of the present application.
  • FIG. 2 is a schematic flow diagram of a contour-based trajectory generation method in some embodiments of the present application.
  • Figure 3 is a schematic diagram of the outline of an object in some embodiments of the present application.
  • Fig. 4 is a schematic diagram of a method for calculating the midpoint of an item in some embodiments of the present application.
  • FIG. 5 is a schematic flow diagram of a trajectory generation method based on a circumscribing rectangle in some embodiments of the present application
  • Fig. 6 is a schematic diagram of generating a moving path of a robot based on a circumscribing rectangle in some embodiments of the present application;
  • Fig. 7 is a schematic structural diagram of a trajectory generation device in some embodiments of the present application.
  • Fig. 8 is a schematic structural diagram of a contour-based trajectory generation device in some embodiments of the present application.
  • Fig. 9 is a schematic structural diagram of a trajectory generation device based on a circumscribing rectangle in some embodiments of the present application.
  • Fig. 10 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • Fig. 1 shows a trajectory generation method according to one embodiment of the present invention, including:
  • Step S100 acquiring the 3D point cloud of the item
  • Step S110 generating a two-dimensional image of the item based on the three-dimensional point cloud of the item
  • Step S120 generating two-dimensional trajectory points based on the two-dimensional image of the item
  • Step S130 acquiring height information of two-dimensional track points
  • Step S140 generating a three-dimensional trajectory point based on the two-dimensional trajectory point and the acquired height information of the two-dimensional trajectory point.
  • the point cloud information can be obtained through a 3D industrial camera.
  • the 3D industrial camera is generally equipped with two lenses that capture the group of items from different angles; after processing, a three-dimensional representation of the objects can be produced. The group of items to be captured is placed under the visual sensor, and the two lenses shoot simultaneously. Using the relative pose parameters of the two images, a general binocular stereo vision algorithm computes the X, Y and Z coordinate values and orientation of each point, which are converted into point cloud data of the item group to be captured.
  • components such as laser detectors, visible light detectors such as LEDs, infrared detectors, and radar detectors can also be used to generate point clouds, and the present invention does not limit specific implementation methods.
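As an illustration of the binocular principle mentioned above, the classic pinhole relation between disparity and depth can be sketched as follows. This is a simplified sketch: the focal length and baseline values used below are hypothetical, and a real 3D camera pipeline also involves calibration and image rectification, which are omitted here.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic binocular stereo relation: Z = f * B / d.

    focal_px     -- focal length in pixels (illustrative value)
    baseline_m   -- distance between the two lenses in metres
    disparity_px -- horizontal pixel offset of the same scene point
                    between the left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000 px focal length, a 0.1 m baseline and a 50 px disparity, the point lies 2 m from the camera.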
  • the present invention can be used in industrial scenes where robots are used to spray objects.
  • robot vision technology can be used to calculate three-dimensional spraying trajectory points based on the method of the present invention, and the robot can then spray along those three-dimensional trajectory points.
  • the item can be divided into a front side and a back side; correspondingly, spraying is carried out on each side in turn, and the same spraying method is used for both sides.
  • the obtained 3D point cloud data may be orthographically mapped onto a 2D plane.
  • a depth map corresponding to the orthographic projection may also be generated.
  • the two-dimensional color image corresponding to the three-dimensional object area and the depth image corresponding to the two-dimensional color image can be acquired along a direction perpendicular to the sprayed surface of the object.
  • the two-dimensional color map corresponds to the image of the plane area perpendicular to the preset direction; each pixel in the depth map corresponds to a pixel in the two-dimensional color map, and its value is the depth value of that pixel.
  • the obtained two-dimensional image can be a two-dimensional color image, or a binary image containing only two values, i.e. every pixel has the value 0 or a non-zero value such as 255 (a pixel value of 0 is black and a value of 255 is white). It is also possible to acquire a two-dimensional color image first and then convert it into such a binary image.
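A minimal sketch of the binary conversion described above, assuming the greyscale image is given as nested Python lists; a real implementation would use an image-processing library, and the threshold value is an illustrative choice:

```python
def binarize(gray, threshold=0):
    """Map a 2-D greyscale image to a two-valued image: pixels at or
    below `threshold` become 0 (background), anything above becomes
    255 (item pixel)."""
    return [[255 if px > threshold else 0 for px in row] for row in gray]
```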
  • to facilitate the acquisition of two-dimensional images, the camera can shoot facing the sprayed surface, or at a certain angle. If the camera does not face the sprayed surface directly, the sprayed surface must at least be visible in the captured image, otherwise the correction cannot be performed.
  • after obtaining the 3D point cloud, the point cloud of the object can be corrected from the camera coordinate system at the time of shooting to a coordinate system whose Z axis is perpendicular to the surface to be sprayed. That is to say, based on the shooting parameters of the camera, the captured 3D point cloud is "aligned", correcting the object so that the sprayed surface faces the camera.
  • step S120 the method of generating two-dimensional trajectory points is closely related to specific industrial scenarios, and different spraying requirements and scenarios will have different and distinctive trajectory point generation methods.
  • the method for generating two-dimensional trajectory points is not limited, and any method for generating two-dimensional trajectory points can be applied to the method of this embodiment.
  • the number of track points can be preset according to actual needs. Generally speaking, the more trajectory points there are, the more closely the robot's motion path matches the ideal trajectory, and, correspondingly, the higher the control complexity.
  • the total number of track points can be preset, or the interval between track points can be preset. After determining the moving path of the robot, track points can be selected according to the preset track point interval; if instead the total number of track points is preset, the track point interval can be calculated from the path length and the total number, and track points then selected at that interval.
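The interval computation described above can be sketched as follows (n points along a path give n−1 gaps; this is an illustrative helper, not the patent's own implementation):

```python
def track_point_interval(path_length, total_points=None, spacing=None):
    """Return the spacing between trajectory points.

    Either `spacing` is preset directly, or it is derived from the
    path length and a preset total number of points (n points along
    the path correspond to n - 1 equal gaps).
    """
    if spacing is not None:
        return spacing
    if total_points is None or total_points < 2:
        raise ValueError("need a spacing or at least two points")
    return path_length / (total_points - 1)
```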
  • a depth map of the item may be obtained based on the 3D point cloud of the item, the depth map includes depth information of each pixel point, and the depth information represents the distance between the point and the camera.
  • the height of a pixel is its value on the Z axis when the sprayed surface is taken as the XY plane. The depth information of a pixel is correlated with its height information, and the height information can be obtained from the depth information.
  • a two-dimensional trajectory point is a point generated from the two-dimensional image and usually lies within it. To obtain the local height of the item at a two-dimensional trajectory point, one can first find the item pixel on the two-dimensional image most relevant to the trajectory point (if the trajectory points were generated from the contour of the two-dimensional image, the most relevant pixel on the contour can be found instead), such as the pixel closest to the trajectory point, and then compute that pixel's height information.
  • the acquired height information of the item pixel most relevant to the two-dimensional trajectory point is assigned to that trajectory point, thereby converting the two-dimensional trajectory point into a three-dimensional trajectory point.
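A minimal sketch of this height-assignment step. Nearest-pixel lookup is one realization of the "most relevant pixel" strategy the text mentions; the data structures here (a list of item-pixel coordinates and a coordinate-to-height mapping derived from the depth map) are illustrative assumptions:

```python
def lift_to_3d(track_pt, item_pixels, height_of):
    """Turn a 2-D trajectory point (x, y) into (x, y, z) by borrowing
    the height of the nearest item pixel.

    item_pixels -- iterable of (x, y) coordinates belonging to the item
    height_of   -- mapping (x, y) -> height derived from the depth map
    """
    x, y = track_pt
    # Squared Euclidean distance is enough for picking the nearest pixel.
    nearest = min(item_pixels, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    return (x, y, height_of[nearest])
```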
  • in this way, the robot can not only plan a planar trajectory in real time based on vision technology, but can also plan its movement path along the XYZ axes according to the recognized heights. Even if the object to be sprayed is an irregular line-shaped item extending in three directions, or an item with an uneven surface to be sprayed, a good spraying effect can be achieved and uneven spraying avoided.
  • however, the generated trajectory may not be smooth enough in height overall.
  • smooth trajectory points allow the robot to move along them without jerking or stalling, making the spraying more uniform.
  • the generated 3D track points can be smoothed in the height direction, so that the whole track points are smoothly connected in the height.
  • Gaussian smoothing can be used for filtering and smoothing, so that the 3D trajectory points are uniform and smooth.
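One way to realize the Gaussian smoothing of the heights is a simple 1-D kernel applied to the Z values of consecutive trajectory points. The sigma, kernel radius and edge clamping below are illustrative choices, not specified by the patent:

```python
import math

def gaussian_smooth_heights(zs, sigma=1.0, radius=2):
    """Smooth the Z values of consecutive 3-D trajectory points with
    a normalized 1-D Gaussian kernel, so heights connect smoothly."""
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]  # normalize so weights sum to 1
    out = []
    for i in range(len(zs)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = min(max(i + j - radius, 0), len(zs) - 1)  # clamp at ends
            acc += w * zs[k]
        out.append(acc)
    return out
```

Because the kernel is normalized, a trajectory that is already flat is left unchanged, while isolated height spikes are spread over their neighbours.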
  • the object to be sprayed may be in the shape of a line and bend in all directions, while the spraying process requires spraying along the line of the object.
  • Fig. 2 shows the trajectory generation method based on the outline of an item according to one embodiment of the present invention, comprising:
  • Step S200 acquiring a two-dimensional image of the item
  • Step S210 acquiring the outline of the item based on the two-dimensional image of the item
  • Step S220 generating two-dimensional trajectory points based on the outline of the item
  • Step S230 acquiring height information of two-dimensional track points
  • Step S240 generating a three-dimensional trajectory point based on the two-dimensional trajectory point and the acquired height information of the two-dimensional trajectory point.
  • a two-dimensional image of the item may be acquired in a manner similar to steps S100 and S110, which will not be repeated here.
  • in step S210, when the point cloud of the object is collected on the industrial site, the collected point cloud may have gaps or even breaks due to illumination or reflections from the object itself. Therefore, after obtaining the two-dimensional image of the item, it can first be processed with image morphology so that the two-dimensional image is complete and unbroken; it is best to widen the image slightly to facilitate the subsequent contour recognition and trajectory point generation steps.
  • specifically, the image can be dilated to fill the missing and broken positions of the point cloud. For each non-black pixel on the image (non-black pixels are item pixels; where no item exists, the pixel is black), a certain number of surrounding points, such as 8-25 points, can be set to the same value as that pixel. If the two-dimensional image contains only white or black pixels (every pixel has the value 0 or 255), this step amounts to whitening the surroundings of every white pixel; any missing or broken parts of the 2D image are therefore filled with colored pixels, and after this processing the 2D image becomes complete, without gaps or breaks.
  • an erosion operation can then be performed on the dilated 2D image. For each black pixel on the image, a certain number of surrounding points, such as 8-25 points, can be set to 0. This operation amounts to blackening the surroundings of every black pixel, so points near the edge of the two-dimensional image are blackened and the image becomes "thinner" as a whole. The processed two-dimensional image is thus closer to the real object, and the trajectory points generated from it are more accurate.
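The dilation-then-erosion sequence described above (a morphological closing) can be sketched on a binary mask as follows. The 3×3 (8-neighbourhood) structuring element is an illustrative choice; the text allows setting 8-25 surrounding points:

```python
def dilate(img):
    """Set every pixel in the 8-neighbourhood of a white (255) pixel
    to 255 -- fills small gaps and breaks in the item mask."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] == 255:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 255
    return out

def erode(img):
    """Set every pixel in the 8-neighbourhood of a black (0) pixel
    to 0 -- shrinks the mask back towards the real object outline."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] == 0:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 0
    return out
```

Applying `erode(dilate(mask))` closes small holes while largely restoring the original object extent, which is the intent described in the two paragraphs above.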
  • contour analysis may be performed on the object to obtain the contour of the object.
  • the outline of the object can be obtained by edge detection. An edge is the set of pixels whose gray level changes sharply relative to their surroundings, and it is the most basic feature of an image.
  • edges exist between the target, the background and regions, so they are the most important basis for image segmentation.
  • the edge of the object is determined by extracting features of the discontinuous parts of the image, such as positions where the gray values of the pixels on the two sides differ significantly, or turning points where the gray value changes from small to large and back to small.
  • the edge of the item can also be extracted through a deep learning algorithm, which will not be repeated here.
  • the outline of the article can be in any shape, such as regular circles, rectangles, etc., or irregular shapes, such as small parts of various shapes, steel pipes bent in various directions, and the like.
  • FIG. 3 shows the outline of the item in this embodiment.
  • the pixels at the four corners of the outline are corner contour points.
  • the corner contour points can be obtained by computing the circumscribed rectangle of the two-dimensional image of the item, and using the relationship between the item pixels and the circumscribed rectangle to locate the corner contour points of the item contour.
  • the trajectory points on which the robot moves are multiple midpoints acquired at certain intervals in the two-dimensional image of the item.
  • a certain interval can be preset; for example, with a preset interval of 10 mm, after one track point is acquired the next track point is acquired 10 mm further on. Alternatively, the total number of track points can be preset, the interval calculated from the total number and the length of the item, and track points then acquired at that interval.
  • to obtain the trajectory points of the robot movement, one can start from any corner contour point and take one long side of the object contour as reference (the object contour includes two narrow sides and two long sides, and either long side can be used as reference). Contour points are selected along the long side at a certain interval, and for each selected contour point (including corner contour points), the midpoint of the local two-dimensional image of the item (i.e. at that contour point) is calculated as the trajectory point.
  • as shown in Fig. 4, for a contour point A, the tangent a of the long side at A can be calculated first, and then the perpendicular b of the tangent at A. The perpendicular b intersects the two long sides of the item contour at two points A and B; the midpoint C of segment AB is the midpoint sought, and this midpoint is used as a trajectory point for the robot to move along.
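The midpoint construction of Fig. 4 reduces to rotating the tangent direction by 90 degrees to get the perpendicular, intersecting it with both long sides, and averaging the two intersection points. A minimal sketch of the last two pieces (the side-intersection computation itself is omitted, as it depends on how the contour is represented):

```python
def perpendicular(direction):
    """Rotate a 2-D tangent direction by 90 degrees to obtain the
    direction of the perpendicular line b at contour point A."""
    dx, dy = direction
    return (-dy, dx)

def midpoint(a, b):
    """Midpoint C of segment AB, where A and B are the intersections
    of the perpendicular with the two long sides of the contour."""
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
```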
  • Steps S230 and S240 can be implemented in a manner similar to steps S130 and S140, and will not be repeated here.
  • FIG. 5 shows a trajectory generation method based on an item circumscribing rectangle according to an embodiment of the present invention, including:
  • Step S300 acquiring a two-dimensional image of the item
  • Step S310 calculating the circumscribed rectangle of the two-dimensional image
  • Step S320 generating two-dimensional trajectory points based on the circumscribing rectangle
  • Step S330 acquiring height information of two-dimensional track points
  • Step S340 generating a three-dimensional trajectory point based on the two-dimensional trajectory point and the acquired height information of the two-dimensional trajectory point.
  • the two-dimensional image of the item may be acquired in a manner similar to that of steps S100 and S110, which will not be repeated here.
  • in step S310, for an item group composed of multiple items, since it is not necessary to spray along the bending or extension direction of each item, the specific outline of a single item can be ignored. Instead, an area that includes all items can be drawn and blind-sprayed uniformly, with high coverage and a low repetition rate. The area should include all items but not be too large, otherwise the path calculation becomes more complicated and wasteful.
  • specifically, the two-dimensional images of multiple items are treated as a whole and their circumscribing rectangle is calculated; the robot's movement path is then planned based on the area of the circumscribing rectangle. For a single item, the circumscribing rectangle of the item is calculated directly.
  • the circumscribed rectangle can be divided vertically or horizontally at a certain interval: multiple points are taken along one side at the interval, and at each point the tangent of that side is calculated and used as a dividing line.
  • Fig. 6 shows the dividing lines of the item and its circumscribed rectangle. The dividing lines intersect the two sides of the circumscribed rectangle. Along the direction in which the item is divided, the corner points of the circumscribed rectangle on one side and the intersections of the dividing lines with that side are marked P0, P1, P2 ... Pn+1, and the corner points and intersections on the opposite side are marked P0', P1', P2' ... Pn+1'.
  • the interval between adjacent inflection points (for example, between P0 and P1) can be set arbitrarily according to actual needs. Generally speaking, the smaller the interval, the denser the spraying; the larger the interval, the sparser the spraying. In the present invention, 20 mm is preferred.
  • when generating trajectory points, a specific distance between them can be preset directly, or the total number of trajectory points can be preset and the distance between trajectory points calculated from the path length and the total number.
  • the spacing and total number can be determined according to the height variation of the object to be sprayed. Generally speaking, if the surface height of the object varies greatly, more trajectory points should be set so that the obtained points follow the actual trajectory more closely; if the height variation is small, fewer track points can be set.
  • when planning the trajectory points of the robot's moving path, one of the four corner points of the circumscribed rectangle is chosen as the starting position.
  • the robot's moving path starts from the starting trajectory point along the line parallel to the dividing lines. On encountering an inflection point, it turns to follow the side of the circumscribed rectangle or the next dividing line in zigzag, traversing the dividing lines and the sides of the circumscribed rectangle and selecting track points at the preset track point spacing, until all dividing lines and the sides parallel to them have been traversed up to the final corner.
  • the robot’s motion path should start from P 0 , move to P 0 ', turn right 90 degrees after reaching P 0 ', and move to P 1 ' move, turn left 90 degrees after reaching P 1 ', move towards P 1 , and after reaching P 1 , turn right 90 degrees according to the zigzag trajectory, in this way, until passing the rightmost edge and reaching P n+ 1 or P n+1 ', that is, move according to the path of P 0 ⁇ P 0 ' ⁇ P 1 ' ⁇ P 1 ....P n+1 or P n+1 '.
  • Two-dimensional track points are generated on the motion path based on the aforementioned track point intervals.
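The zigzag traversal above (P 0 → P 0 ' → P 1 ' → P 1 → ...) can be sketched as follows. This is a minimal illustration only, assuming an axis-aligned circumscribed rectangle with one corner at the origin, dividing lines parallel to the Y axis, and the 20 mm default interval mentioned above; the function name and parameters are not from the source.

```python
import numpy as np

def zigzag_track_points(rect_w, rect_h, line_gap=20.0, point_gap=20.0):
    """Generate 2D track points along a zigzag over the circumscribed
    rectangle: vertical dividing lines every `line_gap`, connected by
    segments of the top/bottom edges, sampled roughly every `point_gap`."""
    xs = np.arange(0.0, rect_w + 1e-9, line_gap)
    # Inflection points: traverse each dividing line up, then down, alternating.
    corners = []
    for i, x in enumerate(xs):
        if i % 2 == 0:
            corners += [(x, 0.0), (x, rect_h)]
        else:
            corners += [(x, rect_h), (x, 0.0)]
    # Sample evenly spaced track points on every segment of the path.
    points = [corners[0]]
    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        seg_len = float(np.hypot(x1 - x0, y1 - y0))
        n = max(int(seg_len // point_gap), 1)
        for k in range(1, n + 1):
            t = k / n
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return points

pts = zigzag_track_points(100.0, 50.0)  # starts at P0 = (0, 0)
```

With these toy dimensions the path visits P 0 = (0, 0), P 0 ' = (0, 50), P 1 ' = (20, 50), P 1 = (20, 0), and so on across the rectangle.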
  • Steps S330 and S340 can be implemented in a manner similar to that of steps S130 and S140, which will not be repeated here.
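Steps S130/S330 (height acquisition) and S140/S340 (3D point generation), together with the height smoothing described elsewhere in this document, might look like the sketch below. The depth-image indexing convention (row = y, column = x) and the moving-average smoothing window are assumptions for illustration, not details from the source.

```python
import numpy as np

def lift_to_3d(track_points_2d, depth_map, smooth_window=3):
    """Read the height of each 2D track point from the item's depth image,
    smooth the height profile with a moving average (window assumed odd),
    and return 3D track points (u, v, z)."""
    heights = np.array([depth_map[int(round(v)), int(round(u))]
                        for (u, v) in track_points_2d], dtype=float)
    # Moving-average smoothing of the height profile, edge-padded so the
    # output has one height per input track point.
    kernel = np.ones(smooth_window) / smooth_window
    pad = smooth_window // 2
    padded = np.pad(heights, pad, mode="edge")
    smoothed = np.convolve(padded, kernel, mode="valid")
    return [(u, v, z) for (u, v), z in zip(track_points_2d, smoothed)]

depth = np.full((10, 10), 5.0)  # toy depth image: flat item, height 5
pts3d = lift_to_3d([(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)], depth)
```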
  • the present invention proposes a method, based on robot vision technology, for generating complete three-dimensional trajectory points from two-dimensional trajectory points, which avoids the uneven spraying, and the collisions between objects and nozzles, that the object's height may otherwise cause during robot spraying; furthermore, for specific industrial scenarios, even when robots spray irregular objects, such as long strips bending in all directions, the present invention can accurately generate three-dimensional trajectories along the object's surface.
  • the present invention can also accurately generate three-dimensional trajectories that fully cover the entire item or the entire surface of the item. It can be seen that the present invention solves the problems of how to generate a robot's three-dimensional moving trajectory based on robot vision, and how, in specific industrial scenes, to generate a two-dimensional trajectory and then a three-dimensional trajectory based on it.
  • the robots in various embodiments of the present invention may be industrial robot arms, and these robot arms may be general-purpose or dedicated for spraying.
  • the present invention can spray any object, such as glass, table board, steel plate, cushion, steel pipe, etc.
  • the present invention is not limited to the specific application field.
  • the present invention is especially suitable for spraying a single irregular steel pipe along its surface, or for spraying a plurality of such steel pipes stacked together.
  • a spraying head can be installed at the operating end of an industrial robot with a pre-established communication connection, and the spraying trajectory information of the object to be sprayed can be generated according to the spraying size of the spraying head and the preset moving reference point on the spraying head.
  • the moving reference point is used to locate the position of the spraying head as it moves; during movement, the spraying head's position is referenced to this point rather than to any part of the spraying head other than the moving reference point.
  • where exactly the moving reference point lies on the spraying head can be preset according to specific needs, and this embodiment does not limit it.
  • the spraying head can have any shape; for example, it can be rectangular or circular.
  • the moving reference point of the above-mentioned spraying head can be set at a certain end of the spraying head according to requirements, and can also be set at the center of the spraying head.
  • the spraying head can also be circular, and the moving reference point can be the center of the spraying head, or can be located on the circumference of the circular spraying head.
  • the spraying size in this embodiment can be the actual size of the spraying head; for example, when the spraying head is rectangular, its spraying size can include width and length, and when it is circular, its spraying size can be the spraying diameter.
  • the spraying size may also be the size of the shadow the spraying head projects onto the object to be sprayed.
  • the trajectory information can be sent to the industrial robot so that the moving reference point moves along it. Optionally, communication with the industrial robot can be based on the TCP protocol, the HTTP protocol, or the GRPC protocol (Google Remote Procedure Call protocol), over which the trajectory information is then sent.
  • in this way, the spraying trajectory information of the object to be sprayed is generated and provided to the industrial robot, so that the moving reference point moves along the spraying trajectory, achieving full coverage of the surface of the object to be sprayed without repeated spraying.
  • the initial trajectory point can be set at the position on the trajectory path closest to the robot's initial pose, for example, in the middle of the side nearest the robot. That is, after the robot's initial pose is determined, the middle point of the trajectory path on the side closest to that pose can be used as the initial trajectory point; other trajectory points are then set on the trajectory path according to the robot's inherent properties, yielding the trajectory point information.
  • the trajectory point information may include, but is not limited to, the coordinates of the trajectory points, the initial trajectory point, and the direction of the trajectory points (i.e., the sequence in which they are visited).
  • the trajectory point information can be sent to the robot over the communication link; when the robot receives it, it can control its own spraying head to spray based on that information.
  • generating trajectory point information on the trajectory path includes:
  • determining the corners and straight segments of the trajectory path, which may be done based on the relationships between the coordinate values of points on the path.
  • at a corner, both the X and Y coordinates of adjacent points differ, whereas on a straight segment either the X coordinates or the Y coordinates of adjacent points are the same.
  • for example, when the object to be sprayed is rectangular, the X and Y coordinates of adjacent points at each of the four corners differ. On the upper straight segment, the Y coordinates of adjacent points are the same but the X coordinates differ; on the lower straight segment, the Y coordinates are likewise the same (and smaller than on the upper segment) while the X coordinates differ; on the left straight segment, the X coordinates are the same but the Y coordinates differ; and on the right straight segment, the X coordinates are the same (and larger than on the left segment) while the Y coordinates differ.
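The coordinate-comparison rule above, for an axis-aligned trajectory path, can be sketched as follows. This is a hypothetical helper, not from the source; the exact-equality test assumes integer or snapped coordinates.

```python
def classify_segments(path):
    """Label each consecutive point pair on an axis-aligned trajectory path
    as 'straight' (X or Y unchanged between adjacent points) or 'corner'
    (both X and Y change), per the coordinate-comparison rule."""
    labels = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        if x0 == x1 or y0 == y1:
            labels.append("straight")
        else:
            labels.append("corner")
    return labels

# A bottom edge, a diagonal corner step, then a right edge:
path = [(0, 0), (10, 0), (11, 1), (11, 10)]
labels = classify_segments(path)  # ['straight', 'corner', 'straight']
```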
  • when spraying, the robot controls the spraying head to spray at a certain spray rate. As an inherent attribute of the robot, the spray rate affects the spraying result in this embodiment; to conveniently set trajectory points on the trajectory path with reference to the robot's spray rate and avoid material buildup, the robot's spray rate can be determined.
  • the inherent properties of robot motion are also reflected in the fact that, given the same motion speed parameters at corners and on straight segments, the robot's actual speeds there differ: it moves more slowly at corners than on straight segments.
  • since the spray rate remains unchanged, a spray rate and motion speed parameters suited to straight segments will cause material to accumulate at corners.
  • to address this, the spacing between the trajectory points set at corners on the trajectory path can be made larger than the spacing between the trajectory points set on straight segments, balancing the motion speed on straight segments against the motion speed at corners and thereby avoiding the buildup that corners may otherwise cause.
  • in addition, a minimum spacing can be set on straight segments to limit the trajectory point spacing there, preventing the robot from stalling because of too many trajectory points on a straight segment. It is also possible to set different motion speed parameter values for straight segments and corners to balance the motion speeds and solve the buildup problem caused by these inherent properties.
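A minimal sketch of the corner-versus-straight spacing policy just described; all numeric defaults are illustrative assumptions, not values from the source.

```python
def track_point_gap(segment_type, straight_gap=20.0, corner_gap=30.0,
                    min_straight_gap=10.0):
    """Per-segment track-point spacing: corners get a larger gap than
    straight runs (to offset the robot's slower cornering speed at a
    constant spray rate); straight-run spacing is clamped to a minimum
    so the path is not over-sampled."""
    if segment_type == "corner":
        return corner_gap
    return max(straight_gap, min_straight_gap)
```

For example, `track_point_gap("corner")` returns the wider corner spacing, while a straight-run request below the minimum is clamped up to it.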
  • the initial trajectory point is set to a point close to the robot's initial pose, for example, the trajectory point corresponding to the middle of the side of the object to be sprayed that faces the robot. That is, after the robot's initial pose is determined, the trajectory point corresponding to the middle point of the trajectory path on the side closest to that pose (or the trajectory point nearest that point) can be used as the initial trajectory point; the remaining trajectory points can then be traversed clockwise or counterclockwise.
  • the trajectory point information may specifically include trajectory point coordinates, initial trajectory point coordinates, the trajectory point traversal sequence, trajectory point motion speed parameters, and the like.
  • the trajectory point information further includes: normal information corresponding to the contour point.
  • the normal direction information can be the angle of the normal vector corresponding to each contour point cloud relative to a fixed reference, or the deviation angle of each point cloud in the contour relative to the preceding point cloud.
  • Fig. 7 shows a schematic structural diagram of a trajectory generating device according to yet another embodiment of the present invention, the device comprising:
  • the three-dimensional point cloud acquisition module 400 is used to acquire the three-dimensional point cloud of the item, that is, to realize step S100;
  • a two-dimensional image generating module 410 configured to generate a two-dimensional image of the item based on the three-dimensional point cloud of the item, that is, to implement step S110;
  • a two-dimensional trajectory point generation module 420 configured to generate two-dimensional trajectory points based on the two-dimensional image of the item, that is, to implement step S120;
  • the height information acquisition module 430 is used to acquire the height information of the two-dimensional track points, that is, to implement step S130;
  • the 3D track point generating module 440 is configured to generate a 3D track point based on the 2D track point and the acquired height information of the 2D track point, that is, to implement step S140.
  • Fig. 8 shows a schematic structural diagram of a trajectory generation device based on an item outline according to yet another embodiment of the present invention, the device comprising:
  • a two-dimensional image acquisition module 500 configured to acquire a two-dimensional image of an item, that is, to implement step S200;
  • a profile acquisition module 510 configured to acquire the profile of the item based on the two-dimensional image of the item, that is, to implement step S210;
  • a two-dimensional trajectory point generation module 520 configured to generate a two-dimensional trajectory point based on the outline of the item, that is, to implement step S220;
  • a height information acquisition module 530 configured to acquire height information of two-dimensional track points, that is, to implement step S230;
  • the 3D track point generation module 540 is configured to generate a 3D track point based on the 2D track point and the acquired height information of the 2D track point, that is, to implement step S240.
  • Fig. 9 shows a schematic structural diagram of a trajectory generation device based on a circumscribing rectangle according to yet another embodiment of the present invention, the device includes:
  • a two-dimensional image acquisition module 600 configured to acquire a two-dimensional image of an item, that is, to implement step S300;
  • a circumscribing rectangle calculation module 610 configured to calculate the circumscribing rectangle of the two-dimensional image, that is, to implement step S310;
  • a two-dimensional trajectory point generation module 620 configured to generate a two-dimensional trajectory point based on the circumscribed rectangle, that is, to implement step S320;
  • a height information acquisition module 630 configured to acquire height information of two-dimensional trajectory points, that is, to implement step S330;
  • the 3D track point generation module 640 is configured to generate a 3D track point based on the 2D track point and the acquired height information of the 2D track point, that is, to implement step S340.
  • the above embodiment has described the method by which the three-dimensional track point generation module 640 implements step S340; however, according to actual needs, the three-dimensional track point generation module 640 can also be used to implement all or part of the methods of steps S310, S320, or S330.
  • the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method in any one of the above-mentioned implementation modes is implemented.
  • the computer program stored in the computer-readable storage medium in the embodiment of the present application can be executed by the processor of the electronic device.
  • the computer-readable storage medium can be a storage medium built into the electronic device, or a storage medium that can be plugged into and removed from the electronic device. Therefore, the computer-readable storage medium in the embodiments of the present application has high flexibility and reliability.
  • FIG. 10 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
  • the electronic device may be a 3D camera.
  • the electronic device may include: a processor (processor) 902, a communication interface (Communications Interface) 904, a memory (memory) 906, and a communication bus 908.
  • the processor 902 , the communication interface 904 , and the memory 906 communicate with each other through the communication bus 908 .
  • the communication interface 904 is used to communicate with network elements of other devices such as clients or other servers.
  • the processor 902 is configured to execute the program 910, and may specifically execute relevant steps in the foregoing method embodiments.
  • the program 910 may include program codes including computer operation instructions.
  • the processor 902 may be a central processing unit CPU, or an ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the one or more processors included in the electronic device may be of the same type, such as one or more CPUs, or may be different types of processors, such as one or more CPUs and one or more ASICs.
  • the memory 906 is used to store the program 910 .
  • the memory 906 may include a high-speed RAM memory, and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the program 910 may be specifically configured to cause the processor 902 to perform various operations in the foregoing method embodiments.
  • the content of the present invention includes:
  • a trajectory generation method comprising:
  • acquiring a three-dimensional point cloud of the item; generating a two-dimensional image of the item based on the three-dimensional point cloud; generating two-dimensional trajectory points based on the two-dimensional image; acquiring height information of the two-dimensional trajectory points; and generating three-dimensional trajectory points based on the two-dimensional trajectory points and the acquired height information of the two-dimensional trajectory points.
  • generating the two-dimensional image of the item based on the three-dimensional point cloud of the item includes mapping the three-dimensional point cloud of the item along a vertical direction of the object surface and generating a two-dimensional image.
  • the generating two-dimensional track points based on the two-dimensional image of the item includes: generating two-dimensional track points based on a preset number of track points and/or track point intervals.
  • the acquiring the height information of the two-dimensional track point includes: acquiring the height information of the track point based on the depth information of the item pixel at the track point.
  • it also includes: smoothing the height of the generated three-dimensional trajectory points.
  • the generating the two-dimensional track point based on the two-dimensional image of the item includes: generating the two-dimensional track point based on the outline of the item and/or based on a bounding rectangle of the two-dimensional image.
  • a trajectory generating device comprising:
  • the 3D point cloud acquisition module is used to obtain the 3D point cloud of the item
  • a two-dimensional image generating module configured to generate a two-dimensional image of the item based on the three-dimensional point cloud of the item
  • a two-dimensional trajectory point generating module configured to generate two-dimensional trajectory points based on the two-dimensional image of the item
  • a height information acquisition module configured to obtain height information of two-dimensional track points
  • the three-dimensional track point generation module is used to generate three-dimensional track points based on the two-dimensional track points and the acquired height information of the two-dimensional track points.
  • the two-dimensional image generation module is specifically configured to map the three-dimensional point cloud of the item along the vertical direction of the object surface and generate a two-dimensional image.
  • the two-dimensional track point generation module generates two-dimensional track points based on a preset number of track points and/or a track point interval.
  • the height information acquisition module is specifically configured to acquire the height information of the track point according to the depth information of the item pixel at the track point.
  • the 3D trajectory point generating module is further configured to smooth the height of the generated 3D trajectory points.
  • the two-dimensional track point generation module generates two-dimensional track points based on the outline of the item and/or based on the bounding rectangle of the two-dimensional image.
  • a trajectory generation method comprising:
  • acquiring a two-dimensional image of the item; acquiring the outline of the item based on the two-dimensional image; generating two-dimensional trajectory points based on the outline of the item; acquiring height information of the two-dimensional trajectory points; and generating three-dimensional trajectory points based on the two-dimensional trajectory points and the acquired height information of the two-dimensional trajectory points.
  • the acquiring the height information of the two-dimensional track point includes: acquiring the height information of the track point according to the depth information of the item pixel at the track point.
  • dilation and/or erosion operations are performed on the two-dimensional image.
  • the generating two-dimensional trajectory points based on the outline of the item includes: from the first corner outline point to the second corner outline point, selecting a outline point at a specific distance on the outline of the item, based on The contour points generate two-dimensional trajectory points.
  • the specific distance is calculated based on a preset distance between track points and/or a preset total number of track points.
  • the first corner contour point and the second corner contour point are on the same long side of the object contour.
  • the generating two-dimensional trajectory points based on the contour points includes: calculating a midpoint at each selected contour point, and using the midpoint as a trajectory point of the item.
  • the calculating the midpoint includes: calculating the tangent line of the item outline at the selected outline point, calculating the perpendicular to the tangent line at that outline point, and calculating the midpoint of the line segment connecting the two intersection points of the perpendicular with the two long edges of the item outline.
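The tangent/perpendicular midpoint construction can be sketched as follows for a contour given as an ordered array of 2D points. The neighbour-based tangent estimate and the segment-intersection test are implementation assumptions for illustration, not details from the source.

```python
import numpy as np

def midpoint_across(contour, i, neighbor=2):
    """Estimate the tangent at contour point i from its neighbours, cast
    the perpendicular (normal) across the contour, and return the midpoint
    of the chord between point i and the far crossing of the contour."""
    p = contour[i]
    tangent = contour[(i + neighbor) % len(contour)] - contour[i - neighbor]
    tangent = tangent / np.linalg.norm(tangent)
    normal = np.array([-tangent[1], tangent[0]])
    # Intersect the normal line p + t*normal with every contour segment.
    crossings = []
    for a, b in zip(contour, np.roll(contour, -1, axis=0)):
        # Solve p + t*normal == a + s*(b - a) for t and s, s in [0, 1].
        m = np.column_stack((normal, a - b))
        if abs(np.linalg.det(m)) < 1e-12:
            continue  # segment parallel to the normal
        t, s = np.linalg.solve(m, a - p)
        if 0.0 <= s <= 1.0 and abs(t) > 1e-6:
            crossings.append(p + t * normal)
    if not crossings:
        return p
    far = max(crossings, key=lambda q: np.linalg.norm(q - p))
    return (p + far) / 2.0

# Toy strip contour: bottom edge left-to-right, top edge right-to-left.
bottom = [(x, 0.0) for x in range(11)]
top = [(x, 4.0) for x in range(10, -1, -1)]
strip = np.array(bottom + top, dtype=float)
mid = midpoint_across(strip, 5)  # centerline point above (5, 0)
```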
  • a trajectory generating device comprising:
  • the two-dimensional image acquisition module is used to acquire the two-dimensional image of the item
  • a profile acquisition module configured to acquire the profile of the item based on the two-dimensional image of the item
  • a two-dimensional track point generating module configured to generate a two-dimensional track point based on the outline of the item
  • a height information acquisition module configured to obtain height information of two-dimensional track points
  • the three-dimensional track point generation module is used to generate three-dimensional track points based on the two-dimensional track points and the acquired height information of the two-dimensional track points.
  • the height information acquisition module is specifically configured to acquire the height information of the track point according to the depth information of the item pixel at the track point.
  • the outline acquisition module is further configured to perform dilation and/or erosion operations on the two-dimensional image before acquiring the outline of the item based on the two-dimensional image of the item.
  • the two-dimensional trajectory point generating module is specifically configured to select a contour point at a specific distance on the object contour from the first corner contour point to the second corner contour point, based on the contour point Generate 2D trajectory points.
  • the specific distance is calculated based on a preset distance between track points and/or a preset total number of track points.
  • the first corner contour point and the second corner contour point are on the same long side of the object contour.
  • the generating two-dimensional trajectory points based on the contour points includes: calculating a midpoint at each selected contour point, and using the midpoint as a trajectory point of the item.
  • the calculating the midpoint includes: calculating the tangent line of the item outline at the selected outline point, calculating the perpendicular to the tangent line at that outline point, and calculating the midpoint of the line segment connecting the two intersection points of the perpendicular with the two long edges of the item outline.
  • a trajectory generation method, characterized in that it comprises:
  • acquiring a two-dimensional image of the item; calculating the circumscribed rectangle of the two-dimensional image; generating two-dimensional trajectory points based on the circumscribed rectangle; acquiring height information of the two-dimensional trajectory points; and generating three-dimensional trajectory points based on the two-dimensional trajectory points and the acquired height information of the two-dimensional trajectory points.
  • the acquiring the two-dimensional image of the item includes mapping the three-dimensional point cloud of the item along a vertical direction of the object surface to generate a two-dimensional image.
  • the acquiring the height information of the two-dimensional track point includes: acquiring the height information of the track point according to the depth information of the item pixel at the track point.
  • the height of the generated 3D trajectory points is smoothed.
  • the generating the two-dimensional trajectory points based on the circumscribed rectangle includes: generating tangent lines of one side of the circumscribed rectangle at predetermined intervals, taking the line segment between each tangent line's intersection points with two sides of the circumscribed rectangle as a dividing line, and generating the two-dimensional trajectory points based on the dividing lines.
  • the generating the two-dimensional trajectory points based on the dividing lines includes: starting from any corner point of the circumscribed rectangle, using the corner points of the circumscribed rectangle and the intersection points of the dividing lines with the sides as inflection points, and traversing in a zigzag the dividing lines and the sides parallel to them, thereby generating the two-dimensional trajectory points.
  • when traversing the dividing lines and the sides parallel to them, trajectory points are generated at a specific distance.
  • the specific distance is calculated based on a preset distance between track points and/or a preset total number of track points.
  • a trajectory generating device comprising:
  • the two-dimensional image acquisition module is used to acquire the two-dimensional image of the item
  • a circumscribing rectangle calculation module configured to calculate the circumscribing rectangle of the two-dimensional image
  • a two-dimensional trajectory point generating module configured to generate two-dimensional trajectory points based on the circumscribed rectangle
  • a height information acquisition module configured to obtain height information of two-dimensional track points
  • the three-dimensional track point generation module is used to generate three-dimensional track points based on the two-dimensional track points and the acquired height information of the two-dimensional track points.
  • the two-dimensional image acquisition module is specifically configured to map the three-dimensional point cloud of the item along the vertical direction of the object surface and generate a two-dimensional image.
  • the height information acquiring module is specifically configured to acquire the height information of the track point according to the depth information of the item pixel at the track point.
  • the height information acquisition module is further configured to smooth the heights of the generated three-dimensional trajectory points.
  • the two-dimensional trajectory point generation module is specifically configured to generate tangent lines of one side of the circumscribed rectangle at predetermined intervals, take the line segment between each tangent line's intersection points with two sides of the circumscribed rectangle as a dividing line, and generate the two-dimensional trajectory points based on the dividing lines.
  • the generating the two-dimensional trajectory points based on the dividing lines includes: starting from any corner point of the circumscribed rectangle, using the corner points of the circumscribed rectangle and the intersection points of the dividing lines with the sides as inflection points, and traversing in a zigzag the dividing lines and the sides parallel to them, thereby generating the two-dimensional trajectory points.
  • when traversing the dividing lines and the sides parallel to them, trajectory points are generated at a specific distance.
  • the specific distance is calculated based on a preset distance between track points and/or a preset total number of track points.
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate or transmit a program for use in or in conjunction with an instruction execution system, device or device.
  • computer-readable media include the following: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), fiber-optic devices, and portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or other suitable medium on which the program can be printed, since the program can be read, for example, by optically scanning the paper or other medium, followed by editing, interpretation or other suitable processing if necessary.
  • the program is processed electronically and stored in computer memory.
  • the processor can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • each part of the embodiments of the present application may be realized by hardware, software, firmware or a combination thereof.
  • various steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, the steps can be implemented by any one, or a combination, of the following techniques known in the art: discrete logic circuits, ASICs with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, each unit may exist separately physically, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. If the integrated modules are realized in the form of software function modules and sold or used as independent products, they can also be stored in a computer-readable storage medium.
  • the storage medium mentioned above may be a read-only memory, a magnetic disk or an optical disk, and the like.

Abstract

This application discloses a trajectory generation method and apparatus, an electronic device, a storage medium, and a 3D camera. The trajectory generation method includes: acquiring a two-dimensional image of an item; calculating a circumscribed rectangle of the two-dimensional image; generating two-dimensional trajectory points based on the circumscribed rectangle; acquiring height information of the two-dimensional trajectory points; and generating three-dimensional trajectory points based on the two-dimensional trajectory points and the acquired height information of the two-dimensional trajectory points. For a specific industrial scenario, in which the items to be sprayed are multiple items bending in all directions placed together and the spraying requirement is simply to coat the surface of the whole pile, without considering whether anything beyond an individual item is sprayed, the present invention generates, based on the circumscribed rectangle of the items, two-dimensional spraying trajectory points that fully cover the entire circumscribed rectangle, and then converts the two-dimensional trajectory points, based on the height information of the items, into trajectory points that twist in three-dimensional space and fully cover the entire surface. This enables the robot to perform operations such as spraying based on the three-dimensional trajectory points, effectively avoiding collisions with the items when spraying in this specific scenario.

Description

Trajectory generation method and apparatus, electronic device, storage medium, and 3D camera
Priority Claim
This application claims priority to Chinese patent application No. CN202110511370.1, filed on May 11, 2021 and entitled "Trajectory generation method and apparatus, electronic device, storage medium, and 3D camera", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing, and more particularly to a trajectory generation method and apparatus, an electronic device, a storage medium, and a 3D camera.
Background Art
Intelligent industrial robots can already replace humans in various fields, for example spraying, grasping, and handling. Robot control approaches fall roughly into two categories. In the first, the robot's moving path and mode of operation are planned in advance and the robot operates strictly according to the pre-planned behavior; since, operating this way, the robot does not adapt its actions to the objects in the scene, its work objects, e.g., items to be sprayed, grasped, or handled, must be placed exactly in the pre-planned positions and poses. In the second, the robot's path and behavior are planned based on its perception of the work site, for example using vision technology that lets the robot sense the positions and placement of items on the industrial floor.
In the latter approach, current applications, for reasons of generality and system complexity, plan a two-dimensional trajectory for the robot and then have it perform operations such as spraying along that trajectory. This is feasible for items with little or no height variation; however, the items to be sprayed are not always regular. In other words, some industrial scenarios may require spraying tubular or strip-shaped items that extend in all directions, or spraying stacks of such items. In these scenarios, if the robot sprays only at a fixed height, i.e., moves only in the XY plane without adjusting along the Z axis (height) to follow the direction in which the object extends, anomalies such as uneven spraying and collisions between the spray head and the object will occur.
Summary of the Invention
In view of the above problems, the present invention is proposed to overcome them or at least partially solve them. Specifically, one innovation of the present invention is that, to enable a vision-based robot system to operate correctly on objects with three-dimensional characteristics, the applicant proposes a method of acquiring a two-dimensional trajectory of the robot and then converting it into three dimensions. In this method, after a two-dimensional image of the item is obtained from a three-dimensional point cloud, the robot's trajectory points are first derived from the two-dimensional image, the height information of those trajectory points is then acquired, and the two-dimensional trajectory points are combined with their height information to obtain the robot's three-dimensional trajectory points. A robot performing operations such as spraying based on these three-dimensional trajectory points can spray items with large height differences far more accurately.
A second innovation of the present invention concerns a specific industrial scenario in which the item to be sprayed extends in all directions, for example a steel pipe twisting along the X, Y, and Z axes, and the spraying requirement is to spray along the twisting pipe. The applicant has devised a trajectory generation method dedicated to this and similar scenarios: first, the item's contour is generated from its two-dimensional image; two-dimensional trajectory points following the item's twists are then generated from the contour; finally, those two-dimensional trajectory points are converted, using the item's height information, into trajectory points that follow the item's twists in three-dimensional space. A robot performing operations such as spraying based on these three-dimensional trajectory points can spray far more accurately in this scenario.
A third innovation of the present invention concerns another specific industrial scenario in which the items to be sprayed are multiple items extending in all directions placed together, for example several twisted steel pipes as described above stacked on one another, and the spraying requirement is simply to coat the surface of the whole pile, without considering whether anything beyond the pipes is sprayed. The applicant has devised a trajectory generation method dedicated to this and similar scenarios: first, the circumscribed rectangle of the pile of items is generated from its two-dimensional image; based on the circumscribed rectangle, two-dimensional spraying trajectory points fully covering the entire rectangle are generated; these two-dimensional trajectory points are then converted, using the items' height information, into trajectory points that twist in three-dimensional space and fully cover the entire surface. A robot performing operations such as spraying based on these three-dimensional trajectory points can effectively avoid collisions with the items in this scenario.
本申请权利要求和说明书所披露的所有方案均具有上述一个或多个创新之处,相应地,能够解决上述一个或多个技术问题。具体地,本申请提供一种轨迹生成方法、装置、电子设备、存储介质和3D相机。
本申请的实施方式的轨迹生成方法包括:
获取物品的二维图像;
计算所述二维图像的外接矩形;
基于所述外接矩形生成二维轨迹点;
获取二维轨迹点的高度信息;
基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
在某些实施方式中,所述获取物品的二维图像包括将物品的三维点云沿物体表面的垂直方向进行映射并生成二维图像。
在某些实施方式中,所述获取二维轨迹点的高度信息包括:根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
在某些实施方式中,对生成的三维轨迹点在高度上进行平滑处理。
在某些实施方式中,所述基于所述外接矩形生成二维轨迹点包括:在外接矩形的一边按照预定间隔生成该边的切线,将切线与外接矩形两边的交点之间的线段作为分界线,基于所述分界线生成二维轨迹点。
在某些实施方式中,所述基于分界线生成所述二维轨迹点包括:从外接矩形任意角点开始,以外接矩形的角点以及分界线与边的交点作为拐点,以Z字形遍历分界线以及与分界线平行的边,从而生成所述二维轨迹点。
在某些实施方式中,在遍历分界线以及与分界线平行的边时,按照特定的距离生成轨迹点。
在某些实施方式中,所述特定的距离是基于预设的轨迹点间距和/或预设的总轨迹点数计算得到的。
本申请的实施方式的轨迹生成装置包括:
二维图像获取模块,用于获取物品的二维图像;
外接矩形计算模块,用于计算所述二维图像的外接矩形;
二维轨迹点生成模块,用于基于所述外接矩形生成二维轨迹点;
高度信息获取模块,用于获取二维轨迹点的高度信息;
三维轨迹点生成模块,用于基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
在某些实施方式中,所述二维图像获取模块具体用于将物品的三维点云沿物体表面的垂直方向进行映射并生成二维图像。
在某些实施方式中,所述高度信息获取模块具体用于根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
在某些实施方式中,所述高度信息获取模块还用于对生成的三维轨迹点在高度上进行平滑处理。
在某些实施方式中,所述二维轨迹点生成模块具体用于在外接矩形的一边按照预定间隔生成该边的切线,将切线与外接矩形两边的交点之间的线段作为分界线,基于所述分界线生成二维轨迹点。
在某些实施方式中,所述基于分界线生成所述二维轨迹点包括:从外接矩形任意角点开始,以外接矩形的角点以及分界线与边的交点作为拐点,以Z字形遍历分界线以及与分界线平行的边,从而生成所述二维轨迹点。
在某些实施方式中，在遍历分界线以及与分界线平行的边时，按照特定的距离生成轨迹点。
在某些实施方式中,所述特定的距离是基于预设的轨迹点间距和/或预设的总轨迹点数计算得到的。
本申请的实施方式的电子设备包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述任一实施方式的轨迹生成方法。
本申请的实施方式的计算机可读存储介质其上存储有计算机程序,所述计算机程序被处理器执行时实现上述任一实施方式的轨迹生成方法。
本申请的实施方式的3D相机包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述任一实施方式的轨迹生成方法。
本申请的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本申请的实践了解到。
附图说明
本申请的上述和/或附加的方面和优点从结合下面附图对实施方式的描述中将变得明显和容易理解,其中:
图1是本申请某些实施方式的轨迹生成方法的流程示意图;
图2是本申请某些实施方式的基于轮廓的轨迹生成方法的流程示意图;
图3是本申请某些实施方式的物品轮廓的示意图;
图4是本申请某些实施方式的物品中点计算方法的示意图;
图5是本申请某些实施方式的基于外接矩形的轨迹生成方法的流程示意图;
图6是本申请某些实施方式的基于外接矩形生成机器人移动路径的示意图;
图7是本申请某些实施方式的轨迹生成装置的结构示意图;
图8是本申请某些实施方式的基于轮廓的轨迹生成装置的结构示意图;
图9是本申请某些实施方式的基于外接矩形的轨迹生成装置的结构示意图;
图10是本申请某些实施方式的电子设备的结构示意图。
具体实施方式
下面将参照附图更详细地描述本公开的示例性实施例。虽然附图中显示了本公开的示例性实施例，然而应当理解，可以以各种形式实现本公开而不应被这里阐述的实施例所限制。相反，提供这些实施例是为了能够更透彻地理解本公开，并且能够将本公开的范围完整地传达给本领域的技术人员。
图1示出了根据本发明一个实施例的轨迹生成方法,包括:
步骤S100,获取物品的三维点云;
步骤S110,基于物品的三维点云生成物品的二维图像;
步骤S120,基于物品的二维图像生成二维轨迹点;
步骤S130,获取二维轨迹点的高度信息;
步骤S140,基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
在步骤S100中，可以通过3D工业相机获取点云信息，3D工业相机一般装配有两个镜头，分别从不同的角度捕捉待抓取物品组，经过处理后能够实现物体的三维图像的展示。将待抓取物品组置于视觉传感器的下方，两个镜头同时拍摄，根据所得到的两个图像的相对姿态参数，使用通用的双目立体视觉算法计算出待抓取物品组各点的X、Y、Z坐标值及各点的坐标朝向，进而转变为待抓取物品组的点云数据。具体实施时，也可以使用激光探测器、LED等可见光探测器、红外探测器以及雷达探测器等元件生成点云，本发明对具体实现方式不作限定。
本发明可以用在使用机器人进行物品喷涂的工业场景中,在该场景中可以通过使用机器人视觉技术,基于本发明的方法计算出三维的喷涂轨迹点,再令机器人沿着三维喷涂轨迹点执行喷涂。通常可以将物品分为正反两面,相应地,喷涂也按照正反两面进行,正面和反面所使用的喷涂方法是一样的。
在步骤S110中,为了方便数据的处理,提高效率,可将获取的三维点云数据正投影映射到二维平面上。作为一个示例,也可以生成该正投影对应的深度图。可以沿垂直于物品喷涂面的方向获取与三维物品区域相对应的二维彩色图以及对应于二维彩色图的深度图。其中,二维彩色图对应于与预设方向垂直的平面区域的图像;对应于二维彩色图的深度图中的各个像素点与二维彩色图中的各个像素点一一对应,且各个像素点的取值为该像素点的深度值。
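作为步骤S110的一个示意性草图（仅为便于理解的假设性实现，函数名project_to_image、像素尺寸等参数均为演示而设，并非本申请限定的实现方式），下面的Python代码将三维点云沿Z轴正投影为二值图和对应的深度图：

```python
import numpy as np

def project_to_image(points, pixel_size=1.0):
    """将Nx3点云沿Z轴正投影为二值图和深度图（每个像素保留该处最高点的Z值）。"""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / pixel_size).astype(int)  # 点到像素坐标
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    mask = np.zeros((h, w), dtype=np.uint8)   # 二值图：255表示该处有物品
    depth = np.full((h, w), -np.inf)          # 深度图：各像素取该处最高Z值
    for (col, row), z in zip(ij, points[:, 2]):
        mask[row, col] = 255
        depth[row, col] = max(depth[row, col], z)
    return mask, depth, origin

# 示例：三个点的极简点云
pts = np.array([[0.0, 0.0, 1.0], [2.0, 0.0, 3.0], [2.0, 2.0, 2.0]])
mask, depth, origin = project_to_image(pts)
```

此处以高度值代替“到相机的距离”作为深度图的取值，仅作示意；二者在摆正后的坐标系下可以相互换算。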
获取的二维图像可以是二维彩色图,也可以是只包括两个值的二维图像,即图像的所有像素点的取值为0,或者某个非0值,例如255(像素点的值为0时是黑色,值为255时是白色)。也可以在获取二维彩色图后,将其转化为包括两个值的二维图像。
为了方便获取二维图像，可以使用相机正对着喷涂面进行拍照，也可以以一定的角度进行拍摄，如果不正对着喷涂面进行拍照进行拍摄，至少需要在拍摄的图像中能够看到要喷涂的面，否则无法进行校正。获得3D点云后，可以将物体点云从拍摄时的相机坐标系校正到Z轴垂直于物品待喷涂的面的坐标系下，也就是说，可以基于相机的拍摄参数将拍到的物品的3D点云“摆正”，将物品矫正为喷涂面正对着镜头的位姿。
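上文所述的将点云“摆正”，可以理解为求取一个旋转，使喷涂面的法向与Z轴对齐。以下给出一个基于SVD估计平面法向并应用Rodrigues旋转公式的示意性草图（假设喷涂面近似为平面，函数名align_to_z等均为演示而设）：

```python
import numpy as np

def align_to_z(points):
    """估计点云主平面的法向量，并旋转点云使该法向对齐+Z轴。"""
    centered = points - points.mean(axis=0)
    # SVD最小奇异值对应的方向即为平面法向
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    if normal[2] < 0:                     # 统一取朝上的法向
        normal = -normal
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c = float(np.dot(normal, z))
    if np.linalg.norm(v) < 1e-9:          # 法向已与Z轴对齐
        return centered
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx / (1 + c)  # Rodrigues公式
    return centered @ R.T

# 示例：一块绕X轴倾斜45度的平面点云，摆正后所有点的Z值应接近0
ang = np.pi / 4
tilted = np.array([[x, y * np.cos(ang), y * np.sin(ang)]
                   for x in range(3) for y in range(3)], dtype=float)
aligned = align_to_z(tilted)
```

实际场景中喷涂面未必严格为平面，此时可改用基于相机外参或局部法向估计的校正方式。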
在步骤S120中,生成二维轨迹点的方法与具体的工业场景联系紧密,不同的喷涂需求和场景会有不同的、各具特色的轨迹点生成方法。本实施例中不对生成二维轨迹点的方法进行限定,任意的二维轨迹点生成方法都可以适用于本实施例的方法。
轨迹点的数量可以根据实际情况的需要而预先设置。一般来说,轨迹点的数量越多,则机器人的运动路径与理想的轨迹路径的一致程度越高,相应地,控制复杂度也越高。可以预设轨迹点总数,也可以预设轨迹点之间的间隔。在确定了机器人的移动路径后,可以按照预设的轨迹点间隔,选取轨迹点;若预设的是轨迹点的总数,则可以基于路径长度以及轨迹点总数计算出轨迹点的间隔后,再根据轨迹点的间隔选取轨迹点。
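预设轨迹点间隔或预设轨迹点总数这两种取点方式，可按如下思路在折线路径上实现等距取点（示意性草图，函数名sample_waypoints为演示而设）：

```python
import numpy as np

def sample_waypoints(path, spacing=None, total=None):
    """沿折线path（Nx2）按固定间隔spacing或总数total等距选取轨迹点。"""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)  # 各段长度
    cum = np.concatenate([[0.0], np.cumsum(seg)])        # 累计弧长
    length = cum[-1]
    if spacing is None:                  # 预设总数时，由路径长度反算间隔
        spacing = length / (total - 1)
    dists = np.arange(0.0, length + 1e-9, spacing)
    pts = []
    for d in dists:
        i = np.searchsorted(cum, d, side='right') - 1
        i = min(i, len(seg) - 1)
        t = (d - cum[i]) / seg[i]        # 在第i段上的插值系数
        pts.append(path[i] + t * (path[i + 1] - path[i]))
    return np.array(pts)

# 示例：长度为10的直线路径，按间隔2.5取点
wps = sample_waypoints([[0, 0], [10, 0]], spacing=2.5)
```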
在步骤S130中，可以基于物品的三维点云获取物品的深度图，深度图中包含了各个像素点的深度信息，该深度信息表示的是该点与相机的距离。像素点的高度是在将喷涂面作为XY面时，像素点在Z轴上的取值，像素点的深度信息与像素点的高度信息具有相关性，可以基于深度信息来获取像素点的高度信息。二维轨迹点是根据二维图像生成的点，通常位于二维图像内，为了求取该二维轨迹点处的物品的局部高度信息，可以先找到二维图像上与该轨迹点最相关的像素点（如果基于二维图像的轮廓生成轨迹点，也可以先找到轮廓上与轨迹点最相关的像素点），例如距该轨迹点距离最近的像素点，然后计算该像素点的高度信息。
在步骤S140中，将获取的与二维轨迹点最相关的物品像素点的高度信息赋予该二维轨迹点，从而将该二维轨迹点转换为三维轨迹点。以此方式，机器人不仅能够基于视觉技术实时规划出平面的轨迹，还能够识别出轨迹的高度，从而可以在XYZ轴上规划移动路径，这样即便喷涂对象为在三个方向上延伸的形状不规则的线条状物品或者待喷涂面凹凸不平的物品，都能够达到良好的喷涂效果，避免喷涂不均匀或者不平整。
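上述“找到与轨迹点最相关的像素点并读取其高度”的过程，可用最近有效像素查找来示意（假设深度图中无物品处的像素为NaN，该约定与函数名lift_to_3d均仅为演示而设）：

```python
import numpy as np

def lift_to_3d(traj_2d, depth):
    """对每个二维轨迹点，取深度图中距其最近的有效像素的深度作为Z值。"""
    valid = np.argwhere(~np.isnan(depth))          # 有效像素的(row, col)
    traj_3d = []
    for x, y in traj_2d:                           # 轨迹点以(x=col, y=row)表示
        d2 = (valid[:, 0] - y) ** 2 + (valid[:, 1] - x) ** 2
        r, c = valid[np.argmin(d2)]                # 最近的物品像素
        traj_3d.append((x, y, depth[r, c]))
    return np.array(traj_3d)

# 示例：3x3深度图，仅两个有效像素
depth = np.full((3, 3), np.nan)
depth[0, 0], depth[2, 2] = 5.0, 9.0
pts3d = lift_to_3d([(0.4, 0.2), (2.0, 2.0)], depth)
```

当轨迹点数量较大时，可用KD树等空间索引替换这里的暴力最近邻查找。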
获取3D轨迹点后，有时在高度上的轨迹整体上不够平滑。而平滑的移动轨迹点，能够使得机器人根据轨迹点移动时也能更为顺滑，不卡顿，从而使得喷漆更均匀。为此，在获取轨迹点的高度信息后，可以对生成的3D轨迹点在高度方向上进行平滑处理，使得整个轨迹点在高度上平滑衔接。可以用高斯平滑进行过滤、平滑处理，使得3D轨迹点均匀、平滑。
在实际的工业场景中，待喷涂物品可能呈线条状并向各个方向弯曲，而喷涂工艺则要求顺着物品的线条喷涂，例如对于一根向各个方向弯曲的钢管，可能要求顺着钢管在钢管的表面上喷漆，因此需要机器人顺着物品的线条走位并喷涂。发明人开发了一种专用于此种场景下的基于物品轮廓的喷涂轨迹生成方法，这是本发明的重点之一。图2示出了根据本发明一个实施例的基于物品轮廓的轨迹生成方法，包括：
步骤S200,获取物品的二维图像;
步骤S210,基于物品的二维图像获取物品的轮廓;
步骤S220,基于所述物品的轮廓生成二维轨迹点;
步骤S230,获取二维轨迹点的高度信息;
步骤S240，基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
在步骤S200中，可以使用与步骤S100和S110类似的方式获取物品的二维图像，此处不再赘述。
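前文提到可用高斯平滑对三维轨迹点的高度序列做平滑处理。下面给出一个不依赖第三方滤波库的一维高斯平滑示意（实际工程中也可直接使用scipy.ndimage.gaussian_filter1d等现成实现；sigma等参数为演示而设）：

```python
import numpy as np

def gaussian_smooth(z, sigma=1.0):
    """对轨迹点的高度序列z做一维高斯平滑（边界采用就近填充）。"""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()                       # 归一化，保持总量不变
    padded = np.pad(z, radius, mode='edge')      # 边界就近填充
    return np.convolve(padded, kernel, mode='valid')

# 示例：一条中间有尖峰的高度序列，平滑后尖峰被摊平
z = np.array([0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0])
smoothed = gaussian_smooth(z, sigma=1.0)
```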
在步骤S210中,由于在工业现场中采集物品点云时,可能因为光照问题或者物品本身反光的问题导致采集的点云存在缺失,甚至断裂的情况。因此在获取物品的二维图像后,可以先对该二维图像进行图像形态学的处理,使得二维图像完整,不存在断裂的情形,并且最好能让图像适当地变宽以方便进行后续轮廓识别以及轨迹点生成的处理步骤。
可以在获取二维图像信息后,对图像执行膨胀处理,以填补点云缺失、断裂的位置。例如,对于图像上的每一个非黑的像素点(非黑的像素点是物品的像素点,没有物品存在的位置,像素点是黑的),可以把该点周围一定数量的点,例如8-25个点,设为与该点相同的值。如果二维图像中只存在白色或黑色的像素点(即该二维图像中的像素点要么值为0,要么值为255),该步骤相当于把每个白色的像素点周围都抹白,因此假如二维图像存在缺失或断裂,该操作会将二维图像的缺失或断裂部分全部填充为有颜色的像素点,如此处理之后,二维图像就会变得完整,不存在缺失或断裂。
在经过膨胀处理后，虽然图像中间的缺失或断裂能够被填充，但是二维图像整体上会因为膨胀而变“胖”，适当的膨胀有助于后续进一步的求轮廓，求轨迹点等图像处理操作，然而膨胀过多则可能使得二维图像失真，导致最终确定的轨迹点位置不准确。为了避免膨胀过多，也可以对膨胀后的二维图像执行腐蚀操作。例如，对于图像上的每一个黑色像素点，可以把该点周围一定数量的点，例如8-25个点，均设为0。如此操作，相当于把二维图像上每个黑色的像素点周围都涂黑。经过腐蚀处理后，二维图像边缘位置的附近的点会被涂黑，因此二维图像整体上会变“瘦”，从而使得处理后的二维图像更接近于真实的物品，基于该处理后的二维图像生成的轨迹点更准确。
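膨胀与腐蚀在实际工程中通常直接使用OpenCV的cv2.dilate与cv2.erode。为便于说明其原理，这里给出一个仅依赖NumPy的3x3邻域朴素示意，对应上文“把像素点周围的点设为相同值/涂黑”的操作（仅为假设性草图）：

```python
import numpy as np

def dilate(img):
    """3x3膨胀：邻域内有白色(255)则该点置白，可填补断裂。"""
    padded = np.pad(img, 1, mode='constant')
    out = np.zeros_like(img)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out = np.maximum(out, padded[1 + dr:1 + dr + img.shape[0],
                                         1 + dc:1 + dc + img.shape[1]])
    return out

def erode(img):
    """3x3腐蚀：邻域内有黑色(0)则该点置黑，抵消膨胀造成的“变胖”。"""
    padded = np.pad(img, 1, mode='constant', constant_values=255)
    out = np.full_like(img, 255)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out = np.minimum(out, padded[1 + dr:1 + dr + img.shape[0],
                                         1 + dc:1 + dc + img.shape[1]])
    return out

# 示例：一条中间断裂的横线，先膨胀填补断裂，再腐蚀恢复宽度
img = np.zeros((5, 7), dtype=np.uint8)
img[2, :] = 255
img[2, 3] = 0                       # 人为制造断裂
closed = erode(dilate(img))
```

先膨胀再腐蚀即形态学“闭运算”，正对应上文填补点云断裂又避免图像失真的处理顺序。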
在步骤S220中，可以先对物品进行轮廓分析，以得到待喷涂物品的轮廓。可以通过边缘检测方法获取物品的轮廓，边缘是指其周围像素灰度急剧变化的那些像素的集合，它是图像最基本的特征。边缘存在于目标、背景和区域之间，所以，它是图像分割所依赖的最重要的依据。通过提取图像中不连续部分的特征来确定物品的边缘，例如两边像素的灰度值明显不同的位置，或者灰度值由小到大再到小的变化转折点的位置。也可以通过深度学习算法提取物品的边缘，此处不再赘述。
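轮廓提取在实际工程中可使用cv2.findContours或Canny等现成方法。其基本思想——物品像素与背景像素的交界即轮廓——可用如下NumPy代码示意：把二值图中4邻域内存在黑色背景的白色像素标记为轮廓点（仅为假设性草图）：

```python
import numpy as np

def contour_mask(binary):
    """返回轮廓掩码：白色(255)且4邻域中存在黑色(0)的像素为轮廓点。"""
    padded = np.pad(binary, 1, mode='constant')   # 图像外视为背景
    h, w = binary.shape
    neighbor_min = np.minimum.reduce([
        padded[0:h, 1:w + 1],       # 上邻域
        padded[2:h + 2, 1:w + 1],   # 下邻域
        padded[1:h + 1, 0:w],       # 左邻域
        padded[1:h + 1, 2:w + 2],   # 右邻域
    ])
    return ((binary == 255) & (neighbor_min == 0)).astype(np.uint8) * 255

# 示例：5x5全白方块的轮廓是其最外圈一圈像素
img = np.full((5, 5), 255, dtype=np.uint8)
edges = contour_mask(img)
```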
可以理解的是，物品的轮廓可以为任意形状，例如为规则的圆形、矩形等，也可以为不规则形状，例如，各种形状的小零件，向各个方向弯曲的钢管等。作为一个示例，图3示出了本实施例的物品轮廓，轮廓中四个角的像素点为角轮廓点，角轮廓点可以通过获取物品二维图像的外接矩形，再基于物品像素点与外接矩形的关系来确定。机器人移动时所依据的轨迹点为在该物品的二维图像中按照一定间隔获取的多个中点。其中，一定间隔可以是预设的，例如，预设10mm的间隔，则获取一个轨迹点后，间隔10mm获取下一个轨迹点；也可以预设轨迹点总数，则可以根据该轨迹点总数以及物品的长度计算轨迹点间隔，再依据该间隔获取轨迹点。
为了获取机器人移动的轨迹点，可以从任一角轮廓点开始，以物品轮廓的一长边为基准（物品轮廓包括两个窄边和两个长边，可以以任一长边为基准），沿着该长边按照一定间隔选取轮廓点，对于每个选取的轮廓点（包括角轮廓点），计算物品二维图像局部（即该轮廓点处）的中点作为轨迹点。如图4所示，对于任一物品局部的轮廓点A，可以先计算该A所在的长边在A处的切线a，接着计算该切线在A处的垂线b，垂线b与物品轮廓的两个长边相交，形成两个交点A和B，则线段AB的中点C即是所求的中点，将该中点作为机器人移动的轨迹点。
对于步骤S230和S240，可以采用与步骤S130和S140类似的方式实现，此处不再赘述。
在一些工业场景中，待喷涂的是排列在一起的多个物品，而喷涂工艺的要求是无需精准，但求无遗漏地在每个物品表面喷涂（即就算会喷到物品之外也没关系，只要每个物品表面都能喷上就行），例如多根上个实施例中的向各个方向弯曲的钢管放置在一起，要求为这一堆钢管进行喷涂。为此，发明人开发了一种专用于此种场景下的基于物品外接矩形的喷涂轨迹生成方法，这是本发明的重点之一。该方法同样适用于待喷涂物品的待喷涂表面可能较为宽大，并且凹凸不平，喷涂工艺要求涂满整个表面的工业场景。为了简洁，本实施方式中的“物品”可以指代单个物品，也可以指代多个物品构成的物品组。图5示出了根据本发明一个实施例的基于物品外接矩形的轨迹生成方法，包括：
步骤S300,获取物品的二维图像;
步骤S310,计算所述二维图像的外接矩形;
步骤S320,基于所述外接矩形生成二维轨迹点;
步骤S330,获取二维轨迹点的高度信息;
步骤S340，基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
在步骤S300中，可以使用与步骤S100和S110类似的方式获取物品的二维图像，此处不再赘述。
在步骤S310中，对于由多个物品构成的物品组，由于并不需要沿着每个物品的弯曲或者延伸方向进行喷涂，因此可以忽略单个物品的具体轮廓，而是划出一片能包括所有物品的区域，在该区域上进行均匀、高覆盖率、低重复率的盲喷，该区域应当能包括所有物品，但不宜过大，否则会导致路径计算更加复杂且浪费。为了求取这样的区域，在获得物品的二维图像后，将多个物品的二维图像视为一个整体，计算其外接矩形，之后即可基于该外接矩形的区域执行机器人运动路径的规划；对于单个物品，直接计算该物品的外接矩形即可。
对于步骤S320，获取物品的外接矩形之后，可以将外接矩形沿纵向或横向按照一定的间隔进行划分，可以沿着划分方向在一边上按照一定的间隔取多个点，在每一个点处，计算该边的切线，将该切线作为划分线。作为一个示例，图6示出了物品以及物品的外接矩形的划分线，划分线与外接矩形的两边相交，沿着物品被划分的方向将该边上的外接矩形的角点以及划分线与矩形的交点分别记为P0、P1、P2…Pn+1，对侧的角点以及交点记为P0'、P1'、P2'…Pn+1'。这些点将作为机器人运动路径的拐点使用。相邻拐点之间的间隔（例如P0和P1之间）可以根据实际情况的需要任意设置，通常来说间隔越小，则喷涂越密，间隔越大，则喷涂越稀疏，本发明优选为20mm。
在确定机器人移动路径的轨迹点之前，可以先设定轨迹点的间距，可以直接预设特定的间距或者预设轨迹点的总数，再基于路径长度以及轨迹点总数计算轨迹点的间距。间距和总数可以根据待喷涂物品的高度变化来确定，大体上来说，如果待喷涂物品表面的高度变化较大，则需要设置较多的轨迹点，这样获取的轨迹点更贴近真实需要的轨迹；而高度变化较小的话，则可以设置较少的轨迹点。
在规划机器人移动路径的轨迹点时，可以先从外接矩形的四个角点中任选一个作为轨迹点的起始位置，机器人的移动路径应当从该起始轨迹点开始沿着与划分线平行的方向，在碰到拐点时，按照Z字形沿着外接矩形的边或者划分线转向，遍历划分线以及外接矩形的边，按照预设的轨迹点间距选取轨迹点，直到遍历完所有的与划分线平行的外接矩形的边并到达角点为止。作为一个示例，在图6中选取一个角点，例如P0作为起始点，则机器人的运动路径应当为从P0开始，向P0'移动，到达P0'之后右转90度，向P1'移动，到达P1'后左转90度，向着P1移动，到达P1后，按照Z字形轨迹右转90度，以此方式，直到走过最右侧的边并到达Pn+1或Pn+1'为止，即按照P0→P0'→P1'→P1…Pn+1或Pn+1'的路径运动。在该运动路径上基于前述的轨迹点间隔生成二维轨迹点。
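上述Z字形遍历生成拐点序列的过程可示意如下（假设外接矩形与坐标轴对齐、划分方向沿X轴，函数名zigzag_corners与参数均为演示而设）：

```python
def zigzag_corners(x0, y0, x1, y1, step):
    """生成轴对齐外接矩形上Z字形路径的拐点序列P0→P0'→P1'→P1→…。"""
    corners, x, down = [], x0, True
    while x <= x1 + 1e-9:
        pair = [(x, y0), (x, y1)]
        if not down:
            pair.reverse()           # 交替来回方向，形成Z字形
        corners.extend(pair)
        down = not down
        x += step
    return corners

# 示例：宽40、高10的外接矩形，划分间隔20
path = zigzag_corners(0, 0, 40, 10, 20)
```

得到拐点序列后，即可在相邻拐点连成的折线上按预设间距进一步选取二维轨迹点。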
对于步骤S330和S340,可以采用与步骤S130和S140类似的方式实现,此处不再赘述。
根据上述实施例，首先，本发明提出了基于机器人视觉技术的通过二维轨迹点生成三维轨迹点的整体性的方法，该方法避免了机器人在进行喷涂时由于物品高度可能导致的喷涂不均匀问题或物体与喷头的碰撞问题；其次，针对特定的工业场景，即使用机器人喷涂不规则物品，例如向各个方向弯曲的长条状物品时，本发明也能够准确地生成沿着物品表面的在三个方向上延伸的轨迹；第三，针对另一个特定的工业场景，即使用机器人喷涂多个并列的不规则形状物品，例如多个如上所述的向各个方向弯曲的长条状物品或者喷涂表面具有较大高度起伏的物品时，本发明也能够准确地生成立体且能够全面覆盖全部物品或整个物品表面的三维轨迹。由此可见，本发明解决了如何基于机器人视觉生成机器人三维的移动轨迹以及特定工业场景中如何生成二维轨迹并基于该二维轨迹生成三维轨迹的问题。
另外,本领域技术人员还能够针对上述实施例进行各种改动和变形:
本发明的各个实施方式中的机器人可以是工业机器人手臂，这些机器人手臂可以是通用的，也可以是专用于进行喷涂的。本发明可以对任意物体进行喷涂，例如玻璃、桌板、钢板、坐垫、钢管等，本发明对具体应用领域不作限制。作为优选的实施例，本发明特别适用于为单个不规则钢管沿着其表面喷涂，或者在多个这样的钢管堆放在一起时，为堆在一起的钢管喷涂。
在一些实施方式中,可以在预先建立通信连接的工业机器人的操作末端安装喷涂头,根据喷涂头的喷涂尺寸和喷涂头上的预设移动参考点,生成待喷涂物体的喷涂轨迹信息。
可以理解的是，移动参考点为喷涂头移动时定位其位置所用，在移动过程中定位时，喷涂头的位置以该点为参考，而不以喷涂头上该移动参考点以外的部位为参考。具体移动参考点在喷涂头上的哪一部位，可以根据具体需求而预先设定，本实施例不做限制。
喷涂头的喷涂头形状可以为任意形状,例如可以为矩形,或者圆形等。可选地,上述的喷涂头的移动参考点可以根据需求设置在喷涂头的某一端,也可以设置在喷涂头的中心位置,例如喷涂头为矩形则该移动参考点可设置在喷涂头一端的中点或角点,或者设置在喷涂头对角线交叉点,即中心点。在另一可选实施例中,喷涂头还可以为圆形,移动参考点可以为喷涂头的圆心、或者可位于该圆形喷涂头的圆周上。
需要说明的是,本实施例中的喷涂尺寸可以为喷涂头的实际尺寸,例如在喷涂头的喷涂头形状为矩形时,喷涂头的喷涂尺寸可以包括宽度以及长度;在喷涂头的喷涂头形状为圆形时,喷涂头的喷涂尺寸可以为喷涂直径。可选地,喷涂尺寸还可以为喷涂头在待喷涂物体上投影所得阴影对应尺寸。
可以发送轨迹信息至工业机器人,以使得移动参考点沿轨迹信息移动,可选地,可基于TCP协议、HTTP协议、GRPC协议(Google Remote Procedure Call Protocol,谷歌远程过程调用协议)与上述工业机器人进行通信,进而发送上述轨迹信息。
在需要通过工业机器人对待喷涂物品进行全覆盖喷涂时,结合喷涂头的喷涂尺寸和喷涂头上的预设移动参考点,生成待喷涂物体的轨迹信息,并将喷涂轨迹信息提供给工业机器人,以使得移动参考点沿喷涂轨迹信息移动,进而实现对待喷涂物体表面的全覆盖且无重复喷涂。
为了使得机器人走更少的多余轨迹，可将轨迹点的初始点设置在轨迹路径上与机器人初始位姿最为相近的位置，例如：将初始点设置在靠近机器人那条边的中间。也即是说，在确定机器人的初始位姿之后，可将距离该机器人的初始位姿最近的那条边的轨迹路径上的中间点作为轨迹点的初始点，之后可根据机器人的固有属性在轨迹路径上设置其他轨迹点，进而可以得到轨迹点信息。值得一提的是，该轨迹点信息可包括但不限于轨迹点的坐标、轨迹点的初始轨迹点以及轨迹点的走向（即轨迹点走位顺序）等。在得到轨迹点信息之后，可采用通信方式将轨迹点信息发送至机器人。机器人在接收到轨迹点信息时，可基于轨迹点信息，控制自身的喷涂头进行喷涂。
在某些实施方式中,根据机器人的固有属性以及机器人初始位姿,在轨迹路径上生成轨迹点信息,包括:
确定轨迹路径中的拐角处和直线处;
根据机器人的喷涂速率、运动速度在拐弯处以及直线处以相应密度设置轨迹点;
根据机器人初始位姿确定轨迹点的走位顺序,以得到轨迹点信息。
具体地，确定轨迹路径中的拐角处和直线处，可以基于轨迹路径上各点的坐标值间的关系确定。拐角处相邻点的X坐标和Y坐标均会不一样，而直线处相邻点，可能其X坐标会一样或Y坐标会一样。例如：假设待喷涂物品的形状为矩形，则该待喷涂物品的轨迹路径中，四个角的拐角处相邻点的X坐标和Y坐标均会不一样，而上边直线处相邻点的Y坐标会一样而X坐标会不一样，下边直线处相邻点的Y坐标会一样而X坐标会不一样且Y坐标相对于上边直线处数值小，左边直线处相邻点的X坐标会一样而Y坐标会不一样，右边直线处相邻点的X坐标会一样而Y坐标会不一样且X坐标相对于左边直线处数值大。
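上述通过相邻点坐标关系区分拐角处与直线处的判断，可示意如下（仅适用于轴对齐的矩形轨迹路径，函数名is_corner为演示而设的假设性草图）：

```python
def is_corner(prev, cur):
    """轴对齐路径上：相邻两点的X、Y坐标均不同，则该段处于拐角处。"""
    return prev[0] != cur[0] and prev[1] != cur[1]

# 示例：一段先沿X轴直行、再拐弯沿Y轴直行的路径
pts = [(0, 0), (1, 0), (2, 0), (3, 1), (3, 2)]
flags = [is_corner(a, b) for a, b in zip(pts, pts[1:])]
```

据此即可对判定为拐角的路段采用更大的轨迹点间距或更低的运动速度参数。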
机器人进行喷涂时,会基于一定的喷涂速率控制喷涂头进行喷涂。喷涂速率作为机器人的固有属性,影响本实施例中喷涂的效果。为了能够方便参考机器人的喷涂速率在轨迹路径上设置轨迹点,以避免堆料情况,可确定该机器人的喷涂速率。
机器人运动的固有属性还体现为,若机器人在拐角处和直线处设置同样的运动速度参数,其在拐角处和直线处的运动速度会不同,具体拐角处运动速度慢于直线处运动速度。而实际情况下机器人另一固有属性喷涂速率是不变的,因此对于合适直线的喷涂速率与运动速度参数,在拐弯处就会造成堆料情况。在某些实施方式中,在保证机器人沿着所确定的轨迹路径移动的前提下,在轨迹路径上的拐角处设置的轨迹点的间距可以比直线处设置的轨迹点间距大些,以达到直线处运动速度与拐角处运动速度的平衡,进而解决拐角可能造成的堆料现象。可在直线处设置一最小间距用于限定直线处轨迹点的间距,防止直线处由于机器人由于轨迹点数量过多而出现卡顿堆料的情况。还可在直线处和拐角处设置数值不同的运动速度参数以达到直线处运动速度与拐角处运动速度的平衡,解决由于固有属性导致的堆料问题。
根据机器人初始位姿确定轨迹点的走位顺序，以得到所述轨迹点信息。可以理解，为了使得机器人走更少的多余轨迹，设置轨迹点的初始点为靠近机器人初始位姿的点，例如：可以为待喷涂物品的靠近机器人那条边的中间部位对应的轨迹点。也即是说，在确定机器人的初始位姿之后，可将距离该机器人的初始位姿最近的那条边的轨迹路径上的中间点对应的轨迹点（或者距离该点最近的轨迹点）作为轨迹点的初始轨迹点，之后，可以顺时针走位其他轨迹点，也可以逆时针走位其他轨迹点。
在某些实施方式中,轨迹点信息具体可以包括轨迹点坐标,初始轨迹点坐标、轨迹点的走位顺序、轨迹点的运动速度参数等。
在某些实施方式中,轨迹点信息还包括:轮廓点对应的法向信息。
具体地,法向信息可以为各轮廓点云对应的法向量相对于一固定量的角度值,还可以为各轮廓点云中相应走位顺序在后的点云相对于其前一点云的偏离角度值。
图7示出了根据本发明又一个实施例的轨迹生成装置的结构示意图,该装置包括:
三维点云获取模块400,用于获取物品的三维点云,即用于实现步骤S100;
二维图像生成模块410,用于基于物品的三维点云生成物品的二维图像,即用于实现步骤S110;
二维轨迹点生成模块420,用于基于物品的二维图像生成二维轨迹点,即用于实现步骤S120;
高度信息获取模块430,用于获取二维轨迹点的高度信息,即用于实现步骤S130;
三维轨迹点生成模块440,用于基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点,即用于实现步骤S140。
图8示出了根据本发明又一个实施例的基于物品轮廓的轨迹生成装置的结构示意图,该装置包括:
二维图像获取模块500,用于获取物品的二维图像,即用于实现步骤S200;
轮廓获取模块510,用于基于物品的二维图像获取物品的轮廓,即用于实现步骤S210;
二维轨迹点生成模块520,用于基于所述物品的轮廓生成二维轨迹点,即用于实现步骤S220;
高度信息获取模块530,用于获取二维轨迹点的高度信息,即用于实现步骤S230;
三维轨迹点生成模块540,用于基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点,即用于实现步骤S240。
图9示出了根据本发明又一个实施例的基于外接矩形的轨迹生成装置的结构示意图,该装置包括:
二维图像获取模块600,用于获取物品的二维图像,即用于实现步骤S300;
外接矩形计算模块610,用于计算所述二维图像的外接矩形,即用于实现步骤S310;
二维轨迹点生成模块620,用于基于所述外接矩形生成二维轨迹点,即用于实现步骤S320;
高度信息获取模块630,用于获取二维轨迹点的高度信息,即用于实现步骤S330;
三维轨迹点生成模块640,用于基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点,即用于实现步骤S340。
上述图7-图9所示的装置实施例中,仅描述了模块的主要功能,各个模块的全部功能与方法实施例中相应步骤相对应,各个模块的工作原理同样可以参照方法实施例中相应步骤的描述,此处不再赘述。另外,虽然上述实施例中限定了功能模块的功能与方法的对应关系,然而本领域技术人员能够理解,功能模块的功能并不局限于上述对应关系,即特定的功能模块还能够实现其他方法步骤或方法步骤的一部分。例如,上述实施例描述了三维轨迹点生成模块640用于实现步骤S340的方法,然而根据实际情况的需要,三维轨迹点生成模块640也可以用于实现步骤S310、S320或S330的方法或方法的一部分。
本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述任一实施方式的方法。需要指出的是,本申请实施方式的计算机可读存储介质存储的计算机程序可以被电子设备的处理器执行,此外,计算机可读存储介质可以是内置在电子设备中的存储介质,也可以是能够插拔地插接在电子设备的存储介质,因此,本申请实施方式的计算机可读存储介质具有较高的灵活性和可靠性。
图10示出了根据本发明实施例的一种电子设备的结构示意图,本发明具体实施例并不对电子设备的具体实现做限定,优选地,该电子设备可以是3D相机。
如图10所示,该电子设备可以包括:处理器(processor)902、通信接口(Communications Interface)904、存储器(memory)906、以及通信总线908。
其中:
处理器902、通信接口904、以及存储器906通过通信总线908完成相互间的通信。
通信接口904,用于与其它设备比如客户端或其它服务器等的网元通信。
处理器902,用于执行程序910,具体可以执行上述方法实施例中的相关步骤。
具体地,程序910可以包括程序代码,该程序代码包括计算机操作指令。
处理器902可能是中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路。电子设备包括的一个或多个处理器,可以是同一类型的处理器,如一个或多个CPU;也可以是不同类型的处理器,如一个或多个CPU以及一个或多个ASIC。
存储器906,用于存放程序910。存储器906可能包含高速RAM存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
程序910具体可以用于使得处理器902执行上述方法实施例中的各项操作。
概括地说,本发明的发明内容包括:
一种轨迹生成方法,包括:
获取物品的三维点云;
基于物品的三维点云生成物品的二维图像;
基于物品的二维图像生成二维轨迹点;
获取二维轨迹点的高度信息;
基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
可选的,所述基于物品的三维点云生成物品的二维图像包括将物品的三维点云沿物体表面的垂直方向进行映射并生成二维图像。
可选的,所述基于物品的二维图像生成二维轨迹点包括:基于预设的轨迹点数量和/或轨迹点间隔生成二维轨迹点。
可选的,所述获取二维轨迹点的高度信息包括:基于轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
可选的,还包括:对生成的三维轨迹点在高度上进行平滑处理。
可选的,所述基于物品的二维图像生成二维轨迹点包括:基于物品的轮廓和/或基于二维图像的外接矩形生成二维轨迹点。
一种轨迹生成装置,包括:
三维点云获取模块,用于获取物品的三维点云;
二维图像生成模块,用于基于物品的三维点云生成物品的二维图像;
二维轨迹点生成模块,用于基于物品的二维图像生成二维轨迹点;
高度信息获取模块,用于获取二维轨迹点的高度信息;
三维轨迹点生成模块,用于基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
可选的,所述二维图像生成模块具体用于将物品的三维点云沿物体表面的垂直方向进行映射并生成二维图像。
可选的,所述二维轨迹点生成模块基于预设的轨迹点数量和/或轨迹点间隔生成二维轨迹点。
可选的，所述高度信息获取模块具体用于根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
可选的,所述三维轨迹点生成模块还用于对生成的三维轨迹点在高度上进行平滑处理。
可选的,所述二维轨迹点生成模块基于物品的轮廓和/或基于二维图像的外接矩形生成二维轨迹点。
一种轨迹生成方法,包括:
获取物品的二维图像;
基于物品的二维图像获取物品的轮廓;
基于所述物品的轮廓生成二维轨迹点;
获取二维轨迹点的高度信息;
基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
可选的,所述获取二维轨迹点的高度信息包括:根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
可选的,在基于物品的二维图像获取物品的轮廓之前,对二维图像执行膨胀和/或腐蚀操作。
可选的,所述基于所述物品的轮廓生成二维轨迹点包括:从第一角轮廓点起,到第二角轮廓点为止,在物品轮廓上每隔特定的距离选择一轮廓点,基于所述轮廓点生成二维轨迹点。
可选的,所述特定的距离是基于预设的轨迹点间距和/或预设的总轨迹点数计算得到的。
可选的,所述第一角轮廓点和第二角轮廓点在物品轮廓的同一长边上。
可选的,所述基于所述轮廓点生成二维轨迹点包括:在每一个所选择的轮廓点处,计算中点,将所述中点作为物品的轨迹点。
可选的,所述计算中点包括:计算物品轮廓在所选择的轮廓点处的切线,计算所述切线在所述轮廓点处的垂线,计算所述垂线与物品轮廓的两条长边的交点的连线,计算该连线的中点。
一种轨迹生成装置,包括:
二维图像获取模块,用于获取物品的二维图像;
轮廓获取模块,用于基于物品的二维图像获取物品的轮廓;
二维轨迹点生成模块,用于基于所述物品的轮廓生成二维轨迹点;
高度信息获取模块,用于获取二维轨迹点的高度信息;
三维轨迹点生成模块,用于基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
可选的，所述高度信息获取模块具体用于根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
可选的,轮廓获取模块还用于在基于物品的二维图像获取物品的轮廓之前,对二维图像执行膨胀和/或腐蚀操作。
可选的,所述二维轨迹点生成模块具体用于从第一角轮廓点起,到第二角轮廓点为止,在物品轮廓上每隔特定的距离选择一轮廓点,基于所述轮廓点生成二维轨迹点。
可选的,所述特定的距离是基于预设的轨迹点间距和/或预设的总轨迹点数计算得到的。
可选的,所述第一角轮廓点和第二角轮廓点在物品轮廓的同一长边上。
可选的,所述基于所述轮廓点生成二维轨迹点包括:在每一个所选择的轮廓点处,计算中点,将所述中点作为物品的轨迹点。
可选的,所述计算中点包括:计算物品轮廓在所选择的轮廓点处的切线,计算所述切线在所述轮廓点处的垂线,计算所述垂线与物品轮廓的两条长边的交点的连线,计算该连线的中点。
一种轨迹生成方法,其特征在于,包括:
获取物品的二维图像;
计算所述二维图像的外接矩形;
基于所述外接矩形生成二维轨迹点;
获取二维轨迹点的高度信息;
基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
可选的,所述获取物品的二维图像包括将物品的三维点云沿物体表面的垂直方向进行映射并生成二维图像。
可选的,所述获取二维轨迹点的高度信息包括:根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
可选的,对生成的三维轨迹点在高度上进行平滑处理。
可选的,所述基于所述外接矩形生成二维轨迹点包括:在外接矩形的一边按照预定间隔生成该边的切线,将切线与外接矩形两边的交点之间的线段作为分界线,基于所述分界线生成二维轨迹点。
可选的,所述基于分界线生成所述二维轨迹点包括:从外接矩形任意角点开始,以外接矩形的角点以及分界线与边的交点作为拐点,以Z字形遍历分界线以及与分界线平行的边,从而生成所述二维轨迹点。
可选的,在遍历分界线以及与分界线平行的边时,按照特定的距离生成轨迹点。
可选的,所述特定的距离是基于预设的轨迹点间距和/或预设的总轨迹点数计算得到的。
一种轨迹生成装置,包括:
二维图像获取模块,用于获取物品的二维图像;
外接矩形计算模块,用于计算所述二维图像的外接矩形;
二维轨迹点生成模块,用于基于所述外接矩形生成二维轨迹点;
高度信息获取模块,用于获取二维轨迹点的高度信息;
三维轨迹点生成模块,用于基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
可选的,所述二维图像获取模块具体用于将物品的三维点云沿物体表面的垂直方向进行映射并生成二维图像。
可选的,所述高度信息获取模块具体用于根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
可选的,所述高度信息获取模块还用于对生成的三维轨迹点在高度上进行平滑处理。
可选的,所述二维轨迹点生成模块具体用于在外接矩形的一边按照预定间隔生成该边的切线,将切线与外接矩形两边的交点之间的线段作为分界线,基于所述分界线生成二维轨迹点。
可选的，所述基于分界线生成所述二维轨迹点包括：从外接矩形任意角点开始，以外接矩形的角点以及分界线与边的交点作为拐点，以Z字形遍历分界线以及与分界线平行的边，从而生成所述二维轨迹点。
可选的,在遍历分界线以及与分界线平行的边时,按照特定的距离生成轨迹点。
可选的,所述特定的距离是基于预设的轨迹点间距和/或预设的总轨迹点数计算得到的。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤，例如，可以被认为是用于实现逻辑功能的可执行指令的定序列表，可以具体实现在任何计算机可读介质中，以供指令执行系统、装置或设备(如基于计算机的系统、包括处理模块的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用，或结合这些指令执行系统、装置或设备而使用。就本说明书而言，"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例（非穷尽性列表）包括以下：具有一个或多个布线的电连接部（电子装置），便携式计算机盘盒（磁装置），随机存取存储器（RAM），只读存储器（ROM），可擦除可编辑只读存储器（EPROM或闪速存储器），光纤装置，以及便携式光盘只读存储器（CDROM）。另外，计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质，因为可以例如通过对纸或其他介质进行光学扫描，接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序，然后将其存储在计算机存储器中。
处理器可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
应当理解,本申请的实施方式的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本申请的各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。
尽管上面已经示出和描述了本申请的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。

Claims (19)

  1. 一种轨迹生成方法,其特征在于,包括:
    获取物品的二维图像;
    计算所述二维图像的外接矩形;
    基于所述外接矩形生成二维轨迹点;
    获取二维轨迹点的高度信息;
    基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
  2. 根据权利要求1所述的轨迹生成方法,其特征在于,所述获取物品的二维图像包括将物品的三维点云沿物体表面的垂直方向进行映射并生成二维图像。
  3. 根据权利要求1所述的轨迹生成方法,其特征在于,所述获取二维轨迹点的高度信息包括:根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
  4. 根据权利要求1所述的轨迹生成方法,其特征在于:对生成的三维轨迹点在高度上进行平滑处理。
  5. 根据权利要求1所述的轨迹生成方法,其特征在于,所述基于所述外接矩形生成二维轨迹点包括:在外接矩形的一边按照预定间隔生成该边的切线,将切线与外接矩形两边的交点之间的线段作为分界线,基于所述分界线生成二维轨迹点。
  6. 根据权利要求5所述的轨迹生成方法,其特征在于,所述基于分界线生成所述二维轨迹点包括:从外接矩形任意角点开始,以外接矩形的角点以及分界线与边的交点作为拐点,以Z字形遍历分界线以及与分界线平行的边,从而生成所述二维轨迹点。
  7. 根据权利要求6所述的轨迹生成方法,其特征在于,在遍历分界线以及与分界线平行的边时,按照特定的距离生成轨迹点。
  8. 根据权利要求7所述的轨迹生成方法,其特征在于,所述特定的距离是基于预设的轨迹点间距和/或预设的总轨迹点数计算得到的。
  9. 一种轨迹生成装置,其特征在于,包括:
    二维图像获取模块,用于获取物品的二维图像;
    外接矩形计算模块,用于计算所述二维图像的外接矩形;
    二维轨迹点生成模块,用于基于所述外接矩形生成二维轨迹点;
    高度信息获取模块,用于获取二维轨迹点的高度信息;
    三维轨迹点生成模块，用于基于二维轨迹点以及获取的二维轨迹点的高度信息生成三维轨迹点。
  10. 根据权利要求9所述的轨迹生成装置,其特征在于,所述二维图像获取模块具体用于将物品的三维点云沿物体表面的垂直方向进行映射并生成二维图像。
  11. 根据权利要求9所述的轨迹生成装置,其特征在于,所述高度信息获取模块具体用于根据轨迹点处的物品像素点的深度信息获取轨迹点的高度信息。
  12. 根据权利要求9所述的轨迹生成装置,其特征在于,所述高度信息获取模块还用于对生成的三维轨迹点在高度上进行平滑处理。
  13. 根据权利要求12所述的轨迹生成装置,其特征在于,所述二维轨迹点生成模块具体用于在外接矩形的一边按照预定间隔生成该边的切线,将切线与外接矩形两边的交点之间的线段作为分界线,基于所述分界线生成二维轨迹点。
  14. 根据权利要求13所述的轨迹生成装置,其特征在于,所述基于分界线生成所述二维轨迹点包括:从外接矩形任意角点开始,以外接矩形的角点以及分界线与边的交点作为拐点,以Z字形遍历分界线以及与分界线平行的边,从而生成所述二维轨迹点。
  15. 根据权利要求14所述的轨迹生成装置,其特征在于,在遍历分界线以及与分界线平行的边时,按照特定的距离生成轨迹点。
  16. 根据权利要求15所述的轨迹生成装置,其特征在于,所述特定的距离是基于预设的轨迹点间距和/或预设的总轨迹点数计算得到的。
  17. 一种电子设备,其特征在于,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现权利要求1至8中任一项所述的轨迹生成方法。
  18. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至8中任一项所述的轨迹生成方法。
  19. 一种3D相机,其特征在于,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现权利要求1至8中任一项所述的轨迹生成方法。
PCT/CN2021/138581 2021-05-11 2021-12-15 轨迹生成方法、装置、电子设备、存储介质和3d相机 WO2022237166A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110511370.1 2021-05-11
CN202110511370.1A CN113199480B (zh) 2021-05-11 2021-05-11 轨迹生成方法、装置、电子设备、存储介质和3d相机

Publications (1)

Publication Number Publication Date
WO2022237166A1 true WO2022237166A1 (zh) 2022-11-17

Family

ID=77030773

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138581 WO2022237166A1 (zh) 2021-05-11 2021-12-15 轨迹生成方法、装置、电子设备、存储介质和3d相机

Country Status (2)

Country Link
CN (1) CN113199480B (zh)
WO (1) WO2022237166A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117113591A (zh) * 2023-10-23 2023-11-24 深圳市南科佳安机器人科技有限公司 产品加工方法、装置及存储介质和终端设备

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN113199479B (zh) * 2021-05-11 2023-02-10 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机
WO2022237544A1 (zh) * 2021-05-11 2022-11-17 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备及存储介质
CN113199480B (zh) * 2021-05-11 2023-02-10 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机
CN114618704B (zh) * 2022-02-23 2023-06-20 深圳远荣智能制造股份有限公司 一种3d视觉引导机器人免编程的喷涂方法及其系统
CN115488897A (zh) * 2022-10-28 2022-12-20 安徽省凤阳县前力玻璃制品有限公司 机械手臂码垛最优空间轨迹规划方法
CN116188480B (zh) * 2023-04-23 2023-07-18 安徽同湃特机器人科技有限公司 喷涂机器人天花板作业时agv行进路径点的计算方法

Citations (10)

Publication number Priority date Publication date Assignee Title
CN107808415A (zh) * 2017-11-17 2018-03-16 中国科学院合肥物质科学研究院 基于机器视觉的鞋底边缘轨迹及涂胶位姿提取方法
CN109954613A (zh) * 2017-12-25 2019-07-02 广州智信科技有限公司 喷涂方法
CN110717984A (zh) * 2019-09-10 2020-01-21 佛山缔乐视觉科技有限公司 基于三维重构的鞋底自动涂胶方法、系统及存储介质
CN111369593A (zh) * 2020-03-16 2020-07-03 梅卡曼德(北京)机器人科技有限公司 玻璃涂胶方法、装置、电子设备和存储介质
CN111744706A (zh) * 2020-06-23 2020-10-09 梅卡曼德(北京)机器人科技有限公司 物件的喷胶方法、装置、电子设备及存储介质
WO2020232406A1 (en) * 2019-05-16 2020-11-19 University Of Maryland, College Park Confidence-based robotically-assisted surgery system
US20200406615A1 (en) * 2019-06-28 2020-12-31 Board Of Regents, The University Of Texas System Line width control and trajectory planning for robot guided inkjet deposition
CN113189934A (zh) * 2021-05-11 2021-07-30 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机
CN113199480A (zh) * 2021-05-11 2021-08-03 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机
CN113199479A (zh) * 2021-05-11 2021-08-03 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN104484883A (zh) * 2014-12-24 2015-04-01 河海大学常州校区 基于视频的三维虚拟船舶定位及其轨迹模拟方法
CN106600643B (zh) * 2016-10-25 2019-06-28 长安大学 一种基于轨迹分析的人数统计方法
CN108636671B (zh) * 2018-05-24 2020-08-04 盐城工学院 一种不规则面片偏置喷涂路径规划方法
US11170526B2 (en) * 2019-03-26 2021-11-09 Samsung Electronics Co., Ltd. Method and apparatus for estimating tool trajectories
US11017586B2 (en) * 2019-04-18 2021-05-25 Adobe Inc. 3D motion effect from a 2D image
CN113996457B (zh) * 2020-03-06 2022-07-01 梅卡曼德(北京)机器人科技有限公司 喷胶轨迹信息确定方法及装置

Patent Citations (10)

Publication number Priority date Publication date Assignee Title
CN107808415A (zh) * 2017-11-17 2018-03-16 中国科学院合肥物质科学研究院 基于机器视觉的鞋底边缘轨迹及涂胶位姿提取方法
CN109954613A (zh) * 2017-12-25 2019-07-02 广州智信科技有限公司 喷涂方法
WO2020232406A1 (en) * 2019-05-16 2020-11-19 University Of Maryland, College Park Confidence-based robotically-assisted surgery system
US20200406615A1 (en) * 2019-06-28 2020-12-31 Board Of Regents, The University Of Texas System Line width control and trajectory planning for robot guided inkjet deposition
CN110717984A (zh) * 2019-09-10 2020-01-21 佛山缔乐视觉科技有限公司 基于三维重构的鞋底自动涂胶方法、系统及存储介质
CN111369593A (zh) * 2020-03-16 2020-07-03 梅卡曼德(北京)机器人科技有限公司 玻璃涂胶方法、装置、电子设备和存储介质
CN111744706A (zh) * 2020-06-23 2020-10-09 梅卡曼德(北京)机器人科技有限公司 物件的喷胶方法、装置、电子设备及存储介质
CN113189934A (zh) * 2021-05-11 2021-07-30 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机
CN113199480A (zh) * 2021-05-11 2021-08-03 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机
CN113199479A (zh) * 2021-05-11 2021-08-03 梅卡曼德(北京)机器人科技有限公司 轨迹生成方法、装置、电子设备、存储介质和3d相机

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN117113591A (zh) * 2023-10-23 2023-11-24 深圳市南科佳安机器人科技有限公司 产品加工方法、装置及存储介质和终端设备
CN117113591B (zh) * 2023-10-23 2024-02-23 深圳市南科佳安机器人科技有限公司 产品加工方法、装置及存储介质和终端设备

Also Published As

Publication number Publication date
CN113199480A (zh) 2021-08-03
CN113199480B (zh) 2023-02-10

Similar Documents

Publication Publication Date Title
WO2022237166A1 (zh) 轨迹生成方法、装置、电子设备、存储介质和3d相机
CN113199479B (zh) 轨迹生成方法、装置、电子设备、存储介质和3d相机
WO2022222515A1 (zh) 基于机器人视觉的物品表面涂胶方法、装置、设备和介质
Kuhnert et al. Fusion of stereo-camera and pmd-camera data for real-time suited precise 3d environment reconstruction
US11667036B2 (en) Workpiece picking device and workpiece picking method
KR101954855B1 (ko) 볼륨 내 물체의 심도 맵핑을 위한 광 패턴의 강도 변화의 사용
US9275302B1 (en) Object detection and identification
CN113189934A (zh) 轨迹生成方法、装置、电子设备、存储介质和3d相机
US9424649B1 (en) Moving body position estimation device and moving body position estimation method
JP6469905B2 (ja) 適応型グランドプレーン推定を用いた自律運転のためのモノキュラ3d位置特定
CN113869422B (zh) 多相机目标匹配方法、系统、电子设备及可读存储介质
CN109313822B (zh) 基于机器视觉的虚拟墙构建方法及装置、地图构建方法、可移动电子设备
JP6040264B2 (ja) 情報処理装置、情報処理装置の制御方法、およびプログラム
KR101706092B1 (ko) 3차원 물체 추적 방법 및 장치
KR20200042781A (ko) 입체 모델 생성 방법 및 장치
WO2024019899A1 (en) Virtual production based on display assembly pose and pose error correction
CN116840853A (zh) 一种多线激光雷达拖尾点与离群点去除方法及系统
WO2022237544A1 (zh) 轨迹生成方法、装置、电子设备及存储介质
CN112967307A (zh) 基于机器人移动速度控制的凹槽填充方法、装置、电子设备和存储介质
WO2022222934A1 (zh) 玻璃涂胶方法、玻璃涂胶装置、电子设备和存储介质
CN115661252A (zh) 一种实时位姿估计方法、装置、电子设备以及存储介质
CN112975222B (zh) 基于多线结构光焊缝跟踪传感器的焊枪末端位姿识别方法
CN113223029A (zh) 玻璃涂胶方法、玻璃涂胶装置、电子设备和存储介质
Li et al. Visual localization and object tracking for the NAO robot in dynamic environment
JP6186072B2 (ja) 単一カメラを用いた3dでの移動物体の位置測定

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21941728

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE