CN115113623A - Pallet identification and positioning method and system based on 3D sensor - Google Patents


Publication number
CN115113623A
CN115113623A (application CN202210750713.4A)
Authority
CN
China
Prior art keywords
point cloud
pallet
sensor
target
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210750713.4A
Other languages
Chinese (zh)
Inventor
王冠
张腾宇
赵越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xiangong Intelligent Technology Co ltd
Original Assignee
Shanghai Xiangong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xiangong Intelligent Technology Co ltd filed Critical Shanghai Xiangong Intelligent Technology Co ltd
Priority to CN202210750713.4A priority Critical patent/CN115113623A/en
Publication of CN115113623A publication Critical patent/CN115113623A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a pallet identification and positioning method and system based on a 3D sensor. The identification method comprises the steps of: S1, determining the end-face parameters of the target pallet to be identified and establishing a feature template; S2, building a target point cloud from sensing data acquired by the 3D sensor, preprocessing it, filtering out the ground point cloud, and merging the point clouds on the same end face of the target pallet into planar point cloud blocks; S3, converting the planar point cloud blocks into a 2D plane in the camera coordinate system via the external parameters of the 3D sensor; S4, establishing a coding map, placing the 2D plane into it, constructing in the coding map a double-line sliding window with a preset feature-recognition spacing that moves synchronously at a preset step length and performs line-segment fitting on the 2D-plane points encountered along its path, and outputting the recognition result corresponding to the feature template when the lengths of the lines currently fitted by the double-line sliding window all conform to the feature template. The method is thus universally applicable to a variety of 3D sensors, improving versatility while reducing the computational performance required.

Description

Pallet identification and positioning method and system based on 3D sensor
Technical Field
The invention relates to robot vision positioning technology, and in particular to a method and a system for identifying and positioning pallet end-face shapes from data acquired by a conventional 3D sensor.
Background
The robots referred to in the present invention are mobile robots capable of autonomous operation, such as wheeled robots, which can be classified by working function as cleaning robots, disinfection robots, inspection robots, transfer robots, and the like.
In industrial applications, automated robots with moving and transporting functions are gradually replacing human labor in existing production systems, carrying out tasks such as material transport and pallet pick-up. Whether such a robot can be controlled automatically, however, depends chiefly on continuously evolving identification and positioning technology.
For example, an automated forklift robot transfers goods by inserting its forks into a pallet. If the exact pose of the pallet is not known in advance, the insertion is prone to failure, so rapidly identifying and locating the pallet has long been a goal of iterative improvement in this field.
Existing technology that uses an RGB-D camera to determine the pallet position from image features is widely applied, but it suffers from poor versatility because it depends on a specific depth-camera device.
Sample-learning-based technology, i.e. deep learning, is also currently popular; it can achieve high recognition accuracy, but places high demands on computational performance.
Disclosure of Invention
Therefore, the main object of the present invention is to provide a pallet identification and positioning method and system based on a 3D sensor that adapt universally to a variety of 3D sensors, improving versatility while reducing the computational performance required.
In order to achieve the above object, according to one aspect of the present invention, there is provided a pallet identification method based on a 3D sensor, comprising the steps of:
s1, determining each end face parameter of the target pallet for identification, and establishing a characteristic template;
s2, establishing a target point cloud based on sensing data acquired by the 3D sensor, preprocessing the target point cloud, filtering ground point cloud, and combining the point clouds on the same end face of the target pallet to obtain a planar point cloud block;
s3, converting the planar point cloud block into a 2D plane under a camera coordinate system through the external parameters of the 3D sensor;
s4, establishing a coding graph, putting the 2D plane into the coding graph, constructing a double-line sliding window with a preset feature recognition distance in the coding graph, synchronously moving according to a preset step length to perform line segment fitting scanning on the points on the 2D plane encountered on the path, and outputting the recognition result corresponding to the feature module when the lengths of the current lines respectively fitted by the double-line sliding window are judged to be in accordance with the feature template.
In a possible preferred embodiment, the feature template comprises: a first sub-feature, which is a continuous line-segment feature whose length falls within the end-face parameter range of the target pallet; and a second sub-feature, which is a set of spaced line-segment features whose individual lengths and spacing distances fall within the end-face parameter range of the target pallet.
In a possible preferred embodiment, the step of preprocessing the target point cloud in step S2 includes:
S21, filtering the target point cloud: converting it into the robot coordinate system according to the external parameters of the 3D sensor to obtain the corresponding point cloud coordinates, and then, with reference to the height parameters of the target pallet, filtering out the points that do not match;
S22, removing outliers from the target point cloud processed in step S21 by statistical filtering.
In a possible preferred embodiment, the step of filtering the ground point cloud in step S2 includes:
s23, respectively extracting a plurality of points from the target point cloud randomly for a plurality of times to fit a plurality of reference planes;
s24, counting the number of corresponding points between each datum plane and all points of the target point cloud within the tolerance distance range;
s25 selects the reference plane having the largest number of corresponding points as the ground plane, so as to assign all points on the reference plane to the ground component for rejection, and assign the remaining points to the object component.
In a possible preferred embodiment, the step of combining the point clouds on the same end surface of the target pallet to obtain the planar point cloud block in step S2 includes:
S26, randomly selecting, from the object-component point cloud, seed points whose normal vectors are perpendicular to the ground normal, judging whether each seed point and the surrounding non-seed points lie in the same plane, and, when they are determined to lie in the same plane, promoting those non-seed points to new seed points;
s27, iteratively judging whether the new seed point and the surrounding non-seed points are in the same plane, and counting all the seed points in a point cloud area growing mode;
s28, judging whether the counted number of the seed points is within a preset number range, when the number of the seed points is within the number range, constructing the planar point cloud blocks based on the counted seed points, and simultaneously merging all the planar point cloud blocks judged to be on the same side of the same object.
In a possible preferred embodiment, the step of determining the same face of the same object comprises: converting each planar point cloud block into a plane equation ax + by + cz = 1 and judging whether the three coefficients a, b and c of the plane equations of two blocks are similar; if the absolute value of each coefficient difference is less than a preset threshold, the planar point cloud blocks are judged to lie on the same face of the same object.
In a possible preferred embodiment, the step of converting the planar point cloud block to a 2D plane in the camera coordinate system by using the external parameters of the 3D sensor in step S3 includes:
S31, encoding each point in the planar point cloud block as (h, w, z, a, b, c, yaw), where h and w are the coordinates of the point in the camera coordinate system and a, b and c are the coefficients of the block's plane equation ax + by + cz = 1, and then computing yaw from these coefficients [formula rendered only as an equation image in the published text];
S32, rotating each planar point cloud by its yaw about the center point of the planar point cloud block so that it lies parallel to the h and w axes, thereby obtaining the 2D plane.
In order to achieve the above object, according to another aspect of the present invention, there is also provided a pallet positioning method based on a 3D sensor, comprising the steps of:
S1, according to the pallet identification method based on a 3D sensor as claimed in any one of claims 1 to 7, obtaining the average of the center points of the matched double-line sliding-window fitted lines as the center-point coordinates of the currently identified target pallet end face, and using those coordinates as the x, y and z parameters of the 6D pose;
S2, setting the roll and pitch angles to 0;
S3, according to the plane equation ax + by + cz = 1 established for the planar point cloud block and its three coefficients a, b and c, calculating the yaw parameter value [formula rendered only as an equation image in the published text] to obtain the 6D pose of the target pallet.
To achieve the above object, according to another aspect of the present invention, there is also provided a pallet identification system based on a 3D sensor, including:
a storage unit storing a program for implementing the steps of the 3D sensor based pallet identification method according to any one of claims 1 to 7 for the control unit and the processing unit to timely invoke and execute;
the control unit controls the 3D sensor to acquire a target point cloud in a scene so as to send the target point cloud to the processing unit;
the processing unit filters the ground point cloud out of the target point cloud and merges the point clouds on the same end face of the target pallet into planar point cloud blocks; it then converts the planar point cloud blocks into a 2D plane in the camera coordinate system via the external parameters of the 3D sensor, establishes a coding map, places the 2D plane into it, constructs in the coding map a double-line sliding window with a preset feature-recognition spacing that moves synchronously at a preset step length and performs line-segment fitting on the 2D-plane points encountered along its path, and outputs the recognition result corresponding to the feature template when the lengths of the lines currently fitted by the double-line sliding window are judged to conform to the feature template.
To achieve the above object, according to another aspect of the present invention, there is also provided a 3D sensor-based pallet positioning system, comprising:
a storage unit storing a program for implementing the steps of the 3D sensor based pallet identification method according to any one of claims 1 to 7 for the control unit and the processing unit to timely invoke and execute;
the control unit controls the 3D sensor to acquire a target point cloud in a scene so as to send the target point cloud to the processing unit;
the processing unit filters the ground point cloud out of the target point cloud and merges the point clouds on the same end face of the target pallet into planar point cloud blocks; it converts the planar point cloud blocks into a 2D plane in the camera coordinate system via the external parameters of the 3D sensor, establishes a coding map, places the 2D plane into it, constructs in the coding map a double-line sliding window with a preset feature-recognition spacing that moves synchronously at a preset step length and performs line-segment fitting on the 2D-plane points encountered along its path, and outputs the recognition result corresponding to the feature template when the lengths of the lines currently fitted by the double-line sliding window are judged to conform to the feature template;
the processing unit further takes the average of the center points of the matched double-line fitted lines as the center-point coordinates of the currently identified target pallet end face, uses those coordinates as the x, y and z parameters of the 6D pose, sets the roll and pitch angles to 0, and calculates the yaw parameter value [formula rendered only as an equation image in the published text] to obtain the 6D pose of the target pallet.
The pallet identification and positioning method and system based on a 3D sensor are particularly suitable for identifying and positioning pallets; for any other object the robot needs to interact with, as long as the object presents definite, continuous surface features at an angle the sensor can scan, the object and its position can be identified accurately and quickly. The approach is also highly extensible: for the pallet in the present example, whether standard or non-standard, the subsequent matching and identification steps can proceed once its structural parameters have been obtained in advance.
In addition, the method applies broadly to various 3D sensors, giving it strong versatility: it works with depth cameras, multi-line lidars and solid-state lidars alike, and the point cloud can be processed directly without stitching. Moreover, the scheme can perform recognition and target-pose calculation at any time without pre-trained samples, so compared with a deep-learning scheme it consumes less computational performance and is more elegant.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic illustration of end parameters of an exemplary target pallet of the present invention;
FIG. 2 is a schematic diagram illustrating steps of a pallet identification method based on a 3D sensor according to the present invention;
FIG. 3 is a schematic diagram of a 3D sensor coordinate system (camera coordinate system) according to the present invention;
FIG. 4 is a schematic diagram of a point cloud of an end face of a pallet in a code diagram fitted by a two-line sliding window in the 3D sensor-based pallet identification method according to the present invention;
FIG. 5 is a schematic diagram of a two-line sliding window performing fitting matching on a pallet end point cloud in a code map and stopping when a characteristic template is met in the pallet identification method based on a 3D sensor according to the present invention;
FIG. 6 is a schematic diagram of a feature template in the 3D sensor-based pallet identification method according to the present invention;
fig. 7 is a schematic structural diagram of the pallet identification and positioning system based on 3D sensor according to the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following will clearly and completely describe the specific technical solution of the present invention with reference to the embodiments to help those skilled in the art to further understand the present invention. It should be apparent that the embodiments described herein are only a few embodiments of the present invention, and not all embodiments. It should be noted that the embodiments and features of the embodiments in the present application can be combined with each other without departing from the inventive concept and without conflicting therewith by those skilled in the art. All other embodiments based on the embodiments of the present invention, which can be obtained by a person of ordinary skill in the art without any creative effort, shall fall within the disclosure and the protection scope of the present invention.
Furthermore, the terms "first," "second," "S1," "S2," and the like in the description and claims of the present invention and in the drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those described herein. Also, the terms "including" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. Unless expressly stated or limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in this case can be understood by those skilled in the art in combination with the prior art as the case may be.
It should be noted that in the examples of the present invention the 3D sensor may be a 3D laser sensor, or a 3D camera of which only the depth information is used. In particular, the approach of the present invention is suitable for 3D lidar in non-repetitive scanning mode, but equally for mechanically scanned 3D lidar and for depth cameras. For mechanically scanned 3D lidar, the point cloud can be processed directly without stitching; for a depth camera, the depth map is first converted into a 3D point cloud, after which the calculation likewise requires no point cloud stitching.
Conceptually, the scheme of the invention converts the discrete 3D point cloud into a 2D plane through a special encoding and then identifies and positions the pallet with a simple two-line traversal computation, which makes it suitable for a variety of 3D sensors such as depth cameras, multi-line lidars and solid-state lidars.
In addition, although the following examples illustrate the identification and positioning process of a pallet with one structure, those skilled in the art can understand that the solution of the present invention can also be applied to identify a plurality of pallets with different kinds, and can even be extended to non-standard pallets according to the following embodiments. The pose of the pallet can be quickly and accurately identified and positioned only by configuring the parameters of the pallet in advance and establishing the characteristic template.
Specifically, as shown in fig. 1 to 6, the pallet identification method based on a 3D sensor provided by the present invention includes the steps of:
step S1
And determining the parameters of each end face of the target pallet for identification, and establishing a characteristic template.
Specifically, as shown in fig. 1, the parameters of the target pallet end face are determined in advance, and a feature template is then established from the features of the pallet's end faces, such as the three-leg configuration of the pallet used as the example here. To match the two-line traversal method proposed in this case, the feature template can be exemplified by two line-segment features of different size and configuration at specific locations on the pallet end face.
For example, the feature template includes: a first sub-characteristic that is a continuous line segment characteristic and the length exists in an end parameter range of the target pallet; and the second sub-characteristic is a plurality of interval line segment characteristics, and the length and the interval distance of each interval line segment exist in the end face parameter range of the target pallet.
As shown in fig. 6, since the pallet has 3 legs, the line at L1 represents the continuous feature of the pallet's upper board, its length denoted a1, while the spaced lines at L2 represent the feature of the board sections where the 3 legs stand apart from each other: the length of each segment is denoted b1 or c1, the spacing distance is denoted d1, and the spacing between L1 and L2 is denoted e1. The feature template is established from these preset identification constraints and used in the subsequent identification.
Here a1, b1, c1, d1 and e1 correspond respectively to the pallet dimensions a, b, c, d and e, and are essentially equal or close to them.
Furthermore, those skilled in the art should understand that the above-mentioned feature templates are only examples, and not intended to be limiting, and those skilled in the art can also make similar arrangements according to the specific structure of the pallet without departing from the spirit of the present invention, and the arrangements also fall within the scope of the present disclosure.
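In code, such a template reduces to a handful of dimension fields. The sketch below is illustrative only: the field names mirror the a1 to e1 parameters above, while the tolerance field eps and the example dimensions are assumptions rather than values from the patent.

```python
from dataclasses import dataclass

@dataclass
class FeatureTemplate:
    """Identification constraints derived from the pallet end-face parameters."""
    a1: float  # length of the continuous top-board segment (first sub-feature)
    b1: float  # length of an outer leg segment (second sub-feature)
    c1: float  # length of the remaining leg segments (second sub-feature)
    d1: float  # spacing between adjacent leg segments
    e1: float  # spacing between the scan lines L1 and L2
    eps: float = 0.02  # matching tolerance in meters (assumed value)

# Example: a 1.2 m top board and three 0.15 m legs spaced 0.375 m apart (assumed dimensions)
template = FeatureTemplate(a1=1.2, b1=0.15, c1=0.15, d1=0.375, e1=0.05)
```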
Step S2
And establishing a target point cloud based on sensing data acquired by the 3D sensor, preprocessing the target point cloud, filtering out ground point cloud, and combining the point clouds on the same end surface of the target pallet to obtain a planar point cloud block.
Specifically, in the preprocessing step, the pallet point cloud data is first filtered, after which ground components and object components are separated from the filtered data. The filtering operation may proceed as follows: the acquired pallet point cloud data is expressed as point cloud coordinates in a natural three-axis coordinate system, the expected height of the pallet to be identified is obtained, and the pallet point cloud data that does not match this expected height is then filtered out.
For example, when the height of the pallet is 15cm, if the pallet is placed on a horizontal ground, the point cloud may be acquired only 0-15 cm away from the ground by using the pass-through filter, and if the pallet is stacked on two layers of pallets with the same height, the point cloud may be acquired 30-45 cm away from the ground by using the pass-through filter.
The point cloud obtained after this processing is then passed through a statistical filter to remove outliers. This multi-stage filtering preprocessing effectively reduces noise and redundant data, which improves the accuracy and speed of the subsequent algorithm as well as its robustness and real-time performance.
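As a concrete illustration, a minimal numpy sketch of this two-stage preprocessing follows, assuming the cloud is an (N, 3) array already expressed in a frame whose z axis is height; the neighbor count k, the standard-deviation ratio, and the brute-force distance computation are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def preprocess(cloud: np.ndarray, z_min: float, z_max: float,
               k: int = 20, std_ratio: float = 2.0) -> np.ndarray:
    """cloud: (N, 3) points in the robot frame. Pass-through filter on height,
    then statistical outlier removal (assumed parameters)."""
    # 1. Pass-through filter: keep points inside the expected pallet height band,
    #    e.g. 0 to 0.15 m for a 15 cm pallet standing on the floor.
    cloud = cloud[(cloud[:, 2] >= z_min) & (cloud[:, 2] <= z_max)]
    # 2. Statistical filter: mean distance of each point to its k nearest
    #    neighbors; drop points whose mean exceeds the global mean by
    #    std_ratio standard deviations. Brute force for clarity; a KD-tree
    #    would be used in practice.
    dists = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self (0)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return cloud[keep]
```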
Further, the step of filtering the ground point cloud comprises: and selecting the reference plane with the largest number of points in the tolerance distance range from all the points in the target point cloud data to the reference plane as the ground, and attributing all the points on the ground to the ground component and the rest points to the object component.
Specifically, to remove the ground point cloud from the preprocessed cloud, the ground components and object components of the pallet point cloud data must be extracted separately; in this example they are separated by a random sample consensus (RANSAC) algorithm with plane-model matching. Several points are randomly drawn from the pallet point cloud data a number of times, and a reference plane is fitted to each draw; the number of points in the cloud whose distance to each reference plane falls within a tolerance range is then counted, and these counts are compared to determine the ground component among the candidate planes.
For example, the reference plane with the largest number of points within the tolerance distance range from the reference plane may be selected as the ground under the point cloud data, and all the points on the ground may be assigned to the ground component, so as to separate the ground component and the object component in the pallet point cloud data.
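The separation described above is plain RANSAC plane fitting, which might be sketched as follows; the iteration count and tolerance are assumed values.

```python
import numpy as np

def ransac_ground(cloud: np.ndarray, iters: int = 100, tol: float = 0.02):
    """Fit candidate planes through random point triples and keep the one
    with the most points within tol; returns (inlier_mask, (normal, d))."""
    best_mask, best_count, best_plane = None, -1, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p0, p1, p2 = cloud[rng.choice(len(cloud), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:      # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs((cloud - p0) @ n)   # point-to-plane distances
        mask = dist < tol
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
            best_plane = (n, -p0 @ n)
    return best_mask, best_plane

# Usage: ground = cloud[mask]; object component = cloud[~mask]
```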
Further, the step of combining the point clouds on the same end face of the target pallet to obtain the planar point cloud block comprises the following steps:
and extracting planar point cloud blocks vertical to the ground, for example, randomly selecting seed points from the object component point cloud data, judging whether the seed points and non-seed points serving as the peripheries of the seed points are in the same plane, wherein the normal vector of the seed points is vertical to the normal vector of the ground, and when the seed points and the non-seed points are determined to be in the same plane, determining the non-seed points as new seed points.
And then, iteratively judging whether the new seed point and the surrounding non-seed points are in the same plane or not, and counting all the seed points in a point cloud region growing mode.
And constructing a planar point cloud block vertical to the ground based on the statistical seed points. Specifically, it is determined whether the counted number of seed points is within a preset number range, and when the number of seed points is within the number range, an area-shaped point cloud block perpendicular to the ground may be constructed based on the counted seed points.
Meanwhile, if the number of seed points is too high or too low, the face formed by the counted seed points is judged not to be a valid planar point cloud block. The plane-equation parameters of each accepted block are also obtained from this fitting.
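A compact sketch of this region growing is given below, assuming per-point unit normals have already been estimated (e.g. from local plane fits) and that seeds are drawn from points whose normals are perpendicular to the ground normal; the neighborhood radius and normal-agreement threshold are illustrative assumptions.

```python
import numpy as np

def grow_region(points: np.ndarray, normals: np.ndarray, seed_idx: int,
                radius: float = 0.05, cos_tol: float = 0.95) -> np.ndarray:
    """Grow one vertical planar patch from a seed: a neighbor joins the
    region when it lies within `radius` of a region point and its normal
    agrees with that point's normal."""
    region = {seed_idx}
    frontier = [seed_idx]
    while frontier:
        i = frontier.pop()
        near = np.where(np.linalg.norm(points - points[i], axis=1) < radius)[0]
        for j in near:
            if j not in region and abs(normals[i] @ normals[j]) > cos_tol:
                region.add(j)
                frontier.append(j)  # newly accepted point becomes a new seed
    return np.fromiter(region, dtype=int)

# Patches whose point count falls outside a preset [n_min, n_max] range are discarded.
```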
Then, after all planar point cloud blocks have been obtained, the blocks perpendicular to the ground are further merged in this example to preserve the integrity of each single face of a single object. The judgment is made from the plane equations and the distance: if the coefficients of the plane equations are similar and the distance satisfies a threshold, the two point cloud blocks are merged into one.
As in the example, any plane equation can be written as ax + by + cz = 1; if, for two planar point cloud blocks, the three coefficients a, b and c are similar, i.e. the absolute value of each coefficient difference is below the threshold, the two blocks can be merged into one. This preserves the integrity of the same face of the same object.
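The merge test itself can be a direct comparison of the fitted coefficients, as in this sketch; the threshold value is an assumption.

```python
def same_face(plane1, plane2, thresh: float = 0.05) -> bool:
    """plane = (a, b, c) from ax + by + cz = 1; two blocks are merged when
    all three coefficients agree within the threshold."""
    return all(abs(p - q) < thresh for p, q in zip(plane1, plane2))
```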
In addition, to simplify the calculation in later steps, the center point of each planar point cloud block can be computed at this stage. In the invention, because the pallet is symmetric, the maximum and minimum of the block along the z axis of the camera coordinate system can be found directly, and the center coordinates of the block are obtained by averaging those two extreme points.
Step S3
The planar point cloud blocks are converted into a 2D plane in the camera coordinate system via the external parameters of the 3D sensor. Specifically, each point in a planar point cloud block is first encoded as (h, w, z, a, b, c, yaw), where h and w are its translated y- and x-axis coordinates in the camera coordinate system (the origin may be translated to the upper-left corner) and a, b and c are the coefficients of the block's plane equation ax + by + cz = 1, from which the yaw value can be computed [formula rendered only as an equation image in the published text].
Because the center coordinates of the planar point cloud block were obtained in step S2, each planar point cloud can now be rotated by its yaw about the block's center point until all the planar point clouds are parallel to the x and y axes, yielding the end-face 2D plane of the target pallet in the camera coordinate system, as shown in fig. 4, for subsequent identification.
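A sketch of this encoding and alignment step follows. The published text shows the yaw formula only as an equation image, so deriving yaw from the horizontal components of the plane normal via atan2 is an assumed reconstruction, not the patent's exact expression.

```python
import numpy as np

def to_2d_plane(block: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    """block: (N, 3) points of one planar patch in the camera frame.
    Rotate the patch about its center so it lies parallel to the h/w axes,
    then keep the two in-plane coordinates for the coding map."""
    yaw = np.arctan2(b, a)  # ASSUMED reconstruction of the image-only formula
    center = block.mean(axis=0)
    cos_y, sin_y = np.cos(-yaw), np.sin(-yaw)
    R = np.array([[cos_y, -sin_y, 0.0],     # rotation by -yaw about the
                  [sin_y,  cos_y, 0.0],     # vertical axis
                  [0.0,    0.0,   1.0]])
    aligned = (block - center) @ R.T + center
    return aligned[:, :2]   # (h, w) coordinates used by the coding map
```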
Step S4
As shown in figs. 4 to 5, the coding map of size W × H is first created, its size chosen with reference to the resolution supported by the 3D sensor, and the 2D plane obtained in step S3 is then placed into it.
Further, a double-line sliding window is constructed starting from the uppermost edge of the coding map, with the two lines L1 and L2 arranged in parallel; their spacing is preferably e1, which in the present example is slightly smaller than the pallet dimension e.
The moving step length of the double-line sliding window is then set, and the window moves along the positive H direction parallel to the W axis with the spacing between the two lines unchanged. At each move, the points falling inside the window, i.e. the points on L1 and L2, are line-segment fitted and scanned: fitting uses the direct distance between adjacent points and their encoded coefficients a, b and c as references, so that points satisfying the same ax + by + cz = 1 equation and lying sufficiently close together are judged to belong to the same segment.
Then the length judgment is carried out. The length of the first sub-feature to be identified on L1 is the a1 parameter range corresponding to pallet dimension a; the lengths and spacing of the second sub-feature to be identified on L2 are the b1, c1 and d1 parameter ranges corresponding to pallet dimensions b, c and d. Template matching can therefore be judged by logic of the following form (the original shows it only as an equation image; reconstructed here from the surrounding definitions):

|len_L1 - a1| < ε, |len_L2,i - b1| < ε or |len_L2,i - c1| < ε for each fitted leg segment i, and |gap_L2 - d1| < ε for each gap,

where ε is an error threshold.
The double-line sliding window keeps moving and the matching judgment is repeated; whenever the lengths of the lines currently fitted by the two lines all conform to the feature template, the result is recorded. It is then judged whether the sliding has finished: if not, the sliding continues; otherwise the identification ends and the recognition result corresponding to the feature template is output.
It should be noted that the pallet end face shown in fig. 5 in this example is an idealized, fairly complete face; in practice part of the point cloud may be missing, for example at the three pallet legs, so, as shown in fig. 5, the pass with the highest matching degree along the window's advance may be selected as the recognition result to output.
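Putting the pieces together, the double-line traversal can be sketched as the loop below, reusing the FeatureTemplate from the earlier sketch; the segment-grouping rule, step size, band width, and the assignment of b1 to the outer legs are simplified assumptions rather than the patent's exact procedure.

```python
import numpy as np

def fit_segments(xs: np.ndarray, gap_tol: float = 0.03):
    """Group in-window w coordinates into segments: a new segment starts
    wherever two adjacent sorted points are farther apart than gap_tol."""
    if len(xs) == 0:
        return []
    xs = np.sort(xs)
    breaks = np.where(np.diff(xs) > gap_tol)[0]
    starts = np.concatenate(([0], breaks + 1))
    ends = np.concatenate((breaks, [len(xs) - 1]))
    return [(xs[s], xs[e]) for s, e in zip(starts, ends)]

def scan(points2d: np.ndarray, tpl, step: float = 0.01, band: float = 0.01):
    """points2d: (N, 2) array of (h, w) points in the coding map. Slide the
    L1/L2 line pair (spacing tpl.e1) along h and report the offsets where the
    fitted segment lengths and gaps match the template within tpl.eps."""
    hits = []
    for h in np.arange(points2d[:, 0].min(), points2d[:, 0].max(), step):
        l1 = fit_segments(points2d[np.abs(points2d[:, 0] - h) < band, 1])
        l2 = fit_segments(points2d[np.abs(points2d[:, 0] - (h + tpl.e1)) < band, 1])
        if len(l1) != 1 or len(l2) != 3:   # expect one board, three legs
            continue
        top_len = l1[0][1] - l1[0][0]
        legs = [e - s for s, e in l2]
        gaps = [l2[i + 1][0] - l2[i][1] for i in range(2)]
        if (abs(top_len - tpl.a1) < tpl.eps
                and abs(legs[0] - tpl.b1) < tpl.eps          # outer leg: assumed b1
                and all(abs(l - tpl.c1) < tpl.eps for l in legs[1:])
                and all(abs(g - tpl.d1) < tpl.eps for g in gaps)):
            hits.append(h)
    return hits   # best hit (highest matching degree) is kept as the result
```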
Therefore, through the identification process of steps S1 to S4, it can be determined whether the object represented by the target point cloud is the corresponding pallet. Moreover, the process also provides the basis for calculating the pose of the pallet.
Therefore, the invention also provides a pallet positioning method based on the 3D sensor, which comprises the following steps:
step S1
According to the pallet identification method based on the 3D sensor described above, the average of the center points of the matched double-line sliding-window fitted lines is obtained as the center-point coordinates of the currently identified target pallet end face, and these coordinates are used as the x, y and z parameters of the 6D pose.
Specifically, the pose of the pallet has 6 degrees of freedom in space: the translations x, y and z and the rotations roll, pitch and yaw, which together form the 6D pose. Accordingly, from the best matching line segments fitted by the double-line sliding window, the average of the midpoints of a1 and c1 is taken as the center position of the pallet, and this center point is then mapped back into the camera coordinate system to obtain the x, y and z parameters.
Step S2
Since the pallet and the forklift robot are assumed to share the same space with a horizontal floor, the pallet can be regarded as parallel to the forklift tines, i.e. it has no roll or pitch, so the roll and pitch angles can be set to 0.
At this point only the yaw angle value remains to be calculated to obtain the 6D pose of the pallet.
Step S3
Based on the plane equation ax + by + cz = 1 of the pallet planar point cloud obtained in the preceding steps of the identification method and its three coefficients a, b and c, the yaw parameter value is calculated [formula rendered only as an equation image in the published text], yielding the complete 6D pose of the target pallet and completing its positioning.
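The positioning step then assembles the pose in a few lines; as before, the yaw expression is an assumed reconstruction of the equation image, not the patent's published formula.

```python
import numpy as np

def pallet_pose(center_cam: np.ndarray, a: float, b: float, c: float) -> dict:
    """center_cam: (3,) end-face center in the camera frame, taken as the
    mean of the matched fitted-line midpoints. Roll and pitch are fixed at 0
    because the pallet and the forklift share a horizontal floor."""
    x, y, z = center_cam
    yaw = np.arctan2(b, a)  # ASSUMED: the published formula is an image
    return {"x": x, "y": y, "z": z, "roll": 0.0, "pitch": 0.0, "yaw": yaw}
```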
On the other hand, referring to fig. 7, the present invention also provides a pallet recognition system based on a 3D sensor, which includes:
and the storage unit stores a program for realizing the steps of the pallet identification method based on the 3D sensor in the embodiment, so that the control unit and the processing unit can call and execute the steps at proper time.
The control unit controls the 3D sensor to acquire the target point cloud in the scene to send the target point cloud to the processing unit.
The processing unit filters the ground point cloud out of the target point cloud and merges the point clouds on the same end face of the target pallet into planar point cloud blocks; it then converts the planar point cloud blocks into a 2D plane in the camera coordinate system via the external parameters of the 3D sensor, establishes a coding map, places the 2D plane into it, constructs in the coding map a double-line sliding window with a preset feature-recognition spacing that moves synchronously at a preset step length and performs line-segment fitting on the 2D-plane points encountered along its path, and outputs the recognition result corresponding to the feature template when the lengths of the lines currently fitted by the double-line sliding window are judged to conform to the feature template.
On the other hand, referring to fig. 7, the present invention also provides a pallet positioning system based on a 3D sensor, which corresponds to the above-mentioned identification method, and includes:
and the storage unit stores a program for realizing the steps of the pallet identification method based on the 3D sensor in the embodiment, so that the control unit and the processing unit can call and execute the steps at proper time.
The control unit controls the 3D sensor to acquire the target point cloud in the scene to send the target point cloud to the processing unit.
The processing unit filters the ground point cloud out of the target point cloud and merges the point clouds on the same end face of the target pallet into planar point cloud blocks; it then converts the planar point cloud blocks into a 2D plane in the camera coordinate system via the external parameters of the 3D sensor, establishes a coding map, places the 2D plane into it, constructs in the coding map a double-line sliding window with a preset feature-recognition spacing that moves synchronously at a preset step length and performs line-segment fitting on the 2D-plane points encountered along its path, and outputs the recognition result corresponding to the feature template when the lengths of the lines currently fitted by the double-line sliding window are judged to conform to the feature template.
The processing unit further takes the average of the center points of the matched double-line fitted lines as the center-point coordinates of the currently identified target pallet end face, uses those coordinates as the x, y and z parameters of the 6D pose, sets the roll and pitch angles to 0, and calculates the yaw parameter value [formula rendered only as an equation image in the published text] to obtain the 6D pose of the target pallet, completing its positioning.
In summary, the pallet identification and positioning method and system based on a 3D sensor provided by the invention are particularly suitable for identifying and positioning pallets; for any other object the robot needs to interact with, as long as the object presents definite, continuous surface features at an angle the sensor can scan, the object and its position can be identified accurately and quickly. The approach is also highly extensible: for the pallet in the present example, whether standard or non-standard, the subsequent matching and identification steps can proceed once its structural parameters have been obtained in advance.
In addition, the method applies broadly to various 3D sensors, giving it strong versatility: it works with depth cameras, multi-line lidars and solid-state lidars alike, and the point cloud can be processed directly without stitching. Moreover, the scheme can perform recognition and target-pose calculation at any time without pre-trained samples, so compared with a deep-learning scheme it consumes less computational performance and is more elegant.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof, and any modification, equivalent replacement, or improvement made within the spirit and principle of the invention should be included in the protection scope of the invention.
It will be appreciated by those skilled in the art that, in addition to implementing the system, apparatus and various modules thereof provided by the present invention in the form of pure computer readable program code, the same procedures may be implemented entirely by logically programming method steps such that the system, apparatus and various modules thereof provided by the present invention are implemented in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
In addition, all or part of the steps of the method according to the above embodiments may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a single chip, a chip, or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (10)

1. A pallet identification method based on a 3D sensor is characterized by comprising the following steps:
s1, determining each end face parameter of the target pallet for identification, and establishing a characteristic template;
s2, establishing a target point cloud based on sensing data acquired by the 3D sensor, preprocessing the target point cloud, filtering ground point cloud, and combining the point clouds on the same end face of the target pallet to obtain a planar point cloud block;
s3, converting the planar point cloud block into a 2D plane under a camera coordinate system through the external parameters of the 3D sensor;
S4, establishing a coding map and placing the 2D plane into it, constructing in the coding map a double-line sliding window with a preset feature-recognition spacing that moves synchronously at a preset step length and performs line-segment fitting on the 2D-plane points encountered along its path, and outputting the recognition result corresponding to the feature template when the lengths of the lines currently fitted by the double-line sliding window are judged to conform to the feature template.
2. The 3D sensor-based pallet identification method according to claim 1, wherein the feature template comprises: a first sub-feature, which is a continuous line-segment feature whose length falls within the end-face parameter range of the target pallet; and a second sub-feature, which is a set of spaced line-segment features whose individual lengths and spacing distances fall within the end-face parameter range of the target pallet.
3. The pallet identification method based on 3D sensor as claimed in claim 1, wherein the step of preprocessing the target point cloud in step S2 comprises:
s21, filtering the target point cloud, converting the target point cloud into a robot coordinate system according to external parameters of the 3D sensor to obtain corresponding point cloud coordinates, and filtering out unmatched target point cloud according to the height parameters of the target pallet;
S22, removing outliers from the target point cloud processed in step S21 by statistical filtering.
4. The pallet identification method based on 3D sensor as claimed in claim 1, wherein the step of filtering out the ground point cloud in step S2 comprises:
s23, respectively extracting a plurality of points from the target point cloud randomly for a plurality of times to fit a plurality of reference planes;
s24, counting the number of corresponding points between each datum plane and all points of the target point cloud within the tolerance distance range;
s25 selects the reference plane having the largest number of corresponding points as the ground plane, so as to assign all points on the reference plane to the ground component for rejection, and assign the remaining points to the object component.
5. The method for identifying a pallet based on a 3D sensor as claimed in claim 4 wherein the step of merging the point clouds of the same end face of the target pallet to obtain the planar point cloud blocks in step S2 comprises:
S26, randomly selecting, from the object-component point cloud, seed points whose normal vectors are perpendicular to the ground normal, judging whether each seed point and the surrounding non-seed points lie in the same plane, and, when they are determined to lie in the same plane, promoting those non-seed points to new seed points;
s27, iteratively judging whether the new seed points and the surrounding non-seed points are in the same plane, and counting all the seed points in a point cloud region growing mode;
s28, judging whether the counted number of the seed points is within a preset number range, when the number of the seed points is within the number range, constructing the planar point cloud blocks based on the counted seed points, and simultaneously merging all the planar point cloud blocks judged to be on the same side of the same object.
6. The method for identifying a pallet based on a 3D sensor as claimed in claim 5, wherein the step of determining the same face of the same object comprises: converting each planar point cloud block into a plane equation ax + by + cz = 1 and judging whether the three coefficients a, b and c of the plane equations of two blocks are similar; if the absolute value of each coefficient difference is less than a preset threshold, the planar point cloud blocks are judged to lie on the same face of the same object.
7. The pallet identification method according to claim 1 wherein the step of converting the planar point cloud block to a 2D plane in a camera coordinate system by using external parameters of the 3D sensor in step S3 comprises:
S31, encoding each point in the planar point cloud block as (h, w, z, a, b, c, yaw), where h and w are the coordinates of the point in the camera coordinate system and a, b and c are the coefficients of the block's plane equation ax + by + cz = 1, and then computing yaw from these coefficients [formula rendered only as an equation image in the published text];
S32, rotating each planar point cloud by its yaw about the center point of the planar point cloud block so that it lies parallel to the h and w axes, thereby obtaining the 2D plane.
8. A pallet positioning method based on a 3D sensor is characterized by comprising the following steps:
S1, according to the pallet identification method based on a 3D sensor as claimed in any one of claims 1 to 7, obtaining the average of the center points of the matched double-line sliding-window fitted lines as the center-point coordinates of the currently identified target pallet end face, and using those coordinates as the x, y and z parameters of the 6D pose;
S2, setting the roll and pitch angles to 0;
S3, according to the plane equation ax + by + cz = 1 established for the planar point cloud block and its three coefficients a, b and c, calculating the yaw parameter value [formula rendered only as an equation image in the published text] to obtain the 6D pose of the target pallet.
9. A pallet identification system based on a 3D sensor, characterized by comprising:
a storage unit storing a program for implementing the steps of the 3D sensor based pallet identification method according to any one of claims 1 to 7 for the control unit and the processing unit to timely invoke and execute;
the control unit controls the 3D sensor to collect target point clouds in a scene so as to send the target point clouds to the processing unit;
the processing unit filters the ground point cloud out of the target point cloud and merges the point clouds on the same end face of the target pallet into planar point cloud blocks; it then converts the planar point cloud blocks into a 2D plane in the camera coordinate system via the external parameters of the 3D sensor, establishes a coding map, places the 2D plane into it, constructs in the coding map a double-line sliding window with a preset feature-recognition spacing that moves synchronously at a preset step length and performs line-segment fitting on the 2D-plane points encountered along its path, and outputs the recognition result corresponding to the feature template when the lengths of the lines currently fitted by the double-line sliding window are judged to conform to the feature template.
10. A pallet positioning system based on a 3D sensor is characterized by comprising:
a storage unit storing a program for implementing the steps of the 3D sensor based pallet identification method according to any one of claims 1 to 7 for the control unit and the processing unit to timely invoke and execute;
the control unit controls the 3D sensor to acquire a target point cloud in a scene so as to send the target point cloud to the processing unit;
the processing unit filters the ground point cloud out of the target point cloud and merges the point clouds on the same end face of the target pallet into planar point cloud blocks; it then converts the planar point cloud blocks into a 2D plane in the camera coordinate system via the external parameters of the 3D sensor, establishes a coding map, places the 2D plane into it, constructs in the coding map a double-line sliding window with a preset feature-recognition spacing that moves synchronously at a preset step length and performs line-segment fitting on the 2D-plane points encountered along its path, and outputs the recognition result corresponding to the feature template when the lengths of the lines currently fitted by the double-line sliding window are judged to conform to the feature template;
the processing unit further takes the average of the center points of the matched double-line fitted lines as the center-point coordinates of the currently identified target pallet end face, uses those coordinates as the x, y and z parameters of the 6D pose, sets the roll and pitch angles to 0, and calculates the yaw parameter value [formula rendered only as an equation image in the published text] to obtain the 6D pose of the target pallet.
CN202210750713.4A 2022-06-28 2022-06-28 Pallet identification and positioning method and system based on 3D sensor Pending CN115113623A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210750713.4A CN115113623A (en) 2022-06-28 2022-06-28 Pallet identification and positioning method and system based on 3D sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210750713.4A CN115113623A (en) 2022-06-28 2022-06-28 Pallet identification and positioning method and system based on 3D sensor

Publications (1)

Publication Number Publication Date
CN115113623A (en) 2022-09-27

Family

ID=83330604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210750713.4A Pending CN115113623A (en) 2022-06-28 2022-06-28 Pallet identification and positioning method and system based on 3D sensor

Country Status (1)

Country Link
CN (1) CN115113623A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination