CN116342858B - Object detection method, device, electronic equipment and storage medium
- Publication number: CN116342858B
- Application number: CN202310610780.0A
- Authority: CN (China)
- Prior art keywords: point cloud data, target object, plane, cluster
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
- G06V10/34: Smoothing or thinning of the pattern; morphological operations; skeletonisation
- G06V10/762: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
- G06V2201/07: Indexing scheme relating to image or video recognition or understanding; target detection
Abstract
The embodiment of the application discloses an object detection method and apparatus, an electronic device and a storage medium. The method comprises the following steps: acquiring an initial point cloud data set corresponding to a target object; acquiring a projection image corresponding to the initial point cloud data set on a first plane, and screening the initial point cloud data set according to the projection pixel points in the projection image to obtain a first point cloud data set; constructing a first space straight line according to the projection of the first point cloud data set on a second plane, and determining a second point cloud data set according to the straight-line distance between each first point cloud data in the first point cloud data set and the first space straight line; dividing the second point cloud data set to obtain a plurality of groups of second point cloud data; and determining, according to the object information of the target object, the second point cloud data belonging to the target object from the plurality of groups, and determining the pose parameters of the target object according to the second point cloud data belonging to the target object. By implementing the embodiment of the application, the pose of the target object can be determined accurately.
Description
Technical Field
The application relates to the technical field of logistics transportation, in particular to an object detection method, an object detection device, electronic equipment and a storage medium.
Background
With the continued development of trade, the importance of logistics transportation has become increasingly apparent. Retrieving goods with a forklift is an important link in the logistics process. In existing forklift picking schemes, a detection device such as a 2D laser radar or a depth camera is generally used to detect the picking position, and the goods to be retrieved are identified and located according to the generated point cloud data or image information. However, when the goods are stored on a high rack, the detection device on the forklift may shake, so the pose of the goods to be retrieved cannot be determined accurately, which reduces picking efficiency and may damage the goods.
Disclosure of Invention
The embodiment of the application discloses an object detection method, an object detection device, electronic equipment and a storage medium, which can accurately determine the pose of the goods to be retrieved, thereby improving picking efficiency and avoiding damage to the goods during retrieval.
The embodiment of the application discloses an object detection method, which comprises the following steps:
acquiring an initial point cloud data set corresponding to a target object;
acquiring a projection image corresponding to the initial point cloud data set on a first plane, and screening the initial point cloud data set according to projection pixel points in the projection image to obtain a first point cloud data set; wherein the normal vector of the first plane is perpendicular to the ground;
constructing a first space straight line according to the projection of the first point cloud data set on a second plane, and determining a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line; wherein the normal vector of the second plane is parallel to the ground;
dividing the second point cloud data set to obtain a plurality of groups of second point cloud data;
and determining second point cloud data belonging to the target object from the plurality of sets of second point cloud data according to the object information of the target object, and determining pose parameters of the target object according to the second point cloud data belonging to the target object.
A second aspect of an embodiment of the present application discloses an object detection apparatus, the apparatus including:
the data acquisition module is used for acquiring an initial point cloud data set corresponding to the target object;
the first screening module is used for acquiring a projection image corresponding to the initial point cloud data set on a first plane, and screening the initial point cloud data set according to projection pixel points in the projection image to obtain a first point cloud data set; wherein the normal vector of the first plane is perpendicular to the ground;
the second screening module is used for constructing a first space straight line according to the projection of the first point cloud data set on a second plane, and determining a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line; wherein the normal vector of the second plane is parallel to the ground;
the data segmentation module is used for segmenting the second point cloud data set to obtain a plurality of groups of second point cloud data;
the object detection module is used for determining second point cloud data belonging to the target object from the plurality of groups of second point cloud data according to the object information of the target object, and determining pose parameters of the target object according to the second point cloud data belonging to the target object.
The third aspect of the embodiment of the application discloses an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the processor implements any one of the object detection methods disclosed in the embodiments of the application.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements an object detection method disclosed in the embodiments of the present application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
acquiring an initial point cloud data set corresponding to a target object; acquiring a projection image corresponding to the initial point cloud data set on a first plane, and performing a first screening of the initial point cloud data set according to the projection pixels in the projection image to obtain a first point cloud data set; constructing a first space straight line according to the projection of the first point cloud data set on the second plane, and performing a second screening according to the straight-line distance between each first point cloud data and the first space straight line to determine a second point cloud data set; dividing the second point cloud data set to obtain a plurality of groups of second point cloud data, determining the second point cloud data belonging to the target object according to the object information of the target object, and finally determining the pose parameters of the target object. By implementing the embodiment of the application, the first point cloud data set that may belong to the target object can be preliminarily screened out of the initial point cloud data set, from the angle parallel to the ground, according to the projection of the target object on the first plane parallel to the ground; a first space straight line can then be constructed according to the projection of the first point cloud data on a second plane perpendicular to the ground, and the second point cloud data set that may belong to the target object can be further screened, from the angle perpendicular to the ground, according to the distance between the first point cloud data and the constructed first space straight line; finally, whether each group of segmented second point cloud data belongs to the target object can be judged according to the object information of the target object. The accuracy of the point cloud data determined to belong to the target object is thus improved, so that the pose parameters of the target object determined from those point cloud data are more accurate: the pose of the target object is determined accurately, picking efficiency is improved, and damage to the goods during retrieval is avoided.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an application scenario of an object detection method disclosed in an embodiment of the present application;
FIG. 2 is a flow chart of an object detection method according to one embodiment;
FIG. 3 is a schematic diagram of a projection image obtained by projecting an initial point cloud data set on a first plane according to an embodiment of the present application;
FIG. 4 is a flow chart of another object detection method disclosed in one embodiment;
FIG. 5A is a schematic diagram showing the effect of morphological image processing according to an embodiment of the present application;
FIG. 5B is a schematic diagram showing the effect of another morphological image processing according to an embodiment of the present application;
FIG. 6 is a flow chart of yet another object detection method disclosed in one embodiment;
FIG. 7 is a flow chart of yet another object detection method disclosed in one embodiment;
FIG. 8 is a schematic diagram of an object detection device according to an embodiment;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments of the present application and the accompanying drawings are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The embodiment of the application discloses an object detection method, an object detection device, electronic equipment and a storage medium, which can accurately determine the pose of the goods to be retrieved, thereby improving picking efficiency and avoiding damage to the goods during retrieval. These are described in detail below.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of an object detection method according to an embodiment of the present application, which may include an unmanned forklift 10 (also referred to as an automatic transfer robot or an Automated Guided Vehicle, AGV) and a target object 20. In a warehouse logistics scenario, a sensing module may be arranged on the unmanned forklift 10 and used to acquire point cloud data. With the point cloud data, the target object 20 at a target picking position (that is, the position where the goods to be retrieved by the unmanned forklift 10 are placed, which can be determined according to the specific requirements of the working scenario) can be detected and identified, and the point cloud data of the target object 20 at the target picking position can be determined. Information such as the position and attitude of the target object 20 can then be determined from its point cloud data, and the fork arms of the unmanned forklift 10 can be controlled accordingly to accurately retrieve the target object 20, i.e. the goods.
In some embodiments, as shown in fig. 1, the unmanned forklift 10 may include a sensing module that may include a sensing element 11 and a processing element (not specifically shown). The sensing element 11 may include various types of sensors such as a 3D laser radar, and is configured to collect point cloud data near the unmanned forklift 10, especially in front of it, so as to acquire at least one frame of initial point cloud data for the target object 20 at the target picking position. The sensing element 11 may be disposed at the midpoint between the fork arm roots of the unmanned forklift 10 (for example, on the vehicle body or on a fork arm; the former is shown in fig. 1), or at other positions according to actual conditions (for example, the position may be adjusted according to the shape of the unmanned forklift and the storage space environment), which is not particularly limited in the embodiment of the present application. The processing element may be configured to process the initial point cloud data corresponding to the target picking position, so as to finally determine the pose information corresponding to the target object 20 at the target picking position.
The unmanned forklift 10 shown in fig. 1 is a vehicle, but this is only an example. In other embodiments, the unmanned forklift 10 may have other configurations, such as a track robot or a non-vehicle trackless robot, which are not particularly limited in the embodiments of the present application. The sensing module mounted on the unmanned forklift 10 may include various devices or systems comprising the sensing element 11 and the processing element, such as an on-board computer or a point cloud scanning and processing system based on a SoC (System-on-a-Chip) connected with sensors such as a 3D laser radar; the embodiment of the application is not limited in this regard.
It should be noted that the target object 20 may be stored on a shelf, in which case the target picking position may be a position on the shelf. The shelf may comprise a single layer or multiple layers (as shown in fig. 1), and each layer may have a beam for carrying the stacked goods. Before the unmanned forklift 10 picks up the target object 20 from the target picking position, the target object 20 may be placed on the shelf in different poses, and the unmanned forklift 10 needs to detect and identify it, so as to determine the accurate pose information of the target object 20 and the corresponding attitude for retrieving it.
In the related art, the unmanned forklift 10 generally uses a 2D laser radar or a TOF camera as the detection device and processes the acquired point cloud data or picture information algorithmically to obtain the pose information of the goods to be retrieved. This approach often makes it difficult to detect the pose information accurately, especially for goods on a high-level shelf (such as a shelf 11 metres high or more); the insufficient detection precision easily leads to fork arm collisions and even falling goods during retrieval. In the embodiment of the present application, in order to let the unmanned forklift 10 smoothly retrieve the target object 20 from the target picking position and overcome the difficulty of accurately detecting the pose of the target object 20 in the related art, the unmanned forklift 10 may acquire multiple frames of point cloud data collected for the target object 20 at the target picking position, and screen the point cloud data several times according to its projections on different planes, so as to detect the pose information of the target object 20 at the target picking position.
Illustratively, the unmanned forklift 10 may acquire multiple frames of initial point cloud data corresponding to the target object 20 at the target picking position and obtain an initial point cloud data set. The unmanned forklift 10 may then acquire the projection image corresponding to the initial point cloud data set on a first plane parallel to the ground, and screen the first point cloud data set out of the initial point cloud data set according to the projection pixels in that image. On this basis, the unmanned forklift 10 may construct a first space straight line based on the projection of the first point cloud data set on a second plane perpendicular to the ground, and screen out a second point cloud data set according to the straight-line distance between each first point cloud data and the first space straight line; the second point cloud data set is then segmented into a plurality of groups of second point cloud data, and the second point cloud data corresponding to the target object 20 at the target picking position are determined from these groups according to the object information of the target object 20. The pose information corresponding to the target object 20 at the target picking position can thus be determined according to the second point cloud data corresponding to the target object 20; this pose information indicates to the unmanned forklift 10 the target picking position where the target object 20 is located and the attitude of the target object 20, so that the unmanned forklift 10 can retrieve the target object 20 from the target picking position.
It can be seen that, by implementing the object detection method of the present embodiment, according to the projection condition of the target object 20 on the first plane parallel to the ground, the first point cloud data set that may be the target object 20 in the initial point cloud data set is primarily screened from the angle parallel to the ground; then, a first space straight line can be constructed according to the projection of the first point cloud data on a second plane perpendicular to the ground, and a second point cloud data set which is possibly the target object 20 is further screened from the angle perpendicular to the ground according to the distance between the first point cloud data and the constructed first space straight line; finally, whether each group of subdivided second point cloud data is the point cloud data belonging to the target object 20 can be judged according to the object information of the target object 20, the accuracy of the determined point cloud data belonging to the target object 20 is improved, so that the pose parameters of the target object 20 determined according to the point cloud data belonging to the target object 20 are more accurate, the pose of the target object 20 is accurately determined, and the cargo taking efficiency is improved and cargo damage in the taking process is avoided.
Referring to fig. 2, fig. 2 is a schematic flow chart of an object detection method disclosed in an embodiment. The method may be applied to an electronic device, which may be the unmanned forklift 10 in the application scenario shown in fig. 1, or a mobile phone, a computer, a wearable device, another type of vehicle, etc., which are not limited herein. The following description takes an unmanned forklift as the electronic device. As shown in fig. 2, the method may include the following steps:
210. Acquiring an initial point cloud data set corresponding to the target object.
In the embodiment of the application, the unmanned forklift can acquire an initial point cloud data set for the target object through a sensing element such as a 3D laser radar; the target object may include the goods to be retrieved by the unmanned forklift and/or the tray carrying those goods. The initial point cloud data set is the point cloud data set for the target object as directly detected by the 3D laser radar. In a specific implementation, the unmanned forklift can move, according to the indicated target picking position, to the target picking position where the target object is placed, or to a detection position from which it can detect the target object. Once there, the unmanned forklift can collect the initial point cloud data corresponding to the target object through the sensing element, so as to acquire the initial point cloud data set corresponding to the target object at the target picking position.
In some embodiments, the unmanned forklift can acquire multi-frame initial point cloud data corresponding to the target object through the 3D laser radar, and combine the multi-frame initial point cloud data to obtain an initial point cloud data set.
In the embodiment of the application, after the unmanned forklift acquires the multiple frames of initial point cloud data collected by the 3D laser radar, an initial point cloud data set of higher precision can be obtained by merging the frames. For example, multiple sets of matching point cloud data may be determined from the multi-frame initial point cloud data, each set containing the initial point cloud data that correspond to one another across the frames; for each set of matching point cloud data, the initial point cloud data it contains may be combined into one corresponding initial point cloud data retained in the initial point cloud data set. Omission of point cloud data can thus be avoided, improving the accuracy of the finally determined pose information of the target object.
In some embodiments, the unmanned forklift may calculate, according to the coordinate positions of the initial point cloud data included in each set of matching point cloud data, an average coordinate position for that set as the coordinate position of its corresponding initial point cloud data. On this basis, the initial point cloud data corresponding to each set of matching point cloud data constitute the initial point cloud data set.
In some embodiments, the unmanned forklift may also cluster the initial point cloud data corresponding to each set of matching point cloud data, and determine the initial point cloud data corresponding to each set according to the corresponding cluster centre. Optionally, if an outlier appears during clustering, it may be removed from the set of matching point cloud data, and the corresponding initial point cloud data may then be determined from the remaining initial point cloud data. On this basis, the initial point cloud data corresponding to each set of matching point cloud data constitute the initial point cloud data set.
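As a minimal illustration of the averaging variant of this multi-frame merging, the following Python sketch reduces each group of cross-frame matches to one representative point; the function name, the NumPy usage and the assumption that the matches are already available as arrays are illustrative, not part of the disclosed method:

```python
import numpy as np

def merge_matched_frames(matched_groups):
    """Merge multi-frame matches into one initial point cloud data set.

    matched_groups: iterable of (k_i, 3) arrays; each array holds the
    3-D coordinates of the same physical point as seen in k_i frames.
    Each group is reduced to the mean of its coordinates.
    """
    return np.array([np.asarray(group).mean(axis=0) for group in matched_groups])

# usage: two points, each matched across three frames
groups = [np.array([[0.0, 0.0, 1.0], [0.01, 0.0, 1.0], [0.0, 0.01, 1.0]]),
          np.array([[1.0, 2.0, 1.5], [1.02, 2.0, 1.5], [1.0, 2.01, 1.5]])]
initial_set = merge_matched_frames(groups)  # shape (2, 3)
```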
220. Acquiring a projection image corresponding to the initial point cloud data set on a first plane, and screening the initial point cloud data set according to projection pixel points in the projection image to obtain a first point cloud data set; wherein the normal vector of the first plane is perpendicular to the ground.
In the embodiment of the application, the unmanned forklift projects the three-dimensional initial point cloud data in the initial point cloud data set onto the first plane to obtain two-dimensional projection points. The region these projection points occupy on the first plane forms the projection image corresponding to the initial point cloud data set on the first plane, and each projection point on the first plane can serve as a projection pixel point in the projection image. That is, the projection image corresponding to the initial point cloud data set on the first plane is the image obtained by projecting each initial point cloud data in the set onto the first plane, and the projection pixel points are the projection points corresponding to each initial point cloud data on that image.
In the embodiment of the present application, the first plane may be a plane whose normal vector is perpendicular to the ground. Referring to fig. 3, fig. 3 is a schematic diagram illustrating a projection of an initial point cloud data set onto a first plane to obtain a projection image according to an embodiment of the present application. As shown in fig. 3, a spatial rectangular coordinate system established based on a specified reference origin is exemplified, the normal vector of the X-Y plane (the plane defined by the X-axis and the Y-axis) of which is perpendicular to the ground.
Illustratively, taking the X-Y plane as the first plane, the initial point cloud data set (shown as a dotted cube in fig. 3) is projected onto the first plane, so as to obtain the corresponding projection image 30 (shown as a quadrangle on the X-Y plane in fig. 3). In some embodiments, each initial point cloud data in the initial point cloud data set may have a corresponding projected pixel point on the projection plane; in other embodiments, only the initial point cloud data selected by a screening step are projected onto the projection plane (for example, the gray value of the corresponding projection pixel is set to P, P ≠ 0), and the initial point cloud data that are screened out are not projected (for example, the gray value of the corresponding projection pixel is set to 0).
It should be noted that the spatial shape of the initial point cloud data set shown in fig. 3 being a cube is only a simplified example. In an actual working scenario, the initial point cloud data set corresponding to the target object at the target picking position may take many different forms and, depending on the relative position of the unmanned forklift, may also contain different patterns of holes; the embodiment of the present application does not particularly limit this.
In the embodiment of the application, the unmanned forklift can screen the initial point cloud data set according to the projection pixel points in the projection image. Specifically, if several initial point cloud data correspond to the same projection pixel point, the unmanned forklift retains one of them and discards the others; the initial point cloud data retained in this way constitute the first point cloud data set. The first point cloud data set is thus the set of initial point cloud data obtained by screening the initial point cloud data set according to the projection pixel points in the projection image.
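A minimal sketch of this per-pixel screening, assuming the first plane is z = 0 so that a pixel is indexed by the quantised (x, y) coordinates; the pixel size and the choice of keeping the first point that lands in a pixel are illustrative assumptions:

```python
import numpy as np

def screen_by_projection(points, pixel_size=0.01):
    """Keep one point per occupied pixel of the first-plane projection.

    points: (N, 3) array of initial point cloud data. Each point's (x, y)
    coordinates are quantised into a pixel index; when several points
    share a pixel, only the first occurrence is retained.
    """
    pix = np.floor(points[:, :2] / pixel_size).astype(np.int64)
    _, keep = np.unique(pix, axis=0, return_index=True)
    return points[np.sort(keep)]
```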
230. Constructing a first space straight line according to the projection of the first point cloud data set on the second plane, and determining a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line; wherein the normal vector of the second plane is parallel to the ground.
In the embodiment of the application, the unmanned forklift projects the screened three-dimensional first point cloud data set onto the second plane to obtain a plurality of projection points of the first point cloud data set on the second plane, and constructs a first space straight line according to these projection points. For example, the unmanned forklift may fit a straight line to the projection points on the second plane, so as to obtain the first space straight line. The second plane is a plane whose normal vector is parallel to the ground; for example, as shown in fig. 3, taking a spatial rectangular coordinate system established on a specified reference origin, the normal vector of its X-Z plane (the plane determined by the X-axis and the Z-axis) or of its Y-Z plane (the plane determined by the Y-axis and the Z-axis) is parallel to the ground, so the second plane may be the X-Z plane or the Y-Z plane. The first space straight line is the space straight line that best reflects the distribution of the first point cloud data set on the second plane; the quadrangle on the X-Z plane in fig. 3 is the region where the first point cloud data set projects onto the second plane, and the dotted line in the quadrangle is the constructed first space straight line.
In the embodiment of the application, the unmanned forklift calculates the straight-line distance between each first point cloud data in the first point cloud data set and the constructed first space straight line, and screens the first point cloud data set according to these distances to obtain the second point cloud data set. For example, the unmanned forklift can select the first point cloud data whose distance lies within a preset interval, and/or whose distance satisfies a preset condition (greater than or less than a preset value, etc.), and form the selected first point cloud data into the second point cloud data set. The second point cloud data set is thus the set of first point cloud data obtained by screening the first point cloud data set according to the straight-line distance to the first space straight line.
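A sketch of this distance-based screening, computed directly on the 3-D first point cloud data against a line given by a point and a direction; the interval bounds are illustrative assumptions:

```python
import numpy as np

def screen_by_line_distance(points, line_point, line_dir, lo=0.0, hi=0.03):
    """Keep points whose perpendicular distance to the first space
    straight line falls inside the preset interval [lo, hi] (metres).
    """
    d = line_dir / np.linalg.norm(line_dir)
    v = points - line_point
    # remove the component along the line; what is left is perpendicular
    perp = v - np.outer(v @ d, d)
    dist = np.linalg.norm(perp, axis=1)
    return points[(dist >= lo) & (dist <= hi)]
```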
240. Dividing the second point cloud data set to obtain a plurality of groups of second point cloud data.
In the embodiment of the application, the unmanned forklift can segment the second point cloud data set, specifically according to a preset spacing, so as to obtain a plurality of groups of second point cloud data. Each group contains a plurality of second point cloud data, and the distance between the second point cloud data within one group may be smaller than the preset spacing.
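The embodiment does not spell out the segmentation procedure further; one common reading of a preset-spacing split is connected-component clustering, sketched here with scikit-learn's DBSCAN. The spacing value is an assumption:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_by_spacing(points, spacing=0.05):
    """Split points into groups whose internal gaps stay below `spacing`.

    With min_samples=1 every point is a core point, so DBSCAN returns the
    connected components of the 'closer than spacing' graph, matching the
    description that points within one group lie closer than the preset
    spacing.
    """
    labels = DBSCAN(eps=spacing, min_samples=1).fit_predict(points)
    return [points[labels == k] for k in np.unique(labels)]
```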
250. Determining second point cloud data belonging to the target object from the plurality of groups of second point cloud data according to the object information of the target object, and determining pose parameters of the target object according to the second point cloud data belonging to the target object.
In the embodiment of the application, the unmanned forklift determines second point cloud data belonging to the target object from a plurality of groups of second point cloud data according to the object information of the target object, such as the length, width, height and other size information of the target object. For example, the unmanned forklift can determine a three-dimensional area according to object information such as length, width, height and the like of the target object, and determine second point cloud data located in the three-dimensional area as second point cloud data belonging to the target object.
In the embodiment of the application, the unmanned forklift determines the pose parameters of the target object according to the second point cloud data belonging to the target object. For example, the pose parameters may include a position parameter and an attitude parameter. For the position parameter, the unmanned forklift may select one or more central second point cloud data from the second point cloud data belonging to the target object, calculate a three-dimensional space coordinate from the three-dimensional space coordinates of these data, and take it as the position parameter of the target object. For the attitude parameter, the unmanned forklift may fit a straight line to the second point cloud data belonging to the target object to obtain a space straight line, and determine, from the included angles between this space straight line and the X-Y, X-Z and Y-Z planes, the angles between the target object and those planes, so as to obtain the attitude information of the target object.
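A rough sketch of such a pose estimate; using the centroid as the position and the principal direction of the X-Y projection for the yaw angle are simplifying assumptions rather than the exact procedure described above:

```python
import numpy as np

def pose_from_points(obj_points):
    """Estimate a coarse pose from the points attributed to the object.

    Position: centroid of the points (a stand-in for picking one or a
    few central points). Attitude: yaw of the best-fit direction of the
    X-Y projection, obtained from the leading singular vector.
    """
    position = obj_points.mean(axis=0)
    xy = obj_points[:, :2] - position[:2]
    _, _, vt = np.linalg.svd(xy, full_matrices=False)
    yaw = np.arctan2(vt[0, 1], vt[0, 0])  # angle of principal axis in X-Y
    return position, yaw
```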
After the pose information of the target object is determined, the position and attitude of the unmanned forklift, the fork height and the fork angle can be adjusted, so that the forks are controlled to engage and retrieve the target object.
By adopting the embodiment, the first point cloud data set which is possibly the target object in the initial point cloud data set can be initially screened from the angle parallel to the ground according to the projection condition of the target object on the first plane parallel to the ground; then, a first space straight line can be constructed according to the projection of the first point cloud data on a second plane perpendicular to the ground, and a second point cloud data set which is possibly a target object is further screened from the angle perpendicular to the ground according to the distance between the first point cloud data and the constructed first space straight line; finally, whether each group of subdivided second point cloud data is the point cloud data belonging to the target object can be judged according to the object information of the target object, and the accuracy of the determined point cloud data belonging to the target object is improved, so that the pose parameters of the target object determined according to the point cloud data belonging to the target object are more accurate, the pose of the target object is accurately determined, and the cargo taking efficiency is improved and cargo damage in the cargo taking process is avoided.
Referring to fig. 4, fig. 4 is a flow chart illustrating another object detection method according to an embodiment, which can be applied to the unmanned forklift 10 shown in fig. 1. As shown in fig. 4, the object detection method may include the steps of:
401. Acquiring an initial point cloud data set corresponding to the target object.
In some embodiments, after the initial point cloud data set corresponding to the target object is acquired, the unmanned forklift may acquire indication information for the target object, where the indication information includes the pose information of the target object or of the tray carrying it; this pose information includes at least the three-dimensional space coordinates and the inclination angle of the target object or the tray. The unmanned forklift can determine a target region from this pose information and preliminarily filter the initial point cloud data set against it to obtain the preliminarily filtered initial point cloud data. For example, the unmanned forklift can reject initial point cloud data whose three-dimensional space coordinates lie outside the target region. Initial point cloud data with large deviations can thus be eliminated, reducing the computational load of the subsequent detection process and helping to ensure the accuracy of the final detection result.
402. Extracting point cloud data corresponding to each grid from the initial point cloud data set based on the plurality of grids divided on the first plane.
In the embodiment of the application, the unmanned forklift can divide the first plane into a plurality of grids of identical size. The initial point cloud data in the initial point cloud data set are then projected onto the first plane, and the point cloud data corresponding to each grid are extracted from the initial point cloud data set according to which grid each projection point falls into.
Illustratively, the unmanned forklift divides the first plane into a 5 × 5 array of 25 grids, each of size 1 × 1. After the initial point cloud data set is projected onto the first plane, 5 grids contain several initial point cloud data, 7 grids contain exactly one, and 13 grids contain none. For each grid containing several initial point cloud data, the unmanned forklift can select the initial point cloud data with the smallest three-dimensional space coordinates as the representative point cloud data of that grid; for a grid containing only one initial point cloud data, that data serves as the representative point cloud data; grids containing no initial point cloud data have no representative point cloud data.
Preferably, to ensure the quality of the subsequent projection image, the grids on the first plane may be chosen with the same size as the pixels in the projection image, such as 1 × 1.
403. Determining a projection pixel point on the first plane corresponding to the point cloud data of each grid, and obtaining a projection image of the initial point cloud data set on the first plane according to the projection pixel points.
In the embodiment of the application, the unmanned forklift can generate projection pixel points from the divided grids: the gray value of the projection pixel point corresponding to a grid that has representative point cloud data is set to P, where P ≠ 0 (such as 255), and the gray value of the projection pixel point corresponding to a grid without representative point cloud data is set to 0. The projection pixel points with these gray values then form the projection image of the initial point cloud data set on the first plane.
By adopting the embodiment, the generated projection image can better reflect the distribution condition of the initial point cloud data on the first plane, and a reasonable image basis is provided for the subsequent screening of the initial point cloud data set.
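A sketch of steps 402-403 under the assumptions that the first plane is z = 0 and that a fixed image size suffices; keeping a lexicographically smallest point as a grid's representative is one concrete reading of the "smallest coordinates" rule above, and it matters when pixels are later mapped back to points, while the binary image itself only records which cells are occupied:

```python
import numpy as np

def grid_projection_image(points, grid=0.01, img_shape=(512, 512)):
    """Build a binary projection image, one pixel per occupied grid cell."""
    # sort lexicographically by (x, y, z) so np.unique's first-occurrence
    # index picks the lexicographically smallest point of each cell
    order = np.lexsort((points[:, 2], points[:, 1], points[:, 0]))
    pts = points[order]
    cells = np.floor(pts[:, :2] / grid).astype(np.int64)
    cells -= cells.min(axis=0)                  # shift cell indices to start at (0, 0)
    _, first = np.unique(cells, axis=0, return_index=True)
    reps = pts[first]                           # representative point per cell
    ij = cells[first]
    ok = (ij[:, 0] < img_shape[0]) & (ij[:, 1] < img_shape[1])
    img = np.zeros(img_shape, dtype=np.uint8)
    img[ij[ok, 0], ij[ok, 1]] = 255             # occupied cells -> gray value 255
    return img, reps[ok]
```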
In some embodiments, the unmanned forklift performs morphological image processing on a projection image of the initial point cloud dataset on the first plane.
In the embodiment of the present application, the morphological image processing may include an image expansion operation, an image erosion operation, and the like. Referring to fig. 5A, fig. 5A is a schematic diagram illustrating an effect of morphological image processing according to an embodiment of the present application, wherein a left side illustrates an effect of performing an image expansion operation on a projection image (a black box indicates an expanded pixel point), and a right side illustrates an effect of performing an image erosion operation on the projection image (a cross box indicates an eroded pixel point).
In the embodiment of the application, the unmanned forklift can perform a dilation-then-erosion (closing) operation on the projection image, namely first performing the image expansion operation on the projection image and then performing the image erosion operation on the expanded image, as shown in fig. 5B; fig. 5B is a schematic diagram of the effect of another morphological image processing disclosed in an embodiment of the application. In this way, fine gaps (such as small holes) in the projection image can be closed, so that the initial point cloud data set can be cleaned accordingly, point cloud data that may interfere can be removed, and the subsequent detection of the target object is made more accurate.
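This dilation-then-erosion pass corresponds to OpenCV's morphological closing; a one-call sketch, with the kernel size as an illustrative assumption:

```python
import cv2
import numpy as np

def close_projection(proj_img, ksize=3):
    """Seal small holes in the binary projection image.

    MORPH_CLOSE dilates first and then erodes, matching the
    expansion-then-erosion operation described above.
    """
    kernel = np.ones((ksize, ksize), np.uint8)
    return cv2.morphologyEx(proj_img, cv2.MORPH_CLOSE, kernel)
```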
404. Acquiring pixel points of the projection image in the first direction, and clustering the pixel points in the first direction to obtain the pixel points contained in each of a plurality of clusters.
In the embodiment of the application, an unmanned forklift acquires the pixel points of the projection image in the first direction, and clusters the pixel points in the first direction to obtain the pixel points respectively contained in a plurality of clusters. The first direction is a certain arrangement direction of the pixel points in the projection image, such as a row direction or a column direction.
For example, the unmanned forklift may acquire the line pixels of the projection image, that is, the pixels in the line direction, and confirm the line pixels that are continuously adjacent and have a gray value greater than zero as the same cluster, and confirm the line pixels that are not adjacent and/or have a gray value equal to zero as different clusters, so as to obtain the pixels included in the clusters.
405. Determining the width of the area corresponding to each cluster according to the pixel points contained in each cluster.
406. Determining target clusters according to the area width corresponding to each cluster and the object width information of the target object, and determining a target area according to the pixel points contained in each target cluster; the difference between the area width corresponding to a target cluster and the object width information of the target object is smaller than or equal to a difference threshold.
In the embodiment of the application, the unmanned forklift determines the area width corresponding to each cluster according to the pixel points it contains. In a specific implementation, the unmanned forklift can determine the area width from the direction and the size of the pixel points contained in each cluster; or it can determine the width of each cluster in the first direction from the number of its pixel points in the first direction and the pixel size, and take that width as the area width; or, having determined the width of each cluster in the first direction in the same way, it can convert it into the actual width of the point cloud region that the cluster maps to, using the proportional relationship between the point cloud data and the projected pixels, i.e. the real-world size that one pixel corresponds to in the point cloud.
In the embodiment of the application, the unmanned forklift compares the width of the area corresponding to each cluster with the object width information of the target object, and determines the target cluster according to the comparison result.
The unmanned forklift calculates the difference between the area width corresponding to each cluster and the object width information of the target object, compares the difference with a preset difference threshold (preferably 0.07 m), and determines the clusters whose difference is smaller than or equal to the threshold as target clusters, thereby obtaining one or more target clusters.
The unmanned forklift determines one or more target areas according to the pixel points contained in the target clusters. In a specific implementation, the unmanned forklift can determine the target areas directly from the pixel points contained in each target cluster and the pixel size; or it can determine one or more areas in the projection image from those pixel points and the pixel size, and then map these areas to target areas in the point cloud using the proportional relationship between the point cloud data and the projected pixels, i.e. the real-world size that one pixel corresponds to.
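A sketch of steps 404-406 on a single image row: runs of consecutive non-zero pixels are the clusters, and those whose mapped width matches the object width within the 0.07 m threshold are kept. The pixel-to-metres scale is an assumption:

```python
import numpy as np

def find_target_runs(row, pixel_to_metres, obj_width, tol=0.07):
    """Return (start, end) column spans of runs matching the object width."""
    nz = row > 0
    edges = np.diff(nz.astype(np.int8))
    starts = np.flatnonzero(edges == 1) + 1   # 0 -> 1 transition opens a run
    ends = np.flatnonzero(edges == -1) + 1    # 1 -> 0 transition closes it
    if nz[0]:
        starts = np.r_[0, starts]
    if nz[-1]:
        ends = np.r_[ends, nz.size]
    return [(s, e) for s, e in zip(starts, ends)
            if abs((e - s) * pixel_to_metres - obj_width) <= tol]
```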
407. Acquiring the point cloud data located within the target area from the initial point cloud data set to obtain the first point cloud data set.
In the embodiment of the application, the unmanned forklift screens the initial point cloud data set according to the determined one or more target areas. In a specific implementation, the unmanned forklift selects from the initial point cloud data set the initial point cloud data located within the target areas, and forms them into the first point cloud data set.
With this embodiment, the pixel points in the projection image are clustered along the first direction, the clusters matching the target object are determined according to its object width information, one or more target areas are determined from the pixel points of those clusters, and the initial point cloud data set is screened accordingly to obtain the first point cloud data set. The initial point cloud data set is thus screened against the object width information of the target object, so that the resulting first point cloud data set at least satisfies the object width of the target object, providing a reasonable basis for the subsequent attitude detection of the target object.
408. Constructing a first space straight line according to the projection of the first point cloud data set on the second plane, and determining a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line; wherein the normal vector of the second plane is parallel to the ground.
409. Dividing the second point cloud data set to obtain a plurality of groups of second point cloud data.
410. Determining second point cloud data belonging to the target object from the plurality of groups of second point cloud data according to the object information of the target object, and determining pose parameters of the target object according to the second point cloud data belonging to the target object.
Referring to fig. 6, fig. 6 is a flowchart illustrating another object detection method according to an embodiment, which can be applied to the unmanned forklift 10 shown in fig. 1. As shown in fig. 6, the object detection method may include the steps of:
601. Acquiring an initial point cloud data set corresponding to the target object.
602. Acquiring a projection image corresponding to the initial point cloud data set on a first plane, and screening the initial point cloud data set according to projection pixel points in the projection image to obtain a first point cloud data set; wherein the normal vector of the first plane is perpendicular to the ground.
603. Acquiring projection points of first point cloud data in the first point cloud data set on a second plane, and constructing a plurality of space straight lines according to the projection points; wherein the normal vector of the second plane is parallel to the ground.
604. Determining the point cloud data corresponding to a second space straight line according to the distances between the projection points and that line, and counting the number of point cloud data corresponding to it; the second space straight line is any one of the constructed space straight lines, and its corresponding point cloud data are the first point cloud data whose projection points lie at a distance smaller than a first distance threshold from it.
605. Determining the space straight line with the largest number of corresponding point cloud data as the first space straight line.
In the embodiment of the application, the unmanned forklift projects the first point cloud data in the first point cloud data set onto the second plane to obtain the projection points of the first point cloud data set on the second plane; the normal vector of the second plane is parallel to the ground. The unmanned forklift then builds a plurality of space straight lines from the projection points on the second plane. In a specific implementation, the unmanned forklift can construct a space straight line from any two projection points, so that a plurality of space straight lines are constructed.
For any one of the constructed space straight lines, i.e. a second space straight line, the unmanned forklift calculates the distance to that line from each projection point other than the two used to construct it. According to these distances, the first point cloud data whose projection points lie within the first distance threshold of the second space straight line are determined as the point cloud data corresponding to that line, and their number is counted. The second space straight line is any one of the space straight lines; the first distance threshold is the distance threshold between a projection point and the constructed second space straight line.
After counting the number of point cloud data corresponding to each second space straight line, the unmanned forklift determines the second space straight line with the largest number of corresponding point cloud data, and determines that line as the first space straight line.
For example, the unmanned forklift randomly selects two points a and b from the projection points of the first point cloud data set on the second plane, and establishes the space straight line ab through them. The distances from the other projection points to the line ab are then calculated, and the projection points whose distance from ab is smaller than the preset first distance threshold are identified; the first point cloud data corresponding to those projection points are the point cloud data corresponding to the constructed line ab. For example, if only the projection points c and d lie within the first distance threshold of the line ab, then the first point cloud data C and D, whose projections on the second plane are c and d, are the point cloud data corresponding to the line ab.
In the above manner, the point cloud data corresponding to the space straight line constructed from any two projection points are determined. The unmanned forklift can then count the number of point cloud data corresponding to each space straight line and determine the line with the largest count. For example, the space straight line ab corresponds to two point cloud data; ac to one; ad to one; bc to one; bd to zero; and cd to zero. The unmanned forklift then determines the space straight line ab as the first space straight line.
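Steps 603-605 amount to an inlier-counting line search in the spirit of RANSAC; the sketch below samples random point pairs instead of enumerating all pairs, and the threshold and trial count are illustrative assumptions. The pier-width and slope pre-checks described in the next paragraphs would sit just before the distance computation:

```python
import numpy as np

def best_line(proj_2d, dist_thresh=0.01, trials=500, seed=None):
    """Pick the 2-D line with the most supporting projection points.

    proj_2d: (N, 2) projections of the first point cloud data on the
    second plane. Returns the two sample points of the winning line and
    a boolean inlier mask over proj_2d.
    """
    rng = np.random.default_rng(seed)
    best_p0 = best_p1 = None
    best_inliers = np.zeros(len(proj_2d), dtype=bool)
    for _ in range(trials):
        i, j = rng.choice(len(proj_2d), size=2, replace=False)
        p0, p1 = proj_2d[i], proj_2d[j]
        d = p1 - p0
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d /= norm
        v = proj_2d - p0
        dist = np.abs(v[:, 0] * d[1] - v[:, 1] * d[0])  # perpendicular distance
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_p0, best_p1, best_inliers = p0, p1, inliers
    return best_p0, best_p1, best_inliers
```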
In some embodiments, before any two projection points are selected to establish a space straight line, the unmanned forklift can also judge whether the distance between the two projection points is greater than a preset width; if so, it continues to establish the space straight line, and if not, it reselects other projection points for establishing a space straight line.
The second plane is an X-Z plane, and the unmanned forklift can determine whether the distance between the X coordinates of the two projection points is greater than a preset width (such as the pier width of the target object), and if so, continue to construct a space straight line; if not, the other projection points are re-selected for space straight line establishment. Because the object is always provided with the object pier, a plurality of space lines are constructed according to any two projection points on the second plane to screen out the first space line, so as to judge whether the projection of the object on the second plane accords with the characteristics of the object pier width and the like, the preliminary screening is carried out on any two projection points according to the pier width, the establishment of space lines which do not accord with the requirements can be avoided, and the operation amount is reduced.
In some embodiments, after selecting any two projection points and establishing a space straight line, the unmanned forklift can determine whether the slope between the space straight line and the second plane is smaller than or equal to a preset slope; if so, it continues to judge the distances between the other projection points and the space straight line, and if not, it re-selects two other projection points.
Illustratively, the second plane is an X-Z plane, and the unmanned forklift judges the slope between the established space straight line and the X axis or the Z axis. If the slope exceeds a preset slope (for example tan 20°), two other projection points are re-selected for establishing a space straight line; otherwise, the distances between the other projection points and the space straight line are judged so as to confirm the corresponding point cloud data and count their number. In the actual picking process, the slope between the target object and the second plane cannot exceed the preset slope; if it did, the unmanned forklift could not adapt its picking angle to the goods. Therefore, judging in advance whether the slope of a candidate space straight line stays within the preset slope decides whether the subsequent processing is necessary, which can effectively reduce the amount of computation.
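A small sketch of these two pre-checks is given below; the pier width and the tan 20° bound follow the examples above, but the concrete values and names are assumptions.

```python
# Sketch of the pair-width and slope pre-checks on the X-Z plane.
import math

PIER_WIDTH = 0.10                        # preset width, assumed in metres
MAX_SLOPE = math.tan(math.radians(20))   # preset slope bound (tan 20°)

def pair_passes_width_check(a, b):
    """Only build a line if the X spacing of the pair exceeds the pier width."""
    return abs(a[0] - b[0]) > PIER_WIDTH

def line_passes_slope_check(a, b):
    """Reject candidate lines steeper than the preset slope w.r.t. the X axis."""
    dx, dz = b[0] - a[0], b[1] - a[1]
    if dx == 0:
        return False  # vertical on the X-Z plane: slope exceeds any bound
    return abs(dz / dx) <= MAX_SLOPE
```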
606. Determining a third plane according to the first space straight line, calculating the distance between each first point cloud data and the third plane, and selecting first point cloud data with the distance smaller than or equal to a second distance threshold value to obtain a second point cloud data set; the third plane is parallel to the second plane, and the first space straight line is located in the third plane.
In the embodiment of the application, after the unmanned forklift determines the first space straight line, the third plane can be determined according to the first space straight line. In specific implementation, the unmanned forklift can determine a plurality of planes containing the first space straight line according to the first space straight line, and select a plane parallel to the second plane from the planes as a third plane.
The unmanned forklift calculates the distance between each first point cloud data in the first point cloud data set and the third plane, and compares each distance with the second distance threshold. The first point cloud data whose distance to the third plane is smaller than or equal to the second distance threshold are screened out of the first point cloud data set and formed into a second point cloud data set.
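As a sketch, this screening reduces to a point-to-plane distance test. The representation below assumes 3D points in numpy arrays and, as a further assumption, takes the second plane to be the X-Z plane so that its normal points along the Y axis.

```python
# Sketch: keep first point cloud data within the second distance threshold
# of the plane through line_point with the given (second plane) normal.
import numpy as np

def screen_by_plane(points, line_point, normal, second_distance_threshold):
    n = normal / np.linalg.norm(normal)
    distances = np.abs((points - line_point) @ n)   # per-point plane distance
    return points[distances <= second_distance_threshold]

cloud = np.array([[0.1, 0.02, 0.3], [0.2, 0.50, 0.1], [0.3, -0.01, 0.2]])
kept = screen_by_plane(cloud,
                       line_point=np.array([0.0, 0.0, 0.0]),  # assumed point on the first space straight line
                       normal=np.array([0.0, 1.0, 0.0]),
                       second_distance_threshold=0.05)
print(kept)  # the two points within 5 cm of the plane y = 0
```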
By adopting the embodiment, a suitable first space straight line is constructed according to the distribution of the projection points of the first point cloud data set on the second plane, and a third plane parallel to the second plane is determined from the first space straight line, so that the first point cloud data set is further screened to obtain a second point cloud data set whose distribution on the second plane is consistent with the distribution of the target object on the second plane. This can improve the accuracy of the screened point cloud data and thus the accuracy of the subsequent identification of the target object.
607. And dividing the second point cloud data set to obtain a plurality of groups of second point cloud data.
608. And determining second point cloud data belonging to the target object from a plurality of groups of second point cloud data according to the object information of the target object, and determining pose parameters of the target object according to the second point cloud data belonging to the target object.
Referring to fig. 7, fig. 7 is a flow chart illustrating a further object detection method according to an embodiment, which can be applied to the unmanned forklift 10 shown in fig. 1. As shown in fig. 7, the object detection method may include the steps of:
701. Acquiring an initial point cloud data set corresponding to the target object.
702. Acquiring a projection image corresponding to the initial point cloud data set on a first plane, and screening the initial point cloud data set according to projection pixel points in the projection image to obtain a first point cloud data set; wherein the normal vector of the first plane is perpendicular to the ground.
703. Constructing a first space straight line according to the projection of the first point cloud data set on the second plane, and determining a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line; wherein the normal vector of the second plane is parallel to the ground.
704. And acquiring a plurality of coordinate intervals which are divided according to preset intervals on the second plane.
705. And determining the second point cloud data with the coordinates on the second plane in the same coordinate interval as a group of second point cloud data according to the coordinates of each second point cloud data in the second point cloud data set on the second plane.
In the embodiment of the application, after the second point cloud data set is screened out, the unmanned forklift can divide the second point cloud data in the second point cloud data set. In specific implementation, the unmanned forklift may divide the second plane into a plurality of coordinate intervals according to a preset interval, for example, divide the second plane into a plurality of coordinate intervals according to a preset interval of 1 cm.
The unmanned forklift then acquires the coordinates of each second point cloud data in the second point cloud data set on the second plane and determines which of the divided coordinate intervals those coordinates fall into. After determining the coordinate interval in which each second point cloud data lies, it determines the second point cloud data located in the same coordinate interval as the same group of second point cloud data, thereby dividing the second point cloud data in the second point cloud data set into groups.
Illustratively, the second point cloud data set includes 5 second point cloud data a-e. The second plane is an X-Z plane, divided into a plurality of coordinate intervals at a preset interval of 1 cm along the X axis. The coordinates of the second point cloud data a on the second plane are (0.5, 5), those of b are (9.8, 4), those of c are (10, 10), those of d are (12.5, 5), and those of e are (12.8, 5). According to the coordinates of the 5 second point cloud data on the second plane, it can be determined that the second point cloud data a forms a group by itself, the second point cloud data b and c form the same group, and the second point cloud data d and e form the same group.
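A minimal grouping sketch follows, under the assumption of integer-aligned 1 cm bins along X. Note that with this particular alignment the points at 9.8 and 10.0 would land in adjacent bins, so the grouping in the worked example above implies differently aligned interval boundaries; the alignment of the intervals is an implementation choice the description leaves open.

```python
# Sketch: group (x, z) points whose X coordinates fall in the same interval.
from collections import defaultdict

def group_by_interval(points_xz, interval=1.0):
    groups = defaultdict(list)
    for x, z in points_xz:
        groups[int(x // interval)].append((x, z))  # bin index along X
    return list(groups.values())

pts = [(0.5, 5), (9.8, 4), (10.0, 10), (12.5, 5), (12.8, 5)]
for g in group_by_interval(pts):
    print(g)   # [(0.5, 5)], [(9.8, 4)], [(10.0, 10)], [(12.5, 5), (12.8, 5)]
```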
706. Clustering the second point cloud data according to a preset cluster radius, and determining the second point cloud data contained in a plurality of clusters and the cluster width corresponding to each cluster; the preset cluster radius comprises one or more coordinate intervals.
707. And determining a width threshold value and a spacing threshold value according to the object information of the target object.
708. And comparing the cluster width corresponding to each cluster with a width threshold to obtain a first comparison result, and comparing the distance between each cluster and the adjacent cluster with a spacing threshold to obtain a second comparison result.
In the embodiment of the application, the unmanned forklift can perform clustering processing on the second point cloud data according to the preset clustering radius, determine the second point cloud data contained in a plurality of clusters, and determine the clustering width corresponding to each cluster according to the coordinates of the second point cloud data contained in each cluster. The preset clustering radius comprises one or more divided coordinate intervals.
The unmanned forklift determines a width threshold and a spacing threshold according to the object information of the target object, and compares the cluster width corresponding to each cluster with the width threshold to obtain a first comparison result. Meanwhile, the unmanned forklift can determine the adjacent clusters of each cluster according to the coordinates of the second point cloud data contained in each cluster, and compare the distance between each cluster and its adjacent clusters with the spacing threshold to obtain a second comparison result.
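As a rough illustration, if the clustering is simplified to one dimension along the X axis of the second plane (a single-linkage pass with the preset cluster radius as the gap bound), the clusters and the statistics required by step 708 could be computed as follows; this 1D simplification and all names are assumptions rather than the prescribed implementation.

```python
# 1D sketch: points closer than the preset cluster radius along X join the
# same cluster; width and neighbour spacing follow the description above.
def cluster_1d(xs, radius):
    clusters, current = [], []
    for x in sorted(xs):
        if current and x - current[-1] > radius:
            clusters.append(current)   # gap exceeds the radius: close cluster
            current = []
        current.append(x)
    if current:
        clusters.append(current)
    return clusters

def cluster_stats(clusters):
    """Cluster width = largest spread inside a cluster; spacing = distance
    between the centers of adjacent clusters."""
    widths = [c[-1] - c[0] for c in clusters]
    centers = [(c[0] + c[-1]) / 2 for c in clusters]
    spacings = [b - a for a, b in zip(centers, centers[1:])]
    return widths, centers, spacings

clusters = cluster_1d([0.00, 0.01, 0.02, 0.30, 0.31, 0.32], radius=0.05)
print(cluster_stats(clusters))  # widths [0.02, 0.02], spacing approx. [0.30]
```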
709. And determining second point cloud data belonging to the target object according to the first comparison result and the second comparison result, and determining pose parameters of the target object according to the second point cloud data belonging to the target object.
In the embodiment of the application, the unmanned forklift judges each group of second point cloud data obtained by dividing the second point cloud data set according to the first comparison result and the second comparison result, thereby determining one or more groups of second point cloud data belonging to the target object, and determines the pose parameters of the target object according to the coordinates of the second point cloud data belonging to the target object.
By adopting the embodiment, the second point cloud data set is first divided into groups according to the coordinates of the second point cloud data on the second plane, clustering is then carried out on the basis of the groups, and the second point cloud data contained in each cluster are compared and screened against the width threshold and the spacing threshold determined from the object information of the target object, so as to obtain the second point cloud data belonging to the target object. Through this grouping, a reasonable number of point cloud data can be processed as a whole, which reduces the amount of computation; and screening the second point cloud data contained in each cluster against the width threshold and the spacing threshold ensures that the retained second point cloud data accord with the object information of the target object, improving the accuracy of the point cloud data screening and thus the accuracy of the subsequent identification of the target object.
In some embodiments, the step of comparing the cluster width corresponding to each cluster with the width threshold to obtain a first comparison result, and comparing the distance between each cluster and the adjacent cluster with the spacing threshold to obtain a second comparison result, may include the following steps:
comparing the cluster width corresponding to each cluster with a width threshold, and determining the cluster with the corresponding cluster width smaller than or equal to the width threshold as a first target cluster;
comparing the distance between each cluster and the adjacent cluster with the spacing threshold, and determining the cluster with the distance smaller than or equal to the spacing threshold as a second target cluster;
and the step of determining the second point cloud data belonging to the target object according to the first comparison result and the second comparison result may include the following step:
determining the clusters that belong to both the first target clusters and the second target clusters as target object clusters, and determining the second point cloud data contained in the target object clusters as the second point cloud data belonging to the target object.
In the embodiment of the application, after determining the cluster width corresponding to each cluster, and determining the width threshold and the spacing threshold according to the object information of the target object, the unmanned forklift can compare the cluster width corresponding to each cluster with the width threshold, and determine the clusters whose cluster width is smaller than or equal to the width threshold as first target clusters. In addition, the unmanned forklift can compare the distance between each cluster and its adjacent clusters with the spacing threshold, and determine the clusters whose distance is smaller than or equal to the spacing threshold as second target clusters.
The unmanned forklift determines the clusters that belong to both the first target clusters and the second target clusters as target object clusters, and determines the second point cloud data contained in the target object clusters as the second point cloud data belonging to the target object.
Illustratively, the width threshold determined by the unmanned forklift according to the object information of the target object is the pier width threshold (the actual pier width) of the target object, and the spacing threshold corresponds to the maximum allowable error of the clamping plate (for example, ±0.1 cm). The unmanned forklift determines second point cloud data within 5 cm of one another as second point cloud data of the same cluster; that is, with grouping at the preset interval of 1 cm, one cluster comprises five groups of second point cloud data. The unmanned forklift determines the coordinates of the center second point cloud data of each cluster as the coordinates of that cluster, and determines the adjacent clusters of each cluster, and the distance between each cluster and its adjacent clusters, according to the cluster coordinates. It also determines the distance between any two second point cloud data contained in each cluster, and determines the maximum such distance as the cluster width of that cluster. The unmanned forklift then compares the cluster width corresponding to each cluster with the width threshold, and determines the clusters whose cluster width is smaller than or equal to the width threshold as first target clusters; it compares the distance between each cluster and its adjacent clusters with the spacing threshold, and determines the clusters whose distance is smaller than or equal to the spacing threshold as second target clusters. Finally, the unmanned forklift determines the clusters that belong to both the first target clusters and the second target clusters as target object clusters, and determines the second point cloud data contained in the target object clusters as the second point cloud data belonging to the target object.
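Continuing the 1D sketch above, the two comparisons and their intersection might be expressed as below; the width and spacing thresholds would come from the target object's information (pier width, clamping-plate tolerance) and are placeholders here.

```python
# Sketch: keep clusters that pass both the width and the spacing comparison.
def screen_clusters(widths, centers, width_threshold, spacing_threshold):
    first_target = {i for i, w in enumerate(widths) if w <= width_threshold}
    second_target = set()
    for i, c in enumerate(centers):
        gaps = [abs(c - other) for j, other in enumerate(centers) if j != i]
        if gaps and min(gaps) <= spacing_threshold:
            second_target.add(i)       # close enough to an adjacent cluster
    return sorted(first_target & second_target)   # target object clusters

print(screen_clusters([0.08, 0.08, 0.40], [0.1, 0.5, 1.2],
                      width_threshold=0.10, spacing_threshold=0.45))  # [0, 1]
```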
By adopting the embodiment, each cluster can be treated as one pier of the target object: judging whether the cluster width accords with the pier width of the target object, and whether the distance between the cluster and its adjacent clusters accords with the maximum allowable error of the clamping plate, screens out the second point cloud data belonging to the target object, and can improve the accuracy of the determined second point cloud data belonging to the target object.
In some embodiments, after the unmanned forklift obtains a plurality of clusters through the clustering process, it can compare the number of clusters with the number of piers of the target object; if the number of clusters is greater than or equal to the number of piers of the target object, the unmanned forklift executes the subsequent steps. Because the target object from which the unmanned forklift picks up goods generally comprises a plurality of piers, if the number of clusters obtained by the clustering process is less than the number of piers of the target object, the clustering result can be considered not to accord with the target object, so the subsequent process need not be executed, avoiding unnecessary computation.
In some embodiments, the target object comprises a tray comprising a plurality of piers; the pose parameters include position parameters and yaw angle.
In these embodiments, the step of determining the pose parameters of the target object according to the second point cloud data belonging to the target object may include the following steps:
Determining position parameters of each pier of the tray according to the center point cloud data of each target object cluster; wherein each target object cluster corresponds to one pier of the tray;
determining the maximum plane coordinate and the minimum plane coordinate of the first pier and the maximum plane coordinate and the minimum plane coordinate of the second pier according to the plane coordinate of the second point cloud data contained in each target object cluster on the second plane; the first pier is any pier of the tray, and the second pier is a pier adjacent to the first pier on the tray;
and determining the yaw angle of the tray according to the maximum plane coordinate and the minimum plane coordinate of the first pier and the maximum plane coordinate and the minimum plane coordinate of the second pier.
In the embodiment of the application, the target object comprises a tray, and the tray comprises a plurality of piers; the pose parameters of the target object include the position parameters and the yaw angle of the target object.
The unmanned forklift determines the clusters that belong to both the first target clusters and the second target clusters as target object clusters, and each target object cluster corresponds to one pier of the tray; that is, the number of target object clusters is equal to the number of piers contained in the tray. The unmanned forklift determines the center point cloud data of each target object cluster, and determines the position parameters of each pier of the tray according to the center point cloud data of each target object cluster.
And the unmanned forklift determines the maximum plane coordinate and the minimum plane coordinate of the first pier and the maximum plane coordinate and the minimum plane coordinate of the second pier according to the plane coordinate of the second point cloud data contained in each target object cluster on the second plane. Wherein the first pier is any one pier of the tray, and the second pier is a pier adjacent to the first pier on the tray. And finally, determining the yaw angle of the tray according to the maximum plane coordinate and the minimum plane coordinate of the first pier and the maximum plane coordinate and the minimum plane coordinate of the second pier by the unmanned forklift.
For each target object cluster, the unmanned forklift acquires center point cloud data of the target object cluster, and determines three-dimensional space coordinates of the center point cloud data as three-dimensional space coordinates of piers corresponding to the target object cluster, so that position parameters of piers corresponding to the target object cluster are obtained.
In addition, for each pair of adjacent piers, the unmanned forklift determines the maximum plane coordinates (x1_max, z1_max) and minimum plane coordinates (x1_min, z1_min) of the first pier, and the maximum plane coordinates (x2_max, z2_max) and minimum plane coordinates (x2_min, z2_min) of the second pier, according to the plane coordinates on the second plane of the second point cloud data contained in the corresponding target object clusters. The ratio d1 = (z1_max - z1_min)/(x1_max - x1_min) of the first pier and the ratio d2 = (z2_max - z2_min)/(x2_max - x2_min) of the second pier are first found, and then the difference between d1 and d2 is calculated and determined as the yaw angle yaw of the target object.
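A direct transcription of this formula might look as follows; the per-pier (min, max) coordinate tuples are an assumed input format, and the difference d1 - d2 of the two spread ratios is taken as the yaw angle exactly as described above.

```python
# Sketch of the yaw computation from two adjacent piers' coordinate extremes.
def pier_ratio(x_min, x_max, z_min, z_max):
    return (z_max - z_min) / (x_max - x_min)

def tray_yaw(first_pier, second_pier):
    """Each argument is an (x_min, x_max, z_min, z_max) tuple for one pier."""
    d1 = pier_ratio(*first_pier)
    d2 = pier_ratio(*second_pier)
    return d1 - d2   # the difference is determined as the yaw angle `yaw`

# Two identically oriented piers give zero yaw:
print(tray_yaw((0.0, 0.1, 0.0, 0.02), (0.5, 0.6, 0.0, 0.02)))  # 0.0
```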
With the above embodiment, since the goods picked up by the unmanned forklift are generally placed on a tray, and the tray generally includes a plurality of piers, the pose information of the target object can be accurately represented by calculating the position parameters and yaw angle of the piers, and adopting the position parameter and yaw angle determination method of the above embodiment can improve the accuracy of the determined position parameters and yaw angle.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an object detection device according to an embodiment, which can be applied to the unmanned forklift 10 in the application scenario shown in fig. 1. The object detection device 800 may include: the data acquisition module 810, the first screening module 820, the second screening module 830, the data segmentation module 840, and the object detection module 850.
The data acquisition module 810 is configured to acquire an initial point cloud data set corresponding to a target object;
the first screening module 820 is configured to obtain a projection image corresponding to the initial point cloud data set on the first plane, and screen the initial point cloud data set according to projection pixels in the projection image to obtain a first point cloud data set; wherein the normal vector of the first plane is perpendicular to the ground;
The second screening module 830 is configured to construct a first space straight line according to the projection of the first point cloud data set on the second plane, and determine a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line; wherein the normal vector of the second plane is parallel to the ground;
the data segmentation module 840 is configured to segment the second point cloud data set to obtain multiple sets of second point cloud data;
the object detection module 850 is configured to determine second point cloud data belonging to the target object from the plurality of sets of second point cloud data according to object information of the target object, and determine pose parameters of the target object according to the second point cloud data belonging to the target object.
In some embodiments, the first screening module 820 is further configured to:
extracting point cloud data corresponding to each grid from an initial point cloud data set based on a plurality of grids divided on a first plane;
and determining a projection pixel point corresponding to the point cloud data corresponding to each grid on the first plane, and obtaining a projection image of the initial point cloud data set on the first plane according to the projection pixel point.
In some embodiments, the first screening module 820 is further configured to:
Acquiring pixel points of a projection image in a first direction, and clustering the pixel points in the first direction to obtain pixel points respectively contained in a plurality of clusters;
determining the width of a region corresponding to each cluster according to the pixel points contained in each cluster;
determining target cluster types according to the area width corresponding to each cluster type and the object width information of the target object, and determining a target area according to pixel points contained in the target cluster types; the difference value between the area width corresponding to the target cluster and the object width information of the target object is smaller than or equal to a difference value threshold;
and acquiring point cloud data in the target area in the initial point cloud data set to obtain a first point cloud data set.
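For orientation, a condensed, hypothetical sketch of this module's flow is given below, assuming the first plane is the X-Y plane with 1 cm grid cells, that the "first direction" is the image column direction, and numpy arrays of 3D points; every name and value is an illustrative assumption.

```python
# Condensed sketch: occupancy image on the first plane, cluster occupied
# columns into regions, keep points in regions matching the object width.
import numpy as np

def screen_by_region_width(points, object_width, cell=0.01, diff_threshold=0.005):
    cols = sorted({int(c) for c in points[:, 0] // cell})   # occupied columns
    regions, current = [], [cols[0]]
    for c in cols[1:]:
        if c - current[-1] > 1:         # gap between occupied columns
            regions.append(current)
            current = []
        current.append(c)
    regions.append(current)
    kept = []
    for r in regions:
        width = (r[-1] - r[0] + 1) * cell
        if abs(width - object_width) <= diff_threshold:     # width matches
            lo, hi = r[0] * cell, (r[-1] + 1) * cell
            kept.append(points[(points[:, 0] >= lo) & (points[:, 0] < hi)])
    return np.vstack(kept) if kept else np.empty((0, points.shape[1]))

pts = np.array([[0.005, 0.0, 0.5], [0.015, 0.0, 0.5], [0.350, 0.0, 0.5]])
print(screen_by_region_width(pts, object_width=0.02))  # keeps the first two rows
```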
In some embodiments, the second screening module 830 is further configured to:
acquiring projection points of first point cloud data in the first point cloud data set on a second plane, and constructing a plurality of space straight lines according to the projection points;
according to the distance between each projection point and a second space straight line, determining the point cloud data corresponding to the second space straight line, and counting the number of the point cloud data corresponding to the second space straight line; the second space straight line is any one of the space straight lines; the point cloud data corresponding to the second space straight line are the first point cloud data corresponding to projection points whose distance to the second space straight line is smaller than a first distance threshold;
Determining a space straight line with the largest corresponding point cloud data amount as a first space straight line;
determining a third plane according to the first space straight line, calculating the distance between each first point cloud data and the third plane, and selecting first point cloud data with the distance smaller than or equal to a second distance threshold value to obtain a second point cloud data set; the third plane is parallel to the second plane, and the first space straight line is located in the third plane.
In some embodiments, the data segmentation module 840 is further configured to:
acquiring a plurality of coordinate intervals on a second plane, wherein the coordinate intervals are divided according to preset intervals;
according to the coordinates of each second point cloud data in the second point cloud data set on the second plane, determining the second point cloud data with the coordinates on the second plane in the same coordinate interval as a group of second point cloud data;
the object detection module 850 is further configured to:
clustering the second point cloud data according to a preset cluster radius, and determining the second point cloud data contained in a plurality of clusters and the cluster width corresponding to each cluster; the preset clustering radius comprises one or more coordinate intervals;
determining a width threshold and a spacing threshold according to object information of a target object;
comparing the cluster width corresponding to each cluster with the width threshold to obtain a first comparison result, and comparing the distance between each cluster and the adjacent cluster with the spacing threshold to obtain a second comparison result;
and determining second point cloud data belonging to the target object according to the first comparison result and the second comparison result.
In some embodiments, object detection module 850 is further to:
comparing the cluster width corresponding to each cluster with a width threshold, and determining the cluster with the corresponding cluster width smaller than or equal to the width threshold as a first target cluster;
comparing the distance between each cluster and the adjacent cluster with the spacing threshold, and determining the cluster with the distance smaller than or equal to the spacing threshold as a second target cluster;
wherein the determining of the second point cloud data belonging to the target object according to the first comparison result and the second comparison result includes:
determining the clusters that belong to both the first target clusters and the second target clusters as target object clusters, and determining the second point cloud data contained in the target object clusters as the second point cloud data belonging to the target object.
In some embodiments, object detection module 850 is further to:
Determining position parameters of each pier of the tray according to the center point cloud data of each target object cluster; wherein each target object cluster corresponds to one pier of the tray;
determining the maximum plane coordinate and the minimum plane coordinate of the first pier and the maximum plane coordinate and the minimum plane coordinate of the second pier according to the plane coordinate of the second point cloud data contained in each target object cluster on the second plane; the first pier is any pier of the tray, and the second pier is a pier adjacent to the first pier on the tray;
and determining the yaw angle of the tray according to the maximum plane coordinate and the minimum plane coordinate of the first pier and the maximum plane coordinate and the minimum plane coordinate of the second pier.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment. As shown in fig. 9, the electronic device 900 may include:
a memory 910 in which executable program code is stored.
A processor 920 coupled to the memory 910.
Wherein the processor 920 invokes executable program code stored in the memory 910 to perform any of the object detection methods disclosed in the embodiments of the present application.
It should be noted that, the electronic device shown in fig. 9 may further include components not shown, such as a power supply, an input key, a camera, a speaker, a screen, an RF circuit, a Wi-Fi module, a bluetooth module, etc., which are not described in detail in this embodiment.
The embodiment of the application discloses a computer readable storage medium storing a computer program, wherein the computer program enables a computer to execute any one of the object detection methods disclosed in the embodiment of the application.
Embodiments of the present application disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform any of the object detection methods disclosed in the embodiments of the present application.
The object detection method, device, electronic equipment and storage medium disclosed in the embodiments of the application have been described in detail above; specific examples have been applied herein to illustrate the principles and embodiments of the application, and the above description of the embodiments is only intended to help understand the method and core idea of the application. Meanwhile, those skilled in the art may make variations to the specific embodiments and application scope according to the ideas of the application; in view of the above, the content of this description should not be construed as limiting the application.
Claims (10)
1. An object detection method, the method comprising:
Acquiring an initial point cloud data set corresponding to a target object;
obtaining a projection image corresponding to the initial point cloud data set on a first plane, and screening the initial point cloud data set according to projection pixel points in the projection image to obtain a first point cloud data set; wherein the normal vector of the first plane is perpendicular to the ground;
constructing a first space straight line according to the projection of the first point cloud data set on a second plane, and determining a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line; wherein the normal vector of the second plane is parallel to the ground;
dividing the second point cloud data set to obtain a plurality of groups of second point cloud data;
determining second point cloud data belonging to the target object from the plurality of groups of second point cloud data according to the object information of the target object, and determining pose parameters of the target object according to the second point cloud data belonging to the target object; wherein the pose parameters comprise position parameters and attitude parameters;
the constructing a first spatial straight line according to the projection of the first point cloud data set on a second plane comprises:
Projecting the first point cloud data set onto a second plane to obtain a plurality of projection points of the first point cloud data set on the second plane, and performing straight line fitting on the projection points on the second plane to obtain a first space straight line; the first space straight line is the space straight line that best reflects the distribution of the first point cloud data set on the second plane;
the determining the pose parameter of the target object according to the second point cloud data belonging to the target object comprises the following steps:
determining one or more second point cloud data positioned in the center according to the second point cloud data of the target object, calculating a three-dimensional space coordinate according to the three-dimensional space coordinate of the one or more second point cloud data, and determining the three-dimensional space coordinate as a position parameter of the target object;
and performing linear fitting on the second point cloud data of the target object to obtain a space straight line, and determining angles between the target object and three different planes of the three-dimensional coordinate system according to included angles between the space straight line and the three different planes of the three-dimensional coordinate system respectively to obtain the attitude information of the target object.
2. The method of claim 1, wherein the acquiring a corresponding projection image of the initial point cloud dataset on a first plane comprises:
extracting point cloud data corresponding to each grid from the initial point cloud data set based on a plurality of grids divided on a first plane;
and determining a projection pixel point corresponding to the point cloud data corresponding to each grid on the first plane, and obtaining a projection image of the initial point cloud data set on the first plane according to the projection pixel point.
3. The method of claim 1, wherein the screening the initial point cloud data set according to the projected pixels in the projected image to obtain a first point cloud data set comprises:
acquiring pixel points of the projection image in a first direction, and clustering the pixel points in the first direction to obtain pixel points respectively contained in a plurality of clusters;
determining the width of a region corresponding to each cluster according to the pixel points contained in each cluster;
determining a target cluster class according to the area width corresponding to each cluster class and the object width information of the target object, and determining a target area according to the pixel points contained in the target cluster class; the difference value between the area width corresponding to the target cluster and the object width information of the target object is smaller than or equal to a difference value threshold;
And acquiring point cloud data in the target area in the initial point cloud data set to obtain a first point cloud data set.
4. The method of claim 1, wherein constructing a first space straight line according to the projection of the first point cloud data set on a second plane, and determining a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line, comprises:
acquiring projection points of first point cloud data in the first point cloud data set on a second plane, and constructing a plurality of space straight lines according to the projection points;
according to the distance between each projection point and a second space straight line, determining the point cloud data corresponding to the second space straight line, and counting the number of the point cloud data corresponding to the second space straight line; the second space straight line is any one of the space straight lines; the point cloud data corresponding to the second space straight line are the first point cloud data corresponding to projection points whose distance to the second space straight line is smaller than a first distance threshold;
determining a space straight line with the largest corresponding point cloud data amount as a first space straight line;
Determining a third plane according to the first space straight line, calculating the distance between each first point cloud data and the third plane, and selecting first point cloud data with the distance smaller than or equal to a second distance threshold value to obtain a second point cloud data set; wherein the third plane is parallel to the second plane, and the first space straight line is located in the third plane.
5. The method of claim 1, wherein the dividing the second point cloud data set to obtain a plurality of groups of second point cloud data comprises:
acquiring a plurality of coordinate intervals on a second plane, wherein the coordinate intervals are divided according to preset intervals;
according to the coordinates of each second point cloud data in the second point cloud data set on the second plane, determining second point cloud data with coordinates on the second plane in the same coordinate interval as a group of second point cloud data;
the determining, according to the object information of the target object, second point cloud data belonging to the target object from the plurality of sets of second point cloud data includes:
clustering the second point cloud data according to a preset cluster radius, and determining the second point cloud data contained in a plurality of clusters and the cluster width corresponding to each cluster; wherein the preset cluster radius comprises one or more coordinate intervals;
determining a width threshold and a spacing threshold according to the object information of the target object;
comparing the cluster width corresponding to each cluster with the width threshold to obtain a first comparison result, and comparing the distance between each cluster and the adjacent cluster with the spacing threshold to obtain a second comparison result;
and determining second point cloud data belonging to the target object according to the first comparison result and the second comparison result.
6. The method of claim 5, wherein comparing the cluster width corresponding to each cluster with the width threshold to obtain a first comparison result, and comparing the distance between each cluster and an adjacent cluster with the spacing threshold to obtain a second comparison result, comprises:
comparing the cluster width corresponding to each cluster with the width threshold, and determining the cluster with the corresponding cluster width smaller than or equal to the width threshold as a first target cluster;
comparing the distance between each cluster and the adjacent cluster with the spacing threshold, and determining the cluster with the distance smaller than or equal to the spacing threshold as a second target cluster;
The determining the second point cloud data belonging to the target object according to the first comparison result and the second comparison result comprises the following steps:
determining the clusters that belong to both the first target clusters and the second target clusters as target object clusters, and determining the second point cloud data contained in the target object clusters as the second point cloud data belonging to the target object.
7. The method of claim 6, wherein the target object comprises a tray comprising a plurality of piers; the determining the pose parameter of the target object according to the second point cloud data belonging to the target object comprises the following steps:
determining position parameters of each pier of the tray according to the center point cloud data of each target object cluster; wherein each target object cluster corresponds to one pier of the tray;
determining the maximum plane coordinate and the minimum plane coordinate of the first pier and the maximum plane coordinate and the minimum plane coordinate of the second pier according to the plane coordinate of the second point cloud data contained in each target object cluster on the second plane; the first pier is any pier of the tray, and the second pier is a pier adjacent to the first pier on the tray;
And determining the yaw angle of the tray according to the maximum plane coordinates and the minimum plane coordinates of the first pier and the maximum plane coordinates and the minimum plane coordinates of the second pier.
8. An object detection device, the device comprising:
the data acquisition module is used for acquiring an initial point cloud data set corresponding to the target object;
the first screening module is used for acquiring a projection image corresponding to the initial point cloud data set on a first plane, and screening the initial point cloud data set according to projection pixel points in the projection image to obtain a first point cloud data set; wherein the normal vector of the first plane is perpendicular to the ground;
the second screening module is used for constructing a first space straight line according to the projection of the first point cloud data set on a second plane, and determining a second point cloud data set according to the straight line distance between each first point cloud data in the first point cloud data set and the first space straight line; wherein the normal vector of the second plane is parallel to the ground;
the data segmentation module is used for segmenting the second point cloud data set to obtain a plurality of groups of second point cloud data;
The object detection module is used for determining second point cloud data belonging to the target object from the plurality of groups of second point cloud data according to the object information of the target object, and determining pose parameters of the target object according to the second point cloud data belonging to the target object, wherein the pose parameters comprise position parameters and attitude parameters;
the second screening module is specifically configured to project the first point cloud data set onto a second plane to obtain a plurality of projection points of the first point cloud data set on the second plane, and perform straight line fitting on the projection points on the second plane to obtain a first space straight line; the first space straight line is the space straight line that best reflects the distribution of the first point cloud data set on the second plane;
the object detection module is specifically configured to determine one or more second point cloud data located at the center according to the second point cloud data of the target object, calculate a three-dimensional space coordinate according to the three-dimensional space coordinates of the one or more second point cloud data, and determine the three-dimensional space coordinate as a position parameter of the target object;
And performing linear fitting on the second point cloud data of the target object to obtain a space straight line, and determining angles between the target object and three different planes of the three-dimensional coordinate system according to included angles between the space straight line and the three different planes of the three-dimensional coordinate system respectively to obtain the attitude information of the target object.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to implement the method of any of claims 1 to 7.
10. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any one of claims 1 to 7.