CN117710714A - AGV tray identification method and device - Google Patents

AGV tray identification method and device

Info

Publication number
CN117710714A
CN117710714A (application CN202311708081.6A)
Authority
CN
China
Prior art keywords
rgb
point cloud
parameter range
tray
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311708081.6A
Other languages
Chinese (zh)
Inventor
周焜
姚禹
胡运强
耿帅
陈俊杰
王金成
马益超
王子渊
王瀚森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Hangcha Intelligent Technology Co ltd
Hangcha Group Co Ltd
Original Assignee
Zhejiang Hangcha Intelligent Technology Co ltd
Hangcha Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Hangcha Intelligent Technology Co Ltd and Hangcha Group Co Ltd
Priority to CN202311708081.6A
Publication of CN117710714A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an AGV tray identification method and device in the field of object identification. An RGB-D point cloud image of the area where the tray is located is determined; point clouds whose RGB parameters are similar are clustered into RGB point cloud spaces, and the average RGB parameters of each space are determined; spaces whose average RGB parameters are similar are merged into one RGB point cloud space; the number of point clouds contained in each space is determined, and the space whose point cloud count falls within a preset point cloud number range is identified as the RGB point cloud space of the tray. Based on the fact that different objects have different colors, the RGB parameters are used to identify the RGB point cloud space corresponding to each object in the image, from which the RGB point cloud space of the tray is then determined. The tray can therefore be identified accurately, the influence of cargo occlusion, shooting angle and distance is avoided, and the accuracy of tray identification is improved.

Description

AGV tray identification method and device
Technical Field
The invention relates to the field of object identification, in particular to an AGV tray identification method and device.
Background
In the field of automated warehousing and logistics, Automated Guided Vehicles (AGVs) are widely used for cargo handling and warehouse operations. To transport goods efficiently, a camera is usually mounted on the automated guided vehicle: the camera captures an image of the area where the tray is located, and image recognition is used to identify the tray to be carried in the image and thereby locate its actual position.
Because the size and apparent shape of the tray in the image change with the camera's shooting angle and distance, and because the tray structure may be occluded by goods placed on it, the tray image information recognized by the automated guided vehicle may be incomplete or contaminated by noise. As a result, the vehicle may be unable to determine the actual position of the tray and cannot complete the handling task.
Disclosure of Invention
The invention aims to provide an AGV tray identification method and device that can accurately identify a tray, avoid the influence of cargo occlusion, shooting angle and distance, and improve the accuracy of tray identification.
In order to solve the technical problems, the invention provides an AGV tray identification method, which comprises the following steps:
determining an RGB-D point cloud image of an area where the tray is located;
in the RGB-D point cloud image, clustering point clouds with the similarity of RGB parameters larger than a first preset similarity into an RGB point cloud space;
determining average RGB parameters in each RGB point cloud space;
clustering the RGB point cloud spaces with the average RGB parameters having the similarity larger than a second preset similarity into an RGB point cloud space;
determining the number of point clouds contained in each RGB point cloud space;
determining the RGB point cloud space of which the point cloud quantity is within a preset point cloud quantity range as the RGB point cloud space of the tray;
the second preset similarity is smaller than the first preset similarity.
In one aspect, after determining the RGB-D point cloud image of the area where the tray is located, the method further includes:
judging whether the number of the point clouds in the RGB-D point cloud image is larger than a preset number or not;
if not, judging that the tray does not exist;
if yes, a step of clustering the point clouds with the RGB parameters having the similarity larger than the first preset similarity into an RGB point cloud space in the RGB-D point cloud image is entered.
In one aspect, after determining the RGB-D point cloud image of the area where the tray is located, the method further includes:
determining average distances between all point clouds in the RGB-D point cloud image and an automatic guide vehicle;
determining a preset distance corresponding to the average distance;
acquiring all point clouds with the distance smaller than a preset distance from the automatic guide vehicle in the RGB-D point cloud image;
and constructing a new RGB-D point cloud image based on all the acquired point clouds.
In one aspect, in the RGB-D point cloud image, clustering the point clouds with the RGB parameters having the similarity greater than the first preset similarity into an RGB point cloud space includes:
determining a plurality of RGB parameter ranges according to a preset color division rule;
according to the RGB parameters, respectively distributing each point cloud in the RGB-D point cloud image to the corresponding RGB parameter range;
clustering the point clouds in the same RGB parameter range into an RGB point cloud space.
In one aspect, before clustering the point clouds within the same RGB parameter range into one RGB point cloud space, the method further includes:
S21: sorting all the RGB parameter ranges in a preset order;
S22: taking the first RGB parameter range as the current RGB parameter range;
S23: judging whether the current RGB parameter range is the last RGB parameter range; if yes, entering the step of clustering the point clouds in the same RGB parameter range into one RGB point cloud space; if not, entering S24;
S24: determining a representative point cloud of the current RGB parameter range;
S25: respectively determining the RGB difference between the representative point cloud and the RGB parameters of each point cloud in the RGB parameter range next to the current RGB parameter range;
S26: moving the point clouds in the next RGB parameter range whose RGB difference is smaller than the preset difference into the current RGB parameter range;
S27: taking the RGB parameter range next to the current RGB parameter range as the new current RGB parameter range, and returning to S23.
In one aspect, the method further comprises:
when all the point clouds in the current RGB parameter range have been moved into the RGB parameter range preceding the current RGB parameter range, taking the representative point cloud of that preceding RGB parameter range as the representative point cloud of the current RGB parameter range;
moving the point clouds in the next RGB parameter range whose RGB difference is smaller than the preset difference into the current RGB parameter range then includes:
moving the point clouds in the RGB parameter range next to the current RGB parameter range whose RGB difference is smaller than the preset difference into the RGB parameter range preceding the current RGB parameter range.
In one aspect, after determining the RGB point cloud space of the point cloud number within the preset point cloud number range as the RGB point cloud space of the tray, the method further includes:
constructing a minimum circumscribed rectangle of an RGB point cloud space of the tray;
determining the aspect ratio of the minimum circumscribed rectangle;
and determining the type of the tray according to the aspect ratio of the minimum circumscribed rectangle.
The present application also provides an AGV tray recognition device, including:
a memory for storing a computer program;
and a processor for implementing the steps of the above AGV tray identification method when executing the computer program.
The AGV tray identification method and device have the following beneficial effects: an RGB-D point cloud image of the area where the tray is located is determined; point clouds whose RGB parameters have a similarity greater than the first preset similarity are clustered into RGB point cloud spaces, and the average RGB parameters of each space are determined; spaces whose average RGB parameters have a similarity greater than the second preset similarity are merged into one RGB point cloud space; the number of point clouds contained in each space is determined, and the space whose point cloud count falls within the preset point cloud number range is identified as the RGB point cloud space of the tray. Based on the fact that different objects have different colors, the RGB parameters are used to identify the RGB point cloud space corresponding to each object in the image; the space whose point cloud count lies within the preset range for the tray is then picked out and identified as the tray. By combining the point cloud with the RGB image, the tray can be identified accurately, the influence of cargo occlusion, shooting angle and distance is avoided, and the accuracy of tray identification is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the prior art and the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an AGV pallet identification method provided by the present application;
fig. 2 is a flowchart of a clustering method of RGB point cloud space provided in the present application;
FIG. 3 is a flowchart of another AGV pallet identification method provided herein;
fig. 4 is a schematic structural view of an AGV tray recognition device provided in the present application.
Detailed Description
The invention provides an AGV tray identification method and device that can accurately identify the tray, avoid the influence of cargo occlusion, shooting angle and distance, and improve the accuracy of tray identification.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
When the automated guided vehicle loads and unloads cargo, it needs to obtain the exact position of the tray and of the fork holes on the tray, so that the vehicle control system can control the vehicle to complete the loading and unloading task. High precision is required when the vehicle picks up the tray, and in the prior art the tray position is generally determined by image recognition. However, trays vary widely in material and type, and the camera's shooting angle and the ambient light differ between application scenarios; all of these factors affect the accuracy of image recognition. The prior art therefore cannot cope with real scenarios in which the tray type, the loading and unloading positions and the environment are not fixed, and cannot identify the tray accurately.
Referring to fig. 1, fig. 1 is a flowchart of an AGV tray recognition method provided in the present application, including:
s1: determining an RGB-D point cloud image of an area where the tray is located;
In the prior art the tray is identified with a camera alone, so the size and position of the identified tray change with the shooting angle and the ambient illumination, and it is difficult to locate the actual position of the tray by image recognition alone.
In the present application, the area where the tray is located is captured by an RGB-D (red-green-blue plus depth) camera. The captured data comprise an ordinary RGB image and a depth image. The two are fused to obtain the RGB value of each pixel in the depth image, and the depth image is then converted into a point cloud (for example with MATLAB or by coordinate transformation), yielding a point cloud that carries RGB parameters, i.e. an RGB-D point cloud image. Because the RGB-D point cloud image contains both the RGB information and the depth information of every pixel, the distance and position of each object in the image relative to the automated guided vehicle can be determined accurately by combining the two, which effectively improves the recognition accuracy compared with relying on a camera alone as in the prior art.
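As a concrete illustration of this fusion step, the following sketch (not part of the patent; the pinhole intrinsics fx, fy, cx, cy and the image shapes are illustrative assumptions) back-projects a depth image and attaches the RGB value of each pixel:

```python
import numpy as np

def build_rgbd_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Convert an aligned depth image (meters) and RGB image into an N x 6
    array of [x, y, z, r, g, b] points, assuming a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    valid = z > 0                        # drop pixels with no depth reading
    x = (u - cx) * z / fx                # back-project the pixel grid
    y = (v - cy) * z / fy
    xyz = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid].astype(np.float32)   # RGB parameters of each valid pixel
    return np.hstack([xyz, colors])

# Toy usage with random data; in practice depth/rgb come from the RGB-D camera.
depth = np.random.uniform(0.5, 3.0, (480, 640))
rgb = np.random.randint(0, 256, (480, 640, 3))
cloud = build_rgbd_point_cloud(depth, rgb, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (N, 6): three coordinates plus three RGB parameters per point
```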
S2: in the RGB-D point cloud image, clustering the point clouds with the similarity of RGB parameters larger than the first preset similarity into an RGB point cloud space;
To identify the parts that belong to the tray in the RGB-D point cloud image (hereinafter simply called the point cloud image), the points in the image need to be clustered. Different objects in a real application scene differ in color: the ground is usually gray, the goods or containers on the tray are usually white or light wood, the tray is usually dark wood or metal, and so on. Therefore, according to the RGB parameters of each point, points whose RGB parameters are similar (similarity greater than the first preset similarity) can be clustered into one RGB point cloud space; that is, a region of similar color is treated as one object.
It should be noted that a real scene may contain several identical objects placed at different positions (such as stacked goods). To avoid misjudging several identical objects as a single object, the clustering also relies on the depth information: when checking whether the RGB parameters of points are similar, a point is only compared with other points in its neighborhood, not with points far away from it. In this way, identical objects placed at different positions and distances are not misjudged as a single object.
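A minimal sketch of this neighborhood-restricted clustering, assuming Euclidean distance in RGB space as the similarity measure; the radius and the RGB threshold are illustrative values, not parameters taken from the patent:

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_by_rgb_and_space(cloud, rgb_thresh=30.0, radius=0.05):
    """Region growing: a point joins a cluster only if it lies within `radius`
    meters of an already-clustered point AND its RGB parameters differ from
    that neighbor's by less than `rgb_thresh` (i.e. nearby and similar in color)."""
    xyz, rgb = cloud[:, :3], cloud[:, 3:6]
    tree = cKDTree(xyz)                       # spatial index for neighborhood queries
    labels = -np.ones(len(cloud), dtype=int)  # -1 means not yet assigned
    current = 0
    for seed in range(len(cloud)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(xyz[i], radius):   # nearby points only
                if labels[j] == -1 and np.linalg.norm(rgb[i] - rgb[j]) < rgb_thresh:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels  # labels[k] = index of the RGB point cloud space containing point k
```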
S3: determining average RGB parameters in each RGB point cloud space;
s4: clustering RGB point cloud spaces with average RGB parameters having similarity larger than second preset similarity into an RGB point cloud space;
Ambient illumination affects the RGB parameters of an object, so the RGB parameters of different parts of the same object may differ considerably, and the points belonging to one object may be split into several RGB point cloud spaces by the first clustering step. Therefore, after the preliminary clustering, any two RGB point cloud spaces are compared with a lower similarity threshold (the second preset similarity), and two spaces whose average RGB parameters are similar (similarity smaller than the first preset similarity but larger than the second preset similarity) are merged into one RGB point cloud space. In this way, the RGB point cloud space corresponding to each object is obtained simply and accurately.
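Continuing the sketch above, this second-stage merge can be expressed as a union-find over cluster labels; the looser threshold `merge_thresh` stands in for the second preset similarity and is an illustrative value:

```python
import numpy as np

def merge_similar_spaces(rgb, labels, merge_thresh=60.0):
    """Merge RGB point cloud spaces whose average RGB parameters differ by
    less than `merge_thresh` (a looser bound than the first-stage threshold)."""
    ids = list(np.unique(labels))
    means = {c: rgb[labels == c].mean(axis=0) for c in ids}  # average RGB per space
    parent = {c: c for c in ids}

    def find(c):                       # union-find with path compression
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for a in ids:
        for b in ids:
            if a < b and np.linalg.norm(means[a] - means[b]) < merge_thresh:
                parent[find(b)] = find(a)      # merge the two spaces
    return np.array([find(c) for c in labels])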
S5: determining the number of point clouds contained in each RGB point cloud space;
s6: determining an RGB point cloud space with the point cloud quantity within a preset point cloud quantity range as an RGB point cloud space of the tray;
the second preset similarity is smaller than the first preset similarity.
Because the physical structure of the tray does not change, the RGB point cloud space of the tray can be determined in advance and stored in the processor, so that the tray's RGB parameters, point cloud count and other information are known. In practical use, the stored point cloud count is compared with the point cloud count of the RGB point cloud space corresponding to each object; if the count for a certain object is close to the stored count (i.e. within the preset point cloud number range), that object can be judged to be the tray. This works because different objects differ greatly in area and volume; for example, the volume of the goods is usually much larger than that of the tray, so the RGB point cloud space belonging to the tray can be picked out of these RGB point cloud spaces by the number of points it contains.
In addition, the pre-stored RGB parameters of the tray can be compared with the average RGB parameters of the RGB point cloud space corresponding to each object; if the RGB parameters are similar and the point cloud counts are also similar, the object can be identified as the tray with higher confidence.
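A sketch of this final selection step; the point-count range, the stored tray color and the tolerance below are placeholders for the pre-stored values mentioned in the text:

```python
import numpy as np

def find_tray_space(rgb, labels, count_range=(800, 3000), tray_rgb=None, rgb_tol=40.0):
    """Return the label of the RGB point cloud space whose point count lies in
    the preset range; if a stored tray color is given, also require the
    space's average RGB parameters to be close to it."""
    for c in np.unique(labels):
        count = int((labels == c).sum())
        if not (count_range[0] <= count <= count_range[1]):
            continue                         # point count rules this space out
        if tray_rgb is not None:
            mean_rgb = rgb[labels == c].mean(axis=0)
            if np.linalg.norm(mean_rgb - np.asarray(tray_rgb, dtype=float)) > rgb_tol:
                continue                     # color does not match the stored tray color
        return c
    return None                              # no space matched: no tray identified
```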
In summary, an RGB-D point cloud image of the area where the tray is located is determined; point clouds whose RGB parameters have a similarity greater than the first preset similarity are clustered into RGB point cloud spaces, and the average RGB parameters of each space are determined; spaces whose average RGB parameters have a similarity greater than the second preset similarity are merged into one RGB point cloud space; the number of point clouds contained in each space is determined, and the space whose point cloud count falls within the preset point cloud number range is identified as the RGB point cloud space of the tray. Based on the fact that different objects have different colors, the RGB parameters are used to identify the RGB point cloud space corresponding to each object in the image; the space whose point cloud count lies within the preset range for the tray is then picked out and identified as the tray. By combining the point cloud with the RGB image, the tray can be identified accurately, the influence of cargo occlusion, shooting angle and distance is avoided, and the accuracy of tray identification is improved.
Based on the above embodiments:
in some embodiments, after determining the RGB-D point cloud image of the area in which the tray is located, further comprising:
judging whether the number of the point clouds in the RGB-D point cloud image is larger than a preset number or not;
if not, judging that the tray does not exist;
if yes, a step of clustering the point clouds with the similarity of the RGB parameters larger than the first preset similarity into an RGB point cloud space in the RGB-D point cloud image is entered.
To improve efficiency, in the present application the area within the camera's field of view that counts as the area where the tray is located is determined from a user instruction: the user designates the region to be identified as the tray area according to need. In practice, while the automated guided vehicle executes a task the tray may no longer be in that area, for example because another device has already taken it away. To avoid pointless detection of an area that contains no tray, after the RGB-D point cloud image of the area is obtained the number of points in it is checked: if it is greater than a preset number, a tray may be present in the area; if it is not, there is certainly no tray there. When it is determined that no tray exists, the subsequent steps are not carried out; only when a tray may be present are the subsequent steps, such as clustering by RGB parameters, performed to identify the tray position. This simple presence check skips unnecessary processing and thereby improves efficiency.
In some embodiments, after determining the RGB-D point cloud image of the area in which the tray is located, further comprising:
determining the average distance between all point clouds in the RGB-D point cloud image and the automatic guide vehicle;
determining a preset distance corresponding to the average distance;
acquiring all point clouds with the distance smaller than a preset distance from an automatic guide vehicle in the RGB-D point cloud image;
and constructing a new RGB-D point cloud image based on all the acquired point clouds.
To determine the point cloud of the plane where the tray is located in a simple way, note that when the point cloud image of the tray area is captured, both the objects and the ground in the three-dimensional region are detected; the point cloud image of the tray area therefore contains not only the points of the tray and of the cargo or container above it, but also points belonging to the ground.
The depth values of the points in the point cloud image are averaged. Because the ground points are evenly distributed, averaging them alone would give a distance roughly in the middle of the region; once the tray and the goods are added, the points on the plane of the tray and the goods are closer to the automated guided vehicle than the ground points below the tray, so the position of the tray shifts the average. Points within a certain distance range determined from this average are then extracted; these points form the point cloud from the vehicle body up to the plane where the tray is located. In this way excessive ground points are avoided and the amount of computation in the subsequent steps is reduced.
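A minimal sketch of this cropping step. The patent derives a preset distance corresponding to the average distance; here that mapping is approximated, purely for illustration, by adding a fixed margin to the average:

```python
import numpy as np

def crop_to_tray_plane(cloud, margin=0.3):
    """Keep only the points whose distance to the vehicle is smaller than a
    preset distance derived from the average distance of all points."""
    dist = np.linalg.norm(cloud[:, :3], axis=1)  # distance of each point from the camera/AGV
    preset = dist.mean() + margin                # preset distance tied to the average distance
    return cloud[dist < preset]                  # the new, cropped RGB-D point cloud image
```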
In some embodiments, in the RGB-D point cloud image, clustering the point clouds with the RGB parameters having the similarity greater than the first preset similarity into an RGB point cloud space includes:
determining a plurality of RGB parameter ranges according to a preset color division rule;
according to the RGB parameters, respectively distributing each point cloud in the RGB-D point cloud image to the corresponding RGB parameter range;
and clustering the point clouds in the same RGB parameter range into an RGB point cloud space.
To cluster the points simply, an RGB histogram may be constructed in the present application: the horizontal axis consists of the RGB parameter ranges defined by the preset color division rule, and the vertical axis is the number of points; the preset color division rule divides the color space into regions along a certain direction of color change.
As a simple example, suppose the color division rule follows the color-change direction red-orange-yellow and so on: the RGB parameter range from (255, 30, 0) to (255, 80, 0) may be defined as the red-orange range, the range from (255, 81, 0) to (255, 130, 0) as the pure orange range, and the range from (255, 131, 0) to (255, 180, 0) as the orange range. During clustering, each point in the point cloud image is assigned to the corresponding RGB parameter range according to its RGB parameters, yielding the RGB point cloud space of that color, which indicates that those points belong to the same color.
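The example above can be coded directly as a lookup over RGB parameter ranges; the three ranges below are the ones quoted in the text, and any point falling outside all of them is simply left unassigned in this sketch:

```python
import numpy as np

# RGB parameter ranges along the red-orange-yellow color-change direction,
# using the bounds quoted in the example above.
RANGES = [
    ("red-orange",  (255, 30, 0),  (255, 80, 0)),
    ("pure orange", (255, 81, 0),  (255, 130, 0)),
    ("orange",      (255, 131, 0), (255, 180, 0)),
]

def assign_to_ranges(rgb):
    """Assign each point to the first RGB parameter range that contains its
    RGB parameters; points outside every range get -1. Also return the
    histogram (number of points per range)."""
    rgb = np.asarray(rgb)
    bins = -np.ones(len(rgb), dtype=int)
    for idx, (_, lo, hi) in enumerate(RANGES):
        inside = np.all((rgb >= lo) & (rgb <= hi), axis=1)
        bins[inside & (bins == -1)] = idx
    hist = np.bincount(bins[bins >= 0], minlength=len(RANGES))
    return bins, hist
```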
In some embodiments, before clustering the point clouds within the same RGB parameter range into one RGB point cloud space, the method further includes:
S21: sorting all the RGB parameter ranges in a preset order;
S22: taking the first RGB parameter range as the current RGB parameter range;
S23: judging whether the current RGB parameter range is the last RGB parameter range; if yes, entering the step of clustering the point clouds in the same RGB parameter range into one RGB point cloud space; if not, entering S24;
S24: determining a representative point cloud of the current RGB parameter range;
S25: respectively determining the RGB difference between the representative point cloud and the RGB parameters of each point cloud in the RGB parameter range next to the current RGB parameter range;
S26: moving the point clouds in the next RGB parameter range whose RGB difference is smaller than the preset difference into the current RGB parameter range;
S27: taking the RGB parameter range next to the current RGB parameter range as the new current RGB parameter range, and returning to S23.
To classify the points further, and considering that changes in ambient light affect how the color of an object appears in the image, after each point in the point cloud image has been assigned to an RGB parameter range as described in the above embodiment, every pair of adjacent RGB parameter ranges is examined again and points with similar RGB parameters are grouped into the same class.
Specifically, referring to fig. 2, fig. 2 is a flowchart of the clustering method for RGB point cloud spaces provided in the present application. The RGB parameter ranges are sorted in a preset order, for example from large to small or from small to large, or in the order of color change (red-orange-yellow in this embodiment). The first RGB parameter range is taken as the current range, and the point that best represents it is chosen as the representative point cloud. The RGB difference between this representative point cloud and each point in the adjacent RGB parameter range is then computed, and the points whose difference is smaller than the preset difference are moved into the current range, which further refines the classification. Note that the procedure follows the ordering of the RGB parameter ranges: once the differences for one pair of ranges have been computed and the points moved, the next RGB parameter range becomes the new current range. Since this new current range has already been compared with the previous range, its representative point cloud does not need to be compared with the points of the previous range again; it only needs to be compared with the points of the following range.
When determining the representative point cloud, the point whose RGB parameters are closest to the middle value of the RGB parameter range may be used, or the average RGB parameters of all points in the range may be computed and the point whose RGB parameters are closest to that average may be used as the representative point cloud.
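A sketch of the S21-S27 walk over the ordered RGB parameter ranges. It uses the average-RGB rule just described to pick the representative point cloud, an equally weighted absolute difference as the RGB difference, and an illustrative threshold; the refinement for fully emptied ranges discussed further below is omitted:

```python
import numpy as np

def merge_adjacent_ranges(ranges, diff_thresh=25.0):
    """`ranges` is an ordered list of arrays of RGB triples, one per RGB
    parameter range. Walk the ranges in order; for each one, pick a
    representative point, compute the RGB difference to every point of the
    next range, and move the points whose difference is below `diff_thresh`
    into the current range."""
    ranges = [np.asarray(r, dtype=float).reshape(-1, 3) for r in ranges]
    for i in range(len(ranges) - 1):                  # S23: stop at the last range
        cur, nxt = ranges[i], ranges[i + 1]
        if len(cur) == 0 or len(nxt) == 0:
            continue
        rep = cur[np.argmin(np.abs(cur - cur.mean(axis=0)).sum(axis=1))]  # S24: representative point
        diff = np.abs(nxt - rep).sum(axis=1)          # S25: RGB difference to each next-range point
        move = diff < diff_thresh                     # S26: points pulled into the current range
        ranges[i] = np.vstack([cur, nxt[move]])
        ranges[i + 1] = nxt[~move]                    # S27: carry on with the next range
    return ranges
```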
In calculating the RGB difference, the following formula may be used for calculation:
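The formula itself appears only as a figure in the published document; a weighted absolute-difference form that is consistent with the symbol definitions below, and is offered here only as an assumed reconstruction, would be:

```latex
\delta = \alpha \left| r_1 - r_n \right| + \beta \left| g_1 - g_n \right| + \gamma \left| b_1 - b_n \right|
```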
where δ is the RGB difference; r₁, g₁ and b₁ are the RGB parameters of the representative point cloud; rₙ, gₙ and bₙ are the RGB parameters of the n-th point cloud in the next RGB parameter range, n being any positive integer not greater than the number of point clouds in that range; and α, β and γ are hyper-parameters that adjust how strongly the color difference of each of the three channels influences the result.
In this way, points that were initially assigned to different RGB parameter ranges can be moved into more suitable ranges, which makes the RGB point cloud spaces obtained by the subsequent clustering more reasonable.
In addition, since color change is cyclic, when the first RGB parameter range is taken as the current RGB parameter range, the RGB differences may be computed not only between its representative point cloud and each point cloud in the next RGB parameter range, but also between the representative point cloud and each point cloud in the last RGB parameter range. Intuitively, if the red range is the first RGB parameter range, orange-red is the next range and mauve belongs to the last range, then mauve is more similar to red, so the red RGB parameter range also needs to be compared with the mauve RGB parameter range.
In some embodiments, further comprising:
when all the point clouds in the current RGB parameter range have been moved into the RGB parameter range preceding the current RGB parameter range, the representative point cloud of that preceding range is taken as the representative point cloud of the current RGB parameter range;
in that case, moving the point clouds in the next RGB parameter range whose RGB difference is smaller than the preset difference into the current RGB parameter range includes:
moving the point clouds in the RGB parameter range next to the current RGB parameter range whose RGB difference is smaller than the preset difference into the RGB parameter range preceding the current RGB parameter range.
When a certain RGB parameter range is the current range and all of its point clouds have been moved into the preceding RGB parameter range, the current range and that preceding range can be regarded as RGB parameter ranges belonging to the same object. When the RGB differences are then computed for the point clouds in the range following the current range, the computation may simply continue with the representative point cloud of the preceding range.
In some embodiments, after determining the RGB point cloud space having the point cloud number within the preset point cloud number range as the RGB point cloud space of the tray, further includes:
constructing a minimum circumscribed rectangle of an RGB point cloud space of the tray;
determining the aspect ratio of the minimum circumscribed rectangle;
and determining the type of the tray according to the aspect ratio of the minimum circumscribed rectangle.
To obtain more detailed information about the tray, refer to fig. 3, which is a flowchart of another AGV tray identification method provided in the present application. After the RGB point cloud space of the tray has been obtained, its minimum circumscribed rectangle is computed first: the length and width of the rectangle are obtained from the distance between the two points farthest apart along the horizontal axis of the RGB point cloud space and the distance between the two points farthest apart along its vertical axis. The aspect ratio of the rectangle is then compared with the pre-stored aspect ratio of each tray type, and the most similar preset aspect ratio is found, which determines the tray type corresponding to the minimum circumscribed rectangle.
Furthermore, when constructing the minimum circumscribed rectangle, a concave hull algorithm may be used; the concave hull can recover the contours of the fork holes on the tray, so the tray type can be judged jointly from the aspect ratio together with parameters such as the number and size of the fork holes. Finally, the coordinates of the fork holes are determined according to the tray type, and whether the goods can be picked up is determined from those coordinates.
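A sketch of the aspect-ratio match; the tray names and pre-stored ratios in the dictionary are placeholders, and the axis-aligned rectangle used here is a simplification of the minimum circumscribed rectangle described above:

```python
import numpy as np

# Placeholder catalogue of tray types and their pre-stored aspect ratios.
TRAY_ASPECT_RATIOS = {"type A": 1.0, "type B": 1.25, "type C": 1.5}

def classify_tray(tray_points):
    """Compute the axis-aligned circumscribed rectangle of the tray's RGB
    point cloud space, take its aspect ratio, and return the tray type whose
    pre-stored aspect ratio is closest."""
    xy = np.asarray(tray_points, dtype=float)[:, :2]   # horizontal and vertical axes of the space
    length = xy[:, 0].max() - xy[:, 0].min()           # span between the two farthest points horizontally
    width = xy[:, 1].max() - xy[:, 1].min()            # span between the two farthest points vertically
    ratio = length / width
    best = min(TRAY_ASPECT_RATIOS, key=lambda t: abs(TRAY_ASPECT_RATIOS[t] - ratio))
    return best, ratio
```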
Referring to fig. 4, fig. 4 is a schematic structural diagram of an AGV tray recognition device provided in the present application, including:
a memory 21 for storing a computer program;
a processor 22 for performing the steps of the AGV tray identification method described above when executing a computer program.
For a detailed description of the AGV tray recognition device provided in the present application, please refer to an embodiment of the above-mentioned AGV tray recognition method, and the detailed description is omitted herein.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An AGV pallet identification method, comprising:
determining an RGB-D point cloud image of an area where the tray is located;
in the RGB-D point cloud image, clustering point clouds with the similarity of RGB parameters larger than a first preset similarity into an RGB point cloud space;
determining average RGB parameters in each RGB point cloud space;
clustering the RGB point cloud spaces with the average RGB parameters having the similarity larger than a second preset similarity into an RGB point cloud space;
determining the number of point clouds contained in each RGB point cloud space;
determining the RGB point cloud space of which the point cloud quantity is within a preset point cloud quantity range as the RGB point cloud space of the tray;
the second preset similarity is smaller than the first preset similarity.
2. The AGV tray identification method of claim 1, further comprising, after determining the RGB-D point cloud image of the area in which the tray is located:
judging whether the number of the point clouds in the RGB-D point cloud image is larger than a preset number or not;
if not, judging that the tray does not exist;
if yes, a step of clustering the point clouds with the RGB parameters having the similarity larger than the first preset similarity into an RGB point cloud space in the RGB-D point cloud image is entered.
3. The AGV tray identification method of claim 1, further comprising, after determining the RGB-D point cloud image of the area in which the tray is located:
determining average distances between all point clouds in the RGB-D point cloud image and an automatic guide vehicle;
determining a preset distance corresponding to the average distance;
acquiring all point clouds with the distance smaller than the preset distance from the automatic guide vehicle in the RGB-D point cloud image;
and constructing a new RGB-D point cloud image based on all the acquired point clouds.
4. The AGV tray identification method according to claim 1, wherein clustering the point clouds whose RGB parameters have a similarity greater than the first preset similarity into an RGB point cloud space in the RGB-D point cloud image comprises:
determining a plurality of RGB parameter ranges according to a preset color division rule;
according to the RGB parameters, respectively distributing each point cloud in the RGB-D point cloud image to the corresponding RGB parameter range;
clustering the point clouds in the same RGB parameter range into an RGB point cloud space.
5. The AGV tray identification method as set forth in claim 4, further comprising, prior to clustering said point clouds located within the same RGB parameter range into an RGB point cloud space:
S21: sorting all the RGB parameter ranges in a preset order;
S22: taking the first RGB parameter range as a current RGB parameter range;
S23: judging whether the current RGB parameter range is the last RGB parameter range; if yes, entering the step of clustering the point clouds in the same RGB parameter range into one RGB point cloud space; if not, entering S24;
S24: determining a representative point cloud of the current RGB parameter range;
S25: respectively determining the RGB difference between the representative point cloud and the RGB parameters of each point cloud in the RGB parameter range next to the current RGB parameter range;
S26: moving the point clouds in the RGB parameter range next to the current RGB parameter range whose RGB difference is smaller than a preset difference into the current RGB parameter range;
S27: taking the RGB parameter range next to the current RGB parameter range as a new current RGB parameter range, and returning to S23.
6. The AGV tray identification method as set forth in claim 5, further comprising:
when all the point clouds in the current RGB parameter range have been moved into the RGB parameter range preceding the current RGB parameter range, taking the representative point cloud of that preceding RGB parameter range as the representative point cloud of the current RGB parameter range;
wherein moving the point clouds in the next RGB parameter range whose RGB difference is smaller than a preset difference into the current RGB parameter range comprises:
moving the point clouds in the RGB parameter range next to the current RGB parameter range whose RGB difference is smaller than a preset difference into the RGB parameter range preceding the current RGB parameter range.
7. The AGV tray identification method according to any one of claims 1 to 6, further comprising, after determining the RGB point cloud space whose point cloud number is within a preset point cloud number range as the RGB point cloud space of the tray:
constructing a minimum circumscribed rectangle of an RGB point cloud space of the tray;
determining the aspect ratio of the minimum circumscribed rectangle;
and determining the type of the tray according to the aspect ratio of the minimum circumscribed rectangle.
8. An AGV tray recognition apparatus comprising:
a memory for storing a computer program;
a processor for implementing the steps of the AGV tray identification method according to any one of claims 1 to 7 when executing said computer program.
CN202311708081.6A 2023-12-12 2023-12-12 AGV tray identification method and device Pending CN117710714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311708081.6A CN117710714A (en) 2023-12-12 2023-12-12 AGV tray identification method and device


Publications (1)

Publication Number Publication Date
CN117710714A 2024-03-15

Family

ID=90156344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311708081.6A Pending CN117710714A (en) 2023-12-12 2023-12-12 AGV tray identification method and device

Country Status (1)

Country Link
CN (1) CN117710714A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination