CN111754515B - Sequential gripping method and device for stacked articles - Google Patents

Sequential gripping method and device for stacked articles

Info

Publication number
CN111754515B
CN111754515B (application CN201911301902.8A)
Authority
CN
China
Prior art keywords
point cloud
dimensional coordinates
dimensional
article
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911301902.8A
Other languages
Chinese (zh)
Other versions
CN111754515A (en)
Inventor
刘伟峰
万保成
曹凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingbangda Trade Co Ltd
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN201911301902.8A
Publication of CN111754515A
Application granted
Publication of CN111754515B
Active legal status (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method and device for sequentially grasping stacked articles, in the field of computer technology. In one embodiment, the method comprises the following steps: segmenting the point cloud of the uppermost layer of articles from the point cloud data of the stacked articles, and obtaining the three-dimensional coordinates of the segmented point cloud; mapping the three-dimensional coordinates of the segmented point cloud to two-dimensional coordinates on a two-dimensional image of the articles, using the calibration relation between the two-dimensional camera and the three-dimensional camera; cropping the corresponding region from the two-dimensional image according to the two-dimensional coordinates, and identifying the articles within that region; and obtaining the three-dimensional coordinates of the corresponding point cloud from the two-dimensional coordinates of each identified article, then calculating the article's center position and the end effector's grasping pose from those three-dimensional coordinates, so as to grasp the article. This embodiment identifies articles and determines their positions accurately, making grasping more accurate, convenient, and efficient, and reducing the probability of collision while grasping.

Description

Sequential gripping method and device for stacked articles
Technical Field
The present invention relates to the field of computer technology, and in particular to a method and device for sequentially grasping stacked articles.
Background
In-bin robotic picking refers to a scenario in which a robot, with visual assistance, takes the required number of articles out of a tote according to a picking task issued by the system and places them at a designated position. One difficulty in this scenario is ensuring that articles are grasped layer by layer, from high to low, so that the end effector does not collide with other articles in the bin.
In practice, a large proportion of articles come in rectangular packages, such as food and 3C (consumer electronics) products. Multiple articles of the same kind are generally densely arranged in one tote. For in-bin picking tasks in such situations, current machine vision technology mainly takes the following two forms:
(1) Identifying and segmenting the target on the 3D point cloud, then calculating the position of the target (the spatial coordinates of the target's center point in the robot's world coordinate system) and the grasping pose of the end effector (the pose of the tool coordinate system at the robot's end in the robot's world coordinate system when picking the target, which can be represented by three Euler angles);
(2) Identifying and segmenting the target on the 2D image, obtaining the corresponding point cloud using the calibration relation between the 2D camera and the 3D camera, and then calculating the position of the target and the grasping pose of the end effector.
In the course of implementing the present invention, the inventors found at least the following problems in the prior art:
(1) Segmenting the target on the point cloud can achieve layering, but point cloud images generally have low resolution and the point cloud itself has limited precision; moreover, the packaging of many articles is reflective, and voids (invalid data) tend to appear in the point cloud where light is reflected, so the accuracy of segmenting the target directly on the point cloud is not high;
(2) Segmenting the target on the 2D image can achieve higher segmentation accuracy, but the heights of articles cannot be distinguished on a 2D image, so a lower-lying article may easily be grasped first, increasing the probability of collision inside the bin.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and device for sequentially grasping stacked articles, which can accurately identify articles and determine their positions, making grasping more accurate, convenient, and efficient, and greatly reducing the probability of in-bin collision when grasping articles.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a sequential gripping method of stacked articles.
A method of sequentially grasping stacked articles, comprising: segmenting the point cloud of the uppermost layer of articles from the point cloud data of the stacked articles, and obtaining the three-dimensional coordinates of the segmented point cloud; mapping the three-dimensional coordinates of the segmented point cloud to two-dimensional coordinates on a two-dimensional image of the articles, using the calibration relation between a two-dimensional camera and a three-dimensional camera; cropping the corresponding region from the two-dimensional image according to the two-dimensional coordinates, and identifying articles within the region; and obtaining the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of an identified article, and calculating the center position of the identified article and the grasping pose of the end effector from the obtained three-dimensional coordinates, so as to grasp the article.
Optionally, segmenting the point cloud of the uppermost layer of articles from the point cloud data of the stacked articles includes: calculating the distance from each point in the point cloud data of the stacked articles to the picking-station plane, and segmenting the point cloud of the uppermost layer of articles according to a set point-count threshold and layer-thickness threshold.
Optionally, cropping the corresponding region from the two-dimensional image of the articles according to the two-dimensional coordinates includes: obtaining the coordinate range of the minimum bounding rectangle of the figure formed by the two-dimensional coordinates; and cropping the region corresponding to the minimum bounding rectangle from the two-dimensional image according to that coordinate range.
Optionally, the method further comprises, after mapping the three-dimensional coordinates of the segmented point cloud to two-dimensional coordinates on the two-dimensional image of the articles: generating a point cloud matrix of the same size as the two-dimensional image; and obtaining the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of the identified article includes: obtaining the three-dimensional coordinates of the corresponding point cloud from the point cloud matrix according to the two-dimensional coordinates of the identified article.
Optionally, generating a point cloud matrix of the same size as the two-dimensional image of the articles includes: expanding the three-dimensional coordinates of the segmented point cloud with specified data to generate a point cloud matrix of the same size as the two-dimensional image; and obtaining the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of the identified article includes: taking, from the point cloud matrix, the three-dimensional coordinates stored at the positions corresponding to the two-dimensional coordinates of the identified article; processing the extracted three-dimensional coordinates by deleting the specified data, to obtain valid three-dimensional coordinates; and obtaining the three-dimensional coordinates of the corresponding point cloud from the valid three-dimensional coordinates.
According to another aspect of the embodiments of the present invention, a device for sequentially grasping stacked articles is provided.
A device for sequentially grasping stacked articles, comprising: a point cloud segmentation module for segmenting the point cloud of the uppermost layer of articles from the point cloud data of the stacked articles and obtaining the three-dimensional coordinates of the segmented point cloud; a coordinate mapping module for mapping the three-dimensional coordinates of the segmented point cloud to two-dimensional coordinates on a two-dimensional image of the articles, using the calibration relation between the two-dimensional camera and the three-dimensional camera; an article identification module for cropping the corresponding region from the two-dimensional image according to the two-dimensional coordinates and identifying articles within the region; and a point cloud computing module for obtaining the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of an identified article, and calculating the center position of the identified article and the grasping pose of the end effector from the obtained three-dimensional coordinates, so as to grasp the article.
Optionally, the point cloud segmentation module is further configured to: calculate the distance from each point in the point cloud data of the stacked articles to the picking-station plane, and segment the point cloud of the uppermost layer of articles according to a set point-count threshold and layer-thickness threshold.
Optionally, the article identification module is further configured to: obtain the coordinate range of the minimum bounding rectangle of the figure formed by the two-dimensional coordinates; and crop the region corresponding to the minimum bounding rectangle from the two-dimensional image according to that coordinate range.
Optionally, the device further comprises a matrix generation module configured to: after the three-dimensional coordinates of the segmented point cloud are mapped to two-dimensional coordinates on the two-dimensional image of the articles, generate a point cloud matrix of the same size as the two-dimensional image; and the point cloud computing module is further configured to: obtain the three-dimensional coordinates of the corresponding point cloud from the point cloud matrix according to the two-dimensional coordinates of the identified article.
Optionally, the matrix generation module is further configured to: expand the three-dimensional coordinates of the segmented point cloud with specified data to generate a point cloud matrix of the same size as the two-dimensional image; and the point cloud computing module is further configured to: take, from the point cloud matrix, the three-dimensional coordinates stored at the positions corresponding to the two-dimensional coordinates of the identified article; process the extracted three-dimensional coordinates by deleting the specified data, to obtain valid three-dimensional coordinates; and obtain the three-dimensional coordinates of the corresponding point cloud from the valid three-dimensional coordinates.
According to yet another aspect of an embodiment of the present invention, an electronic device for sequentially gripping stacked articles is provided.
An electronic device for sequentially grasping stacked articles, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for sequentially grasping stacked articles provided by the embodiments of the present invention.
According to yet another aspect of an embodiment of the present invention, a computer-readable medium is provided.
A computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the method for sequentially grasping stacked articles provided by the embodiments of the present invention.
An embodiment of the above invention has the following advantages or benefits. The point cloud of the uppermost layer of articles is segmented from the point cloud data of the stacked articles, and the three-dimensional coordinates of the segmented point cloud are obtained; those three-dimensional coordinates are then mapped to two-dimensional coordinates on the two-dimensional image of the articles; a corresponding region is cropped from the two-dimensional image according to the two-dimensional coordinates, and article identification is performed within that region; finally, the three-dimensional coordinates of the corresponding point cloud are obtained from the two-dimensional coordinates of each identified article, and the article's center position and the end effector's grasping pose are calculated from those coordinates so the article can be grasped. Layering is achieved with the 3D camera's point cloud data, and article identification is performed on the corresponding region of the 2D image extracted through the calibration relation, so articles are identified layer by layer. Articles can thus be identified accurately and their positions determined accurately, making grasping more accurate, convenient, and efficient, and greatly reducing the probability of in-bin collision when grasping articles.
Further effects of the above non-conventional alternatives are described below in connection with the specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting it. In the drawings:
FIG. 1 is a schematic view of a picking operation scenario in a robot box according to an embodiment of the present invention;
FIG. 2 is a schematic view of square article placement in a transfer case according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the main steps of a sequential gripping method of stacked articles according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an implementation of an embodiment of the present invention;
FIG. 5 is a schematic view of the main modules of a sequential gripping device for stacking articles according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 7 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings. Various details of the embodiments are included to aid understanding and should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the invention. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a schematic view of an in-bin robotic picking scene according to an embodiment of the present invention. As shown in Fig. 1, in an embodiment of the invention, cameras are mounted in advance above the tote holding the articles to be grasped: a 2D (two-dimensional) camera and a 3D (three-dimensional) camera. The 2D camera has high resolution, which facilitates article identification; the point cloud captured by the 3D camera is used for layer identification and for calculating the target article's center position and the end effector's pose. After the cameras are installed, the two cameras are calibrated. The robot is fixed next to the picking station and picks articles out of the tote at the picking station with its end effector.
Fig. 2 is a schematic view of square articles placed in a tote according to an embodiment of the present invention. As shown in Fig. 2, the square articles in the tote are placed tightly against one another, so their positions are fixed and they cannot shift.
After the article sorting system issues a picking task, a tote holding the articles to be picked is placed at the picking station of the robotic picking system, and the robot picks the articles according to the target article's center position and the end effector pose calculated by the article picking system.
Fig. 3 is a schematic diagram of the main steps of a method for sequentially grasping stacked articles according to an embodiment of the present invention. As shown in Fig. 3, the method mainly includes the following steps S301 to S304.
Step S301: segment the point cloud of the uppermost layer of articles from the point cloud data of the stacked articles, and obtain the three-dimensional coordinates of the segmented point cloud.
According to an embodiment of the present invention, segmenting the point cloud of the uppermost layer of articles from the point cloud data of the stacked articles in step S301 may specifically be: calculating the distance from each point in the point cloud data of the stacked articles to the picking-station plane, and segmenting the point cloud of the uppermost layer of articles according to a set point-count threshold and layer-thickness threshold.
Specifically, for example: project the point cloud of the articles in the tote onto the plane corresponding to the picking station, then calculate the distance from each point in the article point cloud to the picking-station plane. The plane equation of the picking station is preset data calculated in advance. Once the distance from every point to the picking-station plane is obtained, a distance histogram can be computed, and subsequent point cloud segmentation is performed from that histogram.
During point cloud segmentation, the point cloud of the uppermost layer of articles can be segmented according to a set point-count threshold and layer-thickness threshold. In general, because the precision of point cloud data is not high, a threshold on the number of points per layer can be set when confirming layers from the point cloud: when the number of points is not less than the threshold, those points can be considered to constitute a layer. Similarly, a thickness threshold for each layer can be set, determined from the thickness of the articles to be grasped: when the depth of a point cloud block satisfies the thickness threshold, that block can be considered to constitute a layer. In a specific implementation, to segment the point cloud accurately, the point cloud corresponding to the uppermost layer of articles can be determined according to both the set point-count threshold and the layer-thickness threshold, and that point cloud is segmented out for subsequent processing. Note that the point cloud of the uppermost layer of articles may be a single contiguous block of data or may comprise several blocks, depending on how the articles are placed.
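The layering test above might be sketched as follows. This is an illustrative simplification, not the patent's exact procedure: it keeps only the points within one layer thickness of the highest point and then applies the point-count threshold, rather than walking the full distance histogram; the function name and threshold values are hypothetical.

```python
import numpy as np

def segment_top_layer(points, plane, min_points=500, layer_thickness=0.05):
    """Split off the uppermost layer of an article point cloud.

    points: (N, 3) array of (x, y, z) coordinates.
    plane:  (a, b, c, d) coefficients of the picking-station plane
            a*x + b*y + c*z + d = 0 (preset data, as in the embodiment).
    min_points and layer_thickness are the point-count and
    layer-thickness thresholds; the values here are placeholders.
    """
    a, b, c, d = plane
    normal = np.array([a, b, c])
    # Signed distance of every point to the picking-station plane.
    dist = (points @ normal + d) / np.linalg.norm(normal)

    # Candidate top layer: points within one layer thickness of the
    # farthest point from the picking-station plane.
    mask = dist >= dist.max() - layer_thickness
    if mask.sum() < min_points:
        return None  # too few points: likely noise, not a real layer
    return points[mask]
```

A real implementation would bin `dist` into a histogram and scan it from the top, so that several thin, dense bands can be merged into one layer before the thresholds are applied.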
During point cloud segmentation, the three-dimensional coordinates of the segmented points must be obtained and stored in a two-dimensional matrix, in which each position holds the (x, y, z) coordinates of one point. The contour of the segmented point cloud is then obtained from these three-dimensional coordinates, giving the effective region to be processed. When the point cloud consists of several blocks, the three-dimensional coordinate set of each block is stored in its own two-dimensional matrix, and each block is likewise processed separately in the subsequent steps.
Because the 2D camera's resolution is high and therefore well suited to article identification, once the point cloud of the uppermost layer of articles is obtained, it is mapped onto the two-dimensional image to identify the uppermost articles, which improves identification precision.
Step S302: map the three-dimensional coordinates of the segmented point cloud to two-dimensional coordinates on the two-dimensional image of the articles, using the calibration relation between the two-dimensional camera and the three-dimensional camera.
Here the calibration relation P₃₄ between the 2D camera and the 3D camera is the product of the intrinsic matrix M₃₃ of the 2D camera and the extrinsic matrix T₃₄ between the 2D camera and the 3D camera obtained by calibration. Namely:

P₃₄ = M₃₃ · T₃₄ = M₃₃ · [R₃₃ t₃₁];

where M₃₃, the intrinsic matrix of the 2D camera, is a 3×3 matrix of inherent parameters; T₃₄, the extrinsic matrix between the 2D camera and the 3D camera obtained by calibration, is a 3×4 matrix comprising a rotation component R₃₃ (a 3×3 matrix) and a translation component t₃₁ (a 3×1 matrix). The extrinsic calibration can be carried out with existing techniques.
In point cloud coordinate mapping, the three-dimensional coordinates (x, y, z) of the segmented point cloud are mapped one by one to two-dimensional coordinates (U, V) in the two-dimensional image coordinate system. The two-dimensional image coordinate system is generally established with the top-left corner of the image as the origin, the horizontal rightward direction as the positive X axis, and the vertical downward direction as the positive Y axis. (U, V) can be calculated by the standard projection formula:

s · [U, V, 1]ᵀ = P₃₄ · [x, y, z, 1]ᵀ

where s is a scale factor eliminated by dividing the first two components of the result by the third. According to this formula, the three-dimensional coordinates of the segmented point cloud can be mapped to two-dimensional coordinates in the two-dimensional image coordinate system.
Step S303: crop the corresponding region from the two-dimensional image of the articles according to the two-dimensional coordinates, and identify the articles within the region.
According to an embodiment of the present invention, cropping the corresponding region from the two-dimensional image according to the two-dimensional coordinates may specifically be:
obtaining the coordinate range of the minimum bounding rectangle of the figure formed by the two-dimensional coordinates;
cropping the region corresponding to the minimum bounding rectangle from the two-dimensional image according to that coordinate range.
The coordinate range of the minimum bounding rectangle of the figure formed by the two-dimensional coordinates can be obtained specifically from the minimum values (Umin, Vmin) and maximum values (Umax, Vmax) of the two-dimensional coordinates. The region corresponding to the minimum bounding rectangle can then be cropped from the two-dimensional image of the articles, where cropping means taking the data inside the minimum bounding rectangle out of the two-dimensional image.
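A minimal sketch of this cropping step, assuming the mapped (U, V) coordinates are integer pixel positions; names are illustrative.

```python
import numpy as np

def crop_min_rect(image, uv):
    """Crop the axis-aligned minimum bounding rectangle of the mapped
    (U, V) coordinates out of the 2D image.

    image: (H, W) or (H, W, C) array; uv: (N, 2) integer coordinates.
    Returns the cropped region and the (Umin, Vmin) offset, which is
    needed later to map identification results back to full-image
    coordinates.
    """
    umin, vmin = uv.min(axis=0)
    umax, vmax = uv.max(axis=0)
    # U indexes columns and V indexes rows (origin at the top-left).
    region = image[vmin:vmax + 1, umin:umax + 1]
    return region, (umin, vmin)
```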
Since there may be one or more articles in the uppermost layer, there may correspondingly be one or more identified target articles.
Step S304: obtain the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of the identified article, and calculate the center position of the identified article and the grasping pose of the end effector from the obtained three-dimensional coordinates, so as to grasp the article.
According to the technical solution of the present invention, since there may be one or more identified articles, during grasping the center position and the end effector's grasping pose are calculated for each article from its corresponding point cloud data. The center position of an article refers to the coordinates of its center point in the robot's world coordinate system; the grasping pose of the end effector refers to the relation between the robot's end tool (end effector) coordinate system and the robot's world coordinate system when the robot grasps the article. After an article is identified, its center position can be calculated from the center point of its minimum bounding rectangle, and the end effector's grasping pose can be calculated by establishing a local coordinate system from two edges of the article's bounding rectangle and the plane normal. The correct grasping pose is the one that makes the tool coordinate system of the robot's end effector coincide with the local coordinate system established on the article.
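The local grasp frame described above (two edges of the article's bounding rectangle plus the plane normal) can be sketched as follows. This is an illustrative construction, not the patent's exact computation: the function name is hypothetical, and the corners are assumed to be already expressed in the robot's world coordinate system and ordered around the rectangle.

```python
import numpy as np

def grasp_pose(corners3d):
    """Estimate the article center and a local grasp frame from the
    four ordered 3D corners of the article's bounding rectangle.

    Returns (center, R): center is the rectangle's center point, and
    R is a rotation matrix whose columns are the local x, y, z axes.
    """
    corners3d = np.asarray(corners3d, dtype=float)
    center = corners3d.mean(axis=0)
    x_axis = corners3d[1] - corners3d[0]        # one rectangle edge
    x_axis /= np.linalg.norm(x_axis)
    y_edge = corners3d[3] - corners3d[0]        # the adjacent edge
    z_axis = np.cross(x_axis, y_edge)           # plane normal
    z_axis /= np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)           # re-orthogonalized y
    R = np.column_stack([x_axis, y_axis, z_axis])
    return center, R
```

Aligning the end effector's tool frame with (center, R) is then what the text describes as making the two coordinate systems coincide; R can be converted to the three Euler angles mentioned earlier if the controller expects them.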
According to the technical solution of an embodiment of the present invention, to obtain the corresponding point cloud from the two-dimensional coordinates of the identified article, a point cloud matrix of the same size as the two-dimensional image of the articles can be generated after the three-dimensional coordinates of the segmented point cloud are mapped to two-dimensional coordinates on that image; the three-dimensional coordinates of the corresponding point cloud are then obtained from the point cloud matrix according to the two-dimensional coordinates of the identified article. The point cloud matrix has the same size as the two-dimensional image, and the position at a given two-dimensional coordinate (U, V) stores the three-dimensional coordinate data (x, y, z) of the corresponding point; by generating such a matrix, the three-dimensional coordinates corresponding to the article's two-dimensional coordinates can be read directly from it. Since a two-dimensional image is composed of individual pixels, each image can be represented by a two-dimensional matrix, and the element in row V, column U (indexed by the two-dimensional coordinates (U, V)) of the point cloud matrix is the three-dimensional coordinate data (x, y, z) of the corresponding point.
Because the 2D camera has far more pixels than the 3D camera, the result of mapping the point cloud's three-dimensional coordinates into the two-dimensional image coordinate system is sparse. To obtain the corresponding point cloud from the two-dimensional coordinates of the identified article, the data must therefore be expanded to produce a point cloud matrix of the same size as the two-dimensional image.
According to an embodiment of the present invention, generating a point cloud matrix of the same size as the two-dimensional image of the articles may specifically be: expanding the three-dimensional coordinates of the segmented point cloud with specified data to generate a point cloud matrix of the same size as the two-dimensional image. Obtaining the three-dimensional coordinates of the corresponding point cloud from the two-dimensional coordinates of the identified article can then be realized by the following steps:
taking, from the point cloud matrix, the three-dimensional coordinates stored at the positions corresponding to the two-dimensional coordinates of the identified article;
processing the extracted three-dimensional coordinates by deleting the specified data, to obtain valid three-dimensional coordinates;
obtaining the three-dimensional coordinates of the corresponding point cloud from the valid three-dimensional coordinates.
The data used for expansion is not real point cloud data; in other words, it is invalid data. In a specific implementation, designated values can be used as the invalid data, so that after the three-dimensional coordinates are fetched from the point cloud matrix according to the two-dimensional coordinates, this invalid portion can be deleted, ensuring the accuracy of the result.
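A minimal sketch of the expansion scheme above: a matrix the size of the 2D image is filled with a sentinel (the "specified data"), mapped positions receive real 3D coordinates, and lookups discard the sentinel. NaN is used as the sentinel here purely for illustration, and the function names are hypothetical.

```python
import numpy as np

def build_point_cloud_matrix(shape_hw, uv, points, invalid=np.nan):
    """Build an (H, W, 3) matrix the size of the 2D image, storing the
    3D coordinates of each mapped point at its (row V, column U)
    position; unmapped positions hold the sentinel value.
    """
    h, w = shape_hw
    mat = np.full((h, w, 3), invalid, dtype=float)
    mat[uv[:, 1], uv[:, 0]] = points   # V indexes rows, U columns
    return mat

def valid_points_in_region(mat, vmin, vmax, umin, umax):
    """Fetch the 3D coordinates inside a pixel region and drop the
    sentinel entries, keeping only real point-cloud data."""
    block = mat[vmin:vmax + 1, umin:umax + 1].reshape(-1, 3)
    return block[~np.isnan(block).any(axis=1)]
```

An implementation could equally use an out-of-range constant instead of NaN; the only requirement is that the sentinel can never collide with a real (x, y, z) value.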
Fig. 4 is a schematic flow chart of an implementation of an embodiment of the present invention. As shown in Fig. 4, the implementation mainly comprises the following steps:
(1) Count the point cloud distribution of the articles in the turnover box. In this step, the point cloud distribution of the scene shown in Fig. 1 may be counted first, and the distribution of the articles in the turnover box then extracted from it;
(2) Calculate the distance from each point in the article point cloud to the plane of the picking station, and build a distance histogram. The plane equation of the picking station is preset data calculated in advance;
(3) Segment the point cloud of the uppermost layer according to the distance histogram, the set threshold on the number of points contained in each layer, and the set layer-thickness threshold;
(4) Extract the contour of the segmented point cloud to obtain the effective area;
(5) For each segmented point cloud, map its three-dimensional coordinates (x, y, z) to two-dimensional image coordinates (U, V). Specifically, the coordinate mapping is performed according to the calibration relation between the 2D camera and the 3D camera;
(6) Generate a point cloud matrix of the same size as the 2D image, in which the (U, V) position stores the three-dimensional coordinates (x, y, z) of the corresponding point;
(7) Calculate the coordinate range of the minimum circumscribed rectangle formed by the (U, V) coordinates;
(8) Crop the region corresponding to the minimum circumscribed rectangle out of the 2D image;
(9) Identify the target articles in the cropped region;
(10) According to the identification result, retrieve the three-dimensional coordinates (x, y, z) of the point cloud at the position corresponding to each target article from the point cloud matrix;
(11) Calculate the center position of each target article and the grabbing gesture of the end effector, and send them to the mechanical arm to grab the articles.
Fig. 5 is a schematic diagram of main modules of a sequential gripping device for stacking articles according to an embodiment of the present invention. As shown in fig. 5, the sequential grabbing device 500 for stacking articles according to the embodiment of the present invention mainly includes a point cloud segmentation module 501, a coordinate mapping module 502, an article identification module 503, and a point cloud calculation module 504.
The point cloud segmentation module 501 is configured to segment a point cloud of an uppermost layer of articles from point cloud data of stacked articles, and obtain three-dimensional coordinates of the segmented point cloud;
the coordinate mapping module 502 is configured to map, for the segmented point cloud, a three-dimensional coordinate of the segmented point cloud into a two-dimensional coordinate on a two-dimensional image of an article by using a calibration relationship between a two-dimensional camera and a three-dimensional camera;
an article identifying module 503, configured to extract a corresponding area from the two-dimensional image of the article according to the two-dimensional coordinates, and identify the article in the area;
the point cloud computing module 504 is configured to obtain the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of the identified article, and to compute the center position of the identified article and the grabbing gesture of the end effector from the obtained three-dimensional coordinates, so as to grab the article.
According to one embodiment of the invention, the point cloud segmentation module 501 may also be configured to:
and (3) calculating the distance from each point cloud in the point cloud data of the stacked articles to the plane of the picking station, and dividing the point cloud of the uppermost article layer according to the set point cloud number threshold value and the layer thickness threshold value.
According to another embodiment of the invention, the item identification module 503 may also be configured to:
acquiring a coordinate range of a minimum circumscribed rectangle of a graph formed by the two-dimensional coordinates;
and according to the coordinate range, the region corresponding to the minimum circumscribed rectangle is extracted from the two-dimensional image of the article.
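A minimal sketch of these two operations, treating the "minimum circumscribed rectangle" as the axis-aligned bounding box of the mapped (U, V) coordinates (the function names are invented for illustration):

```python
import numpy as np

def min_bounding_rect(uv):
    """Axis-aligned minimum circumscribed rectangle of a set of (U, V)
    pixel coordinates, returned as (u_min, v_min, u_max, v_max)."""
    u_min, v_min = uv.min(axis=0)
    u_max, v_max = uv.max(axis=0)
    return u_min, v_min, u_max, v_max

def crop(image, rect):
    """Crop the rectangle region out of the 2-D image (rows = V, cols = U)."""
    u_min, v_min, u_max, v_max = rect
    return image[v_min:v_max + 1, u_min:u_max + 1]
```

Restricting article identification to this cropped region keeps the recognizer from seeing lower layers of the stack.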
According to yet another embodiment of the present invention, the sequential gripping device 500 for stacking articles may further comprise a matrix generation module (not shown in the figures) for:
after mapping the three-dimensional coordinates of the segmented point cloud into two-dimensional coordinates on a two-dimensional image of an article, generating a point cloud matrix with the same size as the two-dimensional image of the article;
and, the point cloud computing module 504 may also be configured to:
and acquiring the three-dimensional coordinates of the corresponding point cloud from the point cloud matrix according to the two-dimensional coordinates of the identified object.
According to yet another embodiment of the invention, the matrix generation module may be further adapted to:
expanding the three-dimensional coordinates of the segmented point cloud by using specified data to generate a point cloud matrix with the same size as the two-dimensional image of the object;
and, the point cloud computing module 504 may also be configured to:
according to the two-dimensional coordinates of the identified object, taking out the three-dimensional coordinates stored at the position corresponding to the two-dimensional coordinates from the point cloud matrix;
processing the extracted three-dimensional coordinates, and deleting the specified data from the three-dimensional coordinates to obtain effective three-dimensional coordinates;
and obtaining the three-dimensional coordinates of the corresponding point cloud according to the effective three-dimensional coordinates.
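The patent does not specify how the center position and grabbing gesture are computed from the retrieved points; one common choice (an assumption, not the patent's stated method) is to use the centroid as the position and the normal of the best-fit plane, obtained by SVD, as the approach direction:

```python
import numpy as np

def center_and_normal(points):
    """Estimate a grasp target from an article's 3-D points:
    center = centroid of the points; approach axis = normal of the
    best-fit plane, i.e. the right singular vector belonging to the
    smallest singular value of the centered point set."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center)
    normal = vt[-1]                      # least-variance direction
    if normal[2] < 0:                    # orient the normal upward (assumed +z up)
        normal = -normal
    return center, normal
```

For a suction end effector, the center and normal alone are enough to define the grabbing gesture; a gripper would additionally need an in-plane orientation.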
According to the technical scheme of the embodiment of the invention, the point cloud of the uppermost layer of articles is segmented from the point cloud data of the stacked articles, and the three-dimensional coordinates of the segmented point cloud are obtained; the three-dimensional coordinates are then mapped to two-dimensional coordinates on the two-dimensional image of the article; a corresponding region is cropped out of the two-dimensional image according to those coordinates, and article identification is performed within it; finally, the three-dimensional coordinates of the corresponding point cloud are obtained from the two-dimensional coordinates of the identified article, and the center position of the identified article and the grabbing gesture of the end effector are calculated from them in order to grab the article. In this way, the goods are layered using the 3D camera's point cloud data, the corresponding region of the 2D image is then extracted according to the calibration relation, and identification is performed there, achieving layer-by-layer identification. Articles can thus be identified and located accurately, grabbing is accurate, convenient and efficient, and the probability of collision inside the box during grabbing is greatly reduced.
Fig. 6 illustrates an exemplary system architecture 600 to which the sequential gripping method of stacked articles or the sequential gripping apparatus of stacked articles of an embodiment of the present invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 is used as a medium to provide communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 605 via the network 604 using the terminal devices 601, 602, 603 to receive or send messages, etc. Various communication client applications such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 601, 602, 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 605 may be a server providing various services, such as a background management server (by way of example only) providing support for shopping websites browsed by users with the terminal devices 601, 602, 603. The background management server may analyze and process received data such as a product information query request, and feed back the processing result (e.g., target push information or product information, by way of example only) to the terminal device.
It should be noted that, the method for sequentially grabbing stacked articles provided in the embodiment of the present invention is generally executed by the server 605, and accordingly, the sequential grabbing device for stacked articles is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, there is illustrated a schematic diagram of a computer system 700 suitable for use in implementing a terminal device or server in accordance with an embodiment of the present invention. The terminal device or server shown in fig. 7 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the system 700 are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 701.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described units or modules may also be provided in a processor, for example, as: a processor includes a point cloud segmentation module, a coordinate mapping module, an item identification module, and a point cloud computing module. The names of these units or modules do not limit the units or modules themselves in some cases, and for example, the point cloud segmentation module may also be described as "a module for segmenting a point cloud of an uppermost layer of articles from point cloud data of stacked articles, and acquiring three-dimensional coordinates of the segmented point cloud".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: segment the point cloud of the uppermost layer of articles from the point cloud data of the stacked articles, and acquire the three-dimensional coordinates of the segmented point cloud; map the three-dimensional coordinates of the segmented point cloud into two-dimensional coordinates on a two-dimensional image of the article by using the calibration relation of a two-dimensional camera and a three-dimensional camera; extract a corresponding region from the two-dimensional image of the article according to the two-dimensional coordinates, and identify the article in the region; and acquire the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of the identified article, and calculate the center position of the identified article and the grabbing gesture of the end effector according to the acquired three-dimensional coordinates, so as to grab the article.
In summary, according to the technical scheme of the embodiment of the invention, goods are layered using the 3D camera's point cloud data, the corresponding region of the 2D image is extracted according to the calibration relation, and articles are identified layer by layer. Articles can thus be identified and located accurately, grabbing is accurate, convenient and efficient, and the probability of collision inside the box during grabbing is greatly reduced.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. A method of sequentially gripping stacked articles, comprising:
dividing the point cloud of the uppermost layer of articles from the point cloud data of the stacked articles, and acquiring the three-dimensional coordinates of the divided point cloud;
mapping the three-dimensional coordinates of the segmented point cloud into two-dimensional coordinates on a two-dimensional image of an article by using a calibration relation of a two-dimensional camera and a three-dimensional camera;
extracting a corresponding region from the two-dimensional image of the article according to the two-dimensional coordinates, and identifying the article in the region;
acquiring three-dimensional coordinates of a corresponding point cloud according to the two-dimensional coordinates of the identified object, and calculating the central position of the identified object and the grabbing gesture of the end effector according to the three-dimensional coordinates of the acquired point cloud so as to grab the object;
after mapping the three-dimensional coordinates of the segmented point cloud to two-dimensional coordinates on a two-dimensional image of the item, further comprising: expanding the three-dimensional coordinates of the segmented point cloud by using specified data to generate a point cloud matrix with the same size as the two-dimensional image of the object;
and acquiring the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of the identified object comprises: according to the two-dimensional coordinates of the identified object, taking out the three-dimensional coordinates stored at the position corresponding to the two-dimensional coordinates from the point cloud matrix; processing the extracted three-dimensional coordinates, and deleting the specified data from the three-dimensional coordinates to obtain effective three-dimensional coordinates; and obtaining the three-dimensional coordinates of the corresponding point cloud according to the effective three-dimensional coordinates.
2. The method of claim 1, wherein segmenting the point cloud of the uppermost layer of items from the point cloud data of the stacked items comprises:
and (3) calculating the distance from each point cloud in the point cloud data of the stacked articles to the plane of the picking station, and dividing the point cloud of the uppermost article layer according to the set point cloud number threshold value and the layer thickness threshold value.
3. The method of claim 1, wherein extracting the corresponding region from the two-dimensional image of the article according to the two-dimensional coordinates comprises:
acquiring a coordinate range of a minimum circumscribed rectangle of a graph formed by the two-dimensional coordinates;
and extracting, according to the coordinate range, the region corresponding to the minimum circumscribed rectangle from the two-dimensional image of the article.
4. A sequential gripping device for stacked articles, comprising:
the point cloud segmentation module is used for segmenting the point cloud of the uppermost layer of article from the point cloud data of the stacked articles and acquiring the three-dimensional coordinates of the segmented point cloud;
the coordinate mapping module is used for mapping the three-dimensional coordinates of the segmented point cloud into two-dimensional coordinates on a two-dimensional image of the article by using the calibration relation of the two-dimensional camera and the three-dimensional camera;
the article identification module is used for extracting a corresponding region from the two-dimensional image of the article according to the two-dimensional coordinates and identifying the article in the region;
the point cloud computing module is used for acquiring the three-dimensional coordinates of the corresponding point cloud according to the two-dimensional coordinates of the identified object, and computing the central position of the identified object and the grabbing gesture of the end effector according to the three-dimensional coordinates of the acquired point cloud so as to grab the object;
a matrix generation module for: after mapping the three-dimensional coordinates of the segmented point cloud to two-dimensional coordinates on a two-dimensional image of an article, expanding the three-dimensional coordinates of the segmented point cloud by using specified data to generate a point cloud matrix having the same size as the two-dimensional image of the article;
and, the point cloud computing module is further to: according to the two-dimensional coordinates of the identified object, taking out the three-dimensional coordinates stored at the position corresponding to the two-dimensional coordinates from the point cloud matrix; processing the extracted three-dimensional coordinates, and deleting the specified data from the three-dimensional coordinates to obtain effective three-dimensional coordinates; and obtaining the three-dimensional coordinates of the corresponding point cloud according to the effective three-dimensional coordinates.
5. The apparatus of claim 4, wherein the point cloud segmentation module is further configured to:
and (3) calculating the distance from each point cloud in the point cloud data of the stacked articles to the plane of the picking station, and dividing the point cloud of the uppermost article layer according to the set point cloud number threshold value and the layer thickness threshold value.
6. The apparatus of claim 4, wherein the article identification module is further to:
acquiring a coordinate range of a minimum circumscribed rectangle of a graph formed by the two-dimensional coordinates;
and extracting, according to the coordinate range, the region corresponding to the minimum circumscribed rectangle from the two-dimensional image of the article.
7. An electronic device for sequentially gripping stacked articles, comprising:
one or more processors;
storage means for storing one or more programs,
when executed by the one or more processors, causes the one or more processors to implement the method of any of claims 1-3.
8. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-3.
CN201911301902.8A 2019-12-17 2019-12-17 Sequential gripping method and device for stacked articles Active CN111754515B (en)

Publications (2)

Publication Number Publication Date
CN111754515A (en) 2020-10-09
CN111754515B (en) 2024-03-01

Family

ID=72673005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911301902.8A Active CN111754515B (en) 2019-12-17 2019-12-17 Sequential gripping method and device for stacked articles

Country Status (1)

Country Link
CN (1) CN111754515B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464410B (en) * 2020-12-02 2021-11-16 熵智科技(深圳)有限公司 Method and device for determining workpiece grabbing sequence, computer equipment and medium
CN112509145B (en) * 2020-12-22 2023-12-08 珠海格力智能装备有限公司 Material sorting method and device based on three-dimensional vision
CN112734932A (en) * 2021-01-04 2021-04-30 深圳辰视智能科技有限公司 Strip-shaped object unstacking method, unstacking device and computer-readable storage medium
CN112802106B (en) * 2021-02-05 2024-06-18 梅卡曼德(北京)机器人科技有限公司 Object grabbing method and device
CN112802107A (en) * 2021-02-05 2021-05-14 梅卡曼德(北京)机器人科技有限公司 Robot-based control method and device for clamp group
CN112802093B (en) * 2021-02-05 2023-09-12 梅卡曼德(北京)机器人科技有限公司 Object grabbing method and device
CN112907668B (en) * 2021-02-26 2024-01-30 梅卡曼德(北京)机器人科技有限公司 Method and device for identifying stacking box bodies in stack and robot
CN112818930B (en) * 2021-02-26 2023-12-05 梅卡曼德(北京)机器人科技有限公司 Method for identifying stacking box body and method for determining grabbing pose
CN112837370A (en) * 2021-02-26 2021-05-25 梅卡曼德(北京)机器人科技有限公司 Object stacking judgment method and device based on 3D bounding box and computing equipment
CN112818992B (en) * 2021-02-26 2024-02-09 梅卡曼德(北京)机器人科技有限公司 Identification method for stacking box
CN112565616A (en) * 2021-03-01 2021-03-26 民航成都物流技术有限公司 Target grabbing method, system and device and readable storage medium
CN113284179B (en) * 2021-05-26 2022-09-13 吉林大学 Robot multi-object sorting method based on deep learning
CN113269834A (en) * 2021-07-19 2021-08-17 浙江华睿科技股份有限公司 Logistics container carrying method and device and storage medium
CN113524187B (en) * 2021-07-20 2022-12-13 熵智科技(深圳)有限公司 Method and device for determining workpiece grabbing sequence, computer equipment and medium
CN113538582B (en) * 2021-07-20 2024-06-07 熵智科技(深圳)有限公司 Method, device, computer equipment and medium for determining workpiece grabbing sequence
CN113688704A (en) * 2021-08-13 2021-11-23 北京京东乾石科技有限公司 Item sorting method, item sorting device, electronic device, and computer-readable medium
CN113731860B (en) * 2021-09-03 2023-10-24 西安建筑科技大学 Automatic sorting system and method for piled articles in container
CN115511807B (en) * 2022-09-16 2023-07-28 北京远舢智能科技有限公司 Method and device for determining position and depth of groove

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331894A (en) * 2014-11-19 2015-02-04 山东省科学院自动化研究所 Robot unstacking method based on binocular stereoscopic vision
CN105096300A (en) * 2014-05-08 2015-11-25 株式会社理光 Object detecting method and device
CN106056659A (en) * 2016-05-27 2016-10-26 山东科技大学 Building corner space position automatic extraction method in vehicle laser scanning point cloud
CN108022307A (en) * 2017-11-26 2018-05-11 中国人民解放军陆军装甲兵学院 The adaptive planar layer method of point cloud model is remanufactured based on increasing material
WO2018214084A1 (en) * 2017-05-25 2018-11-29 Bayerische Motoren Werke Aktiengesellschaft Method and apparatus for representing environmental elements, system, and vehicle/robot
CN109033989A (en) * 2018-07-02 2018-12-18 深圳辰视智能科技有限公司 Target identification method, device and storage medium based on three-dimensional point cloud
CN109436820A (en) * 2018-09-17 2019-03-08 武汉库柏特科技有限公司 A kind of the de-stacking method and de-stacking system of stacks of goods
WO2019100647A1 (en) * 2017-11-21 2019-05-31 江南大学 Rgb-d camera-based object symmetry axis detection method
CN109870983A (en) * 2017-12-04 2019-06-11 北京京东尚科信息技术有限公司 Handle the method, apparatus of pallet stacking image and the system for picking of storing in a warehouse
CN110148116A (en) * 2019-04-12 2019-08-20 深圳大学 A kind of forest biomass evaluation method and its system
CN110264416A (en) * 2019-05-28 2019-09-20 深圳大学 Sparse point cloud segmentation method and device
CN110276793A (en) * 2019-06-05 2019-09-24 北京三快在线科技有限公司 A kind of method and device for demarcating three-dimension object

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096300A (en) * 2014-05-08 2015-11-25 株式会社理光 Object detecting method and device
CN104331894A (en) * 2014-11-19 2015-02-04 山东省科学院自动化研究所 Robot unstacking method based on binocular stereoscopic vision
CN106056659A (en) * 2016-05-27 2016-10-26 山东科技大学 Building corner space position automatic extraction method in vehicle laser scanning point cloud
WO2018214084A1 (en) * 2017-05-25 2018-11-29 Bayerische Motoren Werke Aktiengesellschaft Method and apparatus for representing environmental elements, system, and vehicle/robot
WO2019100647A1 (en) * 2017-11-21 2019-05-31 江南大学 Rgb-d camera-based object symmetry axis detection method
CN108022307A (en) * 2017-11-26 2018-05-11 中国人民解放军陆军装甲兵学院 The adaptive planar layer method of point cloud model is remanufactured based on increasing material
CN109867077A (en) * 2017-12-04 2019-06-11 北京京东尚科信息技术有限公司 Warehouse picking system, method, apparatus, order-picking truck and shuttle
CN109870983A (en) * 2017-12-04 2019-06-11 北京京东尚科信息技术有限公司 Method and apparatus for processing pallet stacking images, and warehouse picking system
CN109033989A (en) * 2018-07-02 2018-12-18 深圳辰视智能科技有限公司 Target identification method, device and storage medium based on three-dimensional point cloud
CN109436820A (en) * 2018-09-17 2019-03-08 武汉库柏特科技有限公司 De-stacking method and de-stacking system for stacked goods
CN110148116A (en) * 2019-04-12 2019-08-20 深圳大学 Forest biomass estimation method and system
CN110264416A (en) * 2019-05-28 2019-09-20 深圳大学 Sparse point cloud segmentation method and device
CN110276793A (en) * 2019-06-05 2019-09-24 北京三快在线科技有限公司 Method and device for calibrating a three-dimensional object

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a 3D Laser Scanning System Based on Multi-view Stereo Vision; Xu Yuan; Wang Yazhou; Zhou Jianhua; Bian Yuxin; Computer & Digital Engineering, No. 11; full text *

Also Published As

Publication number Publication date
CN111754515A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN111754515B (en) Sequential gripping method and device for stacked articles
CN109870983B (en) Method and device for processing pallet stack images, and warehouse picking system
CN110632608B (en) Target detection method and device based on laser point cloud
CN113724368B (en) Image acquisition system, three-dimensional reconstruction method, device, equipment and storage medium
CN110370268B (en) Method, device and system for in-box sorting
CN110276829A (en) Three-dimensional representation processed by multi-scale voxel hashing
CN111639147B (en) Map compression method, system and computer readable storage medium
CN114627239B (en) Bounding box generation method, device, equipment and storage medium
CN114638846A (en) Method, apparatus, device and computer-readable medium for determining pickup pose information
CN115713547A (en) Motion trail generation method and device and processing equipment
CN114633979A (en) Goods stacking method and device, electronic equipment and computer readable medium
CN112241977A (en) Depth estimation method and device for feature points
CN114454168A (en) Dynamic vision mechanical arm grabbing method and system and electronic equipment
CN111815683B (en) Target positioning method and device, electronic equipment and computer readable medium
CN111428536B (en) Training method and device for detecting network for detecting article category and position
CN113682828A (en) Method, device and system for stacking objects
CN112907164A (en) Object positioning method and device
CN111488890B (en) Training method and device for object detection model
CN113688704A (en) Item sorting method, item sorting device, electronic device, and computer-readable medium
CN110634159A (en) Target detection method and device
CN113345023A (en) Positioning method and device of box body, medium and electronic equipment
CN113095176A (en) Method and device for background reduction of video data
CN113780269A (en) Image recognition method, device, computer system and readable storage medium
CN114092502A (en) Method and device for processing point cloud data
CN113160324B (en) Bounding box generation method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210305

Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant after: Beijing Jingbangda Trading Co.,Ltd.

Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.

Effective date of registration: 20210305

Address after: Room a1905, 19 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Beijing Jingdong Qianshi Technology Co.,Ltd.

Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant before: Beijing Jingbangda Trading Co.,Ltd.

GR01 Patent grant