Disclosure of Invention
In view of the above, it is an object of the present disclosure to provide an article center positioning method and apparatus, a logistics system, and a storage medium.
According to one aspect of the present disclosure, there is provided an article center positioning method including: respectively obtaining a two-dimensional image and a three-dimensional point cloud image which are acquired by a two-dimensional image pickup device and a three-dimensional image pickup device and correspond to a target object; acquiring a point cloud image corresponding to the two-dimensional image based on a calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device, and the three-dimensional point cloud image; generating a fitting surface of the target object according to the point cloud image; and obtaining three-dimensional center position information and posture information of the target object according to the two-dimensional center position information of the target object in the two-dimensional image, the calibration matrix and the fitting surface of the target object.
Optionally, the obtaining the point cloud image corresponding to the two-dimensional image based on the calibration matrix between the two-dimensional image capturing device and the three-dimensional image capturing device, and the three-dimensional point cloud image includes: acquiring a two-dimensional template image corresponding to the target object; matching the two-dimensional template image with the two-dimensional image, and obtaining an image area of the target object in the two-dimensional image; and processing the three-dimensional point cloud image based on the calibration matrix to obtain an image area point cloud image corresponding to the image area.
Optionally, the processing the three-dimensional point cloud image based on the calibration matrix includes: performing coordinate conversion processing on each point cloud data in the three-dimensional point cloud image based on the calibration matrix to obtain two-dimensional coordinate data corresponding to each point cloud data in the two-dimensional image; establishing a corresponding relation between the point cloud data and the two-dimensional coordinate data; acquiring image area point cloud data corresponding to the two-dimensional coordinate data of each pixel in the image area based on the corresponding relation; and generating the image area point cloud image based on the image area point cloud data.
Optionally, the generating the fitting surface of the target object according to the point cloud image includes: carrying out space plane fitting processing on the point cloud data in the image area point cloud image based on a preset plane fitting algorithm to obtain a fitting surface of the target object corresponding to the image area point cloud image.
Optionally, two-dimensional coordinates of a center point of the target object are obtained in the image area.
Optionally, the obtaining the three-dimensional center position information and the posture information of the target object according to the two-dimensional center position information, the calibration matrix and the fitting surface of the target object includes: calculating a center point three-dimensional coordinate corresponding to the center point two-dimensional coordinate according to the calibration matrix, the center point two-dimensional coordinate and the fitting surface of the target object; and obtaining target object attitude angle information corresponding to the three-dimensional coordinates of the center point according to the fitting surface of the target object.
Optionally, the three-dimensional coordinates of the center point are:

[x, y, z]^T = -M_33^{-1} · M_:4

wherein M_33 represents the matrix consisting of the first 3 columns of the M matrix, and M_:4 represents the column vector in the 4th column of the M matrix; the two-dimensional coordinates of the center point are (U, V), the calibration matrix is P_34, and the fitting surface of the target object is A·x + B·y + C·z + D = 0.

The M matrix is:

M = [ P_1 - U·P_3 ; P_2 - V·P_3 ; (A, B, C, D) ]

where P_1, P_2 and P_3 denote the first, second and third rows of P_34.
According to another aspect of the present disclosure, there is provided an article center positioning device comprising: an image acquisition module for respectively acquiring a two-dimensional image and a three-dimensional point cloud image which are acquired by a two-dimensional image pickup device and a three-dimensional image pickup device and correspond to a target object; a point cloud image generation module for obtaining a point cloud image corresponding to the two-dimensional image based on a calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device, and the three-dimensional point cloud image; a fitting surface generation module for generating a fitting surface of the target object according to the point cloud image; and a position and posture obtaining module for obtaining three-dimensional center position information and posture information of the target object according to the two-dimensional center position information of the target object in the two-dimensional image, the calibration matrix and the fitting surface of the target object.
Optionally, the point cloud image generating module includes: a template acquisition unit for acquiring a two-dimensional template image corresponding to the target object; the area determining unit is used for matching the two-dimensional template image with the two-dimensional image, and obtaining an image area of the target object in the two-dimensional image; and the point cloud processing unit is used for processing the three-dimensional point cloud image based on the calibration matrix and obtaining an image area point cloud image corresponding to the image area.
Optionally, the point cloud processing unit is specifically configured to perform coordinate conversion processing on each piece of point cloud data in the three-dimensional point cloud image based on the calibration matrix, so as to obtain two-dimensional coordinate data corresponding to each piece of point cloud data in the two-dimensional image; establishing a corresponding relation between the point cloud data and the two-dimensional coordinate data; acquiring image area point cloud data corresponding to the two-dimensional coordinate data of each pixel in the image area based on the corresponding relation; and generating the image area point cloud image based on the image area point cloud data.
Optionally, the fitting surface generating module is configured to perform spatial plane fitting processing on the point cloud data in the image area point cloud image based on a preset plane fitting algorithm, so as to obtain a fitting surface of the target object corresponding to the image area point cloud image.
Optionally, the position and posture obtaining module includes: a two-dimensional position obtaining unit for obtaining the two-dimensional coordinates of the center point of the target object in the image area.
Optionally, the position and posture obtaining module further includes: a three-dimensional position calculation unit for calculating a center point three-dimensional coordinate corresponding to the center point two-dimensional coordinate according to the calibration matrix, the center point two-dimensional coordinate and the fitting surface of the target object; and a posture information obtaining unit for obtaining posture angle information of the target object corresponding to the three-dimensional coordinates of the center point according to the fitting surface of the target object.
According to yet another aspect of the present disclosure, there is provided an article centering device comprising: a memory; and a processor coupled to the memory, the processor configured to perform the method as described above based on instructions stored in the memory.
According to yet another aspect of the present disclosure, there is provided a logistics system comprising: a robot and the article center positioning device as described above, wherein the article center positioning device sends three-dimensional center position information and posture information of the target article to the robot.
According to yet another aspect of the present disclosure, there is provided a computer-readable storage medium storing computer instructions for execution by a processor to perform the method as described above.
According to the article center positioning method, the article center positioning device, the logistics system and the storage medium of the present disclosure, a point cloud image corresponding to a two-dimensional image is obtained according to a calibration matrix and a three-dimensional point cloud image, a fitting surface of a target article is generated according to the point cloud image, and three-dimensional center position information and posture information are obtained according to the two-dimensional center position information in the two-dimensional image, the calibration matrix and the fitting surface. This solves the problem of inaccurate center positioning caused by point cloud voids; by exploiting the higher accuracy of determining the target in the two-dimensional image, the center position and posture of an article can be identified accurately, the success rate of picking articles can be improved, and work efficiency and safety can be improved.
Detailed Description
The present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the disclosure are shown. The following description of the technical solutions in the embodiments of the present disclosure will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure. The technical solutions of the present disclosure are described in various aspects below with reference to the drawings and the embodiments.
At present, articles are generally packed in rectangular boxes, and a plurality of boxes of articles of the same kind are densely arranged in a turnover box. The existing method for identifying the center position and posture of an article (its packing box) is as follows: establish a 3D model of the article, and calculate the center position and posture of the picking target by matching the 3D model against the scene point cloud.
This method of identifying the center position and posture of an article has the following drawback: point cloud model matching is significantly affected by point cloud quality. For example, current 3D cameras generally have low resolution and low measurement accuracy, gaps between articles are difficult to resolve, and the packaging material of many articles reflects light easily, so voids (invalid data) tend to appear in the point cloud at reflective positions, which degrades target matching accuracy.
The present disclosure provides an article center positioning method for improving the picking success rate of a robot end effector.
FIG. 1 is a flow chart illustrating one embodiment of an article centering method according to the present disclosure, as shown in FIG. 1, the article centering method comprising steps 101-104.
Step 101, respectively obtaining a two-dimensional image and a three-dimensional point cloud image which are acquired by a two-dimensional image pickup device and a three-dimensional image pickup device and correspond to a target object.
The two-dimensional imaging device may be a 2D camera or the like, and the three-dimensional imaging device may be a 3D camera or the like. The target object can be a packing box of various commodities, and the two-dimensional image and the three-dimensional point cloud image can be scene images containing the target object.
Step 102, obtaining a point cloud image corresponding to the two-dimensional image based on a calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device, and the three-dimensional point cloud image.
The calibration matrix between the two-dimensional image capturing device and the three-dimensional image capturing device can be obtained in a variety of ways. For example, it may be a matrix P_34 that encodes the rotation and translation between the three-dimensional image capturing device coordinate system and the two-dimensional image capturing device coordinate system, together with the internal parameters of the two-dimensional image capturing device. Multiplying a point cloud coordinate obtained by the three-dimensional image capturing device by the calibration matrix P_34 yields the coordinates of the corresponding point in the two-dimensional image acquired by the two-dimensional image capturing device.
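This projection can be sketched in a few lines of NumPy. The P_34 values below are hypothetical (an identity-like matrix chosen only so the example is easy to check), not a real calibration:

```python
import numpy as np

def project_point(P34, xyz):
    """Project a 3D point into 2D pixel coordinates using a 3x4
    calibration matrix (rotation/translation between the 3D and 2D
    cameras folded together with the 2D camera intrinsics)."""
    x_h = np.append(np.asarray(xyz, dtype=float), 1.0)  # homogeneous [x, y, z, 1]
    xp, yp, zp = P34 @ x_h                              # [x', y', z']
    return np.array([xp / zp, yp / zp])                 # (u, v) = (x'/z', y'/z')

# Hypothetical calibration matrix: identity rotation, zero translation.
P34 = np.hstack([np.eye(3), np.zeros((3, 1))])
uv = project_point(P34, [0.2, -0.1, 2.0])
```

With a real calibration, P34 would come from a stereo calibration procedure between the two devices.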
Step 103, generating a fitting surface of the target object according to the point cloud image. The surface fitting of the target object can be performed by adopting various existing methods, and the fitting surface of the generated target object can be a plane, a curved surface and the like.
And 104, obtaining three-dimensional center position information and attitude information of the target object according to the two-dimensional center position information of the target object in the two-dimensional image, the calibration matrix and the fitting surface of the target object.
FIG. 2 is a schematic view of a picking job scenario according to one embodiment of the article center positioning method of the present disclosure. As shown in FIG. 2, after the logistics system issues a picking task, the turnover box is conveyed to a picking station of the logistics system, and the robot grabs the commodity according to the target position and posture identified by the vision system. The vision system includes a 2D camera and a 3D camera. The 2D camera has a high resolution, which facilitates segmenting densely packed articles of the same kind; the point cloud captured by the 3D camera is used to calculate the picking position and posture of the target.
In one embodiment, there are a number of ways to obtain a point cloud image corresponding to a two-dimensional image based on a calibration matrix and a three-dimensional point cloud image. For example, a two-dimensional template image corresponding to the target article is acquired, and the two-dimensional template image may be a rectangular image containing the target article, or the like. Matching the two-dimensional template image with the two-dimensional image, and obtaining an image area of the target object in the two-dimensional image, wherein the image area can be a rectangular frame area which corresponds to the two-dimensional template image and contains the target object, and the like. And processing the three-dimensional point cloud image based on the calibration matrix to obtain an image area point cloud image corresponding to the image area.
Template matching is a pattern recognition method that determines where a pattern of a specific object is located in an image and thereby recognizes the object. The two-dimensional template image is a two-dimensional image corresponding to the target article, and template matching refers to searching for the target article in the two-dimensional image acquired by the two-dimensional camera device based on the two-dimensional template image. The image area of the target article obtained by matching has the same size, orientation and image elements as the two-dimensional template image. The matching process may be performed using a variety of existing algorithms.
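The disclosure leaves the concrete matching algorithm open. As one illustration, a minimal normalized cross-correlation matcher in pure NumPy (image sizes and the toy data are hypothetical; a production system would use an optimized library routine):

```python
import numpy as np

def match_template_ncc(image, template):
    """Slide `template` over `image` and return the (row, col) of the
    best match under normalized cross-correlation, one classic
    template-matching score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_score = (0, 0), -np.inf
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            win = image[r:r + th, c:c + tw]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue
            score = (wz * t).sum() / denom  # in [-1, 1]; 1 = perfect match
            if score > best_score:
                best_score, best = score, (r, c)
    return best

# Toy example: the template is cut directly out of the image.
rng = np.random.default_rng(0)
img = rng.random((12, 12))
tmpl = img[4:8, 5:9].copy()
loc = match_template_ncc(img, tmpl)
```

The returned corner plus the template size gives the rectangular frame containing the target article.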
Coordinate conversion processing is carried out on each piece of point cloud data in the three-dimensional point cloud image based on the calibration matrix to obtain the two-dimensional coordinate data corresponding to each piece of point cloud data in the two-dimensional image, and a corresponding relation between the point cloud data and the two-dimensional coordinate data is established. Based on this corresponding relation, the image area point cloud data corresponding to the two-dimensional coordinate data of each pixel in the image area is acquired, and the image area point cloud image is generated from the image area point cloud data.
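The correspondence-building step can be sketched as follows. The calibration matrix and the rectangle coordinates are hypothetical, and NaN plays the role of the "invalid data" filling mentioned later in the disclosure:

```python
import numpy as np

def area_point_cloud(points, P34, rect):
    """Project every 3D point into the 2D image, keep those whose pixel
    falls inside the matched rectangle, and return a point cloud 'image'
    the size of the rectangle (NaN marks pixels with no corresponding
    valid point)."""
    r0, c0, r1, c1 = rect                      # rectangle in pixel coordinates
    h, w = r1 - r0, c1 - c0
    cloud_img = np.full((h, w, 3), np.nan)     # invalid data by default
    for p in points:
        xp, yp, zp = P34 @ np.append(p, 1.0)
        u, v = xp / zp, yp / zp                # pixel coordinates of the point
        col, row = int(round(u)), int(round(v))
        if r0 <= row < r1 and c0 <= col < c1:  # inside the image area?
            cloud_img[row - r0, col - c0] = p  # pixel -> point correspondence
    return cloud_img

# Hypothetical data: identity-like calibration, one point inside the
# 5x5 rectangle and one point projecting outside it.
P34 = np.hstack([np.eye(3), np.zeros((3, 1))])
pts = np.array([[2.0, 3.0, 1.0], [10.0, 10.0, 1.0]])
cloud_img = area_point_cloud(pts, P34, (0, 0, 5, 5))
```

The non-NaN entries of `cloud_img` are the valid point cloud of the target article.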
Space plane fitting processing is carried out on the three-dimensional point cloud data in the image area point cloud image based on a preset plane fitting algorithm to obtain a fitting surface of the target object corresponding to the image area point cloud image; the fitting surface can be a plane, a curved surface, or the like. The plane fitting algorithm may be any of a variety of existing plane fitting algorithms, such as the random sample consensus algorithm.
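As one concrete instance of the preset plane fitting algorithm, a minimal RANSAC plane fit is sketched below; the iteration count and inlier tolerance are illustrative choices, not values from the disclosure:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=1):
    """Fit A*x + B*y + C*z + D = 0 to a point cloud with RANSAC:
    repeatedly pick 3 points, form the plane through them, and keep
    the plane with the most inliers (distance < tol)."""
    rng = np.random.default_rng(seed)
    best_plane, best_count = None, -1
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)       # candidate normal (A, B, C)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                     # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0                          # D such that n.p0 + D = 0
        dist = np.abs(points @ n + d)        # point-to-plane distances
        count = (dist < tol).sum()
        if count > best_count:
            best_count, best_plane = count, np.append(n, d)
    return best_plane                        # (A, B, C, D) with unit normal

# Toy cloud: a 5x5 grid on the plane z = 2 plus one outlier.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
pts = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 2.0)])
pts = np.vstack([pts, [0.0, 0.0, 10.0]])
A, B, C, D = ransac_plane(pts)
```

RANSAC's inlier voting is what makes the fit robust to the voids and outliers discussed above.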
In one embodiment, the two-dimensional coordinates of the center point of the target object are obtained in the image area, the three-dimensional coordinates of the center point corresponding to the two-dimensional coordinates are calculated according to the calibration matrix, the two-dimensional coordinates of the center point and the fitting surface of the target object, and the posture angle information of the target object corresponding to the three-dimensional coordinates of the center point is obtained according to the fitting surface of the target object. If the fitting surface is a plane, the posture angle information of the fitting surface is the posture angle information of the target object; if the fitting surface is a curved surface, the position corresponding to the three-dimensional coordinates of the center point is determined on the fitting surface, and the posture angle information at that position is taken as the posture angle information of the target object. The posture angle information may be Euler angles, composed of the nutation angle θ, the precession angle ψ, and the rotation angle φ.
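The disclosure does not fix how the posture angles are extracted from the fitting surface. The sketch below shows one common construction, under the assumption that the picking tool's z-axis should align with the plane normal; the Z-Y-X Euler convention and the reference axis are illustrative choices:

```python
import numpy as np

def pose_from_plane(A, B, C):
    """One possible way to turn the fitted plane normal (A, B, C) into
    posture angles: build a rotation whose z-axis is the unit normal,
    then read off Z-Y-X Euler angles (alpha, beta, gamma)."""
    n = np.array([A, B, C], dtype=float)
    z = n / np.linalg.norm(n)              # tool z-axis = plane normal
    ref = np.array([1.0, 0.0, 0.0])
    if abs(z @ ref) > 0.9:                 # avoid a near-parallel reference
        ref = np.array([0.0, 1.0, 0.0])
    x = np.cross(ref, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])         # orthonormal rotation matrix
    beta = -np.arcsin(R[2, 0])             # Z-Y-X Euler angle extraction
    alpha = np.arctan2(R[1, 0], R[0, 0])
    gamma = np.arctan2(R[2, 1], R[2, 2])
    return alpha, beta, gamma

# A horizontal surface (normal straight up) should need no tilt.
alpha, beta, gamma = pose_from_plane(0.0, 0.0, 1.0)
```

The remaining in-plane rotation (alpha) is not constrained by the normal alone; a real system would fix it from the matched rectangle's orientation.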
FIG. 3 is a flow chart of another embodiment of a method of centering an article according to the present disclosure, as shown in FIG. 3:
step 301, starting a 2D camera and a 3D camera to take a picture, and obtaining a scene 2D image and a scene 3D point cloud. The scene 2D image and the scene 3D point cloud both contain target objects, and the target objects can be packing boxes of commodities and the like.
Step 302, matching the 2D template of the target object with the scene 2D image to obtain a rectangular frame containing the target object in the scene 2D image.
For example, the target article to be picked by the robot is matched on the scene 2D image according to the 2D template of the target article, yielding a rectangular frame containing the target article in the scene 2D image. The rectangular frame is characterized by the coordinates of its 4 corner points, [(u0, v0), (u1, v1), (u2, v2), (u3, v3)].
And step 303, acquiring the effective point cloud of the target object according to the rectangular frame on the scene 2D image and the scene 3D point cloud.
For example, the valid point cloud of the target article is acquired from the scene 2D image acquired by the 2D camera, the scene 3D point cloud acquired by the 3D camera, and the calibration matrix P_34 (a 3x4 matrix) between the 3D camera and the 2D camera.
Let V = [x, y, z]^T be a point in the scene 3D point cloud captured by the 3D camera, and let w = [u, v]^T be the coordinates of its corresponding point in the scene 2D image captured by the 2D camera. Then w is calculated as follows:

[x', y', z']^T = P_34 · [x, y, z, 1]^T (1-1)

u = x'/z', v = y'/z' (1-2)
According to formulas (1-1) and (1-2), the two-dimensional coordinates in the scene 2D image of each valid point in the scene 3D point cloud captured by the 3D camera can be calculated, and a point cloud image consistent with the size of the rectangular frame containing the target article can be established (positions without a corresponding valid point are filled with invalid data).
Step 304, generating a fitting plane A·x + B·y + C·z + D = 0 from the valid point cloud, and calculating the posture Euler angles (α, β, γ) of the target article.
For example, step 302 identifies the rectangular frame containing the target article in the scene 2D image, and step 303 creates a point cloud image consistent with the size of that rectangular frame, so the valid point cloud of the target article is available.
Using the valid point cloud, a random sample consensus (RANSAC) algorithm can be used to fit the plane equation A·x + B·y + C·z + D = 0 of the target article (the surface of its packaging), from which the picking posture of the target article can be calculated, characterized by the Euler angles (α, β, γ).
Step 305, calculating the grasping center position (x, y, z) of the target article according to the center (U, V) of the rectangular frame of the target article in the scene 2D image and the plane fitted to the valid point cloud of the target article.
For example, the rectangular frame [(u0, v0), (u1, v1), (u2, v2), (u3, v3)] containing the target article is obtained in the scene 2D image, and the center point (U, V) of the target article in the scene 2D image is calculated. The center point of the target article may be obtained in a number of ways; for example, it may be taken as the geometric center of the rectangular frame containing the target article. According to the calibration matrix P_34 of the 3D camera and the 2D camera and the fitted plane equation A·x + B·y + C·z + D = 0, the center position of the target article can be calculated. The specific calculation process is as follows:
With reference to formula (1-1), formula (1-2) can be rewritten in homogeneous form as follows:

z' · [u, v, 1]^T = [x', y', z']^T (1-3)

The fitting plane equation A·x + B·y + C·z + D = 0 can be written in vector form as follows:

[A, B, C, D] · [x, y, z, 1]^T = 0 (1-4)

The scene 2D image and the scene 3D point cloud satisfy the following equation:

[x', y', z']^T = P_34 · [x, y, z, 1]^T (1-5)

For a fixed pixel (u, v), the solutions of equation (1-5) form a ray in space, so the pixel alone does not determine a unique 3D point. Bringing formula (1-3) into formula (1-5) and eliminating the scale factor z' yields:

(P_1 - u·P_3) · [x, y, z, 1]^T = 0
(P_2 - v·P_3) · [x, y, z, 1]^T = 0 (1-6)

where P_1, P_2 and P_3 denote the first, second and third rows of P_34. Combining (1-4) with (1-6) gives a system of three linear equations:

[ P_1 - u·P_3 ; P_2 - v·P_3 ; (A, B, C, D) ] · [x, y, z, 1]^T = 0 (1-7)

Bringing the center point (U, V) of the target article into formula (1-7) and recording the coefficient matrix as M:

M = [ P_1 - U·P_3 ; P_2 - V·P_3 ; (A, B, C, D) ] (1-8)

The center position of the target article can then be solved from equation (1-7) as follows:

[x, y, z]^T = -M_33^{-1} · M_:4 (1-9)
In formula (1-9), M_33 represents the matrix consisting of the first 3 columns of the M matrix, and M_:4 represents the column vector in the 4th column of the M matrix. The center position [x, y, z]^T of the target article is obtained by formula (1-9), and the posture (α, β, γ) of the target article is calculated from the fitted plane equation A·x + B·y + C·z + D = 0.
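Formulas (1-7) through (1-9) translate directly into code. The identity-like P_34 and the plane z = 2 below are hypothetical values chosen so the result can be checked against the ray intersection by hand:

```python
import numpy as np

def center_from_pixel(P34, U, V, plane):
    """Solve formulas (1-7)/(1-9): stack the two projection constraints
    for pixel (U, V) with the fitted plane row [A, B, C, D] into the
    3x4 matrix M, then [x, y, z]^T = -inv(M_33) @ M_:4."""
    A, B, C, D = plane
    M = np.vstack([
        P34[0] - U * P34[2],       # P_1 - U*P_3
        P34[1] - V * P34[2],       # P_2 - V*P_3
        [A, B, C, D],              # plane constraint row
    ])
    M33, M4 = M[:, :3], M[:, 3]    # first 3 columns, 4th column
    return -np.linalg.inv(M33) @ M4  # formula (1-9)

# Hypothetical setup: identity-like calibration, fitted plane z = 2.
P34 = np.hstack([np.eye(3), np.zeros((3, 1))])
xyz = center_from_pixel(P34, 0.1, -0.05, (0.0, 0.0, 1.0, -2.0))
```

Intersecting the pixel's back-projected ray with the fitted plane is exactly what removes the one remaining degree of freedom noted below.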
Step 306, the obtained center position and posture (x, y, z, α, β, γ) of the target article are sent to the robot, and the robot performs the picking operation.
When the corresponding 3D point (x, y, z) is deduced from the coordinates (u, v) in a two-dimensional image alone, multiple solutions are generated, all of which form a ray; therefore the picking position of the target article cannot be determined directly from the center of the target article found on the two-dimensional image. By fitting a surface equation to the valid point cloud of the target article, a unique spatial location can be determined. Obtaining the three-dimensional center position information and posture information of the target article from its two-dimensional center position information in the two-dimensional image, the calibration matrix and the fitting surface solves the problem of inaccurate center positioning caused by point cloud voids (invalid data).
In one embodiment, as shown in fig. 4, the present disclosure provides an article centering device 40 comprising: an image acquisition module 41, a point cloud image generation module 42, a fitting surface generation module 43, and a position and orientation acquisition module 44.
The image acquisition module 41 acquires a two-dimensional image and a three-dimensional point cloud image corresponding to the target object, acquired by the two-dimensional image pickup device and the three-dimensional image pickup device, respectively. The point cloud image generation module 42 obtains a point cloud image corresponding to the two-dimensional image based on the calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device, and the three-dimensional point cloud image. The fitting surface generation module 43 generates a fitting surface of the target object from the point cloud image. The position and posture obtaining module 44 obtains three-dimensional center position information and posture information of the target article from the two-dimensional center position information of the target article in the two-dimensional image, the calibration matrix, and the fitting surface of the target article.
In one embodiment, as shown in FIG. 5, the point cloud image generation module 42 includes: a template acquisition unit 421, a region determination unit 422, and a point cloud processing unit 423. The template acquisition unit 421 acquires a two-dimensional template image corresponding to a target article. The region determining unit 422 matches the two-dimensional template image with a two-dimensional image in which an image region of the target article is obtained. The point cloud processing unit 423 processes the three-dimensional point cloud image based on the calibration matrix to obtain an image area point cloud image corresponding to the image area.
The point cloud processing unit 423 performs coordinate conversion processing on each point cloud data in the three-dimensional point cloud image based on the calibration matrix, obtains two-dimensional coordinate data corresponding to each point cloud data in the two-dimensional image, and establishes a corresponding relationship between the point cloud data and the two-dimensional coordinate data. The point cloud processing unit 423 obtains image area point cloud data corresponding to the two-dimensional coordinate data of each pixel in the image area based on the correspondence, and generates an image area point cloud map based on the image area point cloud data. The fitting surface generating module 43 performs spatial plane fitting processing on the point cloud data in the image area point cloud image based on a preset plane fitting algorithm, and obtains a fitting surface of the target object corresponding to the image area point cloud image.
In one embodiment, as shown in FIG. 6, the position and posture obtaining module 44 includes: a two-dimensional position obtaining unit 441, a three-dimensional position calculation unit 442, and a posture information obtaining unit 443. The two-dimensional position obtaining unit 441 obtains the center point two-dimensional coordinates of the target article in the image area. The three-dimensional position calculation unit 442 calculates the center point three-dimensional coordinates corresponding to the center point two-dimensional coordinates from the calibration matrix, the center point two-dimensional coordinates, and the fitting surface of the target article. The posture information obtaining unit 443 obtains target article posture angle information corresponding to the three-dimensional coordinates of the center point from the fitting surface of the target article.
For example, the three-dimensional coordinates of the center point are
Wherein M 33 represents a matrix consisting of the first 3 columns of the M matrix; -M ;4 represents a column vector of column 4 of the M matrix; the two-dimensional coordinates of the central point are (U, V), the calibration matrix is P 34, and the fitting surface of the target object is A x+B y+C z+D=0;
M matrix is
In one embodiment, fig. 7 is a block diagram of another embodiment of an article centering device according to the present disclosure. As shown in fig. 7, the apparatus may include a memory 71, a processor 72, a communication interface 73, and a bus 74. The memory 71 is for storing instructions and the processor 72 is coupled to the memory 71, the processor 72 being configured to implement the article centering method described above based on the instructions stored by the memory 71.
The memory 71 may be a high-speed RAM memory, a nonvolatile memory (non-volatile memory), or the like, and may be a memory array. The memory 71 may also be partitioned, and the blocks may be combined into virtual volumes according to certain rules. The processor 72 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the article center positioning methods of the present disclosure.
In one embodiment, the present disclosure provides a logistics system comprising: the robot, the article center positioning device in any of the above embodiments, the article center positioning device transmitting three-dimensional center position information and attitude information of the target article to the robot.
In one embodiment, the present disclosure provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the article centering method of any of the above embodiments.
According to the article center positioning method, the article center positioning device, the logistics system and the storage medium of the present disclosure, a point cloud image corresponding to a two-dimensional image is obtained according to a calibration matrix and a three-dimensional point cloud image, a fitting surface of a target article is generated according to the point cloud image, and three-dimensional center position information and posture information are obtained according to the two-dimensional center position information in the two-dimensional image, the calibration matrix and the fitting surface. This solves the problem of inaccurate center positioning caused by point cloud voids; by exploiting the higher accuracy of determining the target in the two-dimensional image, the center position and posture of an article can be identified accurately, the success rate of picking articles can be improved, and work efficiency and safety can be improved.
The methods and systems of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
The description of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.