CN111666935B - Article center positioning method and device, logistics system and storage medium - Google Patents


Info

Publication number
CN111666935B
CN111666935B (application CN201910167736.0A)
Authority
CN
China
Prior art keywords
dimensional
image
point cloud
target object
article
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910167736.0A
Other languages
Chinese (zh)
Other versions
CN111666935A (en)
Inventor
刘伟峰
万保成
曹凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingbangda Trade Co Ltd
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN201910167736.0A
Publication of CN111666935A
Application granted
Publication of CN111666935B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/083 Shipping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Development Economics (AREA)
  • Multimedia (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The disclosure provides an article center positioning method and device, a logistics system, and a storage medium, relating to the technical field of logistics. The method comprises the following steps: obtaining a point cloud image corresponding to a two-dimensional image based on a calibration matrix between a two-dimensional image pickup device and a three-dimensional image pickup device and a three-dimensional point cloud image; generating a fitting surface of the target object according to the point cloud image; and obtaining three-dimensional center position information and posture information of the target object according to the two-dimensional center position information of the target object in the two-dimensional image, the calibration matrix, and the fitting surface of the target object. The article center positioning method and device, logistics system, and storage medium solve the problem of inaccurate center positioning caused by point cloud holes; by exploiting the high accuracy with which the target can be located in the two-dimensional image, the center position and posture of an article can be identified accurately, which improves the picking success rate as well as working efficiency and safety.

Description

Article center positioning method and device, logistics system and storage medium
Technical Field
The disclosure relates to the technical field of logistics, and in particular to an article center positioning method and device, a logistics system, and a storage medium.
Background
A robot in a logistics system executes article picking tasks: with visual assistance, the robot takes the required quantity of articles out of a turnover box according to the picking task and places them at a designated position. In this process the center position and posture of each article must be identified accurately so that the robot can grasp it reliably. The techniques currently used to identify the center position and posture of a commodity are inadequate; the center position and posture they produce are inaccurate, which reduces the robot's picking success rate.
Disclosure of Invention
In view of the above, an object of the present disclosure is to provide an article center positioning method and device, a logistics system, and a storage medium.
According to one aspect of the present disclosure, there is provided an article center positioning method including: obtaining a two-dimensional image and a three-dimensional point cloud image corresponding to a target object, acquired respectively by a two-dimensional image pickup device and a three-dimensional image pickup device; obtaining a point cloud image corresponding to the two-dimensional image based on a calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device and the three-dimensional point cloud image; generating a fitting surface of the target object according to the point cloud image; and obtaining three-dimensional center position information and posture information of the target object according to the two-dimensional center position information of the target object in the two-dimensional image, the calibration matrix, and the fitting surface of the target object.
Optionally, obtaining the point cloud image corresponding to the two-dimensional image based on the calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device and the three-dimensional point cloud image includes: acquiring a two-dimensional template image corresponding to the target object; matching the two-dimensional template image with the two-dimensional image to obtain an image area of the target object in the two-dimensional image; and processing the three-dimensional point cloud image based on the calibration matrix to obtain an image area point cloud image corresponding to the image area.
Optionally, the processing the three-dimensional point cloud image based on the calibration matrix includes: performing coordinate conversion processing on each point cloud data in the three-dimensional point cloud image based on the calibration matrix to obtain two-dimensional coordinate data corresponding to each point cloud data in the two-dimensional image; establishing a corresponding relation between the point cloud data and the two-dimensional coordinate data; acquiring image area point cloud data corresponding to the two-dimensional coordinate data of each pixel in the image area based on the corresponding relation; and generating the image area point cloud image based on the image area point cloud data.
Optionally, the fitting processing on the point cloud image to generate a fitting surface of the target object includes: and carrying out space plane fitting processing on the point cloud data in the image area point cloud image based on a preset plane fitting algorithm to obtain a fitting surface of the target object corresponding to the image area point cloud image.
Optionally, two-dimensional coordinates of a center point of the target object are obtained in the image area.
Optionally, the obtaining the three-dimensional center position information and the posture information of the target object according to the two-dimensional center position information, the calibration matrix and the fitting surface of the target object includes: calculating a center point three-dimensional coordinate corresponding to the center point two-dimensional coordinate according to the calibration matrix, the center point two-dimensional coordinate and the fitting surface of the target object; and obtaining target object attitude angle information corresponding to the three-dimensional coordinates of the center point according to the fitting surface of the target object.
Optionally, the three-dimensional coordinates of the center point are

[x, y, z]^T = -M33^{-1} · M:4

wherein M33 represents the matrix consisting of the first 3 columns of the M matrix, and M:4 represents the column vector of column 4 of the M matrix; the two-dimensional coordinates of the center point are (U, V), the calibration matrix is P34 = [p_ij] (3 rows, 4 columns), and the fitting surface of the target object is A·x + B·y + C·z + D = 0;

the M matrix is

M = [ p11 - U·p31   p12 - U·p32   p13 - U·p33   p14 - U·p34
      p21 - V·p31   p22 - V·p32   p23 - V·p33   p24 - V·p34
      A             B             C             D           ]
According to another aspect of the present disclosure, there is provided an article center positioning device comprising: an image acquisition module for obtaining a two-dimensional image and a three-dimensional point cloud image corresponding to the target object, acquired respectively by the two-dimensional image pickup device and the three-dimensional image pickup device; a point cloud image generation module for obtaining a point cloud image corresponding to the two-dimensional image based on the calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device and the three-dimensional point cloud image; a fitting surface generation module for generating a fitting surface of the target object according to the point cloud image; and a position and posture obtaining module for obtaining three-dimensional center position information and posture information of the target object according to the two-dimensional center position information of the target object in the two-dimensional image, the calibration matrix, and the fitting surface of the target object.
Optionally, the point cloud image generating module includes: a template acquisition unit for acquiring a two-dimensional template image corresponding to the target object; the area determining unit is used for matching the two-dimensional template image with the two-dimensional image, and obtaining an image area of the target object in the two-dimensional image; and the point cloud processing unit is used for processing the three-dimensional point cloud image based on the calibration matrix and obtaining an image area point cloud image corresponding to the image area.
Optionally, the point cloud processing unit is specifically configured to perform coordinate conversion processing on each piece of point cloud data in the three-dimensional point cloud image based on the calibration matrix, so as to obtain two-dimensional coordinate data corresponding to each piece of point cloud data in the two-dimensional image; establishing a corresponding relation between the point cloud data and the two-dimensional coordinate data; acquiring image area point cloud data corresponding to the two-dimensional coordinate data of each pixel in the image area based on the corresponding relation; and generating the image area point cloud image based on the image area point cloud data.
Optionally, the fitting surface generating module is configured to perform spatial plane fitting processing on the point cloud data in the image area point cloud image based on a preset plane fitting algorithm, so as to obtain a fitting surface of the target object corresponding to the image area point cloud image.
Optionally, the position and posture obtaining module includes: a two-dimensional position obtaining unit for obtaining the two-dimensional coordinates of the center point of the target object in the image area.
Optionally, the position and posture obtaining module further includes: the three-dimensional position calculation unit is used for calculating a center point three-dimensional coordinate corresponding to the center point two-dimensional coordinate according to the calibration matrix, the center point two-dimensional coordinate and the fitting surface of the target object; and the gesture information obtaining unit is used for obtaining gesture angle information of the target object corresponding to the three-dimensional coordinates of the central point according to the fitting surface of the target object.
According to yet another aspect of the present disclosure, there is provided an article centering device comprising: a memory; and a processor coupled to the memory, the processor configured to perform the method as described above based on instructions stored in the memory.
According to yet another aspect of the present disclosure, there is provided a logistics system comprising: a robot, an article center positioning device as described above; the article center positioning device sends three-dimensional center position information and posture information of the target article to the robot.
According to yet another aspect of the present disclosure, there is provided a computer-readable storage medium storing computer instructions for execution by a processor to perform the method as described above.
According to the article center positioning method and device, the logistics system, and the storage medium described above, a point cloud image corresponding to the two-dimensional image is obtained from the calibration matrix and the three-dimensional point cloud image, a fitting surface of the target object is generated from the point cloud image, and three-dimensional center position information and posture information are obtained from the two-dimensional center position information in the two-dimensional image, the calibration matrix, and the fitting surface. This solves the problem of inaccurate center positioning caused by point cloud holes; by exploiting the high accuracy with which the target can be located in the two-dimensional image, the center position and posture of an article can be identified accurately, which improves the picking success rate as well as working efficiency and safety.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, a brief description will be given below of the drawings required for the embodiments or the description of the prior art, it being obvious that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings may be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flow diagram of one embodiment of a method of centering an article according to the present disclosure;
FIG. 2 is a schematic view of a picking job scenario according to one embodiment of the item centering method of the present disclosure;
FIG. 3 is a flow diagram of another embodiment of an article centering method according to the present disclosure;
FIG. 4 is a schematic block diagram of one embodiment of an article centering device according to the present disclosure;
FIG. 5 is a block diagram of a point cloud image generation module in one embodiment of an item centering device according to the present disclosure;
FIG. 6 is a block diagram of a position and orientation acquisition module in one embodiment of an article centering device according to the present disclosure;
Fig. 7 is a block schematic diagram of another embodiment of an article centering device according to the present disclosure.
Detailed Description
The present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the disclosure are shown. The following description of the technical solutions in the embodiments of the present disclosure will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure. The technical solutions of the present disclosure are described in various aspects below with reference to the drawings and the embodiments.
At present, articles are generally packed in rectangular boxes, and many boxes of the same kind of article are densely arranged in a turnover box. The existing method for identifying the center position and posture of an article (its packing box) is as follows: build a 3D model of the article, and calculate the center position and posture of the picking target by matching the 3D model against the scene point cloud.
This method of identifying the center position and posture of an article has the following drawback: matching against a point cloud model is strongly affected by point cloud quality. For example, current 3D cameras generally have low resolution and low measurement accuracy, so the gaps between articles are difficult to resolve; moreover, the packaging materials of many articles are reflective, so holes (invalid data) easily appear in the point cloud at reflective spots, which degrades the target matching accuracy.
The present disclosure provides an article center positioning method for improving the picking success rate of the robot.
FIG. 1 is a flow chart illustrating one embodiment of an article centering method according to the present disclosure, as shown in FIG. 1, the article centering method comprising steps 101-104.
Step 101, respectively obtaining a two-dimensional image and a three-dimensional point cloud image which are acquired by a two-dimensional image pickup device and a three-dimensional image pickup device and correspond to a target object.
The two-dimensional imaging device may be a 2D camera or the like, and the three-dimensional imaging device may be a 3D camera or the like. The target object can be a packing box of various commodities, and the two-dimensional image and the three-dimensional point cloud image can be scene images containing the target object.
Step 102, obtaining a point cloud image corresponding to the two-dimensional image based on a calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device and the three-dimensional point cloud image.
The calibration matrix between the two-dimensional image pickup device and the three-dimensional image pickup device can be obtained in a variety of ways. For example, it may be a 3x4 matrix P34 that encodes the rotation and translation between the three-dimensional image pickup device coordinate system and the two-dimensional image pickup device coordinate system, together with the internal parameters of the two-dimensional image pickup device. Multiplying the homogeneous point cloud coordinates obtained by the three-dimensional image pickup device by the calibration matrix P34 yields the coordinates of the corresponding point in the two-dimensional image acquired by the two-dimensional image pickup device.
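As a concrete sketch of this projection step, the snippet below multiplies a homogeneous 3D point by a 3x4 calibration matrix and divides out the scale factor. The function name and the toy calibration values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def project_point(P, point_3d):
    """Project a 3D camera-frame point into the 2D image via a 3x4 calibration matrix."""
    v = np.append(np.asarray(point_3d, dtype=float), 1.0)  # homogeneous [x, y, z, 1]
    us, vs, s = P @ v                                      # scaled pixel coordinates
    return us / s, vs / s                                  # divide out the scale factor s

# Toy calibration matrix: identity rotation, zero translation,
# 100 px focal length, principal point (50, 50) -- illustrative values only.
P = np.array([[100.0, 0.0, 50.0, 0.0],
              [0.0, 100.0, 50.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

u, v = project_point(P, [0.1, 0.2, 1.0])
print(u, v)  # pixel coordinates of the projected point
```

Applying such a mapping to every point of the scene 3D point cloud yields the point-to-pixel correspondence that step 102 relies on.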
Step 103, generating a fitting surface of the target object according to the point cloud image. The surface fitting can be performed using various existing methods, and the generated fitting surface may be a plane, a curved surface, etc.
Step 104, obtaining three-dimensional center position information and posture information of the target object according to the two-dimensional center position information of the target object in the two-dimensional image, the calibration matrix, and the fitting surface of the target object.
FIG. 2 is a schematic view of a picking job scenario according to one embodiment of the article center positioning method of the present disclosure. As shown in FIG. 2, after the logistics system issues a picking task, the turnover box is moved to a picking station, and the robot grasps the commodity according to the target position and posture identified by the vision system. The vision system includes a 2D camera and a 3D camera: the high resolution of the 2D camera facilitates segmenting densely packed articles of the same kind, while the point cloud captured by the 3D camera is used to calculate the picking position and posture of the target.
In one embodiment, there are a number of ways to obtain a point cloud image corresponding to a two-dimensional image based on a calibration matrix and a three-dimensional point cloud image. For example, a two-dimensional template image corresponding to the target article is acquired, and the two-dimensional template image may be a rectangular image containing the target article, or the like. Matching the two-dimensional template image with the two-dimensional image, and obtaining an image area of the target object in the two-dimensional image, wherein the image area can be a rectangular frame area which corresponds to the two-dimensional template image and contains the target object, and the like. And processing the three-dimensional point cloud image based on the calibration matrix to obtain an image area point cloud image corresponding to the image area.
Template matching is a pattern recognition method that determines where the pattern of a specific object is located in an image, thereby recognizing the object. The two-dimensional template image is a two-dimensional image corresponding to the target object, and template matching means searching for the target object in the two-dimensional image acquired by the two-dimensional image pickup device based on this template. The image area of the target object obtained by matching has the same size, orientation, and image elements as the two-dimensional template image. The matching may be performed using any of a variety of existing algorithms.
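A minimal template-matching sketch is given below, using a brute-force sum-of-squared-differences search; in practice a library routine such as OpenCV's matchTemplate would normally be used. The function name and the synthetic scene are illustrative assumptions:

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` by minimizing the sum of squared differences.

    Returns the (row, col) of the best-matching top-left corner, i.e. the
    image area of the target object in the scene image.
    """
    H, W = image.shape
    h, w = template.shape
    best, best_pos = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = np.sum((image[r:r + h, c:c + w] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Synthetic scene: a bright 3x3 "package" placed with its corner at row 4, col 5.
scene = np.zeros((10, 12))
scene[4:7, 5:8] = 1.0
tmpl = np.ones((3, 3))
print(match_template(scene, tmpl))  # (4, 5)
```

The returned corner plus the template size characterizes the rectangular image area of the target object used in the later steps.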
And carrying out coordinate conversion processing on each point cloud data in the three-dimensional point cloud image based on the calibration matrix, obtaining two-dimensional coordinate data corresponding to each point cloud data in the two-dimensional image, and establishing a corresponding relation between the point cloud data and the two-dimensional coordinate data. Pixels in the image area are obtained, image area point cloud data corresponding to two-dimensional coordinate data of each pixel in the image area are obtained based on the corresponding relation, and an image area point cloud image is generated based on the image area point cloud data.
Spatial plane fitting is then performed on the three-dimensional point cloud data in the image area point cloud image using a preset fitting algorithm, obtaining the fitting surface of the target object corresponding to the image area point cloud image; the fitting surface may be a plane, a curved surface, etc. The fitting algorithm may be any of a variety of existing algorithms, such as the random sample consensus (RANSAC) algorithm.
In one embodiment, the two-dimensional coordinates of the center point of the target object are obtained in the image area; the three-dimensional coordinates of the center point corresponding to these two-dimensional coordinates are calculated from the calibration matrix, the two-dimensional coordinates of the center point, and the fitting surface of the target object; and the posture angle information of the target object corresponding to the three-dimensional coordinates of the center point is obtained from the fitting surface. If the fitting surface is a plane, its attitude angle information is the attitude angle information of the target object; if the fitting surface is a curved surface, the position corresponding to the three-dimensional coordinates of the center point is determined on the fitting surface, and the attitude angle information at that position is taken as the attitude angle information of the target object. The attitude angle information may be expressed as Euler angles, composed of a nutation angle θ, a precession angle ψ, and a rotation angle φ.
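For a planar fitting surface, the attitude can be derived from the plane normal. The sketch below is a simplified illustration of that idea, not the patent's exact Euler-angle convention: it returns only the two tilt angles of the normal relative to the camera Z axis, since the rotation about the normal is not observable from the plane equation alone. All names are assumptions:

```python
import math

def plane_attitude(a, b, c):
    """Tilt angles (radians) of the fitted plane a*x + b*y + c*z + d = 0.

    alpha rotates about the X axis, beta about the Y axis, so that the
    camera Z axis is carried onto the (normalized) plane normal (a, b, c).
    """
    norm = math.sqrt(a * a + b * b + c * c)
    nx, ny, nz = a / norm, b / norm, c / norm
    alpha = math.atan2(ny, nz)                    # tilt about X
    beta = math.atan2(-nx, math.hypot(ny, nz))    # tilt about Y
    return alpha, beta

# A plane facing straight back at the camera (normal along +Z) has zero tilt.
print(plane_attitude(0.0, 0.0, 1.0))
```

A full grasp pose would combine these tilt angles with an in-plane rotation chosen from the rectangular box orientation in the 2D image.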
FIG. 3 is a flow chart of another embodiment of a method of centering an article according to the present disclosure, as shown in FIG. 3:
Step 301, starting the 2D camera and the 3D camera to take a picture, obtaining a scene 2D image and a scene 3D point cloud. Both contain the target article, which may be the packing box of a commodity, etc.
Step 302, matching the 2D template of the target object with the scene 2D image to obtain a rectangular frame containing the target object in the scene 2D image.
For example, the target article to be picked by the robot is matched on the scene 2D image according to its 2D template, yielding a rectangular box containing the target article in the scene 2D image. The rectangular box is characterized by the coordinates of its 4 corner points, [(u0, v0), (u1, v1), (u2, v2), (u3, v3)].
And step 303, acquiring the effective point cloud of the target object according to the rectangular frame on the scene 2D image and the scene 3D point cloud.
For example, the effective point cloud of the target article is obtained from the scene 2D image acquired by the 2D camera, the scene 3D point cloud acquired by the 3D camera, and the calibration matrix P34 (a 3x4 matrix) between the 3D camera and the 2D camera.
Let the point V = [x, y, z]^T in the scene 3D point cloud captured by the 3D camera correspond to the coordinates w = [u, v]^T in the scene 2D image captured by the 2D camera; then w is calculated as follows:

s · [u, v, 1]^T = P34 · [x, y, z, 1]^T    (1-1)

u = (p11·x + p12·y + p13·z + p14) / (p31·x + p32·y + p33·z + p34)
v = (p21·x + p22·y + p23·z + p24) / (p31·x + p32·y + p33·z + p34)    (1-2)

where s is the homogeneous scale factor and p_ij is the element in row i, column j of P34.
According to formulas (1-1) and (1-2), the two-dimensional coordinates in the scene 2D image of every effective point in the scene 3D point cloud can be calculated, and a point cloud image with the same size as the rectangular box containing the target article can be built (positions without a corresponding effective point are filled with invalid data).
Step 304, generating a fitting plane a·x + b·y + c·z + d = 0 from the effective point cloud, and calculating the attitude Euler angles (α, β, γ) of the target article.
For example, in step 302, a rectangular box containing the target item in the scene 2D image may be identified, and in step 303, a point cloud image is created that is consistent with the size of the rectangular box containing the target item in the scene 2D image, so that an effective point cloud for the target item may be obtained.
Using the effective point cloud, a random sample consensus (RANSAC) algorithm can be used to fit the plane equation a·x + b·y + c·z + d = 0 of the target article (the surface of its packaging), from which the picking posture of the target article can be calculated, characterized by the Euler angles (α, β, γ).
Step 305, calculating the grasping center position (x, y, z) of the target article from the center (U, V) of its rectangular box in the scene 2D image and the plane fitted to its effective point cloud.
For example, the rectangular box [(u0, v0), (u1, v1), (u2, v2), (u3, v3)] containing the target article is obtained in the scene 2D image, and the center point (U, V) of the target article in the scene 2D image is calculated. The center point may be obtained in a number of ways; for example, it may be taken as the geometric center of the rectangular box. From the calibration matrix P34 between the 3D camera and the 2D camera and the fitting plane equation a·x + b·y + c·z + d = 0, the center position of the target article can then be calculated. The specific calculation is as follows:
Referring to formulas (1-1) and (1-2), eliminating the denominator turns the projection of a pixel (u, v) into two linear equations in (x, y, z):

(p11 - u·p31)·x + (p12 - u·p32)·y + (p13 - u·p33)·z + (p14 - u·p34) = 0
(p21 - v·p31)·x + (p22 - v·p32)·y + (p23 - v·p33)·z + (p24 - v·p34) = 0    (1-3)

The fitting plane equation a·x + b·y + c·z + d = 0 can likewise be written in homogeneous form:

[a, b, c, d] · [x, y, z, 1]^T = 0    (1-4)

Substituting the center point (U, V) of the target article for (u, v) in (1-3) and stacking the result with (1-4) yields the homogeneous linear system:

M · [x, y, z, 1]^T = 0    (1-5)

where the matrix M is:

M = [ p11 - U·p31   p12 - U·p32   p13 - U·p33   p14 - U·p34
      p21 - V·p31   p22 - V·p32   p23 - V·p33   p24 - V·p34
      a             b             c             d           ]    (1-6)

Writing M33 for the matrix consisting of the first 3 columns of M, and M:4 for the column vector of column 4 of M, the center position of the target article is solved from (1-5) as:

[x, y, z]^T = -M33^{-1} · M:4    (1-7)

The central position [x, y, z] of the target article is obtained from formula (1-7), and its posture (α, β, γ) is calculated from the fitting plane equation a·x + b·y + c·z + d = 0.
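The closed-form solution above, which combines the back-projected ray of the center pixel with the fitted plane into one homogeneous system M · [x, y, z, 1]^T = 0, can be sketched as follows (the helper name and toy numbers are assumptions, not from the patent):

```python
import numpy as np

def center_from_pixel(P, uv, plane):
    """Solve the 3D center from the 2D center pixel and the fitted plane.

    Rows 1-2 of M constrain [x, y, z] to the back-projected ray of pixel
    (U, V); row 3 is the plane. Then [x, y, z]^T = -inv(M33) @ M[:, 3].
    """
    U, V = uv
    a, b, c, d = plane
    M = np.vstack([P[0] - U * P[2],
                   P[1] - V * P[2],
                   [a, b, c, d]])
    return -np.linalg.inv(M[:, :3]) @ M[:, 3]

# Toy calibration matrix: 100 px focal length, principal point (50, 50).
P = np.array([[100.0, 0.0, 50.0, 0.0],
              [0.0, 100.0, 50.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
# Fitted plane z = 2 in camera coordinates: 0*x + 0*y + 1*z - 2 = 0.
x, y, z = center_from_pixel(P, (60.0, 70.0), (0.0, 0.0, 1.0, -2.0))
print(x, y, z)  # 3D grasp center on the plane
```

Projecting the returned point back through P reproduces the pixel (60, 70), confirming that the solution lies on both the ray and the plane.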
Step 306, the obtained center position and posture (x, y, z, α, β, γ) of the target article are sent to the robot, and the robot performs the picking operation.
When the corresponding 3D point (x, y, z) is deduced from coordinates (u, v) in the two-dimensional image alone, there are multiple solutions, which together form a ray; the picking position of the target article therefore cannot be determined directly from the center found on the two-dimensional image. By fitting a surface equation to the effective point cloud of the target article, a unique spatial position can be determined: the three-dimensional center position information and posture information of the target article are obtained from its two-dimensional center position in the two-dimensional image, the calibration matrix, and the fitting surface, which solves the problem of inaccurate center positioning caused by point cloud holes (invalid data).
In one embodiment, as shown in fig. 4, the present disclosure provides an article centering device 40 comprising: an image acquisition module 41, a point cloud image generation module 42, a fitting surface generation module 43, and a position and orientation acquisition module 44.
The image acquisition module 41 acquires two-dimensional images and three-dimensional point cloud images corresponding to the target object acquired by the two-dimensional image pickup device and the three-dimensional image pickup device, respectively. The point cloud image generation module 42 obtains a point cloud image corresponding to the two-dimensional image based on the calibration matrix between the two-dimensional image pickup device and the three-dimensional point cloud image. The fitting surface generation module 43 generates a fitting surface of the target object from the point cloud image. The position and orientation acquisition module 44 acquires three-dimensional center position information and orientation information of the target item from the two-dimensional center position information of the target item in the two-dimensional image, the calibration matrix, and the fitting surface of the target item.
In one embodiment, as shown in FIG. 5, the point cloud image generation module 42 includes: a template acquisition unit 421, a region determination unit 422, and a point cloud processing unit 423. The template acquisition unit 421 acquires a two-dimensional template image corresponding to a target article. The region determining unit 422 matches the two-dimensional template image with a two-dimensional image in which an image region of the target article is obtained. The point cloud processing unit 423 processes the three-dimensional point cloud image based on the calibration matrix to obtain an image area point cloud image corresponding to the image area.
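The region determining unit's matching step could be sketched as an exhaustive sum-of-squared-differences scan. The disclosure does not fix a matching algorithm, so this sketch and the function name `match_template` are illustrative only; in practice a library routine such as OpenCV's matchTemplate would normally be used.

```python
import numpy as np

def match_template(image, template):
    """Slide the 2D template over the image and return the top-left corner
    of the best match (minimum sum of squared differences), plus the
    matched image region as (row0, row1, col0, col1)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = np.sum((image[y:y + th, x:x + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    y, x = best_pos
    return best_pos, (y, y + th, x, x + tw)
```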
The point cloud processing unit 423 performs coordinate conversion processing on each point cloud data in the three-dimensional point cloud image based on the calibration matrix, obtains two-dimensional coordinate data corresponding to each point cloud data in the two-dimensional image, and establishes a corresponding relationship between the point cloud data and the two-dimensional coordinate data. The point cloud processing unit 423 obtains image area point cloud data corresponding to the two-dimensional coordinate data of each pixel in the image area based on the correspondence, and generates an image area point cloud map based on the image area point cloud data. The fitting surface generating module 43 performs spatial plane fitting processing on the point cloud data in the image area point cloud image based on a preset plane fitting algorithm, and obtains a fitting surface of the target object corresponding to the image area point cloud image.
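A compact sketch of these three steps, under assumed helper names; the SVD least-squares fit is only one choice of "preset plane fitting algorithm" (an outlier-robust fit such as RANSAC would be another, and may be preferable on noisy depth data).

```python
import numpy as np

def project_points(P, cloud):
    """Coordinate-convert each 3D point through the 3x4 calibration
    matrix P to its 2D pixel, preserving the point-to-pixel order."""
    homog = np.hstack([cloud, np.ones((len(cloud), 1))])
    uvw = (P @ homog.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def region_cloud(P, cloud, region):
    """Keep only points whose projected pixel falls inside the matched
    image region (rows r0..r1, cols c0..c1): the image area point cloud."""
    r0, r1, c0, c1 = region
    uv = project_points(P, cloud)
    mask = ((uv[:, 1] >= r0) & (uv[:, 1] < r1) &
            (uv[:, 0] >= c0) & (uv[:, 0] < c1))
    return cloud[mask]

def fit_plane(points):
    """Least-squares plane fit: the normal is the singular vector of the
    centered points with the smallest singular value; returns (A, B, C, D)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return (*normal, -normal @ centroid)
```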
In one embodiment, as shown in FIG. 6, the position and attitude acquisition module 44 includes: a two-dimensional position obtaining unit 441, a three-dimensional position calculating unit 442, and a posture information obtaining unit 443. The two-dimensional position obtaining unit 441 obtains the two-dimensional coordinates of the center point of the target article in the image area. The three-dimensional position calculating unit 442 calculates the three-dimensional coordinates of the center point corresponding to its two-dimensional coordinates from the calibration matrix, the two-dimensional coordinates of the center point, and the fitting surface of the target object. The posture information obtaining unit 443 obtains posture angle information of the target article corresponding to the three-dimensional coordinates of the center point from the fitting surface of the target article.
For example, the three-dimensional coordinates of the center point are

[x, y, z]^T = −M33^(−1)·M:4

wherein M33 represents the matrix consisting of the first 3 columns of the M matrix, and M:4 represents the column vector of the 4th column of the M matrix; the two-dimensional coordinates of the center point are (U, V), the calibration matrix is P34 with elements pij, and the fitting surface of the target object is A·x + B·y + C·z + D = 0;

the M matrix is

M = [ p11 − U·p31   p12 − U·p32   p13 − U·p33   p14 − U·p34 ]
    [ p21 − V·p31   p22 − V·p32   p23 − V·p33   p24 − V·p34 ]
    [ A             B             C             D            ]
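The posture computation of unit 443 could be sketched as follows. Note that a plane constrains only two angles, so the rotation γ about the normal is not determined by the fitting surface alone, and the angle convention used here is an assumption; the disclosure does not specify one.

```python
import numpy as np

def plane_attitude(plane):
    """Illustrative attitude angles from the fitted plane's unit normal
    (A, B, C): alpha and beta tilt the z-axis onto the normal; gamma,
    the rotation about the normal, is unconstrained and set to 0."""
    n = np.asarray(plane[:3], dtype=float)
    n /= np.linalg.norm(n)
    alpha = np.arctan2(n[1], n[2])                    # tilt about the x-axis
    beta = np.arctan2(-n[0], np.hypot(n[1], n[2]))    # tilt about the y-axis
    return alpha, beta, 0.0
```

A horizontal plane such as z = 2 (normal (0, 0, 1)) yields zero tilt, as expected for an article lying flat.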
In one embodiment, fig. 7 is a block diagram of another embodiment of an article centering device according to the present disclosure. As shown in fig. 7, the apparatus may include a memory 71, a processor 72, a communication interface 73, and a bus 74. The memory 71 is for storing instructions and the processor 72 is coupled to the memory 71, the processor 72 being configured to implement the article centering method described above based on the instructions stored by the memory 71.
The memory 71 may be a high-speed RAM, a non-volatile memory, or the like, and may also be a memory array. The memory 71 may also be partitioned, and the blocks may be combined into virtual volumes according to certain rules. The processor 72 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the article center positioning methods of the present disclosure.
In one embodiment, the present disclosure provides a logistics system comprising: the robot, the article center positioning device in any of the above embodiments, the article center positioning device transmitting three-dimensional center position information and attitude information of the target article to the robot.
In one embodiment, the present disclosure provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the article centering method of any of the above embodiments.
According to the article center positioning method and device, logistics system, and storage medium described above, a point cloud image corresponding to the two-dimensional image is obtained from the calibration matrix and the three-dimensional point cloud image, a fitting surface of the target article is generated from that point cloud image, and three-dimensional center position information and posture information are obtained from the two-dimensional center position information in the two-dimensional image, the calibration matrix, and the fitting surface. This solves the problem of inaccurate center positioning caused by point cloud holes; by exploiting the higher accuracy of target detection in the two-dimensional image, the center position and posture of the article can be identified accurately, improving the picking success rate as well as work efficiency and safety.
The methods and systems of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
The description of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (9)

1. A method of centering an article, comprising:
Respectively obtaining a two-dimensional image and a three-dimensional point cloud image which are acquired by a two-dimensional image pick-up device and a three-dimensional image pick-up device and correspond to a target object;
obtaining a point cloud image corresponding to the two-dimensional image based on a calibration matrix between the two-dimensional image capturing device and the three-dimensional point cloud image, including:
Acquiring a two-dimensional template image corresponding to the target object; matching the two-dimensional template image with the two-dimensional image, and obtaining an image area of the target object in the two-dimensional image; processing the three-dimensional point cloud image based on the calibration matrix to obtain an image area point cloud image corresponding to the image area;
Generating a fitting surface of the target object according to the image area point cloud picture;
Calculating a center point three-dimensional coordinate corresponding to the center point two-dimensional coordinate according to the calibration matrix, the center point two-dimensional coordinate of the target object in the two-dimensional image and the fitting surface of the target object;
and obtaining target object attitude angle information corresponding to the three-dimensional coordinates of the central point according to the fitting surface.
2. The method of claim 1, the processing the three-dimensional point cloud image based on the calibration matrix comprising:
Performing coordinate conversion processing on each point cloud data in the three-dimensional point cloud image based on the calibration matrix to obtain two-dimensional coordinate data corresponding to each point cloud data in the two-dimensional image;
Establishing a corresponding relation between the point cloud data and the two-dimensional coordinate data;
Acquiring image area point cloud data corresponding to the two-dimensional coordinate data of each pixel in the image area based on the corresponding relation;
and generating the image area point cloud image based on the image area point cloud data.
3. The method of claim 2, the generating a fitted surface of the target object from the image area point cloud map comprising:
And carrying out space plane fitting processing on the point cloud data in the image area point cloud image based on a preset plane fitting algorithm to obtain a fitting surface of the target object corresponding to the image area point cloud image.
4. A method as in claim 3, further comprising:
a two-dimensional coordinate of a center point of the target object is obtained in the image area.
5. The method of claim 4, wherein,
The three-dimensional coordinates of the central point are

[x, y, z]^T = −M33^(−1)·M:4

wherein M33 represents a matrix consisting of the first 3 columns of the M matrix, and M:4 represents the column vector of the 4th column of the M matrix; the two-dimensional coordinates of the central point are (U, V), the calibration matrix is P34 with elements pij, and the fitting surface of the target object is A·x + B·y + C·z + D = 0;

the M matrix is

M = [ p11 − U·p31   p12 − U·p32   p13 − U·p33   p14 − U·p34 ]
    [ p21 − V·p31   p22 − V·p32   p23 − V·p33   p24 − V·p34 ]
    [ A             B             C             D            ]
6. An article centering device comprising:
the image acquisition module is used for respectively acquiring two-dimensional images and three-dimensional point cloud images which are acquired by the two-dimensional image pickup device and the three-dimensional image pickup device and correspond to the target object;
The point cloud image generation module is used for obtaining a point cloud image corresponding to the two-dimensional image based on a calibration matrix between the two-dimensional image pickup device and the three-dimensional point cloud image;
The point cloud image generation module comprises:
A template acquisition unit for acquiring a two-dimensional template image corresponding to the target object;
the area determining unit is used for matching the two-dimensional template image with the two-dimensional image, and obtaining an image area of the target object in the two-dimensional image;
The point cloud processing unit is used for processing the three-dimensional point cloud image based on the calibration matrix and obtaining an image area point cloud image corresponding to the image area;
the fitting surface generation module is used for generating a fitting surface of the target object according to the image area point cloud picture;
a position and orientation acquisition module comprising:
The three-dimensional position calculating unit is used for calculating a center point three-dimensional coordinate corresponding to the center point two-dimensional coordinate according to the calibration matrix, the center point two-dimensional coordinate of the target object in the two-dimensional image and the fitting surface of the target object;
And the gesture information obtaining unit is used for obtaining gesture angle information of the target object corresponding to the three-dimensional coordinates of the central point according to the fitting surface.
7. An article centering device comprising:
A memory; and a processor coupled to the memory, the processor configured to perform the method of any of claims 1-5 based on instructions stored in the memory.
8. A logistic system comprising:
A robot, an article centering device as claimed in any one of claims 6 to 7; the article center positioning device sends three-dimensional center position information and posture information of the target article to the robot.
9. A computer readable storage medium storing computer instructions for execution by a processor of the method of any one of claims 1 to 5.
CN201910167736.0A 2019-03-06 2019-03-06 Article center positioning method and device, logistics system and storage medium Active CN111666935B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910167736.0A CN111666935B (en) 2019-03-06 2019-03-06 Article center positioning method and device, logistics system and storage medium


Publications (2)

Publication Number Publication Date
CN111666935A (en) 2020-09-15
CN111666935B (en) 2024-05-24

Family

ID=72382190


Country Status (1)

Country Link
CN (1) CN111666935B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344108A (en) * 2021-06-25 2021-09-03 视比特(长沙)机器人科技有限公司 Commodity identification and attitude estimation method and device
CN115526896A (en) * 2021-07-19 2022-12-27 中核利华消防工程有限公司 Fire prevention and control method and device, electronic equipment and readable storage medium
CN116109781B (en) * 2023-04-12 2023-06-23 深圳市其域创新科技有限公司 Three-dimensional reconstruction method and system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN105139416A (en) * 2015-10-10 2015-12-09 北京微尘嘉业科技有限公司 Object identification method based on image information and depth information
CN106097348A (en) * 2016-06-13 2016-11-09 大连理工大学 A kind of three-dimensional laser point cloud and the fusion method of two dimensional image
CN107948499A (en) * 2017-10-31 2018-04-20 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
JP2018119833A (en) * 2017-01-24 2018-08-02 キヤノン株式会社 Information processing device, system, estimation method, computer program, and storage medium
GB201813197D0 (en) * 2018-08-13 2018-09-26 Imperial Innovations Ltd Mapping object instances using video data
CN108648230A (en) * 2018-05-14 2018-10-12 南京阿凡达机器人科技有限公司 A kind of package dimensions measurement method, system, storage medium and mobile terminal

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US9210404B2 (en) * 2012-12-14 2015-12-08 Microsoft Technology Licensing, Llc Calibration and registration of camera arrays using a single circular grid optical target
US9269187B2 (en) * 2013-03-20 2016-02-23 Siemens Product Lifecycle Management Software Inc. Image-based 3D panorama
US10021371B2 (en) * 2015-11-24 2018-07-10 Dell Products, Lp Method and apparatus for gross-level user and input detection using similar or dissimilar camera pair
CN106228537A (en) * 2016-07-12 2016-12-14 北京理工大学 A kind of three-dimensional laser radar and the combined calibrating method of monocular-camera
CN106909875B (en) * 2016-09-12 2020-04-10 湖南拓视觉信息技术有限公司 Face type classification method and system
GB201616887D0 (en) * 2016-10-05 2016-11-16 Queen Mary University Of London And King's College London Fingertip proximity sensor with realtime visual-based calibration
US10646999B2 (en) * 2017-07-20 2020-05-12 Tata Consultancy Services Limited Systems and methods for detecting grasp poses for handling target objects
CN107833270B (en) * 2017-09-28 2020-07-03 浙江大学 Real-time object three-dimensional reconstruction method based on depth camera
CN107977997B (en) * 2017-11-29 2020-01-17 北京航空航天大学 Camera self-calibration method combined with laser radar three-dimensional point cloud data
CN109255813B (en) * 2018-09-06 2021-03-26 大连理工大学 Man-machine cooperation oriented hand-held object pose real-time detection method


Non-Patent Citations (3)

Title
"Accuracy Study of Close Range 3D Object Reconstruction based on Point Clouds ";Grzegorz Gabara等;《 2017 Baltic Geodetic Congress (BGC Geomatics)》;全文 *
"Multiple objects monitoring based on 3D information from multiple cameras";Thun Chitmaitredejsakul等;《 2014 International Electrical Engineering Congress (iEECON)》;全文 *
"基于机器视觉的服务机器人智能抓取研究";杨扬;《中国博士学位论文全文数据库信息科技辑》;全文 *

Also Published As

Publication number Publication date
CN111666935A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
CN111151463B (en) Mechanical arm sorting and grabbing system and method based on 3D vision
CN110893617B (en) Obstacle detection method and device and storage device
CN110702111B (en) Simultaneous localization and map creation (SLAM) using dual event cameras
CN111666935B (en) Article center positioning method and device, logistics system and storage medium
US20180066934A1 (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
JP5618569B2 (en) Position and orientation estimation apparatus and method
CN111673735A (en) Mechanical arm control method and device based on monocular vision positioning
KR20180120647A (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
CN109345588A (en) A kind of six-degree-of-freedom posture estimation method based on Tag
US10102629B1 (en) Defining and/or applying a planar model for object detection and/or pose estimation
CN110992356A (en) Target object detection method and device and computer equipment
CN109752003B (en) Robot vision inertia point-line characteristic positioning method and device
CN106503671A (en) The method and apparatus for determining human face posture
CN111968228B (en) Augmented reality self-positioning method based on aviation assembly
CN111340869B (en) Express package surface flatness identification method, device, equipment and storage medium
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
US20130182903A1 (en) Robot apparatus and position and orientation detecting method
CN112509036B (en) Pose estimation network training and positioning method, device, equipment and storage medium
CN110926330A (en) Image processing apparatus, image processing method, and program
CN107300382A (en) A kind of monocular visual positioning method for underwater robot
CN110375732A (en) Monocular camera pose measurement method based on inertial measurement unit and point line characteristics
WO2023005457A1 (en) Pose calculation method and apparatus, electronic device, and readable storage medium
CN111142514A (en) Robot and obstacle avoidance method and device thereof
CN111105467B (en) Image calibration method and device and electronic equipment
CN115410167A (en) Target detection and semantic segmentation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210304

Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant after: Beijing Jingbangda Trading Co.,Ltd.

Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.

Effective date of registration: 20210304

Address after: Room a1905, 19 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Beijing Jingdong Qianshi Technology Co.,Ltd.

Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant before: Beijing Jingbangda Trading Co.,Ltd.

GR01 Patent grant