CN114882112A - Tray stacking method, apparatus, computer device and computer-readable storage medium


Info

Publication number: CN114882112A
Authority: CN (China)
Prior art keywords: reference object, positioning reference, tray, image, stacked
Legal status: Pending
Application number: CN202210517041.2A
Other languages: Chinese (zh)
Inventors: 王琛, 李陆洋, 方牧, 鲁豫杰, 杨秉川
Current Assignee: Visionnav Robotics Shenzhen Co Ltd
Original Assignee: Visionnav Robotics Shenzhen Co Ltd
Application filed by Visionnav Robotics Shenzhen Co Ltd
Priority to CN202210517041.2A
Publication of CN114882112A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/0014: Image feed-back for automatic industrial control, e.g. robot with camera
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to a tray stacking method, a tray stacking apparatus, a computer device and a computer-readable storage medium. The method includes: when a handling device performs tray stacking processing, capturing, by an image capture device, image content of a first positioning reference object on a tray to be stacked and of a second positioning reference object on a stacked tray; determining, from the image content, the position of each positioning reference object relative to the image capture device; and determining, from that relative position information, the pose difference between the tray to be stacked and the stacked tray, and controlling the handling device accordingly to stack the tray. By arranging a positioning reference object on each tray and analyzing the captured image content, the poses of the first and second positioning reference objects relative to the image capture device are obtained, the pose difference between the tray to be stacked and the stacked tray is derived in reverse from those relative poses, and the handling device is controlled according to the pose difference to complete the stacking, which improves the accuracy of tray stacking.

Description

Tray stacking method, apparatus, computer device and computer-readable storage medium
Technical Field
The present application relates to the field of machine vision technologies, and in particular, to a tray stacking method, apparatus, computer device, and computer-readable storage medium.
Background
With the development of industrial technology, automatic tray stacking has gradually replaced conventional manual stacking. At present, a preset stacking position on the stacked tray is calculated for the tray to be stacked, and the handling device stacks the tray according to that preset position. During stacking, however, the actual stacking position of the tray to be stacked may deviate from the preset stacking position, resulting in low stacking accuracy.
Disclosure of Invention
In view of the above, it is necessary to provide a tray stacking method, apparatus, computer device and computer readable storage medium capable of improving accuracy of tray stacking.
In a first aspect, the present application provides a tray stacking method. The method includes the following steps:
capturing a target image by an image capture device, the target image including first image content of a first positioning reference object on a tray to be stacked and second image content of a second positioning reference object on a stacked tray;
determining relative position information of the first positioning reference object with respect to the image capture device from the first image content, and determining relative position information of the second positioning reference object with respect to the image capture device from the second image content;
determining a pose difference between the tray to be stacked and the stacked tray from the relative position information of the first and second positioning reference objects with respect to the image capture device; and
controlling a handling device to perform tray stacking processing on the tray to be stacked according to the pose difference.
In some embodiments, determining the relative position information of the first and second positioning reference objects with respect to the image capture device includes:
performing feature detection on the first image content and the second image content respectively to obtain a feature set corresponding to each of the first and second positioning reference objects; and
for each of the first and second positioning reference objects, obtaining the relative position information of that positioning reference object with respect to the image capture device from the intrinsic parameters of the image capture device and the feature set corresponding to that positioning reference object.
In some embodiments, performing feature detection on the first image content and the second image content respectively includes:
extracting contours from the first image content and the second image content respectively to obtain a first image contour corresponding to the first positioning reference object and a second image contour corresponding to the second positioning reference object; and
extracting features from the first image contour and the second image contour respectively to obtain the feature sets corresponding to the first and second positioning reference objects.
In some embodiments, the relative position information includes an extrinsic matrix of the positioning reference object relative to the image capture device, and the feature set corresponding to the positioning reference object includes a plurality of feature points of the positioning reference object.
Obtaining the relative position information of each positioning reference object with respect to the image capture device then includes:
for each of the first and second positioning reference objects, calculating the extrinsic matrix of the positioning reference object relative to the image capture device from the pixel coordinates of each feature point of the positioning reference object in a pixel coordinate system, the coordinates of each feature point in a three-dimensional coordinate system constructed on the positioning reference object, and the intrinsic parameters of the image capture device.
The three-dimensional coordinate system constructed on the positioning reference object takes the plane of the positioning reference object as a coordinate plane and one of its feature points as the coordinate origin.
In some embodiments, the relative position information of the first positioning reference object with respect to the image capture device includes a first extrinsic matrix, and the relative position information of the second positioning reference object with respect to the image capture device includes a second extrinsic matrix. Determining the pose difference between the tray to be stacked and the stacked tray then includes:
multiplying the inverse of the second extrinsic matrix by the first extrinsic matrix to obtain a third extrinsic matrix of the first positioning reference object relative to the second positioning reference object; and
obtaining the pose difference from the third extrinsic matrix.
In some embodiments, the image capture device is fixed to the handling device; the first positioning reference object is a first two-dimensional code arranged on the tray to be stacked; the second positioning reference object is a second two-dimensional code arranged on the stacked tray; the feature points of the first two-dimensional code include at least three boundary corner points of the first two-dimensional code; and the feature points of the second two-dimensional code include at least three boundary corner points of the second two-dimensional code.
In some embodiments, controlling the handling device to perform tray stacking processing on the tray to be stacked according to the pose difference includes:
obtaining the relative pose of the image capture device and the stacked tray from the pose difference; and
if the relative pose is not within a preset pose range, adjusting the current pose of the handling device according to the relative pose, and returning to the step of capturing the target image by the image capture device after the pose of the handling device has been adjusted.
In a second aspect, the present application further provides a tray stacking apparatus. The apparatus includes:
an image acquisition module configured to capture a target image by an image capture device, the target image including first image content of a first positioning reference object on a tray to be stacked and second image content of a second positioning reference object on a stacked tray;
an information obtaining module configured to determine, from the first image content, relative position information of the first positioning reference object with respect to the image capture device, and to determine, from the second image content, relative position information of the second positioning reference object with respect to the image capture device;
a pose calculation module configured to determine a pose difference between the tray to be stacked and the stacked tray from the relative position information of the first and second positioning reference objects with respect to the image capture device; and
a tray stacking module configured to control the handling device to perform tray stacking processing on the tray to be stacked according to the pose difference.
In a third aspect, the present application further provides a computer device. The computer device includes a memory storing a computer program and a processor that implements the steps of the above tray stacking method when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the above tray stacking method.
In a fifth aspect, the present application further provides a computer program product. The computer program product includes a computer program which, when executed by a processor, implements the steps of the above tray stacking method.
With the tray stacking method, apparatus, computer device and computer-readable storage medium described above, when the handling device stacks the tray to be stacked, a target image is captured by the image capture device; the target image includes first image content of the first positioning reference object on the tray to be stacked and second image content of the second positioning reference object on the stacked tray; first relative position information between the first positioning reference object and the image capture device is determined from the first image content; second relative position information between the second positioning reference object and the image capture device is determined from the second image content; and the pose difference between the tray to be stacked and the stacked tray is determined from the first and second relative position information. Because a positioning reference object is arranged on each tray, capturing and analyzing the image content of the first positioning reference object on the tray to be stacked and of the second positioning reference object on the stacked tray yields the pose of each positioning reference object relative to the image capture device, from which the deviation between the relative poses of the two trays is derived in reverse, so that the handling device can be controlled according to the pose difference to complete the tray stacking work accurately.
Drawings
FIG. 1 is a diagram of an application environment of a tray stacking method in some embodiments;
FIG. 2 is a schematic flow chart of a method of stacking trays in some embodiments;
FIG. 3 is a schematic flow chart of the steps of calculating the relative position information of the first and second positioning reference objects with respect to the image capture device in some embodiments;
FIG. 4 is a schematic flow chart of the feature detection steps performed on the first image content and the second image content in some embodiments;
FIG. 5 is a schematic flow chart of the step of calculating the difference in pose between the tray to be stacked and the stacked tray in some embodiments;
FIG. 6 is a schematic flow chart of the tray stacking process steps performed by the handling device according to the pose difference in some embodiments;
FIG. 7 is a schematic diagram of a stacked tray in some embodiments;
FIG. 8 is a schematic flow chart of a method for stacking trays in other embodiments;
FIG. 9 is a block diagram of the structure of a tray stacking apparatus in some embodiments;
FIG. 10 is a diagram of the internal structure of a computer device in some embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The tray stacking method provided in the embodiments of the present application can be applied in the application environment shown in FIG. 1. A tray to be stacked (not shown) is placed on the handling device 102, which transports the tray to be stacked to the stacked tray 104 and stacks it, wherein a first positioning reference object for positioning is arranged on the surface of the tray to be stacked and a second positioning reference object 106 for positioning is arranged on the surface of the stacked tray 104.
When the handling device 102 needs to stack the tray to be stacked onto the stacked tray 104, the tray to be stacked and the stacked tray 104 can be photographed by an image capture device to form a target image that includes first image content of the first positioning reference object on the tray to be stacked and second image content of the second positioning reference object 106 on the stacked tray 104.
After the target image has been captured, the relative position information of the first positioning reference object with respect to the image capture device can be determined from the first image content, and the relative position information of the second positioning reference object 106 with respect to the image capture device can be determined from the second image content; the pose difference between the tray to be stacked and the stacked tray 104 is then determined from this relative position information, and the handling device 102 is controlled to stack the tray according to the pose difference.
In some embodiments, as shown in FIG. 2, the method may be applied to the handling device itself, or to another computer device communicatively connected to the handling device; the embodiments of the present application are not specifically limited in this respect.
Taking application of the method to the handling device of FIG. 1 as an example, the method includes the following steps:
step 202, when the carrying device performs stacking processing on the tray to be stacked, a target image is collected through the image collecting device.
The handling device in the present application is a transportation device that carries the tray to be stacked; it may be, but is not limited to, an automated guided vehicle (AGV) or a forklift. It should be noted that the handling device may be responsible only for transport, i.e., the tray to be stacked is stacked manually after the handling device has carried it to a specified position such as the vicinity of the stacked tray. Alternatively, the handling device may also perform the tray stacking processing itself, i.e., when the handling device has carried the tray to be stacked to the vicinity of the stacked tray, it automatically stacks the tray.
A tray, also known as a pallet, is a load carrier used to transport goods in unit loads. Here, the tray to be stacked is the tray on which stacking processing is to be performed. Correspondingly, the stacked tray indicates the position at which the tray to be stacked is to be placed; for example, the tray to be stacked is stacked on top of the stacked tray. In the embodiments of the present application, a positioning reference object for positioning is provided on each tray (both the tray to be stacked and the stacked tray); the positioning reference object may be a specific pattern used to locate the tray. A first positioning reference object is arranged on the tray to be stacked and a second positioning reference object on the stacked tray; the patterns of the two positioning reference objects may be the same or different, and the present application is not specifically limited in this respect.
The image capture device is a device with a photographing function and may be, but is not limited to, a camera, a mobile device, a video camera or a scanner. In the embodiments of the present application, the image capture device may be provided separately from the handling device, or it may be a component of the handling device, i.e., fixed to the handling device; the positional relationship between the image capture device and the handling device is not limited here.
Specifically, when the handling device needs to stack the tray to be stacked, the angle of the image capture device is adjusted so that its field of view completely covers the first positioning reference object and the second positioning reference object, and the target image is then captured. The captured target image therefore includes the first image content of the first positioning reference object on the tray to be stacked and the second image content of the second positioning reference object on the stacked tray, so that the relative position information of both positioning reference objects with respect to the image capture device can be calculated simultaneously from a single target image.
Step 204: the relative position information of the first positioning reference object with respect to the image capture device is determined from the first image content, and the relative position information of the second positioning reference object with respect to the image capture device is determined from the second image content.
The relative position information of the first positioning reference object with respect to the image capture device describes the relative relationship between the position of the first positioning reference object and the position of the image capture device; likewise, the relative position information of the second positioning reference object with respect to the image capture device describes the relative relationship between the position of the second positioning reference object and the position of the image capture device.
Step 206: the pose difference between the tray to be stacked and the stacked tray is determined from the relative position information of the first and second positioning reference objects with respect to the image capture device.
It can be understood that, during stack confirmation, the first positioning reference object on the tray to be stacked, the second positioning reference object on the stacked tray and the image capture device can all be regarded as fixed. The pose of the first positioning reference object relative to the image capture device and the pose of the second positioning reference object relative to the image capture device can therefore be obtained from their respective relative position information, and the deviation between the relative poses of the tray to be stacked and the stacked tray, i.e., the pose difference between the two trays, can be derived in reverse from these two relative poses.
Step 208: the handling device is controlled to perform tray stacking processing on the tray to be stacked according to the pose difference.
Specifically, the relative pose of the image capture device and the stacked tray can be calculated from the pose difference. After the relative pose has been calculated, it is judged whether it lies within a preset pose range. If it does not, the pose to which the handling device needs to be adjusted is determined from the relative pose, and after the pose of the handling device has been adjusted, the step of capturing the target image by the image capture device is executed again. If the relative pose lies within the preset pose range, the handling device is directly controlled to stack the tray to be stacked onto the stacked tray.
In the tray stacking method above, when the handling device stacks the tray to be stacked, the image capture device captures the target image; the target image includes the first image content of the first positioning reference object on the tray to be stacked and the second image content of the second positioning reference object on the stacked tray; first relative position information between the first positioning reference object and the image capture device is determined from the first image content; second relative position information between the second positioning reference object and the image capture device is determined from the second image content; and the pose difference between the tray to be stacked and the stacked tray is determined from the first and second relative position information. By arranging a positioning reference object on each tray and capturing and analyzing the image content of both positioning reference objects, the pose of each positioning reference object relative to the image capture device is obtained, the deviation between the relative poses of the two trays is derived in reverse, and the handling device can then be controlled according to the pose difference to complete the tray stacking work accurately.
In some embodiments, as shown in FIG. 3, step 204 specifically includes, but is not limited to, the following steps:
step 302, performing feature detection on the first image content and the second image content respectively to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object respectively.
The feature set corresponding to the first positioning reference object refers to feature points of a plurality of feature points on the first positioning reference object in a pixel coordinate system. The feature set corresponding to the second positioning reference object is feature points of a plurality of feature points on the second positioning reference object in a pixel coordinate system.
Specifically, it is necessary to determine the position information of the plurality of feature points of the first positioning reference object in the first image content and the position information of the plurality of feature points of the second positioning reference object in the second image content, and form feature sets corresponding to the first positioning reference object and the second positioning reference object, respectively, based on the above position information. The plurality of feature points of the first positioning reference object and the second positioning reference object are a plurality of positioning points which are set on the first positioning reference object or the second positioning reference object by a user according to the outlines and actual requirements of the first positioning reference object and the second positioning reference object and are used for positioning.
In practical applications, the positioning point includes, but is not limited to, at least one of a center point and a boundary corner point of the first positioning reference object. Likewise, the positioning points further include, but are not limited to, at least one of center points and boundary corner points of the second positioning reference object.
Step 304: for each of the first and second positioning reference objects, the relative position information of that positioning reference object with respect to the image capture device is obtained from the intrinsic parameters of the image capture device and the feature set corresponding to that positioning reference object.
That is, the relative position information of the first positioning reference object with respect to the image capture device is obtained from the intrinsic parameters of the image capture device and the feature set corresponding to the first positioning reference object, and the relative position information of the second positioning reference object with respect to the image capture device is obtained from the intrinsic parameters of the image capture device and the feature set corresponding to the second positioning reference object.
Specifically, the relative relationship between the position of the first positioning reference object and the position of the image capture device can be determined from the feature set corresponding to the first positioning reference object and the intrinsic parameters of the image capture device, and the relative relationship between the position of the second positioning reference object and the position of the image capture device can be determined from the feature set corresponding to the second positioning reference object and the intrinsic parameters of the image capture device.
In some embodiments, the image capture device is a camera, and its intrinsic parameters are camera parameters associated with imaging, such as the focal length of the camera.
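By way of a non-limiting illustration, these intrinsic parameters can be arranged into the standard 3x3 pinhole matrix consumed by the pose solving step described later. The following Python sketch shows this arrangement; the focal lengths and principal point values are assumed placeholders, not calibration results:

```python
import numpy as np

# Assumed placeholder values; real values come from camera calibration.
fx, fy = 800.0, 800.0   # focal lengths in pixels
u0, v0 = 640.0, 360.0   # principal point in pixels

# Intrinsic matrix K, matching the f_x, f_y, u_0 and v_0 of formula (1) below.
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])
```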
In some embodiments, as shown in FIG. 4, step 302 specifically includes, but is not limited to, the following steps:
step 402, performing contour extraction on the first image content and the second image content respectively to obtain a first image contour corresponding to the first positioning reference object and a second image contour corresponding to the second positioning reference object.
Specifically, the contour of the first positioning reference object displayed in the first image content is extracted to obtain the first image contour, and the contour of the second positioning reference object displayed in the second image content is extracted to obtain the second image contour.
Step 404: feature extraction is performed on the first image contour and the second image contour respectively to obtain the feature sets corresponding to the first and second positioning reference objects.
Specifically, the position information of the feature points of the first positioning reference object in the pixel coordinate system is extracted from the first image contour, the position information of the feature points of the second positioning reference object in the pixel coordinate system is extracted from the second image contour, and the feature sets corresponding to the two positioning reference objects are formed from this position information.
In some embodiments, before step 402 the method further includes a step of preprocessing the target image, which specifically includes:
converting the target image into a grayscale image, applying the morphological opening operation, i.e., an erosion operation followed by a dilation operation, to the grayscale image to obtain a processed grayscale image, and binarizing the processed grayscale image to obtain the preprocessed target image. The erosion operation removes burrs from the image, and the dilation operation fills in missing parts of the first image content and the second image content. Preprocessing the image improves the accuracy of the subsequent feature detection on the target image, so that the calculated pose difference is more accurate and the accuracy of tray stacking is improved.
In some embodiments, the relative position information includes an extrinsic matrix of the positioning reference object relative to the image capture device, and the feature set corresponding to the positioning reference object includes a plurality of feature points of the positioning reference object.
Here the positioning reference object means either the first or the second positioning reference object: the relative position information includes the extrinsic matrix of the first positioning reference object relative to the image capture device and the extrinsic matrix of the second positioning reference object relative to the image capture device; the feature set corresponding to the first positioning reference object includes feature points of the first positioning reference object in the pixel coordinate system, and the feature set corresponding to the second positioning reference object includes feature points of the second positioning reference object in the pixel coordinate system. Step 304 then specifically includes, but is not limited to, the following step:
and aiming at each of the first positioning reference object and the second positioning reference object, calculating an external parameter matrix of the positioning reference object relative to the image acquisition equipment according to the pixel coordinates of each characteristic point corresponding to the positioning reference object in a pixel coordinate system, the coordinates of each characteristic point in a three-dimensional coordinate system constructed based on the positioning reference object and the internal parameters of the image acquisition equipment.
The extrinsic matrix of a positioning reference object relative to the image capture device describes the transformation from the three-dimensional coordinate system constructed on that positioning reference object (the first or the second) to the coordinate system constructed on the image capture device itself. For convenience of description, the coordinate system constructed on the image capture device itself is referred to in this application as the reference coordinate system.
Specifically, the extrinsic matrix of the first positioning reference object with respect to the image capture device is calculated from the pixel coordinates of each feature point of the first positioning reference object in the pixel coordinate system, the coordinates of each such feature point in the three-dimensional coordinate system constructed on the first positioning reference object, and the intrinsic parameters of the image capture device; the extrinsic matrix of the second positioning reference object with respect to the image capture device is calculated in the same way from the corresponding quantities of the second positioning reference object.
Specifically, the extrinsic matrix consists of a rotation matrix and a translation matrix, which together describe how each feature point is transformed from the three-dimensional coordinate system into the reference coordinate system: the rotation matrix describes the directions of the coordinate axes of the three-dimensional coordinate system relative to the corresponding axes of the reference coordinate system, and the translation matrix describes the position of the origin of the three-dimensional coordinate system.
The three-dimensional coordinate system constructed on the positioning reference object takes the plane of the positioning reference object as a coordinate plane and one of its feature points as the coordinate origin. Taking the first positioning reference object as an example, suppose it carries four feature points A, B, C and D. The three-dimensional coordinate system is then constructed with the plane of the first positioning reference object as the coordinate plane and feature point A as the coordinate origin; if the plane of the first positioning reference object is vertical, it can be understood as the ZY plane, and feature point A may be, for example, the top-left vertex of the first positioning reference object (for example, a two-dimensional code), as shown in FIG. 1. The three-dimensional coordinate system on the second positioning reference object is constructed in the same way and is not described again here.
In some embodiments, the intrinsic parameters of the image capture device include at least one of the focal length and the principal point coordinates of the image capture device. The principal point coordinates describe how a coordinate point in the reference coordinate system of the image capture device maps into the pixel coordinate system of the target image; in the embodiments of the present application, such a coordinate point is a feature point in the first image content or in the second image content.
It should be noted that, when the extrinsic matrices corresponding to the first and second positioning reference objects are calculated using the intrinsic parameters of the image capture device, distortion parameters can also be obtained and combined into the calculation. It can be understood that, owing to the manufacturing precision of the lens, an image captured by the image capture device may be distorted to varying degrees; distortion parameters describing this distortion can therefore be taken into account when calculating the extrinsic matrix, improving accuracy.
In practical applications, the extrinsic matrix of a positioning reference object relative to the image capture device can be solved automatically by calling a pose solving algorithm with the pixel coordinates of the feature points of the positioning reference object in the pixel coordinate system, the coordinates of those feature points in the three-dimensional coordinate system constructed on the positioning reference object, and the intrinsic parameters of the image capture device.
The pose solving algorithm may be, for example, the solvePnP algorithm of OpenCV or the solvePnPRansac algorithm of OpenCV. OpenCV is a cross-platform computer vision and machine learning software library; solvePnP is a monocular relative pose estimation algorithm, and solvePnPRansac combines it with random sample consensus.
Taking the solvePnP algorithm as an example, the calculation of the extrinsic matrix of a positioning reference object relative to the image capture device is shown in formulas (1) to (4), starting from formula (1):
$$ Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{11} & R_{12} & R_{13} & T_1 \\ R_{21} & R_{22} & R_{23} & T_2 \\ R_{31} & R_{32} & R_{33} & T_3 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \qquad (1) $$
where $Z_c$ is the unknown coordinate of a feature point of the first or second image content in the reference coordinate system of the image capture device; it can also be understood as the scale factor (coefficient) of the homogeneous coordinates of the target point in the reference coordinate system, which defaults to 1. This is related to the pinhole imaging principle: a smaller $Z_c$ means the point is closer to the image capture device and is imaged larger. $f_x$, $f_y$, $u_0$ and $v_0$ are intrinsic parameters of the image capture device: $f_x$ and $f_y$ represent the focal length, and $u_0$ and $v_0$ are the pixel coordinates of the principal point, i.e., the mapping of a coordinate point in the reference coordinate system of the image capture device into the pixel coordinate system of the target image. $X_W$, $Y_W$ and $Z_W$ are the 3D coordinates of the feature point in the three-dimensional coordinate system established on the positioning reference object; the subscript $W$ abbreviates the world coordinate system, since the three-dimensional coordinate system established on the positioning reference object plays the role of the world coordinate system, and coordinates in it are distinguished by the $W$ index. Because the first positioning reference object in the embodiments of the present application is planar, $Z_W = 0$. $x$ and $y$ are the 2D coordinates of the feature point in the pixel coordinate system, so $X_W$, $Y_W$, $x$ and $y$ form one set of 3D-to-2D coordinates. $R_{11}$, $R_{12}$, $R_{13}$, $R_{21}$, $R_{22}$, $R_{23}$, $R_{31}$, $R_{32}$ and $R_{33}$ are the unknowns that form the rotation matrix, and $T_1$, $T_2$ and $T_3$ are the unknowns that form the translation matrix; together the rotation matrix and the translation matrix form the extrinsic matrix.
Expanding formula (1) yields formulas (2) to (4):
$$ Z_c \, x = X_W (f_x R_{11} + u_0 R_{31}) + Y_W (f_x R_{12} + u_0 R_{32}) + Z_W (f_x R_{13} + u_0 R_{33}) + f_x T_1 + u_0 T_3 \qquad (2) $$
$$ Z_c \, y = X_W (f_y R_{21} + v_0 R_{31}) + Y_W (f_y R_{22} + v_0 R_{32}) + Z_W (f_y R_{23} + v_0 R_{33}) + f_y T_2 + v_0 T_3 \qquad (3) $$
$$ Z_c = X_W R_{31} + Y_W R_{32} + Z_W R_{33} + T_3 \qquad (4) $$
the unknowns for constructing the rotation matrix and the translation matrix are obtained through the formulas (2) to (4). Since the rotation matrix is an orthogonal matrix, a total of 6 unknowns is added to 3 unknowns for constructing the translation matrix, and the coordinate X is converted from each set of 3D to 2D W 、Y W X and y can determine two equations, so that in the embodiment of the present application, at least three sets of coordinates are required to solve six unknowns, and therefore, at least three sets of coordinates of feature points corresponding to 3D to 2D need to be obtained.
Once the unknowns of the rotation matrix and the translation matrix have been solved, the extrinsic matrix $(R, T)$ of the positioning reference object relative to the image capture device is obtained, i.e., the rotation-translation transformation of the reference coordinate system with respect to the three-dimensional coordinate system established on that positioning reference object, where $R$ is the rotation matrix and $T$ is the translation matrix.
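The per-marker solve can be sketched with OpenCV's solvePnP as named above. The corner ordering and the marker edge length below are illustrative assumptions; the object points place the marker plane at $Z_W = 0$ with one boundary corner as the origin, matching the coordinate construction described earlier:

```python
import cv2
import numpy as np

def marker_extrinsics(corners_px, marker_size, K, dist_coeffs=None):
    """Solve the 4x4 extrinsic matrix of one positioning reference object
    relative to the camera from its four boundary corner points."""
    # 3D corners in the marker's own coordinate system (Z_W = 0 plane),
    # assumed ordered top-left, top-right, bottom-right, bottom-left.
    obj_pts = np.array([[0.0, 0.0, 0.0],
                        [marker_size, 0.0, 0.0],
                        [marker_size, marker_size, 0.0],
                        [0.0, marker_size, 0.0]])
    img_pts = np.asarray(corners_px, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist_coeffs)
    if not ok:
        raise RuntimeError("solvePnP failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = tvec.ravel()
    return M                     # homogeneous form of (R, T)
```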
In some embodiments, the relative position information of the first positioning reference object with respect to the image capture device includes a first extrinsic matrix, and the relative position information of the second positioning reference object with respect to the image capture device includes a second extrinsic matrix. As shown in FIG. 5, step 206 then specifically includes, but is not limited to, the following steps:
step 502, the inverse matrix of the second external reference matrix is multiplied by the first external reference matrix to obtain a third external reference matrix of the first positioning reference object relative to the second positioning reference object.
Specifically, the first external reference matrix is M1, the second external reference matrix is M2, and the inverse matrix of M2 is multiplied by M1 to obtain the external reference matrix of the three-dimensional coordinate system where the first positioning object is located relative to the three-dimensional coordinate system where the second positioning reference object is located, that is, the rotational offset matrix M3.
Step 504: the pose difference is obtained from the third extrinsic matrix.
Specifically, the pose difference between the first positioning reference object and the second positioning reference object, i.e., the rotation amount and the translation amount, is read from the matrix parameters of the third extrinsic matrix M3.
In some embodiments, as shown in FIG. 6, step 208 specifically includes, but is not limited to, the following steps:
Step 602: the relative pose of the image capture device and the stacked tray is obtained from the pose difference.
Specifically, the relative pose of the image capture device and the stacked tray includes the pitch angle and the translation amount of the image capture device relative to the stacked tray. After the pose solving algorithm has produced a rotation vector and a translation vector, the rotation vector is converted into a rotation matrix by the Rodrigues transformation and then into an Euler angle, which gives the pitch angle of the image capture device relative to the stacked tray provided in the embodiments of the present application; the translation vector produced by the pose solving algorithm is itself the translation amount.
Step 604: if the relative pose is not within the preset pose range, the current pose of the handling device is adjusted according to the relative pose, and after the pose of the handling device has been adjusted, the step of capturing the target image by the image capture device is executed again.
Specifically, after the relative pose of the image capture device and the stacked tray has been calculated, it is judged whether the relative pose lies within the preset pose range. If it does not, the pose to which the handling device needs to be adjusted is determined from the relative pose, and after the adjustment the step of capturing the target image by the image capture device is executed again.
If the relative pose calculated in step 602 lies within the preset pose range, step 606 is executed: the handling device is directly controlled to stack the tray to be stacked onto the stacked tray.
In some embodiments, the relative pose of the image capture device and the stacked tray includes the pitch angle and the translation amount of the image capture device with respect to the stacked tray, and the preset pose range includes a pitch angle range and a translation amount range. After the pitch angle and the translation amount have been calculated, it is judged whether they simultaneously satisfy the pitch angle range and the translation amount range of the preset pose range. If not, the angle and distance by which the handling device must be straightened are determined from the relative pose, the current pose of the handling device is adjusted, and the step of capturing the target image by the image capture device is executed again after the adjustment.
If the pitch angle and the translation amount simultaneously satisfy the pitch angle range and the translation amount range of the preset pose range, the tray to be stacked is stacked onto the stacked tray directly in the current pose of the handling device, without pose adjustment.
In some embodiments, the translation amount includes a translation amount in the X-axis direction and a translation amount in the Y-axis direction. In practical applications, a person skilled in the art may, according to actual requirements, set the X-axis translation range to -1.5 cm to +1.5 cm, the Y-axis translation range to -1.5 cm to +1.5 cm, and the pitch angle range to -1.5 degrees to +1.5 degrees.
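The stack-confirmation test then reduces to interval checks. A sketch using the example ranges above, with units assumed as metres for translation and degrees for pitch:

```python
PITCH_RANGE = (-1.5, 1.5)      # degrees
TX_RANGE = (-0.015, 0.015)     # metres, X-axis translation
TY_RANGE = (-0.015, 0.015)     # metres, Y-axis translation

def within_pose_range(pitch_deg, tx, ty):
    """True if the relative pose lies in the preset pose range, i.e. the
    handling device may stack without further adjustment."""
    return (PITCH_RANGE[0] <= pitch_deg <= PITCH_RANGE[1]
            and TX_RANGE[0] <= tx <= TX_RANGE[1]
            and TY_RANGE[0] <= ty <= TY_RANGE[1])
```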
In some embodiments, the image capture device is fixed to the handling device; the first positioning reference object is a first two-dimensional code arranged on the tray to be stacked; the second positioning reference object is a second two-dimensional code arranged on the stacked tray; the feature points of the first two-dimensional code include at least three boundary corner points of the first two-dimensional code; and the feature points of the second two-dimensional code include at least three boundary corner points of the second two-dimensional code.
In some embodiments, four boundary corner points may be selected as the feature points of the first two-dimensional code and four as the feature points of the second two-dimensional code. When the three-dimensional coordinate system is constructed on the first two-dimensional code, one of the four boundary corner points (for example, the top-left corner point, i.e., the left vertex of the first two-dimensional code) is chosen as the coordinate origin, and the plane of the first two-dimensional code (for example, the ZY plane) is taken as the coordinate plane. The three-dimensional coordinate system on the second two-dimensional code is constructed in the same way and is not described in detail here.
It should be noted that two-dimensional codes are arranged on both the upper cover and the bottom support of the tray to be stacked, and likewise on the upper cover and the bottom support of the stacked tray. The first positioning reference object denotes the two-dimensional code arranged on the bottom support of the tray to be stacked, i.e., the first two-dimensional code, and the second positioning reference object denotes the two-dimensional code arranged on the upper cover of the stacked tray, i.e., the second two-dimensional code.
It should also be noted that the image capture device is fixed to the handling device so that the positional relationship between the two is fixed and they can be regarded directly as a whole. The pose differences of the first and second positioning reference objects with respect to the image capture device can therefore equally be regarded as their pose differences with respect to the handling device.
It can be understood that the first and second positioning reference objects are implemented as two-dimensional codes for positioning because two-dimensional codes have uniform data units, so positioning can be performed quickly without complicated processing of the captured data information.
In some embodiments, a two-dimensional code is fixed on one surface, e.g., the front surface, of each tray (both the tray to be stacked and the stacked tray). Referring to FIG. 7 and taking the stacked tray as an example, the stacked tray includes an upper cover 702 and a bottom support 704, wherein a first two-dimensional code 706 is arranged on the front surface of the bottom support 704 and a second two-dimensional code 708 is arranged on the front surface of the upper cover 702, the second two-dimensional code 708 being located directly above the first two-dimensional code 706.
In addition, the width of the first two-dimensional code 706 matches the thickness of the bottom support 704, and the width of the second two-dimensional code 708 matches the thickness of the upper cover 702, so that the information of the two-dimensional codes can be used for positioning. Matching the widths of the two-dimensional codes to these thicknesses ensures that the entire content of the first two-dimensional code 706 lies on the front surface of the bottom support 704 and the entire content of the second two-dimensional code 708 lies on the front surface of the upper cover 702. The image capture device can then capture both codes in full, accurate feature points can be located from the complete code content, the calculated pose difference is more accurate, and the accuracy of tray stacking is improved.
It should be noted that, if the first and second positioning reference objects are two-dimensional codes, the boundary corner points of the codes, i.e., their four vertices, can be used when performing feature detection on the first and second image content. In practical applications, feature points taken at other positions of a two-dimensional code are difficult to match to coordinates in the reference coordinate system of the image capture device, and the plane of the image capture device may be inclined; using the boundary corner points of the two-dimensional code as feature points therefore improves the efficiency of pose solving.
When the image capturing device is to be fixed on the conveying device, it may be mounted on a side of the conveying device, specifically on the left side or the right side. After the initial installation, the angle of the image capturing device is adjusted so that the two-dimensional code A of the tray to be stacked and the two-dimensional code B of the stacked tray both fall within its field of view at the same time.
In some embodiments, as shown in fig. 8, the tray stacking method of the present application may proceed as follows:
a two-dimensional code is fixed on the bottom support and on the upper cover of each tray, and a 2D camera is installed on the side of the AGV. The camera's viewing angle is adjusted to cover both two-dimensional code A on the bottom support of the tray to be stacked and two-dimensional code B on the upper cover of the stacked tray. When the AGV forks the tray to be stacked above the stacked tray, the 2D camera acquires a target image containing both code A and code B. A two-dimensional code detection program detects the four feature points of code A and of code B, and solvePnP is used to solve the extrinsic matrix of each code relative to the camera: a rotation-translation matrix M1 of the reference coordinate system in the three-dimensional coordinate system established by code A, and a rotation-translation matrix M2 of the reference coordinate system in the three-dimensional coordinate system established by code B. Multiplying the inverse matrix of M2 by M1 yields the rotation-translation matrix M3 of code A in the coordinate system of code B. From M3 the pose of the AGV relative to the stacked tray is known, so stack confirmation can be performed and the AGV can accurately complete the tray stacking work.
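A minimal sketch of this pipeline follows, assuming OpenCV and numpy; the code side length s, the intrinsic matrix K, and the distortion coefficients dist are illustrative assumptions obtained from a prior calibration, not values fixed by this application.

```python
import cv2
import numpy as np

# Sketch of the pose pipeline, under stated assumptions: the side length s of
# each code is known, the four detected corners are ordered to match objp,
# and K / dist come from a prior camera calibration.
s = 0.10  # assumed code side length in metres
objp = np.array([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0]], dtype=np.float64)

def extrinsic_matrix(corners, K, dist):
    """4x4 extrinsic matrix mapping code-frame points into the camera frame."""
    ok, rvec, tvec = cv2.solvePnP(objp, corners.astype(np.float64), K, dist)
    if not ok:
        raise RuntimeError("solvePnP failed")
    R, _ = cv2.Rodrigues(rvec)
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, tvec.ravel()
    return M

# corners_a / corners_b: 4x2 pixel coordinates of each code's corner points
# M1 = extrinsic_matrix(corners_a, K, dist)  # code A (tray to be stacked)
# M2 = extrinsic_matrix(corners_b, K, dist)  # code B (stacked tray)
# M3 = np.linalg.inv(M2) @ M1                # code A expressed in code B's frame
```

Composing M3 as the inverse of M2 times M1 matches the order recited in claim 5, given that each extrinsic matrix maps code-frame points into the camera frame.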
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which need not be performed at the same moment but may be performed at different times, and their execution order need not be sequential; they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiments of the present application further provide a tray stacking apparatus for implementing the above tray stacking method. The solution the apparatus provides is similar to that described in the method above, so for the specific limitations in the one or more apparatus embodiments below, reference may be made to the limitations of the tray stacking method, which are not repeated here.
In one embodiment, as shown in fig. 9, there is provided a tray stacking apparatus including: an image acquisition module 902, an information acquisition module 904, a pose calculation module 906, and a tray stacking module 908, wherein:
an image acquisition module 902, configured to acquire a target image through an image acquisition device; the target image comprises first image content of a first positioning reference object on the tray to be stacked and second image content of a second positioning reference object on the stacked tray.
An information obtaining module 904, configured to determine, according to the first image content, relative position information of the first positioning reference object compared to the image capturing apparatus, and determine, according to the second image content, relative position information of the second positioning reference object compared to the image capturing apparatus.
A pose calculation module 906, configured to determine a pose difference between the tray to be stacked and the stacked tray according to relative position information of the first positioning reference object and the second positioning reference object respectively compared to the image capturing device.
And the tray stacking module 908 is used for controlling the carrying equipment to stack the trays according to the pose difference.
In the above tray stacking apparatus, a positioning reference object is arranged on each tray, and the image content of the first positioning reference object on the tray to be stacked and of the second positioning reference object on the stacked tray is captured and analyzed to obtain the pose of each positioning reference object relative to the image capturing device. The pose difference between the tray to be stacked and the stacked tray is then derived from these relative poses, so that the conveying device can be controlled according to the pose difference to accurately complete the tray stacking work.
In some embodiments, the information acquisition module includes a feature detection unit and a position acquisition unit. The feature detection unit is configured to perform feature detection on the first image content and the second image content, respectively, to obtain the feature sets corresponding to the first positioning reference object and the second positioning reference object. The position acquisition unit is configured to obtain, for each of the first positioning reference object and the second positioning reference object, the relative position information of that positioning reference object with respect to the image capturing device, according to the intrinsic parameters of the image capturing device and the feature set corresponding to that positioning reference object.
In some embodiments, the feature detection unit is further configured to perform contour extraction on the first image content and the second image content, respectively, to obtain a first image contour corresponding to the first positioning reference object and a second image contour corresponding to the second positioning reference object; and respectively extracting the features of the first image contour and the second image contour to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object.
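For illustration, a minimal sketch of this contour-then-feature step follows, assuming OpenCV; the Otsu binarization, the contour-area filter, and the polygon-approximation tolerance are illustrative assumptions rather than values fixed by this application.

```python
import cv2
import numpy as np

# Sketch of the contour-then-feature step: binarize, take outer contours,
# and approximate each large contour to a quadrilateral whose four vertices
# serve as the feature set for that code region.
def quad_corners(gray: np.ndarray):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        if cv2.contourArea(c) < 1000:  # skip small noise contours (assumed filter)
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:  # keep only quadrilateral outlines
            quads.append(approx.reshape(4, 2))
    return quads
```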
In some embodiments, the position obtaining unit is further configured to calculate, for each of the first positioning reference object and the second positioning reference object, an external parameter matrix of the positioning reference object relative to the image acquisition device according to a pixel coordinate of each feature point corresponding to the positioning reference object in a pixel coordinate system, a coordinate of each feature point in a three-dimensional coordinate system constructed based on the positioning reference object, and the internal parameter of the image acquisition device; the three-dimensional coordinate system constructed based on the positioning reference object is a three-dimensional coordinate system constructed by taking a plane where the positioning reference object is located as a coordinate plane and taking one of a plurality of characteristic points of the positioning reference object as a coordinate origin.
In some embodiments, the pose calculation module is further configured to multiply the inverse matrix of the second external parameter matrix by the first external parameter matrix to obtain a third external parameter matrix of the first positioning reference object relative to the second positioning reference object, and to obtain the pose difference according to the third external parameter matrix.
In some embodiments, the tray stacking module is further configured to obtain a relative pose of the image capturing device and the stacked tray according to the pose difference; and if the relative pose is not in the preset pose range, adjusting the current pose of the carrying equipment according to the relative pose, and returning to execute the step of acquiring the target image through the image acquisition equipment after adjusting the pose of the carrying equipment.
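The returning-and-re-acquiring behavior amounts to a closed control loop. Below is a minimal sketch under stated assumptions: capture_and_estimate and adjust_vehicle are hypothetical hooks standing in for the image-capture and vehicle-control interfaces, and the tolerance values are illustrative.

```python
import numpy as np

# Sketch of the closed-loop check: if the relative pose is outside a preset
# range, adjust the carrying equipment and acquire a new target image.
POS_TOL = 0.01  # assumed positional tolerance, metres
ANG_TOL = 0.02  # assumed angular tolerance, radians

def within_tolerance(M3: np.ndarray) -> bool:
    """Check the translation offset and rotation angle encoded in a 4x4 pose."""
    offset = np.linalg.norm(M3[:3, 3])
    angle = np.arccos(np.clip((np.trace(M3[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    return offset < POS_TOL and angle < ANG_TOL

# while not within_tolerance(M3 := capture_and_estimate()):
#     adjust_vehicle(M3)  # re-position, then capture a new target image
# lower_tray()            # poses agree: complete the stacking action
```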
The division of the modules in the above tray stacking apparatus is only for illustration; in other embodiments, the tray stacking apparatus may be divided into different modules as required to complete all or part of its functions.
The various modules in the above tray stacking apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored in software in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In some embodiments, a computer device is provided, which may be the handling device of fig. 1, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing image information and tray data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a tray stacking method.
Those skilled in the art will appreciate that the structure shown in fig. 10 is merely a block diagram of a portion of the structure related to the present solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In some embodiments, there is further provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above method embodiments when executing the computer program.
In some embodiments, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps in the above-described method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing the relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of stacking trays, the method comprising:
collecting a target image through image collection equipment; the target image comprises first image content of a first positioning reference object on a tray to be stacked and second image content of a second positioning reference object on the stacked tray;
determining relative position information of the first positioning reference object compared with the image acquisition equipment according to the first image content, and determining relative position information of the second positioning reference object compared with the image acquisition equipment according to the second image content;
determining a pose difference between the tray to be stacked and the stacked tray according to relative position information of the first positioning reference object and the second positioning reference object respectively compared with the image acquisition equipment;
and controlling the carrying equipment to stack the tray to be stacked according to the pose difference.
2. The method of claim 1, wherein determining the relative position information of the first positioning reference object compared to the image acquisition device from the first image content and the relative position information of the second positioning reference object compared to the image acquisition device from the second image content comprises:
respectively carrying out feature detection on the first image content and the second image content to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object respectively;
and for each of the first positioning reference object and the second positioning reference object, obtaining relative position information of the positioning reference object compared with the image acquisition equipment according to internal parameters of the image acquisition equipment and the feature set corresponding to the positioning reference object.
3. The method according to claim 2, wherein the performing feature detection on the first image content and the second image content respectively to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object respectively comprises:
extracting contours of the first image content and the second image content respectively to obtain a first image contour corresponding to the first positioning reference object and a second image contour corresponding to the second positioning reference object;
and respectively extracting the features of the first image contour and the second image contour to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object.
4. The method according to claim 2, wherein the relative position information comprises an external parameter matrix of the positioning reference object relative to the image acquisition equipment, and the feature set corresponding to the positioning reference object comprises a plurality of feature points of the positioning reference object;
the obtaining, for each of the first positioning reference object and the second positioning reference object, the relative position information of the positioning reference object compared to the image acquisition equipment according to internal parameters of the image acquisition equipment and the feature set corresponding to the positioning reference object, includes:
for each of the first positioning reference object and the second positioning reference object, calculating an external parameter matrix of the positioning reference object relative to the image acquisition equipment according to the pixel coordinates of each feature point corresponding to the positioning reference object in a pixel coordinate system, the coordinates of each feature point in the three-dimensional coordinate system constructed based on the positioning reference object, and the internal parameters of the image acquisition equipment;
the three-dimensional coordinate system constructed based on the positioning reference object is a three-dimensional coordinate system constructed by taking a plane where the positioning reference object is located as a coordinate plane and taking one of a plurality of characteristic points of the positioning reference object as a coordinate origin.
5. The method of claim 4, wherein the relative position information of the first positioning reference object compared to the image acquisition device comprises a first external parameter matrix; the relative position information of the second positioning reference object compared with the image acquisition equipment comprises a second external parameter matrix; the determining a difference in pose between the tray to be stacked and the stacked tray according to the relative position information of the first positioning reference object and the second positioning reference object respectively compared to the image capturing apparatus includes:
multiplying the inverse matrix of the second external parameter matrix by the first external parameter matrix to obtain a third external parameter matrix of the first positioning reference object relative to the second positioning reference object;
and obtaining the pose difference according to the third external parameter matrix.
6. The method of claim 4, wherein the image capture device is secured to the handling device; the first positioning reference object is a first two-dimensional code arranged on the tray to be stacked; the second positioning reference object is a second two-dimensional code arranged on the stacked tray; the plurality of feature points of the first two-dimensional code comprise at least three boundary corner points of the first two-dimensional code; the plurality of feature points of the second two-dimensional code comprise at least three boundary corner points of the second two-dimensional code.
7. The method according to any one of claims 1 to 6, wherein the controlling the carrying equipment to perform tray stacking processing on the tray to be stacked according to the pose difference includes:
obtaining the relative pose of the image acquisition equipment and the stacked tray according to the pose difference;
and if the relative pose is not in the preset pose range, adjusting the current pose of the carrying equipment according to the relative pose, and returning to execute the step of collecting the target image through the image collecting equipment after adjusting the pose of the carrying equipment.
8. A tray stacking apparatus, comprising:
the image acquisition module is used for acquiring a target image through image acquisition equipment; the target image comprises first image content of a first positioning reference object on a tray to be stacked and second image content of a second positioning reference object on the stacked tray;
an information obtaining module, configured to determine, according to the first image content, relative position information of the first positioning reference object compared to the image acquisition device, and determine, according to the second image content, relative position information of the second positioning reference object compared to the image acquisition device;
the pose calculation module is used for determining a pose difference between the tray to be stacked and the stacked tray according to the relative position information of the first positioning reference object and the second positioning reference object respectively compared with the image acquisition equipment;
and the tray stacking module is used for controlling the carrying equipment to stack the trays to be stacked according to the pose difference.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202210517041.2A 2022-05-13 2022-05-13 Tray stacking method, apparatus, computer device and computer-readable storage medium Pending CN114882112A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210517041.2A CN114882112A (en) 2022-05-13 2022-05-13 Tray stacking method, apparatus, computer device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN114882112A true CN114882112A (en) 2022-08-09

Family

ID=82675450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210517041.2A Pending CN114882112A (en) 2022-05-13 2022-05-13 Tray stacking method, apparatus, computer device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN114882112A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination