CN112037283B - Machine vision-based container truck positioning and container alignment detection method - Google Patents

Machine vision-based container truck positioning and container alignment detection method

Info

Publication number
CN112037283B
CN112037283B
Authority
CN
China
Prior art keywords
image
target container
container
lock button
corner fitting
Prior art date
Legal status
Active
Application number
CN202010921668.5A
Other languages
Chinese (zh)
Other versions
CN112037283A (en)
Inventor
陈环
梁浩
杨佳乐
洪俊明
Current Assignee
Shanghai Yumo Information Technology Co ltd
Original Assignee
Shanghai Yumo Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yumo Information Technology Co ltd
Priority to CN202010921668.5A
Publication of CN112037283A
Application granted
Publication of CN112037283B

Classifications

    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • B66C 13/16 Applications of indicating, registering, or weighing devices
    • B66C 13/46 Position indicators for suspended loads or for crane elements
    • G06T 7/11 Region-based segmentation
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a machine vision-based container truck positioning and container alignment detection method, which comprises the following steps: acquiring a first image, wherein the first image contains at least an image of the trailer deck lock buttons corresponding to the corner fittings of a target container; extracting features from the first image and calculating the pose of the container truck, so as to control the trolley, the cart, and the spreader to move corresponding distances for preliminary positioning; when the target container approaches the target position on the container truck, acquiring a second image, wherein the second image contains at least images of the corner fittings of the target container and of the corresponding deck lock buttons; extracting features from the second image and calculating the relative pose of the target container and the container truck, so as to control the trolley, the cart, and the spreader to move corresponding distances for dynamic container alignment. By acquiring images of the corner fittings of the target container and of the corresponding deck lock buttons, the method obtains the poses of the container truck and of the target container respectively, so that the container is loaded onto the container truck trailer with higher precision.

Description

Machine vision-based container truck positioning and container alignment detection method
Technical Field
The invention relates to the field of crane loading and unloading, and in particular to a machine vision-based container truck positioning and container alignment detection method.
Background
At present, ports load and unload containers manually: a tyre crane driver operates the crane to load containers onto, and unload them from, the container trucks (skeleton trailers and flatbed trailers) below. Manual operation by tyre crane drivers has the disadvantages of low efficiency and unstable quality; containers are often set down roughly against the truck guide plates, causing a certain amount of damage to both the container and the truck. Some automation schemes exist in the prior art; for example, a laser radar can scan the contour of the container truck, and the loading and unloading operation is performed after its position is determined. However, a template library must be established for comparison for the different types of container trucks and trailers, deviations occur easily, and automatic loading and unloading fail.
Therefore, a container truck positioning and container alignment detection method is needed that enables automatic positioning and automatic loading and unloading in container operations.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a machine vision-based container truck positioning and container alignment detection method which, by acquiring images of the corner fittings of a target container and of the trailer deck lock buttons corresponding to those corner fittings and calculating the center point coordinates of the corner fittings and of the corresponding deck lock buttons, obtains the poses of the container truck and of the target container respectively, so that the container is loaded onto the container truck trailer with higher precision.
To solve the above technical problem, the invention provides a machine vision-based container truck positioning and container alignment detection method comprising the following steps:
acquiring a first image, wherein the first image contains at least an image of the deck lock buttons corresponding to the corner fittings of a target container; extracting features from the first image, calculating the center point coordinates of the deck lock buttons corresponding to the corner fittings of the target container, and calculating the pose of the container truck, so as to control the trolley, the cart, and the spreader to move for preliminary positioning;
when the target container approaches the target position on the container truck, acquiring a second image, wherein the second image contains at least images of the corner fittings of the target container and of the deck lock buttons corresponding to those corner fittings; extracting features from the second image, calculating respectively the center point coordinates of the corner fittings of the target container and of the corresponding deck lock buttons, and calculating the relative pose of the target container and the container truck, so as to control the trolley, the cart, and the spreader to move corresponding distances for dynamic container alignment.
Preferably, extracting features from the first image includes performing point cloud clustering segmentation on the deck lock button image corresponding to the corner fittings of the target container;
and extracting features from the second image includes performing point cloud clustering segmentation on the corner fitting images of the target container and the corresponding deck lock button images.
Preferably, the method further comprises:
extracting features from the first image includes performing deep learning object detection or instance segmentation on the deck lock button image corresponding to the corner fittings of the target container, extracting the ROI (region of interest) of the deck lock button image, and calculating the 3D coordinates corresponding to the center point of the ROI;
and extracting features from the second image includes performing deep learning object detection or instance segmentation on the corner fitting images of the target container and the corresponding deck lock button images, extracting the ROIs of both, and calculating respectively the 3D coordinates corresponding to the ROI center point of each corner fitting image and the 3D coordinates corresponding to the ROI center point of each corresponding deck lock button image.
Preferably, the second image includes images of the two corner fittings on a long side of the target container and of the two deck lock buttons corresponding to those corner fittings.
Preferably, the method further comprises:
after extracting the ROI of the deck lock button image corresponding to the corner fittings of the target container, calculating the edge profile of the corner fittings of the target container;
after extracting the ROIs of the corner fitting images of the target container and of the corresponding deck lock button images, calculating the edge profiles of the corner fittings of the target container and of the corresponding deck lock buttons.
Preferably, calculating the pose of the container truck includes calculating the pose (Xt, Yt, Zt, θ) of the container truck from the width H of the container truck trailer deck according to the following formulas:
Xt=(Pl1.x+Pl2.x)/2+H/2*cosθ
Yt=(Pl1.y+Pl2.y)/2+H/2*sinθ
Zt=(Pl1.z+Pl2.z)/2
where θ is the heading angle of the container truck, namely the angle between the line connecting the centers of the deck lock buttons corresponding to the two corner fittings on a long side of the target container and the ordinate axis of the cart coordinate system; (Xt, Yt, Zt) are the coordinates of the center point of the trailer deck in the cart coordinate system; Pl1 is the coordinate, in the cart coordinate system, of the center point of one of the two deck lock buttons corresponding to the two corner fittings on the long side of the target container; and Pl2 is the coordinate, in the cart coordinate system, of the center point of the other of those two deck lock buttons.
Preferably, the first image and the second image are acquired by image acquisition devices mounted on the cart leg on the side near the container truck lane, facing the yard.
Preferably, the mounting height of the image acquisition device is flush with the deck of the container truck trailer, and the image acquisition device obtains the height of the deck lock buttons from the acquired first image or second image so as to automatically adjust its own height.
Preferably, the image acquisition device comprises a lidar, a binocular camera, a depth camera and an RGBD camera.
Preferably, the size of the target container includes 20-foot containers and 40-foot containers; when the target container is a 20-foot container, corner fitting images of the 20-foot container are acquired by two of the image acquisition devices, and when the target container is a 40-foot container, corner fitting images of the 40-foot container are acquired by the other two image acquisition devices.
Compared with the prior art, the invention has the following beneficial effects: in the machine vision-based container truck positioning and container alignment detection method provided by the invention, the images of the corner fittings of the target container and of the corresponding deck lock buttons are acquired, the center point coordinates of the corner fittings and of the corresponding deck lock buttons are calculated, and the relative positions of the corner fittings and the corresponding lock buttons are obtained in real time, so that the trolley, the cart, and the spreader are controlled to move corresponding distances for dynamic container alignment;
further, point cloud clustering segmentation, or deep learning object detection or instance segmentation, is performed on the corner fitting images of the target container and the corresponding deck lock button images, the ROIs of the deck lock button images are extracted, the 3D coordinates corresponding to the ROI center points are calculated, and the center or boundary coordinates of the deck lock buttons are estimated from the point clouds within the ROIs, which improves detection accuracy and robustness.
Drawings
FIG. 1 is a flow chart of the machine vision-based container truck positioning and container alignment detection method in an embodiment of the invention;
FIG. 2 is a schematic structural diagram of the machine vision-based container truck positioning and container alignment detection device in an embodiment of the invention;
Fig. 3 is a schematic structural diagram of the container truck trailer of the machine vision-based container truck positioning and container alignment detection device in an embodiment of the invention.
Detailed Description
The invention is further described below with reference to the drawings and examples.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. Accordingly, the specific details are merely exemplary and may vary while remaining within the spirit and scope of the disclosure.
Fig. 1 is a flow chart of the machine vision-based container truck positioning and container alignment detection method in an embodiment of the present invention, fig. 2 is a schematic structural diagram of the machine vision-based container truck positioning and container alignment detection device in an embodiment of the present invention, and fig. 3 is a schematic structural diagram of the container truck trailer of that device. Referring to fig. 1, the invention provides a machine vision-based container truck positioning and container alignment detection method comprising the following steps:
Step 101: acquiring a first image, wherein the first image contains at least an image of the deck lock buttons corresponding to the corner fittings of a target container;
Step 102: extracting features from the first image, calculating the center point coordinates of the deck lock buttons corresponding to the corner fittings of the target container, and calculating the pose of the container truck, so as to control the trolley, the cart, and the spreader to move corresponding distances for preliminary positioning;
Step 103: when the target container approaches the target position on the container truck, acquiring a second image, wherein the second image contains at least images of the corner fittings of the target container and of the corresponding deck lock buttons;
Step 104: extracting features from the second image, calculating respectively the center point coordinates of the corner fittings of the target container and of the corresponding deck lock buttons, and calculating the relative pose of the target container and the container truck, so as to control the trolley, the cart, and the spreader to move corresponding distances for dynamic container alignment.
In a specific implementation, as shown in fig. 2, the first image and the second image are acquired by image acquisition devices C1, C2, C3, and C4, which are mounted on the cart leg on the side near the container truck lane and face the yard. The mounting heights of the image acquisition devices C1, C2, C3, and C4 are flush with the deck of the container truck trailer, and the devices obtain the height of the deck lock buttons from the acquired first or second image so as to automatically adjust their own height. The image acquisition devices C1, C2, C3, and C4 include a laser radar, a binocular camera, a depth camera, and an RGBD camera. In a specific implementation, more than four image acquisition devices may be used to acquire more accurate image information.
The size of the target container includes 20-foot containers and 40-foot containers: when the target container is a 20-foot container, its corner fitting images are acquired by two of the image acquisition devices, and when the target container is a 40-foot container, its corner fitting images are acquired by the other two image acquisition devices. In a specific implementation, as shown in fig. 2 and fig. 3, cameras C1 and C2 are used for container alignment and container pose detection for 40-foot containers while cameras C3 and C4 are used for 20-foot containers, or cameras C3 and C4 are used for 40-foot containers while cameras C1 and C2 are used for 20-foot containers, so that the cameras can acquire complete images of the corner fittings of the target container and of the corresponding deck lock buttons. When the target container is a 20-foot container, the deck lock buttons corresponding to its corner fittings are lock buttons 32, 33, 36, 37, or 31, 301, 35, 391, or 302, 34, 392, 38; when the target container is a 40-foot container, the deck lock buttons corresponding to its corner fittings are lock buttons 31, 34, 35, 38.
Preferably, the second image includes images of the two corner fittings on a long side of the target container and of the two deck lock buttons corresponding to those corner fittings. When the two corner fittings on one long side of the target container are successfully aligned with their two corresponding deck lock buttons, the two corner fittings on the other long side are necessarily aligned with the other two deck lock buttons as well, because the container is a rigid body.
Specifically, as shown in fig. 3, when the target container is a 20-foot container, the deck lock buttons corresponding to the two corner fittings on a long side are lock buttons 32, 33 or 36, 37, or lock buttons 31, 301 or 35, 391, or lock buttons 302, 34 or 392, 38. When the two corner fittings on one long side of the 20-foot target container are aligned with the corresponding deck lock buttons 32, 33 or 36, 37 (or 31, 301 or 35, 391, or 302, 34 or 392, 38), the two corner fittings on the other long side are aligned with the other two deck lock buttons 36, 37 or 32, 33 (or 35, 391 or 31, 301, or 392, 38 or 302, 34). When the target container is a 40-foot container, the deck lock buttons corresponding to the two corner fittings on a long side are lock buttons 31, 34 or 35, 38; when the two corner fittings on one long side of the 40-foot target container are aligned with the corresponding deck lock buttons 31, 34 or 35, 38, the two corner fittings on the other long side are aligned with the other two deck lock buttons 35, 38 or 31, 34.
In a specific implementation, the pose of the container truck is first calculated from the acquired first image, and the trolley, the cart, and the spreader are controlled to move corresponding distances so that the target container approaches the target position on the container truck. In the initial state, since the image acquisition devices are mounted on the cart leg on the side near the container truck lane, if the target container is far from the trailer deck, the first image may contain only the deck lock button image corresponding to the corner fittings of the target container, and no corner fitting image of the target container can be acquired; if the target container is close to the deck, the first image contains both the corner fitting images of the target container and the corresponding deck lock button images. When the target container approaches the target position on the container truck, a second image is acquired, which contains images of the corner fittings of the target container and of the corresponding deck lock buttons; the relative pose of the target container and the container truck is calculated from the second image, and the trolley, the cart, and the spreader are controlled to move corresponding distances for dynamic container alignment.
Extracting features from the first image may include performing point cloud clustering segmentation on the deck lock button image corresponding to the corner fittings of the target container; extracting features from the second image then includes performing point cloud clustering segmentation on the corner fitting images of the target container and the corresponding deck lock button images.
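As a non-limiting illustration of the point cloud clustering step, the following minimal Python sketch (using the Open3D library) clusters a cropped point cloud covering the deck lock button region and returns one 3D centroid per cluster; the function name and the eps and min_points values are illustrative assumptions of this sketch, not part of the claimed method.

    import numpy as np
    import open3d as o3d

    def lock_button_centers_from_cloud(pcd, eps=0.03, min_points=20):
        """Cluster a cropped point cloud and return one 3D centroid per cluster.

        pcd        : o3d.geometry.PointCloud covering the deck lock button region
        eps        : DBSCAN neighbourhood radius in metres (illustrative value)
        min_points : minimum cluster size (illustrative value)
        """
        labels = np.array(pcd.cluster_dbscan(eps=eps, min_points=min_points))
        if labels.size == 0:
            return []
        points = np.asarray(pcd.points)
        centers = []
        for label in range(labels.max() + 1):      # label -1 marks noise and is skipped
            cluster = points[labels == label]
            centers.append(cluster.mean(axis=0))   # centroid of one candidate lock button
        return centers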
Extracting features from the first image may also include performing deep learning object detection or instance segmentation on the deck lock button image corresponding to the corner fittings of the target container, extracting the ROI (Region of Interest) of the deck lock button image, and calculating the 3D coordinates corresponding to the center point of the ROI; extracting features from the second image then includes performing deep learning object detection or instance segmentation on the corner fitting images of the target container and the corresponding deck lock button images, extracting the ROIs of both, and calculating respectively the 3D coordinates corresponding to the ROI center point of each corner fitting image and the 3D coordinates corresponding to the ROI center point of each corresponding deck lock button image.
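As a non-limiting illustration of computing the 3D coordinates corresponding to an ROI center point, the following sketch assumes an organized point map registered to the image pixels (for example from an RGBD or binocular camera) and a bounding box returned by the detector; taking the median of the points inside the ROI, rather than a single pixel, is an assumption of this sketch, consistent with estimating the center from the point cloud within the ROI for robustness.

    import numpy as np

    def roi_center_3d(points_xyz, bbox):
        """Estimate the 3D center of a detected ROI.

        points_xyz : H x W x 3 array of 3D points registered to the image pixels
        bbox       : (x_min, y_min, x_max, y_max) pixel box returned by the detector
        """
        x0, y0, x1, y1 = bbox
        roi = points_xyz[y0:y1, x0:x1].reshape(-1, 3)
        roi = roi[np.isfinite(roi).all(axis=1)]   # drop pixels with invalid depth
        if roi.size == 0:
            return None
        # The median over the ROI points is less sensitive to outliers than the mean
        return np.median(roi, axis=0)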
In machine vision and image processing, a region to be processed that is outlined from the image with a box, circle, ellipse, irregular polygon, or the like is called a region of interest (ROI). Machine vision software such as Halcon, OpenCV, or Matlab commonly provides operators and functions to compute the ROI and carry out the next processing step on the image. The corner fittings of a container, also called corner castings or lifting corners, are mounted at the corners of the container; there are usually four of them, one at each of the four corners. The corner fittings play a key role in lifting, handling, securing, stacking, and fastening the container, and, as the outermost edges of the container body, also protect the whole container.
In a specific implementation, after extracting the ROI of the deck lock button image corresponding to the corner fittings of the target container, the edge profile of the corner fittings of the target container is calculated; after extracting the ROIs of the corner fitting images of the target container and of the corresponding deck lock button images, the edge profiles of the corner fittings and of the corresponding deck lock buttons are calculated.
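As a non-limiting illustration of the edge profile calculation, the following sketch uses OpenCV to extract the outer contour of a corner fitting or deck lock button from a grayscale ROI patch; the Canny thresholds are illustrative values that would be tuned for the actual cameras and lighting.

    import cv2

    def edge_profile(gray_roi, low=50, high=150):
        """Extract the outer edge contour of a corner fitting or deck lock button ROI.

        gray_roi  : single-channel image patch cut out around the ROI
        low, high : Canny edge thresholds (illustrative values)
        """
        edges = cv2.Canny(gray_roi, low, high)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        # Keep the largest contour as the edge profile of the part
        return max(contours, key=cv2.contourArea)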
Referring to fig. 3, when the target container shown in fig. 3 is a 40-foot container, cameras C1 and C2 are used to acquire the first and second images, or cameras C1, C2, C3, and C4 are all used to acquire the first and second images so as to obtain the corresponding image data more precisely; point cloud clustering segmentation is performed on the acquired first and second images, the point clouds of the images of lock buttons 35 and 38 are extracted, and their centers Pl1 and Pl2 are calculated.
In a specific implementation, the pose (Xt, Yt, Zt, θ) of the container truck can be calculated from the width H of the container truck trailer deck according to the following formulas:
Xt=(Pl1.x+Pl2.x)/2+H/2*cosθ
Yt=(Pl1.y+Pl2.y)/2+H/2*sinθ
Zt=(Pl1.z+Pl2.z)/2
where θ is the heading angle of the container truck, namely the angle between the line connecting the centers of the deck lock buttons corresponding to the two corner fittings on a long side of the target container and the ordinate axis of the cart coordinate system, that is, the angle between the line connecting the centers Pl1 and Pl2 of lock buttons 35 and 38 and the ordinate axis of the cart coordinate system; (Xt, Yt, Zt) are the coordinates of the center point of the container truck trailer deck in the cart coordinate system; Pl1 is the coordinate of the center point of deck lock button 35 in the cart coordinate system; and Pl2 is the coordinate of the center point of deck lock button 38 in the cart coordinate system.
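The formulas above translate directly into code. The following sketch computes (Xt, Yt, Zt, θ) from the two lock button centers Pl1 and Pl2 and the deck width H; expressing θ with atan2 is an assumption of this sketch, consistent with the stated definition of the heading angle as the angle between the line Pl1-Pl2 and the ordinate axis of the cart coordinate system.

    import math

    def truck_pose(pl1, pl2, deck_width_h):
        """Pose (Xt, Yt, Zt, theta) of the container truck in the cart coordinate system.

        pl1, pl2     : (x, y, z) center points of the two long-side deck lock buttons
        deck_width_h : width H of the container truck trailer deck
        """
        # Heading angle: angle between the line Pl1-Pl2 and the ordinate (y) axis
        theta = math.atan2(pl2[0] - pl1[0], pl2[1] - pl1[1])
        xt = (pl1[0] + pl2[0]) / 2 + deck_width_h / 2 * math.cos(theta)
        yt = (pl1[1] + pl2[1]) / 2 + deck_width_h / 2 * math.sin(theta)
        zt = (pl1[2] + pl2[2]) / 2
        return xt, yt, zt, theta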
From the calculated center points of lock buttons 31, 34, 35, and 38 and the center points of the corner fittings of the 40-foot container at the corresponding camera coordinate system positions Pln and Pcn (n = 1, 2, 3, 4), the relative positions ΔPn = Pln - Pcn = (Δx, Δy, Δz) of the corner fittings of the 40-foot container and the corresponding lock buttons can be obtained in real time, so that the trolley, the cart, and the spreader are controlled to move corresponding distances for dynamic container alignment.
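A non-limiting sketch of this relative position calculation is given below: the lock button centers Pln and the matching corner fitting centers Pcn, expressed in the same camera coordinate system, yield the offsets ΔPn that drive the remaining trolley, cart, and spreader motion.

    import numpy as np

    def corner_lock_offsets(lock_centers, corner_centers):
        """Offsets dPn = Pln - Pcn, one per matched corner fitting / lock button pair.

        lock_centers, corner_centers : sequences of (x, y, z) points matched by index n
        """
        return [np.asarray(pl, dtype=float) - np.asarray(pc, dtype=float)
                for pl, pc in zip(lock_centers, corner_centers)]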
When the target container is a 20-foot container, the pose calculation for the 20-foot target container and the container truck is the same as for the 40-foot container and is not repeated here.
In summary, in the machine vision-based container truck positioning and container alignment detection method provided by the invention, the images of the corner fittings of the target container and of the corresponding deck lock buttons are acquired, the center point coordinates of the corner fittings and of the corresponding deck lock buttons are calculated, and the relative positions of the corner fittings and the corresponding lock buttons are obtained in real time, so that the trolley, the cart, and the spreader are controlled to move corresponding distances for dynamic container alignment;
further, point cloud clustering segmentation, or deep learning object detection or instance segmentation, is performed on the corner fitting images of the target container and the corresponding deck lock button images, the ROIs of the deck lock button images are extracted, the 3D coordinates corresponding to the ROI center points are calculated, and the center or boundary coordinates of the deck lock buttons are estimated from the point clouds within the ROIs, which improves detection accuracy and robustness.
While the invention has been described with reference to the preferred embodiments, it is not intended to limit the invention thereto, and it is to be understood that other modifications and improvements may be made by those skilled in the art without departing from the spirit and scope of the invention, which is therefore defined by the appended claims.

Claims (7)

1. A machine vision-based container truck positioning and container alignment detection method, characterized by comprising the following steps:
acquiring a first image, wherein the first image contains at least an image of the deck lock buttons corresponding to the corner fittings of a target container;
extracting features from the first image, calculating the center point coordinates of the deck lock buttons corresponding to the corner fittings of the target container, and calculating the pose of the container truck, so as to control the trolley, the cart, and the spreader to move corresponding distances for preliminary positioning;
when the target container approaches the target position on the container truck, acquiring a second image, wherein the second image contains at least images of the corner fittings of the target container and of the deck lock buttons corresponding to those corner fittings;
extracting features from the second image, calculating respectively the center point coordinates of the corner fittings of the target container and of the corresponding deck lock buttons, and calculating the relative pose of the target container and the container truck, so as to control the trolley, the cart, and the spreader to move corresponding distances for dynamic container alignment;
wherein calculating the pose of the container truck includes calculating the pose (Xt, Yt, Zt, θ) of the container truck from the width H of the container truck trailer deck according to the following formulas:
Xt=(Pl1.x+Pl2.x)/2+H/2*cosθ
Yt=(Pl1.y+Pl2.y)/2+H/2*sinθ
Zt=(Pl1.z+Pl2.z)/2
wherein θ is the heading angle of the container truck, namely the angle between the line connecting the centers of the deck lock buttons corresponding to the two corner fittings on a long side of the target container and the ordinate axis of the cart coordinate system; (Xt, Yt, Zt) are the coordinates of the center point of the trailer deck in the cart coordinate system; Pl1 is the coordinate, in the cart coordinate system, of the center point of one of the two deck lock buttons corresponding to the two corner fittings on the long side of the target container; and Pl2 is the coordinate, in the cart coordinate system, of the center point of the other of those two deck lock buttons;
the first image and the second image are acquired by an image acquisition device which is mounted on the cart leg on the side near the container truck lane and faces the yard;
the mounting height of the image acquisition device is flush with the deck of the container truck trailer, and the image acquisition device obtains the height of the deck lock buttons from the acquired first image or second image so as to automatically adjust its own height.
2. The machine vision-based container truck positioning and container alignment detection method according to claim 1, wherein
extracting features from the first image includes performing point cloud clustering segmentation on the deck lock button image corresponding to the corner fittings of the target container;
and extracting features from the second image includes performing point cloud clustering segmentation on the corner fitting images of the target container and the corresponding deck lock button images.
3. The machine vision-based container truck positioning and container alignment detection method according to claim 1, wherein
extracting features from the first image includes performing deep learning object detection or instance segmentation on the deck lock button image corresponding to the corner fittings of the target container, extracting the ROI (region of interest) of the deck lock button image, and calculating the 3D coordinates corresponding to the center point of the ROI;
and extracting features from the second image includes performing deep learning object detection or instance segmentation on the corner fitting images of the target container and the corresponding deck lock button images, extracting the ROIs of both, and calculating respectively the 3D coordinates corresponding to the ROI center point of each corner fitting image and the 3D coordinates corresponding to the ROI center point of each corresponding deck lock button image.
4. The machine vision-based container truck positioning and container alignment detection method according to claim 1, wherein the second image includes images of the two corner fittings on a long side of the target container and of the two deck lock buttons corresponding to those corner fittings.
5. The machine vision-based container truck positioning and container alignment detection method according to claim 3, further comprising:
after extracting the ROI of the deck lock button image corresponding to the corner fittings of the target container, calculating the edge profile of the corner fittings of the target container;
after extracting the ROIs of the corner fitting images of the target container and of the corresponding deck lock button images, calculating the edge profiles of the corner fittings of the target container and of the corresponding deck lock buttons.
6. The machine vision-based container truck positioning and container alignment detection method according to claim 1, wherein the image acquisition device includes a laser radar, a binocular camera, a depth camera, and an RGBD camera.
7. The machine vision-based container truck positioning and container alignment detection method according to claim 1, wherein the size of the target container includes 20-foot containers and 40-foot containers; when the target container is a 20-foot container, corner fitting images of the 20-foot container are acquired by two of the image acquisition devices, and when the target container is a 40-foot container, corner fitting images of the 40-foot container are acquired by the other two image acquisition devices.
CN202010921668.5A (filed 2020-09-04) Machine vision-based container truck positioning and container alignment detection method, Active, CN112037283B (en)

Priority Applications (1)

Application Number: CN202010921668.5A
Priority Date / Filing Date: 2020-09-04
Title: Machine vision-based container truck positioning and container alignment detection method (CN112037283B)

Publications (2)

Publication Number Publication Date
CN112037283A CN112037283A (en) 2020-12-04
CN112037283B (en) 2024-04-30

Family

ID=73590553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010921668.5A (Active) CN112037283B (en) Machine vision-based container truck positioning and container alignment detection method

Country Status (1)

Country Link
CN (1) CN112037283B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113140007B (en) * 2021-05-17 2023-12-19 上海驭矩信息科技有限公司 Concentrated point cloud-based set card positioning method and device
CN113460888B (en) * 2021-05-24 2023-11-24 武汉港迪智能技术有限公司 Automatic box grabbing method for gantry crane lifting appliance

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251381A (en) * 2007-12-29 2008-08-27 武汉理工大学 Dual container positioning system based on machine vision
KR20110069205A (en) * 2009-12-17 2011-06-23 한국과학기술원 Apparatus for estimating position and distance of container in container landing system and method thereof
CN108263950A (en) * 2018-02-05 2018-07-10 上海振华重工(集团)股份有限公司 Harbour gantry crane suspender based on machine vision it is automatic case system and method
CN110171779A (en) * 2019-06-26 2019-08-27 中国铁道科学研究院集团有限公司运输及经济研究所 Front handling mobile crane lifts by crane safely control system and control method
CN110902570A (en) * 2019-11-25 2020-03-24 上海驭矩信息科技有限公司 Dynamic measurement method and system for container loading and unloading operation
CN111137279A (en) * 2020-01-02 2020-05-12 广州赛特智能科技有限公司 Port unmanned truck collection station parking method and system
WO2020098933A1 (en) * 2018-11-14 2020-05-22 Abb Schweiz Ag System and method to load a container on a landing target

Similar Documents

Publication Publication Date Title
US11433812B2 (en) Hitching maneuver
EP3683721B1 (en) A material handling method, apparatus, and system for identification of a region-of-interest
CN112037283B (en) Machine vision-based container truck positioning and container alignment detection method
CN112027918B (en) Detection method for preventing lifting of container truck based on machine vision
CN111508023B (en) Laser radar auxiliary container alignment method for unmanned collection card of port
CN110837814B (en) Vehicle navigation method, device and computer readable storage medium
CN111260289A (en) Micro unmanned aerial vehicle warehouse checking system and method based on visual navigation
CN105431370A (en) Method and system for automatically landing containers on a landing target using a container crane
CN106839985A (en) The automatic identification localization method of unmanned overhead traveling crane coil of strip crawl
CN107067439B (en) Container truck positioning and guiding method based on vehicle head detection
US20220189055A1 (en) Item detection device, item detection method, and industrial vehicle
CN112417591A (en) Vehicle modeling method, system, medium and equipment based on holder and scanner
CN111704035B (en) Automatic positioning device and method for container loading and unloading container truck based on machine vision
WO2022121460A1 (en) Agv intelligent forklift, and method and apparatus for detecting platform state of floor stack inventory areas
US11873195B2 (en) Methods and systems for generating landing solutions for containers on landing surfaces
CN105469401B (en) A kind of headchute localization method based on computer vision
CN112581519B (en) Method and device for identifying and positioning radioactive waste bag
JP2021160931A (en) Cargo handling system
US20230068916A1 (en) Forklift and stowage position detecting method for forklift
JP2020190814A (en) Trajectory generation device
CN114283193A (en) Pallet three-dimensional visual positioning method and system
CN111854678A (en) Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision
CN107516328B (en) AGV working point positioning method and system
WO2020212148A1 (en) Method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle, electronic computing device as well as driver assistance system
Park et al. Container chassis alignment and measurement based on vision for loading and unloading containers automatically

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant