CN116620888A - Railway freight container automatic loading and unloading method based on machine vision - Google Patents

Railway freight container automatic loading and unloading method based on machine vision

Info

Publication number
CN116620888A
CN116620888A (application CN202310523152.9A)
Authority
CN
China
Prior art keywords
coordinate system
container
carriage
coordinates
top surface
Prior art date
Legal status
Pending
Application number
CN202310523152.9A
Other languages
Chinese (zh)
Inventor
黄威 (Huang Wei)
曹志俊 (Cao Zhijun)
李恒 (Li Heng)
石先城 (Shi Xiancheng)
张涛 (Zhang Tao)
Current Assignee
Wuhan Guide Intelligent Technology Co ltd
Original Assignee
Wuhan Guide Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Guide Intelligent Technology Co ltd
Priority to CN202310523152.9A
Publication of CN116620888A
Legal status: Pending


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B65: CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G: TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G 67/00: Loading or unloading vehicles
    • B65G 67/02: Loading or unloading land vehicles
    • B65G 65/00: Loading or unloading
    • B65G 65/005: Control arrangements
    • B65G 69/00: Auxiliary measures taken, or devices used, in connection with loading or unloading
    • B65G 2201/00: Indexing codes relating to handling devices, e.g. conveyors, characterised by the type of product or load being conveyed or handled
    • B65G 2201/02: Articles
    • B65G 2201/0235: Containers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 30/00: Transportation of goods or passengers via railways, e.g. energy recovery or reducing air resistance

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Multimedia (AREA)
  • Control And Safety Of Cranes (AREA)

Abstract

The invention provides a machine-vision-based method for the automatic loading and unloading of railway freight containers, comprising the following steps. S1: a PLC is configured on a track crane, and a camera is installed on the main beam of the track crane, above the train loading lane; the camera is calibrated. S2: the upper-top-surface edge corner points of the container and the carriage-top edge corner points are extracted from the video image data acquired by the camera. S3: according to the container type and container position sent by the PLC, the coordinates of the container's upper-top-surface edge corner points in the world coordinate system are determined, and the rotation angle and translation distance of the container and of the carriage relative to the lifting appliance are solved. S4: during loading, the carriage-top edge corner points are extracted by a deep-learning method and it is judged whether a container has already been placed in the carriage. S5: the loading and unloading conditions are communicated to the PLC in real time, and the PLC drives the lifting appliance to pick up and set down containers.

Description

Railway freight container automatic loading and unloading method based on machine vision
Technical Field
The invention relates to the technical field of automated container hoisting operations, and in particular to a machine-vision-based automatic loading and unloading method for railway freight containers.
Background
To improve the efficiency of inland container transport, the transportation industry places ever higher demands on the automation of railway container operations. At present, when containers are transferred between a train and the storage area in a railway container yard, the conventional approach locates the train carriage and the container from radar data to assist in moving containers into and out of the carriage. Because of limits on the real-time performance and accuracy of radar data acquisition, this scheme suffers from low efficiency and low safety.
In summary, to solve the efficiency and safety problems of railway container operations, it is necessary to develop an efficient, safe and complete train positioning system for the railway container loading and unloading scene, making use of existing equipment.
Disclosure of Invention
In view of the above, the invention provides a machine-vision-based method for the automatic loading and unloading of railway freight containers that offers good real-time performance and high precision for grabbing and placing containers.
The technical scheme of the invention is realized as follows. The invention provides a machine-vision-based method for the automatic loading and unloading of railway freight containers, comprising the following steps:
S1: a PLC is configured on a track crane, and a camera is installed on the main beam of the track crane, above the train loading lane; the camera is calibrated to obtain its internal and external parameters, and a world coordinate system is established based on the track crane;
S2: video image data of the working train carriage are collected by the camera, and the upper-top-surface edge corner points of the container and the carriage-top edge corner points are extracted from the video image data by a deep-learning method;
S3: according to the container type and container position sent by the PLC, the coordinates of the container's upper-top-surface edge corner points in the world coordinate system are determined, and the rotation angle and translation distance of the container and of the carriage relative to the lifting appliance are respectively solved;
S4: during loading, the carriage-top edge corner points are extracted by a deep-learning method and it is judged whether a container has already been placed in the carriage; if so, the upper-top-surface edge corner points of the placed container are extracted and their coordinates in the world coordinate system are calculated; otherwise, only the coordinates of the carriage-top edge corner points in the world coordinate system are calculated;
S5: the loading and unloading conditions are communicated to the PLC in real time, and the PLC drives the lifting appliance to pick up and set down containers.
On the basis of the above technical solution, preferably, in step S1 the camera is calibrated by the Zhang Zhengyou camera calibration method to obtain its internal and external parameters. The origin of the world coordinate system is located at the center of the train loading lane; the X-axis direction of the world coordinate system is the horizontal width direction of the train loading lane, the Y-axis direction is the horizontal length direction of the lane, and the Z-axis direction is vertical to the ground. The internal parameter matrix obtained from calibration is

K = | f/dx    0     u0 |
    |  0     f/dy   v0 |
    |  0      0      1 |

where f denotes the focal length of the camera, dx and dy denote the physical width of a single image pixel in the x and y directions respectively, f/dx and f/dy express the focal length in pixels along the x and y axes, and u0 and v0 are the coordinates of the center of the sensor plate in the pixel coordinate system. The camera external parameters are determined by the rotation matrix R and the translation vector T of the camera coordinate system relative to the world coordinate system; in this scheme the externally calibrated parameters are written as E = [R | T].
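As an illustration of the intrinsic matrix K and the extrinsics E described above, the following sketch builds K and projects a world point to pixels; the helper names and example numbers are assumptions, not from the patent:

```python
import numpy as np

def intrinsic_matrix(f, dx, dy, u0, v0):
    # K exactly as in the text: f/dx and f/dy are the focal length in pixels,
    # (u0, v0) is the principal point in pixel coordinates.
    return np.array([[f / dx, 0.0,    u0],
                     [0.0,    f / dy, v0],
                     [0.0,    0.0,    1.0]])

def project(K, R, T, Xw):
    # World point -> camera frame via the extrinsics E = [R | T],
    # then pinhole projection and dehomogenisation.
    Xc = R @ np.asarray(Xw, float) + T
    uvw = K @ Xc
    return uvw[:2] / uvw[2]
```

With R = I and T = 0, a point on the optical axis projects onto the principal point (u0, v0), which gives a quick sanity check of a calibration.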
Preferably, in step S2 the video image data of the working train carriage are collected by the camera, and the upper-top-surface edge corner points of the container and the carriage-top edge corner points are extracted by a deep-learning method, which is either an instance-segmentation method or an object-detection method based on deep learning; a trained deep-learning model extracts the corner points, and the pixel coordinates of each edge corner point are obtained.
Further preferably, in step S3 the coordinates of the upper-top-surface edge corner points of the container in the world coordinate system are determined according to the container type and container position sent by the PLC, and the rotation angle and translation distance of the container and of the carriage relative to the lifting appliance are respectively solved. After the upper-top-surface edge corner points of the container are obtained in step S2, a spatial coordinate system of the container's upper top surface is established. Let the pixel coordinates of the four upper-top-surface edge corner points be B1(X_b1, Y_b1), B2(X_b2, Y_b2), B3(X_b3, Y_b3) and B4(X_b4, Y_b4); let the length of the container be H and its width W, with B1 taken as the origin. The spatial coordinates of the four corner points in the container-top coordinate system are then B_C1(0, 0, 0), B_C2(H, 0, 0), B_C3(0, W, 0) and B_C4(H, W, 0). Combining the camera internal parameters K, the camera external parameters E, and the pixel and spatial coordinates of each corner point, the relative pose between the container-top coordinate system and the world coordinate system is solved.

After the top edge corner points of the train carriage to be operated on are extracted in step S2, their pixel coordinates T1(X_t1, Y_t1), T2(X_t2, Y_t2), T3(X_t3, Y_t3) and T4(X_t4, Y_t4) are obtained, and a planar spatial coordinate system on the train carriage is established, the carriage length being H_t and its width W_t. Taking T1 as the origin of this coordinate system, the coordinates of the carriage-top edge corner points in it are T_C1(0, 0, 0), T_C2(H_t, 0, 0), T_C3(0, W_t, 0) and T_C4(H_t, W_t, 0). Combining the camera internal parameters K, the camera external parameters E, the pixel coordinates of the carriage-top corner points and their coordinates in the carriage coordinate system, the relative pose between the carriage coordinate system and the world coordinate system is solved.
Further preferably, solving the relative pose between the container-top coordinate system and the world coordinate system first converts the coordinates of each upper-top-surface corner point from the container-top coordinate system into the camera coordinate system:

B_xn = R_b * B_Cn + T_b,  n = 1, 2, 3, 4,

where B_xn are the coordinates of the n-th upper-top-surface corner point in the camera coordinate system, B_Cn its coordinates in the container-top coordinate system, R_b the rotation matrix from the container-top coordinate system to the camera coordinate system, and T_b the corresponding translation vector;

the corner coordinates are then converted from the camera coordinate system into the world coordinate system:

B_wn = R^(-1) * (B_xn - T),

where B_wn are the coordinates of the n-th corner point in the world coordinate system, and the rotation R and translation T were obtained during camera calibration. Once the world coordinates of the corner points are known, they are combined in real time with the world coordinates of the lifting-appliance locks to calculate the pose of the container relative to the lifting appliance: its rotation angle, the trolley translation and the cart translation.
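The two conversions above (container-top frame to camera frame, then camera frame to world frame) can be written directly in code. A minimal sketch, assuming R and R_b are given as 3x3 rotation matrices and T and T_b as 3-vectors, as in the text:

```python
import numpy as np

def corners_to_world(corners_top, Rb, Tb, R, T):
    """Apply B_x = Rb @ B_C + Tb (container-top frame -> camera frame),
    then B_w = R^-1 (B_x - T) (camera frame -> world frame)."""
    world = []
    for Bc in corners_top:
        Bx = Rb @ np.asarray(Bc, float) + Tb   # into the camera frame
        Bw = np.linalg.solve(R, Bx - T)        # invert the calibration extrinsics
        world.append(Bw)
    return np.array(world)
```

The same function serves the carriage-top corner points with (R_bt, T_bt) in place of (R_b, T_b).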
Further preferably, after the world coordinates of the upper-top-surface corner points are obtained, the pose of the container relative to the lifting appliance is calculated by combining, in real time, the corner coordinates B_wn with the world coordinates of the lifting-appliance locks provided by the PLC, yielding the rotation angle, the trolley translation and the cart translation of the container relative to the lifting appliance.
Further preferably, solving the relative pose between the planar coordinate system on the train carriage and the world coordinate system first converts the carriage-top corner coordinates from the carriage coordinate system into the camera coordinate system:

B_xtn = R_bt * B_tn + T_bt,  n = 1, 2, 3, 4,

where B_xtn are the coordinates of the n-th carriage-top corner point in the camera coordinate system, B_tn its coordinates in the planar coordinate system on the carriage, R_bt the rotation matrix from the carriage coordinate system to the camera coordinate system, and T_bt the corresponding translation vector;

the carriage-top corner coordinates are then converted from the camera coordinate system into the world coordinate system:

B_wtn = R^(-1) * (B_xtn - T),

where B_wtn are the coordinates of the n-th carriage-top corner point in the world coordinate system, and R and T were obtained during camera calibration. Once the world coordinates of the carriage-top corner points are known, they are combined in real time with the world coordinates of the lifting-appliance locks to calculate the pose of the carriage relative to the lifting appliance: its rotation angle, the trolley translation and the cart translation.
Further preferably, after the world coordinates of the carriage-top corner points are obtained, the pose of the carriage relative to the lifting appliance is calculated by combining, in real time, the corner coordinates B_wtn with the world coordinates of the lifting-appliance locks provided by the PLC, yielding the rotation angle, the trolley translation and the cart translation of the carriage relative to the lifting appliance.
Compared with the prior art, the automatic loading and unloading method for the railway freight container provided by the invention has the following beneficial effects:
(1) After the internal and external camera parameters are obtained by calibration, the edge corner points of the upper top surface of the container (the object to be grabbed) are extracted, and their coordinates are converted into the world coordinate system using the rotation and translation obtained from the external calibration. Together with the real-time attitude and position coordinates of the lifting appliance sent by the PLC, the relative positions of the container, the carriage and the lifting appliance can be returned to the PLC in real time; the PLC controls the lifting appliance using this information, realizing the grabbing and placing of containers;
(2) After preliminary point and surface features are extracted by deep learning, the method combines conventional image-processing and morphological techniques to locate the target corner points precisely, so that the pixel coordinates obtained for the upper-top-surface corner points of the container and the carriage-top corner points are more accurate.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a machine vision based method for automated loading and unloading of railway freight containers in accordance with the present invention;
FIG. 2 is a schematic view of camera mounting locations of an automated method for loading and unloading railway freight containers based on machine vision according to the present invention;
FIG. 3 is a view of the relative position of a container and a carriage of an automated method for loading and unloading railway freight containers based on machine vision according to the present invention;
fig. 4 is a schematic diagram of a machine vision based method for automated handling of railway freight containers according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will clearly and fully describe the technical aspects of the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
As shown in fig. 1-2, the invention provides an automatic loading and unloading method of a railway freight container based on machine vision, which comprises the following steps:
s1: a PLC is configured on a track crane, a camera is installed on a main beam of the track crane and positioned above a train loading lane, and the camera is calibrated to obtain internal parameters and external parameters of the camera; and establishing a world coordinate system based on the track crane.
As shown in FIG. 2, the track crane comprises a main beam, a cart, a trolley, a lifting appliance, a PLC and a camera. The cart spans the train loading lane, and the camera and the trolley are mounted on the main beam. The Y axis of the illustrated world coordinate system is parallel to the length of the train loading lane, and its X axis points across the width of the lane. The cart can travel back and forth along the Y axis, the trolley along the X axis, and the lifting appliance can move along, and rotate about, the Z axis, so that the lifting appliance can be brought from above to directly over the container to be handled.
In this scheme, the camera is calibrated by the Zhang Zhengyou camera calibration method to obtain its internal and external parameters. The origin of the world coordinate system is located at the center of the train loading lane; the X-axis direction is the horizontal width direction of the train loading lane, the Y-axis direction is the horizontal length direction of the lane, and the Z-axis direction is vertical to the ground. The internal parameter matrix obtained from calibration is

K = | f/dx    0     u0 |
    |  0     f/dy   v0 |
    |  0      0      1 |

where f denotes the focal length, dx and dy the physical pixel widths in the x and y directions, f/dx and f/dy the focal length expressed in pixels along the x and y axes, and u0 and v0 the coordinates of the sensor-plate center in the pixel coordinate system. The camera external parameters are determined by the rotation matrix R and the translation vector T of the camera coordinate system relative to the world coordinate system, written as E = [R | T].
S2: video image data of the working train carriage is collected through a camera, and top surface edge corner points and carriage top surface edge corner points of containers in the video image data are extracted through a deep learning method.
In a railway container yard, the operating scenes for a train comprise container loading and container unloading. The feature points to be extracted differ between the two scenes; the extracted targets are respectively: 1. the upper-top-surface edge corner points of the container; 2. the carriage-top edge corner points of the train.
In this scheme, to extract the upper-top-surface edge corner points of the container and the carriage-top edge corner points from the video image data, either an instance-segmentation method or an object-detection method based on deep learning is adopted; a trained deep-learning model extracts the corner points and yields the pixel coordinates of each. Deep-learning instance segmentation and object detection are conventional techniques in the field of visual analysis, used to segment and extract the different categories in an image accurately, and open-source implementations are available. By separating the container and the carriage from the background and obtaining the upper top edge of the container and the top edge of the carriage, the intersections of these edges form the container's upper-top-surface corner points and the carriage-top corner points, respectively. As shown in FIG. 3, triangles mark the upper-top-surface corner points of the container and circles mark the carriage-top corner points.
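Before the extracted pixel corners can be matched one-to-one with the model points B_C1..B_C4 or T_C1..T_C4, they need a consistent ordering. The patent does not specify an ordering scheme; a common sum/difference heuristic is sketched below as an assumption:

```python
import numpy as np

def order_quad(pts):
    """Order four corner pixels as (top-left, top-right, bottom-left,
    bottom-right) in image coordinates, where y grows downwards."""
    pts = np.asarray(pts, float)
    s = pts.sum(axis=1)        # x + y: smallest at top-left, largest at bottom-right
    d = pts[:, 0] - pts[:, 1]  # x - y: largest at top-right, smallest at bottom-left
    return np.array([pts[np.argmin(s)], pts[np.argmax(d)],
                     pts[np.argmin(d)], pts[np.argmax(s)]])
```

A fixed ordering makes the pixel-to-model correspondence stable across frames, which the pose solve in step S3 relies on.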
S3: according to the container type of the container and the position of the container sent by the PLC, the coordinates of the edge corner point of the upper top surface of the container under the world coordinate system are determined, and the rotation angle and the translation distance of the container and the carriage relative to the lifting appliance are respectively solved.
Step S3 includes two parts: (1) extracting the upper-top-surface corner points of the container in the video image data, obtaining their pixel coordinates in the pixel coordinate system, and then solving the positional relation between the container pose defined by these corner points and the lifting appliance at its current position; (2) extracting the carriage-top corner points in the video image data, obtaining their pixel coordinates, and then obtaining the positional relation between the carriage-top corner points and the lifting appliance at its current position, so that the lifting appliance can grab the container, or carry a grabbed container and place it into the carriage.
For part (1), after the upper-top-surface edge corner points of the container are obtained in step S2, a spatial coordinate system of the container's upper top surface is established. The pixel coordinates of the four upper-top-surface corner points are B1(X_b1, Y_b1), B2(X_b2, Y_b2), B3(X_b3, Y_b3) and B4(X_b4, Y_b4). Let the length of the container be H and its width W, with B1 taken as the origin; the spatial coordinates of the four corner points in the container-top coordinate system are then B_C1(0, 0, 0), B_C2(H, 0, 0), B_C3(0, W, 0) and B_C4(H, W, 0). Combining the camera internal parameters K, the camera external parameters E, and the pixel and spatial coordinates of each corner point, the relative pose between the container-top coordinate system and the world coordinate system is solved.
The relative pose between the container-top coordinate system and the world coordinate system is solved by first converting the coordinates of each upper-top-surface corner point from the container-top coordinate system into the camera coordinate system: B_xn = R_b * B_Cn + T_b, n = 1, 2, 3, 4, where B_xn are the coordinates of the n-th corner point in the camera coordinate system, B_Cn its coordinates in the container-top coordinate system, R_b the rotation matrix from the container-top coordinate system to the camera coordinate system, and T_b the corresponding translation vector. The rotation R_b and translation T_b can be obtained from the above coordinate pairs using the solvePnP algorithm in OpenCV, for example its P3P or EPnP solvers; solvePnP is a conventional algorithm in the field of image pose estimation.
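For readers without OpenCV at hand, the same planar pose recovery can be sketched via homography decomposition, since all four model points lie in the Z = 0 plane of the container-top coordinate system. This illustrates the principle behind solvePnP for a planar target and is not the patent's implementation:

```python
import numpy as np

def homography(model_xy, pixels):
    # Direct linear transform from four planar point correspondences.
    A = []
    for (X, Y), (u, v) in zip(model_xy, pixels):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def planar_pose(model_xy, pixels, K):
    """Recover (R_b, T_b) for model points lying in the Z = 0 plane."""
    Hn = np.linalg.solve(K, homography(model_xy, pixels))  # K^-1 H
    if Hn[2, 2] < 0:                # fix the scale sign: target in front of camera
        Hn = -Hn
    h1, h2, h3 = Hn[:, 0], Hn[:, 1], Hn[:, 2]
    lam = (np.linalg.norm(h1) + np.linalg.norm(h2)) / 2.0
    r1, r2, t = h1 / lam, h2 / lam, h3 / lam
    Rm = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(Rm)    # snap to the nearest rotation matrix
    return U @ Vt, t
```

With exact correspondences the recovery is exact up to floating-point error; with noisy corner pixels a production system would prefer solvePnP with refinement.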
The corner coordinates are then converted from the camera coordinate system into the world coordinate system: B_wn = R^(-1) * (B_xn - T), where B_wn are the coordinates of the n-th upper-top-surface corner point in the world coordinate system, and the rotation R and translation T were obtained during camera calibration. Once the world coordinates of the corner points are known, they are combined in real time with the world coordinates of the lifting-appliance locks, and the pose of the container relative to the lifting appliance, namely its rotation angle, the trolley translation and the cart translation, is calculated.
As a further refinement, the pose, rotation angle, trolley translation amount and cart translation amount of the container relative to the lifting appliance are calculated from the coordinates B_wn of each edge corner point of the container upper top surface in the world coordinate system, together with the lifting-appliance lock-head coordinates in the world coordinate system provided by the PLC.
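The calculation just described can be sketched as follows. All coordinates are made-up example values; the corner ordering (B_1 → B_2 along the container's long edge) follows the corner definitions used in this method, and equating "cart" with the world X axis and "trolley" with the world Y axis is an assumption:

```python
import numpy as np

# World-frame top corners B_w1..B_w4 of the container (example values, metres)
corners = np.array([[0.20, 0.10, 2.5],
                    [6.25, 0.20, 2.5],
                    [0.16, 2.54, 2.5],
                    [6.21, 2.64, 2.5]])

# Lifting-appliance lock-head centre from the PLC, world frame (example value)
spreader_xy = np.array([3.0, 1.5])

# Rotation angle: heading of the long edge (B_1 -> B_2) vs the world X axis
edge = corners[1, :2] - corners[0, :2]
angle_deg = np.degrees(np.arctan2(edge[1], edge[0]))

# Translation amounts: offset of the container centre from the lock-head centre
centre_xy = corners[:, :2].mean(axis=0)
dx, dy = centre_xy - spreader_xy   # moves along world X and world Y
```

The PLC would receive `angle_deg`, `dx` and `dy` and drive the lifting appliance accordingly.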
For part (2), after the top edge corner points of the target train carriage are extracted in step S2, the pixel coordinates of the carriage top edge corner points are T_1(X_t1, Y_t1), T_2(X_t2, Y_t2), T_3(X_t3, Y_t3) and T_4(X_t4, Y_t4). A planar spatial coordinate system on the train carriage is then established, the corresponding carriage length being H_t and the width W_t. Taking T_1(X_t1, Y_t1) as the origin of the planar spatial coordinate system on the train carriage, the coordinates of the carriage top edge corner points in this coordinate system are T_C1(0, 0, 0), T_C2(H_t, 0, 0), T_C3(0, W_t, 0) and T_C4(H_t, W_t, 0). The relative pose relation between the planar spatial coordinate system on the train carriage and the world coordinate system is then solved by combining the camera intrinsic parameter K and extrinsic parameter E with the pixel coordinates of the carriage top edge corner points and their coordinates in the planar spatial coordinate system on the train carriage.
Similarly, the relative pose relation between the planar spatial coordinate system on the train carriage and the world coordinate system is solved by first converting the coordinates of the carriage top edge corner points from the planar spatial coordinate system on the train carriage into the camera coordinate system: B_xtn = R_bt · B_tn + T_bt, n = 1, 2, 3, 4, wherein B_xtn is the coordinate of each carriage top edge corner point in the camera coordinate system, B_tn is the coordinate of each carriage top edge corner point in the planar spatial coordinate system on the train carriage, R_bt is the rotation matrix from the planar spatial coordinate system on the train carriage to the camera coordinate system, and T_bt is the translation vector from the planar spatial coordinate system on the train carriage to the camera coordinate system. As before, the rotation matrix R_bt and translation vector T_bt can be obtained from the pixel coordinates of the corner points and the coordinates B_tn using the SolvePnP algorithm in OpenCV.
The coordinates of each carriage top edge corner point are then converted from the camera coordinate system into the world coordinate system: B_wtn = R⁻¹(B_xtn − T), wherein B_wtn is the coordinate of each carriage top edge corner point in the world coordinate system, R is the rotation matrix corresponding to the rotation vector obtained when the camera is calibrated, and T is the calibrated translation vector. After the world coordinates of the carriage top edge corner points are obtained, they are combined in real time with the coordinates of the lifting-appliance lock head in the world coordinate system to calculate the pose, rotation angle, trolley translation amount and cart translation amount of the carriage relative to the lifting appliance.
The specific method for calculating the pose, rotation angle, trolley translation amount and cart translation amount of the carriage relative to the lifting appliance is as follows: from the coordinates B_wtn of each carriage top edge corner point in the world coordinate system and the lifting-appliance lock-head coordinates in the world coordinate system provided by the PLC, calculate the rotation angle of the carriage relative to the lifting appliance, the trolley translation amount and the cart translation amount, i.e. the angle by which the lifting appliance must be rotated, the distance the trolley must move along the X axis of the world coordinate system, and the distance the cart must move along the Y axis of the world coordinate system.
S4: during container loading, the carriage top edge corner points are extracted by the deep learning method and it is judged whether a container has already been placed in the carriage. If so, the edge corner points of the placed container's upper top surface are extracted and the coordinates of that top surface in the world coordinate system are calculated; otherwise, only the coordinates of the carriage top edge corner points in the world coordinate system are calculated.
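The branching in step S4 can be sketched as follows; the function name and return labels are hypothetical, and the corner lists stand in for the output of the deep-learning extractor:

```python
def landing_reference(placed_box_corners, car_top_corners):
    """Step S4 sketch: choose which corner set defines the landing surface.

    placed_box_corners -- world coordinates of a container already detected
                          in the carriage, or an empty list if none
    car_top_corners    -- world coordinates of the carriage top edge corners
    """
    if placed_box_corners:   # a container is already placed in the carriage
        return "placed_container_top", placed_box_corners
    return "carriage_top", car_top_corners
```

With an empty first list the carriage top is used; once a first box has been detected in the car, its top surface becomes the reference for placing the next one.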
S5: the loading and unloading working conditions are communicated to the PLC in real time, and the PLC drives the lifting appliance to carry out the container loading and unloading work.
As shown in fig. 3 and fig. 4, for the grabbing action there are three common situations when grabbing a container and placing it into a car, depending on the container sizes used: the target car carries two twenty-foot containers, the target car carries one twenty-foot container, or the target car carries one forty-foot container. These are discussed separately below.
As shown in the upper part of fig. 4, when the target carriage carries two twenty-foot containers, the relative position of the box to be grabbed within the carriage (for example the left box or the right box) is first confirmed according to the PLC instruction. The edge corner points of the container upper top surface in the image are then extracted by the deep learning method, their pixel coordinates are extracted precisely, the position of the target container is determined from the relative positional relation among the edge corner points, and from this the rotation angle, trolley translation amount and cart translation amount of the target container relative to the lifting appliance are determined; the parameters are sent to the PLC, and the PLC controls the track crane to grab the container. Conversely, for the box-placing action, when the track crane moves near the target carriage the PLC triggers carriage top edge corner detection, the train carriage top edge corner points are extracted by the deep learning method, the relative pose between the carriage and the track crane's lifting appliance is calculated, and the PLC controls the lifting appliance to place the box.
As shown in the middle part of fig. 4, when the target carriage carries only one twenty-foot container, the relative positions of the carriage top edge corner points do not need to be confirmed in this working condition; the relative pose is calculated directly after the edge corner points of the container upper top surface are extracted by the deep learning algorithm. Conversely, for the box-placing action, when the track crane moves near the target carriage the PLC triggers carriage top edge corner detection, the train carriage top edge corner points are extracted by the deep learning method, the relative pose between the carriage and the track crane's lifting appliance is calculated, and the PLC controls the lifting appliance to place the box.
As shown in the lower part of fig. 4, when the target carriage carries only one forty-foot container, the relative positions of the carriage top edge corner points likewise do not need to be confirmed; the relative pose is calculated directly after the edge corner points of the container upper top surface are extracted by the deep learning algorithm. Conversely, for the box-placing action, after the track crane completes the grab and moves near the target carriage, the PLC triggers carriage top edge corner detection, the train carriage top edge corner points are extracted by the deep learning method, the relative pose between the carriage and the track crane is calculated, the information is sent to the PLC, and the PLC controls the track crane to place the box.
In addition, for the unloading operation, the track crane is first moved near the target carriage or target container, and the loading and placing procedure described above is then applied. The spatial relationship between the frame of the container currently being handled and the non-target containers and already-placed containers in the target carriage must be acquired in real time, so a safety distance threshold is set to avoid possible collisions during the grabbing process.
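The safety-distance check mentioned above might look like the sketch below; the 0.3 m margin and the axis-aligned footprint model are assumptions for illustration, not values from the patent:

```python
import math

SAFETY_MARGIN = 0.3   # metres; assumed threshold, not a value from the patent

def min_clearance(box_a, box_b):
    """Planar clearance between two axis-aligned footprints.

    Each box is (x_min, y_min, x_max, y_max) in world coordinates;
    returns 0.0 when the footprints overlap.
    """
    dx = max(box_a[0] - box_b[2], box_b[0] - box_a[2], 0.0)
    dy = max(box_a[1] - box_b[3], box_b[1] - box_a[3], 0.0)
    return math.hypot(dx, dy)

def safe_to_grab(target_box, neighbour_boxes, margin=SAFETY_MARGIN):
    """True when every neighbouring container keeps at least `margin` clearance."""
    return all(min_clearance(target_box, nb) >= margin for nb in neighbour_boxes)
```

For example, a box whose neighbour sits 0.5 m away clears a 0.3 m margin and may be grabbed, while one 0.1 m away is refused until the spacing is rechecked.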
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (8)

1. The automatic railway freight container loading and unloading method based on machine vision is characterized by comprising the following steps of:
s1: a PLC is configured on a track crane, a camera is installed on a main beam of the track crane and positioned above a train loading lane, and the camera is calibrated to obtain internal parameters and external parameters of the camera; establishing a world coordinate system based on the track crane;
s2: collecting video image data of the working train carriage through the camera, and extracting the container upper-top-surface edge corner points and the carriage top edge corner points in the video image data through a deep learning method;
s3: according to the container type and container position sent by the PLC, determining the coordinates of the container upper-top-surface edge corner points in the world coordinate system, and respectively solving the rotation angle and translation distance of the container and of the carriage relative to the lifting appliance;
s4: in the loading process of the container, extracting the edge corner point at the top of the carriage and judging whether the container is placed in the carriage or not by a deep learning method, if the container is placed in the carriage, extracting the edge corner point of the upper top surface of the placed container and calculating the coordinates of the top surface of the placed container in a world coordinate system, otherwise, only calculating the coordinates of the edge corner point at the top of the carriage in the world coordinate system;
s5: and the loading and unloading working conditions are communicated with the PLC in real time, and the PLC drives the lifting appliance to realize the boxing and unloading work of the container.
2. The machine vision-based automatic railway freight container loading and unloading method according to claim 1, wherein in step S1 the camera is calibrated by the Zhang Zhengyou calibration method to obtain the camera intrinsic and extrinsic parameters; the origin of the world coordinate system is located at the centre of the train loading lane, the X-axis direction of the world coordinate system is the horizontal width direction of the train loading lane, the Y-axis direction of the world coordinate system is the horizontal length extension direction of the train loading lane, and the Z-axis direction of the world coordinate system is vertical to the ground; the camera intrinsic parameter obtained by intrinsic calibration is K = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]], wherein f denotes the focal length of the camera, dx and dy denote the width of a single pixel in the image in the x direction and y direction respectively, f/dx describes the focal length along the x axis in pixels, f/dy describes the focal length along the y axis in pixels, and u_0 and v_0 denote the coordinates of the centre of the photosensitive plate in the pixel coordinate system; the camera extrinsic parameters are determined by the rotation vector R and translation vector T of the camera coordinate system relative to the world coordinate system, and in this scheme the extrinsic parameters calibrated by the Zhang Zhengyou calibration method are E = [R | T].
3. The machine vision-based automatic railway freight container loading and unloading method according to claim 2, wherein in step S2 video image data of the working train carriage is collected by the camera, and the container upper-top-surface edge corner points and the carriage top edge corner points in the video image data are extracted by a deep learning method, using either an instance segmentation method based on deep learning or a target detection method based on deep learning; the container upper-top-surface edge corner points and the carriage top edge corner points are extracted by the trained deep learning model, and the pixel coordinates of the edge corner points are obtained.
4. The machine vision-based automatic railway freight container loading and unloading method according to claim 3, wherein in step S3 the coordinates of the container upper-top-surface edge corner points in the world coordinate system are determined according to the container type and container position sent by the PLC, and the rotation angle and translation distance of the container and of the carriage relative to the lifting appliance are solved respectively; after the container upper-top-surface edge corner points are obtained in step S2, the container top-surface spatial coordinate system is established, the pixel coordinates of the four edge corner points of the container upper top surface being B_1(X_b1, Y_b1), B_2(X_b2, Y_b2), B_3(X_b3, Y_b3) and B_4(X_b4, Y_b4); letting the length of the container be H and the width be W, and taking B_1 as the origin of the container top-surface spatial coordinate system, the spatial coordinates of the four edge corner points of the container upper top surface in that coordinate system are B_C1(0, 0, 0), B_C2(H, 0, 0), B_C3(0, W, 0) and B_C4(H, W, 0); the relative pose relation between the container top-surface spatial coordinate system and the world coordinate system is solved by combining the camera intrinsic parameter K and extrinsic parameter E with the pixel coordinates and spatial coordinates of the container edge corner points;
after the top edge corner points of the target train carriage are extracted in step S2, the pixel coordinates of the carriage top edge corner points are T_1(X_t1, Y_t1), T_2(X_t2, Y_t2), T_3(X_t3, Y_t3) and T_4(X_t4, Y_t4); a planar spatial coordinate system on the train carriage is then established, the corresponding carriage length being H_t and the width W_t; taking T_1(X_t1, Y_t1) as the origin of the planar spatial coordinate system on the train carriage, the coordinates of the carriage top edge corner points in this coordinate system are T_C1(0, 0, 0), T_C2(H_t, 0, 0), T_C3(0, W_t, 0) and T_C4(H_t, W_t, 0); the relative pose relation between the planar spatial coordinate system on the train carriage and the world coordinate system is solved by combining the camera intrinsic parameter K and extrinsic parameter E with the pixel coordinates of the carriage top edge corner points and their coordinates in the planar spatial coordinate system on the train carriage.
5. The machine vision-based automatic railway freight container loading and unloading method according to claim 4, wherein solving the relative pose relation between the container top-surface spatial coordinate system and the world coordinate system comprises converting the coordinates of the container upper-top-surface edge corner points from the container top-surface spatial coordinate system into the camera coordinate system: B_xn = R_b · B_cn + T_b, n = 1, 2, 3, 4, wherein B_xn is the coordinate of each edge corner point of the container upper top surface in the camera coordinate system, B_cn is the coordinate of each edge corner point of the container upper top surface in the container top-surface spatial coordinate system, R_b is the rotation matrix from the container top-surface spatial coordinate system to the camera coordinate system, and T_b is the translation vector from the container top-surface spatial coordinate system to the camera coordinate system;
the coordinates of each edge corner point of the container upper top surface are then converted from the camera coordinate system into the world coordinate system: B_wn = R⁻¹(B_xn − T), wherein B_wn is the coordinate of each edge corner point of the container upper top surface in the world coordinate system, and the rotation vector R and translation vector T are obtained when the camera is calibrated; after the world coordinates of the container upper-top-surface edge corner points are obtained, they are combined in real time with the coordinates of the lifting-appliance lock head in the world coordinate system to calculate the pose, rotation angle, trolley translation amount and cart translation amount of the container relative to the lifting appliance.
6. The machine vision-based automatic railway freight container loading and unloading method according to claim 5, wherein after the coordinates of each edge corner point of the container upper top surface in the world coordinate system are obtained, the pose, rotation angle, trolley translation amount and cart translation amount of the container relative to the lifting appliance are calculated in real time from the coordinates B_wn of each edge corner point of the container upper top surface in the world coordinate system together with the lifting-appliance lock-head coordinates in the world coordinate system provided by the PLC.
7. The machine vision-based automatic railway freight container loading and unloading method according to claim 4, wherein solving the relative pose relation between the planar spatial coordinate system on the train carriage and the world coordinate system comprises converting the coordinates of the carriage top edge corner points from the planar spatial coordinate system on the train carriage into the camera coordinate system: B_xtn = R_bt · B_tn + T_bt, n = 1, 2, 3, 4, wherein B_xtn is the coordinate of each carriage top edge corner point in the camera coordinate system, B_tn is the coordinate of each carriage top edge corner point in the planar spatial coordinate system on the train carriage, R_bt is the rotation matrix from the planar spatial coordinate system on the train carriage to the camera coordinate system, and T_bt is the translation vector from the planar spatial coordinate system on the train carriage to the camera coordinate system;
the sides of the top of the carriage under the camera coordinate system are then used for the followingConverting the coordinates of the corner points into a world coordinate system:wherein B is wtn For the coordinates of each edge corner point at the top of a carriage in a world coordinate system, a rotation vector R and a translation vector T are obtained when a camera is calibrated; after the coordinates of each edge angular point at the top of the carriage under the world coordinate system are obtained, the position, the rotation angle, the trolley translation and the trolley translation of the carriage relative to the lifting tool are calculated by combining the coordinates of the lifting tool lock head under the world coordinate system in real time.
8. The machine vision-based automatic railway freight container loading and unloading method according to claim 7, wherein after the coordinates of each carriage top edge corner point in the world coordinate system are obtained, the pose, rotation angle, trolley translation amount and cart translation amount of the carriage relative to the lifting appliance are calculated in real time from the coordinates B_wtn of each carriage top edge corner point in the world coordinate system together with the lifting-appliance lock-head coordinates in the world coordinate system provided by the PLC.
CN202310523152.9A 2023-05-10 2023-05-10 Railway freight container automatic loading and unloading method based on machine vision Pending CN116620888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310523152.9A CN116620888A (en) 2023-05-10 2023-05-10 Railway freight container automatic loading and unloading method based on machine vision


Publications (1)

Publication Number Publication Date
CN116620888A true CN116620888A (en) 2023-08-22

Family

ID=87591163



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination