CN115180512B - Automatic loading and unloading method and system for container truck based on machine vision - Google Patents

Automatic loading and unloading method and system for container truck based on machine vision

Info

Publication number
CN115180512B
CN115180512B (Application CN202211100899.5A)
Authority
CN
China
Prior art keywords
container
image
coordinate
identification model
keyhole
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211100899.5A
Other languages
Chinese (zh)
Other versions
CN115180512A (en)
Inventor
刘彪
李华章
刘文奇
于怀
毛微
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Gangyi Intelligent Technology Co.,Ltd.
Original Assignee
Hunan Yanmar Information Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Yanmar Information Co ltd filed Critical Hunan Yanmar Information Co ltd
Priority to CN202211100899.5A priority Critical patent/CN115180512B/en
Publication of CN115180512A publication Critical patent/CN115180512A/en
Application granted granted Critical
Publication of CN115180512B publication Critical patent/CN115180512B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 13/00 Other constructional features or details
    • B66C 13/18 Control systems or devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 13/00 Other constructional features or details
    • B66C 13/04 Auxiliary devices for controlling movements of suspended loads, or preventing cable slack
    • B66C 13/08 Auxiliary devices for controlling movements of suspended loads, or preventing cable slack for depositing loads in desired attitudes or positions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 13/00 Other constructional features or details
    • B66C 13/16 Applications of indicating, registering, or weighing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 13/00 Other constructional features or details
    • B66C 13/18 Control systems or devices
    • B66C 13/46 Position indicators for suspended loads or for crane elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Control And Safety Of Cranes (AREA)

Abstract

The application provides a container truck automatic loading and unloading method and system based on machine vision, and the method comprises the following steps: constructing a stock yard coordinate system; acquiring box position information, and acquiring a first image after distributing a lifting appliance above a specified operation area according to the box position information; inputting the first image into a container recognition model to recognize lock holes at four corners of the container and obtain coordinate values of the lock holes; obtaining a first central point coordinate of the container according to the keyhole coordinate value; adjusting the position of the lifting appliance according to the first central point coordinate, and grabbing the container; acquiring a second image, the second image comprising the container truck; inputting the second image into a container truck identification model to identify a frame of the container truck and obtain calibration coordinate values of four corner points of the frame; obtaining a second center point coordinate and an angle offset according to the calibration coordinate value and the corner point coordinate of the standard parking area; and determining the displacement and deflection angle of the lifting appliance according to the second central point coordinate and the angle offset, and placing the container.

Description

Automatic loading and unloading method and system for container truck based on machine vision
Technical Field
The application relates to the field of machine vision and image processing, in particular to a container truck automatic loading and unloading method and system based on machine vision.
Background
The container truck is an important logistics tool for port transfer and loading and unloading. After a conventional container truck arrives at the loading and unloading operation area, its parking position usually deviates somewhat from the standard parking area marked out by the yard. The truck driver fine-tunes the position by eye, moving forward, backward, left and right, while the rail crane driver communicates with the truck driver in real time and coordinates closely; only after the spreader position has been repeatedly confirmed can a single loading or unloading operation be completed. Relying entirely on manual experience, handling efficiency is relatively low, and there is also a risk of scraping between the spreader, the container and the truck frame, which poses many potential safety hazards.
Disclosure of Invention
In view of the above, it is desirable to provide a method and system for automatically loading and unloading container trucks based on machine vision.
A first aspect of the application provides a machine vision based container truck auto-loading and unloading method, the method comprising:
s1, constructing a stock yard coordinate system, wherein one angular point of a stock yard is taken as an original point of the stock yard coordinate system, the length direction of container placement is taken as an X axis, the width direction of container placement is taken as a Y axis, and the stacking direction of the containers is taken as a Z axis;
s2, acquiring box position information, distributing a lifting appliance to a position above a specified operation area according to the box position information, and acquiring a first image;
s3, inputting the first image into a container identification model to identify the lock holes at four corners of the container, and obtaining a lock hole coordinate value corresponding to each lock hole;
s4, obtaining a first central point coordinate of the container according to the keyhole coordinate value;
s5, adjusting the position of the lifting appliance according to the first central point coordinate, and grabbing the container;
s6, acquiring a second image, wherein the second image comprises a container truck;
s7, inputting the second image into a container truck identification model to identify a frame of the container truck and obtain calibration coordinate values of four corner points of the frame;
s8, obtaining a second center point coordinate and an angle offset according to the calibration coordinate value and the corner point coordinate of the standard parking area;
and S9, determining the displacement and deflection angle of the lifting appliance according to the second central point coordinate and the angle offset, and placing the container.
In a possible implementation manner, the obtaining manner of the container identification model includes: acquiring a plurality of training images, wherein the training images comprise a complete container image, and the training images comprise partial images of one or more other containers around each corner point of the container; clipping the training image to obtain a first partial image, a second partial image, a third partial image and a fourth partial image which respectively correspond to four corner points of the container; the method comprises the steps of inputting a plurality of first local images, a plurality of second local images, a plurality of third local images and a plurality of fourth local images into different sub-neural network models respectively, and training to obtain corresponding first lock hole recognition models, second lock hole recognition models, third lock hole recognition models and fourth lock hole recognition models, wherein the container recognition models comprise the first lock hole recognition models, the second lock hole recognition models, the third lock hole recognition models and the fourth lock hole recognition models.
In one possible implementation manner, the inputting the first image into a container recognition model to recognize the lock holes at the four corners of the container, and obtaining the lock hole coordinate value of each lock hole includes: inputting a first image into the first keyhole identification model, the second keyhole identification model, the third keyhole identification model and the fourth keyhole identification model respectively to determine the relative position of the keyhole in the first image; obtaining a first keyhole coordinate value, a second keyhole coordinate value, a third keyhole coordinate value and a fourth keyhole coordinate value according to the relative position of the keyhole in the first image and the coordinate of the acquisition position of the first image; the obtaining of the first center point coordinate of the container according to the key hole coordinate value includes: and obtaining the first central point coordinate according to the first keyhole coordinate value, the second keyhole coordinate value, the third keyhole coordinate value and the fourth keyhole coordinate value.
In one possible implementation, the training images include images taken under different weather conditions, and images of containers at different levels of impairment.
In one possible implementation manner, the adjusting the position and the deflection angle of the spreader according to the second center point coordinate and the angle offset, and placing the container includes: determining the displacement of the lifting appliance according to the second central point coordinate; adjusting the deflection angle of the lifting appliance according to the angle offset; and placing the container according to the displacement and the deflection angle.
A second aspect of the present application provides a machine vision based container truck auto-loading and unloading system, the system comprising:
a rail crane;
the lifting appliance is arranged below the track crane;
the first image acquisition device is arranged on one side of the lifting appliance facing the container;
the second image acquisition device is arranged on a cross beam of the track crane and faces a parking area of the container truck;
a processor for implementing instructions, the processor electrically connected to the spreader, the first image capture device, and the second image capture device; and
a storage device to store the instructions;
wherein the instructions are to be loaded and executed by the processor to: constructing a stock dump coordinate system, wherein one angular point of a stock dump is used as an origin of the stock dump coordinate system, the length direction of container placement is used as an X axis, the width direction of container placement is used as a Y axis, and the container stacking direction is used as a Z axis; acquiring box position information, and acquiring a first image after distributing the lifting appliance to a position above a specified operation area according to the box position information, wherein the first image is acquired by the first image acquisition device; inputting the first image into a container identification model to identify the lockholes at the four corners of the container and obtain a lockhole coordinate value corresponding to each lockhole; obtaining a first central point coordinate of the container according to the keyhole coordinate value; adjusting the position of the spreader according to the first center point coordinate, and grabbing the container; acquiring a second image, the second image comprising a container truck, the second image acquired by the second image acquisition device; inputting the second image into a container truck identification model to identify a frame of the container truck and obtain calibration coordinate values of four corner points of the frame; obtaining a second central point coordinate and an angle offset according to the calibration coordinate value and the corner point coordinate of the standard parking area; and determining the displacement and deflection angle of the lifting appliance according to the second central point coordinate and the angle offset, and placing the container.
In one possible implementation, the instructions further include: acquiring a plurality of training images, wherein the training images comprise a complete container image, and the training images comprise partial images of one or more other containers around each corner point of the container; cutting the training image to obtain a first partial image, a second partial image, a third partial image and a fourth partial image which respectively correspond to four corner points of the container; the method comprises the steps of inputting a plurality of first local images, a plurality of second local images, a plurality of third local images and a plurality of fourth local images into different sub-neural network models respectively, and training to obtain corresponding first lock hole identification models, second lock hole identification models, third lock hole identification models and fourth lock hole identification models, wherein the container identification models comprise the first lock hole identification models, the second lock hole identification models, the third lock hole identification models and the fourth lock hole identification models.
In one possible implementation, the instructions further include: inputting a first image into the first keyhole identification model, the second keyhole identification model, the third keyhole identification model and the fourth keyhole identification model respectively to determine the relative position of the keyhole in the first image; obtaining a first keyhole coordinate value, a second keyhole coordinate value, a third keyhole coordinate value and a fourth keyhole coordinate value according to the relative position of the keyhole in the first image and the coordinate of the acquisition position of the first image; and obtaining the first central point coordinate according to the first keyhole coordinate value, the second keyhole coordinate value, the third keyhole coordinate value and the fourth keyhole coordinate value.
In one possible implementation, the training images include images taken under different weather conditions, and images of containers at different levels of defects.
In one possible implementation, the instructions further include: determining the displacement of the lifting appliance according to the second central point coordinate; adjusting the deflection angle of the lifting appliance according to the angle offset; and placing the container according to the displacement and the deflection angle.
Compared with the prior art, the application has the following beneficial effects:
the first central point coordinate is obtained by identifying the lock hole of the container, the actual position of the container is determined according to the first central point coordinate, the second central point coordinate is obtained by identifying the corner point of the container truck, the displacement and the deflection angle of the container truck relative to a standard parking area are determined according to the second central point coordinate, the grabbed container is placed in the container truck, and the automatic loading and unloading of the container are completed.
Drawings
Fig. 1 is a schematic flow chart of a method for automatically loading and unloading a container truck based on machine vision according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a method for acquiring a container identification model according to an embodiment of the present application.
Fig. 3 is a schematic view of a top surface of a container according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a first partial image according to an embodiment of the present application.
Fig. 5 is a sub-flowchart of step S3 in fig. 1.
Fig. 6 is a schematic diagram of a machine vision based container truck auto-loading system in accordance with an embodiment of the present application.
The following detailed description will further illustrate the present application in conjunction with the above-described figures.
Description of the main elements
100. A container truck automatic loading and unloading system based on machine vision; 10. a rail crane; 20. a spreader; 30. a first image acquisition device; 40. a second image acquisition device; 50. a processor; 60. a storage device.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings. In addition, the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention, and the described embodiments are merely a subset of the embodiments of the present invention, rather than a complete embodiment. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Referring to fig. 1, an embodiment of the present application provides a method for automatically loading and unloading a container truck based on machine vision, which includes the following steps:
and S1, constructing a stock yard coordinate system.
In a possible implementation manner, one corner point of the yard is used as an origin of the yard coordinate system, the length direction of container placement is used as an X axis, the width direction of container placement is used as a Y axis, and the container stacking direction is used as a Z axis.
Further, the yard is divided into a grid with a precision of 10 cm. Combining the grid precision, the relative distance of each grid cell from the origin of the yard coordinate system can be calculated, so any point (x, y, z) in the yard coordinate system can be converted to and from box position information. Specifically, the bay J, row K and tier L can be obtained by the following formulas (1), (2) and (3), respectively, where c1 is the bay length, i.e. the length of a standard 20-foot container, c2 is the width of a standard container, and c3 is the height of a standard container.

J = floor(x / c1) + 1    Formula (1)

K = floor(y / c2) + 1    Formula (2)

L = floor(z / c3) + 1    Formula (3)
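The coordinate-to-box-position conversion described above can be sketched in Python. The floor-plus-one indexing and the metric container dimensions are assumptions for illustration, since the patent gives the formulas only as image placeholders and names the constants only as c1, c2, c3:

```python
import math

# Nominal dimensions of a standard 20-foot container in metres
# (illustrative values; the patent only names them c1, c2, c3).
C1 = 6.058  # bay length (length of a standard 20-foot container)
C2 = 2.438  # container width
C3 = 2.591  # container height

def position_to_bin(x, y, z):
    """Convert a yard coordinate (x, y, z) into (bay J, row K, tier L).

    Indices are assumed to be 1-based: a point inside the first bay
    maps to J = 1.
    """
    j = math.floor(x / C1) + 1
    k = math.floor(y / C2) + 1
    l = math.floor(z / C3) + 1
    return j, k, l
```

The inverse mapping, from box position back to a yard coordinate, follows by multiplying each index back by the corresponding container dimension.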
In one possible implementation, the coordinates mentioned in this embodiment are based on the yard coordinate system.
And S2, acquiring box position data, distributing a lifting appliance to the position above the designated operation area according to the box position data, and acquiring a first image.
It can be understood that, during a container pick-up job, when the container truck reaches the working bay, an information recognition device provided in the truck parking area reads the truck's information. The upper computer obtains the job execution command from the container terminal management system, issues a box-grabbing instruction, and gives the box position information.
In one possible implementation, the box position information includes a bay J, a row K, and a tier L.
In a possible implementation, a first image acquisition device is provided on the side of the spreader facing the container and is used to acquire a first image containing the container. It will be appreciated that the first image may contain an image of only one container, or images of several containers. When the first image contains several containers, the container located closest to the centre of the image is taken as the container of the current job.
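When several containers appear in the first image, the container nearest the image centre is selected as described above. A minimal sketch of that selection, assuming detections are given as (x0, y0, x1, y1) bounding boxes in pixels:

```python
def pick_current_container(detections, image_size):
    """Return the bounding box whose centre is closest to the image centre.

    detections: list of (x0, y0, x1, y1) boxes; image_size: (width, height).
    """
    cx, cy = image_size[0] / 2, image_size[1] / 2

    def dist2(box):
        # Squared distance from the box centre to the image centre.
        x0, y0, x1, y1 = box
        bx, by = (x0 + x1) / 2, (y0 + y1) / 2
        return (bx - cx) ** 2 + (by - cy) ** 2

    return min(detections, key=dist2)
```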
It will be appreciated that only the approximate location of the container can be determined from the box position information; if the spreader were lowered at this position, it might not be aligned with the container and therefore could not lift it.
And S3, inputting the first image into the container identification model to identify the lock holes at four corners of the container, and obtaining the coordinate value of each lock hole.
Please refer to fig. 2, which is a flow chart of the method for acquiring the container identification model.
Step S301, a plurality of training images are obtained, the training images comprise a complete container image, and the training images comprise partial images of one or more other containers around each corner of the container.
It will be appreciated that, to keep model training targeted, each training image typically includes only one complete container image. In addition, so that different keyholes can be accurately identified in different placement scenes when the model is later used, the training images also include partial images of other containers around the container to be worked on.
It can be understood that the top surface of each container is provided with four locking holes, and the four locking holes are respectively positioned at four corner points of the top surface. The lock head on the lifting appliance can be matched with the lock hole to realize the connection of the lifting appliance and the container.
In one possible implementation, the training images include images taken under different weather conditions, and images of containers with different degrees of damage. It can be understood that images captured under strong light, rain, snow and other weather conditions can interfere with image recognition to a certain extent, so enlarging this part of the training set improves the applicability of the model. It can likewise be understood that a container may be dented by collisions during loading and unloading; putting images of containers with different degrees of damage, such as recesses in different places, into the training set also improves the applicability of the model.
Step S302, clipping the training image to obtain a first partial image, a second partial image, a third partial image and a fourth partial image respectively corresponding to four corner points of the container.
In one possible implementation, the training images used for model training are processed to include only one complete container image. The container with the complete image is the container to be worked on. And the complete container image on the training image has four lock holes.
Referring to fig. 3, a schematic view of the top surface of the container is shown. In one possible implementation, the part in the upper left corner in fig. 3 may be cut as the first partial image, i.e., part a shown in the figure. It will be appreciated that other container images around the container to be serviced are omitted from figure 3. In fig. 3, the upper right portion can be cropped to be the second partial image, the lower left portion can be cropped to be the third partial image, and the lower right portion can be cropped to be the fourth partial image (not shown).
Referring to fig. 4, for ease of understanding, fig. 4 shows a possible schematic diagram of the first partial image. In a possible case, other containers are placed in a plurality of areas around the container to be handled, for example, the areas B, C and D are all provided with containers in the figure, and it can be understood that the containers may be at the same level as the container to be handled, or at different levels. In other possible cases, there may be only one or two or even none of the areas B, C and D with containers in them, for example when the containers to be handled are located at the corners of the yard.
It can be understood that the positions of the four lock holes are cut to form different training images, so that the complexity of the model can be reduced, machine learning is facilitated, and the accuracy of the model obtained by targeted learning is higher.
Step S303, inputting the first local image, the second local image, the third local image, and the fourth local image to different sub-neural network models, respectively, and training to obtain a corresponding first keyhole identification model, a corresponding second keyhole identification model, a corresponding third keyhole identification model, and a corresponding fourth keyhole identification model. The container identification model comprises a first lock hole identification model, a second lock hole identification model, a third lock hole identification model and a fourth lock hole identification model.
It can be understood that different submodels are arranged for the lockholes in different positions of the top surface of the container, so that image recognition can be carried out more efficiently, and the coordinates of each lockhole can be extracted quickly. And each sub-model is only responsible for identifying the lock hole at one position, so that the complexity of the model is greatly reduced.
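The corner cropping of step S302 can be sketched as follows. The patch size and the representation of the image as a row-major list of pixel rows are assumptions for illustration:

```python
def crop_corner_patches(image, patch_h, patch_w):
    """Split a top-down container image into four corner patches.

    image: 2-D list of pixel rows. Returns the top-left ("first"),
    top-right ("second"), bottom-left ("third") and bottom-right
    ("fourth") patches, each expected to contain one keyhole.
    """
    h, w = len(image), len(image[0])

    def crop(r0, r1, c0, c1):
        return [row[c0:c1] for row in image[r0:r1]]

    return {
        "first":  crop(0, patch_h, 0, patch_w),
        "second": crop(0, patch_h, w - patch_w, w),
        "third":  crop(h - patch_h, h, 0, patch_w),
        "fourth": crop(h - patch_h, h, w - patch_w, w),
    }
```

Each of the four patch sets would then be fed to its own sub-neural-network for training, as described above.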
Referring to fig. 5, the substeps of step S3 are shown.
And S31, inputting the first image into the first keyhole identification model, the second keyhole identification model, the third keyhole identification model and the fourth keyhole identification model respectively to determine the relative position of the keyhole in the first image.
In one possible implementation, only the top surface of one container is included in the first image.
And S32, obtaining a first keyhole coordinate value, a second keyhole coordinate value, a third keyhole coordinate value and a fourth keyhole coordinate value according to the relative position of the keyhole in the first image and the coordinate of the acquisition position of the first image.
It can be understood that, by inputting the acquired first images into the four keyhole identification models respectively, the positions of the four keyholes can be identified respectively, and the positions of the keyholes in the first images can be determined. Meanwhile, the specific coordinates of each lock hole can be obtained by combining the coordinates of the central point of the first image.
It is understood that the coordinates of the center point of the first image may be obtained by the position of the first image acquisition device, wherein the first image acquisition device is used for acquiring the first image. In a possible implementation manner, the coordinates of the central point of the first image can be determined by the coordinates of the first image capturing device and the lens shift amount.
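The mapping from a keyhole's position in the first image to a yard coordinate can be sketched as a scaled offset from the image centre. The linear metres-per-pixel model is an assumption for illustration; a real system would use a calibrated camera model:

```python
def keyhole_yard_coordinate(rel_px, image_center_xy, metres_per_pixel, image_size):
    """Map a keyhole pixel position to yard (x, y) coordinates.

    rel_px: (u, v) pixel position of the keyhole in the first image.
    image_center_xy: yard coordinate of the image centre, derived from
        the first image acquisition device's position plus any lens shift.
    image_size: (width, height) of the image in pixels.
    """
    u, v = rel_px
    w, h = image_size
    # Offset of the keyhole from the image centre, scaled to metres.
    dx = (u - w / 2) * metres_per_pixel
    dy = (v - h / 2) * metres_per_pixel
    return image_center_xy[0] + dx, image_center_xy[1] + dy
```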
And S4, obtaining the first central point coordinate of the container according to the keyhole coordinate value.
In a possible implementation manner, the first center point coordinate is obtained according to the first keyhole coordinate value, the second keyhole coordinate value, the third keyhole coordinate value and the fourth keyhole coordinate value. It can be understood that the first center point coordinate is located at the coordinate of the center point of the rectangle enclosed by the four lock holes.
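The first centre point of step S4 is the centre of the rectangle enclosed by the four keyholes, i.e. the centroid of the four keyhole coordinates, which can be sketched as:

```python
def container_center(keyholes):
    """Centroid of the four keyhole (x, y) coordinates."""
    xs = [p[0] for p in keyholes]
    ys = [p[1] for p in keyholes]
    return sum(xs) / 4, sum(ys) / 4
```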
And S5, adjusting the position of the lifting appliance according to the first central point coordinate, and grabbing the container.
In one possible implementation, the spreader is moved to the position directly above the first centre point coordinate and then lowered to grab the container.
It will be appreciated that once the centre point of the container is located, the spreader can be aligned with the container. In this way, the lock heads on the spreader drop exactly into the keyholes of the container; after the two engage and lock, the container is fixed to the spreader, and the spreader then lifts to complete the grab.
And S6, acquiring a second image, wherein the second image comprises the container truck.
In one possible implementation, the second image is obtained by a second image acquisition device located on the trolley beam and located towards the parking area of the container truck. The second image includes an image of a container truck located in a container truck parking area.
And S7, inputting the second image into the container truck identification model to identify the frame of the container truck and obtain calibration coordinate values of four corner points of the frame.
In one possible implementation, the container truck identification model can be obtained by deep learning images of the frame of the container truck. The container truck identification model is capable of identifying the frame of the container truck and determining the coordinates of each corner point.
And S8, obtaining a second center point coordinate and an angle offset according to the calibration coordinate value and the corner point coordinate of the standard parking area.
It is understood that, although the position of the standard parking area is fixed, the container truck may be parked with a positional deviation from it. Under the preset program, the rail crane cannot place a container onto a truck that deviates from the standard position. It is therefore necessary to determine the deviation from the coordinates of the current container truck, for example from the calibration point coordinates and the corner point coordinates of the standard parking area. In one possible implementation, the calibration point coordinate is the coordinate of the centre point of the rectangle enclosed by the four corner points of the frame.
In one possible implementation, the deviation further includes an angular offset. It will be appreciated that the container truck may be angularly offset when parked.
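Step S8's second centre point and angle offset can be sketched from the four frame corner points. The corner ordering and the choice of the left-hand edge for the heading are assumptions for illustration:

```python
import math

def frame_pose(corners, standard_corners):
    """Second centre point and angle offset of the truck frame.

    Both inputs are four (x, y) points in the assumed order
    front-left, front-right, rear-left, rear-right.
    """
    # Second centre point: centroid of the four frame corners.
    cx = sum(p[0] for p in corners) / 4
    cy = sum(p[1] for p in corners) / 4

    def heading(pts):
        # Direction of the front-left -> rear-left edge.
        (x0, y0), (x1, y1) = pts[0], pts[2]
        return math.atan2(y1 - y0, x1 - x0)

    angle_offset = heading(corners) - heading(standard_corners)
    return (cx, cy), angle_offset
```

The spreader displacement then follows from the difference between the second centre point and the centre of the standard parking area, and the deflection angle from the angle offset, as in step S9.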
And S9, determining the displacement and deflection angle of the lifting appliance according to the second central point coordinate and the angle offset, and placing the container.
It can be understood that the actual position of the frame of the container truck can be determined by the coordinate of the second center point and the angle offset, and the displacement and the deflection angle of the spreader can be determined according to the actual position and the current position. The container can be accurately placed into the frame according to the displacement and the deflection angle.
Referring to FIG. 6, an embodiment of the present application provides a machine vision-based container truck automatic loading and unloading system 100, which includes a rail crane 10, a spreader 20, a first image capturing device 30, a second image capturing device 40, a processor 50, and a storage device 60.
The rail crane 10 is erected above the containers of the yard. The spreader 20 is arranged below the rail crane 10 and can be moved between different positions along rails on the rail crane 10. The first image capturing device 30 is arranged on the side of the spreader 20 facing the container. The second image capturing device 40 is arranged on the cross beam of the rail crane 10 and faces the parking area of the container truck. The processor 50 is used to execute instructions and is electrically connected to the rail crane 10, the spreader 20, the first image capturing device 30 and the second image capturing device 40. The storage device 60 is used to store the instructions, which are loaded and executed by the processor 50 to perform the steps of the above embodiment of the machine vision-based container truck automatic loading and unloading method.
In summary, the present application identifies the keyholes of a container to obtain the first center point coordinate and, based on it, determines the actual position of the container for grabbing; it then identifies the corner points of the container truck to obtain the second center point coordinate, determines the truck's displacement and deflection relative to the standard parking area, and places the grabbed container into the container truck, completing the automatic loading and unloading of the container.
It should be understood by those skilled in the art that the above embodiments are intended only to illustrate the present application and not to limit it, and that suitable modifications and variations of the above embodiments fall within the scope of the claims of the present application as long as they remain within its spirit and scope.

Claims (1)

1. An automatic loading and unloading method of an automatic loading and unloading system of a container truck based on machine vision is characterized in that,
the system comprises:
a rail crane;
the lifting appliance is arranged below the track crane;
the first image acquisition device is arranged on one side, facing the container, of the lifting appliance;
the second image acquisition device is arranged on a cross beam of the track crane and faces a parking area of the container truck;
the processor is used for realizing instructions and is electrically connected with the track crane, the lifting appliance, the first image acquisition device and the second image acquisition device; and a storage device for storing the instructions; the instructions are for loading and execution by the processor;
the method comprises the following steps:
s1, constructing a storage yard coordinate system, wherein one corner point of the storage yard is taken as the origin of the yard coordinate system, the length direction of container placement is taken as the X axis, the width direction of container placement is taken as the Y axis, and the stacking direction of the containers is taken as the Z axis;
setting the grid precision to 10 cm to divide the storage yard, calculating the relative distance between each grid cell and the origin of the storage yard coordinate system, and, combining the grid precision, converting any point (x, y, z) in the storage yard coordinate system into box position information, a bay position J, a row position K and a tier position L being obtained through formulas (1), (2) and (3) respectively:
J = [formula (1), reproduced in the source only as an image]
K = [formula (2), reproduced in the source only as an image]
L = [formula (3), reproduced in the source only as an image]
wherein c1 is the length of one bay, i.e., the length of a standard 20-foot container, c2 is the width of a standard container, and c3 is the height of a standard container;
s2, acquiring box position information, distributing a lifting appliance to a position above a specified operation area according to the box position information, and acquiring a first image;
s3, inputting the first image into a container identification model to identify the lock holes at four corners of the container and obtain a lock hole coordinate value corresponding to each lock hole, wherein the container identification model comprises a first lock hole identification model, a second lock hole identification model, a third lock hole identification model and a fourth lock hole identification model;
s31, inputting a first image into the first keyhole identification model, the second keyhole identification model, the third keyhole identification model and the fourth keyhole identification model respectively to determine the relative position of the keyhole in the first image;
s32, obtaining a first keyhole coordinate value, a second keyhole coordinate value, a third keyhole coordinate value and a fourth keyhole coordinate value according to the relative position of the keyhole in the first image and the center point coordinate of the first image;
the method for acquiring the container identification model comprises the following steps:
s301, acquiring a plurality of training images, wherein each training image comprises a complete container image together with partial images of one or more other containers around each corner point of the container;
s302, cutting the training image to obtain a first partial image, a second partial image, a third partial image and a fourth partial image which respectively correspond to four corner points of the container;
s303, inputting the first local image, the second local image, the third local image and the fourth local image into different sub-neural network models respectively, and training to obtain a corresponding first lock hole identification model, a corresponding second lock hole identification model, a corresponding third lock hole identification model and a corresponding fourth lock hole identification model;
s4, obtaining a first central point coordinate of the container according to the lock hole coordinate values, wherein the first central point coordinate is a coordinate of a central point of a rectangle surrounded by the four lock holes;
s5, adjusting the position of the lifting appliance according to the first central point coordinate, enabling the lifting appliance to be over against the first central point coordinate, and grabbing the container;
s6, acquiring a second image, wherein the second image comprises a container truck;
s7, inputting the second image into a container truck identification model to identify a frame of the container truck and obtain calibration coordinate values of four corner points of the frame;
s8, obtaining a second center point coordinate and an angle offset according to the calibration coordinate value and the corner point coordinate of the standard parking area;
s9, determining the displacement and deflection angle of the lifting appliance according to the second central point coordinate and the angle offset, and placing the container.
CN202211100899.5A 2022-09-09 2022-09-09 Automatic loading and unloading method and system for container truck based on machine vision Active CN115180512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211100899.5A CN115180512B (en) 2022-09-09 2022-09-09 Automatic loading and unloading method and system for container truck based on machine vision


Publications (2)

Publication Number Publication Date
CN115180512A CN115180512A (en) 2022-10-14
CN115180512B true CN115180512B (en) 2023-01-20

Family

ID=83524418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211100899.5A Active CN115180512B (en) 2022-09-09 2022-09-09 Automatic loading and unloading method and system for container truck based on machine vision

Country Status (1)

Country Link
CN (1) CN115180512B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115773745A (en) * 2022-11-03 2023-03-10 东风商用车有限公司 Unmanned card-gathering alignment method, device, equipment and readable storage medium
CN116109606B (en) * 2023-02-13 2023-12-08 交通运输部水运科学研究所 Container lock pin disassembly and assembly safety management method and system based on image analysis

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11116060A (en) * 1997-10-17 1999-04-27 Hitachi Zosen Corp Container placed condition detection method for cargo handling device
CN205575400U (en) * 2016-04-08 2016-09-14 湖南中铁五新重工有限公司 Automatic hoisting accessory and system
CN108460800A (en) * 2016-12-12 2018-08-28 交通运输部水运科学研究所 Container representation localization method and system
CN109455619A (en) * 2018-12-30 2019-03-12 三海洋重工有限公司 Localization method, device and the suspender controller of container posture
CN110255209A (en) * 2019-07-05 2019-09-20 上海振华重工(集团)股份有限公司 Transport vehicle and container transshipment system
CN110543612A (en) * 2019-06-27 2019-12-06 浙江工业大学 card collection positioning method based on monocular vision measurement
CN113213340A (en) * 2021-05-11 2021-08-06 上海西井信息科技有限公司 Method, system, equipment and storage medium for unloading container truck based on lockhole identification
CN114863250A (en) * 2022-04-06 2022-08-05 淮阴工学院 Container lockhole identification and positioning method, system and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112010177B (en) * 2020-08-31 2022-04-29 上海驭矩信息科技有限公司 Automatic container landing method for ground containers in storage yard
CN113076842B (en) * 2021-03-26 2023-04-28 烟台大学 Method for improving traffic sign recognition accuracy in extreme weather and environment



Similar Documents

Publication Publication Date Title
CN115180512B (en) Automatic loading and unloading method and system for container truck based on machine vision
EP3033293B1 (en) Method and system for automatically landing containers on a landing target using a container crane
US11625854B2 (en) Intelligent forklift and method for detecting pose deviation of container
KR102581928B1 (en) System and method for loading containers into landing target
US11873195B2 (en) Methods and systems for generating landing solutions for containers on landing surfaces
CN111704035B (en) Automatic positioning device and method for container loading and unloading container truck based on machine vision
CN111243016B (en) Automatic container identification and positioning method
EP3418244B1 (en) Loading a container on a landing target
CN115057362A (en) Container loading and unloading guiding method, device, system and equipment and container crane
WO2024093616A1 (en) Unmanned drayage truck alignment method, apparatus, and device, and readable storage medium
CN112037283B (en) Machine vision-based integrated card positioning and box alignment detection method
CN117115249A (en) Container lock hole automatic identification and positioning system and method
CN115586552A (en) Method for accurately secondarily positioning unmanned truck collection under port tyre crane or bridge crane
CN114463751A (en) Corner positioning method and device based on neural network and detection algorithm
CN112580517A (en) Anti-smashing protection system and method for truck head, computer storage medium and gantry crane
CN116935250B (en) Building template size estimation method based on unmanned aerial vehicle shooting
CN113140007B (en) Concentrated point cloud-based set card positioning method and device
CN116243716B (en) Intelligent lifting control method and system for container integrating machine vision
CN116620888A (en) Railway freight container automatic loading and unloading method based on machine vision
CN115771866A (en) Pallet pose identification method and device for unmanned high-position forklift
CN115239797A (en) Positioning method, electronic device and storage medium
CN116242334A (en) Port lock station-oriented vehicle positioning method and system
CN116337043A (en) Positioning method, system, AGV trolley and electronic equipment
CN117151593A (en) Flat-warehouse intelligent visual disc warehouse method, device and system
CN114454617A (en) Code spraying total system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230419

Address after: Room 801, Building 8, Xincheng Science Park, No. 588, Yuelu West Avenue, High-tech Development Zone, Changsha City, Hunan Province, 410000

Patentee after: Hunan Gangyi Intelligent Technology Co.,Ltd.

Address before: Room 801, Building 8, Core City Science Park, No. 588, Yuelu West Avenue, New Development Zone, Changsha, Hunan 410000

Patentee before: Hunan Yanmar Information Co.,Ltd.