CN114119741A - Shore bridge control method and device based on machine vision - Google Patents


Info

Publication number
CN114119741A
Authority
CN
China
Prior art keywords
container
image
target
truck
box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111275349.2A
Other languages
Chinese (zh)
Inventor
郑智辉
闫威
唐波
郭宸瑞
王硕
董昊天
闫涛
李钊
张伯川
张海荣
赵玲
朱泽林
亓欣媛
常城
朱敏
许敏
张艺佳
武鹏
彭皓
任子建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aerospace Automatic Control Research Institute
Original Assignee
Beijing Aerospace Automatic Control Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aerospace Automatic Control Research Institute filed Critical Beijing Aerospace Automatic Control Research Institute
Priority to CN202111275349.2A priority Critical patent/CN114119741A/en
Priority to PCT/CN2022/072004 priority patent/WO2023070954A1/en
Publication of CN114119741A publication Critical patent/CN114119741A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0025Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06Q50/40
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The application relates to a machine-vision-based shore bridge control method and device, belongs to the technical field of port shore bridge operation assistance, and solves the problems of low truck positioning accuracy and insufficient single/double-container judgment accuracy in existing methods, which lower the efficiency of shore bridge container pick-and-place operations. The method comprises the following steps: calibrating the target parking position of the container truck and an estimate of the height of the container it carries; calculating an initial positioning deviation distance from the detected target rectangular area and the target parking position; cropping the vehicle body image to generate vehicle body sub-images; recognizing box-hole coordinates and characters, or frame guide plate coordinates, with a second target detection model; judging single versus double containers from whether a fusion area exists and from the box-hole coordinates and characters; estimating the container height; calculating a movement deviation distance to guide the truck to the target parking position; and adjusting the spreader form of the shore bridge according to the single/double-container judgment to grab or place the container. Truck positioning accuracy and single/double-container judgment accuracy are high, improving the efficiency of shore bridge container pick-and-place operations.

Description

Shore bridge control method and device based on machine vision
Technical Field
The application relates to the technical field of port shore bridge operation assistance, in particular to a shore bridge control method and device based on machine vision.
Background
The container is an important carrier in modern logistics, and the efficiency with which a port loads and unloads containers directly affects overall transportation efficiency. The quay crane (shore bridge) is the bridge-type shoreside equipment and the key operation tool for unloading containers from ships to the wharf or loading them from the wharf onto ships. During loading and unloading of a cargo ship, the shore bridge gantry remains stationary while the crane driver controls the trolley and spreader by handle to grab containers from, or place containers onto, container trucks in the port. Because the truck driver has no accurate target guidance, each time a grab or place operation is performed on an internal truck, precise operation relies on the crane driver's visual observation, or on guidance from external personnel, once the spreader reaches the position directly above the truck's frame or container. In addition, the crane driver must carefully observe whether the truck currently being worked carries a single container or double containers, and then control the corresponding spreader form. This greatly reduces container handling efficiency and increases the workload of both truck and crane drivers. "Container truck" (truck for short) covers internal trucks, which operate within the container port, and external trucks, which arrive at the port from outside.
Conventionally, laser radar (lidar) scanning is generally used to identify the container or frame on a container truck, but lidar is expensive and single-purpose, and its accuracy cannot be effectively guaranteed.
An existing bridge-crane truck safety positioning method marks a vehicle parking point with a camera in advance. When a vehicle enters the recognition area, an image region is cropped within a set range around the marked parking point, the area of the truck's container or frame is detected and segmented with the Mask-RCNN algorithm, its centre point is extracted, and the Euclidean distance between this centre point and the pre-marked parking point is calculated, so as to guide the vehicle to the exact position.
The prior method has the following problems:
1. The existing method assumes the vehicle has already travelled close to the exact target parking position; a truck whose deviation exceeds the set cropping range cannot be effectively segmented, so accurate guidance fails.
2. The method cannot effectively estimate the height of the container on the current truck, so when the height changes with container specification, guidance errors result.
3. Because the appearance of an internal truck's frame is complex, the method has difficulty segmenting the frame effectively and accurately, which again causes guidance errors.
4. Because the method must segment and binarize trucks or frames under various working conditions and weather, a large amount of contour mask data for trucks or frames must be annotated, at high labour cost; moreover, the segmentation and binarization algorithm is computationally expensive, causing processing delay or additional hardware cost.
5. The method does not explicitly determine the vehicle's required direction of travel, i.e., whether it should advance or reverse.
In the machine-vision part of an existing double-container detection method, a camera photographs the middle of the container to obtain an image of its middle section, and a box-hole recognition model identifies box holes to distinguish single from double containers.
The prior method has the following problems:
1) The method judges double containers solely from box-hole recognition in the middle image, with no handling of missed or false box-hole detections, so double containers can be misjudged and accuracy is reduced.
2) The method cannot guarantee the accuracy of data acquisition, i.e., cannot ensure the acquired image is exactly the middle region of the container, which causes misjudgment.
Disclosure of Invention
In view of the foregoing analysis, embodiments of the present application aim to provide a machine-vision-based shore bridge control method and device, so as to solve the problem that low truck positioning accuracy and insufficient single/double-container judgment accuracy in existing methods lower the efficiency of shore bridge container pick-and-place operations.
In one aspect, an embodiment of the present application provides a machine-vision-based shore bridge control method, including: calibrating the target parking position of the container truck and an estimate of the height of the container it carries; acquiring a vehicle body image and identifying with a first target detection model the target rectangular area of the container or frame in it, and calculating an initial positioning deviation distance from the target rectangular area and the target parking position, wherein the target rectangular area comprises a minimum circumscribed rectangle; cropping the target rectangular area of the container or frame out of the vehicle body image to generate a vehicle body sub-image; identifying with a second target detection model the box-hole coordinates and characters on the container, or the frame guide plate coordinates on the frame, in the vehicle body sub-image; judging single versus double containers from whether the container's target rectangular area is a fusion area and from the box-hole coordinates and characters of the middle sub-region of the vehicle body sub-image; estimating the container height from the distance and position of the box holes or frame guide plates in the lower sub-region of the vehicle body sub-image; calculating a movement deviation distance from the container or frame height and the target straight line generated from the box holes or frame guide plates, and guiding the truck to the target parking position accordingly; and when the spreader of the shore bridge reaches the position directly above the container or frame, adjusting the spreader form according to the single/double-container judgment and performing the grab or place operation.
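The joint single/double-container judgment above can be sketched as a vote over the three cues (fusion area, middle-region box holes, middle-region characters). The function below is an illustrative sketch, not the patent's implementation; the two-of-three voting rule and the detection-count thresholds are assumptions.

```python
def judge_single_double(is_fusion_area, mid_hole_count, mid_char_count):
    """Combine three cues into a single/double-container decision.

    is_fusion_area: True if two 20-ft detections were fused into one area
    mid_hole_count: box holes detected in the middle sub-region (the
                    adjacent inner corner castings of two 20-ft boxes)
    mid_char_count: characters detected in the middle sub-region

    Voting rule and thresholds are illustrative assumptions.
    """
    votes = 0
    if is_fusion_area:
        votes += 1
    if mid_hole_count >= 2:  # inner corner castings visible between boxes
        votes += 1
    if mid_char_count >= 1:  # box markings visible between boxes
        votes += 1
    return "double" if votes >= 2 else "single"
```

A majority vote tolerates one missed or false detection, which is exactly the failure mode of the single-cue prior art.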
The beneficial effects of the above technical solution are as follows: initial truck positioning enlarges the distance range over which a truck can be guided and positioned under the shore bridge, while detection and localization of small targets such as box holes or guide plates raises positioning accuracy for internal trucks, achieving accurate guidance and positioning. Single/double-container judgment combines the number of containers, the box holes, and the text on the container's upper surface; this joint judgment avoids the misjudgments of single-cue methods and effectively improves accuracy. The efficiency of shore bridge container pick-and-place operations is thereby improved.
In a further improvement of the above method, calibrating the target parking position of the truck and the estimate of the loaded container's height further comprises: acquiring in advance a vehicle body image of the truck at the target parking position and identifying it, so as to obtain the image coordinates of the container and its box holes, or of the frame and its guide plates, at the target parking position; and generating a target straight line from the box-hole or frame-guide-plate image coordinates and estimating a height-dependent pixel distance factor.
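The target straight line can be represented by the implicit parameters A, B, C of the line through the two calibrated box-hole centres. A minimal sketch under that assumption (the coordinate values in the test are hypothetical):

```python
def line_through(p1, p2):
    """Return (A, B, C) of the line A*x + B*y + C = 0 through two points,
    e.g. the centres of the two box holes calibrated at the target
    parking position."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1, x1 - x2, x2 * y1 - x1 * y2)
```

The (A, B, C) triple is what the movement-deviation formula later in the description consumes.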
In a further improvement of the above method, acquiring a vehicle body image and identifying the target rectangular area of the container or frame with the first target detection model further comprises: acquiring a plurality of historical vehicle body images from a database and annotating the containers or frames in them; building a first neural network (Yolov5) and training it on the annotated historical vehicle body images to obtain the first target detection model; and acquiring the current vehicle body image in real time, identifying with the first target detection model the target rectangular area of the container or frame for the grab/place judgment, judging from the size of the container's target rectangular area whether it is a 20-foot container, and fusing the target rectangular areas of two 20-foot containers on the same truck into a fusion area.
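The 20-foot check and fusion step might look like the following sketch, where detections are (x1, y1, x2, y2) pixel tuples; the 20-foot pixel-width threshold is an assumed, calibration-dependent value, not one stated in the patent.

```python
def fuse_if_two_20ft(boxes, max_20ft_width_px=450):
    """If exactly two detections, both narrower than the assumed 20-ft
    width threshold, lie on the same truck, fuse them into one fusion
    area; otherwise return the boxes unchanged.
    Returns (boxes_or_fused, is_fusion_area)."""
    def width(b):
        return b[2] - b[0]
    if len(boxes) == 2 and all(width(b) <= max_20ft_width_px for b in boxes):
        xs1, ys1, xs2, ys2 = zip(*boxes)
        return [(min(xs1), min(ys1), max(xs2), max(ys2))], True
    return boxes, False
```

The returned flag feeds directly into the single/double-container judgment.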
In a further improvement of the above method, cropping the target rectangular area of the container or frame out of the vehicle body image to generate a vehicle body sub-image further comprises: cropping the historical or current vehicle body image into a historical or current vehicle body sub-image, wherein when the target rectangular area is a container area, the sub-image comprises a first upper sub-region, a first middle sub-region and a first lower sub-region, with box-hole and character detection performed on the first middle sub-region and box-hole detection performed on the first upper and first lower sub-region images; and when the target rectangular area is a frame area, the sub-image comprises a second upper sub-region and a second lower sub-region, with frame guide plate detection performed on both.
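The sub-region split can be sketched with NumPy slicing. The equal-thirds and equal-halves proportions below are illustrative assumptions, since the patent does not state the exact split ratios:

```python
import numpy as np

def crop_subregions(image, box, kind):
    """Crop the detected target rectangle and split it into sub-regions.

    image: H x W x 3 array; box: (x1, y1, x2, y2) in pixels.
    kind 'container' -> (upper, middle, lower) thirds for box-hole and
    character detection; kind 'frame' -> (upper, lower) halves for
    frame guide plate detection. Split ratios are assumptions.
    """
    x1, y1, x2, y2 = box
    roi = image[y1:y2, x1:x2]
    h = roi.shape[0]
    if kind == "container":
        return roi[: h // 3], roi[h // 3 : 2 * h // 3], roi[2 * h // 3 :]
    return roi[: h // 2], roi[h // 2 :]
```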
In a further improvement of the above method, identifying with the second target detection model the box-hole coordinates on the container or the frame guide plate coordinates on the frame in the vehicle body sub-image further comprises: annotating the box holes or frame guide plates in the historical vehicle body sub-images; building a second neural network (Yolov5) and training it on the annotated historical sub-images to obtain the second target detection model; and identifying with the second target detection model the box holes or frame guide plates in the current vehicle body sub-image and acquiring the box-hole coordinates, characters, or frame guide plate coordinates.
In a further improvement of the above method, a first to a sixth camera are installed on the cross beam of the shore bridge, above the isolation belts on the two opposite sides of the lane, wherein the first to fourth cameras photograph the truck head to confirm vehicle identity and direction of movement, and the fifth and sixth cameras photograph the truck body to calculate the truck's initial positioning deviation distance and movement deviation distance.
In a further improvement of the above method, after calibrating the target parking position of the truck and the estimate of the loaded container's height, the method further comprises: acquiring a plurality of historical truck head images from a database and annotating the truck head and the two-dimensional code in them; building a third neural network (Yolov5) and training it on the annotated historical head images to obtain a third target detection model; acquiring the current head image in real time, identifying with the third target detection model the truck head and the two-dimensional code affixed to it, and confirming the truck's identity code; and connecting over a network, according to the identity code, to the data receiving unit in the corresponding truck cab.
In a further improvement of the above method, calculating the initial positioning deviation distance from the target rectangular area and the target parking position further comprises: calculating the truck's initial positioning deviation distance from the identified target rectangular area of the container or frame in the current vehicle body image and the pre-acquired target parking position; and transmitting the distance over the network to the data receiving unit and displaying it on an LED display screen to guide the truck driver in adjusting the truck's position, wherein the initial positioning deviation distance is calculated by the following formula:
offset = D(y - y₀)
where y is the ordinate of the vehicle area in the current image, y₀ is the ordinate of the pre-acquired target parking position, and D is the height-dependent actual pixel distance factor.
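Numerically, the initial deviation is a signed quantity, so its sign can also tell the driver whether to advance or reverse; the sign convention in this sketch is an assumption:

```python
def initial_offset(y, y0, D):
    """offset = D * (y - y0): signed initial positioning deviation in
    metres, where y is the ordinate of the detected vehicle area, y0 the
    calibrated target ordinate, and D the height-dependent pixel
    distance factor (metres per pixel)."""
    return D * (y - y0)
```

For example, with an assumed D = 0.02 m/px, a detection 150 px past the target gives a deviation of about +3 m, which the LED screen could present as "reverse 3.0 m" under this sign convention.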
In a further improvement of the above method, calculating the movement deviation distance from the container or frame height and the target straight line generated from the box holes or frame guide plates, and guiding the truck to the target parking position accordingly, further comprises calculating the truck's movement deviation distance by the following formula:
offset = E·|A·x₀ + B·y₀ + C| / √(A² + B²)
where (x₀, y₀) is the midpoint of the two currently detected box holes, A, B and C are the parameters of the straight-line equation fitted through the pre-calibrated box holes, and E is the height-dependent actual pixel distance factor; and sending the movement deviation distance over the network to the data receiving unit and displaying it on the LED display, so as to continue guiding the driver in adjusting the truck's position until the movement deviation distance falls below a threshold, at which point positioning guidance is complete.
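The movement deviation is the point-to-line distance from the box-hole midpoint to the pre-calibrated target line, scaled by the pixel distance factor E. A minimal sketch (the test coordinates are hypothetical):

```python
import math

def movement_offset(x0, y0, A, B, C, E):
    """Distance (in metres) from the box-hole midpoint (x0, y0) to the
    target line A*x + B*y + C = 0, scaled by the height-dependent
    pixel distance factor E (metres per pixel)."""
    return E * abs(A * x0 + B * y0 + C) / math.hypot(A, B)
```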
On the other hand, the embodiment of the present application provides a shore bridge control device based on machine vision, including: the system comprises a video processor and a control module, wherein the video processor comprises a calibration module, a primary positioning module, an image cutting module, an identification module, a single-box and double-box judgment module, a height estimation module and a movement deviation distance calculation module, wherein the calibration module is used for calibrating a target parking position of a container truck and the height estimation of the container truck loading container; the initial positioning module is used for acquiring and utilizing a first target detection model to identify a target rectangular area of a container or a frame in a vehicle body image, and calculating an initial positioning deviation distance based on the target rectangular area and the target parking position, wherein the target rectangular area comprises a minimum circumscribed rectangle; the image clipping module is used for clipping the image of the target rectangular area of the container or the frame detected in the vehicle body image to generate a vehicle body sub-image; the recognition module is used for acquiring and recognizing coordinates of a container hole on a container and characters or coordinates of a frame guide plate on a frame in the sub-image of the vehicle body by using a second target detection model; the single-double box judgment module is used for judging whether a target rectangular area of the container in the vehicle body subimage is a fusion area or not, and box hole coordinates and characters of a middle subregion in the vehicle body subimage; the height estimation module is used for acquiring the distance and the position of a box hole or a frame guide plate of a lower subregion image in the subimage of the vehicle body based on the box hole coordinate or the frame guide plate coordinate so as to estimate the 
height of the container; the movement deviation distance calculation module is used for calculating a movement deviation distance from the container or frame height and the target straight line generated from the box holes or frame guide plates, so as to guide the truck to the target parking position accordingly; and the control module is used for adjusting the spreader form of the shore bridge according to the single/double-container judgment when the spreader reaches the position directly above the container or frame, and performing the grab or place operation.
A further improvement of the above device comprises a camera for acquiring a vehicle body image of the truck at the target parking position in advance and acquiring vehicle body images intermittently thereafter; the calibration module further comprises a target position calibration submodule and a height estimation calibration submodule, wherein the target position calibration submodule identifies the vehicle body image of the target parking position and acquires the image coordinates of the container and its box holes, or of the frame and its guide plates, at the target parking position to generate a target straight line; and the height estimation calibration submodule estimates a height-dependent pixel distance factor from the box-hole or frame-guide-plate image coordinates.
In a further improvement of the above device, the initial positioning module comprises a first annotation submodule, the first target detection model and a target rectangle generation submodule, wherein the first annotation submodule acquires a plurality of historical vehicle body images from a database and annotates the containers or frames in them; the first target detection model is obtained by building a first neural network (Yolov5) and training it on the annotated historical vehicle body images; and the target rectangle generation submodule acquires the current vehicle body image in real time, identifies with the first target detection model the target rectangular area of the container or frame for the grab/place judgment, judges from the size of the container's target rectangular area whether it is a 20-foot container, and fuses the target rectangular areas of two 20-foot containers on the same truck into the fusion area.
In a further improvement of the above device, the image cropping module crops the historical or current vehicle body image into a historical or current vehicle body sub-image, wherein when the target rectangular area is a container area, the sub-image comprises a first upper sub-region, a first middle sub-region and a first lower sub-region, with box-hole and character detection performed on the first middle sub-region and box-hole detection performed on the first upper and first lower sub-region images; and when the target rectangular area is a frame area, the sub-image comprises a second upper sub-region and a second lower sub-region, with frame guide plate detection performed on both.
In a further improvement of the above device, the identification module comprises a second annotation submodule, the second target detection model, and a box-hole and frame-guide-plate recognition submodule, wherein the second annotation submodule annotates the box holes or frame guide plates in the historical vehicle body sub-images; the second target detection model is obtained by building a second neural network (Yolov5) and training it on the annotated historical sub-images; and the recognition submodule identifies with the second target detection model the box holes and characters, or the frame guide plates, in the current vehicle body sub-image and acquires the box-hole or frame-guide-plate coordinates.
In a further improvement of the above device, the cameras comprise a first to a sixth camera installed on the cross beam of the shore bridge, above the isolation belts on the two opposite sides of the lane, wherein the first to fourth cameras photograph truck head images to confirm vehicle identity and direction of movement; and the fifth and sixth cameras photograph the truck body image to calculate the truck's initial positioning deviation distance and movement deviation distance.
In a further improvement of the above device, the machine-vision-based shore bridge control device further comprises a data receiving unit, and the identification module further comprises a third annotation submodule, the third target detection model, and a head-and-identity confirmation submodule, wherein the third annotation submodule acquires a plurality of historical truck head images from a database and annotates the truck head and the two-dimensional code in them; the third target detection model is obtained by building a third neural network (Yolov5) and training it on the annotated historical head images; the head-and-identity confirmation submodule acquires the current head image in real time, identifies with the third target detection model the truck head and the two-dimensional code affixed to it, and confirms the truck's identity code; and the data receiving unit, located in the truck cab, is connected with the video processor over a network according to the truck identity code.
In a further improvement of the above device, the machine-vision-based shore bridge control device further comprises an LED display screen, and the initial positioning module calculates the truck's initial positioning deviation distance from the identified target rectangular area of the container or frame in the current vehicle body image and the pre-acquired target parking position, by the following formula:
offset = D(y - y₀)
where y is the ordinate of the vehicle area in the current image, y₀ is the ordinate of the pre-acquired target parking position, and D is the height-dependent actual pixel distance factor; the initial positioning deviation distance of the truck is transmitted over the network to the data receiving unit; and the LED display screen, located in the truck cab and communicatively connected with the data receiving unit, displays the initial positioning deviation distance to guide the truck driver in adjusting the truck's position.
In a further improvement of the above device, the truck's movement deviation distance is calculated by the following formula:
offset = E·|A·x0 + B·y0 + C| / √(A² + B²)
wherein (x0, y0) represents the midpoint of the two currently detected box holes, A, B and C represent the parameters of the straight-line equation fitted to the pre-detected box holes, and E represents a height-dependent actual pixel distance factor; the data receiving unit receives the movement deviation distance through the network; and the LED display screen is used for displaying the movement deviation distance so as to continuously guide the driver to adjust the container truck position until the movement deviation distance is smaller than a threshold value, whereupon the container truck positioning guidance is completed.
Compared with the prior art, the application can realize at least one of the following beneficial effects:
1. The application provides a method in which cameras are installed on the cross beam of a shore bridge, initial positioning of the container truck is performed based on detection of the container or the vehicle frame, and accurate positioning is performed through common features of the container and the vehicle frame (box holes and guide plates). The initial positioning increases the distance range over which the container truck can be guided and positioned under the shore bridge, while the detection and positioning of small targets such as box holes or guide plates improves the positioning precision of the inner container truck, thereby realizing accurate guidance and positioning of the inner container truck.
2. The application adopts a height estimation method based on container hole distance. The method effectively distinguishes the guide error caused by different container heights, effectively adapts to the operation working conditions of various container heights by estimating the height of the container truck in the loaded container, and improves the adaptability of the system.
3. Single-container and double-container judgment is performed by jointly considering the number of containers, the box holes and the text information on the container upper surfaces. This joint judgment method avoids the misjudgment caused by relying on a single piece of information and effectively improves the accuracy of single-box and double-box judgment.
In the present application, the above technical solutions may be combined with each other to realize more preferable combination solutions. Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the application, wherein like reference numerals are used to designate like parts throughout.
Fig. 1 is a flowchart of a machine vision-based shore bridge control method according to an embodiment of the present application.
Fig. 2 is a schematic view of an installation arrangement of a camera device according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a container truck image captured by a camera, its recognition result, and the deviation according to an embodiment of the present application.
Fig. 4 is a specific flowchart of a machine vision-based shore bridge control method according to an embodiment of the present application.
FIG. 5 is a schematic diagram of height estimation according to an embodiment of the present application.
Fig. 6 is a schematic view of a vehicle head and two-dimensional code identification according to an embodiment of the application.
Fig. 7 is a schematic diagram of loaded container region clipping and merging according to an embodiment of the present application.
FIG. 8 is a schematic view of frame region cropping and blending according to an embodiment of the present application.
Fig. 9 is a schematic diagram of the operation of the truck in the shore bridge according to the embodiment of the present application.
Fig. 10 is a block diagram of a machine vision-based shore bridge control apparatus according to an embodiment of the present application.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the application and together with the description, serve to explain the principles of the application and not to limit the scope of the application.
The specific embodiment of the application discloses a shore bridge control method based on machine vision. As shown in fig. 1, the machine-vision-based shore bridge control method includes: in step S102, the target parking position of the container truck and the height estimation of the loaded container are calibrated; in step S104, a target rectangular region of the container or the vehicle frame in the vehicle body image is acquired and identified by using a first target detection model, and an initial positioning deviation distance is calculated based on the target rectangular region and the target parking position, wherein the target rectangular region comprises a minimum bounding rectangle; in step S106, image cropping is performed on the target rectangular region of the container or the vehicle frame in the vehicle body image to generate a vehicle body sub-image; in step S108, box hole coordinates and text on the container, or vehicle frame guide plate coordinates on the vehicle frame, in the vehicle body sub-image are acquired and identified by using a second target detection model; in step S110, single-box and double-box judgment is performed based on whether the target rectangular region of the container in the vehicle body sub-image is a fusion region, and on the box hole coordinates and text of the middle sub-region in the vehicle body sub-image; in step S112, the distance and position of the box holes or vehicle frame guide plates in the lower sub-region image of the vehicle body sub-image are acquired based on the box hole coordinates or vehicle frame guide plate coordinates to estimate the height of the container; in step S114, a movement deviation distance is calculated based on the height of the container or the vehicle frame and a target straight line generated from the box holes or vehicle frame guide plates, and the container truck is guided to the target parking position according to the movement deviation distance; and in step S116, when the spreader of the shore bridge reaches the position right above the container or the vehicle frame, the spreader configuration of the shore bridge is adjusted according to the judged single box or double boxes, and the box grabbing or box releasing operation is carried out.
Compared with the prior art, in the machine-vision-based shore bridge control method provided by this embodiment, the initial positioning of the container truck increases the distance range over which the container truck can be guided and positioned under the shore bridge, while the detection and positioning of small targets such as box holes or guide plates improves the positioning precision of the inner container truck, thereby realizing accurate guidance and positioning of the inner container truck. Moreover, single-container and double-container judgment is performed by jointly considering the number of containers, the box holes and the text information on the container upper surfaces. This joint judgment method avoids misjudgment caused by relying on a single piece of information and effectively improves the accuracy of single-box and double-box judgment, thereby improving the box grabbing and releasing efficiency of the quay crane.
Hereinafter, steps S102 to S116 in the machine vision-based shore bridge control method will be described in detail with reference to fig. 1.
Firstly, mounting a first camera, a second camera, a third camera, a fourth camera, a fifth camera and a sixth camera at isolation zones on two opposite sides of a lane on a cross beam of a shore bridge, wherein the first camera to the fourth camera are used for shooting head images of a truck so as to confirm the identity and the moving direction of a vehicle; and shooting the image of the truck body of the truck by using the fifth camera and the sixth camera to calculate the initial positioning deviation distance and the movement deviation distance of the truck. And storing the shot vehicle body image and the shot vehicle head image in a database.
In step S102, the target parking position of the container truck and the height estimation of the loaded container are calibrated. Specifically, this calibration further comprises: acquiring a vehicle body image of the container truck at the target parking position in advance and identifying it, so as to acquire the image coordinates of the container and the corresponding box holes, or of the vehicle frame and the corresponding vehicle frame guide plates, of the container truck at the target parking position; and generating a target straight line L using the box-hole image coordinates or the vehicle frame guide plate image coordinates, and estimating the height-dependent pixel distance factors D and E.
After calibrating the target parking position of the container and the estimate of the height of the container loaded, further comprising: acquiring a plurality of historical locomotive images from a database and labeling the locomotive and the two-dimensional codes in the historical locomotive images; establishing a third neural network Yolov5, and training the third neural network Yolov5 by using the labeled historical locomotive image to obtain a third target detection model; acquiring a current vehicle head image in real time, identifying a truck collection head and a two-dimensional code pasted on the truck collection head in the current vehicle head image by using a third target detection model, and confirming a truck collection identity code; and connecting the data receiving unit in the corresponding truck driver cab through a network according to the truck identity code.
In step S104, a target rectangular region of the container or the frame in the vehicle body image is acquired and identified by using the first target detection model, and an initial positioning deviation distance is calculated based on the target rectangular region and the target parking position, wherein the target rectangular region includes a minimum bounding rectangle. Specifically, the acquiring and identifying a target rectangular area of the container or the frame in the vehicle body image by using the first target detection model further comprises: acquiring a plurality of historical vehicle body images from a database and marking containers or frames in the plurality of historical vehicle body images; establishing a first neural network Yolov5 and training the first neural network Yolov5 by using a plurality of marked historical car body images to obtain a first target detection model; and acquiring a current vehicle body image in real time, identifying a target rectangular area of the container or the vehicle frame in the current vehicle body image by using a first target detection model so as to carry out container grabbing or container releasing judgment, wherein whether the container is 20 feet or not is judged according to the size of the target rectangular area of the container, and the target rectangular areas of two containers with 20 feet on the same truck are fused to form a fusion area. 
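The fusion of the target rectangular regions of two 20-foot containers into one fusion region can be sketched as a bounding-rectangle merge. The gap condition below is a hypothetical stand-in, since the application only says the fusion is done "according to certain conditions":

```python
def fuse_20ft_regions(box_a, box_b, gap_thresh=50):
    """Merge two 20-ft container rectangles (x1, y1, x2, y2) detected on
    the same container truck into one fused region.  gap_thresh is an
    assumed proximity condition, not a value from the application."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # gap between the two boxes along the lane (vertical image) direction
    gap = max(ay1, by1) - min(ay2, by2)
    if gap > gap_thresh:
        return None  # boxes too far apart: do not fuse
    # fused region = minimum bounding rectangle of both boxes
    return (min(ax1, bx1), min(ay1, by1), max(ax2, bx2), max(ay2, by2))
```

A fused result is then treated as one region for the single/double-box judgment of step S110.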
Calculating an initial positioning deviation distance based on the target rectangular area and the target parking position further comprises: calculating the initial positioning deviation distance of the container truck based on the identified target rectangular area of the container or the frame in the current vehicle body image and the pre-acquired target parking position; and transmitting the initial positioning deviation distance of the container truck to a data receiving unit through a network, and displaying through an LED display screen to guide a container truck driver to adjust the position of the container truck, wherein the initial positioning deviation distance of the container truck is calculated through the following formula:
offset=D(y-y0)
wherein y represents the ordinate of the container or frame region in the current vehicle body image, y0 represents the ordinate of the pre-acquired target parking position, and D represents the height-dependent actual pixel distance factor in mm/pixel; in other words, the actual pixel distance factor D differs among the container height levels of 2.4 m, 2.6 m and 2.9 m.
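The initial positioning deviation above is a single scaled difference; a minimal sketch, where the per-height D values in the lookup table are illustrative assumptions, not figures from the application:

```python
# Illustrative mm-per-pixel factors D for each container height level (assumed values).
D_FACTORS = {2.4: 2.1, 2.6: 2.3, 2.9: 2.6}

def initial_positioning_offset(y, y0, height_level):
    """offset = D * (y - y0): signed deviation (mm) of the detected
    container/frame region ordinate y from the calibrated target ordinate y0."""
    d = D_FACTORS[height_level]
    return d * (y - y0)
```

The sign of the result tells the driver which direction to move, and the magnitude how far.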
In step S106, image cropping is performed on the target rectangular region of the container or the vehicle frame in the vehicle body image to generate a vehicle body sub-image. Specifically, this further comprises: cropping the historical vehicle body image or the current vehicle body image into a historical vehicle body sub-image or a current vehicle body sub-image, wherein when the target rectangular region is a container region, the historical or current vehicle body sub-image comprises a first upper sub-region, a first middle sub-region and a first lower sub-region, box hole and text detection is performed on the first middle sub-region, and box hole detection is performed on the first upper sub-region image and the first lower sub-region image; and when the target rectangular region is a vehicle frame region, the historical or current vehicle body sub-image comprises a second upper sub-region and a second lower sub-region, on both of which vehicle frame guide plate detection is performed. Specifically, when the Y coordinate of the center point of the detected rectangular region is smaller than 1/3 of the image height, the cropping ratios are 0.15, 0.4 and 0.45; when the Y coordinate is larger than 1/3 and smaller than 2/3, they are 0.25, 0.4 and 0.35; and when the Y coordinate is larger than 2/3, they are 0.35, 0.4 and 0.25. Because the middle sub-region occupies a fixed proportion of the image, it is vertically offset about the center of the whole image as required, and this cropping method ensures that the box holes and text can be detected.
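The Y-coordinate-dependent cropping ratios above can be expressed as a small selector returning the (upper, middle, lower) vertical proportions:

```python
def crop_ratios(center_y, image_h):
    """Select the (upper, middle, lower) vertical cropping ratios from the
    Y coordinate of the detected rectangle's center point, per the rules
    stated in the text."""
    if center_y < image_h / 3:
        return (0.15, 0.40, 0.45)
    elif center_y < 2 * image_h / 3:
        return (0.25, 0.40, 0.35)
    else:
        return (0.35, 0.40, 0.25)
```

The middle ratio is always 0.40, so the middle sub-region keeps a fixed proportion and only shifts vertically with the detected region.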
In step S108, the container hole coordinates and the text on the container or the frame guide coordinates on the frame in the vehicle body sub-image are acquired and identified using the second target detection model. Specifically, the obtaining and identifying box hole coordinates on the container or frame guide coordinates on the frame in the vehicle body sub-image by using the second target detection model further includes: marking a box hole or a frame guide plate in a historical automobile body subimage; establishing a second neural network Yolov5 and training a second neural network Yolov5 by using the labeled historical automobile body sub-images to obtain a second target detection model; and identifying a box hole and characters or a frame guide plate in the current vehicle body subimage by using the second target detection model, and acquiring box hole coordinates or frame guide plate coordinates.
In step S110, a single-box and double-box judgment is performed based on whether the target rectangular region of the container in the body sub-image is a fusion region, that is, whether the target rectangular region is two containers of 20 feet, and the box hole coordinates and characters of the middle sub-region in the body sub-image. And calculating a total score according to whether the container area in the container image is two containers with 20 feet, the box holes and the text of the middle subarea. The total score is calculated by the following formula:
score = weight0·R0 + weight1·R1 + weight2·R2
wherein weight0 represents the weight of the container quantity, weight1 the weight of text presence, and weight2 the weight of box-hole presence; R0 indicates whether there are two containers, R1 whether text is present, and R2 whether a box hole is present, each taking the value 1 if present and 0 otherwise. When the total score calculated by the formula is larger than a threshold value, the vehicle-mounted container is judged to be a double container; when the total score is less than or equal to the threshold value, the vehicle-mounted container is judged to be a single container.
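A minimal sketch of this weighted joint judgment; the numeric weights and threshold are assumptions, since the application does not give concrete values:

```python
def double_box_score(two_boxes, has_text, has_hole,
                     weights=(0.5, 0.3, 0.2)):
    """total score = weight0*R0 + weight1*R1 + weight2*R2, where each R
    is 1 when the cue is present and 0 otherwise.  Weights are
    illustrative assumptions."""
    flags = (int(two_boxes), int(has_text), int(has_hole))
    return sum(w * r for w, r in zip(weights, flags))

def is_double_box(two_boxes, has_text, has_hole, threshold=0.6):
    """Judged a double container when the score exceeds the threshold
    (threshold value assumed)."""
    return double_box_score(two_boxes, has_text, has_hole) > threshold
```

Combining three cues this way means no single misdetection (e.g. a missed box hole) can flip the judgment on its own.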
In step S112, the distance and position of the box holes or vehicle frame guide plates in the lower sub-region image of the vehicle body sub-image are acquired based on the box hole coordinates or vehicle frame guide plate coordinates to estimate the height of the container. In particular, height estimation is performed for an inner container truck loaded with a container, considering the individual container height levels 2.4, 2.6 and 2.9. Since the camera position is fixed, the distance from the camera, and hence the box-hole distance in the image, varies with the container height: for example, the box-hole distance in the image is smallest at the 2.4 level and largest at the 2.9 level. Therefore, the height of the loaded container must be estimated. First, the position line of the center points of the two nearest box holes is calculated; then the distance d between the two box holes in the current image is compared with the distance dk between the two box holes in the vehicle body image of the target parking position acquired during height calibration. If d − dk > T0, the level is judged to be 2.9; if d − dk < T1, the level is judged to be 2.4; and if d − dk lies within the range T1 to T0, the level is judged to be 2.6, where T0 = 3 and T1 = 1. In addition, since the height of the vehicle frame is fixed, the vehicle frame height does not need to be estimated.
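The threshold comparison above (T0 = 3, T1 = 1) maps directly onto a small classifier over the box-hole pixel distance:

```python
def estimate_height_level(d, d_k, t0=3.0, t1=1.0):
    """Estimate the container height level from box-hole distances:
    d is the current box-hole distance in the image, d_k the calibrated
    distance at the target parking position; T0 = 3 and T1 = 1 pixels,
    as stated in the text."""
    diff = d - d_k
    if diff > t0:
        return 2.9
    elif diff < t1:
        return 2.4
    else:
        return 2.6
```

The estimated level then selects the corresponding pixel distance factor used in the deviation formulas.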
In step S114, a movement deviation distance is calculated based on the height of the container or the vehicle frame and a target straight line generated from the box holes or vehicle frame guide plates, and the container truck is guided to the target parking position according to the movement deviation distance. Specifically, the movement deviation distance of the container truck is calculated by the following formula:
offset = E·|A·x0 + B·y0 + C| / √(A² + B²)
wherein (x0, y0) represents the midpoint of the two currently detected box holes, A, B and C represent the parameters of the straight-line equation fitted to the pre-detected box holes, and E represents a height-dependent actual pixel distance factor; and the movement deviation distance is sent to the data receiving unit through the network and displayed on the LED display screen so as to continuously guide the driver to adjust the container truck position until the movement deviation distance is smaller than a threshold value, whereupon the container truck positioning guidance is completed.
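The movement deviation formula is the standard point-to-line distance from the box-hole midpoint (x0, y0) to the calibrated target line A·x + B·y + C = 0, scaled by the factor E; a minimal sketch:

```python
import math

def movement_offset(x0, y0, a, b, c, e_factor):
    """Movement deviation: E * |A*x0 + B*y0 + C| / sqrt(A^2 + B^2),
    i.e. the perpendicular pixel distance from the detected box-hole
    midpoint to the calibrated target line, converted to mm by E."""
    return e_factor * abs(a * x0 + b * y0 + c) / math.hypot(a, b)
```

Guidance loops on this value until it falls below the completion threshold.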
In step S116, when the spreader of the quay crane reaches a position right above the container or the vehicle frame, the spreader configuration of the quay crane is adjusted according to the determined single or double containers, and the container grabbing or releasing operation is performed.
In another embodiment of the present application, a shore bridge control apparatus based on machine vision is disclosed. Referring to fig. 10, the machine vision-based shore bridge control apparatus includes: the video processing device comprises a video processor 1002 and a control module 1018, wherein the video processor 1002 comprises a calibration module 1004, a primary positioning module 1006, an image cropping module 1008, a recognition module 1010, a single-double box judgment module 1012, a height estimation module 1014, a movement deviation distance calculation module 1016, a data receiving unit and an LED display screen.
And a camera for acquiring a vehicle body image of a target parking position of the truck in advance and intermittently acquiring the vehicle body image. Referring to fig. 2, the cameras include a first camera 201, a second camera 202, a third camera 203, a fourth camera 204, a fifth camera 205, and a sixth camera 206, which are installed at isolation zones at opposite sides of a lane on a quay crane beam. First to fourth cameras 201 to 204 for capturing images of the head of the container truck to confirm the identity and direction of movement of the vehicle; and a fifth camera 205 and a sixth camera 206 for taking images of the body of the truck to calculate the initial positioning deviation distance and the movement deviation distance of the truck.
A calibration module 1004 for calibrating a target parking position of the truck and an estimate of the height of the truck loading the container. The calibration module further comprises: a target position calibration sub-module and a height estimation calibration sub-module. And the target position calibration submodule is used for identifying the vehicle body image of the target parking position and acquiring the coordinates of the container and the corresponding box hole image or the coordinates of the vehicle frame and the corresponding vehicle frame guide plate image clamped at the target parking position so as to generate a target straight line L. And the height estimation calibration submodule estimates a pixel distance factor related to the height by using the box hole image coordinate or the vehicle frame guide plate image coordinate.
And the primary positioning module 1006 is configured to acquire and identify a target rectangular region of the container or the frame in the vehicle body image by using the first target detection model, and calculate a primary positioning deviation distance based on the target rectangular region and the target parking position, where the target rectangular region includes a minimum bounding rectangle. The primary positioning module 1006 includes an annotation sub-module, a first target detection model, and a target rectangle generation sub-module. And the first labeling submodule is used for acquiring a plurality of historical vehicle body images from the database and labeling the containers or the frames in the plurality of historical vehicle body images. And the first target detection model is used for establishing a first neural network Yolov5 and training the first neural network Yolov5 by using the marked plurality of historical car body images to obtain the first target detection model. And the target rectangle generation submodule is used for acquiring the current vehicle body image in real time and identifying a target rectangular area of the container or the frame in the current vehicle body image by using the first target detection model. The primary positioning module 1006 is configured to calculate a primary positioning deviation distance of the container truck based on the identified target rectangular region of the container or the frame in the current vehicle body image and a pre-acquired target parking position, wherein the primary positioning deviation distance of the container truck is calculated by the following formula:
offset=D(y-y0)
wherein y represents the ordinate of the container or frame region in the current vehicle body image, y0 represents the ordinate of the pre-acquired target parking position, and D represents the height-dependent actual pixel distance factor.
And the image cropping module 1008 is used for performing image cropping on the target rectangular area of the container or the frame detected in the vehicle body image to generate a vehicle body sub-image. The image cropping module 1008 is configured to crop the historical vehicle body image or the current vehicle body image into a historical vehicle body sub-image or a current vehicle body sub-image, where when the target rectangular region is a container region, the historical vehicle body sub-image or the current vehicle body sub-image includes a first upper sub-region, a middle sub-region, and a first lower sub-region, where box hole and text detection is performed on the first middle sub-region, and box hole detection is performed on the first upper sub-region image and the first lower sub-region image; and when the target rectangular area is a frame area, the historical vehicle body sub-image or the current vehicle body sub-image comprises a second upper sub-area and a second lower sub-area, wherein the frame guide plate detection is carried out on the second upper sub-area and the second lower sub-area.
And the identification module 1010 is used for acquiring and identifying box hole coordinates and text on the container, or vehicle frame guide plate coordinates on the vehicle frame, in the vehicle body sub-image by using the second target detection model. The recognition module 1010 comprises a third labeling submodule, a third target detection model and a vehicle head and identity confirmation submodule. The third labeling submodule is used for acquiring a plurality of historical vehicle head images from the database and labeling the vehicle head and the two-dimensional code in the historical vehicle head images. The third target detection model is obtained by establishing a third neural network Yolov5 and training it with the labeled historical vehicle head images. The vehicle head and identity confirmation submodule is used for acquiring a current vehicle head image in real time, recognizing the container truck head and the two-dimensional code pasted on it in the current vehicle head image by using the third target detection model, and confirming the container truck identity code. In addition, the recognition module 1010 further comprises a second labeling submodule, a second target detection model, and a box hole and vehicle frame guide plate recognition submodule. The second labeling submodule is used for labeling the box holes or vehicle frame guide plates in the historical vehicle body sub-images; the second target detection model is obtained by establishing a second neural network Yolov5 and training it with the labeled historical vehicle body sub-images; and the box hole and vehicle frame guide plate recognition submodule is used for recognizing the box holes or vehicle frame guide plates in the current vehicle body sub-image by using the second target detection model and acquiring the box hole coordinates or vehicle frame guide plate coordinates.
And a single-double box judging module 1012, configured to perform single-double box judgment based on whether a target rectangular region of a container in the vehicle body subimage is a fusion region, and box hole coordinates and characters of a middle subregion in the vehicle body subimage.
A height estimation module 1014 for acquiring a distance and a position of a box hole or a frame guide of the lower subregion image in the subimage of the body based on the box hole coordinates or the frame guide coordinates to estimate a height of the container.
And a movement deviation distance calculation module 1016 for calculating a movement deviation distance based on the height of the container or the frame and a target straight line generated by the box hole or the frame guide plate to guide the truck to a target parking position according to the movement deviation distance.
And the data receiving unit is located in the container truck cab and connected with the video processor through a network according to the container truck identity code. The data receiving unit receives the initial positioning deviation distance and the movement deviation distance of the container truck through the network. The LED display screen is located in the container truck cab, is in communication connection with the data receiving unit, and is used for displaying the initial positioning deviation distance so as to guide the container truck driver to adjust the position of the container truck, and for displaying the movement deviation distance so as to continuously guide the driver to adjust the container truck position until the movement deviation distance is smaller than the threshold value, whereupon the container truck positioning guidance is completed.
And the control module 1018 is used for adjusting the spreader configuration of the shore bridge according to the judged single box or double boxes to perform the box grabbing or box releasing operation when the spreader of the shore bridge reaches the position right above the container or the vehicle frame. Specifically, the movement deviation distance of the container truck is calculated by the following formula:
offset = E·|A·x0 + B·y0 + C| / √(A² + B²)
wherein (x0, y0) represents the midpoint of the two currently detected box holes, A, B and C represent the parameters of the straight-line equation fitted to the pre-detected box holes, and E represents a height-dependent actual pixel distance factor.
Hereinafter, a machine vision-based shore bridge control method will be described in detail by way of specific examples with reference to fig. 1 to 9.
The technical problem to be solved by this application is the guided positioning of the inner container truck over long distances: initial positioning of the inner container truck increases the positioning distance range, while effective height estimation of the loaded container truck and detection of the common features of the vehicle frame and the container (guide plates or box holes) complete the accurate positioning of the container truck; at the same time, the vehicle head is recognized to confirm the container truck's direction of travel, so as to better guide the container truck driver. On the other hand, after the initial positioning of the container truck, the middle image region of the container can be effectively obtained through image acquisition, and single and double containers are jointly judged by combining the container detection result, the detection result of text on the container, the number of box holes and the distance information, effectively improving the accuracy of single- and double-container judgment.
The application provides a machine-vision-based positioning system for inner container trucks under the shore bridge and a single-box and double-box judgment method, in which intelligent video analysis is used to effectively identify information such as container regions, box holes, text, vehicle frames and vehicle frame guide plates on the inner container truck, and this information is analyzed to realize the positioning of the inner container truck under the shore bridge and the judgment of single and double boxes. On the other hand, the dedicated two-dimensional code identifier on the head of the inner container truck is recognized from the video data to confirm the vehicle identity and judge the vehicle travel direction. Referring to fig. 4, the specific steps are as follows:
1. referring to fig. 2, six cameras are mounted on a cross beam of a shore bridge, four cameras (numbers: 201, 202, 203 and 204) shoot a head part of an inner truck, and the other two cameras (numbers: 205 and 206) shoot a body part of the inner truck, wherein the head part cameras have the functions of confirming the identity and the moving direction of a vehicle, and the body cameras are mainly responsible for calculating the positioning deviation distance of the inner truck and judging single and double boxes.
An LED display and a data receiving unit are installed in the cab of the inner container truck to display its current deviation distance and deviation direction.
2. Image data of inner container trucks parked at the target positions (carrying different containers, or carrying empty frames for different box-placing positions) are acquired in advance. These data are analyzed to obtain the image coordinates of the containers, box holes, frames and frame guide plates at the target positions, together with the actual distances A mm/pixel and E mm/pixel represented by each pixel at the corresponding height.
3. For subsequent height estimation of the inner container truck, image data of a loaded truck driving slowly through the fields of view of cameras 205 and 206 are acquired intermittently in advance, and the Euclidean distance between the image coordinates of the two box holes nearest the camera is calculated at the different positions.
4. When a vehicle enters the fields of view of cameras 205 and 206, the container or empty-frame region on the inner container truck is first identified by the target recognition algorithm YoloV5; the size of the container region is used to distinguish 20-foot from non-20-foot containers, and two 20-foot containers on the same truck are then fused under certain conditions. The grab or place operation is judged from the recognized category (container or frame). The inner container truck region is tracked with the SORT (Simple Online and Realtime Tracking) method.
5. Based on the tracking result, consecutive frames are compared iteratively: when the Euclidean distance between the geometric centers of the regions detected in consecutive frames is below a certain threshold and the number of such frames exceeds a certain count, the vehicle is judged to have parked and is confirmed as the current working vehicle, along with its working lane.
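A minimal sketch of the parking judgment in step 5, in Python; the pixel threshold and frame count here are illustrative placeholders, not values specified in this application:

```python
import math

def update_parking_state(prev_center, cur_center, still_frames,
                         dist_thresh=5.0, frame_thresh=10):
    """One tracking update: count consecutive frames whose region centers
    moved less than dist_thresh pixels; declare the vehicle parked once
    frame_thresh such frames accumulate. Both thresholds are assumed."""
    if math.dist(prev_center, cur_center) < dist_thresh:
        still_frames += 1
    else:
        still_frames = 0  # vehicle moved, restart the stillness count
    return still_frames >= frame_thresh, still_frames
```

The caller feeds the tracked region center of each frame; once the first element of the return value becomes true, the vehicle is treated as the working vehicle.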
6. The head cameras recognize the head of the inner container truck and the two-dimensional code affixed to it, confirm the truck's identity number, and connect over the network to the data receiving unit in the truck cab.
7. By differencing the y coordinate of the tracked box-region center against the target position coordinate acquired in step 2, the initial positioning deviation distance of the container truck is calculated as:
offset=D(y-y0)
where y represents the image ordinate of the current vehicle region, y0 represents the pre-acquired target position, and D represents the actual pixel distance factor.
The result is sent over the network to the data receiving unit of the corresponding driver and shown on the LED display screen, guiding the driver to adjust the position of the inner container truck.
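The initial deviation of step 7 is a single scaled difference of ordinates; a sketch assuming D is the calibrated mm-per-pixel factor obtained in step 2:

```python
def initial_offset(y, y0, d_mm_per_pixel):
    """offset = D * (y - y0): signed initial positioning deviation in mm.
    y is the image ordinate of the tracked box-region center and
    y0 the pre-acquired target-position ordinate."""
    return d_mm_per_pixel * (y - y0)
```

The sign of the result tells the driver which direction to move; its magnitude is the distance shown on the LED display.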
8. After initial positioning, the detected container region or frame region of the inner container truck is cropped from the image. A container region is divided into three sub-region images: the middle sub-region is used to detect box holes and text, while the other two sub-regions are used only to detect box holes. A frame region is cropped into two sub-regions, in both of which the frame guide plates are detected.
9. Based on whether the loaded container region is a fused region, i.e., two 20-foot containers, combined with the box-hole and text detection information of the middle sub-region, single versus double box is judged as follows:
score = weight0 × R0 + weight1 × R1 + weight2 × R2
where weight0 is the container-count weight (0.4), weight1 the text-presence weight (0.2) and weight2 the box-hole weight (0.4); R0 indicates whether there are two containers, R1 whether text is present, and R2 whether box holes are present, each taking the value 1 if present and 0 otherwise. When the total score is greater than a certain threshold (0.6), a double box is determined; otherwise a single box.
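The weighted vote of step 9 can be sketched as follows; the strict ">" at the 0.6 boundary follows the text, and the rounding step is an added guard against floating-point error rather than part of this application:

```python
def is_double_box(two_containers, has_text, has_hole,
                  weights=(0.4, 0.2, 0.4), threshold=0.6):
    """score = weight0*R0 + weight1*R1 + weight2*R2 with R_i in {0, 1};
    a score strictly above the threshold is judged a double box."""
    r = (int(two_containers), int(has_text), int(has_hole))
    score = round(sum(w * v for w, v in zip(weights, r)), 6)
    return score > threshold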
10. The total height of the loaded vehicle is estimated using the box-hole distance and position in the lower sub-region. Vehicle frames are assumed to sit at the same height, so no height estimation is required for them.
11. Combining the height information with the target straight line L generated from the target box holes of the corresponding lane, the movement deviation distance of the inner container truck is calculated precisely:
offset = E × |A·x0 + B·y0 + C| / √(A² + B²)
where (x0, y0) is the midpoint of the two currently detected box holes, A, B and C are the parameters of the line equation fitted to the pre-detected box holes, and E is the height-dependent pixel distance factor.
The result is sent over the network to the data receiving unit of the corresponding driver and shown on the LED display screen, guiding the driver to adjust the position of the inner container truck.
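Step 11 is a standard point-to-line distance scaled to millimetres; a sketch assuming (A, B, C) are the calibrated target-line parameters and E the height-dependent mm-per-pixel factor:

```python
import math

def movement_offset(x0, y0, a, b, c, e_mm_per_pixel):
    """Perpendicular pixel distance from the box-hole midpoint (x0, y0)
    to the target line a*x + b*y + c = 0, converted to mm by E."""
    return e_mm_per_pixel * abs(a * x0 + b * y0 + c) / math.hypot(a, b)
```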
12. Step 11 is iterated with continuous adjustment; when the offset distance is smaller than a certain threshold, the positioning guidance of the inner container truck is complete.
The specific embodiment of the application can be divided into three parts:
1. Calibrating the target parking position of the inner container truck and the estimate of its total height.
(1) Inner container trucks loaded with four different box types (single 20-foot, double 20-foot, 40-foot and 45-foot) and three box height levels (2.4 m, 2.6 m and 2.9 m) are parked in turn at the target position of each lane, i.e., the position where the lifting appliance can accurately grab the box (please refer to fig. 3), and the image coordinates of the container rectangular-region centers and the corresponding box holes are stored for each box type, box height and lane. Likewise, inner trucks carrying empty frames for the different box-placing positions (front 20-foot, rear 20-foot, middle 20-foot, 40-foot and 45-foot) are parked at the target position of each lane, i.e., the position where the lifting appliance can accurately place the box, and the image coordinates of the frame rectangular-region centers and the corresponding frame guide plates are stored for each placing position, box type and lane. The pixel distance A mm/pixel is estimated from the box-hole size (the same procedure is applied to the empty frame), and the target straight line L and the estimated pixel distance factor E mm/pixel are generated.
(2) A truck loaded with a 40-foot container of height level 2.6 m is driven slowly through each lane while image data are acquired intermittently, and the Euclidean distance between the two front box holes at different positions in the same lane is stored as the basis for subsequently judging the height of the loaded truck; refer to fig. 5.
2. Inner container truck guidance and single/double-box discrimination: the identity number of the current truck is confirmed from the real-time video data, its deviation is calculated, and the guidance information is sent over the network to the truck cab for LED display; at the same time single and double boxes on the truck are detected and identified, and the result is sent to the shore bridge control system.
(1) First, the pre-trained target detection algorithms Yolov5-0 (whose model outputs two categories: container on the truck and frame) and Yolov5-1 (whose model outputs three categories: box hole, frame guide plate and text) are loaded. Yolov5-0 identifies the target; the recognized category determines whether the operation is container grabbing or placing, and the rectangle size determines whether the container is 20 feet or not. The 20-foot boxes are then fused under a constraint: if the width and height of the circumscribed rectangle of any two rectangular regions in the image lie within a certain range, the two regions are fused into one. The SORT tracking algorithm then tracks the target; when the tracked distance between two consecutive frames is less than a certain threshold and the number of such frames exceeds a certain count, the vehicle is judged to have stopped and is confirmed as the current working vehicle, with the working lane determined from the region's center point.
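The 20-foot fusion constraint in (1) can be sketched as a check on the circumscribed rectangle of two detections; the size limits are assumptions, since the application only says "within a certain range":

```python
def fuse_20ft_boxes(r1, r2, max_w, max_h):
    """Rectangles are (x1, y1, x2, y2). If the circumscribed rectangle
    of two 20-ft detections fits within max_w x max_h pixels, merge
    them into one region; otherwise leave them unfused (return None)."""
    x1, y1 = min(r1[0], r2[0]), min(r1[1], r2[1])
    x2, y2 = max(r1[2], r2[2]), max(r1[3], r2[3])
    if (x2 - x1) <= max_w and (y2 - y1) <= max_h:
        return (x1, y1, x2, y2)
    return None
```

A successful fusion yields the single region later treated as a candidate double box.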
(2) After calibration, referring to fig. 6, the identity code and driving direction of the inner container truck are confirmed with a pre-trained vehicle-head detection model Yolov5-2 and the two-dimensional-code positioning and recognition algorithm AprilTag, and the data receiving unit of the truck cab is connected accordingly via the network.
(3) From the center coordinate y of the target rectangular region detected by Yolov5-0 and the pre-acquired target position coordinate y0 saved in step 1, the initial positioning deviation is calculated and sent to the data receiving unit in the truck cab, which displays the deviation data and a schematic diagram; when the initial positioning deviation offset is less than a certain threshold, initial positioning of the truck is complete.
(4) Referring to fig. 7 and 8, the cropped images are recognized with the YoloV5-1 target detection algorithm; cropping increases the effective resolution of the recognition samples, so small targets such as box holes, text and guide plates are detected better. YoloV5 is a deep-learning neural-network target recognition algorithm: it learns offline from samples to train a model, then recognizes the specified targets in images acquired in real time.
(5) If the current operation is box grabbing, single/double-box detection proceeds as follows:
First, the detected rectangular region is divided into three sub-regions, which are batch-processed with Yolov5-1 to obtain the detection results.
Next, the detection result of the middle sub-region is analyzed. Box-hole analysis: if fewer than 2 box holes are detected, no box holes are considered found; otherwise the detected holes are sorted by X coordinate, and if the X-direction spacing of the sorted holes lies within a certain threshold range, box holes are considered detected. Text analysis: if text is detected in the middle sub-region and the height and width of its rectangle fall within a certain range, text on the upper surface of the container is considered detected.
Finally, the fusion information, box-hole information and text information are combined to judge single versus double boxes:
score = weight0 × R0 + weight1 × R1 + weight2 × R2
In this application weight0, weight1 and weight2 are weight parameters of 0.4, 0.2 and 0.4 respectively; a double box is determined if the overall score is greater than 0.6, otherwise a single box.
If the current operation is box placing, the rectangular region is cropped into two sub-regions and Yolov5-1 detects the guide plates, yielding the detected guide-plate coordinates.
(6) False detection points are rejected by computing, for any two points in the detection result, the magnitude of the vector they form and its angle with the horizontal direction, and checking whether both lie within a certain range.
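A sketch of the outlier rejection in (6): pairs of detected points are kept only when the vector between them has a plausible magnitude and is close to horizontal. The length range and angle limit are illustrative assumptions:

```python
import math

def filter_hole_pairs(points, norm_range, max_angle_deg):
    """Return point pairs whose connecting vector has magnitude within
    norm_range and whose angle to the horizontal (folded to [0, 90] deg)
    does not exceed max_angle_deg; other pairs are treated as false hits."""
    lo, hi = norm_range
    kept = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            angle = abs(math.degrees(math.atan2(dy, dx)))
            angle = min(angle, 180.0 - angle)  # fold to [0, 90]
            if lo <= math.hypot(dx, dy) <= hi and angle <= max_angle_deg:
                kept.append((points[i], points[j]))
    return kept
```

Since the two box holes of one container lie on a roughly horizontal line a known distance apart, spurious detections rarely survive both checks.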
(7) Height estimation is performed for the loaded inner container truck, since containers come in height levels of 2.4 m, 2.6 m and 2.9 m. The center points of the two nearest box holes are located first, and the distance d between them is compared with the distance dk recorded during height calibration: if d - dk > T0, the level is taken as 2.9; if d - dk < T1, the level is 2.4; if the difference lies between T1 and T0, the level is 2.6, with T0 = 3 and T1 = 1.
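The three-way height classification of (7), with the thresholds T0 = 3 and T1 = 1 taken from the text (d and dk measured in pixels):

```python
def height_level(d, d_k, t0=3.0, t1=1.0):
    """Compare the detected box-hole distance d with the calibrated d_k:
    d - d_k > T0 -> 2.9 m box, d - d_k < T1 -> 2.4 m box, else 2.6 m."""
    diff = d - d_k
    if diff > t0:
        return 2.9
    if diff < t1:
        return 2.4
    return 2.6
```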
(8) The inner container truck is guided to precise positioning via the box holes or frame guide plates. Referring to fig. 3, the target straight line L is selected using the detected midpoint P of the two box holes or two guide plates, together with the grab/place information, lane information, height information and box-type information; the straight-line distance from P to L is then calculated and multiplied by E to obtain the actual deviation distance of the truck. This distance deviation is sent to the truck cab for display.
Through continuous iteration of step (8), the guidance and positioning of the inner container truck is completed.
3. The inner container truck is guided into position according to the actual deviation distance; when the lifting appliance of the shore bridge reaches the position directly above the container or frame, the shore bridge control system adjusts the appliance configuration according to the single/double-box judgment and performs the box grabbing or placing operation.
Referring to fig. 9, the industrial control computer stores the AI algorithms; it mainly collects the video image information of cameras 1-6 and runs the inner container truck guidance-positioning and single/double-box discrimination algorithm software. The PLC receives the single/double-box discrimination result from the video processor and determines whether a single or double box is currently present. The video processor receives the shore bridge's operation information, which is mainly used to judge more accurately whether an inner container truck is currently working at the shore bridge and whether the guidance operation and single/double-box discrimination need to run.
The term "video processor" includes various devices, apparatuses and machines for processing data, e.g., a video processor includes a programmable processor, a computer, multiple processors or multiple computers, etc. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a runtime environment, or a combination of one or more of them.
The methods and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform these functions by operating on surveillance video and generating target detection results.
Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing the historical video data and the data set (e.g., magnetic, magneto-optical disks, or optical disks). Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and storage devices, including by way of example: semiconductor memory devices such as EPROM (erasable programmable read only memory), EEPROM (electrically erasable programmable read only memory), and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM disks and DVD-ROM disks.
Those skilled in the art will appreciate that all or part of the flow of the method of the above embodiments may be implemented by a computer program instructing related hardware, the program being stored in a computer-readable storage medium. The computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application.

Claims (18)

1. A shore bridge control method based on machine vision is characterized by comprising the following steps:
calibrating a target parking position of the container truck and the estimation of the height of the container truck for loading the container;
acquiring and utilizing a first target detection model to identify a target rectangular area of a container or a frame in a vehicle body image, and calculating an initial positioning deviation distance based on the target rectangular area and the target parking position, wherein the target rectangular area comprises a minimum circumscribed rectangle;
performing image cropping on a target rectangular area of the container or the frame in the vehicle body image to generate a vehicle body sub-image;
acquiring and utilizing a second target detection model to identify box hole coordinates and characters on a container or frame guide plate coordinates on a frame in the vehicle body subimage;
performing single-box and double-box judgment based on whether a target rectangular area of the container in the vehicle body subimage is a fusion area or not, and box hole coordinates and characters of a middle subregion in the vehicle body subimage;
acquiring the distance and the position of a box hole or a frame guide plate of a lower subregion image in the subimage of the vehicle body based on the box hole coordinate or the frame guide plate coordinate to estimate the height of the container;
calculating a movement deviation distance based on the height of the container or the frame and a target straight line generated by a box hole or a frame guide plate, and guiding the truck to the target parking position according to the movement deviation distance; and
and when the hanger of the shore bridge reaches the position right above the container or the vehicle frame, adjusting the hanger shape of the shore bridge according to the judged single box and double boxes, and performing box grabbing or box releasing operation.
2. The machine-vision-based shore bridge control method of claim 1, wherein calibrating the target parking position of the truck and the estimate of the height of the truck loading the container further comprises:
acquiring a vehicle body image of a target parking position of the container truck in advance and identifying the vehicle body image of the target parking position so as to acquire the image coordinates of the container and the corresponding box hole or the image coordinates of the vehicle frame and the corresponding vehicle frame guide plate of the container truck at the target parking position;
and generating a target straight line by using the box hole image coordinates or the vehicle frame guide plate image coordinates and estimating a pixel distance factor related to the height.
3. The machine-vision-based shore bridge control method of claim 1, wherein obtaining and identifying a target rectangular region of a container or frame in the body image using the first target detection model further comprises:
acquiring a plurality of historical vehicle body images from a database and marking containers or frames in the plurality of historical vehicle body images;
establishing a first neural network Yolov5 and training the first neural network Yolov5 by using a plurality of marked historical car body images to obtain a first target detection model; and
acquiring a current vehicle body image in real time, identifying a target rectangular area of a container or a vehicle frame in the current vehicle body image by using the first target detection model so as to perform container grabbing or container releasing judgment, judging whether the container is 20 feet or not according to the size of the target rectangular area of the container, and fusing the target rectangular areas of two containers with 20 feet on the same truck to form a fusion area.
4. The machine-vision-based shore bridge control method of claim 3, wherein image cropping a target rectangular area of a container or frame in said body image to generate a body sub-image further comprises:
cutting the historical automobile body image or the current automobile body image into a historical automobile body sub-image or a current automobile body sub-image, wherein,
when the target rectangular area is a container area, the historical vehicle body sub-image or the current vehicle body sub-image comprises a first upper sub-area, a middle sub-area and a first lower sub-area, wherein box holes and characters are detected in the first middle sub-area, and box holes are detected in the first upper sub-area image and the first lower sub-area image; and
when the target rectangular area is a frame area, the historical vehicle body sub-image or the current vehicle body sub-image comprises a second upper sub-area and a second lower sub-area, wherein frame guide detection is performed on the second upper sub-area and the second lower sub-area.
5. The machine-vision-based shore bridge control method of claim 4, wherein obtaining and identifying bin hole coordinates on a container or frame guide coordinates on a frame in said body sub-image using a second target detection model further comprises:
marking a box hole or a frame guide plate in the historical automobile body subimage;
establishing a second neural network Yolov5 and training the second neural network Yolov5 by using the labeled historical automobile body sub-images to obtain a second target detection model; and
and recognizing a box hole and characters or a frame guide plate in the current vehicle body subimage by using the second target detection model, and acquiring box hole coordinates or frame guide plate coordinates.
6. The machine-vision-based shore bridge control method according to claim 1, wherein a first camera, a second camera, a third camera, a fourth camera, a fifth camera and a sixth camera are installed at the isolation zones on opposite sides of the lane on the shore bridge girder, wherein,
shooting head images of the collecting card by using the first camera to the fourth camera so as to confirm the identity and the moving direction of the vehicle; and
and shooting the body image of the container truck by using the fifth camera and the sixth camera so as to calculate the initial positioning deviation distance and the movement deviation distance of the container truck.
7. The machine-vision-based shore bridge control method of claim 6, further comprising, after calibrating the target parking position of the trucks and the estimation of the height of the trucks loading the container:
acquiring a plurality of historical locomotive images from a database and labeling a locomotive and two-dimensional codes in the historical locomotive images;
establishing a third neural network Yolov5, and training the third neural network Yolov5 by using the labeled historical locomotive image to obtain a third target detection model;
acquiring a current locomotive image in real time, identifying a truck collecting locomotive in the current locomotive image and a two-dimensional code pasted on the truck collecting locomotive by using a third target detection model, and confirming a truck collecting identity code and a driving direction; and
and connecting a data receiving unit in a corresponding truck driver cab through a network according to the truck identity code.
8. The machine-vision-based shore bridge control method of claim 7, wherein calculating an initial positioning deviation distance based on said target rectangular area and said target parking position further comprises:
calculating the initial positioning deviation distance of the container truck based on the identified target rectangular area of the container or the frame in the current vehicle body image and a pre-acquired target parking position; and
the network transmits the initial positioning deviation distance of the container truck to a data receiving unit, and the initial positioning deviation distance is displayed through an LED display screen to guide a container truck driver to adjust the position of the container truck, wherein the initial positioning deviation distance of the container truck is calculated through the following formula:
offset=D(y-y0)
wherein y represents the ordinate of the current vehicle region image, y0 represents the ordinate of the pre-acquired target parking position, and D represents the actual pixel distance factor related to the height.
9. The machine-vision-based shore bridge control method of claim 8, wherein a movement deviation distance is calculated based on a height of said container or frame and a target straight line generated by a box hole or frame guide, guiding said truck to said target parking position according to said movement deviation distance further comprises:
calculating the movement deviation distance of the container truck by the following formula:
offset = E × |A·x0 + B·y0 + C| / √(A² + B²)
wherein (x0, y0) is the midpoint of the two currently detected box holes, A, B and C represent the line equation parameters of the two pre-detected box holes, and E represents the actual pixel distance factor related to the height;
and sending the movement deviation distance to the data receiving unit through the network and displaying the movement deviation distance on the LED display so as to continuously guide a driver to adjust the position of the truck collection until the movement deviation distance is smaller than a threshold value, and finishing the positioning guidance of the truck collection.
10. A shore bridge control device based on machine vision, comprising: the video processor comprises a calibration module, a primary positioning module, an image cutting module, an identification module, a single-box and double-box judgment module, a height estimation module and a movement deviation distance calculation module, wherein,
the calibration module is used for calibrating the target parking position of the container truck and the estimation of the height of the container truck for loading the container;
the initial positioning module is used for acquiring and utilizing a first target detection model to identify a target rectangular area of a container or a frame in a vehicle body image, and calculating an initial positioning deviation distance based on the target rectangular area and the target parking position, wherein the target rectangular area comprises a minimum circumscribed rectangle;
the image clipping module is used for clipping the image of the target rectangular area of the container or the frame detected in the vehicle body image to generate a vehicle body sub-image;
the recognition module is used for acquiring and recognizing coordinates of a container hole on a container and characters or coordinates of a frame guide plate on a frame in the sub-image of the vehicle body by using a second target detection model;
the single-double box judgment module is used for judging whether a target rectangular area of the container in the vehicle body subimage is a fusion area or not, and box hole coordinates and characters of a middle subregion in the vehicle body subimage;
the height estimation module is used for acquiring the distance and the position of a box hole or a frame guide plate of a lower subregion image in the subimage of the vehicle body based on the box hole coordinate or the frame guide plate coordinate to estimate the height of the container
The moving deviation distance calculation module is used for calculating a moving deviation distance based on the height of the container or the frame and a target straight line generated by a box hole or a frame guide plate so as to guide the container truck to the target parking position according to the moving deviation distance; and
and the control module is used for adjusting the hanger shape of the shore bridge according to the judged single-box and double-box to carry out box grabbing or box releasing operation when the hanger of the shore bridge reaches the position right above the container or the vehicle frame.
11. The machine-vision-based shore bridge control apparatus according to claim 10, comprising a camera for capturing a body image of a target parking position of said truck in advance, and intermittently capturing said body image;
the calibration module further comprises: a target position calibration sub-module and a height estimation calibration sub-module, wherein,
the target position calibration submodule is used for identifying the vehicle body image of the target parking position and acquiring the container and corresponding box hole image coordinates or the vehicle frame and corresponding vehicle frame guide plate image coordinates of the container truck at the target parking position so as to generate a target straight line; and
and the height estimation calibration submodule estimates a pixel distance factor related to the height by using the box hole image coordinate or the frame guide plate image coordinate.
12. The machine-vision-based shore bridge control apparatus of claim 10, wherein said primary positioning module comprises a first labeling submodule, a first target detection model and a target rectangle generation submodule, wherein,
the first labeling submodule is used for acquiring a plurality of historical vehicle body images from a database and labeling containers or frames in the plurality of historical vehicle body images;
the first target detection model is used for establishing a first neural network Yolov5 and training the first neural network Yolov5 by using the marked multiple historical vehicle body images to obtain a first target detection model; and
the target rectangle generation submodule is used for acquiring a current vehicle body image in real time, identifying a target rectangular area of a container or a vehicle frame in the current vehicle body image by using the first target detection model so as to perform container grabbing or releasing judgment, judging whether the container is 20 feet or not according to the size of the target rectangular area of the container, and fusing the target rectangular areas of two containers with 20 feet on the same truck to form the fusion area.
13. The machine-vision-based shore bridge control apparatus of claim 12, wherein said image cropping module is configured to crop said historical body images or said current body images into historical body sub-images or current body sub-images, wherein,
when the target rectangular area is a container area, the historical vehicle body sub-image or the current vehicle body sub-image comprises a first upper sub-area, a first middle sub-area and a first lower sub-area, wherein box holes and characters are detected in the first middle sub-area, and box holes are detected in the first upper and first lower sub-areas; and
when the target rectangular area is a frame area, the historical vehicle body sub-image or the current vehicle body sub-image comprises a second upper sub-area and a second lower sub-area, wherein frame guide plate detection is performed on the second upper and second lower sub-areas.
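The three-way crop of a container area in claim 13 can be sketched as follows; representing the image as a list of pixel rows and splitting into equal thirds are illustrative assumptions (the claims do not fix the split ratio).

```python
def crop_container_subimages(img, box):
    """Split the container's target rectangle (x1, y1, x2, y2) of a body
    image (a list of pixel rows) into upper / middle / lower sub-images
    for the box-hole and character detectors. Equal thirds are assumed
    here purely for illustration."""
    x1, y1, x2, y2 = box
    h = y2 - y1
    rows = [row[x1:x2] for row in img[y1:y2]]
    upper = rows[: h // 3]
    middle = rows[h // 3 : 2 * h // 3]
    lower = rows[2 * h // 3 :]
    return upper, middle, lower
```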
14. The machine-vision-based shore bridge control apparatus of claim 13, wherein said identification module comprises a second labeling submodule, a second target detection model and a box hole and frame guide plate identification submodule, wherein,
the second labeling submodule is used for labeling box holes or frame guide plates in the historical vehicle body sub-images;
the second target detection model is obtained by establishing a second Yolov5 neural network and training it with the labeled historical vehicle body sub-images; and
the box hole and frame guide plate identification submodule is used for identifying box holes or frame guide plates in the current vehicle body sub-image with the second target detection model and acquiring the box hole coordinates or frame guide plate coordinates.
15. The machine-vision-based shore bridge control apparatus according to claim 10, wherein said cameras comprise a first camera, a second camera, a third camera, a fourth camera, a fifth camera and a sixth camera, mounted at isolation zones on opposite sides of a lane on a shore bridge girder, wherein,
the first camera to the fourth camera are used for shooting head images of the container truck so as to confirm the identity and the moving direction of the vehicle; and
the fifth camera and the sixth camera are used for shooting the body image of the container truck so as to calculate the initial positioning deviation distance and the movement deviation distance of the container truck.
16. The machine-vision-based shore bridge control apparatus of claim 15, further comprising a data receiving unit, and said identification module further comprises a third labeling submodule and a third target detection model and a vehicle head and identity confirmation submodule, wherein,
the third labeling submodule is used for acquiring a plurality of historical vehicle head images from a database and labeling the vehicle head and the two-dimensional code in the historical vehicle head images;
the third target detection model is obtained by establishing a third Yolov5 neural network and training it with the labeled historical vehicle head images;
the vehicle head and identity confirmation submodule is used for acquiring a current vehicle head image in real time, recognizing the container truck head and the two-dimensional code affixed to it in the current vehicle head image by using the third target detection model, and confirming the container truck identity code and driving direction; and
the data receiving unit is located in the container truck cab and is connected with the video processor through a network according to the container truck identity code.
17. The machine-vision-based shore bridge control apparatus of claim 16, further comprising an LED display screen,
the primary positioning module is used for calculating an initial positioning deviation distance of the container truck based on the identified target rectangular area of the container or the frame in the current vehicle body image and a pre-acquired target parking position, wherein the initial positioning deviation distance of the container truck is calculated through the following formula:
offset = D(y - y0)
wherein y represents the ordinate of the container or frame area in the current vehicle body image, y0 represents the ordinate of the pre-acquired target parking position, and D represents the height-related actual pixel distance factor;
the video processor transmits the initial positioning deviation distance of the container truck to the data receiving unit through the network; and
the LED display screen is located in the container truck cab, is in communication connection with the data receiving unit, and is used for displaying the initial positioning deviation distance so as to guide the container truck driver to adjust the position of the container truck.
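The initial positioning deviation of claim 17 is a one-line computation; a minimal sketch, assuming the ordinates are in pixels and D converts a pixel offset to a physical distance at the calibrated camera height (the function name is illustrative).

```python
def initial_positioning_offset(y, y0, d):
    """offset = D * (y - y0): signed deviation between the ordinate y of the
    detected container/frame area and the calibrated target parking ordinate
    y0, scaled by the height-related pixel distance factor D."""
    return d * (y - y0)
```

The sign of the result would tell the driver in which direction to move the truck, and its magnitude how far.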
18. The machine-vision-based shore bridge control apparatus of claim 17, wherein said movement deviation distance calculation module is configured to calculate the movement deviation distance of said container truck by the following formula:
offset = E * |A*x0 + B*y0 + C| / sqrt(A^2 + B^2)
wherein (x0, y0) is the midpoint of the two currently detected box holes, A, B and C are the parameters of the line equation fitted to the two pre-detected box holes (the target line), and E is the height-related actual pixel distance factor;
the data receiving unit receives the movement deviation distance through the network;
the LED display screen is used for displaying the movement deviation distance so as to continuously guide the driver to adjust the container truck position until the movement deviation distance is smaller than a threshold value, whereupon the container truck positioning guidance is completed.
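The movement deviation formula of claim 18 is the standard perpendicular point-to-line distance from the box-hole midpoint to the pre-calibrated target line A*x + B*y + C = 0, scaled by E; a minimal sketch with illustrative names:

```python
import math

def movement_deviation(x0, y0, a, b, c, e):
    """offset = E * |A*x0 + B*y0 + C| / sqrt(A^2 + B^2): distance from
    the midpoint (x0, y0) of the two currently detected box holes to the
    target line, scaled by the height-related factor E."""
    return e * abs(a * x0 + b * y0 + c) / math.hypot(a, b)
```

Guidance would loop on this value until it drops below the threshold mentioned in the claim.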
CN202111275349.2A 2021-10-29 2021-10-29 Shore bridge control method and device based on machine vision Pending CN114119741A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111275349.2A CN114119741A (en) 2021-10-29 2021-10-29 Shore bridge control method and device based on machine vision
PCT/CN2022/072004 WO2023070954A1 (en) 2021-10-29 2022-01-14 Container truck guidance and single/double-container identification method and apparatus based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111275349.2A CN114119741A (en) 2021-10-29 2021-10-29 Shore bridge control method and device based on machine vision

Publications (1)

Publication Number Publication Date
CN114119741A true CN114119741A (en) 2022-03-01

Family

ID=80379870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111275349.2A Pending CN114119741A (en) 2021-10-29 2021-10-29 Shore bridge control method and device based on machine vision

Country Status (2)

Country Link
CN (1) CN114119741A (en)
WO (1) WO2023070954A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117647969A * 2022-11-29 2024-03-05 Total Soft Bank Ltd. Operation position guiding device of unmanned transportation equipment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116484485B * 2023-06-21 2023-08-29 Hunan Provincial Communications Planning, Survey and Design Institute Co., Ltd. Shaft network determining method and system
CN116882433B * 2023-09-07 2023-12-08 Wuxi Weikai Technology Co., Ltd. Machine-vision-based code scanning identification method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104477779B (en) * 2014-12-31 2016-07-13 北京国泰星云科技有限公司 The para-position of truck and safety control system and method under Property in Container Terminal Bridge Crane Through
CN111369779B (en) * 2018-12-26 2021-09-03 北京图森智途科技有限公司 Accurate parking method, equipment and system for truck in shore crane area
CN113376654B (en) * 2020-03-09 2023-05-26 长沙智能驾驶研究院有限公司 Method and device for detecting anti-smashing of integrated card based on three-dimensional laser and computer equipment
CN112528721B (en) * 2020-04-10 2023-06-06 福建电子口岸股份有限公司 Bridge crane integrated card safety positioning method and system
CN113341987B (en) * 2021-06-17 2023-01-17 天津港第二集装箱码头有限公司 Automatic unmanned card collection guide system and method for shore bridge

Also Published As

Publication number Publication date
WO2023070954A1 (en) 2023-05-04

Similar Documents

Publication Publication Date Title
CN114119741A (en) Shore bridge control method and device based on machine vision
EP3683721B1 (en) A material handling method, apparatus, and system for identification of a region-of-interest
US7508956B2 (en) Systems and methods for monitoring and tracking movement and location of shipping containers and vehicles using a vision based system
US8379928B2 (en) Obstacle detection procedure for motor vehicle
EP2998927B1 (en) Method for detecting the bad positioning and the surface defects of specific components and associated detection device
US20050281436A1 (en) Docking assistant
CN110794406B (en) Multi-source sensor data fusion system and method
CN110378957B (en) Torpedo tank car visual identification and positioning method and system for metallurgical operation
CN113885532B (en) Unmanned floor truck control system of barrier is kept away to intelligence
CN111767780A (en) AI and vision combined intelligent hub positioning method and system
CN114581368B (en) Bar welding method and device based on binocular vision
CN114119742A (en) Method and device for positioning container truck based on machine vision
CN111855667A (en) Novel intelligent train inspection system and detection method suitable for metro vehicle
CN114067140A (en) Automatic control system for grain sampler
CN113071500A (en) Method and device for acquiring lane line, computer equipment and storage medium
CN117115249A (en) Container lock hole automatic identification and positioning system and method
CN115755888A (en) AGV obstacle detection system with multi-sensor data fusion and obstacle avoidance method
CN115410399A (en) Truck parking method and device and electronic equipment
CN115410105A (en) Container mark identification method, device, computer equipment and storage medium
US20210380119A1 (en) Method and system for operating a mobile robot
CN117268424B (en) Multi-sensor fusion automatic driving hunting method and device
CN114119496A (en) Machine vision-based shore bridge single-box and double-box detection method and device
CN117494029B (en) Road casting event identification method and device
KR102623236B1 (en) Container terminal gate automation container damage detection system based on ai and method performing thereof
CN116310287A (en) Container truck loading and unloading operation positioning and guiding system and method based on vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination