WO2023070954A1 - Container truck guidance and single/double-container identification method and apparatus based on machine vision - Google Patents


Info

Publication number
WO2023070954A1
WO2023070954A1 (PCT/CN2022/072004)
Authority
WO
WIPO (PCT)
Prior art keywords
image
container
target
sub
truck
Prior art date
Application number
PCT/CN2022/072004
Other languages
French (fr)
Chinese (zh)
Inventor
郑智辉
闫威
唐波
郭宸瑞
王硕
董昊天
闫涛
李钊
Original Assignee
北京航天自动控制研究所
Priority date
Filing date
Publication date
Application filed by 北京航天自动控制研究所 filed Critical 北京航天自动控制研究所
Publication of WO2023070954A1 publication Critical patent/WO2023070954A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K 17/0022 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations; arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K 17/0025 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations; arrangements or provisions for transferring data to distant stations, e.g. from a sensing device; the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06Q 50/40
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • The present application relates to the technical field of port quay crane operation assistance, and in particular to a machine vision-based quay crane control method and apparatus.
  • Containers are an important carrier in modern logistics and transportation, and the efficiency with which they are loaded and unloaded at ports directly affects the efficiency of the entire logistics chain.
  • A quay crane is the bridge-type handling equipment used on the shore; it is the principal tool for unloading containers from a ship onto the wharf and for loading them from the wharf onto a ship.
  • The driver controls the trolley and the spreader through a handle and picks up and sets down containers on container trucks in the port. Because the truck driver has no accurate target guidance, every time an inner truck is served the driver must observe manually when the spreader reaches the truck's frame or container.
  • "Collection truck" is an abbreviation for container truck; trucks are divided into inner trucks and outer trucks.
  • An inner truck is a truck that operates within the container port.
  • An outer truck is a truck that enters the container port from outside.
  • Lidar scanning is generally used to identify the container or frame of the truck, but lidar is expensive, serves only a single function, and its accuracy cannot be effectively guaranteed.
  • An existing safety positioning method for bridge cranes and trucks calibrates the vehicle parking point with a camera in advance. When a vehicle enters the recognition area, the image is cropped to a range around the calibrated parking point, the Mask R-CNN algorithm identifies and segments the region of the truck's container or frame, the center point of that region is obtained, and its Euclidean distance to the pre-marked parking point is calculated so as to guide the vehicle to the accurate position.
  • This method assumes the vehicle first drives to the vicinity of the exact target parking position, so it cannot effectively segment the truck when the deviation is large, that is, beyond a certain range, which leads to inaccurate guidance.
  • Nor does it clearly provide a way to determine the vehicle's direction of travel, that is, whether the vehicle should move forward or backward.
  • In the machine-vision part of the existing double-container detection method, a camera photographs the middle of the container to obtain an image of its midsection, a box-hole recognition model identifies the box holes, and single versus double containers are then determined.
  • Embodiments of the present application aim to provide a machine vision-based quay crane control method and apparatus to solve the problems of the existing methods, namely low truck positioning accuracy and insufficient accuracy of single/double-container judgment, which result in low efficiency of the quay crane in picking up and setting down containers.
  • An embodiment of the present application provides a machine vision-based quay crane control method, including: calibrating the target parking position of the truck and the estimated height of the truck loaded with containers; acquiring a body image and using a first target detection model to identify the target rectangular area of the container or frame in it, and calculating an initial positioning deviation distance based on the target rectangular area and the target parking position, wherein the target rectangular area is the minimum circumscribed rectangle; cropping the body image at the target rectangular area of the container or frame to generate a body sub-image; using a second target detection model to identify the box-hole coordinates and text on the container, or the frame guide plate coordinates on the frame, in the body sub-image; performing single/double-container judgment based on whether the target rectangular area of the container in the body sub-image is a fusion area and on the box-hole coordinates and text of the middle sub-region of the body sub-image; and calculating a movement deviation distance based on the box-hole coordinates or the frame guide plate coordinates and guiding the truck to the target parking position accordingly.
  • The beneficial effects of the above technical solution are as follows. The initial positioning of the truck increases the distance range within which the quay crane can guide and position the truck, while the detection and positioning of small targets such as box holes or guide plates improves the positioning accuracy of the inner truck, realizing precise guidance and positioning. Combining the container count, the box holes, and the text on the container surface for single/double-container judgment avoids the misjudgments that arise from relying on a single cue, effectively improving discrimination accuracy. The operating efficiency of the quay crane in picking up and setting down containers is therefore improved.
  • Calibrating the target parking position of the truck and the estimated height of the truck loaded with containers further includes: pre-collecting a body image of the truck at the target parking position and identifying it to obtain the image coordinates of the container and its box holes, or of the frame and its frame guide plates, at the target parking position; and using the box-hole image coordinates or the frame guide plate image coordinates to generate the target straight line and to estimate the height-dependent pixel distance factor.
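The target straight line through the two calibrated box-hole centers can be sketched as follows. This is an illustrative implementation, not the patent's own code: it derives the parameters A, B, C of the line A·x + B·y + C = 0 used later for the movement deviation.

```python
def target_line(p1, p2):
    """Return (A, B, C) of the line A*x + B*y + C = 0 through pixel points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1            # direction normal, x component
    b = x1 - x2            # direction normal, y component
    c = -(a * x1 + b * y1) # offset so that p1 satisfies the equation
    return a, b, c
```

Both calibrated hole centers satisfy the returned equation, so any later detection's distance to this line measures the lateral deviation.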
  • Acquiring and using the first target detection model to identify the target rectangular area of the container or frame in the body image further includes: acquiring multiple historical body images from the database and labeling the container or frame in them; establishing a first Yolov5 neural network and training it on the labeled historical body images to obtain the first target detection model; and collecting the current body image in real time and using the first target detection model to identify the target rectangular area of the container or frame in it, so as to judge whether the operation is picking up or setting down a container, wherein whether a container is a 20-foot container is determined from the size of its target rectangular area, and the target rectangular areas of two 20-foot containers on the same truck are merged to form the fusion area.
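The fusion step described above can be sketched as merging two detected axis-aligned rectangles into their minimum circumscribed rectangle. The `(x1, y1, x2, y2)` corner convention is an assumption for illustration:

```python
def fuse_rects(r1, r2):
    """Minimum axis-aligned rectangle (x1, y1, x2, y2) enclosing both detections."""
    return (min(r1[0], r2[0]), min(r1[1], r2[1]),
            max(r1[2], r2[2]), max(r1[3], r2[3]))
```

Two adjacent 20-foot container boxes thus become one fusion area, which later marks the detection as a candidate double-container case.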
  • Cropping the body image at the target rectangular area of the container or frame to generate a body sub-image further includes: cropping the historical body image or the current body image into a historical body sub-image or a current body sub-image, wherein, when the target rectangular area is a container area, the body sub-image includes a first upper sub-area, a middle sub-area, and a first lower sub-area, box-hole and text detection is performed on the middle sub-area, and box-hole detection is performed on the first upper and first lower sub-areas; and when the target rectangular area is a frame area, the body sub-image includes a second upper sub-area and a second lower sub-area, on both of which frame guide plate detection is performed.
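A minimal sketch of this sub-region split, under the assumption that the sub-areas are equal horizontal bands of the detected rectangle (the patent does not state the split ratios):

```python
def crop_subregions(rect, is_container=True):
    """Split a detection rectangle (x1, y1, x2, y2) into sub-region rectangles."""
    x1, y1, x2, y2 = rect
    h = y2 - y1
    if is_container:
        # container: upper / middle / lower thirds
        return [(x1, y1, x2, y1 + h // 3),
                (x1, y1 + h // 3, x2, y1 + 2 * h // 3),
                (x1, y1 + 2 * h // 3, x2, y2)]
    # frame: only upper and lower halves carry guide plates
    return [(x1, y1, x2, y1 + h // 2), (x1, y1 + h // 2, x2, y2)]
```

Each returned rectangle would then be sliced out of the body image and passed to the second detection model.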
  • Acquiring and using the second target detection model to identify the box-hole coordinates on the container or the frame guide plate coordinates on the frame in the body sub-image further includes: labeling the box holes or frame guide plates in the historical body sub-images; establishing a second Yolov5 neural network and training it on the labeled historical body sub-images to obtain the second target detection model; and using the second target detection model to identify the box holes or frame guide plates in the current body sub-image and obtain the box-hole coordinates and text or the frame guide plate coordinates.
  • A first, second, third, fourth, fifth, and sixth camera are installed at the isolation strips on opposite sides of the lane on the quay crane cross beam, wherein the first to fourth cameras capture head images of the truck to confirm the vehicle identity and direction of travel, and the fifth and sixth cameras capture body images of the truck to calculate the initial positioning deviation distance and the movement deviation distance.
  • After calibrating the target parking position of the truck and the estimated height of the truck loaded with containers, the method further includes: acquiring multiple historical head images from the database and labeling the vehicle head and the two-dimensional code in them; establishing a third Yolov5 neural network and training it on the labeled historical head images to obtain a third target detection model; collecting the current head image in real time and using the third target detection model to recognize the truck head and the two-dimensional code pasted on it and to confirm the truck identity code; and connecting over the network, according to the truck identity code, to the data receiving unit in the cab of the corresponding truck.
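Once the identity code has been decoded from the two-dimensional code, connecting to the right cab unit amounts to a lookup. The registry contents, names, and endpoint format below are purely illustrative assumptions, not from the patent:

```python
# Hypothetical mapping from decoded truck identity codes to the network
# endpoint (host, port) of each cab's data receiving unit.
TRUCK_REGISTRY = {
    "TRUCK-0481": ("192.168.10.41", 9000),
    "TRUCK-0482": ("192.168.10.42", 9000),
}

def endpoint_for_truck(identity_code, registry=TRUCK_REGISTRY):
    """Resolve a decoded identity code to its data receiving unit endpoint."""
    if identity_code not in registry:
        raise KeyError(f"unknown truck identity code: {identity_code}")
    return registry[identity_code]
```

The resolved endpoint would then be used to push deviation distances to that truck's display.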
  • Calculating the initial positioning deviation distance based on the target rectangular area and the target parking position further includes: calculating the truck's initial positioning deviation distance from the identified target rectangular area of the container or frame in the current body image and the pre-acquired target parking position; and transmitting it over the network to the data receiving unit and displaying it on the LED display screen to guide the driver in adjusting the truck position. The initial positioning deviation distance is calculated as Δ = (y − y₀) × D, where y is the ordinate of the current vehicle area in the image, y₀ is the ordinate of the pre-collected target parking position, and D is the height-related actual pixel distance factor.
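The initial deviation defined by these variables reduces to a one-line calculation: the pixel offset between the detected ordinate and the calibrated target ordinate, scaled by the height-dependent mm-per-pixel factor D. A minimal sketch:

```python
def initial_deviation_mm(y, y0, d_mm_per_px):
    """Initial positioning deviation: (y - y0) * D, in millimetres.

    y  -- ordinate of the detected container/frame area in the current image
    y0 -- ordinate of the pre-collected target parking position
    d_mm_per_px -- height-related actual pixel distance factor (mm/pixel)
    """
    return (y - y0) * d_mm_per_px
```

The sign of the result indicates the direction in which the truck should move.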
  • Calculating the movement deviation distance based on the height of the container or frame and the target straight line generated by the box holes or frame guide plates, and guiding the truck to the target parking position accordingly, further includes: calculating the truck's movement deviation distance as Δ = E × |A·x₀ + B·y₀ + C| / √(A² + B²), where (x₀, y₀) is the midpoint of the two currently detected box holes, A, B, and C are the parameters of the straight-line equation through the two pre-detected box holes, and E is the height-related actual pixel distance factor; and sending the movement deviation distance over the network to the data receiving unit and displaying it on the LED display to continue guiding the driver in adjusting the truck position, until positioning guidance is completed when the movement deviation distance falls below a threshold.
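This is the standard point-to-line distance, scaled to millimetres. A sketch consistent with the variables above:

```python
import math

def movement_deviation_mm(x0, y0, a, b, c, e_mm_per_px):
    """Distance from the detected hole midpoint (x0, y0) to the calibrated
    target line a*x + b*y + c = 0, scaled by the height-related factor E."""
    return e_mm_per_px * abs(a * x0 + b * y0 + c) / math.hypot(a, b)
```

Guidance would loop, recomputing this value each frame, until it drops below the completion threshold.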
  • An embodiment of the present application provides a machine vision-based quay crane control device, including a video processor and a control module. The video processor includes a calibration module, an initial positioning module, an image cropping module, a recognition module, a single/double-container judgment module, a height estimation module, and a movement deviation distance calculation module. The calibration module calibrates the target parking position of the truck and the estimated height of the truck loaded with containers. The initial positioning module acquires a body image, uses the first target detection model to identify the target rectangular area of the container or frame in it, and calculates the initial positioning deviation distance based on the target rectangular area and the target parking position, wherein the target rectangular area is the minimum circumscribed rectangle. The image cropping module crops the detected target rectangular area of the container or frame out of the body image to generate a body sub-image. The recognition module uses the second target detection model to identify the box-hole coordinates on the container, or the frame guide plate coordinates on the frame, in the body sub-image.
  • The device includes cameras used to pre-collect the body image of the truck at the target parking position and to collect body images intermittently thereafter.
  • The calibration module also includes a target position calibration submodule and a height estimation calibration submodule, wherein the target position calibration submodule identifies the body image of the target parking position and obtains the image coordinates of the container and its box holes, or of the frame and its frame guide plates, at the target parking position to generate the target straight line; and the height estimation calibration submodule uses the box-hole image coordinates or the frame guide plate image coordinates to estimate the height-related pixel distance factor.
  • The initial positioning module includes a first labeling submodule, the first target detection model, and a target rectangle generation submodule, wherein the first labeling submodule acquires multiple historical body images from the database and labels the container or frame in them; the first target detection model is obtained by establishing a first Yolov5 neural network and training it on the labeled historical body images; and the target rectangle generation submodule collects the current body image in real time and uses the first target detection model to identify the target rectangular area of the container or frame in it, wherein whether a container is a 20-foot container is determined from the size of its target rectangular area, and the target rectangular areas of two 20-foot containers on the same truck are fused to form the fusion area.
  • The image cropping module crops the historical body image or the current body image into a historical or current body sub-image, wherein, when the target rectangular area is a container area, the body sub-image includes a first upper sub-area, a middle sub-area, and a first lower sub-area, box-hole and text detection is performed on the middle sub-area, and box-hole detection is performed on the first upper and first lower sub-areas; and when the target rectangular area is a frame area, the body sub-image includes a second upper sub-area and a second lower sub-area, on both of which frame guide plate detection is performed.
  • The recognition module includes a second labeling submodule, the second target detection model, and a box-hole and frame guide plate recognition submodule, wherein the second labeling submodule labels the box holes or frame guide plates in the historical body sub-images; the second target detection model is obtained by establishing a second Yolov5 neural network and training it on the labeled historical body sub-images; and the box-hole and frame guide plate recognition submodule uses the second target detection model to identify the box holes and text, or the frame guide plates, in the current body sub-image and obtain the box-hole coordinates or frame guide plate coordinates.
  • The cameras include a first, second, third, fourth, fifth, and sixth camera installed at the isolation strips on opposite sides of the lane on the quay crane cross beam, wherein the first to fourth cameras capture head images of the truck to confirm the vehicle identity and direction of travel, and the fifth and sixth cameras capture body images of the truck to calculate the initial positioning deviation distance and the movement deviation distance.
  • The machine vision-based quay crane control device also includes a data receiving unit, and the recognition module also includes a third labeling submodule, a third target detection model, and a head and identity confirmation submodule, wherein the third labeling submodule acquires multiple historical head images from the database and labels the vehicle head and the two-dimensional code in them; the third target detection model is obtained by establishing a third Yolov5 neural network and training it on the labeled historical head images; the head and identity confirmation submodule collects the current head image in real time, uses the third target detection model to recognize the truck head and the two-dimensional code pasted on it, and confirms the truck identity code; and the data receiving unit is located in the truck cab and connects to the video processor over the network according to the truck identity code.
  • The machine vision-based quay crane control device also includes an LED display screen. The initial positioning module calculates the truck's initial positioning deviation distance from the recognized target rectangular area and the pre-acquired target parking position as Δ = (y − y₀) × D, where y is the ordinate of the current vehicle area in the image, y₀ is the ordinate of the pre-collected target parking position, and D is the height-related actual pixel distance factor. The initial positioning deviation distance is transmitted over the network to the data receiving unit, and the LED display screen, located in the truck cab and connected to the data receiving unit, displays it to guide the driver in adjusting the truck position.
  • The movement deviation distance calculation module calculates the truck's movement deviation distance as Δ = E × |A·x₀ + B·y₀ + C| / √(A² + B²), where (x₀, y₀) is the midpoint of the two currently detected box holes, A, B, and C are the parameters of the straight-line equation through the two pre-detected box holes, and E is the height-related actual pixel distance factor. The data receiving unit receives the movement deviation distance over the network, and the LED display shows it to continue guiding the driver in adjusting the truck position, until positioning guidance is completed when the movement deviation distance falls below a threshold.
  • the present application can achieve at least one of the following beneficial effects:
  • This application proposes a method that performs initial positioning of the truck based on detection of its container or frame area, and precise positioning through features common to containers and frames (box holes and guide plates).
  • The distance range within which the quay crane can guide and position the inner truck is increased.
  • The detection and positioning of small targets such as box holes or guide plates improves the positioning accuracy of the inner truck, realizing its precise guidance and positioning.
  • This application adopts a height estimation method based on the distance between the container's box holes. It effectively distinguishes the guidance errors caused by different container heights; by estimating the height of the loaded truck, it adapts to operating conditions with various container heights and improves the system's adaptability.
  • This application combines the container count, the box holes, and the text on the container's upper surface to judge single versus double containers. This joint judgment avoids the misjudgments that arise from relying on a single cue and effectively improves the accuracy of single/double-container discrimination.
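The joint judgment could be sketched as a simple vote over the three cues the patent names: whether the detected area is a fusion of two 20-foot rectangles, whether box holes appear in the middle sub-region (the facing ends of two adjacent containers), and whether container-surface text is found there (continuous text suggesting a single long container). The voting rule below is an illustrative assumption, not the patent's actual decision logic:

```python
def is_double_container(is_fusion_area, middle_hole_count, middle_text_found):
    """Majority vote over three cues; True means two 20-ft containers."""
    votes = 0
    votes += 1 if is_fusion_area else 0         # cue 1: fused 20-ft rectangles
    votes += 1 if middle_hole_count >= 2 else 0  # cue 2: holes at the inner ends
    votes += 1 if not middle_text_found else 0   # cue 3: no continuous mid text
    return votes >= 2
```

Using all three cues jointly means a single misdetection (e.g. a missed hole) cannot flip the result on its own.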
  • Fig. 1 is a flowchart of a machine vision-based control method for quay cranes according to an embodiment of the present application.
  • Fig. 2 is a schematic diagram of installation arrangement of a camera device according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of container trucks captured by the cameras, their recognition results, and the deviations, according to an embodiment of the present application.
  • Fig. 4 is a specific flow chart of a machine vision-based quay crane control method according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of height estimation according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of vehicle head and two-dimensional code recognition according to an embodiment of the present application.
  • Fig. 7 is a schematic diagram of clipping and fusion of loaded container regions according to an embodiment of the present application.
  • Fig. 8 is a schematic diagram of clipping and fusion of frame regions according to an embodiment of the present application.
  • Fig. 9 is a schematic diagram of the working principle of the container truck in the quay crane according to the embodiment of the present application.
  • Fig. 10 is a block diagram of a machine vision-based quay crane control device according to an embodiment of the present application.
  • The machine vision-based quay crane control method includes: in step S102, calibrating the target parking position of the truck and the estimated height of the truck loaded with containers; in step S104, acquiring a body image, using the first target detection model to identify the target rectangular area of the container or frame in it, and calculating the initial positioning deviation distance based on the target rectangular area and the target parking position, wherein the target rectangular area is the minimum circumscribed rectangle; in step S106, cropping the body image at the target rectangular area of the container or frame to generate a body sub-image; in step S108, using the second target detection model to identify the box-hole coordinates and text on the container, or the frame guide plate coordinates on the frame, in the body sub-image; and in step S110, performing single/double-container judgment based on whether the target rectangular area of the container in the body sub-image is a fusion area and on the box-hole coordinates and text of the middle sub-region of the body sub-image. The subsequent steps, through S116, calculate the movement deviation distance from the box-hole or frame guide plate coordinates and guide the truck to the target parking position.
  • Through the initial positioning of the truck, the distance range within which the quay crane can guide and position the truck is increased; at the same time, the detection and positioning of small targets such as box holes or guide plates improves the positioning accuracy of the inner truck, realizing its precise guidance and positioning.
  • The single/double-container judgment combines multiple cues; this joint judgment avoids the misjudgments that arise from relying on a single cue, effectively improving discrimination accuracy and thus the operating efficiency of the quay crane in picking up and setting down containers.
  • Steps S102 to S116 of the machine vision-based quay crane control method will be described in detail with reference to FIG. 1.
  • A first, second, third, fourth, fifth, and sixth camera are installed at the isolation strips on opposite sides of the lane on the quay crane cross beam; the first to fourth cameras capture front images of the truck to confirm the vehicle identity and direction of travel, and the fifth and sixth cameras capture body images of the truck to calculate the initial positioning deviation distance and the movement deviation distance.
  • In step S102, the target parking position of the truck and the estimated height of the truck loaded with containers are calibrated.
  • Calibrating the target parking position of the truck and the estimated height of the truck loaded with containers further includes: pre-collecting a body image of the truck at the target parking position and identifying it, so as to obtain the image coordinates of the container and its box holes, or of the frame and its frame guide plates, at the target parking position.
  • In step S104, a body image is acquired, the first target detection model identifies the target rectangular area of the container or frame in it, and the initial positioning deviation distance is calculated based on the target rectangular area and the target parking position, wherein the target rectangular area is the minimum circumscribed rectangle.
  • Acquiring and using the first target detection model to identify the target rectangular area of the container or frame in the body image further includes: acquiring multiple historical body images from the database and labeling the container or frame in them; establishing a first Yolov5 neural network and training it on the labeled historical body images to obtain the first target detection model; and collecting the current body image in real time and using the first target detection model to identify the target rectangular area of the container or frame in it, so as to judge whether the operation is picking up or setting down a container, wherein whether a container is a 20-foot container is determined from the size of its target rectangular area, and the target rectangular areas of two 20-foot containers on the same truck are merged to form the fusion area.
  • Calculating the initial positioning deviation distance based on the target rectangular area and the target parking position further includes: calculating the truck's initial positioning deviation distance from the recognized target rectangular area of the container or frame in the current body image and the pre-acquired target parking position; and transmitting the initial positioning deviation distance over the network to the data receiving unit and displaying it on the LED display screen to guide the truck driver in adjusting the truck's position.
  • the initial positioning deviation distance of the truck is calculated by the following formula: offset = (y − y0) × D, where y represents the ordinate of the current vehicle-area image, y0 represents the ordinate of the pre-collected target parking position, and D represents the height-related actual pixel distance factor in mm/pixel; in other words, the factor D differs from one height level to another.
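As a minimal sketch (not the patent's implementation; the function name and sample numbers are assumptions), the initial positioning deviation reduces to a one-line computation:

```python
def initial_offset_mm(y: float, y0: float, d_mm_per_px: float) -> float:
    """Initial positioning deviation: offset = (y - y0) * D.

    y           -- ordinate (pixels) of the detected container/frame area
    y0          -- ordinate (pixels) of the pre-calibrated target parking position
    d_mm_per_px -- height-dependent actual pixel distance factor (mm/pixel)
    """
    return (y - y0) * d_mm_per_px

# Illustrative values: the detected area is 40 px past the target and one
# pixel spans 12.5 mm at this container height.
print(initial_offset_mm(540.0, 500.0, 12.5))  # prints 500.0
```

The sign of the result indicates whether the truck has overshot or undershot the target position.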
  • step S106 image cropping is performed on the target rectangular area of the container or frame in the vehicle body image to generate a body sub-image.
  • performing image cropping on the target rectangular area of the container or frame in the vehicle body image to generate the body sub-image further includes: cropping the historical body image or the current body image into a historical body sub-image or a current body sub-image, wherein, when the target rectangular area is a container area, the historical or current body sub-image includes a first upper sub-area, a first middle sub-area and a first lower sub-area, wherein box hole and text detection is performed on the first middle sub-area, and box hole detection is performed on the first upper sub-area image and the first lower sub-area image; and when the target rectangular area is a frame area, the historical or current body sub-image includes a second upper sub-area and a second lower sub-area, wherein frame guide plate detection is performed on the second upper sub-area and the second lower sub-area.
  • when the Y coordinate of the target area center is less than 1/3 of the image height, the cropping ratios (upper, middle, lower) are 0.15, 0.4, 0.45; when the Y coordinate is greater than 1/3 and less than 2/3, they are 0.25, 0.4, 0.35; and when the Y coordinate is greater than 2/3, they are 0.35, 0.4, 0.25. Since the middle sub-region keeps a fixed image ratio and is shifted up or down about the center of the entire image as needed, this cropping method guarantees that the box holes and text can be detected.
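The three-way ratio selection above can be sketched as follows; function names are assumptions, while the ratios themselves are taken directly from the text:

```python
def crop_ratios(center_y: float, image_h: float):
    """Pick (upper, middle, lower) crop ratios from the vertical position of
    the detected box area; the middle band is always 0.4 of the height."""
    frac = center_y / image_h
    if frac < 1 / 3:
        return (0.15, 0.40, 0.45)
    if frac < 2 / 3:
        return (0.25, 0.40, 0.35)
    return (0.35, 0.40, 0.25)

def split_rows(top: int, bottom: int, ratios):
    """Split a detected box area [top, bottom) into three row bands."""
    h = bottom - top
    u = top + round(h * ratios[0])
    m = u + round(h * ratios[1])
    return (top, u), (u, m), (m, bottom)
```

For example, a box area occupying rows 0 to 100 with the first ratio set splits into bands of 15, 40 and 45 rows.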
  • step S108 acquire and use the second target detection model to identify the box hole coordinates and text on the container in the vehicle body sub-image or the coordinates of the frame guide plate on the frame.
  • acquiring and using the second target detection model to identify the box hole coordinates on the container or the frame guide plate coordinates on the frame in the body sub-image further includes: labeling the box holes, text or frame guide plates in the historical body sub-images; establishing the second neural network Yolov5 and training it with the labeled historical body sub-images to obtain the second target detection model; and using the second target detection model to identify the box holes and text or the frame guide plates, and obtaining the box hole coordinates or frame guide plate coordinates.
  • step S110 single and double container judgment is performed based on whether the target rectangular area of the container in the body sub-image is a fusion area, that is, whether it is two 20-foot containers, together with the box hole coordinates and the text of the middle sub-area in the body sub-image. A total score is calculated from these cues by the following formula: score = weight0 × R0 + weight1 × R1 + weight2 × R2
  • where weight0 is the weight for the number of containers, weight1 the weight for text presence, and weight2 the weight for box hole presence; R0, R1 and R2 respectively indicate whether there are two containers, whether text is present, and whether box holes are present, taking the value 1 if present and 0 otherwise.
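A sketch of the weighted single/double vote, using the weights (0.4, 0.2, 0.4) and the 0.6 threshold stated elsewhere in the text; function and constant names are assumptions:

```python
WEIGHTS = (0.4, 0.2, 0.4)   # container count, text presence, box hole presence
THRESHOLD = 0.6

def is_double_container(two_boxes: bool, has_text: bool, has_holes: bool) -> bool:
    """score = w0*R0 + w1*R1 + w2*R2; double container if score > 0.6.
    Rounding avoids float artifacts exactly at the 0.6 boundary."""
    flags = (two_boxes, has_text, has_holes)
    score = round(sum(w * int(r) for w, r in zip(WEIGHTS, flags)), 6)
    return score > THRESHOLD
```

Note that a score of exactly 0.6 (e.g. two boxes detected but no holes or text beyond one cue) is treated as a single container, since the text requires the score to exceed the threshold.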
  • step S112 the distance and position of the box holes or frame guide plates in the lower sub-region image of the body sub-image are acquired based on the box hole coordinates or frame guide plate coordinates, so as to estimate the height of the container.
  • the height estimation is performed for trucks loaded with containers, taking into account the individual container height classes of 2.4, 2.6 and 2.9. The camera position is fixed, so the distance between the camera and the container differs with container height, which in turn causes the box-hole distance in the image to differ. For example, at height class 2.4 the box-hole distance in the image is the smallest, and at height class 2.9 it is the largest.
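One simple way to realize this (a sketch, not the patent's stated method) is nearest-match against calibrated expected hole distances; the pixel values below are purely illustrative assumptions, and the real mapping would come from the height calibration step:

```python
# (height class, expected box-hole pixel distance at that height) -- assumed values;
# per the text, the pixel distance grows with container height (2.4 < 2.6 < 2.9).
HEIGHT_CLASSES = [(2.4, 300.0), (2.6, 330.0), (2.9, 370.0)]

def estimate_height_class(hole_distance_px: float) -> float:
    """Pick the height class whose expected box-hole pixel distance is
    closest to the measured one."""
    return min(HEIGHT_CLASSES, key=lambda c: abs(c[1] - hole_distance_px))[0]
```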
  • step S114 the movement deviation distance is calculated based on the height of the container or the vehicle frame and the target straight line generated by the box hole or the frame guide plate, and the collection truck is guided to the target parking position according to the movement deviation distance.
  • calculating the movement deviation distance based on the height of the container or frame and the target straight line generated by the box holes or frame guide plates, and guiding the truck to the target parking position according to the movement deviation distance, further includes calculating the truck's movement deviation distance by the following formula: offset = |A·x0 + B·y0 + C| / √(A² + B²) × E, where (x0, y0) is the midpoint of the two currently detected box holes, A, B and C represent the parameters of the straight-line equation of the two box holes detected in advance, and E represents the height-related actual pixel distance factor.
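Under the definitions just given, the movement deviation is the scaled point-to-line distance from the box-hole midpoint to the calibrated target line. A minimal sketch (function name assumed):

```python
import math

def movement_offset_mm(x0: float, y0: float,
                       a: float, b: float, c: float,
                       e_mm_per_px: float) -> float:
    """Pixel distance from the box-hole midpoint (x0, y0) to the target
    line A*x + B*y + C = 0, scaled by the height-related factor E."""
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b) * e_mm_per_px

# Illustrative: target line y = 100 (A=0, B=1, C=-100), midpoint 40 px off,
# E = 10 mm/pixel -> 400 mm deviation.
print(movement_offset_mm(50.0, 140.0, 0.0, 1.0, -100.0, 10.0))  # prints 400.0
```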
  • step S116 when the spreader of the quay crane reaches directly above the container or the vehicle frame, adjust the form of the spreader of the quay crane according to the judged single or double container, and carry out the operation of grabbing or releasing the container.
  • the quay crane control device based on machine vision includes: a video processor 1002 and a control module 1018, and the video processor 1002 includes a calibration module 1004, an initial positioning module 1006, an image cropping module 1008, an identification module 1010, and single and double box judgment Module 1012, height estimation module 1014, movement deviation distance calculation module 1016, data receiving unit and LED display screen.
  • the camera is used for pre-collecting the vehicle body image of the target parking position of the collecting card, and intermittently collecting the vehicle body image.
  • the cameras comprise a first camera 201, a second camera 202, a third camera 203, a fourth camera 204, a fifth camera 205 and a sixth camera 206.
  • the first camera 201 to the fourth camera 204 are used to capture the front image of the collection truck to confirm the identity of the vehicle and the direction of movement of the vehicle; and the fifth camera 205 and the sixth camera 206 are used to capture the body image of the collection truck to calculate The initial positioning deviation distance and moving deviation distance of the truck.
  • the calibration module 1004 is configured to calibrate the target parking position of the truck and the estimated height of the truck loaded with containers.
  • the calibration module also includes: a target position calibration sub-module and a height estimation calibration sub-module.
  • the target position calibration sub-module is used to identify the body image of the target parking position, and obtain the image coordinates of the container and the corresponding box hole or the image coordinates of the vehicle frame and the corresponding frame guide plate at the target parking position to generate the target straight line L.
  • the height estimation and calibration sub-module uses the box hole image coordinates or the frame guide image coordinates to estimate the height-related pixel distance factor.
  • the initial positioning module 1006 is configured to obtain and use the first target detection model to identify the target rectangular area of the container or frame in the body image, and calculate the initial positioning deviation distance based on the target rectangular area and the target parking position, wherein the target rectangular area is the smallest bounding rectangle.
  • the initial positioning module 1006 includes a labeling submodule, a first target detection model, and a target rectangle generation submodule.
  • the first labeling submodule is used to acquire multiple historical vehicle body images from the database and label the containers or vehicle frames in the multiple historical vehicle body images.
  • the first object detection model is used to establish the first neural network Yolov5 and train the first neural network Yolov5 by using multiple marked historical body images to obtain the first object detection model.
  • the target rectangle generation sub-module is used to collect the current body image in real time, and use the first target detection model to identify the target rectangle area of the container or frame in the current body image.
  • the initial positioning module 1006 is used to calculate the truck's initial positioning deviation distance based on the recognized target rectangular area of the container or frame in the current body image and the pre-acquired target parking position, wherein the initial positioning deviation distance is calculated by the following formula: offset = (y − y0) × D, where y represents the ordinate of the current vehicle-area image, y0 represents the ordinate of the pre-collected target parking position, and D represents the height-related actual pixel distance factor.
  • the image cropping module 1008 is configured to perform image cropping on the target rectangular area of the container or vehicle frame detected in the vehicle body image to generate a body sub-image.
  • the image cropping module 1008 is used for cutting the historical body image or the current body image into a historical body sub-image or a current body sub-image, wherein, when the target rectangular area is a container area, the historical body sub-image or the current body sub-image includes the first upper part sub-areas, middle sub-areas and first lower sub-areas, wherein box hole and text detection is performed on the first middle sub-area, and box hole detection is performed on the first upper sub-area image and the first lower sub-area image; and When the target rectangular area is the frame area, the historical body sub-image or the current body sub-image includes a second upper sub-area and a second lower sub-area, wherein the frame guide plate is performed on the second upper sub-area and the second lower sub-area detection.
  • the recognition module 1010 is configured to obtain and use the second target detection model to recognize the box hole coordinates and text on the container in the body image or the frame guide plate coordinates on the frame.
  • the recognition module 1010 includes a third labeling submodule, a third target detection model, and a vehicle front and identity confirmation submodule.
  • the third labeling sub-module is used to acquire multiple historical vehicle front images from the database and label the vehicle front and the two-dimensional code in the historical vehicle front images.
  • the third object detection model is used to establish the third neural network Yolov5, and use the marked historical head images to train the third neural network Yolov5 to obtain the third object detection model.
  • the vehicle front and identity confirmation sub-module is used to collect the current vehicle front image in real time, and use the third target detection model to identify the collection truck head in the current vehicle front image and the QR code pasted on the collection truck head and confirm the identity code of the collection truck.
  • the recognition module 1010 also includes a second labeling submodule, a second target detection model, and a box hole and frame guide plate recognition submodule.
  • the second labeling sub-module is used to mark the box hole or the frame guide plate in the historical car body sub-image;
  • the second target detection model is used to establish the second neural network Yolov5 and train it with the labeled historical body sub-images to obtain the second target detection model;
  • the box hole and frame guide plate identification submodule is used to use the second target detection model to identify the box hole or frame guide plate in the current body sub-image, and obtain the box hole coordinates and frame guide coordinates.
  • the single and double box judging module 1012 is used to determine single and double boxes based on whether the target rectangular area of the container in the body sub-image is a fusion area, the box hole coordinates and the text of the middle sub-area in the body sub-image.
  • the height estimation module 1014 is configured to obtain the distance and position of the box hole or the frame guide plate of the lower sub-region image in the body sub-image based on the box hole coordinates or the frame guide plate coordinates to estimate the height of the container.
  • the movement deviation distance calculation module 1016 is used to calculate the movement deviation distance based on the height of the container or the vehicle frame and the target straight line generated by the box hole or the frame guide plate, so as to guide the collection truck to the target parking position according to the movement deviation distance.
  • the data receiving unit is located in the truck cab and is used to connect to the video processor through the network according to the truck's identity code.
  • the data receiving unit receives the initial positioning deviation distance of the truck through the network, and likewise receives the movement deviation distance through the network.
  • the LED display is located in the cab of the truck and connected to the data receiving unit; it is used to display the initial positioning deviation distance to guide the truck driver in adjusting the truck's position, and to display the movement deviation distance to continue guiding the driver until the movement deviation distance is less than the threshold, at which point the positioning guidance of the truck is complete.
  • the control module 1018 is used to adjust the shape of the spreader of the quay crane according to the judged single or double container when the spreader of the quay crane arrives directly above the container or the frame, and carry out the operation of grabbing or releasing the container.
  • the movement deviation distance calculation module is used to calculate the truck's movement deviation distance by the following formula: offset = |A·x0 + B·y0 + C| / √(A² + B²) × E, where (x0, y0) is the midpoint of the two currently detected box holes, A, B and C represent the parameters of the straight-line equation of the two box holes detected in advance, and E represents the height-related actual pixel distance factor.
  • the technical problem to be solved by this application is long-distance guided positioning of the internal truck: initial positioning of the internal truck increases the positioning distance range, while effective height estimation of the truck's container and detection of the features shared by the frame and the container (guide plates or box holes) complete the precise positioning of the truck; at the same time, the driving direction of the truck is determined by recognizing the truck front, so as to better guide the truck driver.
  • the middle image area of the container can be effectively obtained, and the container detection results, the text detection results on the container, and the number and distance information of the box holes can be used jointly to determine single and double containers, thereby effectively improving the single/double container discrimination accuracy.
  • This application provides a machine vision-based quay crane internal truck positioning system and single/double container discrimination method, in which intelligent video analysis technology is used to effectively identify and analyze the container area, box holes, text, frame and frame guide plates on the internal truck, so as to realize positioning of the truck under the quay crane and judgment of single and double containers.
  • the special two-dimensional code logo on the head of the truck is identified to confirm the vehicle identity and determine the direction of the vehicle.
  • the specific steps of the method are as follows:
  • the front cameras serve to confirm the identity and moving direction of the vehicle, while the body cameras are mainly responsible for calculating the positioning deviation distance of the internal truck and judging single and double containers.
  • An LED display and a data receiving unit are installed in the cab of the inner truck to display the current deviation distance and direction of the inner truck.
  • the data is analyzed and identified to obtain the image coordinates of the box body and box holes, or of the frame and frame guide plates, of the vehicle at the target position, together with the actual distances represented by each pixel at the corresponding height, A mm/pixel and E mm/pixel.
  • the target recognition algorithm YoloV5 is used to identify the container or empty-frame area on the internal truck; the size of the recognized container area determines whether it is 20 feet or not, and then, according to certain conditions, the 20-foot boxes on the same truck are merged. At the same time, the identified category (container or frame) determines whether the operation is container grabbing or container placing. The SORT (Simple Online and Realtime Tracking) target tracking method is used to track the internal truck area.
  • in step 2, the y coordinate of the tracked box-area center is compared with the pre-acquired target position coordinate, and the initial positioning deviation distance of the truck is calculated as follows: offset = (y − y0) × D, where y represents the ordinate of the current vehicle-area image, y0 represents the ordinate of the pre-collected target position, and D represents the actual pixel distance factor.
  • the calculation result is sent to the data receiving unit of the corresponding driver through the network, and displayed on the LED display screen to guide the driver to adjust the position of the internal collection card.
  • image cropping is performed on the detected container area or frame area of the internal collection truck.
  • for the container area, the cropped region is divided into three sub-area images: the middle cropping area is used for box hole and text detection, while the other two areas detect only box holes; for the frame area, the image is cropped into two areas, and both areas detect the frame guide plates.
  • based on whether the currently loaded container area of the truck is a fusion area, that is, whether it is two 20-foot containers, combined with the box hole and text detection information in the middle sub-area, single and double containers are determined by the weighted score: score = weight0 × R0 + weight1 × R1 + weight2 × R2, where:
  • weight 0 indicates the weight of container quantity (0.4)
  • weight 1 indicates whether there is text weight (0.2)
  • weight 2 indicates whether there is box hole weight (0.4)
  • R 0 indicates whether there are two containers
  • R 1 indicates whether there is text
  • R 2 indicates whether there is a box hole, 1 if it is present, and 0 if it is not.
  • if the total score is greater than a certain threshold (0.6), the load is judged to be a double container; otherwise, a single container.
  • in the movement deviation formula offset = |A·x0 + B·y0 + C| / √(A² + B²) × E, (x0, y0) is the midpoint of the two currently detected box holes; A, B and C represent the parameters of the straight-line equation of the two box holes detected in advance; and E represents the height-related pixel distance factor.
  • the calculation result is sent to the data receiving unit of the corresponding driver through the network, and displayed on the LED display screen to guide the driver to adjust the position of the internal collection card.
  • in step 11, the adjustment and calculation are iterated continuously; when the deviation distance offset is less than a certain threshold, the positioning guidance of the internal truck is complete.
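The iterative guidance in this step can be sketched as a simple loop; `read_offset` and `display` are hypothetical callables standing in for the detection pipeline and the LED display, and the 50 mm stop threshold is an assumed value for the text's "certain threshold":

```python
def guide_until_parked(read_offset, display, threshold_mm=50.0, max_iters=1000):
    """Repeatedly read the deviation and show it to the driver until it
    drops below the stop threshold; returns True once parked."""
    for _ in range(max_iters):
        offset = read_offset()
        display(offset)
        if abs(offset) < threshold_mm:
            return True
    return False
```

In the real system each iteration would re-run detection on a fresh camera frame before recomputing the offset.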
  • the empty frame is parked at the target position of each lane for internal trucks of different box types (front 20 feet, rear 20 feet, middle 20 feet, 40 feet, 45 feet), i.e., positions where the spreader can accurately place the box; the center of the frame's rectangular area and the image coordinates of the corresponding frame guide plates are then saved for the different box positions, box types, lane information, etc., and the box hole size is used to estimate the pixel distance A mm/pixel. The same operation is performed on the empty frame to generate the target straight line L and the estimated pixel distance factor E mm/pixel.
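The calibration quantities described here (a pixel factor from a feature of known physical size, and the target line L through the two calibrated box-hole centers) can be sketched as follows; the function names and example numbers are assumptions:

```python
def pixel_factor_mm_per_px(real_size_mm: float, measured_px: float) -> float:
    """Estimate the mm/pixel factor at the current height from a feature of
    known physical size, e.g. the box-hole width."""
    return real_size_mm / measured_px

def fit_target_line(p1, p2):
    """Line A*x + B*y + C = 0 through the two calibrated box-hole centers."""
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1
    b = x1 - x2
    c = -(a * x1 + b * y1)
    return a, b, c
```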
  • Internal truck guidance and single/double container identification: the current ID number of the internal truck is confirmed from real-time video data, the truck's deviation is calculated, the guidance information is sent over the network to the truck cab and shown on the LED display, and at the same time single and double containers on the internal truck are detected and identified, with the identification results sent to the quay crane control system.
  • the pre-trained target detection algorithm Yolov5-0 (whose model outputs two categories, namely the container on the truck and the frame) and target detection algorithm Yolov5-1 (whose model outputs three categories, namely box holes, frame guide plates, and text) are used. Yolov5-0 performs target recognition, and the recognized category determines whether the operation is grabbing or placing a box; the size of the rectangle then determines whether it is a 20-foot box or not. Next, according to certain constraints, the 20-foot boxes are fused: it is judged whether the width and height of the circumscribed rectangle of any two rectangular areas in the image are within a certain range, and if the conditions are met, the two rectangular areas are merged into one area. The SORT tracking algorithm is then used for target tracking.
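The fusion constraint can be sketched as below; for brevity only the width ratio of the common bounding rectangle is checked, and the (1.8, 2.2) range is an assumed placeholder for the text's "certain range":

```python
def maybe_fuse(r1, r2, ratio_range=(1.8, 2.2)):
    """Merge two 20-foot box rectangles (x1, y1, x2, y2) into one fusion
    area if their common bounding rectangle is roughly twice as wide as
    either box, i.e. about the footprint of a 40-foot load."""
    x1 = min(r1[0], r2[0]); y1 = min(r1[1], r2[1])
    x2 = max(r1[2], r2[2]); y2 = max(r1[3], r2[3])
    w1 = r1[2] - r1[0]
    lo, hi = ratio_range
    if lo * w1 <= (x2 - x1) <= hi * w1:
        return (x1, y1, x2, y2)
    return None
```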
  • the cropped images are recognized using the YoloV5-1 target detection algorithm; cropping increases the effective resolution of the target recognition samples and thus enables better recognition of small objects such as box holes, text or guide plates.
  • YoloV5 is a deep-learning neural network target recognition algorithm: it learns offline from samples to train a model, and then recognizes the specified targets in images collected in real time.
  • Box hole analysis: if fewer than two box holes are detected, it is considered that no box holes were detected; otherwise, the detected box holes are sorted by X coordinate, and if the sorted minimum value lies within a certain threshold range of the expected minimum value in the X direction, the box holes are considered detected.
  • Text analysis: if text is detected in the middle sub-area and the height and width of its rectangle fall within a certain range, the text on the upper surface of the container is considered detected.
  • the weight parameters weight0, weight1 and weight2 are 0.4, 0.2 and 0.4 respectively. If the total score is greater than 0.6, the load is considered a double box; otherwise, a single box.
  • through the continuous iteration of step 8, the guidance and positioning of the internal truck is completed.
  • the internal truck is guided and positioned according to the actual deviation distance, and when the spreader of the quay crane reaches the container or frame, the quay crane control system adjusts the form of the spreader according to the judged single or double container for grabbing or placing the box.
  • the industrial control computer stores the AI algorithm; its main function is to collect the video image information of cameras 1 to 6 and to run the internal truck guidance/positioning and single/double box discrimination algorithm software.
  • the PLC mainly receives the single/double box discrimination result from the video processor and judges whether the load is a single box or a double box.
  • the main purpose of the video processor receiving the operation information of the quay crane is to judge more accurately whether an internal truck is present under the quay crane and whether the guidance operation and single/double container discrimination need to be run.
  • video processor includes various devices, devices, and machines for processing data, for example, a video processor includes a programmable processor, a computer, multiple processors, or multiple computers, and the like.
  • the apparatus may include code that creates an execution environment for the computer program in question, for example code constituting processor firmware, a protocol stack, a database management system, an operating system, a runtime environment, or a combination of one or more thereof.
  • the methods and logic flows described in this specification can be performed by one or more programmable processors, wherein the programmable processors execute one or more computer programs to perform these functions by operating on surveillance video and generating object detection results.
  • the computer also includes or is operably coupled to one or more mass storage devices (e.g., magnetic, magneto-optical, or optical disks) for storing historical video data and data sets, to receive data from the mass storage device or to transmit data to the mass storage device, or both.
  • Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and storage devices including, for example: semiconductor memory devices such as EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically erasable programmable read-only memory) and flash memory devices; magnetic disks, such as internal or removable hard disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processes of the methods in the above embodiments can be implemented by instructing related hardware through computer programs, and the programs can be stored in a computer-readable storage medium.
  • the computer-readable storage medium is a magnetic disk, an optical disk, a read-only memory or a random access memory, and the like.

Abstract

A container truck guidance and single/double-container identification method and apparatus based on machine vision, which method and apparatus belong to the technical field of operation assistance for quayside container cranes at harbors, and solve the problem of existing methods of the efficiency of container grabbing/placing operations of quayside container cranes being low due to the container truck positioning precision being low and the single/double container determination accuracy being insufficient. The method comprises: calibrating a target parking position of a container truck and the height estimation of the container truck loaded with a container; calculating an initial positioning deviation distance on the basis of a target rectangular area and the target parking position; performing image cropping on a vehicle body image, so as to generate a vehicle body sub-image; identifying container hole coordinates and characters or frame guide plate coordinates by using a second target detection model; performing single/double container determination on the basis of whether the target rectangular area is a fusion area, and the container hole coordinates and the characters; estimating the height of the container; calculating a movement deviation distance, so as to guide the container truck to the target parking position; and adjusting the shape of a spreader according to determined single/double containers, so as to perform a container grabbing or placing operation. The container truck positioning precision and the single/double container determination accuracy are high, thereby improving the efficiency of container grabbing/placing operations of a quayside container crane.

Description

A machine vision-based container truck guidance and single/double container identification method and device

Technical Field

The present application relates to the technical field of port quay crane operation assistance, and in particular to a machine vision-based quay crane control method and device.

Background Art
Containers are an important carrier in modern logistics and transportation, and the efficiency of container loading and unloading at ports directly affects the efficiency of the entire logistics chain. The quay crane, as the name implies, is the bridge-type equipment used at the quayside, an important working tool for unloading containers from a ship to the wharf or loading them from the wharf onto a ship. During loading and unloading, the quay crane gantry remains stationary while the crane driver controls the trolley and spreader through a handle to grab containers from, or place containers onto, trucks in the port. Since the truck driver has no accurate target guidance, every time an internal truck grab or release operation is performed, the truck driver must rely on manual observation or guidance from external personnel once the spreader reaches a position directly above the internal truck's frame or container. On the other hand, the quay crane driver also has to concentrate on observing the single/double container status of the current vehicle in order to control the different forms of the spreader. This not only greatly reduces container operation efficiency but also increases the workload of both truck and quay crane drivers. "Container truck" (truck for short) is divided into internal trucks and external trucks: an internal truck runs inside the container port, while an external truck travels from outside into the container port.
Traditionally, lidar scanning is used to identify the truck's container or frame, but lidar is expensive, single-function, and its accuracy cannot be effectively guaranteed.
An existing safe-positioning method for quay cranes and trucks first calibrates the vehicle stopping point with a camera. When a vehicle enters the recognition area, the image is cropped to a fixed range around the calibrated stopping point, a Mask-RCNN algorithm identifies and segments the region of the truck's container or frame, the center point of that region is obtained, and its Euclidean distance to the pre-calibrated stopping point is computed to guide the vehicle to the correct position.
This existing method has the following problems:
1. The method assumes the vehicle has already driven close to the exact target stopping position, so it cannot effectively segment a truck whose deviation is large, i.e., beyond the preset range, and therefore cannot guide such a truck accurately.
2. When the container height changes because of different container specifications, the method cannot effectively estimate the height of the container on the current truck, which introduces a guidance error.
3. Because of the complex shape of an internal truck frame, the method has difficulty segmenting the frame effectively and accurately, which also introduces a guidance error.
4. Because the method must segment and binarize truck containers or frames under all working conditions and weather, it requires a large amount of annotated contour mask data for containers and frames, at a high labeling and development cost; the segmentation algorithm is also time-consuming to run, causing processing delay or increased hardware cost.
5. The method does not specify how to determine the direction in which the vehicle should move, i.e., whether it should drive forward or reverse.
In the machine-vision part of the existing double-container detection method, a camera photographs the middle of the container, a box-hole recognition model identifies box holes (corner-casting holes) in the resulting image, and the presence or absence of holes determines whether a single or double container is present.
This existing method has the following problems:
1) It determines double containers solely from whether box holes are recognized in the middle image, without handling missed or false hole detections, so double containers can be misjudged and the discrimination accuracy is reduced.
2) It cannot guarantee the accuracy of data acquisition, i.e., it cannot guarantee that the captured image is exactly the middle region of the container, which also leads to misjudgment.
Summary of the Invention
In view of the above analysis, embodiments of the present application aim to provide a machine-vision-based quay crane control method and apparatus, to solve the problems of low truck positioning accuracy and insufficient single/double-container discrimination accuracy in existing methods, which lead to low efficiency of quay crane grab and release operations.
In one aspect, an embodiment of the present application provides a machine-vision-based quay crane control method, comprising: calibrating the target parking position of a truck and a height estimate for a container-loaded truck; acquiring a body image and identifying, with a first target detection model, the target rectangular region of the container or frame in the body image, and computing an initial positioning deviation distance based on the target rectangular region and the target parking position, wherein the target rectangular region comprises a minimum bounding rectangle; cropping the target rectangular region of the container or frame in the body image to generate a body sub-image; identifying, with a second target detection model, the box-hole coordinates and text on the container, or the frame guide-plate coordinates on the frame, in the body sub-image; performing single/double-container judgment based on whether the target rectangular region of the container in the body sub-image is a fused region and on the box-hole coordinates and text in the middle sub-region of the body sub-image; obtaining, from the box-hole coordinates or the frame guide-plate coordinates, the distance and position of the box holes or frame guide plates in the lower sub-region image of the body sub-image to estimate the height of the container; computing a movement deviation distance based on the height of the container or frame and a target straight line generated from the box holes or frame guide plates, and guiding the truck to the target parking position according to the movement deviation distance; and when the spreader of the quay crane arrives directly above the container or the frame, adjusting the spreader configuration of the quay crane according to the single/double-container judgment to perform the grab or release operation.
The above technical solution has the following beneficial effects. The initial truck positioning extends the distance range over which a truck can be guided within the quay crane, while the detection and localization of small targets such as box holes and guide plates improves the positioning accuracy of internal trucks, achieving precise guidance and positioning. The single/double-container judgment combines the number of containers, the box holes, and the text on the upper surface of the containers; this joint judgment avoids misjudgments caused by relying on a single source of information and effectively improves discrimination accuracy. The efficiency of quay crane grab and release operations is therefore improved.
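The joint judgment described above can be sketched as a simple vote across the three cues. This is only an illustrative sketch: the voting rule, thresholds, and function interface are assumptions, not the patent's specified logic.

```python
def judge_double_container(is_fused_region, middle_holes, middle_text_boxes):
    """Joint single/double-container decision over three independent cues.

    is_fused_region: True if two 20-ft container boxes were detected and
        merged into one fused region (cue 1: container count).
    middle_holes: box holes detected in the middle sub-region, where the
        facing corner castings of two 20-ft containers would appear (cue 2).
    middle_text_boxes: text regions detected on the container tops in the
        middle sub-region; each container carries its own markings (cue 3).

    The majority vote below is an assumed combination rule, chosen so that
    a single missed or false detection cannot flip the result.
    """
    votes = 0
    votes += 1 if is_fused_region else 0
    votes += 1 if len(middle_holes) >= 2 else 0
    votes += 1 if len(middle_text_boxes) >= 2 else 0
    return votes >= 2
```

Combining the cues this way is what distinguishes the method from the prior art, which relied on box holes alone.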
According to a further improvement of the above method, calibrating the target parking position of the truck and the height estimate of the container-loaded truck further comprises: pre-collecting a body image of the truck at the target parking position and recognizing it to obtain the image coordinates of the container and its box holes, or of the frame and its guide plates, at the target parking position; and generating a target straight line from the box-hole or frame guide-plate image coordinates and estimating a height-dependent pixel distance factor.
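One way to realize the height-dependent pixel distance factor is to divide the known physical spacing between two box holes (available from the container specification) by their detected pixel spacing. The following is a minimal sketch under that assumption; the function name and interface are illustrative, not from the patent:

```python
def pixel_distance_factor(hole_a, hole_b, real_spacing_m):
    """Estimate metres-per-pixel from two detected box-hole centres.

    hole_a, hole_b: (x, y) image coordinates of the two hole centres.
    real_spacing_m: physical distance between the holes, assumed known
        from the container specification.
    """
    dx = hole_b[0] - hole_a[0]
    dy = hole_b[1] - hole_a[1]
    pixel_dist = (dx * dx + dy * dy) ** 0.5
    if pixel_dist == 0:
        raise ValueError("coincident hole detections")
    return real_spacing_m / pixel_dist
```

Because the pixel spacing of the holes shrinks as the container top moves away from the camera, this factor implicitly encodes the container height, which is why it must be re-estimated per truck rather than calibrated once.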
According to a further improvement of the above method, acquiring and using the first target detection model to identify the target rectangular region of the container or frame in the body image further comprises: obtaining multiple historical body images from a database and annotating the containers or frames in them; building a first YOLOv5 neural network and training it with the annotated historical body images to obtain the first target detection model; and collecting the current body image in real time and identifying, with the first target detection model, the target rectangular region of the container or frame in the current body image for the grab/release decision, wherein the container is judged to be 20-foot or non-20-foot according to the size of its target rectangular region, and the target rectangular regions of two 20-foot containers on the same truck are merged to form the fused region.
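The size-based 20-foot classification and region fusion can be sketched as follows, with detections as (x1, y1, x2, y2) pixel rectangles. The width threshold is an assumed, camera-dependent value, and the interface is illustrative:

```python
def classify_and_fuse(detections, twenty_ft_max_width_px=400):
    """Flag 20-ft containers by bounding-box width and, when two 20-ft
    boxes are found on the same truck, merge them into one fused region
    whose bounds enclose both.

    detections: list of (x1, y1, x2, y2) container boxes on one truck.
    twenty_ft_max_width_px: assumed width cutoff separating 20-ft from
        longer containers at this camera geometry.
    """
    twenty = [d for d in detections if d[2] - d[0] <= twenty_ft_max_width_px]
    if len(twenty) == 2:
        a, b = twenty
        return (min(a[0], b[0]), min(a[1], b[1]),
                max(a[2], b[2]), max(a[3], b[3]))
    return detections[0] if detections else None
```

The fused region later serves as one of the cues for the single/double-container judgment.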
According to a further improvement of the above method, cropping the target rectangular region of the container or frame in the body image to generate a body sub-image further comprises: cropping the historical body image or the current body image into a historical or current body sub-image, wherein, when the target rectangular region is a container region, the historical or current body sub-image comprises a first upper sub-region, a middle sub-region, and a first lower sub-region, box-hole and text detection being performed on the middle sub-region and box-hole detection on the first upper and first lower sub-region images; and when the target rectangular region is a frame region, the historical or current body sub-image comprises a second upper sub-region and a second lower sub-region, on both of which frame guide-plate detection is performed.
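The cropping into sub-regions might look like the following sketch. The equal-thirds (or halves) split is an illustrative assumption; in practice the boundaries would be tuned to the camera geometry:

```python
def crop_subregions(image_rows, box, is_container):
    """Split the detected target rectangle into the sub-regions described
    above.

    image_rows: the image as a list of pixel rows.
    box: (x1, y1, x2, y2) target rectangle in image coordinates.
    is_container: True for a container region (three sub-regions),
        False for a frame region (two sub-regions).
    """
    x1, y1, x2, y2 = box
    roi = [row[x1:x2] for row in image_rows[y1:y2]]
    h = len(roi)
    if is_container:
        return {"upper": roi[: h // 3],
                "middle": roi[h // 3: 2 * h // 3],
                "lower": roi[2 * h // 3:]}
    return {"upper": roi[: h // 2], "lower": roi[h // 2:]}
```

Restricting each detector to its sub-region keeps the second-stage targets (holes, guide plates, text) large relative to the crop, which is what makes the small-target detection tractable.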
According to a further improvement of the above method, acquiring and using the second target detection model to identify the box-hole coordinates on the container or the frame guide-plate coordinates on the frame in the body sub-image further comprises: annotating the box holes or frame guide plates in the historical body sub-images; building a second YOLOv5 neural network and training it with the annotated historical body sub-images to obtain the second target detection model; and identifying, with the second target detection model, the box holes or frame guide plates in the current body sub-image, and obtaining the box-hole coordinates and text and the frame guide-plate coordinates.
According to a further improvement of the above method, a first camera, a second camera, a third camera, a fourth camera, a fifth camera, and a sixth camera are installed on the isolation strips on opposite sides of the lane on the quay crane beam, wherein the first through fourth cameras capture head images of the truck to confirm the vehicle identity and direction of movement, and the fifth and sixth cameras capture body images of the truck to compute the truck's initial positioning deviation distance and movement deviation distance.
According to a further improvement of the above method, after calibrating the target parking position of the truck and the height estimate of the container-loaded truck, the method further comprises: obtaining multiple historical head images from the database and annotating the truck head and the two-dimensional code in them; building a third YOLOv5 neural network and training it with the annotated historical head images to obtain a third target detection model; collecting the current head image in real time, identifying with the third target detection model the truck head and the two-dimensional code pasted on it, and confirming the truck identity code; and connecting, over the network and according to the truck identity code, to the data receiving unit in the cab of the corresponding truck.
According to a further improvement of the above method, computing the initial positioning deviation distance based on the target rectangular region and the target parking position further comprises: computing the truck's initial positioning deviation distance from the identified target rectangular region of the container or frame in the current body image and the pre-acquired target parking position; and transmitting the initial positioning deviation distance over the network to the data receiving unit and displaying it on an LED screen to guide the truck driver in adjusting the truck's position, wherein the initial positioning deviation distance is computed as

offset = D(y − y₀)

where y is the ordinate of the current vehicle region in the image, y₀ is the ordinate of the pre-acquired target parking position, and D is the height-dependent real pixel distance factor.
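In code, the initial deviation offset = D(y − y₀) is a one-line scaling of the pixel offset. The sign convention (positive meaning the truck has passed the target ordinate) is an assumption here and depends on the camera orientation:

```python
def initial_offset(y_current, y_target, pixel_distance_factor):
    """Signed real-world deviation of the truck from the calibrated stop
    position along the lane.

    y_current: ordinate of the detected vehicle region in the image.
    y_target: pre-calibrated ordinate of the target parking position.
    pixel_distance_factor: D, metres per pixel at the current height.
    """
    return pixel_distance_factor * (y_current - y_target)
```

The magnitude tells the driver how far to move and the sign tells the driver which way, which addresses problem 5 of the prior art.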
According to a further improvement of the above method, computing the movement deviation distance based on the height of the container or frame and the target straight line generated from the box holes or frame guide plates, and guiding the truck to the target parking position according to the movement deviation distance, further comprises: computing the truck's movement deviation distance as

offset = E · |A·x₀ + B·y₀ + C| / √(A² + B²)

where (x₀, y₀) is the midpoint of the two currently detected box holes, A, B, and C are the parameters of the line equation through the two pre-detected box holes, and E is the height-dependent real pixel distance factor; and sending the movement deviation distance over the network to the data receiving unit and displaying it on the LED screen, continuing to guide the driver in adjusting the truck's position until the movement deviation distance falls below a threshold, at which point the truck positioning guidance is complete.
In another aspect, an embodiment of the present application provides a machine-vision-based quay crane control apparatus, comprising a video processor and a control module, the video processor comprising a calibration module, an initial positioning module, an image cropping module, a recognition module, a single/double-container judgment module, a height estimation module, and a movement deviation distance computation module, wherein: the calibration module is configured to calibrate the target parking position of a truck and the height estimate of a container-loaded truck; the initial positioning module is configured to acquire a body image, identify with the first target detection model the target rectangular region of the container or frame in the body image, and compute an initial positioning deviation distance based on the target rectangular region and the target parking position, the target rectangular region comprising a minimum bounding rectangle; the image cropping module is configured to crop the target rectangular region of the container or frame detected in the body image to generate a body sub-image; the recognition module is configured to identify with the second target detection model the box-hole coordinates and text on the container, or the frame guide-plate coordinates on the frame, in the body sub-image; the single/double-container judgment module is configured to judge single or double containers based on whether the target rectangular region of the container in the body sub-image is a fused region and on the box-hole coordinates and text of the middle sub-region of the body sub-image; the height estimation module is configured to obtain, from the box-hole or frame guide-plate coordinates, the distance and position of the box holes or frame guide plates in the lower sub-region image of the body sub-image to estimate the container height; the movement deviation distance computation module is configured to compute a movement deviation distance based on the height of the container or frame and the target straight line generated from the box holes or frame guide plates, so as to guide the truck to the target parking position according to the movement deviation distance; and the control module is configured to, when the spreader of the quay crane arrives directly above the container or the frame, adjust the spreader configuration of the quay crane according to the single/double-container judgment and perform the grab or release operation.
According to a further improvement of the above apparatus, the apparatus comprises a camera configured to pre-collect the body image of the truck at the target parking position and to collect body images intermittently; the calibration module further comprises a target position calibration sub-module and a height estimation calibration sub-module, wherein the target position calibration sub-module is configured to recognize the body image at the target parking position and obtain the image coordinates of the container and its box holes, or of the frame and its guide plates, at the target parking position to generate the target straight line; and the height estimation calibration sub-module is configured to estimate the height-dependent pixel distance factor from the box-hole or frame guide-plate image coordinates.
According to a further improvement of the above apparatus, the initial positioning module comprises a first annotation sub-module, the first target detection model, and a target rectangle generation sub-module, wherein the first annotation sub-module is configured to obtain multiple historical body images from the database and annotate the containers or frames in them; the first target detection model is built as a first YOLOv5 neural network and trained with the annotated historical body images; and the target rectangle generation sub-module is configured to collect the current body image in real time and identify, with the first target detection model, the target rectangular region of the container or frame in the current body image for the grab/release decision, wherein the container is judged to be 20-foot or non-20-foot according to the size of its target rectangular region, and the target rectangular regions of two 20-foot containers on the same truck are merged to form the fused region.
According to a further improvement of the above apparatus, the image cropping module is configured to crop the historical body image or the current body image into a historical or current body sub-image, wherein, when the target rectangular region is a container region, the historical or current body sub-image comprises a first upper sub-region, a middle sub-region, and a first lower sub-region, box-hole and text detection being performed on the middle sub-region and box-hole detection on the first upper and first lower sub-region images; and when the target rectangular region is a frame region, the historical or current body sub-image comprises a second upper sub-region and a second lower sub-region, on both of which frame guide-plate detection is performed.
According to a further improvement of the above apparatus, the recognition module comprises a second annotation sub-module, the second target detection model, and a box-hole and frame guide-plate recognition sub-module, wherein the second annotation sub-module is configured to annotate the box holes or frame guide plates in the historical body sub-images; the second target detection model is built as a second YOLOv5 neural network and trained with the annotated historical body sub-images; and the box-hole and frame guide-plate recognition sub-module is configured to identify, with the second target detection model, the box holes and text or the frame guide plates in the current body sub-image, and obtain the box-hole coordinates or frame guide-plate coordinates.
According to a further improvement of the above apparatus, the camera comprises a first camera, a second camera, a third camera, a fourth camera, a fifth camera, and a sixth camera, installed on the isolation strips on opposite sides of the lane on the quay crane beam, wherein the first through fourth cameras are configured to capture head images of the truck to confirm the vehicle identity and direction of movement, and the fifth and sixth cameras are configured to capture body images of the truck to compute the truck's initial positioning deviation distance and movement deviation distance.
According to a further improvement of the above apparatus, the machine-vision-based quay crane control apparatus further comprises a data receiving unit, and the recognition module further comprises a third annotation sub-module, a third target detection model, and a head and identity confirmation sub-module, wherein the third annotation sub-module is configured to obtain multiple historical head images from the database and annotate the truck head and the two-dimensional code in them; the third target detection model is built as a third YOLOv5 neural network and trained with the annotated historical head images; the head and identity confirmation sub-module is configured to collect the current head image in real time, identify with the third target detection model the truck head and the two-dimensional code pasted on it, and confirm the truck identity code; and the data receiving unit is located in the truck cab and connects to the video processor over the network according to the truck identity code.
According to a further improvement of the above apparatus, the machine-vision-based quay crane control apparatus further comprises an LED display screen; the initial positioning module is configured to compute the truck's initial positioning deviation distance from the identified target rectangular region of the container or frame in the current body image and the pre-acquired target parking position, the initial positioning deviation distance being computed as

offset = D(y − y₀)

where y is the ordinate of the current vehicle region in the image, y₀ is the ordinate of the pre-acquired target parking position, and D is the height-dependent real pixel distance factor; the data receiving unit receives the initial positioning deviation distance over the network; and the LED display screen, located in the truck cab and communicatively connected to the data receiving unit, displays the initial positioning deviation distance to guide the truck driver in adjusting the truck's position.
According to a further improvement of the above apparatus, the movement deviation distance computation module is configured to compute the truck's movement deviation distance as

offset = E · |A·x₀ + B·y₀ + C| / √(A² + B²)

where (x₀, y₀) is the midpoint of the two currently detected box holes, A, B, and C are the parameters of the line equation through the two pre-detected box holes, and E is the height-dependent real pixel distance factor; the data receiving unit receives the movement deviation distance over the network; and the LED display screen displays the movement deviation distance, continuing to guide the driver in adjusting the truck's position until the movement deviation distance falls below a threshold, at which point the truck positioning guidance is complete.
Compared with the prior art, the present application can achieve at least one of the following beneficial effects:
1. By installing cameras on the quay crane beam, the present application proposes a method that performs initial truck positioning by detecting the region of the truck's container or frame, followed by precise positioning through features common to containers and frames (box holes and guide plates). The initial positioning extends the distance range over which trucks can be guided within the quay crane, while the detection and localization of small targets such as box holes and guide plates improves positioning accuracy, achieving precise guidance and positioning of internal trucks.
2. The present application adopts a height estimation method based on the distance between container box holes. This method effectively distinguishes guidance errors caused by different container heights; by estimating the height of the container-loaded truck, it adapts to operating conditions with various container heights and improves the adaptability of the system.
3. The present application determines single or double containers by combining the number of containers, the box holes, and the text on the upper surface of the containers. This joint judgment avoids misjudgments caused by relying on a single source of information and effectively improves the accuracy of single/double-container discrimination.
In the present application, the above technical solutions can also be combined with one another to achieve further preferred combinations. Additional features and advantages of the application will be set forth in the description that follows; some advantages will be apparent from the description or may be learned by practice of the application. The objectives and other advantages of the application may be realized and attained by what is particularly pointed out in the description and the accompanying drawings.
Description of the Drawings
The drawings are for the purpose of illustrating specific embodiments only and are not to be considered limiting of the application; throughout the drawings, like reference numerals refer to like parts.
Fig. 1 is a flowchart of a machine-vision-based quay crane control method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of the installation arrangement of the camera devices according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a container truck captured by a camera, its recognition results, and the deviations, according to an embodiment of the present application.
Fig. 4 is a detailed flowchart of the machine-vision-based quay crane control method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of height estimation according to an embodiment of the present application.
Fig. 6 is a schematic diagram of truck head and QR code recognition according to an embodiment of the present application.
Fig. 7 is a schematic diagram of cropping and fusion of the loaded-container region according to an embodiment of the present application.
Fig. 8 is a schematic diagram of cropping and fusion of the frame region according to an embodiment of the present application.
Fig. 9 is a schematic diagram of the working principle of an internal container truck within the quay crane according to an embodiment of the present application.
Fig. 10 is a block diagram of a machine-vision-based quay crane control device according to an embodiment of the present application.
Detailed Description
Preferred embodiments of the application are described below in conjunction with the accompanying drawings, wherein the drawings form a part of the application and, together with the embodiments, serve to explain the principles of the application; they are not intended to limit its scope.
A specific embodiment of the present application discloses a machine-vision-based quay crane control method. As shown in Fig. 1, the method includes: in step S102, calibrating the target parking position of the container truck and the height estimation for a truck loaded with containers; in step S104, obtaining and using a first target detection model to identify the target rectangular region of the container or frame in a vehicle body image, and calculating an initial positioning deviation distance based on the target rectangular region and the target parking position, wherein the target rectangular region includes the minimum bounding rectangle; in step S106, cropping the target rectangular region of the container or frame in the body image to generate a body sub-image; in step S108, obtaining and using a second target detection model to identify the box-hole coordinates and text on the container, or the frame guide plate coordinates on the frame, in the body sub-image; in step S110, judging single versus double containers based on whether the target rectangular region of the container in the body sub-image is a fused region and on the box-hole coordinates and text in the middle sub-region of the body sub-image; in step S112, obtaining the distance and position of the box holes or frame guide plates in the lower sub-region of the body sub-image based on the box-hole or frame guide plate coordinates, in order to estimate the height of the container; in step S114, calculating a movement deviation distance based on the height of the container or frame and the target straight line generated from the box holes or frame guide plates, and guiding the truck to the target parking position according to the movement deviation distance; and in step S116, when the spreader of the quay crane arrives directly above the container or frame, adjusting the spreader configuration according to the judged single/double containers and carrying out the container pick-up or set-down operation.
Compared with the prior art, in the machine-vision-based quay crane control method provided by this embodiment, the initial positioning of the truck extends the distance range over which a truck can be guided within the quay crane, while the detection and localization of small targets such as box holes and guide plates improves the positioning accuracy of internal trucks, thereby achieving their precise guided positioning. Single versus double containers are determined by jointly considering the number of containers, the box holes, and the text on the upper surface of the containers; this joint judgment avoids misjudgments caused by relying on a single source of information and effectively improves the accuracy of single/double container discrimination. The operating efficiency of the quay crane's pick-up and set-down operations is therefore improved.
Hereinafter, steps S102 to S116 of the machine-vision-based quay crane control method will be described in detail with reference to Fig. 1.
First, a first camera, a second camera, a third camera, a fourth camera, a fifth camera and a sixth camera are installed at the isolation strips on opposite sides of the lane on the crossbeam of the quay crane. The first to fourth cameras capture head images of the truck to confirm the vehicle's identity and direction of movement; the fifth and sixth cameras capture body images of the truck, which are used to calculate the truck's initial positioning deviation distance and movement deviation distance. The captured body and head images are stored in a database.
In step S102, the target parking position of the truck and the height estimation for a truck loaded with containers are calibrated. Specifically, this further includes: pre-collecting a body image of the truck at the target parking position and recognizing it, so as to obtain the image coordinates of the container and its box holes, or of the frame and its guide plates, at the target parking position; and using the box-hole or frame guide plate image coordinates to generate a target straight line L and to estimate the height-related pixel distance factors D and E.
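As an illustration, the target straight line L can be represented by the parameters A, B, C of the implicit line equation A·x + B·y + C = 0 through the two calibrated box-hole (or guide plate) centers. The following minimal sketch is an assumption about the representation — the patent only states that a line is generated from the calibrated coordinates, without fixing its parameterization.

```python
def line_through_points(p1, p2):
    """Return (A, B, C) such that A*x + B*y + C = 0 passes through p1 and p2.

    p1, p2 would be the image coordinates of the two box-hole (or frame
    guide plate) centers captured at the calibrated target parking position.
    """
    (x1, y1), (x2, y2) = p1, p2
    A = y2 - y1
    B = x1 - x2
    C = x2 * y1 - x1 * y2
    return A, B, C
```

The resulting (A, B, C) triple is what the movement deviation formula in step S114 consumes.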
After the calibration, the method further includes: obtaining multiple historical head images from the database and annotating the truck head and QR code in each; building a third neural network, Yolov5, and training it on the annotated historical head images to obtain a third target detection model; collecting the current head image in real time and using the third target detection model to recognize the truck head and the QR code pasted on it, thereby confirming the truck's identity code; and connecting, via the network and according to the identity code, to the data receiving unit in the corresponding truck's cab.
In step S104, the first target detection model is obtained and used to identify the target rectangular region of the container or frame in the body image, and the initial positioning deviation distance is calculated based on the target rectangular region and the target parking position, wherein the target rectangular region includes the minimum bounding rectangle. Specifically, obtaining and using the first target detection model further includes: obtaining multiple historical body images from the database and annotating the containers or frames in them; building a first neural network, Yolov5, and training it on the annotated historical body images to obtain the first target detection model; and collecting the current body image in real time and using the first target detection model to identify the target rectangular region of the container or frame in it for the pick-up/set-down decision, wherein whether a container is 20 ft or not is determined from the size of its target rectangular region, and the target rectangular regions of two 20-ft containers on the same truck are fused to form a fused region. Calculating the initial positioning deviation distance based on the target rectangular region and the target parking position further includes: calculating the truck's initial positioning deviation distance from the identified target rectangular region in the current body image and the pre-acquired target parking position; and transmitting the deviation distance to the data receiving unit via the network and displaying it on the LED screen to guide the driver to adjust the truck's position, wherein the initial positioning deviation distance is calculated by the following formula:
offset = D(y - y0)
where y is the ordinate of the current vehicle region in the image, y0 is the ordinate of the pre-collected target parking position, and D is the height-related actual pixel distance factor in mm/pixel; in other words, D differs depending on whether the container height class is 2.4 m, 2.6 m or 2.9 m.
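The initial deviation computation can be sketched directly from the formula. The per-height-class values of D below are purely illustrative placeholders; real values would come from camera calibration.

```python
# Illustrative placeholders only: actual mm/pixel factors come from
# calibration, one per container height class (2.4 m, 2.6 m, 2.9 m).
D_BY_HEIGHT = {2.4: 4.8, 2.6: 5.1, 2.9: 5.6}

def initial_offset_mm(y, y0, D):
    """Initial positioning deviation offset = D * (y - y0), in millimetres.

    y  -- ordinate (pixels) of the detected container/frame region
    y0 -- ordinate of the calibrated target parking position
    D  -- height-dependent pixel distance factor (mm/pixel)
    """
    return D * (y - y0)
```

A positive result would mean the truck has overshot the target ordinate, a negative one that it has not yet reached it (sign convention assumed).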
In step S106, the target rectangular region of the container or frame in the body image is cropped to generate a body sub-image. Specifically, this further includes cropping the historical or current body image into a historical or current body sub-image, wherein: when the target rectangular region is a container region, the body sub-image includes a first upper sub-region, a middle sub-region and a first lower sub-region, box-hole and text detection is performed on the middle sub-region, and box-hole detection is performed on the first upper and first lower sub-region images; and when the target rectangular region is a frame region, the body sub-image includes a second upper sub-region and a second lower sub-region, on both of which frame guide plate detection is performed. Specifically, when the Y coordinate of the center point of the detected rectangular region is less than 1/3 of the image height, the cropping ratios are 0.15, 0.4 and 0.45; when the Y coordinate is between 1/3 and 2/3, they are 0.25, 0.4 and 0.35; and when it is greater than 2/3, they are 0.35, 0.4 and 0.25. Since the middle sub-region keeps a fixed image ratio and is shifted up or down from the center of the whole image as needed, this cropping scheme ensures that the box holes and text can be detected.
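The position-dependent split ratios described above can be sketched as a small selection function. The handling of Y coordinates falling exactly on the 1/3 or 2/3 boundary is an assumption, since the patent leaves those cases unspecified.

```python
def crop_ratios(center_y, image_h):
    """Vertical split ratios (upper, middle, lower) for the detected region.

    The middle sub-region always takes a fixed 0.4 share; the upper and
    lower shares shift with the vertical position of the detected
    rectangle's center, so the middle band tracks the container midline.
    """
    if center_y < image_h / 3:
        return 0.15, 0.40, 0.45
    elif center_y < 2 * image_h / 3:
        return 0.25, 0.40, 0.35
    else:
        return 0.35, 0.40, 0.25
```

The returned shares sum to 1.0, so multiplying each by the image height yields the pixel heights of the three sub-images to crop.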
In step S108, the second target detection model is obtained and used to identify the box-hole coordinates and text on the container, or the frame guide plate coordinates on the frame, in the body sub-image. Specifically, this further includes: annotating the box holes or frame guide plates in the historical body sub-images; building a second neural network, Yolov5, and training it on the annotated historical body sub-images to obtain the second target detection model; and using the second target detection model to identify the box holes and text, or the frame guide plates, in the current body sub-image and to obtain their coordinates.
In step S110, single versus double containers are judged based on whether the target rectangular region of the container in the body sub-image is a fused region, i.e., whether it consists of two 20-ft containers, and on the box-hole coordinates and text in the middle sub-region of the body sub-image. A total score is computed from whether the container region consists of two 20-ft containers and from the box holes and text in the middle sub-region, by the following formula:
score = weight0 × R0 + weight1 × R1 + weight2 × R2
where weight0 is the container-count weight, weight1 the text weight and weight2 the box-hole weight, and R0 indicates whether there are two containers, R1 whether text is present and R2 whether box holes are present, each taking the value 1 if present and 0 otherwise. When the total score calculated by the above formula is greater than the threshold, the vehicle-mounted containers are judged to be double; when it is less than or equal to the threshold, a single container is judged.
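A minimal sketch of the joint scoring rule follows. The weight values and the threshold are assumptions for illustration only; the patent does not publish them.

```python
def is_double_container(two_boxes, has_mid_text, has_mid_holes,
                        weights=(0.5, 0.25, 0.25), threshold=0.5):
    """Joint single/double judgment: score = sum(weight_i * R_i).

    R0 -- two fused 20-ft container regions were detected
    R1 -- text was found in the middle sub-region
    R2 -- box holes were found in the middle sub-region
    Weights and threshold are illustrative placeholders.
    """
    R = (int(two_boxes), int(has_mid_text), int(has_mid_holes))
    score = sum(w * r for w, r in zip(weights, R))
    return score > threshold
```

Because the decision pools three independent cues, a single missed detection (for example, worn text on the container roof) no longer flips the judgment on its own, which is the stated motivation for the joint rule.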
In step S112, the distance and position of the box holes or frame guide plates in the lower sub-region of the body sub-image are obtained based on the box-hole or frame guide plate coordinates, in order to estimate the height of the container. Specifically, height estimation is performed for an internal truck loaded with containers, considering the single-container height classes 2.4, 2.6 and 2.9. Since the camera position is fixed, a different container height results in a different distance to the camera and hence a different box-hole distance in the image: when the height class is 2.4 the box-hole distance in the image is smallest, and when it is 2.9 the distance is largest. The container height of a loaded truck is therefore estimated as follows: first, the position line closest to the centers of the two detected box holes is computed; then the distance d between the two box holes in the current image is compared with the distance d_k between the two box holes in the body image of the target parking position recorded during height calibration. If d − d_k > T0 the class is taken to be 2.9; if d − d_k < T1 it is taken to be 2.4; and if it lies within the range T1 to T0 it is taken to be 2.6, where T0 = 3 and T1 = 1. The height of the frame itself is fixed, so no frame height estimation is needed.
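The threshold-based height classification above can be sketched as follows, using the stated values T0 = 3 and T1 = 1 (pixel units assumed):

```python
def estimate_height_class(d, d_k, T0=3, T1=1):
    """Classify container height from the box-hole pixel distance.

    d   -- distance between the two box-hole centers in the current image
    d_k -- the same distance measured at calibration time
    Returns the height class in metres (2.4, 2.6 or 2.9).
    """
    diff = d - d_k
    if diff > T0:
        return 2.9   # taller container sits closer to the camera
    elif diff < T1:
        return 2.4
    else:
        return 2.6
```

The classifier only decides which calibrated pixel distance factor (D or E) to use downstream; it does not measure height in metres directly.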
In step S114, the movement deviation distance is calculated based on the height of the container or frame and the target straight line generated from the box holes or frame guide plates, and the truck is guided to the target parking position according to the movement deviation distance. Specifically, this further includes calculating the truck's movement deviation distance by the following formula:
offset = E × |A·x0 + B·y0 + C| / √(A² + B²)
where (x0, y0) is the midpoint of the two currently detected box holes, A, B and C are the parameters of the straight-line equation through the two pre-detected box holes, and E is the height-related actual pixel distance factor. The movement deviation distance is sent to the data receiving unit via the network and shown on the LED display to continue guiding the driver to adjust the truck's position; the positioning guidance is complete once the movement deviation distance falls below the threshold.
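The movement deviation is thus a scaled point-to-line distance from the box-hole midpoint to the calibrated target line. A minimal sketch, assuming the implicit (A, B, C) line form:

```python
import math

def movement_offset_mm(p_mid, line_abc, E):
    """Movement deviation: offset = E * |A*x0 + B*y0 + C| / sqrt(A^2 + B^2).

    p_mid    -- midpoint (x0, y0) of the two currently detected box holes
    line_abc -- (A, B, C) of the calibrated target straight line
    E        -- height-dependent pixel distance factor (mm/pixel)
    """
    (x0, y0), (A, B, C) = p_mid, line_abc
    return E * abs(A * x0 + B * y0 + C) / math.sqrt(A * A + B * B)
```

A signed variant (dropping the absolute value) could additionally encode the direction of the required correction; the patent text does not specify whether the displayed value is signed.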
In step S116, when the spreader of the quay crane arrives directly above the container or frame, the spreader configuration is adjusted according to the judged single/double containers, and the container pick-up or set-down operation is carried out.
Another specific embodiment of the present application discloses a machine-vision-based quay crane control device. Referring to Fig. 10, the device includes a video processor 1002 and a control module 1018; the video processor 1002 includes a calibration module 1004, an initial positioning module 1006, an image cropping module 1008, a recognition module 1010, a single/double container judgment module 1012, a height estimation module 1014, a movement deviation distance calculation module 1016, a data receiving unit and an LED display screen.
The cameras are used to pre-collect the body image of the truck at the target parking position and to collect body images intermittently. Referring to Fig. 2, the cameras include a first camera 201, a second camera 202, a third camera 203, a fourth camera 204, a fifth camera 205 and a sixth camera 206, installed at the isolation strips on opposite sides of the lane on the crossbeam of the quay crane. The first camera 201 to the fourth camera 204 capture head images of the truck to confirm the vehicle's identity and direction of movement; the fifth camera 205 and the sixth camera 206 capture body images of the truck, used to calculate the truck's initial positioning deviation distance and movement deviation distance.
The calibration module 1004 is configured to calibrate the target parking position of the truck and the height estimation for a truck loaded with containers. It further includes a target position calibration sub-module and a height estimation calibration sub-module. The target position calibration sub-module recognizes the body image at the target parking position and obtains the image coordinates of the container and its box holes, or of the frame and its guide plates, at that position, so as to generate the target straight line L. The height estimation calibration sub-module uses the box-hole or frame guide plate image coordinates to estimate the height-related pixel distance factors.
The initial positioning module 1006 is configured to obtain and use the first target detection model to identify the target rectangular region of the container or frame in the body image, and to calculate the initial positioning deviation distance based on the target rectangular region and the target parking position, wherein the target rectangular region includes the minimum bounding rectangle. The initial positioning module 1006 includes a first annotation sub-module, the first target detection model and a target rectangle generation sub-module. The first annotation sub-module obtains multiple historical body images from the database and annotates the containers or frames in them. The first target detection model is obtained by building a first neural network, Yolov5, and training it on the annotated historical body images. The target rectangle generation sub-module collects the current body image in real time and uses the first target detection model to identify the target rectangular region of the container or frame in it. The initial positioning module 1006 calculates the truck's initial positioning deviation distance from the identified target rectangular region in the current body image and the pre-acquired target parking position by the following formula:
offset = D(y - y0)
where y is the ordinate of the current vehicle region in the image, y0 is the ordinate of the pre-collected target parking position, and D is the height-related actual pixel distance factor.
The image cropping module 1008 is configured to crop the target rectangular region of the container or frame detected in the body image to generate a body sub-image. It crops the historical or current body image into a historical or current body sub-image, wherein: when the target rectangular region is a container region, the body sub-image includes a first upper sub-region, a middle sub-region and a first lower sub-region, box-hole and text detection is performed on the middle sub-region, and box-hole detection is performed on the first upper and first lower sub-region images; and when the target rectangular region is a frame region, the body sub-image includes a second upper sub-region and a second lower sub-region, on both of which frame guide plate detection is performed.
The recognition module 1010 is configured to obtain and use the second target detection model to identify the box-hole coordinates and text on the container, or the frame guide plate coordinates on the frame, in the body sub-image. The recognition module 1010 includes a third annotation sub-module, the third target detection model and a truck head and identity confirmation sub-module. The third annotation sub-module obtains multiple historical head images from the database and annotates the truck head and QR code in them. The third target detection model is obtained by building a third neural network, Yolov5, and training it on the annotated historical head images. The truck head and identity confirmation sub-module collects the current head image in real time and uses the third target detection model to recognize the truck head and the QR code pasted on it, and to confirm the truck's identity code. In addition, the recognition module 1010 includes a second annotation sub-module, the second target detection model and a box-hole and frame guide plate recognition sub-module. The second annotation sub-module annotates the box holes or frame guide plates in the historical body sub-images; the second target detection model is obtained by building a second neural network, Yolov5, and training it on the annotated historical body sub-images; and the box-hole and frame guide plate recognition sub-module uses the second target detection model to identify the box holes or frame guide plates in the current body sub-image and to obtain their coordinates.
The single/double container judgment module 1012 is configured to judge single versus double containers based on whether the target rectangular region of the container in the body sub-image is a fused region and on the box-hole coordinates and text in the middle sub-region of the body sub-image.
The height estimation module 1014 is configured to obtain the distance and position of the box holes or frame guide plates in the lower sub-region of the body sub-image based on the box-hole or frame guide plate coordinates, in order to estimate the height of the container.
The movement deviation distance calculation module 1016 is configured to calculate the movement deviation distance based on the height of the container or frame and the target straight line generated from the box holes or frame guide plates, so as to guide the truck to the target parking position according to the movement deviation distance.
The data receiving unit is located in the truck's cab and is connected to the video processor via the network according to the truck's identity code. It receives the truck's initial positioning deviation distance and the movement deviation distance over the network. The LED display screen, located in the truck's cab and communicatively connected to the data receiving unit, displays the initial positioning deviation distance to guide the driver to adjust the truck's position, and then displays the movement deviation distance to continue guiding the driver until the movement deviation distance falls below the threshold and the positioning guidance is complete.
控制模块1018,用于在岸桥的吊具到达集装箱或车架正上方时,根据判断的单双箱调整岸桥的吊具形态,进行抓箱或放箱作业。具体地,移动偏差距离计算模块用于通过以下公式计算集卡的移动偏差距离进一步包括:The control module 1018 is used to adjust the shape of the spreader of the quay crane according to the judged single or double container when the spreader of the quay crane arrives directly above the container or the frame, and carry out the operation of grabbing or releasing the container. Specifically, the movement deviation distance calculation module is used to calculate the movement deviation distance of the collection card through the following formula and further includes:
offset = E · |A·x₀ + B·y₀ + C| / √(A² + B²)
where (x₀, y₀) is the midpoint of the two currently detected box holes, A, B, and C are the parameters of the straight-line equation fitted to the pre-detected box holes, and E is the height-dependent factor converting pixels to actual distance.
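Read this way, the movement deviation is the perpendicular distance from the box-hole midpoint to the calibrated target line, scaled to a physical distance. A minimal Python sketch under that reading (the function name and sample numbers are illustrative, not from the application):

```python
import math

def movement_offset(x0, y0, A, B, C, E):
    """Distance from the midpoint (x0, y0) of the two detected box holes
    to the calibrated target line A*x + B*y + C = 0, converted to a
    physical distance by the height-dependent factor E (mm/pixel)."""
    pixel_dist = abs(A * x0 + B * y0 + C) / math.sqrt(A**2 + B**2)
    return E * pixel_dist

# e.g. target line y = 500 (A=0, B=1, C=-500), midpoint at y=520, E=4 mm/pixel
print(movement_offset(300, 520, 0.0, 1.0, -500.0, 4.0))  # → 80.0
```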
Hereinafter, the machine-vision-based quay crane control method is described in detail by way of specific examples with reference to FIG. 1 to FIG. 9.
The technical problem addressed by the present application is to extend guided positioning of the internal container truck to longer distances through an initial positioning stage, i.e., to enlarge the positioning range, while performing effective height estimation of the loaded truck and detection of features common to the frame and the container (guide plates or box holes) so as to position the truck precisely. The truck head is also identified to determine the direction of travel, allowing the driver to be guided more effectively. In addition, image acquisition after initial positioning reliably captures the middle image region of the container, and the container detection results, the text detected on the container, and the number of and distance between box holes are combined to jointly judge single versus double containers, substantially improving the accuracy of that judgment.
The present application provides a machine-vision-based positioning system for internal container trucks under a quay crane and a single/double-container discrimination method. Intelligent video analysis is used to identify the container region, box holes, text, frame, and frame guide plates on the internal truck, and this information is analyzed to position the truck under the quay crane and to judge single versus double containers. In addition, a QR code affixed to the truck head is recognized from the video data to confirm the vehicle's identity and determine its direction of travel. Referring to FIG. 4, the specific steps of the method are as follows:
1. Referring to FIG. 2, six cameras are mounted on the quay crane beam: four (numbered 201, 202, 203, 204) capture the head of the internal truck, and the other two (numbered 205, 206) capture the truck body. The head cameras confirm vehicle identity and direction of movement; the body cameras are mainly responsible for computing the truck's positioning deviation distance and judging single versus double containers.
An LED display and a data receiving unit are installed in the cab of the internal truck to show the truck's current deviation distance and deviation direction.
2. Image data of trucks carrying different container types, or empty frames, parked at the various target parking positions are collected in advance and analyzed, to obtain for each target position the image coordinates of the container and box holes (or of the frame and frame guide plates), together with the actual distance represented by each pixel at the corresponding height, A mm/pixel and E mm/pixel.
3. Image data of a loaded internal truck driving slowly through the fields of view of cameras 205 and 206 are collected intermittently in advance, and the Euclidean distance between the image coordinates of the two camera-side box holes is calculated at different movement intervals; this serves as the basis for subsequent height estimation of loaded trucks.
4. When a vehicle enters the fields of view of cameras 205 and 206, the target recognition algorithm YoloV5 first identifies the container or empty-frame region on the internal truck. The size of the identified container region is used to distinguish 20-ft from non-20-ft containers, and, subject to certain conditions, two 20-ft boxes on the same truck are fused into one region. At the same time, the identified category (container or frame) determines whether the operation is container pick-up or set-down. The SORT (Simple Online and Realtime Tracking) method is then used to track the truck region.
5. Based on the tracking results, successive frames are compared iteratively: when the Euclidean distance between the geometric centers of the recognized regions in consecutive frames stays below a threshold for more than a set number of frames, the vehicle is judged to have stopped, and the current vehicle is determined as the working vehicle together with its working lane.
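The stop judgment in step 5 amounts to a per-frame counter over the tracked region's center. A minimal sketch, with illustrative thresholds and tracking data (none of these values are from the application):

```python
import math

def update_stop_counter(prev_center, curr_center, counter,
                        dist_thresh=5.0, frame_thresh=10):
    """Increment the consecutive-still-frame counter when the geometric
    centers of the tracked region in two successive frames are closer
    than dist_thresh pixels; reset it otherwise.  The truck is judged
    stopped once the counter reaches frame_thresh."""
    d = math.dist(prev_center, curr_center)
    counter = counter + 1 if d < dist_thresh else 0
    return counter, counter >= frame_thresh

counter, stopped = 0, False
centers = [(100 + i * 0.2, 200) for i in range(15)]  # a nearly static track
for prev, curr in zip(centers, centers[1:]):
    counter, stopped = update_stop_counter(prev, curr, counter)
print(stopped)  # → True
```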
6. The head-recognition cameras identify the head of the internal truck and the QR code affixed to it, confirm the truck's identity number, and connect over the network to the data receiving unit in the truck cab.
7. The y coordinate of the center of the tracked container region is compared with the target position coordinate obtained in advance in step 2, and the truck's initial positioning deviation distance is calculated from the difference as follows:
offset = D·(y − y₀)
where y is the image ordinate of the current vehicle region, y₀ is the pre-collected target position, and D is the factor converting pixels to actual distance.
The result is sent over the network to the corresponding driver's data receiving unit and shown on the LED display, guiding the driver to adjust the truck's position.
8. After initial positioning is complete, the detected container or frame region of the internal truck is cropped from the image. For a truck carrying containers, the cropped region is split into three sub-region images: box holes and text are detected in the middle sub-region, while only box holes are detected in the other two. For a frame region, the image is cropped into two sub-regions, and frame guide plates are detected in both.
9. Single versus double containers are judged according to whether the currently loaded container region is a fused region (i.e., two 20-ft boxes), combined with the box-hole and text detections in the middle sub-region, as follows:
score = weight₀·R₀ + weight₁·R₁ + weight₂·R₂
where weight₀ is the container-count weight (0.4), weight₁ the text weight (0.2), and weight₂ the box-hole weight (0.4); R₀ indicates whether two containers are present, R₁ whether text is present, and R₂ whether box holes are present, each taking the value 1 if present and 0 otherwise. When the total score exceeds a threshold (0.6), a double container is determined; otherwise a single container.
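The weighted vote can be sketched directly from the stated weights and threshold (the function name is illustrative):

```python
def single_double_score(two_boxes, has_text, has_holes,
                        w=(0.4, 0.2, 0.4), thresh=0.6):
    """Weighted vote over three cues: R0 = two fused 20-ft regions,
    R1 = text detected in the middle sub-region, R2 = box holes
    detected in the middle sub-region.  score > thresh → double box."""
    r = (int(two_boxes), int(has_text), int(has_holes))
    score = sum(wi * ri for wi, ri in zip(w, r))
    return score, "double" if score > thresh else "single"

score, verdict = single_double_score(True, True, True)
print(verdict)  # → double
print(single_double_score(False, True, False)[1])  # → single
```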
10. The total height of the on-board container is estimated from the distance and position of the box holes in the lower sub-region. Frames are assumed to all sit at the same height, so no height estimation is needed for them.
11. The movement deviation distance of the internal truck is then calculated precisely from the height information and the target straight line L generated from the target box holes of the corresponding lane:
offset = E · |A·x₀ + B·y₀ + C| / √(A² + B²)
where (x₀, y₀) is the midpoint of the two currently detected box holes, A, B, and C are the parameters of the straight-line equation fitted to the pre-detected box holes, and E is the height-dependent pixel distance factor.
The result is sent over the network to the corresponding driver's data receiving unit and shown on the LED display, guiding the driver to adjust the truck's position.
12. Step 11 is iterated as the driver keeps adjusting; when the deviation distance offset falls below a threshold, positioning guidance of the internal truck is complete.
The specific embodiment of the present application can be divided into three parts:
1. Calibration of the target parking position of the internal truck and of the total-height estimate of a loaded truck.
(1) Internal trucks loaded with four different container types (single 20 ft, double 20 ft, 40 ft, 45 ft) at three different height levels (2.4 m, 2.6 m, 2.9 m) are parked at the target position of each lane, i.e., the position where the spreader can accurately pick up the container (see FIG. 3). The center of the container's rectangular region and the image coordinates of the corresponding box holes are then saved for each combination of container type, container height, and lane. Likewise, empty-frame trucks ready to receive different container types (front 20 ft, rear 20 ft, middle 20 ft, 40 ft, 45 ft) are parked at the target position of each lane, i.e., the position where the spreader can accurately set down the container, and the center of the frame's rectangular region and the image coordinates of the corresponding frame guide plates are saved for each set-down position, container type, and lane. The box-hole size is used to estimate the pixel distance A mm/pixel; the same operation is performed for the empty frame, generating the target straight line L and the estimated pixel distance factor E mm/pixel.
(2) A truck loaded with a 40-ft container at the 2.6 m height level is driven slowly through each lane, image data are collected at intervals, and the Euclidean distance between the two front box holes is saved for the truck at different positions in the same lane, as the reference for subsequently judging the height of a loaded truck (see FIG. 5).
2. Internal truck guidance and single/double-container discrimination: the current truck's identity number is confirmed from real-time video data while the truck's deviation is calculated; the guidance information is sent over the network to the truck cab and shown on the LED display; meanwhile, single/double containers on the truck are detected and identified, and the result is sent to the quay crane control system.
(1) Two pre-trained detection models are used: Yolov5-0 (whose output covers two classes, on-truck container and frame) and Yolov5-1 (whose output covers three classes, box hole, frame guide plate, and text). Yolov5-0 first performs target recognition; the recognized class determines whether the operation is pick-up or set-down, and the rectangle size determines whether the container is 20 ft or not. Subject to certain constraints, 20-ft boxes are then fused: if the width and height of the circumscribed rectangle of any two rectangular regions in the image fall within a given range, the two regions are merged into one. The SORT tracking algorithm then tracks the target; when the tracking distance between successive frames remains below a threshold for more than a set number of frames, the vehicle is judged to have stopped and is taken as the current working vehicle, and the current working lane is determined from the center point of the region.
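The 20-ft fusion rule described above — merge two detections when their common circumscribed rectangle stays within an expected size range — can be sketched as follows; the pixel bounds are illustrative assumptions, not values from the application:

```python
def fuse_20ft_boxes(r1, r2, max_w=900, max_h=400):
    """r1, r2: (x1, y1, x2, y2) rectangles of two detected 20-ft
    containers.  If their common circumscribed rectangle stays within
    the width/height bounds expected for a back-to-back 20-ft pair,
    return the fused rectangle; otherwise return None."""
    x1, y1 = min(r1[0], r2[0]), min(r1[1], r2[1])
    x2, y2 = max(r1[2], r2[2]), max(r1[3], r2[3])
    if (x2 - x1) <= max_w and (y2 - y1) <= max_h:
        return (x1, y1, x2, y2)
    return None

print(fuse_20ft_boxes((100, 50, 450, 250), (470, 55, 820, 255)))
# → (100, 50, 820, 255)
```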
(2) After calibration, referring to FIG. 6, the pre-trained truck-head detection model Yolov5-2 and the QR-code localization and recognition algorithm AprilTag confirm the truck's identity code and direction of travel, after which a network connection is made to the data receiving unit in that truck's cab.
(3) The initial positioning deviation is calculated from the center coordinate y of the target rectangular region detected by Yolov5-0 and the pre-acquired target position coordinate y₀ saved in step 1, and is sent to the data receiving unit in the truck cab, which displays the deviation value and a schematic diagram. When the initial positioning deviation offset falls below a threshold, initial positioning of the truck is complete.
(4) The container or frame rectangular region is cropped from the image (see FIG. 7 and FIG. 8), and the cropped image is recognized with the YoloV5-1 detection algorithm. This increases the effective resolution of the recognition sample and thus improves the recognition of small targets such as box holes, text, and guide plates. Yolo V5 is a deep-learning neural network object detection algorithm: a model is trained offline on labeled samples and then used to recognize the specified targets in images acquired in real time.
(5) If the current operation is container pick-up, single/double-container detection is performed as follows:
First, the detected rectangular region is divided into three sub-regions, which are processed in a batch by Yolov5-1 to obtain detection results.
Second, the detections in the middle sub-region are analyzed. Box-hole analysis: if fewer than 2 box holes are detected, no box holes are considered detected; otherwise the detected box holes are sorted by X coordinate, and if, after sorting, the smallest X values lie within a certain threshold range of each other, box holes are considered detected. Text analysis: if text is detected in the middle sub-region and the height and width of its bounding rectangle fall within a given range, text on the container's upper surface is considered detected.
Finally, the fusion information, box-hole information, and text information are combined to judge single versus double containers, as follows:
score = weight₀·R₀ + weight₁·R₁ + weight₂·R₂
In the present application, the weight parameters weight₀, weight₁, and weight₂ are 0.4, 0.2, and 0.4 respectively; if the total score exceeds 0.6 the result is judged a double container, otherwise a single container.
If the current operation is container set-down, the rectangular region is cropped into two sub-regions, and Yolov5-1 detects the guide plates to obtain their coordinates.
(6) False detections are removed from the box-hole or guide-plate results of the upper and lower sub-regions of the container or frame by computing, for the vector formed by any two detected points, its modulus and its angle to the horizontal, and checking whether these fall within a given range; points outside the range are discarded as false detections.
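One way to realize the described rejection — keep a point only if its vector to some other detected point has a plausible modulus and a near-horizontal angle, since box holes or guide plates on the same truck lie roughly on a horizontal line — is sketched below; the thresholds are illustrative assumptions:

```python
import math

def reject_outliers(points, min_len=50, max_len=800, max_angle_deg=10):
    """Keep a point only if its vector to at least one other point has
    a modulus within [min_len, max_len] pixels and an angle to the
    horizontal no larger than max_angle_deg degrees."""
    kept = set()
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i == j:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            length = math.hypot(dx, dy)
            angle = abs(math.degrees(math.atan2(dy, dx)))
            angle = min(angle, 180 - angle)  # direction-independent
            if min_len <= length <= max_len and angle <= max_angle_deg:
                kept.add(i)
                kept.add(j)
    return [points[k] for k in sorted(kept)]

pts = [(100, 300), (400, 305), (150, 80)]  # last point is a misdetection
print(reject_outliers(pts))  # → [(100, 300), (400, 305)]
```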
(7) The height of a loaded internal truck is estimated, considering the standard container height levels of 2.4 m, 2.6 m, and 2.9 m. First, the position line closest to the center points of the two detected box holes is determined; then the current distance d between the two box holes is compared with the distance d_k between the two box holes recorded during height calibration: if d − d_k > T₀ the level is judged 2.9; if d − d_k < T₁ the level is judged 2.4; if the difference lies in the range T₁ to T₀ the level is judged 2.6, with T₀ = 3 and T₁ = 1.
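A minimal sketch of this height-level decision, using the thresholds T₀ = 3 and T₁ = 1 from the text (the sample pixel distances are illustrative):

```python
def estimate_height_level(d, d_k, T0=3, T1=1):
    """Compare the current pixel distance d between the two near-side
    box holes with the calibrated distance d_k (recorded for a 2.6 m
    container at the same position): a larger apparent distance means
    the box top is closer to the camera, hence a taller container."""
    diff = d - d_k
    if diff > T0:
        return 2.9
    if diff < T1:
        return 2.4
    return 2.6

print(estimate_height_level(205.0, 200.0))  # → 2.9
print(estimate_height_level(202.0, 200.0))  # → 2.6
print(estimate_height_level(200.2, 200.0))  # → 2.4
```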
(8) Fine positioning guidance of the internal truck is performed via the box holes or frame guide plates (see FIG. 3). The midpoint P of the two detected box holes or guide plates is obtained; the target straight line L is then retrieved according to the pick-up/set-down information, lane information, height information, and container-type information; the perpendicular distance from point P to the line L is computed and multiplied by E, yielding the truck's actual deviation distance. This deviation is sent to the cab of the internal truck and displayed.
Through continuous iteration of step 8, guided positioning of the internal truck is completed.
3. The internal truck is guided into position according to the actual deviation distance, and when the quay crane's spreader arrives directly above the container or the frame, the quay crane control system adjusts the spreader configuration according to the judged single/double containers and performs the pick-up or set-down operation.
Referring to FIG. 9, the industrial control computer stores the AI algorithms; its main role is to acquire video from cameras 1-6 and to run the internal-truck guidance-and-positioning and single/double-container discrimination software. The PLC mainly receives the single/double-container judgment from the video processor, determining whether a single or a double container is present. The video processor receives quay crane operation information mainly to judge more accurately whether an internal truck is currently working under the crane and whether the guidance operation and single/double-container discrimination need to be run.
The term "video processor" covers all kinds of apparatus, devices, and machines for processing data; for example, a video processor may comprise a programmable processor, a computer, multiple processors, or multiple computers. In addition to hardware, the apparatus may include code that creates an execution environment for the computer program in question, e.g., code constituting processor firmware, a protocol stack, a database management system, an operating system, or a runtime environment, or a combination of one or more of these.
The methods and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs that operate on surveillance video and generate target detection results.
Typically, a computer also includes, or is operably coupled to, one or more mass storage devices (e.g., magnetic disks, magneto-optical disks, or optical disks) for storing historical video data and data sets, so as to receive data from the mass storage devices, transmit data to them, or both. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and storage devices, including, for example: semiconductor memory devices such as EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), and flash memory devices; magnetic disks, such as internal or removable hard disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
Those skilled in the art will appreciate that all or part of the processes of the above method embodiments can be implemented by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium such as a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The above is only a preferred embodiment of the present application, but the scope of protection of the present application is not limited thereto. Any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the scope of protection of this application.

Claims (18)

  1. A machine-vision-based quay crane control method, characterized by comprising:
    calibrating the target parking position of a container truck and the height estimation of a container truck loaded with a container;
    acquiring and using a first target detection model to identify a target rectangular region of a container or frame in a body image, and calculating an initial positioning deviation distance based on the target rectangular region and the target parking position, wherein the target rectangular region comprises a minimum circumscribed rectangle;
    cropping the target rectangular region of the container or frame in the body image to generate a body sub-image;
    acquiring and using a second target detection model to identify box-hole coordinates and text on the container, or frame-guide-plate coordinates on the frame, in the body sub-image;
    judging single versus double containers based on whether the target rectangular region of the container in the body sub-image is a fused region, and on the box-hole coordinates and text of the middle sub-region of the body sub-image;
    obtaining the distance and position of the box holes or frame guide plates in the lower sub-region image of the body sub-image, based on the box-hole coordinates or the frame-guide-plate coordinates, to estimate the height of the container;
    calculating a movement deviation distance based on the height of the container or frame and a target straight line generated from the box holes or frame guide plates, and guiding the container truck to the target parking position according to the movement deviation distance; and
    when the spreader of the quay crane arrives directly above the container or the frame, adjusting the spreader configuration of the quay crane according to the judged single/double containers, and performing a container pick-up or set-down operation.
  2. The machine-vision-based quay crane control method according to claim 1, wherein calibrating the target parking position of the container truck and the height estimation of the loaded truck further comprises:
    collecting in advance a body image of the truck at the target parking position and recognizing the body image of the target parking position, to obtain the image coordinates of the container and corresponding box holes, or of the frame and corresponding frame guide plates, of the truck at the target parking position;
    generating a target straight line from the box-hole image coordinates or the frame-guide-plate image coordinates and estimating a height-dependent pixel distance factor.
  3. The machine-vision-based quay crane control method according to claim 1, wherein acquiring and using the first target detection model to identify the target rectangular region of the container or frame in the body image further comprises:
    acquiring a plurality of historical body images from a database and labeling the containers or frames in the plurality of historical body images;
    building a first neural network Yolov5 and training the first neural network Yolov5 with the labeled historical body images to obtain the first target detection model; and
    acquiring the current body image in real time, and using the first target detection model to identify the target rectangular region of the container or frame in the current body image so as to judge container pick-up or set-down, wherein the size of the container's target rectangular region determines whether the container is 20 ft or not, and the target rectangular regions of two 20-ft containers on the same truck are fused to form the fused region.
  4. The machine-vision-based quay crane control method according to claim 3, wherein cropping the target rectangular region of the container or frame in the body image to generate the body sub-image further comprises:
    cropping the historical body image or the current body image into a historical body sub-image or a current body sub-image, wherein,
    when the target rectangular region is a container region, the historical body sub-image or the current body sub-image comprises a first upper sub-region, a middle sub-region, and a first lower sub-region, wherein box-hole and text detection is performed on the middle sub-region, and box-hole detection is performed on the first upper sub-region image and the first lower sub-region image; and
    when the target rectangular region is a frame region, the historical body sub-image or the current body sub-image comprises a second upper sub-region and a second lower sub-region, wherein frame-guide-plate detection is performed on the second upper sub-region and the second lower sub-region.
  5. 根据权利要求4所述的基于机器视觉的岸桥控制方法，其特征在于，获取并利用第二目标检测模型识别出所述车身子图像中的集装箱上的箱孔坐标或车架上的车架导板坐标进一步包括：The machine-vision-based quay crane control method according to claim 4, wherein acquiring and using the second target detection model to identify the box-hole coordinates on the container or the frame-guide-plate coordinates on the frame in the body sub-image further comprises:
    对所述历史车身子图像中的箱孔或车架导板进行标注；Labeling the box holes or frame guide plates in the historical body sub-images;
    建立第二神经网络Yolov5并利用标注的历史车身子图像对所述第二神经网络Yolov5进行训练以获得第二目标检测模型；以及Establishing a second neural network Yolov5 and training it with the labeled historical body sub-images to obtain the second target detection model; and
    利用所述第二目标检测模型识别出当前车身子图像中的箱孔和文字或车架导板，并获取箱孔坐标或车架导板坐标。Using the second target detection model to identify the box holes and text, or the frame guide plates, in the current body sub-image, and obtaining the box-hole coordinates or frame-guide-plate coordinates.
  6. 根据权利要求1所述的基于机器视觉的岸桥控制方法，其特征在于，在岸桥横梁上的车道相对两侧的隔离带处安装第一摄像机、第二摄像机、第三摄像机、第四摄像机、第五摄像机和第六摄像机，其中，The machine-vision-based quay crane control method according to claim 1, wherein a first camera, a second camera, a third camera, a fourth camera, a fifth camera and a sixth camera are installed at the isolation strips on opposite sides of the lane on the quay crane cross-beam, wherein,
    利用所述第一摄像机至所述第四摄像机拍摄所述集卡的车头图像,以确认车辆身份以及车辆移动方向;以及Using the first camera to the fourth camera to capture the head image of the truck to confirm the identity of the vehicle and the moving direction of the vehicle; and
    利用所述第五摄像机和所述第六摄像机拍摄所述集卡的车身图像,以计算所述集卡的初定位偏差距离和移动偏差距离。Using the fifth camera and the sixth camera to shoot the body image of the truck to calculate the initial positioning deviation distance and the movement deviation distance of the truck.
  7. 根据权利要求6所述的基于机器视觉的岸桥控制方法，其特征在于，在对集卡的目标停车位置以及装载集装箱的集卡高度估计进行标定之后进一步包括：The machine-vision-based quay crane control method according to claim 6, further comprising, after calibrating the target parking position of the truck and the height estimation of the container-loaded truck:
    从数据库中获取多幅历史车头图像并对所述历史车头图像中的车头及二维码进行标注；Acquiring a plurality of historical truck-head images from the database and labeling the truck head and the QR code in the historical truck-head images;
    建立第三神经网络Yolov5，并利用标注的历史车头图像对所述第三神经网络Yolov5进行训练以获得第三目标检测模型；Establishing a third neural network Yolov5 and training it with the labeled historical truck-head images to obtain a third target detection model;
    实时采集当前车头图像,并利用第三目标检测模型识别出当前车头图像中的集卡车头和所述集卡车头上粘贴的二维码并确认集卡身份编码和行驶方向;以及Collecting the current head image in real time, and using the third target detection model to identify the truck head in the current head image and the QR code pasted on the head of the truck, and confirm the identity code and driving direction of the truck; and
    通过网络根据所述集卡身份编码连接对应集卡驾驶室内的数据接收单元。Connecting, via the network and according to the truck identity code, to the data receiving unit in the cab of the corresponding truck.
  8. 根据权利要求7所述的基于机器视觉的岸桥控制方法，其特征在于，基于所述目标矩形区域和所述目标停车位置计算初定位偏差距离进一步包括：The machine-vision-based quay crane control method according to claim 7, wherein calculating the initial positioning deviation distance based on the target rectangular area and the target parking position further comprises:
    基于识别出的所述当前车身图像中的集装箱或车架的目标矩形区域与预先获取的目标停车位置,计算集卡的初定位偏差距离;以及Based on the recognized target rectangular area of the container or vehicle frame in the current body image and the pre-acquired target parking position, calculate the initial positioning deviation distance of the truck; and
    通过所述网络将所述集卡的初定位偏差距离传输至数据接收单元，并经由LED显示屏进行显示，以引导集卡司机调整集卡位置，其中，通过以下公式计算所述集卡的初定位偏差距离：Transmitting the initial positioning deviation distance of the truck to the data receiving unit via the network and displaying it on the LED display screen to guide the truck driver to adjust the truck position, wherein the initial positioning deviation distance of the truck is calculated by the following formula:
    offset = D(y - y₀)
    其中，y表示当前车辆区域图像的纵坐标，y₀表示预先采集的目标停车位置的纵坐标，D表示与高度相关的实际像素距离因子。Here, y denotes the ordinate of the current vehicle region in the image, y₀ denotes the ordinate of the pre-collected target parking position, and D denotes the height-dependent actual pixel-distance factor.
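The initial-offset formula in claim 8 can be sketched as a one-liner; the factor D is assumed to come from the height calibration described earlier:

```python
def initial_offset(y, y0, d):
    """offset = D * (y - y0): signed row gap between the current vehicle
    region and the calibrated target parking position, scaled to a physical
    distance by the height-dependent pixel-distance factor D."""
    return d * (y - y0)
```

The sign tells the driver which way to move along the lane; the magnitude is the remaining distance shown on the LED display.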
  9. 根据权利要求8所述的基于机器视觉的岸桥控制方法，其特征在于，基于所述集装箱或车架的高度和由箱孔或车架导板生成的目标直线计算移动偏差距离，根据所述移动偏差距离将所述集卡引导至所述目标停车位置进一步包括：The machine-vision-based quay crane control method according to claim 8, wherein calculating the movement deviation distance based on the height of the container or frame and the target line generated from the box holes or frame guide plates, and guiding the truck to the target parking position according to the movement deviation distance, further comprises:
    通过以下公式计算所述集卡的移动偏差距离进一步包括：Calculating the movement deviation distance of the truck by the following formula further comprises:
    offset = E · |A·x₀ + B·y₀ + C| / √(A² + B²)
    其中,x 0,y 0分别当前检测到的两个箱孔的中点,A,B,C表示预先检测到两箱孔的直线方程参数,E表示与高度相关的实际像素距离因子; Among them, x 0 , y 0 are the midpoints of the two currently detected box holes, A, B, and C represent the linear equation parameters of the two box holes detected in advance, and E represents the actual pixel distance factor related to the height;
    通过所述网络将所述移动偏差距离发送至所述数据接收单元并在所述LED显示器上显示所述移动偏差距离，以继续引导司机调整集卡位置，直到所述移动偏差距离小于阈值时完成集卡定位引导。Sending the movement deviation distance to the data receiving unit via the network and displaying it on the LED display to continue guiding the driver to adjust the truck position, until the movement deviation distance is less than a threshold, at which point truck positioning guidance is complete.
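The publication's formula image (appb-100001) is not reproduced in this text. Given the parameters defined in claim 9 — midpoint (x₀, y₀), line parameters A, B, C, and height factor E — the natural reading is a scaled point-to-line distance, sketched here as an assumption:

```python
import math

def moving_offset(x0, y0, a, b, c, e):
    """Assumed form E * |A*x0 + B*y0 + C| / sqrt(A^2 + B^2): distance from
    the midpoint (x0, y0) of the two detected box holes to the calibrated
    target line A*x + B*y + C = 0, scaled by the height-dependent factor E."""
    return e * abs(a * x0 + b * y0 + c) / math.hypot(a, b)
```

Guidance loops on this value until it drops below the threshold mentioned in the claim.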
  10. 一种基于机器视觉的岸桥控制装置，其特征在于，包括：视频处理器和控制模块，所述视频处理器包括标定模块、初定位模块、图像裁剪模块、识别模块、单双箱判断模块、高度估计模块和移动偏差距离计算模块，其中，A machine-vision-based quay crane control apparatus, comprising a video processor and a control module, the video processor comprising a calibration module, an initial positioning module, an image cropping module, a recognition module, a single/double-container judgment module, a height estimation module and a movement deviation distance calculation module, wherein,
    所述标定模块,用于对集卡的目标停车位置以及装载集装箱的集卡高度估计进行标定;The calibration module is used to calibrate the target parking position of the collection truck and the estimated height of the collection truck loaded with containers;
    所述初定位模块，用于获取并利用第一目标检测模型识别出车身图像中的集装箱或车架的目标矩形区域，并且基于所述目标矩形区域和所述目标停车位置计算初定位偏差距离，其中，所述目标矩形区域包括最小外接矩形；The initial positioning module is configured to acquire and use the first target detection model to identify the target rectangular area of the container or frame in the body image, and to calculate the initial positioning deviation distance based on the target rectangular area and the target parking position, wherein the target rectangular area comprises a minimum bounding rectangle;
    所述图像裁剪模块,用于对所述车身图像中检测到的集装箱或车架的目标矩形区域进行图像裁剪以生成车身子图像;The image cropping module is used to crop the target rectangular area of the container or vehicle frame detected in the vehicle body image to generate a body sub-image;
    所述识别模块,用于获取并利用第二目标检测模型识别出所述车身子图像中的集装箱上的箱孔坐标和文字或车架上的车架导板坐标;The recognition module is used to obtain and use the second target detection model to recognize the box hole coordinates and text on the container in the body sub-image or the frame guide plate coordinates on the frame;
    所述单双箱判断模块，用于基于所述车身子图像中的集装箱的目标矩形区域是否是融合区域、所述车身子图像中的中间子区域的箱孔坐标和文字，进行单双箱判定；The single/double-container judgment module is configured to perform single/double-container determination based on whether the target rectangular area of the container in the body sub-image is a fused region, and on the box-hole coordinates and text of the middle sub-region in the body sub-image;
    所述高度估计模块，用于基于所述箱孔坐标或所述车架导板坐标获取所述车身子图像中的下部子区域图像的箱孔或车架导板的距离和位置以估计所述集装箱的高度；The height estimation module is configured to obtain, based on the box-hole coordinates or the frame-guide-plate coordinates, the distance and position of the box holes or frame guide plates in the lower sub-region image of the body sub-image, so as to estimate the height of the container;
    所述移动偏差距离计算模块,用于基于所述集装箱或车架的高度和由箱孔或车架导板生成的目标直线计算移动偏差距离,以根据所述移动偏差距离将所述集卡引导至所述目标停车位置;以及The movement deviation distance calculation module is used to calculate the movement deviation distance based on the height of the container or the vehicle frame and the target straight line generated by the box hole or the frame guide plate, so as to guide the collection truck to the said target parking location; and
    所述控制模块，用于在所述岸桥的吊具到达所述集装箱或所述车架正上方时，根据判断的单双箱调整所述岸桥的吊具形态，进行抓箱或放箱作业。The control module is configured to adjust the spreader configuration of the quay crane according to the determined single/double-container result when the spreader of the quay crane arrives directly above the container or the frame, so as to perform the container pick-up or set-down operation.
  11. 根据权利要求10所述的基于机器视觉的岸桥控制装置,其特征在于,包括摄像机,用于预先采集所述集卡的目标停车位置的车身图像,以及间歇地采集所述车身图像;The quay crane control device based on machine vision according to claim 10, characterized in that it includes a camera for pre-collecting the vehicle body image of the target parking position of the truck, and intermittently collecting the vehicle body image;
    所述标定模块还包括:目标位置标定子模块和高度估计标定子模块,其中,The calibration module also includes: a target position calibration sub-module and a height estimation calibration sub-module, wherein,
    所述目标位置标定子模块，用于对所述目标停车位置的车身图像进行识别，并获取所述集卡在所述目标停车位置处的集装箱和对应箱孔图像坐标或者车架和对应车架导板图像坐标，以生成目标直线；以及The target position calibration sub-module is configured to recognize the body image at the target parking position and obtain the image coordinates of the container and its box holes, or of the frame and its frame guide plates, of the truck at the target parking position, so as to generate the target line; and
    所述高度估计标定子模块,利用所述箱孔图像坐标或所述车架导板图像坐标估计与高度相关的像素距离因子。The height estimation and calibration sub-module estimates a height-related pixel distance factor by using the box hole image coordinates or the frame guide plate image coordinates.
  12. 根据权利要求10所述的基于机器视觉的岸桥控制装置,其特征在于,所述初定位模块包括标注子模块、第一目标检测模型、目标矩形生成子模块,其中,The machine vision-based shore crane control device according to claim 10, wherein the initial positioning module includes a labeling submodule, a first target detection model, and a target rectangle generation submodule, wherein,
    所述第一标注子模块,用于从数据库中获取多幅历史车身图像并对所述多幅历史车身图像中的集装箱或车架进行标注;The first labeling submodule is used to acquire multiple historical vehicle body images from the database and label the containers or frames in the multiple historical vehicle body images;
    所述第一目标检测模型,用于建立第一神经网络Yolov5并利用标注的多幅历史车身图像对所述第一神经网络Yolov5进行训练以获得第一目标检测模型;以及The first target detection model is used to establish a first neural network Yolov5 and use a plurality of marked historical body images to train the first neural network Yolov5 to obtain a first target detection model; and
    所述目标矩形生成子模块，用于实时采集当前车身图像，并利用所述第一目标检测模型识别出所述当前车身图像中的集装箱或车架的目标矩形区域，以进行抓箱或放箱判定，其中，根据所述集装箱的目标矩形区域的大小判定所述集装箱是20尺还是非20尺，并将同一集卡上的两个20尺的集装箱的目标矩形区域融合以形成所述融合区域。The target rectangle generation sub-module is configured to collect the current body image in real time and use the first target detection model to identify the target rectangular area of the container or frame in the current body image for container pick-up or set-down determination, wherein the container is judged to be 20-foot or non-20-foot according to the size of its target rectangular area, and the target rectangular areas of two 20-foot containers on the same truck are fused to form the fused region.
  13. 根据权利要求12所述的基于机器视觉的岸桥控制装置，其特征在于，所述图像裁剪模块用于将所述历史车身图像或所述当前车身图像裁剪为历史车身子图像或当前车身子图像，其中，The machine-vision-based quay crane control apparatus according to claim 12, wherein the image cropping module is configured to crop the historical body image or the current body image into a historical body sub-image or a current body sub-image, wherein,
    当所述目标矩形区域是集装箱区域时，所述历史车身子图像或所述当前车身子图像包括第一上部子区域、中间子区域和第一下部子区域，其中，对所述第一中间子区域进行箱孔和文字检测，对所述第一上部子区域图像和所述第一下部子区域图像进行箱孔检测；以及When the target rectangular area is a container area, the historical body sub-image or the current body sub-image includes a first upper sub-region, a middle sub-region and a first lower sub-region, wherein box-hole and text detection is performed on the first middle sub-region, and box-hole detection is performed on the first upper sub-region image and the first lower sub-region image; and
    当所述目标矩形区域是车架区域时，所述历史车身子图像或所述当前车身子图像包括第二上部子区域和第二下部子区域，其中，对所述第二上部子区域和所述第二下部子区域进行车架导板检测。When the target rectangular area is a frame area, the historical body sub-image or the current body sub-image includes a second upper sub-region and a second lower sub-region, wherein frame-guide-plate detection is performed on the second upper sub-region and the second lower sub-region.
  14. 根据权利要求13所述的基于机器视觉的岸桥控制装置，其特征在于，所述识别模块包括第二标注子模块、第二目标检测模型和箱孔及车架导板识别子模块，其中，The machine-vision-based quay crane control apparatus according to claim 13, wherein the recognition module comprises a second labeling sub-module, a second target detection model and a box-hole and frame-guide-plate recognition sub-module, wherein,
    所述第二标注子模块，用于对所述历史车身子图像中的箱孔或车架导板进行标注；The second labeling sub-module is configured to label the box holes or frame guide plates in the historical body sub-images;
    所述第二目标检测模型,用于建立第二神经网络Yolov5并利用标注的历史车身子图像对所述第二神经网络Yolov5进行训练以获得第二目标检测模型;以及The second target detection model is used to establish a second neural network Yolov5 and use the marked historical car body images to train the second neural network Yolov5 to obtain a second target detection model; and
    所述箱孔及车架导板识别子模块,用于利用所述第二目标检测模型识别出当前车身子图像中的箱孔或车架导板,并获取箱孔坐标和车架导板坐标。The box hole and frame guide identification sub-module is used to use the second target detection model to identify box holes or frame guides in the current body sub-image, and obtain box hole coordinates and frame guide coordinates.
  15. 根据权利要求10所述的基于机器视觉的岸桥控制装置，其特征在于，所述摄像机包括第一摄像机、第二摄像机、第三摄像机、第四摄像机、第五摄像机和第六摄像机，安装在岸桥横梁上的车道相对两侧的隔离带处，其中，The machine-vision-based quay crane control apparatus according to claim 10, wherein the cameras comprise a first camera, a second camera, a third camera, a fourth camera, a fifth camera and a sixth camera installed at the isolation strips on opposite sides of the lane on the quay crane cross-beam, wherein,
    所述第一摄像机至所述第四摄像机,用于拍摄所述集卡的车头图像,以确认车辆身份以及车辆移动方向;以及The first camera to the fourth camera are used to capture the head image of the truck to confirm the identity of the vehicle and the moving direction of the vehicle; and
    所述第五摄像机和所述第六摄像机,用于拍摄所述集卡的车身图像,以计算所述集卡的初定位偏差距离和移动偏差距离。The fifth camera and the sixth camera are used to take images of the body of the collection truck to calculate the initial positioning deviation distance and the movement deviation distance of the collection truck.
  16. 根据权利要求15所述的基于机器视觉的岸桥控制装置，其特征在于，还包括数据接收单元，以及所述识别模块还包括第三标注子模块、第三目标检测模型和车头及身份确认子模块，其中，The machine-vision-based quay crane control apparatus according to claim 15, further comprising a data receiving unit, wherein the recognition module further comprises a third labeling sub-module, a third target detection model, and a truck-head and identity confirmation sub-module, wherein,
    所述第三标注子模块,用于从数据库中获取多幅历史车头图像并对所述历史车头图像中的车头及二维码进行标注;The third labeling submodule is used to acquire multiple historical vehicle front images from the database and label the vehicle front and the two-dimensional code in the historical vehicle front images;
    所述第三目标检测模型，用于建立第三神经网络Yolov5，并利用标注的历史车头图像对所述第三神经网络Yolov5进行训练以获得第三目标检测模型；The third target detection model is obtained by establishing a third neural network Yolov5 and training it with the labeled historical truck-head images;
    车头及身份确认子模块,用于实时采集当前车头图像,并利用第三目标检测 模型识别出当前车头图像中的集卡车头和所述集卡车头上粘贴的二维码并确认集卡身份编码和行驶方向;以及The vehicle head and identity confirmation sub-module is used to collect the current vehicle head image in real time, and use the third target detection model to identify the collection truck head in the current vehicle head image and the QR code pasted on the collection truck head and confirm the identity code of the collection truck and direction of travel; and
    所述数据接收单元,用于位于集卡驾驶室内并根据所述集卡身份编码通过网络与所述视频处理器连接。The data receiving unit is configured to be located in the truck cab and connect to the video processor through a network according to the ID code of the truck.
  17. 根据权利要求16所述的基于机器视觉的岸桥控制装置,其特征在于,还包括LED显示屏,The machine vision-based quay crane control device according to claim 16, further comprising an LED display screen,
    所述初定位模块用于基于识别出的所述当前车身图像中的集装箱或车架的目标矩形区域与预先获取的目标停车位置，计算集卡的初定位偏差距离，其中，通过以下公式计算所述集卡的初定位偏差距离：The initial positioning module is configured to calculate the initial positioning deviation distance of the truck based on the recognized target rectangular area of the container or frame in the current body image and the pre-acquired target parking position, wherein the initial positioning deviation distance of the truck is calculated by the following formula:
    offset = D(y - y₀)
    其中，y表示当前车辆区域图像的纵坐标，y₀表示预先采集的目标停车位置的纵坐标，D表示与高度相关的实际像素距离因子；Here, y denotes the ordinate of the current vehicle region in the image, y₀ denotes the ordinate of the pre-collected target parking position, and D denotes the height-dependent actual pixel-distance factor;
    所述数据接收单元，通过所述网络接收所述集卡的初定位偏差距离；以及The data receiving unit receives the initial positioning deviation distance of the truck via the network; and
    LED显示屏,位于所述集卡驾驶室内并与所述数据接收单元通信连接,用于显示所述初定位偏差距离,以引导集卡司机调整集卡位置。The LED display screen is located in the cab of the collection truck and communicated with the data receiving unit, and is used to display the deviation distance of the initial positioning, so as to guide the truck driver to adjust the position of the collection truck.
  18. 根据权利要求17所述的基于机器视觉的岸桥控制装置，其特征在于，所述移动偏差距离计算模块用于通过以下公式计算所述集卡的移动偏差距离：The machine-vision-based quay crane control apparatus according to claim 17, wherein the movement deviation distance calculation module is configured to calculate the movement deviation distance of the truck by the following formula:
    offset = E · |A·x₀ + B·y₀ + C| / √(A² + B²)
    其中,x 0,y 0分别当前检测到的两个箱孔的中点,A,B,C表示预先检测到两箱孔的直线方程参数,E表示与高度相关的实际像素距离因子; Among them, x 0 , y 0 are the midpoints of the two currently detected box holes, A, B, and C represent the linear equation parameters of the two box holes detected in advance, and E represents the actual pixel distance factor related to the height;
    所述数据接收单元,通过所述网络接收所述移动偏差距离;The data receiving unit receives the movement deviation distance through the network;
    所述LED显示器,用于显示所述移动偏差距离,以继续引导司机调整集卡位置,直到所述移动偏差距离小于阈值时完成集卡定位引导。The LED display is used to display the movement deviation distance to continue to guide the driver to adjust the position of the truck until the movement deviation distance is less than a threshold to complete the positioning guidance of the collection truck.
PCT/CN2022/072004 2021-10-29 2022-01-14 Container truck guidance and single/double-container identification method and apparatus based on machine vision WO2023070954A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111275349.2 2021-10-29
CN202111275349.2A CN114119741A (en) 2021-10-29 2021-10-29 Shore bridge control method and device based on machine vision

Publications (1)

Publication Number Publication Date
WO2023070954A1 true WO2023070954A1 (en) 2023-05-04

Family

ID=80379870

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072004 WO2023070954A1 (en) 2021-10-29 2022-01-14 Container truck guidance and single/double-container identification method and apparatus based on machine vision

Country Status (2)

Country Link
CN (1) CN114119741A (en)
WO (1) WO2023070954A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116484485A (en) * 2023-06-21 2023-07-25 湖南省交通规划勘察设计院有限公司 Shaft network determining method and system
CN116882433A (en) * 2023-09-07 2023-10-13 无锡维凯科技有限公司 Machine vision-based code scanning identification method and system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102568200B1 (en) * 2022-11-29 2023-08-21 (주)토탈소프트뱅크 Apparatus for guiding work position of autonomous yard tractor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104477779A (en) * 2014-12-31 2015-04-01 曹敏 System and method for alignment and safety control of trucks under bridge cranes of container wharves
WO2020133693A1 (en) * 2018-12-26 2020-07-02 上海图森未来人工智能科技有限公司 Precise parking method, apparatus and system of truck in shore-based crane area
CN112528721A (en) * 2020-04-10 2021-03-19 福建电子口岸股份有限公司 Bridge crane truck safety positioning method and system
CN113341987A (en) * 2021-06-17 2021-09-03 天津港第二集装箱码头有限公司 Automatic unmanned card collection guide system and method for shore bridge
WO2021179988A1 (en) * 2020-03-09 2021-09-16 长沙智能驾驶研究院有限公司 Three-dimensional laser-based container truck anti-smashing detection method and apparatus, and computer device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU, CAIYUN ET AL.: "Container Automatic Positioning System based on Machine Vision", AUTOMATION APPLICATION, no. 3, 25 March 2019 (2019-03-25), XP009545850 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116484485A (en) * 2023-06-21 2023-07-25 湖南省交通规划勘察设计院有限公司 Shaft network determining method and system
CN116484485B (en) * 2023-06-21 2023-08-29 湖南省交通规划勘察设计院有限公司 Shaft network determining method and system
CN116882433A (en) * 2023-09-07 2023-10-13 无锡维凯科技有限公司 Machine vision-based code scanning identification method and system
CN116882433B (en) * 2023-09-07 2023-12-08 无锡维凯科技有限公司 Machine vision-based code scanning identification method and system

Also Published As

Publication number Publication date
CN114119741A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
WO2023070954A1 (en) Container truck guidance and single/double-container identification method and apparatus based on machine vision
US10776651B2 (en) Material handling method, apparatus, and system for identification of a region-of-interest
US8379926B2 (en) Vision based real time traffic monitoring
US7336805B2 (en) Docking assistant
CN110794406B (en) Multi-source sensor data fusion system and method
EP3301612A1 (en) Barrier and guardrail detection using a single camera
CN109269478A (en) A kind of container terminal based on binocular vision bridge obstacle detection method
CN110378957B (en) Torpedo tank car visual identification and positioning method and system for metallurgical operation
CN108364466A (en) A kind of statistical method of traffic flow based on unmanned plane traffic video
CN113885532B (en) Unmanned floor truck control system of barrier is kept away to intelligence
JP2005157731A (en) Lane recognizing device and method
CN114119742A (en) Method and device for positioning container truck based on machine vision
CN117369460A (en) Intelligent inspection method and system for loosening faults of vehicle bolts
CN105740832B (en) A kind of stop line detection and distance measuring method applied to intelligent driving
CN115755888A (en) AGV obstacle detection system with multi-sensor data fusion and obstacle avoidance method
CN117115249A (en) Container lock hole automatic identification and positioning system and method
CN115880673A (en) Obstacle avoidance method and system based on computer vision
CN114355894A (en) Data processing method, robot and robot system
CN114445636A (en) Train bottom item mapping method
Malik High-quality vehicle trajectory generation from video data based on vehicle detection and description
US20210380119A1 (en) Method and system for operating a mobile robot
WO2022170633A1 (en) Rail transit vehicle collision avoidance detection method based on vision and laser ranging
CN114119496A (en) Machine vision-based shore bridge single-box and double-box detection method and device
CN116009563B (en) Unmanned robot scribing method integrating laser radar and depth camera
CN109583269A (en) A kind of detection method of vehicle trade line

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22884874

Country of ref document: EP

Kind code of ref document: A1