CN117745804A - Position relation determining method, device, electronic equipment, system and storage medium - Google Patents

Position relation determining method, device, electronic equipment, system and storage medium

Info

Publication number
CN117745804A
Authority
CN
China
Prior art keywords: camera, lock head, cameras, lock, lock hole
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311668226.4A
Other languages
Chinese (zh)
Inventor
王剑涛
杨庆研
郑军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Matrixtime Robotics Shanghai Co ltd
Original Assignee
Matrixtime Robotics Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matrixtime Robotics Shanghai Co ltd
Priority to CN202311668226.4A
Publication of CN117745804A
Legal status: Pending

Landscapes

  • Control And Safety Of Cranes (AREA)

Abstract

The invention provides a positional relationship determining method, device, electronic device, system and storage medium, and relates to the field of image processing. Four cameras are mounted around the body area of a container truck parking space, at a preset height above the cargo board of the parking space, and are used to photograph the four corners of the cargo board. The optical axes of two of the cameras are parallel to the width direction of the parking space, the optical axes of the other two are parallel to its length direction, and the left portion of each camera's field of view contains a lock head of the cargo board. During loading, when the lock holes at the bottom of the container and the lock heads appear in the field of view of each camera, a three-dimensional coordinate difference is determined based on the current image acquired by each camera and sent to the crane controller, so that the crane controller controls the crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference, thereby accurately realizing automated container loading.

Description

Position relation determining method, device, electronic equipment, system and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, an electronic device, a system, and a storage medium for determining a positional relationship.
Background
In the automated operation of hoisting a container onto a container truck in a port container yard, the key requirement is to accurately determine the relative position between the container held by the spreader of the RMG (rail-mounted gantry crane) and the cargo board of the container truck.
In the prior art, one approach uses a lidar to determine the positional relationship. However, computation on the point cloud data collected by the lidar is slow, which reduces the efficiency of automated loading and unloading. In addition, the lidar is subjected to frequent vibration during operation and must be maintained or replaced often; since lidar units are expensive, the cost of such replacement is high.
Another prior-art approach determines the positional relationship based on depth prediction by a neural network. This approach involves stitching images from multiple cameras, so the cameras must be jointly calibrated, which is complex and costly when the cameras are far apart. Alternatively, training a single-camera depth network requires a large amount of real data, especially real depth data, which is difficult to acquire and therefore expensive.
Disclosure of Invention
The invention aims to provide a positional relationship determining method, device, electronic device, system and storage medium to solve the above problems in the prior art.
Embodiments of the invention may be implemented as follows:
In a first aspect, the present invention provides a positional relationship determining method, applied to an electronic device, where the electronic device is communicatively connected to four cameras and a crane controller respectively; the four cameras are respectively arranged around the body area of a container truck parking space, at a preset height above the cargo board of the parking space, and are used for photographing the four corners of the cargo board; among the four cameras, the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two cameras are parallel to the length direction of the parking space, and the left portion of each camera's field of view contains a lock head of the cargo board; the method comprises the following steps:
when it is determined that a lock hole at the bottom of the container and the lock head appear in the field of view of each camera, determining a three-dimensional coordinate difference based on the current image acquired by each camera; the three-dimensional coordinate difference reflects the distances between the lock hole and the lock head in the width direction and the length direction of the container truck parking space and in the height direction perpendicular to the ground;
and sending the three-dimensional coordinate difference to the crane controller so that the crane controller controls a crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference.
In an alternative embodiment, before the step of determining the three-dimensional coordinate difference based on the current image acquired by each camera when determining that the lock hole and the lock head at the bottom of the container appear in the view of each camera, the method further comprises:
receiving images acquired by each camera at intervals of a preset period;
obtaining a marked image corresponding to the current image of each camera in real time;
when a lock hole outline frame and a lock head mark frame exist in each mark image, determining that a lock hole and a lock head at the bottom of the container appear in the view field of each camera;
the step of determining a three-dimensional coordinate difference based on the current image acquired by each camera includes:
and determining the three-dimensional coordinate difference based on the lockhole outline frame, the lock head mark frame and the pixel resolution in each mark image.
In an alternative embodiment, the step of determining the three-dimensional coordinate difference based on the keyhole outline frame, the lock mark frame and the pixel resolution in each of the mark images includes:
Determining the pixel resolution of each marked image based on the preset actual width of the lock hole and the outline frame of the lock hole in each marked image;
based on the lockhole outline frame and the lock head mark frame in each mark image, respectively determining respective pixel coordinates of the lockhole center and the lock head center in each mark image;
for each marked image, determining the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the marked image based on the pixel resolution in the marked image and the pixel coordinates of the lock hole center and the lock head center;
and determining the three-dimensional coordinate difference based on the transverse actual distance and the longitudinal actual distance between the lock hole corresponding to each marking image and the lock head.
In an optional embodiment, the step of determining the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the marker image based on the pixel resolution in the marker image and the respective pixel coordinates of the lock hole center and the lock head center includes:
determining a transverse pixel distance and a longitudinal pixel distance between the lock hole center and the lock head center based on the respective pixel coordinates of the lock hole center and the lock head center in the marker image;
and converting the transverse pixel distance and the longitudinal pixel distance between the lock hole center and the lock head center by using the pixel resolution in the current image, to obtain the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the marker image.
In an optional embodiment, the optical axes of the first camera and the third camera are parallel to the width direction of the container truck parking space, and the optical axes of the second camera and the fourth camera are parallel to the length direction of the container truck parking space;
the step of determining the three-dimensional coordinate difference based on the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to each marking image comprises the following steps:
calculating an average value by using the transverse actual distances corresponding to the marking images of the first camera and the third camera respectively to obtain the interval distance between the lock hole and the lock head in the length direction of the container truck parking space;
calculating an average value by using the transverse actual distances corresponding to the marking images of the second camera and the fourth camera, so as to obtain the interval distance between the lock hole and the lock head in the width direction of the container truck parking space;
And calculating the average value of the longitudinal actual distances corresponding to all the marked images to obtain the spacing distance between the lock hole and the lock head in the height direction perpendicular to the ground.
In an optional embodiment, the step of obtaining, in real time, a marker image corresponding to the current image of each camera includes:
performing target detection on the received current image of each camera in real time by utilizing a pre-trained target detection model to obtain a marked image corresponding to each current image; the training process of the target detection model comprises the following steps:
acquiring a data set; the data set comprises a plurality of sample images, wherein the sample images are marked with a lockhole outline frame reflecting the lockhole position and a lock head mark frame reflecting the lock head position;
training a pre-established neural network model by using the data set to obtain the target detection model.
In a second aspect, the present invention provides a positional relationship determining apparatus, applied to an electronic device, where the electronic device is communicatively connected to four cameras and a crane controller respectively; the four cameras are respectively arranged around the body area of a container truck parking space, at a preset height above the cargo board of the parking space, and are used for photographing the four corners of the cargo board; among the four cameras, the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two cameras are parallel to the length direction of the parking space, and the left portion of each camera's field of view contains a lock head of the cargo board; the apparatus comprises:
a processing module, configured to determine a three-dimensional coordinate difference based on the current image acquired by each camera when it is determined that a lock hole at the bottom of the container and the lock head appear in the field of view of each camera; the three-dimensional coordinate difference reflects the distances between the lock hole and the lock head in the width direction and the length direction of the container truck parking space and in the height direction perpendicular to the ground;
and a transmission module, configured to send the three-dimensional coordinate difference to the crane controller, so that the crane controller controls the crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference.
In a third aspect, the present invention provides an electronic device comprising a memory and a processor, the memory storing a software program that, when executed by the processor while the electronic device is running, implements the positional relationship determining method according to the first aspect.
In a fourth aspect, the present invention provides a loading system comprising the electronic device according to the third aspect, and four cameras and a crane controller that are each communicatively connected to the electronic device; wherein:
the four cameras are respectively arranged around the body area of a container truck parking space, at a preset height above the cargo board of the parking space, and are used for photographing the four corners of the cargo board;
among the four cameras, the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two cameras are parallel to the length direction of the parking space, and the left portion of each camera's field of view contains a lock head of the cargo board.
In a fifth aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the positional relationship determination method of the foregoing first aspect.
Compared with the prior art, the embodiment of the invention provides a positional relationship determining method, device, electronic device, system and storage medium. Four cameras are mounted around the body area of a container truck parking space, at a preset height above its cargo board, and photograph the four corners of the cargo board; the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two are parallel to its length direction, and the left portion of each camera's field of view contains a lock head of the cargo board. In this way, during the automated operation of hoisting a container onto the container truck, when a lock hole at the bottom of the container and the lock head appear in the field of view of each camera, a three-dimensional coordinate difference is determined based on the current image acquired by each camera and sent to the crane controller, so that the crane controller controls the crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference. Compared with the prior art, loading and unloading with cameras is cheaper; owing to the scene geometry formed by the mounting positions and optical axis directions of the four cameras, the three-dimensional coordinate difference reflecting the distances between the lock hole and the lock head in the width direction and the length direction of the parking space and in the height direction perpendicular to the ground can be computed without image stitching, and the crane controller can then use the positional relationship between the lock holes at the bottom of the container and the lock heads of the container truck, as reflected by the three-dimensional coordinate difference, to accurately carry out automated container loading.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a loading and unloading system according to an embodiment of the present invention.
Fig. 2 is a top view of a camera mounting position according to an embodiment of the present invention.
Fig. 3 is a schematic flow chart of a method for determining a position relationship according to an embodiment of the present invention.
Fig. 4 is a second flowchart of a method for determining a positional relationship according to an embodiment of the present invention.
Fig. 5 is a side view of a boxing operation according to an embodiment of the present invention.
Fig. 6 is a top view of a camera mounting position according to a second embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a positional relationship determining apparatus according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that, if the terms "upper", "lower", "inner", "outer", and the like indicate an azimuth or a positional relationship based on the azimuth or the positional relationship shown in the drawings, or the azimuth or the positional relationship in which the inventive product is conventionally put in use, it is merely for convenience of describing the present invention and simplifying the description, and it is not indicated or implied that the apparatus or element referred to must have a specific azimuth, be configured and operated in a specific azimuth, and thus it should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
Here, an application scenario of the embodiment of the present invention is first described.
Referring to fig. 1, fig. 1 is a schematic diagram of a loading and unloading system according to an embodiment of the invention. The loading and unloading system includes an electronic device, and four cameras and a crane controller communicatively coupled to the electronic device.
Referring to fig. 2 on the basis of fig. 1, fig. 2 is a top view of a camera mounting position according to an embodiment of the present invention. In fig. 2, an empty container truck is parked in the container truck parking space, which can be divided into a head area and a body area.
In connection with fig. 2, in an embodiment of the present invention, the installation requirements and roles of the four cameras are as follows:
1. The four cameras are respectively arranged around the body area of the container truck parking space, and their mounting height may be a preset height above the cargo board of the parking space; in order not to interfere with the travel of the container truck, the cameras can only be mounted on or outside the boundary line of the parking space.
2. Among the four cameras, the optical axes of two cameras (namely, the direction from the camera position to the center point of the imaging field of view) are parallel to the width direction of the parking space, and the optical axes of the other two cameras are parallel to the length direction of the parking space.
3. The four cameras photograph the four corners of the cargo board of the container truck, respectively. So that each camera can simultaneously capture a lock head on the cargo board and a lock hole at the bottom of the container held by the spreader during automated loading, the mounting height of each camera must satisfy: (1) the left portion of the camera's field of view contains a lock head of the cargo board; (2) the field of view covers the space above the corresponding lock head up to a set height, for example 1 m or 1.5 m.
Assuming the cargo board of the container truck is 1 m above the ground, the preset height may be, for example, 0.5 m or 0.7 m; this is only an example and is not limiting.
It should be noted that fig. 2 is merely an example, and is not limited thereto.
The positional relationship determining method provided by the embodiment of the invention can be applied to an electronic device, where the electronic device may be an independent computing device or a server, or may be integrated with the crane controller.
Referring to fig. 3, fig. 3 is a flowchart of a method for determining a positional relationship according to an embodiment of the present invention, the method includes steps S140 to S150:
S140, when it is determined that the lock holes at the bottom of the container and the lock heads appear in the field of view of each camera, determining a three-dimensional coordinate difference based on the current image acquired by each camera.
In this embodiment, the three-dimensional coordinate difference reflects the distances between the lock hole and the lock head in the width direction and the length direction of the container truck parking space and in the height direction perpendicular to the ground.
S150, sending the three-dimensional coordinate difference to the crane controller, so that the crane controller controls the crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference.
In this embodiment, the crane controller may control the movement of the crane spreader based on the three-dimensional coordinate difference, so that the container held by the crane spreader approaches the cargo board with each lock hole aligned with the corresponding lock head, and the container is then placed on the cargo board.
According to the positional relationship determining method provided by the embodiment of the invention, four cameras are mounted around the body area of a container truck parking space, at a preset height above the cargo board of the parking space, and photograph the four corners of the cargo board; the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two are parallel to its length direction, and the left portion of each camera's field of view contains a lock head of the cargo board. Thus, during loading, when the lock holes at the bottom of the container and the lock heads appear in the field of view of each camera, the three-dimensional coordinate difference is determined based on the current image acquired by each camera and sent to the crane controller, so that the crane controller controls the crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference. Compared with the prior art, loading and unloading with cameras is cheaper; owing to the scene geometry formed by the mounting positions and optical axis directions of the four cameras, the three-dimensional coordinate difference reflecting the distances between the lock hole and the lock head in the width direction and the length direction of the parking space and in the height direction perpendicular to the ground can be computed without image stitching, and the crane controller can then use this positional relationship to accurately carry out automated container loading.
In an alternative implementation, during the container loading operation performed by the crane controller, the electronic device may control the cameras to capture images at regular intervals, so as to monitor the positional relationship between the lock heads and the lock holes in a timely manner.
Correspondingly, referring to fig. 4, before the step S140, S110 to S130 are further included:
S110, receiving images acquired by each camera at intervals of a preset period.
In this embodiment, the preset period is determined by the electronic device, since the electronic device can set the image acquisition frequency of the cameras; for example, the acquisition frequency may be set to 20 Hz (i.e., one image every 50 ms) or 40 Hz (i.e., one image every 25 ms). This is only an example and is not limiting.
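By way of illustration only, the following is a minimal sketch of such a timed acquisition loop in Python; the camera handles, the grab_frame callable and the 20 Hz figure are assumptions for illustration and do not refer to any specific camera SDK.

```python
import time

ACQUISITION_HZ = 20                      # assumed example: one image every 50 ms
PERIOD_S = 1.0 / ACQUISITION_HZ

def acquisition_loop(cameras, grab_frame, handle_images):
    """Poll every camera once per preset period and hand the images on (step S110).

    cameras:       list of four camera handles (assumed)
    grab_frame:    callable(camera) -> image, assumed to come from the camera SDK
    handle_images: callable(list of images) -> None, e.g. the detection step S120
    """
    while True:
        start = time.monotonic()
        images = [grab_frame(cam) for cam in cameras]    # one current image per camera
        handle_images(images)
        # sleep for whatever remains of the preset period
        time.sleep(max(0.0, PERIOD_S - (time.monotonic() - start)))
```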
S120, obtaining a marked image corresponding to the current image of each camera in real time.
In this embodiment, the electronic device may perform object detection on the current image of each camera received each time, so as to obtain a mark image corresponding to each current image.
Optionally, the substeps of step S120 may include:
S121, performing target detection on the received current image of each camera in real time by using a pre-trained target detection model, to obtain a marker image corresponding to each current image.
In this embodiment, the target detection model may be used to detect lock holes and lock heads in the current image. The training process of the target detection model may include:
(1) Acquiring a data set, wherein the data set comprises a plurality of sample images, and the sample images are marked with a lockhole outline frame reflecting the lockhole position and a lock head mark frame reflecting the lock head position;
(2) Training a pre-established neural network model by utilizing the data set to obtain a target detection model.
The target detection model may be built on any existing object detection algorithm, for example: YOLO (You Only Look Once, a convolutional-neural-network-based object detection algorithm), R-CNN (Region-based Convolutional Network method), Fast R-CNN (Fast Region-based Convolutional Network method), and the like.
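As a sketch only, assuming such a detector has been trained, step S121 could look like the following; run_detector, the Box structure and the class names lock_hole / lock_head are hypothetical placeholders, not the actual model or interface of this embodiment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    label: str      # "lock_hole" or "lock_head" (assumed class names)
    score: float

def mark_image(image, run_detector) -> dict:
    """Run the pre-trained detector on one current image and keep the
    highest-scoring lock hole / lock head boxes (step S121)."""
    detections: List[Box] = run_detector(image)          # assumed detector interface
    best = {"lock_hole": None, "lock_head": None}
    for det in detections:
        if det.label in best and (best[det.label] is None or det.score > best[det.label].score):
            best[det.label] = det
    # Step S130: both boxes present means the lock hole and the lock head are in view.
    return best
```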
S130, when a lock hole outline frame and a lock head mark frame exist in each mark image, determining that a lock hole and a lock head at the bottom of the container appear in the view field of each camera.
In this embodiment, when the bottom of the container held by the crane has not yet entered the field of view of a camera, only the lock head mark frame is present in the marker image obtained from the current image acquired by that camera. When the bottom of the container held by the crane enters the field of view of the camera, both a lock hole outline frame and a lock head mark frame are present in the marker image obtained from the current image acquired by the camera.
With continued reference to fig. 4, the process of determining the three-dimensional coordinate difference based on the current image acquired by each camera in step S140 may include the following sub-steps:
S141, determining the three-dimensional coordinate difference based on the lock hole outline frame, the lock head mark frame and the pixel resolution in each marker image.
Optionally, referring to fig. 5, a lock hole is formed in the middle of the square lock hole iron sheet at the bottom of the container. In the marked image, the lockhole outline frame can surround the lockhole iron sheet, and the lock head mark frame can surround the lock head.
In an alternative implementation manner, the pixel resolution in each marked image can be calculated first, and then the pixel resolution is utilized to convert the distance between the centers of the pixels of the lock hole outline frame and the lock head mark frame, so as to obtain the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to each marked image. And then integrating the transverse actual distances and the longitudinal actual distances corresponding to the four mark images to obtain the three-dimensional coordinate difference reflecting the integral position relation of the lock hole and the lock head.
Thus, the substeps of step S141 described above may include S141-1 to S141-4:
S141-1, determining the pixel resolution of each marker image based on the preset actual width of the lock hole and the lock hole outline frame in each marker image.
Optionally, for the ith camera, determining the pixel resolution of the marker image corresponding to the current image acquired by the ith camera may include:
(1) acquiring the lock hole pixel width in the marker image based on the lock hole outline frame in the marker image; (2) calculating the ratio of the preset actual width of the lock hole to the lock hole pixel width, to obtain the pixel resolution in the marker image;
Here the lock hole pixel width W_P may be taken as the width of the lock hole outline frame along the horizontal axis of the image coordinate system. Referring to fig. 5, the preset actual width of the lock hole W_A is the side length of the lock hole iron sheet, measured in advance. The pixel resolution is then obtained as R_i = W_A / W_P.
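A minimal sketch of step S141-1, assuming the lock hole outline frame is given as an axis-aligned pixel box (x_min, y_min, x_max, y_max); the names are illustrative only:

```python
def pixel_resolution(lock_hole_box, actual_width_m: float) -> float:
    """Step S141-1: R_i = preset actual lock hole width / lock hole pixel width.

    lock_hole_box:  (x_min, y_min, x_max, y_max) of the lock hole outline frame, in pixels
    actual_width_m: pre-measured side length of the lock hole iron sheet, in metres
    Returns metres per pixel for this marker image.
    """
    x_min, _, x_max, _ = lock_hole_box
    pixel_width = x_max - x_min                 # width along the horizontal image axis
    return actual_width_m / pixel_width
```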
S141-2, respectively determining respective pixel coordinates of a lock hole center and a lock head center in each marked image based on the lock hole outline frame and the lock head mark frame in each marked image.
In this embodiment, the coordinates of the center point of the lock hole outline frame in the image coordinate system may be calculated to obtain the pixel coordinates of the lock hole center, and the coordinates of the center point of the lock head mark frame in the image coordinate system may be calculated to obtain the pixel coordinates of the lock head center.
S141-3, for each marked image, determining the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the marked image based on the pixel resolution in the marked image and the respective pixel coordinates of the center of the lock hole and the center of the lock head.
In this embodiment, for a tag image, the process of determining the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the tag image may include:
S141-31, determining a transverse pixel distance and a longitudinal pixel distance between the lock hole center and the lock head center based on the respective pixel coordinates of the lock hole center and the lock head center in the marker image;
S141-32, converting the transverse pixel distance and the longitudinal pixel distance between the lock hole center and the lock head center by using the pixel resolution in the current image, to obtain the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the marker image.
In an alternative example, for the marker image corresponding to the ith camera, the pixel resolution R_i is calculated. Using the respective pixel coordinates of the lock hole center and the lock head center in the marker image, the transverse pixel distance Δu_P and the longitudinal pixel distance Δv_P between the two centers are computed. The transverse actual distance between the lock hole and the lock head corresponding to the marker image is then Δu_A = Δu_P × R_i, and the longitudinal actual distance is Δv_A = Δv_P × R_i.
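A minimal sketch of steps S141-2 and S141-3 under the same box convention as above; the centre of a frame is taken as the midpoint of its corners, and all names are illustrative assumptions:

```python
def box_center(box):
    """Pixel coordinates (u, v) of the centre of an axis-aligned box (step S141-2)."""
    x_min, y_min, x_max, y_max = box
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0

def actual_distances(lock_hole_box, lock_head_box, resolution_m_per_px):
    """Step S141-3 for one marker image: Δu_A = Δu_P × R_i, Δv_A = Δv_P × R_i."""
    hole_u, hole_v = box_center(lock_hole_box)
    head_u, head_v = box_center(lock_head_box)
    du_pixels = hole_u - head_u                 # transverse pixel distance (sign = direction)
    dv_pixels = hole_v - head_v                 # longitudinal pixel distance
    return du_pixels * resolution_m_per_px, dv_pixels * resolution_m_per_px
```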
S141-4, determining a three-dimensional coordinate difference based on the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to each marking image.
In this embodiment, since the imaging plane of each camera is perpendicular to the ground, as can be seen from fig. 2, the transverse and longitudinal actual distances determined from the images acquired by the two cameras whose optical axes lie along the length direction of the container truck parking space reflect, respectively, the distance between the lock head and the lock hole in the width direction of the parking space and in the height direction perpendicular to the ground; and the transverse and longitudinal actual distances determined from the images acquired by the two cameras whose optical axes lie along the width direction of the parking space reflect, respectively, the distance between the lock head and the lock hole in the length direction of the parking space and in the height direction perpendicular to the ground.
In an alternative example, referring to fig. 6, the optical axes of the first camera (camera No. 1 in fig. 6) and the third camera (camera No. 3 in fig. 6) are parallel to the width direction of the container truck parking space, and the optical axes of the second camera (camera No. 2 in fig. 6) and the fourth camera (camera No. 4 in fig. 6) are parallel to the length direction of the parking space. In this case, the substeps of step S141-4 may include S141-41 to S141-43:
S141-41, calculating an average value by using the transverse actual distances corresponding to the marker images of the first camera and the third camera, to obtain the spacing distance between the lock hole and the lock head in the length direction of the container truck parking space.
In this embodiment, with reference to fig. 6, the optical axes of camera No. 1 and camera No. 3 are parallel to the width direction of the parking space, and the transverse actual distances of their marker images both reflect the distance between the lock head and the lock hole in the length direction of the parking space. Therefore, the average of the transverse actual distances corresponding to the marker images of camera No. 1 and camera No. 3 can be taken as the spacing distance between the lock holes and the lock heads in the length direction of the parking space in the overall three-dimensional space.
S141-42, calculating an average value by using the transverse actual distances corresponding to the marker images of the second camera and the fourth camera, to obtain the spacing distance between the lock hole and the lock head in the width direction of the container truck parking space.
In this embodiment, with reference to fig. 6, the optical axes of camera No. 2 and camera No. 4 are parallel to the length direction of the parking space, and the transverse actual distances of their marker images both reflect the distance between the lock head and the lock hole in the width direction of the parking space. Therefore, the average of the transverse actual distances corresponding to the marker images of camera No. 2 and camera No. 4 can be taken as the spacing distance between the lock holes and the lock heads in the width direction of the parking space in the overall three-dimensional space.
S141-43, calculating the average value of the longitudinal actual distances corresponding to all the marked images, and obtaining the spacing distance between the lock hole and the lock head in the height direction perpendicular to the ground.
In this embodiment, referring to fig. 6, since the imaging plane of each camera is perpendicular to the ground, the longitudinal actual distances of the marker images of cameras No. 1 to No. 4 all reflect the distance between the lock head and the lock hole in the height direction perpendicular to the ground. Therefore, the average value of the longitudinal actual distances corresponding to the marker images of cameras No. 1 to No. 4 can be taken as the spacing distance between the lock hole and the lock head in the height direction perpendicular to the ground in the overall three-dimensional space.
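Putting substeps S141-41 to S141-43 together, the following sketch assumes the cameras are indexed 1 to 4 as in fig. 6 and that the per-camera (transverse, longitudinal) actual distances have already been computed; the function and argument names are illustrative only:

```python
def three_d_coordinate_difference(dist_by_camera):
    """dist_by_camera maps camera index 1..4 to the (transverse, longitudinal) actual
    distances between lock hole and lock head in that camera's marker image.

    Returns (d_length, d_width, d_height): the spacing between lock hole and lock head
    along the length and width of the parking space and in the height above the ground.
    """
    # Cameras 1 and 3 (optical axis along the width direction): their transverse
    # distances reflect the length direction of the parking space (S141-41).
    d_length = (dist_by_camera[1][0] + dist_by_camera[3][0]) / 2.0
    # Cameras 2 and 4 (optical axis along the length direction): their transverse
    # distances reflect the width direction of the parking space (S141-42).
    d_width = (dist_by_camera[2][0] + dist_by_camera[4][0]) / 2.0
    # All four longitudinal distances reflect the height direction (S141-43).
    d_height = sum(d[1] for d in dist_by_camera.values()) / len(dist_by_camera)
    return d_length, d_width, d_height
```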
The above describes in detail how the three-dimensional coordinate difference between the lock heads and the lock holes is calculated when the size of the container exactly matches the size of the cargo board of the container truck; for example, a 40-foot container exactly matches the container truck shown in fig. 6.
There are also cases where a smaller container is placed at either the front or the rear of the cargo board during loading. In such cases, the lock head and the lock hole can be detected simultaneously in the images acquired by only two of the cameras.
For example, the container truck shown in fig. 6 can carry two 20-foot containers. If a 20-foot container is to be placed at the front of the cargo board, the lock head and the lock hole appear simultaneously only in the fields of view of camera No. 1 and camera No. 4. The electronic device then generates the marker images corresponding to the current images acquired by camera No. 1 and camera No. 4 and executes steps S141-1 to S141-3 on these two marker images to obtain the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head for each of them, after which the three-dimensional coordinate difference is determined as follows:
(1) The transverse actual distance corresponding to the marker image of camera No. 1 is directly taken as the spacing distance between the lock hole and the lock head in the length direction of the container truck parking space in the overall three-dimensional space;
(2) The transverse actual distance corresponding to the marker image of camera No. 4 is directly taken as the spacing distance between the lock hole and the lock head in the width direction of the container truck parking space in the overall three-dimensional space;
(3) The average value of the longitudinal actual distances corresponding to the marker images of camera No. 1 and camera No. 4 is taken as the spacing distance between the lock hole and the lock head in the height direction perpendicular to the ground in the overall three-dimensional space, as sketched below.
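For this two-camera case, a corresponding sketch restricted to cameras No. 1 and No. 4; again the names are illustrative only:

```python
def three_d_difference_front_20ft(dist_cam1, dist_cam4):
    """(transverse, longitudinal) actual distances from the marker images of
    camera No. 1 and camera No. 4 only (small container at the front of the board)."""
    d_length = dist_cam1[0]                         # camera 1 transverse -> length direction
    d_width = dist_cam4[0]                          # camera 4 transverse -> width direction
    d_height = (dist_cam1[1] + dist_cam4[1]) / 2.0  # mean longitudinal -> height direction
    return d_length, d_width, d_height
```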
It should be noted that, in the above method embodiment, the execution sequence of each step is not limited by the drawing, and the execution sequence of each step is based on the actual application situation.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
First, the invention uses images collected by cameras to determine the positional relationship between the lock heads and the lock holes during container loading, which reduces cost; compared with processing lidar point cloud data, the computation is also faster. Although multiple cameras are used, no image stitching between different cameras is involved, so the scheme is simple and convenient to use and saves time and labour cost. Although object detection is involved, no complex depth data needs to be annotated, which saves annotation cost.
Second, the four cameras are installed around the body area of the container truck parking space, with the optical axes of two cameras parallel to the length direction of the parking space and the optical axes of the other two parallel to its width direction. During loading, the lock heads on the cargo board and the nearby lock holes that appear in the field of view of each camera form a specific scene geometry, so that the calculated three-dimensional coordinate difference can reflect the distances between the lock holes and the lock heads in the width direction and the length direction of the parking space and in the height direction perpendicular to the ground.
Third, the invention calculates the pixel resolution in each marker image and then uses it to convert the pixel distance between the centres of the lock hole outline frame and the lock head mark frame, obtaining the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to each marker image. Because the imaging plane of each camera is perpendicular to the ground, the transverse and longitudinal actual distances determined from the images of the two cameras whose optical axes lie along the length direction of the parking space reflect, respectively, the distance between the lock head and the lock hole in the width direction of the parking space and in the height direction perpendicular to the ground; the transverse and longitudinal actual distances determined from the images of the two cameras whose optical axes lie along the width direction of the parking space reflect, respectively, the distance between the lock head and the lock hole in the length direction of the parking space and in the height direction perpendicular to the ground. The transverse and longitudinal actual distances corresponding to the four marker images are then combined to obtain the three-dimensional coordinate difference reflecting the overall positional relationship between the lock holes and the lock heads. This scene geometry therefore allows the three-dimensional coordinate difference between the lock holes and the lock heads to be calculated accurately, so that the crane controller can accurately carry out automated container loading.
In order to perform the corresponding steps in the above-described method embodiments and in each possible implementation, an implementation of the position relation determining device is given below.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a positional relationship determining apparatus according to an embodiment of the present invention. The positional relationship determining apparatus 200 is applied to an electronic device, and the electronic device is communicatively connected to four cameras and a crane controller respectively. The four cameras are mounted around the body area of a container truck parking space, at a preset height above the cargo board of the parking space, and are used to photograph the four corners of the cargo board; among the four cameras, the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two are parallel to its length direction, and the left portion of each camera's field of view contains a lock head of the cargo board. The positional relationship determining apparatus 200 includes: a processing module 220 and a transmission module 230.
The processing module 220 is configured to determine a three-dimensional coordinate difference based on the current image acquired by each camera when it is determined that a lock hole at the bottom of the container and the lock head appear in the field of view of each camera; the three-dimensional coordinate difference reflects the distances between the lock hole and the lock head in the width direction and the length direction of the parking space and in the height direction perpendicular to the ground.
The transmission module 230 is configured to send the three-dimensional coordinate difference to the crane controller, so that the crane controller controls the crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference.
Optionally, the position relationship determining apparatus 200 may further include a detection module 210, where the detection module 210 may be used to implement steps S110 to S130 described above. The processing module 220 may be configured to implement the step S140 and its sub-steps, and the transmission module 230 may be configured to implement the step S150.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described position relationship determining apparatus 200 may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 300 comprises a processor 310, a memory 320 and a bus 330, the processor 310 being connected to the memory 320 via the bus 330.
The memory 320 may be used to store a software program, for example, a software program corresponding to the positional relationship determination apparatus 200 as provided in the embodiment of the present invention. The processor 310 performs various functional applications and data processing by running a software program stored in the memory 320 to implement the positional relationship determination method as provided by the embodiment of the present invention.
The memory 320 may be, but is not limited to: RAM (Random Access Memory), ROM (Read-Only Memory), FLASH (Flash Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), and the like.
The processor 310 may be an integrated circuit chip with signal processing capability. The processor 310 may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It is to be understood that the configuration shown in fig. 8 is merely illustrative, and that electronic device 300 may also include more or fewer components than those shown in fig. 8, or have a different configuration than that shown in fig. 8. The components shown in fig. 8 may be implemented in hardware, software, or a combination thereof.
The embodiment of the invention also provides a loading system, which comprises the above electronic device, and four cameras and a crane controller that are each communicatively connected to the electronic device; wherein the four cameras are mounted around the body area of a container truck parking space, at a preset height above the cargo board of the parking space, and are used to photograph the four corners of the cargo board; among the four cameras, the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two are parallel to its length direction, and the left portion of each camera's field of view contains a lock head of the cargo board.
The embodiment of the invention also provides a computer-readable storage medium, on which a computer program is stored; when the computer program is executed by a processor, the positional relationship determining method disclosed in the above embodiments is implemented. The computer-readable storage medium may be, but is not limited to, any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic disk, or an optical disk.
In summary, the embodiment of the invention provides a positional relationship determining method, device, electronic device, system and storage medium. Four cameras are mounted around the body area of a container truck parking space, at a preset height above its cargo board, and photograph the four corners of the cargo board; the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two are parallel to its length direction, and the left portion of each camera's field of view contains a lock head of the cargo board. In this way, during the automated operation of hoisting a container onto the container truck, when a lock hole at the bottom of the container and the lock head appear in the field of view of each camera, a three-dimensional coordinate difference is determined based on the current image acquired by each camera and sent to the crane controller, so that the crane controller controls the crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference. Compared with the prior art, loading and unloading with cameras is cheaper; owing to the scene geometry formed by the mounting positions and optical axis directions of the four cameras, the three-dimensional coordinate difference reflecting the distances between the lock hole and the lock head in the width direction and the length direction of the parking space and in the height direction perpendicular to the ground can be computed without image stitching, and the crane controller can then use the positional relationship between the lock holes at the bottom of the container and the lock heads of the container truck, as reflected by the three-dimensional coordinate difference, to accurately carry out automated container loading.
The present invention is not limited to the above embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. A positional relationship determining method, applied to an electronic device, wherein the electronic device is communicatively connected to four cameras and a crane controller respectively; the four cameras are respectively arranged around the body area of a container truck parking space, at a preset height above the cargo board of the parking space, and are used for photographing the four corners of the cargo board; among the four cameras, the optical axes of two cameras are parallel to the width direction of the parking space, the optical axes of the other two cameras are parallel to the length direction of the parking space, and the left portion of each camera's field of view contains a lock head of the cargo board;
the method comprises the following steps:
when it is determined that a lock hole at the bottom of the container and the lock head appear in the field of view of each camera, determining a three-dimensional coordinate difference based on the current image acquired by each camera; the three-dimensional coordinate difference reflects the distances between the lock hole and the lock head in the width direction and the length direction of the container truck parking space and in the height direction perpendicular to the ground;
and sending the three-dimensional coordinate difference to the crane controller so that the crane controller controls a crane spreader to place the container on the cargo board based on the three-dimensional coordinate difference.
2. The method of claim 1, further comprising, prior to the step of determining a three-dimensional coordinate difference based on the current image captured by each camera when determining the presence of the lock hole and the lock head at the bottom of the container within the field of view of the respective camera:
receiving images acquired by each camera at intervals of a preset period;
obtaining a marked image corresponding to the current image of each camera in real time;
when a lock hole outline frame and a lock head mark frame exist in each mark image, determining that a lock hole and a lock head at the bottom of the container appear in the view field of each camera;
the step of determining a three-dimensional coordinate difference based on the current image acquired by each camera includes:
and determining the three-dimensional coordinate difference based on the lockhole outline frame, the lock head mark frame and the pixel resolution in each mark image.
3. The method of claim 2, wherein the step of determining the three-dimensional coordinate difference based on the keyhole outline box, the lock cylinder mark box, and the pixel resolution in each of the mark images comprises:
Determining the pixel resolution of each marked image based on the preset actual width of the lock hole and the outline frame of the lock hole in each marked image;
based on the lockhole outline frame and the lock head mark frame in each mark image, respectively determining respective pixel coordinates of the lockhole center and the lock head center in each mark image;
for each marked image, determining the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the marked image based on the pixel resolution in the marked image and the pixel coordinates of the lock hole center and the lock head center;
and determining the three-dimensional coordinate difference based on the transverse actual distance and the longitudinal actual distance between the lock hole corresponding to each marking image and the lock head.
4. The method according to claim 3, wherein the step of determining the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the marked image based on the pixel resolution in the marked image and the respective pixel coordinates of the lock hole center and the lock head center comprises:
determining a transverse pixel distance and a longitudinal pixel distance between the lock hole center and the lock head center based on their respective pixel coordinates in the marked image;
converting the transverse pixel distance and the longitudinal pixel distance between the lock hole center and the lock head center by using the pixel resolution in the marked image, so as to obtain the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to the marked image.
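The conversion in claim 4 is a straightforward scaling of pixel offsets by the pixel resolution; a minimal sketch under the same illustrative assumptions as above:

```python
# Pixel-to-metric conversion of claim 4 (illustrative).
from typing import Tuple

def actual_distances(hole_center: Tuple[float, float],
                     head_center: Tuple[float, float],
                     mm_per_pixel: float) -> Tuple[float, float]:
    dx_px = head_center[0] - hole_center[0]   # transverse (horizontal) pixel distance
    dy_px = head_center[1] - hole_center[1]   # longitudinal (vertical) pixel distance
    return dx_px * mm_per_pixel, dy_px * mm_per_pixel

# Example: a 189 px transverse and 0 px longitudinal offset at 2 mm/pixel
# corresponds to 378 mm and 0 mm in the image plane.
transverse_mm, longitudinal_mm = actual_distances((131.0, 230.0), (320.0, 230.0), 2.0)
```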
5. The method according to claim 3, wherein the optical axes of the first camera and the third camera are parallel to the width direction of the cargo carrying plate, and the optical axes of the second camera and the fourth camera are parallel to the length direction of the cargo carrying plate;
the step of determining the three-dimensional coordinate difference based on the transverse actual distance and the longitudinal actual distance between the lock hole and the lock head corresponding to each marked image comprises:
averaging the transverse actual distances corresponding to the marked images of the first camera and the third camera to obtain the spacing distance between the lock hole and the lock head in the length direction of the container truck parking space;
averaging the transverse actual distances corresponding to the marked images of the second camera and the fourth camera to obtain the spacing distance between the lock hole and the lock head in the width direction of the container truck parking space;
averaging the longitudinal actual distances corresponding to all the marked images to obtain the spacing distance between the lock hole and the lock head in the height direction perpendicular to the ground.
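A minimal sketch of the claim-5 aggregation follows, assuming the per-camera (transverse, longitudinal) actual distances in millimetres have already been computed as in the previous sketch; the camera indexing is the one stated in the claim, and the input layout is an assumption.

```python
# Aggregation of per-camera distances into the three-dimensional coordinate
# difference (illustrative).
from typing import Dict, Tuple

def three_d_difference(per_camera: Dict[int, Tuple[float, float]]) -> Tuple[float, float, float]:
    """per_camera maps camera index 1..4 to (transverse_mm, longitudinal_mm)."""
    length_mm = (per_camera[1][0] + per_camera[3][0]) / 2.0               # cameras 1 and 3
    width_mm = (per_camera[2][0] + per_camera[4][0]) / 2.0                # cameras 2 and 4
    height_mm = sum(v[1] for v in per_camera.values()) / len(per_camera)  # all four cameras
    return width_mm, length_mm, height_mm

# Example with small residual offsets in millimetres:
diff = three_d_difference({1: (12.0, 5.0), 2: (-8.0, 6.0), 3: (10.0, 4.0), 4: (-6.0, 5.0)})
# -> (-7.0, 11.0, 5.0): width, length and height spacing distances
```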
6. The method according to claim 2, wherein the step of obtaining, in real time, a marked image corresponding to the current image of each camera comprises:
performing target detection on the received current image of each camera in real time by using a pre-trained target detection model to obtain the marked image corresponding to each current image; wherein the training process of the target detection model comprises:
acquiring a data set, wherein the data set comprises a plurality of sample images, and each sample image is marked with a lock hole outline box reflecting the lock hole position and a lock head mark box reflecting the lock head position;
training a pre-established neural network model by using the data set to obtain the target detection model.
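Claim 6 does not specify the network architecture. As one possible instantiation (not the patented model), a two-class detector for "lock hole" and "lock head" could be fine-tuned with torchvision's Faster R-CNN as sketched below; the data loader and hyperparameters are assumptions for illustration.

```python
# One possible training sketch for a claim-6 style detector (illustrative only).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_model(num_classes: int = 3):            # background + lock hole + lock head
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train(model, data_loader, epochs: int = 10, lr: float = 0.005):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images, targets in data_loader:       # targets hold "boxes" and "labels" tensors
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss = sum(model(images, targets).values())   # sum of detection losses
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```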
7. A position relation determining device, characterized in that the device is applied to an electronic device, wherein the electronic device is in communication connection with four cameras and a crane controller respectively; the four cameras are respectively arranged around the vehicle body area of a container truck parking space, at a preset height above a cargo carrying plate of the container truck parking space, and are used for shooting the four corners of the cargo carrying plate of the container truck parking space; among the four cameras, the optical axes of two cameras are parallel to the width direction of the container truck parking space, the optical axes of the other two cameras are parallel to the length direction of the container truck parking space, and the left side of each camera's field of view contains a lock head of the cargo carrying plate;
the device comprises:
a processing module, configured to determine a three-dimensional coordinate difference based on the current image acquired by each camera when it is determined that a lock hole at the bottom of a container and the lock head appear in the field of view of each camera; wherein the three-dimensional coordinate difference reflects the spacing distances between the lock hole and the lock head in the width direction and the length direction of the container truck parking space and in the height direction perpendicular to the ground;
a transmission module, configured to send the three-dimensional coordinate difference to the crane controller, so that the crane controller controls a crane lifting appliance to place the container on the cargo carrying plate based on the three-dimensional coordinate difference.
8. An electronic device, comprising a memory storing a software program which, when executed by the electronic device, performs the position relation determining method according to any one of claims 1-6.
9. A loading system, comprising the electronic device of claim 8, and four cameras and a crane controller that are each in communication connection with the electronic device; wherein:
the four cameras are respectively arranged around the vehicle body area of a container truck parking space, at a preset height above a cargo carrying plate of the container truck parking space, and are used for shooting the four corners of the cargo carrying plate of the container truck parking space;
among the four cameras, the optical axes of two cameras are parallel to the width direction of the container truck parking space, the optical axes of the other two cameras are parallel to the length direction of the container truck parking space, and the left side of each camera's field of view contains a lock head of the cargo carrying plate.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the position relation determining method according to any one of claims 1-6.
CN202311668226.4A 2023-12-06 2023-12-06 Position relation determining method, device, electronic equipment, system and storage medium Pending CN117745804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311668226.4A CN117745804A (en) 2023-12-06 2023-12-06 Position relation determining method, device, electronic equipment, system and storage medium

Publications (1)

Publication Number Publication Date
CN117745804A true CN117745804A (en) 2024-03-22

Family

ID=90260105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311668226.4A Pending CN117745804A (en) 2023-12-06 2023-12-06 Position relation determining method, device, electronic equipment, system and storage medium

Country Status (1)

Country Link
CN (1) CN117745804A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination