CN114494453A - Automatic loading and unloading method and automatic loading and unloading system based on radar and camera - Google Patents

Automatic loading and unloading method and automatic loading and unloading system based on radar and camera

Info

Publication number
CN114494453A
CN114494453A (application CN202111647223.3A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
solid
cargo
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111647223.3A
Other languages
Chinese (zh)
Inventor
董邓伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Multiway Robotics Shenzhen Co Ltd
Original Assignee
Multiway Robotics Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Multiway Robotics Shenzhen Co Ltd filed Critical Multiway Robotics Shenzhen Co Ltd
Priority to CN202111647223.3A priority Critical patent/CN114494453A/en
Publication of CN114494453A publication Critical patent/CN114494453A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B65 CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G 67/00 Loading or unloading vehicles
    • B65G 67/02 Loading or unloading land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Mechanical Engineering (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an automatic loading and unloading method and an automatic loading and unloading system based on radar and a camera. The method is applied to the automatic loading and unloading system, which comprises a vision system consisting of two solid-state laser radars and a camera placed below the solid-state laser radars. The method comprises the following steps: acquiring point cloud data collected by the solid-state laser radars and image data collected by the camera; obtaining the cargo image coordinates from the image data, and mapping the point cloud data into the image coordinate system according to a first calibration external parameter to obtain the cargo point cloud data; and determining the position information of the cargo from the cargo point cloud data. Because the first calibration external parameter between the solid-state laser radars and the camera is calibrated in advance, the cargo point cloud data can be obtained directly from it and the position of the goods determined from the cargo point cloud data, which solves the problem of loading and unloading failures caused by large deviations in the truck's stopping position.

Description

Automatic loading and unloading method and automatic loading and unloading system based on radar and camera
Technical Field
The invention relates to the technical field of automation, in particular to an automatic loading and unloading method and an automatic loading and unloading system based on radar and a camera.
Background
Automatic loading and transport is the final step in AGV automation. At present, when an AGV transports goods, it must know in advance the preset position of the goods or the coordinates of the target point to which the goods are to be carried. However, when the truck's stopping position is inaccurate, the actual position of the goods can deviate significantly from the preset position, causing the loading or unloading of the truck to fail.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide an automatic loading and unloading method and an automatic loading and unloading system based on radar and a camera, so as to solve the problem of loading and unloading failures when the truck's stopping position deviates significantly from the preset position.
In order to achieve the above object, the present invention provides a radar and camera based automatic loading and unloading method applied to an automatic loading and unloading system, wherein the automatic loading and unloading system comprises a vision system, the vision system comprises two solid-state laser radars and a camera placed below the solid-state laser radars, and the radar and camera based automatic loading and unloading method comprises the following steps:
acquiring point cloud data acquired by the solid-state laser radar and image data acquired by the camera;
acquiring cargo image coordinates of the cargo in an image coordinate system according to the image data;
determining a first calibration external parameter, and mapping the point cloud data into the image coordinate system according to the first calibration external parameter to obtain cargo point cloud data corresponding to each pixel point of the cargo image coordinates, wherein the first calibration external parameter is the relative pose relationship between the image coordinate system and a solid-state laser radar coordinate system;
and determining the position information of the cargo according to the cargo point cloud data.
Optionally, the step of determining the first calibration external parameter includes:
acquiring historical point cloud data obtained by scanning a calibration plate with the solid-state laser radar and historical image data obtained by photographing the calibration plate with the camera;
extracting the target historical point cloud data of each corner point of the calibration plate from the historical point cloud data, and determining first coordinate information of the corner points of the calibration plate in the solid-state laser radar coordinate system according to the target historical point cloud data;
determining second coordinate information of the corner point of the calibration plate in the image coordinate system according to the historical image data;
determining the corresponding relation between the image coordinate system and the solid-state laser radar coordinate system according to the first coordinate information and the second coordinate information;
and determining the corresponding relation as the relative pose relation, and determining the relative pose relation as the first calibration external parameter.
Optionally, the point cloud data includes first point cloud data and second point cloud data, and the step of mapping the point cloud data into the image coordinate system according to the first calibration external parameter includes:
acquiring second calibration external parameters corresponding to the first solid-state laser radar and the second solid-state laser radar;
registering the first point cloud data and the second point cloud data according to the second calibration external parameter to obtain registered point cloud data;
and mapping the point cloud data after registration to the image coordinate system according to the first calibration external parameter.
Optionally, the step of determining the position information of the cargo according to the cargo point cloud data includes:
acquiring a mapping relationship between the solid-state laser radar coordinate system and a map coordinate system;
mapping the cargo point cloud data into the map coordinate system according to the mapping relationship so as to obtain the map coordinates of the cargo in the map coordinate system;
and determining the map coordinates as the position information.
Optionally, after the step of determining the location information of the cargo according to the cargo point cloud data, the method further includes:
and sending the position information to a forklift so that the forklift can execute unloading operation according to the map coordinates.
Optionally, the radar and camera based automatic loading and unloading method further comprises the steps of:
acquiring the remaining area of the boxcar according to the point cloud data;
when the remaining area is greater than or equal to a preset area, determining a storage position according to the remaining area;
and sending the storage position to a forklift so that the forklift can carry out a loading operation according to the storage position.
Optionally, the step of obtaining the remaining area of the boxcar according to the point cloud data includes:
extracting edge point cloud data of the boxcar according to the point cloud data;
determining the effective length and the effective width of the boxcar according to the edge point cloud data;
and determining the remaining area of the boxcar according to the effective length and the effective width.
Optionally, the step of determining a storage position according to the remaining area includes:
acquiring area information of the goods to be placed;
and allocating a corresponding storage position to the goods to be placed according to the area information and the remaining area.
In addition, in order to achieve the above object, the present invention further provides an automatic loading and unloading system including a vision system, the vision system including two solid-state laser radars and a camera placed below the solid-state laser radars.
Optionally, the automatic loading and unloading system further comprises: a memory, a processor, and a radar and camera based automatic loading and unloading program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the radar and camera based automatic loading and unloading method as described above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a radar and camera based automatic loading and unloading program which, when executed by a processor, implements the steps of the radar and camera based automatic loading and unloading method as described above.
The invention provides an automatic loading and unloading method based on radar and a camera. An automatic loading and unloading system is arranged, the system comprising a vision system that includes two solid-state laser radars and a camera arranged below the solid-state laser radars. The solid-state laser radars and the camera are calibrated jointly to obtain a first calibration external parameter. During actual loading, point cloud data of the cargo is collected by the solid-state laser radars and image data of the cargo is collected by the camera. The cargo image coordinates of the cargo in the image coordinate system are obtained from the image data, the point cloud data is then mapped into the image coordinate system using the first calibration external parameter to obtain the cargo point cloud data corresponding to each cargo image coordinate, and the position information of the cargo is determined from the cargo point cloud data. By arranging two solid-state laser radars, the field of view of the vision system is enlarged so that the point cloud data of the goods can be acquired accurately, and accurate position information of the goods can be obtained through the two solid-state laser radars and the camera. This solves the problem that goods cannot be loaded and unloaded accurately when the truck's parking position is inaccurate, and also addresses the high cost of acquiring position information with a traditional vision system.
Drawings
Fig. 1 is a schematic diagram of the terminal structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow diagram of a first embodiment of the radar and camera based automatic loading and unloading method of the present invention;
FIG. 3 is a schematic diagram of the vision system of the present invention;
FIG. 4 is a diagram illustrating how the first embodiment of the present invention acquires the image information of the goods from the image data;
FIG. 5 is a diagram illustrating how the first embodiment obtains the point cloud data corresponding to each image coordinate;
FIG. 6 is a flowchart illustrating the details of step S40 in the first embodiment of the radar and camera based automatic loading and unloading method of the present invention;
FIG. 7 is a flowchart illustrating the details of step S30 in a second embodiment of the radar and camera based automatic loading and unloading method of the present invention;
FIG. 8 is a schematic flow diagram of a third embodiment of the radar and camera based automatic loading and unloading method of the present invention;
fig. 9 is a diagram illustrating how the third embodiment allocates a corresponding target storage position to the cargo according to the area information and the remaining area.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiments of the invention is as follows: acquiring point cloud data collected by the solid-state laser radar and image data collected by the camera; acquiring the cargo image coordinates of the cargo in an image coordinate system according to the image data, and mapping the point cloud data into the image coordinate system according to a first calibration external parameter to obtain the cargo point cloud data corresponding to each pixel point of the cargo image coordinates, wherein the first calibration external parameter is determined according to the relative pose relationship between the image coordinate system and a solid-state laser radar coordinate system; and determining the position information of the cargo according to the cargo point cloud data.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be a PC, and can also be terminal equipment such as a smart phone, a tablet computer, a portable computer and the like.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is one type of computer storage medium, may include an operating system, a network communication module, a user interface module, and a radar and camera based automatic loading and unloading program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and exchanging data with it; the user interface 1003 is mainly used for connecting to a client (user side) and exchanging data with it; and the processor 1001 may be used to invoke the radar and camera based automatic loading and unloading program stored in the memory 1005 and perform the following operations:
acquiring point cloud data acquired by the solid-state laser radar and image data acquired by the camera;
acquiring the cargo image coordinates of the cargo in an image coordinate system according to the image data;
determining a first calibration external parameter, and mapping the point cloud data into the image coordinate system according to the first calibration external parameter to obtain cargo point cloud data corresponding to each pixel point of the cargo image coordinates, wherein the first calibration external parameter is the relative pose relationship between the image coordinate system and a solid-state laser radar coordinate system;
and determining the position information of the cargo according to the cargo point cloud data.
Further, the processor 1001 may invoke the radar and camera based automatic loading and unloading program stored in the memory 1005, and also perform the following operations:
acquiring historical point cloud data obtained by scanning the calibration plate by the solid-state laser radar and historical image data obtained by shooting the calibration plate by the camera;
extracting the target historical point cloud data of each corner point of the calibration plate from the historical point cloud data, and determining first coordinate information of the corner points of the calibration plate in the solid-state laser radar coordinate system according to the target historical point cloud data;
determining second coordinate information of the corner point of the calibration plate in the image coordinate system according to the historical image data;
determining the corresponding relation between the image coordinate system and the solid-state laser radar coordinate system according to the first coordinate information and the second coordinate information;
and determining the corresponding relation as the relative pose relation, and taking the relative pose relation as the first calibration external parameter.
Further, the processor 1001 may invoke the radar and camera based automatic loading and unloading program stored in the memory 1005, and also perform the following operations:
acquiring second calibration external parameters corresponding to the first solid-state laser radar and the second solid-state laser radar;
registering the first point cloud data and the second point cloud data according to the second calibration external parameter to obtain registered point cloud data;
and mapping the point cloud data after registration to the image coordinate system according to the first calibration external parameter.
Further, the processor 1001 may invoke the radar and camera based automatic loading and unloading program stored in the memory 1005, and also perform the following operations:
acquiring a mapping relationship between the solid-state laser radar coordinate system and a map coordinate system;
mapping the cargo point cloud data into the map coordinate system according to the mapping relationship so as to obtain the map coordinates of the cargo in the map coordinate system;
and determining the map coordinates as the position information.
Further, the processor 1001 may invoke the radar and camera based automatic loading and unloading program stored in the memory 1005, and also perform the following operations:
and sending the position information to a forklift so that the forklift can execute unloading operation according to the map coordinates.
Further, the processor 1001 may invoke the radar and camera based automatic loading and unloading program stored in the memory 1005, and also perform the following operations:
acquiring the remaining area of the boxcar according to the point cloud data;
when the remaining area is greater than or equal to a preset area, determining a storage position according to the remaining area;
and sending the storage position to a forklift so that the forklift can carry out a loading operation according to the storage position.
Further, the processor 1001 may invoke the radar and camera based automatic loading and unloading program stored in the memory 1005, and also perform the following operations:
extracting edge point cloud data of the boxcar according to the point cloud data;
determining the effective length and the effective width of the boxcar according to the edge point cloud data;
and determining the remaining area of the boxcar according to the effective length and the effective width.
Further, the processor 1001 may invoke the radar and camera based automatic loading and unloading program stored in the memory 1005, and also perform the following operations:
acquiring area information of goods to be placed;
and allocating a corresponding storage position to the goods to be placed according to the area information and the remaining area.
Referring to fig. 2, a first embodiment of the present invention provides a radar and camera based automatic loading and unloading method, comprising:
step S10, acquiring point cloud data collected by the solid-state laser radars and image data collected by the camera;
step S20, acquiring the cargo image coordinates of the cargo in an image coordinate system according to the image data;
step S30, determining a first calibration external parameter, and mapping the point cloud data into the image coordinate system according to the first calibration external parameter to obtain the cargo point cloud data corresponding to each pixel point of the cargo image coordinates, wherein the first calibration external parameter is the relative pose relationship between the image coordinate system and the solid-state laser radar coordinate system;
and step S40, determining the position information of the cargo according to the cargo point cloud data.
In this embodiment, the terminal is the automatic loading and unloading system, which includes a vision system; fig. 3 is a schematic diagram of the architecture of the vision system. The vision system includes two solid-state laser radars and a camera placed below the solid-state laser radars; the camera is an RGB camera, and the field of view of the vision system is 120° horizontally and 70° vertically.
Optionally, in a further embodiment, the vision system comprises at least two solid state lidar and at least one camera placed below the solid state lidar.
Optionally, the vision system may be disposed directly above the parking position of the truck, or at the side of the parking position; specifically, the placement may be chosen according to the type of truck.
Optionally, after the truck is parked stably, the corresponding vision system is started. The solid-state laser radar collects point cloud data of the truck and the camera collects image data of the truck; the point cloud data includes point cloud data of the goods and of the background, and the image data includes images of the goods and of the background.
Optionally, after the point cloud data and the image data are obtained, image information of the goods is detected from the image data, the image information including the pixel points corresponding to the goods in the image data. As shown in FIG. 4, FIG. 4 is an example of acquiring the image information of the goods from the image data. Specifically, the cargo image coordinates of the goods are acquired as follows: the camera collects RGB color data, and the cargo image coordinates of the goods are obtained with a detection framework based on the YOLO deep learning algorithm. The YOLO detection framework integrates data augmentation, so the training task can be completed with only a few training samples while still achieving a good training result, and it performs particularly well in scenes with a simple background. In this embodiment, the YOLO algorithm is further extended with an angled bounding-box detection function, so that goods placed at a rotation angle can also be detected.
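As an illustration only (the patent does not tie the detection step to a particular implementation), the rotated-box detection could be sketched in Python as below, assuming a pretrained oriented-bounding-box YOLO model from the ultralytics package; the weights file name is hypothetical:

    from ultralytics import YOLO

    # Hypothetical weights, trained on cargo images with oriented bounding boxes.
    model = YOLO("cargo_obb.pt")

    def detect_cargo(image_bgr):
        """Return (cx, cy, w, h, angle) oriented boxes for each detected cargo item."""
        boxes = []
        for result in model(image_bgr):
            if result.obb is None:       # model without an OBB head
                continue
            # xywhr rows: center x, center y, width, height, rotation angle
            for cx, cy, w, h, angle in result.obb.xywhr.tolist():
                boxes.append((cx, cy, w, h, angle))
        return boxes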
Optionally, after the cargo image coordinates of the cargo in the image coordinate system are acquired, the cargo point cloud data of the cargo in the solid-state laser radar coordinate system is acquired according to a first calibration external parameter, wherein the first calibration external parameter is the relative pose information between the image coordinate system and the solid-state laser radar coordinate system.
Optionally, the step of determining the first calibration external parameter includes:
acquiring historical point cloud data obtained by scanning the calibration plate by the solid-state laser radar and historical image data obtained by shooting the calibration plate by the camera;
extracting the target historical point cloud data of each corner point of the calibration plate from the historical point cloud data, and determining first coordinate information of the corner points of the calibration plate in the solid-state laser radar coordinate system according to the target historical point cloud data;
determining second coordinate information of the corner point of the calibration plate in the image coordinate system according to the historical image data;
determining the corresponding relation between the image coordinate system and the solid-state laser radar coordinate system according to the first coordinate information and the second coordinate information;
and determining the corresponding relation as the relative pose relation, and taking the relative pose relation as the first calibration external parameter.
Optionally, the calibration plate may be a thin rectangular wooden board. The solid-state laser radar collects point cloud data of the board multiple times as the historical point cloud data, and the camera captures historical image data of the board multiple times. After the historical point cloud data and the historical image data are acquired, the first coordinate information of the corner points of the calibration plate is determined from the target historical point cloud data; it can be understood that the first coordinate information is the point cloud coordinates of the corner points in the solid-state laser radar coordinate system.
Optionally, second coordinate information of the corner points of the calibration plate in the image coordinate system is determined according to the historical image data, the second coordinate information including the corner-point image coordinates of the corner points in the image coordinate system.
Optionally, after the first coordinate information and the second coordinate information are obtained, a mapping between the second coordinate information and the first coordinate information of each corner point is established, giving a nonlinear model. The nonlinear model is solved with the Ceres library, so that the relative pose relationship between the image coordinate system and the solid-state laser radar coordinate system is determined from the mapping, completing the joint calibration of the solid-state laser radar and the camera.
Optionally, after the relative pose relationship between the image coordinate system and the solid-state laser radar coordinate system is obtained, the relative pose relationship is determined as the first calibration external parameter.
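For illustration, a minimal sketch of this joint calibration, substituting OpenCV's solvePnP for the Ceres nonlinear solve described above (the corner extraction and the camera intrinsics K and dist are assumed to be available):

    import numpy as np
    import cv2

    def calibrate_lidar_to_camera(corners_lidar, corners_image, K, dist):
        """Estimate the lidar-to-camera pose, i.e. the first calibration external parameter.

        corners_lidar: (N, 3) plate corners in the lidar frame (first coordinate information).
        corners_image: (N, 2) the same corners in pixels (second coordinate information).
        """
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(corners_lidar, dtype=np.float64),
            np.asarray(corners_image, dtype=np.float64),
            K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            raise RuntimeError("PnP failed; check the corner correspondences")
        R, _ = cv2.Rodrigues(rvec)            # 3x3 rotation, lidar frame -> camera frame
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, tvec.ravel()
        return T                              # 4x4 homogeneous extrinsic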
Optionally, after the cargo image coordinates of the cargo in the image coordinate system are acquired, the point cloud data collected by the solid-state laser radar is mapped into the image coordinate system according to the first calibration external parameter, so as to obtain the cargo point cloud data corresponding to the pixel points of the cargo image coordinates. As shown in FIG. 5, FIG. 5 is an example of acquiring the cargo point cloud data corresponding to the pixel points of the cargo image coordinates; the cargo point cloud data corresponding to the cargo image coordinates are the three-dimensional coordinates of the cargo in the solid-state laser radar coordinate system.
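A sketch of that projection step under the pinhole model, assuming the intrinsic matrix K and the 4x4 extrinsic T_cam_lidar from the calibration sketch above (variable names are illustrative; the cargo pixels are given as a boolean mask):

    import numpy as np

    def cargo_points(points_lidar, T_cam_lidar, K, cargo_mask):
        """Select the lidar points that project onto pixels detected as cargo."""
        pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
        in_front = pts_cam[:, 2] > 0                 # keep points in front of the camera
        uv = (K @ pts_cam[in_front].T).T
        uv = (uv[:, :2] / uv[:, 2:3]).astype(int)    # perspective divide -> pixel coords
        h, w = cargo_mask.shape
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(uv), dtype=bool)
        hit[valid] = cargo_mask[uv[valid, 1], uv[valid, 0]]
        return points_lidar[in_front][hit]           # the cargo point cloud data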
Optionally, during actual unloading, the forklift must be told the actual position of the cargo before it can unload; this actual position information is a coordinate value in the map coordinate system. Therefore, after the cargo point cloud data is obtained, it must be converted into actual coordinates in the map coordinate system. Based on this, and referring to FIG. 6, an embodiment of the present application provides a method for determining the actual position information of the cargo from the cargo point cloud data.
Optionally, the step S40 includes:
step S41, acquiring the mapping relationship between the solid-state laser radar coordinate system and the map coordinate system;
step S42, mapping the cargo point cloud data into the map coordinate system according to the mapping relationship so as to obtain the map coordinates of the cargo in the map coordinate system;
step S43, determining the map coordinates as the position information.
Optionally, the mapping relationship is the relative pose relationship between the solid-state laser radar coordinate system and the map coordinate system. Specifically, it may be obtained by first obtaining the calibration external parameter between the image coordinate system and the map coordinate system, and then deriving the calibration external parameter between the solid-state laser radar coordinate system and the map coordinate system from the first calibration external parameter between the image coordinate system and the solid-state laser radar coordinate system.
Optionally, the external parameter between the image coordinate system and the map coordinate system may be obtained by a hand-eye calibration method. Specifically, the forklift is treated as an extra-long four-degree-of-freedom "mechanical arm", and the calibration plate is fixed on the forklift. The forklift is moved multiple times; after each move, the coordinates of the forklift relative to the map coordinate system and an image of the calibration plate taken by the camera are recorded, from which the coordinates of the calibration plate in the image coordinate system are obtained. With these world coordinates and image coordinates, an eye-to-hand (camera outside the hand) nonlinear solving model is established, and the external parameter between the image coordinate system and the map coordinate system is obtained from this model.
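As an aside, OpenCV ships a hand-eye solver that can stand in for the nonlinear solving model described above; a sketch for the eye-to-hand configuration (per the OpenCV documentation, passing base-to-gripper poses instead of the usual gripper-to-base poses makes calibrateHandEye return the camera pose in the base, here map, frame):

    import numpy as np
    import cv2

    def eye_to_hand_calibration(R_base2gripper, t_base2gripper, R_target2cam, t_target2cam):
        """Camera fixed in the map frame, calibration plate on the moving forklift."""
        R, t = cv2.calibrateHandEye(R_base2gripper, t_base2gripper,
                                    R_target2cam, t_target2cam,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
        T_map_cam = np.eye(4)                 # camera -> map homogeneous transform
        T_map_cam[:3, :3], T_map_cam[:3, 3] = R, t.ravel()
        return T_map_cam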
Optionally, after the calibration external parameter between the image coordinate system and the map coordinate system is obtained, the relative pose relationship between the solid-state laser radar coordinate system and the map coordinate system may be determined through the first calibration external parameter between the image coordinate system and the solid-state laser radar coordinate system.
Optionally, after the mapping relationship is obtained, the cargo point cloud data corresponding to the cargo is mapped into the map coordinate system according to the mapping relationship, so as to obtain a map coordinate of the cargo in the map coordinate system.
Optionally, after the map coordinates are acquired, the map coordinates are determined as the position information of the cargo.
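A sketch of that chain of transforms, assuming the camera-to-map pose T_map_cam from the hand-eye step and the lidar-to-camera pose T_cam_lidar from the joint calibration (summarizing the cargo position by its centroid is an assumption for illustration):

    import numpy as np

    def cargo_map_coordinates(cargo_points_lidar, T_map_cam, T_cam_lidar):
        """Map the cargo point cloud from the lidar frame into the map frame."""
        T_map_lidar = T_map_cam @ T_cam_lidar        # the 'mapping relationship'
        pts_h = np.hstack([cargo_points_lidar,
                           np.ones((len(cargo_points_lidar), 1))])
        pts_map = (T_map_lidar @ pts_h.T).T[:, :3]
        return pts_map.mean(axis=0), pts_map         # centroid plus the mapped points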
Optionally, after the position information is obtained, the forklift needs to be controlled to unload the goods from the truck according to the position information. Based on this, the automatic loading and unloading system further includes a forklift, and after the step S40, the method further includes:
and sending the position information to the forklift so that the forklift can execute an unloading operation according to the map coordinates.
Optionally, the position information includes a map coordinate of the cargo in a map coordinate system, and the forklift obtains the map coordinate of the cargo after acquiring the position information, and then moves to a carrying position corresponding to the map coordinate according to the map coordinate, so as to perform unloading operation.
In the embodiment of the application, an automatic loading and unloading system is arranged. The system comprises a vision system, the vision system comprising two solid-state laser radars and a camera placed below the solid-state laser radars. By calibrating the solid-state laser radars against the camera, and the solid-state laser radar coordinate system against the map coordinate system, in advance, the first calibration external parameter between the solid-state laser radars and the camera and the mapping relationship between the solid-state laser radar coordinate system and the map coordinate system are obtained. In subsequent actual operation, point cloud data is collected by the solid-state laser radars and image data is collected by the camera. The cargo image coordinates of the cargo in the image coordinate system are obtained from the image data; the first calibration external parameter is invoked, and the point cloud data is mapped into the image coordinate system according to it, obtaining the cargo point cloud data corresponding to the pixel points of the cargo image coordinates. The mapping relationship between the solid-state laser radar coordinate system and the map coordinate system is then invoked, the cargo point cloud data is mapped into the map coordinate system according to it, the map coordinates of the cargo in the map coordinate system are obtained, and the map coordinates are determined as the position information of the cargo. The position information is finally sent to the forklift so that the forklift can carry out the unloading operation according to the map coordinates.
Optionally, the vision system includes two solid-state laser radars. In actual acquisition, the point cloud data collected by the two radars differ, and so do their respective solid-state laser radar coordinate systems; the point cloud data collected by the different radars therefore need to be registered. Based on this, referring to FIG. 7 and building on the first embodiment, the step S30 includes:
step S31, acquiring second calibration external parameters corresponding to the first solid-state laser radar and the second solid-state laser radar;
step S32, registering the first point cloud data and the second point cloud data according to the second calibration external parameter to obtain registered point cloud data;
and step S33, mapping the point cloud data after registration to the image coordinate system according to the first calibration external parameter.
Optionally, the embodiment of the application is analyzed with two solid-state laser radars as an example. The vision system includes a first solid-state laser radar and a second solid-state laser radar; the point cloud data collected by the first radar is the first point cloud data, the point cloud data collected by the second radar is the second point cloud data, and the coordinate systems corresponding to the first and second radars are the first and second solid-state laser radar coordinate systems respectively.
Optionally, after the first point cloud data and the second point cloud data are obtained, they are registered to obtain the registered point cloud data; the first calibration external parameter is then invoked, and the registered point cloud data is mapped into the image coordinate system to obtain the cargo point cloud data of the cargo.
Optionally, the first point cloud data and the second point cloud data are registered as follows. The first solid-state laser radar and the second solid-state laser radar are calibrated to obtain the second calibration external parameter between them. The calibration converts the first historical point cloud data collected by the first solid-state laser radar into the second solid-state laser radar coordinate system corresponding to the second solid-state laser radar, or, equivalently, converts the second historical point cloud data collected by the second solid-state laser radar into the first solid-state laser radar coordinate system corresponding to the first solid-state laser radar.
Optionally, the embodiment of the present application takes as its example converting the first historical point cloud data collected by the first solid-state laser radar into the second solid-state laser radar coordinate system corresponding to the second solid-state laser radar.
Optionally, the first solid-state laser radar and the second solid-state laser radar have overlapping fields of view of a preset angle; here, the overlap is 20 degrees. Based on this, the conversion may be performed as follows: acquire the first historical point cloud data collected by the first solid-state laser radar and the second historical point cloud data collected by the second solid-state laser radar; extract the first target historical point cloud data of the first radar within the overlapping field of view and the second target historical point cloud data of the second radar within the overlapping field of view; use the radar assembly information as the initial pose; solve the pose relationship between the two radars with the ICP registration algorithm; and take the pose obtained after iterative registration as the second calibration external parameter. The ICP registration algorithm is a local registration algorithm.
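A minimal sketch of that pairwise calibration using Open3D's ICP (the assembly-derived initial pose T_init is assumed given; the correspondence threshold is illustrative):

    import open3d as o3d

    def calibrate_lidar_pair(pts1_overlap, pts2_overlap, T_init, threshold=0.05):
        """Solve the pose of lidar 1 in the frame of lidar 2 by point-to-point ICP.

        pts1_overlap, pts2_overlap: (N, 3) arrays of points inside the shared
        field of view; T_init: 4x4 initial guess from the radar assembly information.
        """
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts1_overlap))
        dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts2_overlap))
        result = o3d.pipelines.registration.registration_icp(
            src, dst, threshold, T_init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation    # the second calibration external parameter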
Optionally, after the second calibration external parameter is obtained, the first point cloud data and the second point cloud data are registered according to the second calibration external parameter, so that the first point cloud data is converted into a second solid-state laser radar coordinate system corresponding to the second solid-state laser radar.
It is understood that the registered point cloud data is point cloud data based on the second solid state lidar coordinate system.
Optionally, after the second calibration external parameter is obtained, the second calibration external parameter is stored, so that the second calibration external parameter can be directly called after the first point cloud data and the second point cloud data are subsequently obtained.
In the embodiment of the application, the first solid-state laser radar and the second solid-state laser radar are calibrated to obtain the second calibration external parameter between them. After the first point cloud data and the second point cloud data are collected, the two are registered according to the second calibration external parameter, so that the first point cloud data and the second point cloud data are located in the same solid-state laser radar coordinate system.
Optionally, based on the above embodiments and referring to fig. 8, the automatic loading and unloading system may also control the forklift to perform a loading operation, and the radar and camera based automatic loading and unloading method further includes the steps of:
step S50, obtaining the remaining area of the boxcar according to the point cloud data;
step S60, when the remaining area is greater than or equal to a preset area, determining a storage position according to the remaining area;
and step S70, sending the storage position to the forklift so that the forklift can execute a loading operation according to the storage position.
Optionally, the point cloud data is point cloud data obtained by registering the first point cloud data and the second point cloud data, the point cloud data includes point cloud data of the boxcar, and the step of obtaining the remaining area of the boxcar according to the point cloud data includes:
extracting edge point cloud data of the boxcar according to the point cloud data;
determining the effective length and the effective width of the boxcar according to the edge point cloud data;
and determining the remaining area of the boxcar according to the effective length and the effective width.
Optionally, the edge point cloud data of the boxcar is extracted from the point cloud data as follows: the camera of the vision system captures an image of the boxcar, the edge image coordinates of the boxcar are obtained from that image, and the point cloud data is mapped into the image coordinate system according to the first calibration external parameter so as to obtain the edge point cloud data corresponding to each pixel point of the edge coordinates.
Optionally, the edge point cloud data includes the three-dimensional coordinates of the boxcar edges, and the effective length and the effective width of the boxcar are then calculated from these edge coordinates. Optionally, the height of the boxcar can also be calculated from the edge coordinates.
Optionally, the effective length is a remaining length of the boxcar, and the effective width is a remaining width of the boxcar.
Optionally, after the effective length and the effective width of the boxcar are obtained, the remaining area of the boxcar is determined according to the effective length and the effective width.
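A sketch of that area computation, assuming the edge points are already expressed in a frame whose x axis runs along the car and whose y axis runs across it (the axis convention and the occupied-length bookkeeping are assumptions):

    import numpy as np

    def remaining_area(edge_points, occupied_length=0.0):
        """Estimate the free floor area of the boxcar from its edge point cloud."""
        length = edge_points[:, 0].max() - edge_points[:, 0].min()
        width = edge_points[:, 1].max() - edge_points[:, 1].min()
        effective_length = max(length - occupied_length, 0.0)
        return effective_length * width, effective_length, width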
Optionally, after the remaining area is obtained, it is compared with a preset area. When the remaining area is greater than or equal to the preset area, the truck still has an empty storage position for placing goods; when the remaining area is smaller than the preset area, it does not.
Optionally, when the remaining area is greater than or equal to the preset area, a storage position is planned according to the remaining area.
Optionally, an embodiment of the present application further provides a method for determining the storage position corresponding to the goods to be placed according to the remaining area, wherein the step S60 includes:
acquiring area information of the goods to be placed;
and allocating a corresponding storage position to the goods to be placed according to the area information and the remaining area.
Optionally, the goods to be placed are the goods to be loaded into the boxcar, and the area information is the standard area information of a single item, namely the floor area of one item. Optionally, the area information may instead be the total area information of all the goods; after the total area is obtained, the area of a single item is calculated from the total area and the number of items.
Optionally, after the area information of the goods is obtained and the standard area information of each item is determined from it, a corresponding storage position is allocated to the goods according to the standard area information and the remaining area. The storage position may be a storage position serial number or the coordinates of the storage position in the map coordinate system. Referring to FIG. 9, FIG. 9 is an example of allocating corresponding storage positions to the goods according to the area information and the remaining area; it can be understood that one storage position holds one item of standard area.
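For illustration, a greedy allocation in that spirit, dividing the free region into a row-major grid of standard-area slots (the slot dimensions are assumptions derived from the standard area information):

    def allocate_storage_positions(n_items, effective_length, effective_width,
                                   slot_length, slot_width):
        """Assign row-major grid slots inside the free region of the boxcar.

        Returns one (row, col) slot index per item, or raises if the remaining
        area cannot hold all the items.
        """
        cols = int(effective_width // slot_width)
        rows = int(effective_length // slot_length)
        if cols == 0 or n_items > rows * cols:
            raise ValueError("not enough free storage positions")
        return [(i // cols, i % cols) for i in range(n_items)]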
Optionally, after the storage position is determined, it is sent to the forklift; on receiving it, the forklift places the goods at the storage position according to the corresponding coordinates.
In the embodiment of the application, the point cloud data of the boxcar is acquired by the vision system, the edge point cloud data of the boxcar floor is extracted, the remaining area of the boxcar is determined from the edge point cloud data, a storage position is allocated to the goods within the remaining area, and the forklift is controlled according to the storage position to place the goods at that storage position.
Further, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a radar and camera based automatic loading and unloading program that, when executed by a processor, performs the steps of the embodiments described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A radar and camera based automatic loading and unloading method, applied to an automatic loading and unloading system, the automatic loading and unloading system comprising a vision system, the vision system comprising two solid-state laser radars and a camera disposed below the solid-state laser radars, wherein the radar and camera based automatic loading and unloading method comprises the following steps:
acquiring point cloud data acquired by the solid-state laser radar and image data acquired by the camera;
acquiring the cargo image coordinates of the cargo in an image coordinate system according to the image data;
determining a first calibration external parameter, and mapping the point cloud data into the image coordinate system according to the first calibration external parameter to obtain cargo point cloud data corresponding to each pixel point of the cargo image coordinates, wherein the first calibration external parameter is the relative pose relationship between the image coordinate system and a solid-state laser radar coordinate system;
and determining the position information of the cargo according to the cargo point cloud data.
2. The radar and camera based automatic loading and unloading method according to claim 1, wherein the step of determining a first calibration external parameter comprises:
acquiring historical point cloud data obtained by scanning a calibration plate with the solid-state laser radar and historical image data obtained by photographing the calibration plate with the camera;
extracting the target historical point cloud data of each corner point of the calibration plate from the historical point cloud data, and determining first coordinate information of the corner points of the calibration plate in the solid-state laser radar coordinate system according to the target historical point cloud data;
determining second coordinate information of the corner point of the calibration plate in the image coordinate system according to the historical image data;
determining the corresponding relation between the image coordinate system and the solid-state laser radar coordinate system according to the first coordinate information and the second coordinate information;
and determining the corresponding relation as the relative pose relation, and taking the relative pose relation as the first calibration external parameter.
3. The radar and camera based automatic loading and unloading method according to claim 1, wherein the point cloud data comprises first point cloud data and second point cloud data, and the step of mapping the point cloud data into the image coordinate system according to the first calibration external parameter comprises:
acquiring second calibration external parameters corresponding to the first solid-state laser radar and the second solid-state laser radar;
registering the first point cloud data and the second point cloud data according to the second calibration external parameter to obtain registered point cloud data;
and mapping the point cloud data after registration to the image coordinate system according to the first calibration external parameter.
4. The radar and camera based automatic loading and unloading method according to claim 1, wherein the step of determining the position information of the cargo according to the cargo point cloud data comprises:
acquiring a mapping relationship between the solid-state laser radar coordinate system and a map coordinate system;
mapping the cargo point cloud data into the map coordinate system according to the mapping relationship so as to obtain the map coordinates of the cargo in the map coordinate system;
and determining the map coordinates as the position information.
5. The radar and camera based automatic loading and unloading method according to claim 4, further comprising, after the step of determining the position information of the cargo according to the cargo point cloud data:
and sending the position information to a forklift so that the forklift can execute unloading operation according to the map coordinates.
6. The radar and camera based automatic loading and unloading method according to claim 1, further comprising the steps of:
acquiring the remaining area of the boxcar according to the point cloud data;
when the remaining area is greater than or equal to a preset area, determining a storage position according to the remaining area;
and sending the storage position to a forklift so that the forklift can carry out a loading operation according to the storage position.
7. The radar and camera based automatic loading and unloading method according to claim 6, wherein the step of obtaining the remaining area of the boxcar according to the point cloud data comprises:
extracting edge point cloud data of the boxcar according to the point cloud data;
determining the effective length and the effective width of the boxcar according to the edge point cloud data;
and determining the remaining area of the boxcar according to the effective length and the effective width.
8. The radar and camera based automatic loading and unloading method according to claim 6, wherein the step of determining a storage position according to the remaining area comprises:
acquiring area information of the goods to be placed;
and allocating a corresponding storage position to the goods to be placed according to the area information and the remaining area.
9. An automatic loading and unloading system, characterized in that the automatic loading and unloading system comprises a vision system, the vision system comprising two solid-state laser radars and a camera placed below the solid-state laser radars.
10. The automatic loading and unloading system according to claim 9, further comprising: a memory, a processor, and a radar and camera based automatic loading and unloading program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the radar and camera based automatic loading and unloading method according to any one of claims 1 to 8.
CN202111647223.3A 2021-12-29 2021-12-29 Automatic loading and unloading method and automatic loading and unloading system based on radar and camera Pending CN114494453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111647223.3A CN114494453A (en) 2021-12-29 2021-12-29 Automatic loading and unloading method and automatic loading and unloading system based on radar and camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111647223.3A CN114494453A (en) 2021-12-29 2021-12-29 Automatic loading and unloading method and automatic loading and unloading system based on radar and camera

Publications (1)

Publication Number Publication Date
CN114494453A true CN114494453A (en) 2022-05-13

Family

ID=81508965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111647223.3A Pending CN114494453A (en) 2021-12-29 2021-12-29 Automatic loading and unloading method and automatic loading and unloading system based on radar and camera

Country Status (1)

Country Link
CN (1) CN114494453A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023236872A1 (en) * 2022-06-09 2023-12-14 劢微机器人(深圳)有限公司 Unloading method based on fusion of radar and camera, and detection apparatus and storage medium
CN114792343A (en) * 2022-06-21 2022-07-26 阿里巴巴达摩院(杭州)科技有限公司 Calibration method of image acquisition equipment, and method and device for acquiring image data
CN114792343B (en) * 2022-06-21 2022-09-30 阿里巴巴达摩院(杭州)科技有限公司 Calibration method of image acquisition equipment, method and device for acquiring image data
CN115407355A (en) * 2022-11-01 2022-11-29 小米汽车科技有限公司 Library position map verification method and device and terminal equipment
CN115407355B (en) * 2022-11-01 2023-01-10 小米汽车科技有限公司 Library position map verification method and device and terminal equipment
CN116425088A (en) * 2023-06-09 2023-07-14 未来机器人(深圳)有限公司 Cargo carrying method, device and robot
CN116425088B (en) * 2023-06-09 2023-10-24 未来机器人(深圳)有限公司 Cargo carrying method, device and robot
CN116449387A (en) * 2023-06-15 2023-07-18 南京师范大学 Multi-dimensional environment information acquisition platform and calibration method thereof
CN116449387B (en) * 2023-06-15 2023-09-12 南京师范大学 Multi-dimensional environment information acquisition platform and calibration method thereof

Similar Documents

Publication Publication Date Title
CN114494453A (en) Automatic loading and unloading method and automatic loading and unloading system based on radar and camera
CN115205373A (en) Unloading method based on radar and camera fusion, detection device and storage medium
CN112132523B (en) Method, system and device for determining quantity of goods
EP4016457A1 (en) Positioning method and apparatus
CN114455511A (en) Forklift loading method and equipment and computer readable storage medium
CN110825832A (en) SLAM map updating method, device and computer readable storage medium
CN114358145A (en) Forklift unloading method, forklift and computer-readable storage medium
KR20110027460A (en) A method for positioning and orienting of a pallet based on monocular vision
CN112379387A (en) Automatic goods location calibration method, device, equipment and storage medium
US20180114336A1 (en) Positioning method and image capturing device thereof
US10697757B2 (en) Container auto-dimensioning
CN115018895A (en) Goods placing method, device, equipment and storage medium for high-level goods shelf of unmanned forklift
CN113222970A (en) Vehicle loading rate detection method and device, computer equipment and storage medium
CN111483914B (en) Hanger attitude identification method, device, equipment and storage medium
CN112378333A (en) Method and device for measuring warehoused goods
CN114648233A (en) Dynamic station cargo carrying method and system
CN113110433A (en) Robot posture adjusting method, device, equipment and storage medium
WO2023213070A1 (en) Method and apparatus for obtaining goods pose based on 2d camera, device, and storage medium
CN116341772A (en) Library position planning method and device, electronic equipment and storage medium
CN114758163B (en) Forklift movement control method and device, electronic equipment and storage medium
CN110853008A (en) SLAM map quality assessment method, device and computer readable storage medium
CN113345023B (en) Box positioning method and device, medium and electronic equipment
CN113978987B (en) Pallet object packaging and picking method, device, equipment and medium
CN113160310A (en) High-order goods shelf goods taking and placing method, device, equipment and storage medium
CN113379829A (en) Camera-based dimension measurement method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination